I was testing audio playback from YouTube in Safari, and the sound was clipping heavily. At first, I thought it might be due to the poor quality of my small sound system. However, when I took a screenshot and the screenshot sound effect itself produced a loud clipping noise, it became clear that this was not a mechanical problem with my speakers, nor an issue specific to YouTube or Safari. This appears to be a system-wide audio issue in macOS Tahoe 26 Beta 5.
After updating to iOS 18.5, we’ve observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We’d like to confirm if there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.
extension AVAudioSession {
    // Assumed context: this helper lives in an AVAudioSession extension.
    func configureForVoIPCall() throws {
        try setCategory(
            .playAndRecord,
            mode: .voiceChat,
            options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
        try setActive(true)
    }
}
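For diagnosis, here is a sketch of session observers worth attaching alongside this configuration, to check whether the drop coincides with an interruption, route change, or media-services reset (the function name is ours):
import AVFoundation

// Diagnostic sketch: log session events around the time outgoing audio stops.
func observeAudioSessionEvents() {
    let session = AVAudioSession.sharedInstance()
    let center = NotificationCenter.default

    center.addObserver(forName: AVAudioSession.interruptionNotification,
                       object: session, queue: .main) { note in
        print("Interruption:", note.userInfo?[AVAudioSessionInterruptionTypeKey] ?? "n/a")
    }
    center.addObserver(forName: AVAudioSession.routeChangeNotification,
                       object: session, queue: .main) { note in
        print("Route change reason:", note.userInfo?[AVAudioSessionRouteChangeReasonKey] ?? "n/a")
    }
    center.addObserver(forName: AVAudioSession.mediaServicesWereResetNotification,
                       object: session, queue: .main) { _ in
        print("Media services were reset; the session must be reconfigured")
    }
}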
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to first result, making it unusable for real-time applications.
Environment
iOS: 26.0 Beta
Xcode: Beta 5
Device: iPhone 16 Pro
Sample App: Official Apple SwiftTranscriptionSampleApp from WWDC 2025
Configuration Tested
Locale: en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES
Setup: All optimizations applied (preheating, high priority, model retention)
I started testing in my own app, aiming to replace the SFSpeech API and add speech detection, but after long fights with the documentation (this part is quite terrible, to be honest) I tested the example (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results.
I added some logs to check the specific time:
🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio!
🎙️ [20:30:41.532] Listening for transcription results...
🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false)
Questions
Is this the expected performance for the iOS 26 beta? The old SFSpeech API is far faster.
Are there additional optimization steps for SpeechTranscriber?
Should we expect significant performance improvements in later betas?
I started playing with transcription of audio files on macOS today, on the latest beta of Xcode and the latest beta of Tahoe. Transcription itself works really well, but for some reason the majority of the results contain no audioTimeRange. I got 22 single-word results with time ranges, spread out over a total file of 53 minutes.
Is there something I can do to improve this? To my understanding, I have followed sample code and instructions very closely, but the SwiftTranscriptionSampleApp and other examples I've seen lead me to believe I should be getting a lot more time ranges than I actually do.
Hi team,
With regards to Call (Live) Translations on VOIP:
Is it possible to invoke live translations within the app? (without going into the Call System UI)
Is it possible to navigate users from app to Call System UI via an API? (and also invoking the new live translations directly)
Will Apple support more languages apart from the current ones? (Currently I see 4 supported languages)
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.
➡️ The problem:
My app plays continuous audio in both foreground and background states. If the user starts recording video using the iOS Camera app, the app's audio, still playing in the background, gets captured in the video, which is obviously unintended behavior.
Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins.
So far, I haven’t found a reliable way to detect when video recording starts if ‘Allow Audio Playback’ is ON.
➡️ What I’ve tried (the observer setup is sketched after this list):
— AVAudioSession.interruptionNotification → doesn’t fire
— devicesChangedEventStream → not triggered
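For reference, the observer registration referred to above looks roughly like this; the secondary-audio hint is included as an additional thing to watch, though I have not verified whether it fires when the Camera app starts recording:
import AVFoundation

// Sketch of the notification observers tried above, plus the secondary audio hint.
func registerAudioSessionObservers() {
    let session = AVAudioSession.sharedInstance()
    let center = NotificationCenter.default

    center.addObserver(forName: AVAudioSession.interruptionNotification,
                       object: session, queue: .main) { note in
        print("Interruption:", note.userInfo?[AVAudioSessionInterruptionTypeKey] ?? "n/a")
    }
    center.addObserver(forName: AVAudioSession.silenceSecondaryAudioHintNotification,
                       object: session, queue: .main) { note in
        print("Secondary audio hint:", note.userInfo?[AVAudioSessionSilenceSecondaryAudioHintTypeKey] ?? "n/a")
    }
}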
I don’t want to request mic permission (the app doesn’t use the mic). Also, disabling the app from playing audio in the background isn’t an option, as it is a crucial part of the user experience.
➡️ What I need:
A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access — so I can stop audio and avoid unintentional overlap with the user’s recordings.
Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated.
Thanks.
I ran 5.1 audio tests in both YouTube and Apple Music, and I noticed that when sound is supposed to play from the rear or front surround speakers, it’s also duplicated in the front left and right channels. I’m absolutely sure the issue is with the Apple TV, because I played the same video directly through my TV’s native system, and the channel separation was correct.
Everything used to work perfectly before, so this must be a software issue. I’m currently on tvOS 26 Developer Beta 5, but I’m certain the problem also existed on the stable tvOS 18.5.
I’ve already reset and updated my Apple TV, and I also tried switching the audio format to forced Dolby Atmos 5.1. On the forums, I mostly see complaints about Dolby Atmos not working at all — in my case, everything technically works, but not the way it’s supposed to.
AVAudioFormat has no Swift concurrency annotations but the documentation states "Instances of this class are immutable."
This made me always assume it was safe to pass AVAudioFormat instances around. Is this the case? If so can it be marked as Sendable? Am I missing something?
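As a stopgap, some projects declare the conformance retroactively at their own risk, relying on the documented immutability; this is an assumption, not an official annotation:
import AVFoundation

// Unofficial workaround, relying on the documented immutability of AVAudioFormat.
// Under the Swift 6 language mode this may additionally need the @retroactive attribute.
extension AVAudioFormat: @unchecked Sendable {}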
My app is properly configured with MusicKit. I've generated a JWT using my valid credentials (Team ID, Key ID, private key), and I’ve ensured the time settings are correct via NTP.
When I call:
https://api.music.apple.com/v1/catalog/jp/search?term=ado&types=songs
I consistently receive a 500 Internal Server Error.
The JWT is generated using ES256 with valid iat and exp values. I’ve confirmed the token decodes properly using jwt.io, and it's passed via the Authorization: Bearer header.
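For reference, the request is equivalent to the following sketch (the developer token is supplied from outside; the function name is ours):
import Foundation

// Sketch of the failing request; `developerToken` is the ES256-signed JWT described above.
func searchAppleMusicCatalog(developerToken: String) async throws {
    var components = URLComponents(string: "https://api.music.apple.com/v1/catalog/jp/search")!
    components.queryItems = [
        URLQueryItem(name: "term", value: "ado"),
        URLQueryItem(name: "types", value: "songs"),
    ]
    var request = URLRequest(url: components.url!)
    request.setValue("Bearer \(developerToken)", forHTTPHeaderField: "Authorization")

    let (data, response) = try await URLSession.shared.data(for: request)
    if let http = response as? HTTPURLResponse {
        print("Status code:", http.statusCode)   // consistently 500 in our tests
        print(String(data: data, encoding: .utf8) ?? "")
    }
}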
Things I’ve confirmed:
Key ID, Team ID, private key are correct
App ID is configured with MusicKit capability
JWT is generated and signed correctly
macOS time is synced via NTP
Used both curl and Python to test — same result
Is there anything else I should check on the Apple Developer Console (like App ID, Certificates, or provisioning profile)?
Or could this be a backend issue on Apple’s side?
Any guidance would be appreciated.
I'm developing a photo backup app.
To detect newly added or edited photos since the app launched, I keep a local dictionary in the format [localIdentifier: modification_date].
However, PHAsset.modificationDate is not reliable.
It often changes unexpectedly, possibly due to system operations like iCloud metadata updates.
Is there a more reliable way to detect whether a photo has been modified by the user since the last app launch?
I'm thinking about using content hash instead, but I'm not sure how heavy this operation is in terms of performance.
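For comparison, here is a sketch of the change-observer route, which avoids polling modificationDate entirely; the baseline fetch captured at launch and the class name are assumptions of this sketch:
import Photos

// Sketch: detect inserted/changed assets via PHPhotoLibraryChangeObserver instead of
// comparing PHAsset.modificationDate values.
final class PhotoChangeWatcher: NSObject, PHPhotoLibraryChangeObserver {
    private var baselineFetch: PHFetchResult<PHAsset>

    override init() {
        baselineFetch = PHAsset.fetchAssets(with: .image, options: nil)
        super.init()
        PHPhotoLibrary.shared().register(self)
    }

    deinit {
        PHPhotoLibrary.shared().unregisterChangeObserver(self)
    }

    // Note: called on a background queue.
    func photoLibraryDidChange(_ changeInstance: PHChange) {
        guard let details = changeInstance.changeDetails(for: baselineFetch) else { return }
        let inserted = details.insertedObjects.map(\.localIdentifier)
        let changed = details.changedObjects.map(\.localIdentifier)
        print("Inserted:", inserted, "Changed:", changed)
        baselineFetch = details.fetchResultAfterChanges
    }
}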
Hello Apple Developer Community,
I am seeking clarification on the intended display behavior of HLS audio tracks within the iOS 26 (or current beta) native player, specifically concerning the NAME and LANGUAGE attributes of the EXT-X-MEDIA tag.
In our HLS manifests, we define alternative audio tracks using EXT-X-MEDIA tags, like so:
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-1",DEFAULT=YES,AUTOSELECT=YES,URI="audio_ja.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-2",URI="audio_en.m3u8"
Our observation is that when an audio track is selected and its name is displayed in the native iOS media controls (e.g., Control Center or within a full-screen video player's UI), the value specified in the NAME attribute ("AUDIO-1", "AUDIO-2") does not seem to be used. Instead, the display appears to derive from the LANGUAGE attribute ("ja", "en"), often showing the system's localized string for that language (e.g., "Japanese", "English").
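For reference, this is roughly how the options can be inspected from AVFoundation's side, to compare against what the system UI shows (a sketch; the URL is a placeholder):
import AVFoundation

// Dump what AVFoundation reports for each audio option in the manifest.
func dumpAudioOptions() async throws {
    let asset = AVURLAsset(url: URL(string: "https://example.com/master.m3u8")!)
    guard let group = try await asset.loadMediaSelectionGroup(for: .audible) else { return }
    for option in group.options {
        let titles = AVMetadataItem.metadataItems(
            from: option.commonMetadata, filteredByIdentifier: .commonIdentifierTitle)
            .compactMap { $0.stringValue }
        print("displayName:", option.displayName,
              "locale:", option.locale?.identifier ?? "none",
              "metadata titles:", titles)
    }
}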
We would like to understand the official or intended behavior regarding this.
Is it the expected behavior for the iOS native player to prioritize the LANGUAGE attribute (or its localized equivalent) over the NAME attribute for displaying the selected audio track's label?
If this is the intended design, what is the recommended best practice for developers who wish to present a custom, human-readable name for audio tracks (beyond the standard language name) in the native iOS UI?
Are there any specific AVPlayer properties or AVMediaSelectionOption considerations that would allow more granular control over this display, or is this entirely managed by the system based on the LANGUAGE attribute?
Any insights or official guidance on this behavior in iOS 26 (and potentially previous versions) would be greatly appreciated.
Thank you for your time and assistance.
Hi all,
I'm working on a custom Metal-based video pipeline using AVCaptureVideoDataOutput, and I've run into an unexpected issue related to exposure.
Setup:
I'm capturing video frames using kCVPixelFormatType_420YpCbCr8BiPlanarFullRange.
In my Metal shader, I:
Convert YCbCr (full range Rec.709) to linear Rec.709 RGB.
Apply Rec.709 → sRGB gamma encoding.
Output to .bgra8Unorm_srgb via MTKView.
Everything renders correctly in terms of colorspace math, but the image appears significantly brighter (~+3 stops EV) compared to AVCaptureVideoPreviewLayer and the native iOS Camera app under the same camera exposure settings.
What I’ve verified:
The color transforms are correct: YCbCr709 to RGB, then linear to sRGB.
I'm not applying any tone mapping or aggressive look LUTs yet.
Camera exposure is locked (see the fuller sketch after this list) using:
device.setExposureModeCustom(duration: ..., iso: ...)
The same EV (e.g., ISO 50, 1/125s, f/5.6) on my iPhone appears visually 3 stops brighter than on my digital cameras (Sony/Canon etc).
To match the look of the preview layer or camera app, I have to simulate a ~–3 EV shift in my custom pipeline.
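For completeness, the exposure lock referred to in the list above is applied roughly as follows (the duration and ISO values are just examples):
import AVFoundation

// Sketch of the exposure lock; assumes the device supports .custom exposure mode.
func lockExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    device.setExposureModeCustom(
        duration: CMTime(value: 1, timescale: 125),   // 1/125 s
        iso: 50.0) { _ in
        // Called once the custom exposure has been applied.
    }
    device.unlockForConfiguration()
}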
Questions:
Is AVCaptureVideoPreviewLayer applying extra tone mapping, digital gain, or contrast shaping (like OOTF etc)?
Does the camera ISP expose "hotter" (i.e., with more light) internally for the preview layer than what we get in video frame buffers?
Is there a standard way to compensate for this ISP behavior in custom pipelines using AVCaptureVideoDataOutput?
Can this be accounted for using metadata (e.g., exposure bias, gain, gamma curve)?
1. Devices
Apple TV HD, tvOS 18.5 (22L572), Model A1625
HomePod (1st generation), OS 18.5 (22L572)
2. Test cases:
2.1 UIViewController + AVPlayerLayer + AVPlayer
Result: Can not AirPlay to HomePod
Sample code for UIViewController + AVPlayer
2.2 AVPlayerViewController + AVPlayer
Result: Can AirPlay to HomePod
Sample code for AVPlayerViewController + AVPlayer
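Since the referenced sample code is not inlined here, this is a minimal reconstruction of the failing case 2.1 (URL, class name, and details are placeholders):
import AVFoundation
import UIKit

// Plain view controller hosting an AVPlayerLayer directly (no AVPlayerViewController).
final class PlayerLayerViewController: UIViewController {
    private let player = AVPlayer(url: URL(string: "https://example.com/stream.m3u8")!)
    private let playerLayer = AVPlayerLayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        playerLayer.player = player
        player.allowsExternalPlayback = true   // default value, shown for clarity
        view.layer.addSublayer(playerLayer)
        player.play()
    }

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        playerLayer.frame = view.bounds
    }
}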
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files.
Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing.
I've stripped my class down to the following:
import Speech

// Class name is a placeholder; the original class is stripped down to the relevant parts.
final class Transcriber {
    private var isAuthed = false

    // I call this in a .task {} in my SwiftUI View
    public func requestSpeechRecognizerPermission() {
        SFSpeechRecognizer.requestAuthorization { authStatus in
            Task {
                self.isAuthed = authStatus == .authorized
            }
        }
    }

    public func transcribe(from url: URL) {
        guard isAuthed else { return }
        let locale = Locale(identifier: "en-US")
        let recognizer = SFSpeechRecognizer(locale: locale)
        let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)

        // The behaviour occurs whether I set this to true or not; I recently set
        // it to true to see if it made a difference.
        recognizer?.supportsOnDeviceRecognition = true
        recognitionRequest.shouldReportPartialResults = true
        recognitionRequest.addsPunctuation = true

        recognizer?.recognitionTask(with: recognitionRequest) { result, error in
            guard let result else { return }
            if result.isFinal {
                // speechRecognitionMetadata is not nil
            } else {
                // speechRecognitionMetadata is nil
            }
        }
    }
}
Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio and they change to accurate values when isFinal is true.
The transcription otherwise "works", in that I get transcription text before isFinal and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values.
The context here is that I'm trying to generate a transcription whose spoken sections I can highlight as the audio plays, and I suspect I'm simply trying to use the Speech framework in a way it does not support. I got my concept working by pre-processing the audio (running it through until isFinal and saving the results I need to JSON), but doing even a rougher version of it on the fly, which requires segments to have the right timestamp and duration before isFinal, is perhaps impossible?
I tried to combine the speech detector and the speech transcriber into an analyzer, but the speech detector is not a speech module. Please help me.
When creating and activating AVPictureInPictureController on iPhone 11 running iOS 18.x, the main screen viewport unexpectedly changes from the native 375×812 logical points to 414×896 points (the size of a different device such as iPhone XR/11 Plus). Additionally, a separate UIWindow with zero CGRect is created. This causes UIKit layout calculations, especially related to the keyboard and safeAreaInsets, to malfunction and results in issues like keyboard clipping and general UI misalignment. The problem appears tied specifically to iOS 18+ behavior and does not manifest on earlier iOS versions or other devices consistently.
Steps to Reproduce:
Run the app on iPhone 11 with iOS 18.x (including the latest minor updates).
Instantiate and start an AVPictureInPictureController during app runtime (see the sketch after this list).
Observe that UIScreen.main.bounds changes from 375×812 (native iPhone 11 size) to 414×896.
Notice a secondary UIWindow appears with frame CGRectZero.
Interact with text inputs that cause the keyboard to appear.
The keyboard is clipped or partially obscured, causing UI usability issues.
Layout elements dependent on UIScreen.main.bounds or safeAreaInsets behave incorrectly.
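For reference, step 2 amounts to roughly the following (a sketch; the player layer setup itself is omitted and the function name is ours):
import AVKit

// Sketch of step 2: creating and starting the PiP controller.
// `playerLayer` is assumed to be an AVPlayerLayer already attached to the view hierarchy.
func startPiP(with playerLayer: AVPlayerLayer) -> AVPictureInPictureController? {
    guard AVPictureInPictureController.isPictureInPictureSupported() else { return nil }
    let pipController = AVPictureInPictureController(playerLayer: playerLayer)
    // In practice, isPictureInPicturePossible may need to be observed via KVO before starting.
    if pipController?.isPictureInPicturePossible == true {
        pipController?.startPictureInPicture()
    }
    return pipController
}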
Expected Behavior:
UIScreen.main.bounds should not dynamically change upon PiP controller creation. The coordinate space and viewport should remain accurate to device native dimensions. The keyboard and UI layout should adjust properly with respect to the actual on-screen size and safe areas.
Actual Behavior:
UIScreen.main.bounds changes to 414×896 upon PiP controller creation on iPhone 11 iOS 18+.
A zero-rect secondary UIWindow is created, impacting layout queries.
Keyboard clipping and UI layout bugs occur.
The app viewport and coordinate space are inconsistent with device hardware.
When using AVSampleBufferDisplayLayer to play H.264 or H.265 video with more than 7 B-frames, frame drops occur. The more B-frames there are, the more noticeable the frame drops become, for example with 15 B-frames.
Use FFmpeg to transcode a video file with visible timestamps and frame numbers (x264 or x265):
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx264 -x264-params "bframes=15:b-adapt=0" -crf 30 -y x264_bf15.mp4
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx265 -x265-params "bframes=15:b-adapt=0" -crf 30 -y x265_bf15.mp4
Use the demo player from this repository to reproduce the issue: https://github.com/msfrms/CustomPlayer
Frame drops can be observed, and the following log can be found in the device's console.
mediaserverd <<<< IQ-CA >>>> piqca_gmstats_dump: FIQCA(0x1266f4000) recent frames: enqueued: 184, displayed: 138, dropped: 42, flushed: 0, evicted: 3, >16ms late: 2
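For context, the enqueue path in question follows the usual AVSampleBufferDisplayLayer pattern, roughly as below (reconstructed from the standard API usage, not copied from the linked repository):
import AVFoundation

// Typical enqueue loop: feed sample buffers as the layer asks for them.
// `nextSampleBuffer()` is a placeholder for the demuxer/decoder source.
func startFeeding(_ displayLayer: AVSampleBufferDisplayLayer,
                  nextSampleBuffer: @escaping () -> CMSampleBuffer?) {
    let queue = DispatchQueue(label: "sample.feed")
    displayLayer.requestMediaDataWhenReady(on: queue) {
        while displayLayer.isReadyForMoreMediaData {
            guard let buffer = nextSampleBuffer() else {
                displayLayer.stopRequestingMediaData()
                return
            }
            displayLayer.enqueue(buffer)
        }
    }
}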
PS: I was using an iPhone 11 running iOS 14.6 to reproduce this issue.
May I ask why frame drops occur in this case?
Is there any configuration or API usage change that could help fix the frame drop issue?
Many thanks!
Dear Sirs,
I'd like to add an icon to my audio driver based on AudioDriverKit. This icon should show up left of my audio device in the audio devices dialog. For an Audio Server Plugin I managed to do this using the property kAudioDevicePropertyIcon and CFBundleCopyResourceURL(...) but how would you do this with AudioDriverKit? Should I use IOUserAudioCustomProperty or IOUserAudioControl and how would I refer to the Bundle? Is there an example available somewhere?
Thanks and best regards,
Johannes
I’m developing a macOS audio monitoring app using AVAudioEngine, and I’ve run into a critical issue on macOS 26 beta where AVFoundation fails to detect any input devices, and AVAudioEngine.start() throws the familiar error -10877.
FB#: FB19024508
Strange Behavior:
AVAudioEngine.inputNode shows no channels or input format on bus 0.
AVAudioEngine.start() fails with -10877 (AudioUnit connection error).
AVCaptureDevice.DiscoverySession returns zero audio devices.
Microphone permission is granted (authorized), and the app is properly signed and sandboxed with com.apple.security.device.audio-input.
However, CoreAudio HAL does detect all input/output devices:
Using AudioObjectGetPropertyDataSize and AudioObjectGetPropertyData with kAudioHardwarePropertyDevices, I can enumerate 14+ devices, including AirPods, USB DACs, and BlackHole.
This suggests the lower-level audio stack is functional.
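For reference, the HAL enumeration used for that comparison is roughly the following (a sketch; the function name is ours):
import CoreAudio

// List all audio devices directly via the HAL (kAudioHardwarePropertyDevices).
func listHALDevices() -> [AudioObjectID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(
        AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &dataSize) == noErr else {
        return []
    }

    let count = Int(dataSize) / MemoryLayout<AudioObjectID>.size
    var deviceIDs = [AudioObjectID](repeating: 0, count: count)
    guard AudioObjectGetPropertyData(
        AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &dataSize, &deviceIDs) == noErr else {
        return []
    }
    return deviceIDs
}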
I have tried:
Resetting CoreAudio with sudo killall coreaudiod
Rebuilding and re-signing the app
Clearing TCC with tccutil reset Microphone
Running on Apple Silicon and testing Rosetta/native detection via sysctl.proc_translated
Using a fallback mechanism that logs device info from HAL and rotates logs for submission via Feedback Assistant
I have submitted logs and a reproducible test case via Feedback Assistant: FB19024508.
Recurring crash at launch of any app that uses the new sourceVideoTrackProvider.next()
dyld[41966]: Symbol not found: _$sSo19AVAssetReaderOutputC12AVFoundationE8ProviderC4nextxSgyYaKFTjTu
Referenced from: <79AA2BE0-A6B4-32F5-A804-E84BBE5D1AEA> /Users/<username>/Library/Developer/Xcode/DerivedData/TrackProviderCrash-bbbhjptcxnmfdcackxtpucnunxyc/Build/Products/Debug-maccatalyst/TrackProviderCrash.app/Contents/MacOS/TrackProviderCrash.debug.dylib
Expected in: <1B847AF9-7973-3B28-95C2-09E73F6DD50B> /usr/lib/swift/libswiftAVFoundation.dylib
Can be reproduced with the current Xcode Beta 4 by running on Mac Catalyst and macOS.
https://developer.apple.com/documentation/AVFoundation/converting-projected-video-to-apple-projected-media-profile
The crash goes away if you comment out lines 154-158 and 164-170, which are the while let sampleBuffer = try await sourceVideoTrackProvider.next() { /* other code */ } loops.
Can also be reproduced if you add the code below to a MacCatalyst project
import AVFoundation
import AVKit

// Hypothetical async wrapper added so the snippet compiles; the body matches the repro steps.
func reproduceTrackProviderCrash() async {
    let asset: AVURLAsset = .init(url: Bundle.main.url(forResource: "SomeVideo.mp4", withExtension: nil)!)
    let videoReader = try! AVAssetReader(asset: asset)
    let videoTracks = try! await asset.loadTracks(withMediaCharacteristic: .visual)

    // Get the side-by-side video track.
    let videoTrack = videoTracks.first!
    let videoInputTrack = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)
    let sourceVideoTrackProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>> =
        videoReader.outputProvider(for: videoInputTrack)

    // Commenting out this loop avoids the crash.
    while let _ = try! await sourceVideoTrackProvider.next() {
    }
}