AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under AVAudioSession tag

77 Posts

Post

Replies

Boosts

Views

Activity

Increased and Mismatched Audio Buffer Sizes on iOS 18 when Sound Recognition or Vocal Shortcuts Is Enabled
Description: As of iOS 18, AVAudioSession.setPreferredIOBufferDuration ignores the requested buffer size when Sound Recognition or Vocal Shortcuts is enabled. This results in 1) much larger buffer sizes and 2) mismatched buffer sizes between input and output buffers, which causes ‘glitchy’ audio and increased latency. Additionally, when this issue occurs, AVAudioSession.setPreferredIOBufferDuration continues to return ‘true’ and no error is produced.

Steps to Reproduce:
1. Enable Vocal Shortcuts on a device running iOS 18. Enable at least one shortcut (e.g. Control Center).
2. Open or clone the example project (https://github.com/cwalo/SoundRecognitionBug).
3. Build and install the example project.
4. Attach a headset, launch the application, and observe console logs showing a requested buffer size of 0.005805 (256 samples @ 48k) and an actual buffer size of 0.023220 (1104 samples @ 48k - this is regularly the resulting buffer size in all of our tests).
5. Quit the app and detach the headset. Enable mutesOutput in AudioSystem.mm (to avoid feedback).
6. Launch the application and observe: the same result from step 4, a mismatched hardware buffer size of 1104 vs. a recorded frame count of 1024, and mismatched playbackCount and recordCount.
7. Quit the app and disable Vocal Shortcuts.
8. Launch the app and observe IOBufferDuration matching the requested duration and matched buffer sizes (expected behavior).

Expected results: the requested IOBufferDuration is respected, or AVAudioSession returns false, or an error is produced; input and output buffer sizes match.

Device(s): iPhone 11 Pro, iPad Pro
OS: iOS 18.0.1
Environment: Xcode 16.1
FB: FB15715421
Related to: https://forums.developer.apple.com/forums/thread/765477
2
2
828
Dec ’24
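As a reference point for the report above, a minimal sketch of requesting a preferred I/O buffer duration and reading back what the hardware actually granted; the 256-frame target at 48 kHz comes from the report, while the category, mode, and function name are illustrative assumptions rather than the project's actual code:

import AVFoundation

func configureLowLatencySession() {
    let session = AVAudioSession.sharedInstance()
    let requestedDuration = 256.0 / 48_000.0   // ~5.8 ms, as in the report above
    do {
        try session.setCategory(.playAndRecord, mode: .measurement)
        try session.setPreferredSampleRate(48_000)
        try session.setPreferredIOBufferDuration(requestedDuration)
        try session.setActive(true)
    } catch {
        print("Session configuration failed: \(error)")
    }
    // The preferred duration is only a hint; the granted value is what matters.
    print("requested \(requestedDuration)s, got \(session.ioBufferDuration)s at \(session.sampleRate) Hz")
}

Comparing ioBufferDuration against the requested value after activation is where the mismatch described above shows up, since the setter itself does not report a failure.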
AVQueuePlayer Error: LoudnessManager.mm:709 unable to open stream for LoudnessManager plist
Getting this error on iPhones with a notch, in portrait mode. Currently using AVQueuePlayer to play more than 30 mp3 files one by one. All constraint properties are correct, but the error occurs only on notched iPhone models in portrait mode; the same code works on the same iPhone in landscape mode. The error I get is:
LoudnessManager.mm:709 unable to open stream for LoudnessManager plist
Type: Error | Timestamp: 2025-02-07 | Process: | Library: AudioToolbox | Subsystem: com.apple.coreaudio | Category: aqme | TID: 0x42754
0
2
891
Feb ’25
Creating an initial Now Playing state of paused - impossible?
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of its features is that, on restart, if you had paused playback of a file when the app was previously shut down (or were playing one at the time of shutdown), the paused state and position in the file are restored exactly as they were. The functionality works. However, it seems impossible to get the "now playing" information in iOS into the right state to reflect that via the MediaPlayer API. On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo. What happens is that iOS's media view shows the audio as playing and offers a pause button - even though the play action is enabled and the pause action is disabled. Things only behave correctly once playback has been initiated (my workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from controls in a car without initiating it once from a device first). I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, then start the app, lock your device and look at the playback controls on the lock screen. It will show a pause button - the same behavior I've described. https://github.com/timboudreau/ios-play-pause-demo I've tried a few things to narrow down the source of the issue - for example, thinking that MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration might be the culprit (since the system interpolates elapsed time and it's recommended to update those properties infrequently), I tried not setting them on startup, but the result is the same, just without a duration or progress shown. What governs this behavior, and is there some way to explicitly tell the media player API that your current state is paused?
0
2
88
Apr ’25
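A minimal sketch of the usual way an app advertises a paused state on iOS, assuming the elapsed time and duration are available from the app's restored state; the function and parameter names are illustrative, and whether this alone is enough to make the lock screen show a play button in the poster's scenario is exactly what the question above is about:

import MediaPlayer

func publishRestoredPausedState(title: String, elapsed: TimeInterval, duration: TimeInterval) {
    let center = MPNowPlayingInfoCenter.default()
    center.nowPlayingInfo = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: elapsed,
        // A playback rate of 0.0 is how the system is told playback is currently paused.
        MPNowPlayingInfoPropertyPlaybackRate: 0.0
    ]
    let commands = MPRemoteCommandCenter.shared()
    commands.playCommand.isEnabled = true
    commands.pauseCommand.isEnabled = false
}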
CallKit does not activate audio session with higher probability after upgrading to iOS 18.4.1
Hi, We've noticed that this issue occurs more frequently after upgrading to iOS 18.4.1 and can result in one-way audio. Our app uses CallKit with WebRTC to establish VoIP connections. However, on iOS 18.4.1, CallKit no longer triggers: func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) We're currently comparing the occurrence rate across different iOS versions to better understand the impact. Could you please help analyze the root cause of this issue?
24
1
926
2d
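For orientation, a skeletal sketch of the delegate pairing involved, assuming WebRTC is run with manual audio handling; this is not the poster's code, only an illustration of the contract that appears to break in the report above: audio I/O should start only once didActivate fires, and the problem is that it sometimes never does.

import CallKit
import AVFoundation

final class CallProvider: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        // Tear down all calls and stop audio here.
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // CallKit has activated the session; only now start the WebRTC audio unit
        // (e.g. hand the session to RTCAudioSession's audioSessionDidActivate).
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Mirror image: stop the audio unit and tell WebRTC the session is gone.
    }
}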
After unholding CallKit, the audio does not restore.
In my application, I use CallKit and have supportsHolding = true set. During my phone call, another call comes in (e.g., GSM). I accept the incoming call and put the current call on hold. If I end the active call myself, everything is fine, and CallKit calls the method provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession). However, if the other party ends the call, the second call remains on hold. In the application, the user taps unhold, and I notify CallKit that the hold has ended. But in this case, the didActivate method is not called at all. If I try to activate the audio myself after unhold, I receive the error: Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed} (561017449 corresponds to AVAudioSessionErrorCodeInsufficientPriority). What needs to be done for CallKit to activate my audio?
3
1
1.4k
Feb ’25
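A hedged sketch of routing the unhold through CallKit rather than touching the audio session directly, assuming a CXCallController is available; whether this differs from what the app above already does isn't clear from the post, but fulfilling the resulting CXSetHeldCallAction is what normally lets CallKit decide to call didActivate again:

import CallKit

let callController = CXCallController()

// Ask CallKit to take the call off hold instead of reactivating audio manually.
func requestUnhold(for callUUID: UUID) {
    let action = CXSetHeldCallAction(call: callUUID, onHold: false)
    callController.request(CXTransaction(action: action)) { error in
        if let error = error { print("Unhold request failed: \(error)") }
    }
}

// In the CXProviderDelegate, update app state and then fulfill the action:
// func provider(_ provider: CXProvider, perform action: CXSetHeldCallAction) {
//     action.fulfill()
// }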
AVAudioSession gets interrupted when closing a window
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again. This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
3
1
831
Nov ’24
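Not a fix for the visionOS behavior itself, but for context, a minimal sketch of the interruption handling an AVAudioEngine app usually performs, assuming a single engine instance; the post above reports that even this kind of restart has no effect after the window is closed, which is the core of the bug:

import AVFoundation

final class EngineController {
    let engine = AVAudioEngine()

    func observeInterruptions() {
        NotificationCenter.default.addObserver(forName: AVAudioSession.interruptionNotification,
                                               object: AVAudioSession.sharedInstance(),
                                               queue: .main) { [weak self] note in
            guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
            if type == .ended {
                // Reactivate the session and restart the engine once the interruption ends.
                try? AVAudioSession.sharedInstance().setActive(true)
                try? self?.engine.start()
            }
        }
    }
}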
Under certain conditions, using CallKit does not automatically enable the microphone.
Issue: Under certain conditions, using CallKit does not automatically enable the microphone.

Steps to Reproduce:
1. Start an outgoing call, then the user manually mutes the audio.
2. Receive a native incoming call, end the current call, then answer the new incoming call. (This order is important.)
3. End the incoming call.
4. Start another outgoing call and observe the microphone; do not manually mute or unmute.

Actual Behavior: The audio icon indicates that the audio is unmuted, but the microphone remains off, and the small yellow dot in the top status bar (which represents the microphone) does not appear.

Expected Behavior: The microphone should be on, consistent with the audio icon display, and the small yellow dot should appear in the top status bar.

Device: iPhone 16 Pro & iPhone 15 Pro, iOS 18.0+
Can it be reproduced using Speakerbox (CallKit demo)? YES
2
1
420
Mar ’25
Best Approach for Reliable Background Audio Playback with Audio Ducking on Command from Server
I am developing an iOS app that needs to play spoken audio on demand from a server, while ducking the audio of background music from another app (e.g., SoundtrackYourBrand or Apple Music). This must work even when the app is in the background, and the server dictates when and what audio is played. Ideally, the message should be played within a minute of the server requesting it.

Current Attempt & Observations
I initially tried using Firebase Cloud Messaging (FCM) silent notifications to send a URL to an audio file, which the app would then play using AVPlayer. This works consistently when the app is active, but in the background, it only works about 60% of the time. In cases where it fails, iOS ducks the background music (e.g., from SoundtrackYourBrand) but never plays the spoken audio. Interestingly, when I play the audio without enabling audio ducking, it seems to work 100% of the time from my limited testing, even in the background. The app has background modes enabled for Audio, Background Fetch, and Remote Notifications.

Best Approach to Achieve This?
I'd like guidance on the best Apple-compliant approach to reliably play audio on command from the server, even when the app is in the background. Some possible paths:
- Ensuring the app remains active in the background - Are there recommended ways to prevent the app from getting suspended, such as background tasks, a special background mode, or a persistent connection to the server?
- Alternative triggering mechanisms - Would something like VoIP, Push-to-Talk, or another background service be better suited for this use case?
- Built-in iOS speech synthesis (AVSpeechSynthesizer) - If playing external audio is unreliable, would generating speech dynamically from text be a more robust approach?
- Streaming audio instead of sending a URL - Could continuous streaming from the server keep the app active and allow playback at the right moment?

I want to ensure the solution is reliable and works 100% of the time when needed. Any recommendations on the best approach for this would be greatly appreciated. Thank you for your time and guidance.
0
1
338
Feb ’25
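A small sketch of the session handling usually involved in ducking another app's audio while playing a spoken clip with AVPlayer, assuming the audio background mode is enabled as described above; the function names are illustrative. Activating with .duckOthers starts the duck and deactivating with .notifyOthersOnDeactivation ends it, which is the part that appears to misfire when the app above is in the background:

import AVFoundation

let session = AVAudioSession.sharedInstance()

func playSpokenMessage(from url: URL, with player: AVPlayer) throws {
    // Activating a .duckOthers session lowers other apps' audio for the duration.
    try session.setCategory(.playback, mode: .spokenAudio, options: [.duckOthers])
    try session.setActive(true)
    player.replaceCurrentItem(with: AVPlayerItem(url: url))
    player.play()
}

func finishSpokenMessage() throws {
    // Deactivating with this option restores the other app's volume.
    try session.setActive(false, options: [.notifyOthersOnDeactivation])
}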
Audio session activation occasionally fails from CarPlay
I'm working on adding CarPlay support to an audio app and am running into an issue. Occasionally, when a user opens the app from CarPlay while the main app scene is either not connected or is currently in the background, I will receive an error when attempting to activate the audio session. The code below mimics my setup:

do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error) // NSOSStatusErrorDomain - 560557684: Session activation failed
}

That error code maps to AVAudioSession.ErrorCode.cannotInterruptOthers. Once in this state, all subsequent attempts to play different pieces of content will fail. However, things will start working normally if the user opens the app on their phone and tries again from CarPlay (while the app is in the foreground on their phone). I'm not sure why it would behave this way and want to note that I do have the audio background mode capability enabled. Has anyone else encountered this? Are there any workarounds or changes I could make to prevent this from happening?
0
1
117
Apr ’25
AVAudioPlayer/SKAudioNode audio no longer plays after interruption
Hi 👋! We have a SpriteKit-based app where we play AVAudio sounds in three different ways:
1. Effects (incl. UI sounds) with AVAudioPlayer.
2. Long looping tracks with AVAudioPlayer.
3. Short animation effects on the timeline of SpriteKit's SKScene files (effectively SKAudioNode nodes).

We've found that when you exit the app or audio playback is otherwise interrupted, future playback often fails. For example, there's a WebKit-based video trailer inside the app, and if you play it, our looping background music track (2) will stop playing and won't resume as you close the trailer (return from WebKit). This is probably due to us not manually restarting the track (so may well be easily fixed). Periodically played AVAudioPlayer audio (1) is not affected. However, the more concerning thing is that the audio tracks on SKScene file timelines (3) will no longer play. My hypothesis is that AVAudioEngine gets interrupted and needs to be restarted for those AVAudioNode elements to regain functionality. Thing is, we don't deal with AVAudioEngine at all currently in the app, meaning it is never initiated to begin with. Obviously things return to normal when you remove the app from short-term memory and restart it. However, it seems many of our users aren't doing this, and often report audio failing, presumably due to some interruption in the past without the app ever being cleared from memory. Any idea why timeline-run SKAudioNodes would fail like this? Should the app react to app backgrounding/foregrounding regarding audio? Any help would be very much appreciated ✌️!
0
1
77
May ’25
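A hedged sketch of one thing worth trying for the SKAudioNode case above, assuming access to the running SKScene: every scene owns an audioEngine that renders its SKAudioNodes, and restarting that engine after an interruption ends (or on foregrounding) is sometimes enough; whether it covers the WebKit-trailer scenario is untested here.

import SpriteKit
import AVFoundation

// Typically called from an AVAudioSession.interruptionNotification handler
// (interruption type .ended) or when the app returns to the foreground.
func resumeSceneAudio(in scene: SKScene) {
    // SKAudioNodes are rendered through the scene's own AVAudioEngine.
    if !scene.audioEngine.isRunning {
        try? AVAudioSession.sharedInstance().setActive(true)
        try? scene.audioEngine.start()
    }
}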
Audio issue on iPhone 15 pro (iOS18.5)
I have an audio issue on iPhone 15 Pro (iOS 18.5). After some steps, a new call ends up in a one-way-audio state, but tapping mute and then unmute brings it back to normal. See the attached video and check the "yellow dot indicator" for the audio status. Video link: https://youtube.com/shorts/DqYIIIqtMKI?feature=share I had a similar issue on iOS 15 and iOS 16, and no issue on iOS 17, but now I have this issue on iOS 18 with Dynamic Island model devices. Please check. Thanks.
5
1
184
Jun ’25
Graceful shutdown during background audio playback.
Hello. My team and I think we have an issue where our app is asked to gracefully shut down, followed by a SIGTERM. As we’ve learned, this is normally not an issue. However, it seems to also be happening while our app (an audio streamer) is actively playing in the background. From our perspective, starting playback indicates strong user intent. We understand that there can be extreme circumstances where the background audio needs to be killed, but should it be considered part of normal operation? We hope that’s not the case. All we see in the logs is the graceful shutdown request. We can say with high certainty that it’s happening though, as we know that playback is running within 0.5 seconds of the crash, without any other tracked user interaction. Can you verify whether this is intended behavior, and if there’s something we can do about it from our end? From our logs it doesn’t look to be related to either memory usage within the app or the system as a whole. Best, John
0
1
69
Jun ’25
Understanding AVAudioTime in AVAudioNodeTapBlock? Is there a way to get time relative to a scheduled Buffer?
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback. For example, if the audio's frame position is >= some point && less than some point, trigger some code. So I'm looking at:

- (void)installTapOnBus:(AVAudioNodeBus)bus
             bufferSize:(AVAudioFrameCount)bufferSize
                 format:(AVAudioFormat * __nullable)format
                  block:(AVAudioNodeTapBlock)tapBlock;

Now I have frame positions calculated (predetermined before audio is scheduled; I already made all necessary computations). So I just need to fire code at certain points during playback:

[playerNode installTapOnBus:bus bufferSize:bufferSize format:format block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
    // Inspect current audio here and fire...
}];

[playerNode scheduleBuffer:fullbuffer atTime:startTime options:0 completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
    // some code is here, not important to this question.
}];

The problem I'm having is figuring out what point in the full buffer I'm at within the tap block. The tap block passes chunks (not the full audio buffer). I tried using the when parameter of the block to calculate the frame position relative to the entire audio, but have been unsuccessful so far. I'm assuming the when parameter is relative to the buffer passed in the tap block (not the entire audio buffer I scheduled). Not installing a tap and just using a timer before scheduling my fullBuffer has given me good results, but I'd rather avoid using a timer if possible and use sample time.
3
0
1.5k
Dec ’24
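A hedged sketch of the conversion usually used for this, assuming the tap is installed on the player node itself and the trigger frames are relative to the scheduled buffer: AVAudioPlayerNode.playerTime(forNodeTime:) maps the node-timeline AVAudioTime handed to the tap into the player's own timeline, which starts at 0 when playback begins, so its sampleTime can be compared against precomputed frame positions. The frame values below are placeholders, not values from the post.

import AVFoundation

let triggerFrames: [AVAudioFramePosition] = [48_000, 96_000]   // hypothetical trigger points

func installPositionTap(on playerNode: AVAudioPlayerNode, format: AVAudioFormat) {
    let chunk: AVAudioFrameCount = 1024
    playerNode.installTap(onBus: 0, bufferSize: chunk, format: format) { _, when in
        // Convert the node timestamp into the player's timeline (0 = start of playback).
        guard let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
        let frame = playerTime.sampleTime
        for trigger in triggerFrames where frame >= trigger && frame < trigger + AVAudioFramePosition(chunk) {
            // Fire the event associated with this trigger point.
            print("Reached frame \(trigger) at \(frame)")
        }
    }
}

Note this gives the player's continuous timeline; if multiple buffers are scheduled back to back, an additional offset for where the buffer of interest starts would still be needed.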
My App crashes run-time occasionally when I use AVFoundation and related code
I am developing an app for vehicle owners with a built-in map navigation feature with voice navigation support. The app works fine without voice navigation, but when I use voice navigation it occasionally crashes, and it crashes while voice navigation is not in progress. What makes it impossible to diagnose is that even though it crashed 10 times on the road, I don't see any crash reports in 'Apple Connect'. I tried running it in a simulator and it didn't crash there! But on a real device, when I drive with the app navigating me, it crashes abruptly after a few minutes, and it's not while the voice navigation is speaking! I also ran the app without AVFoundation and it did not crash then. So I am 100% sure it is something with the AVFoundation framework. If anyone can help find the problem in the following code, it would be really helpful.

import SwiftUI
import AVFoundation

struct DirectionHeaderView: View {
    @Environment(\.colorScheme) var bgMode: ColorScheme
    var directionSign: String?
    var nextStepDistance: String
    var instruction: String
    @Binding var showDirectionsList: Bool
    @Binding var height: CGFloat
    @StateObject var locationDataManager: LocationDataManager
    @State private var synthesizer = AVSpeechSynthesizer()
    @State private var audioSession = AVAudioSession.sharedInstance()
    @State private var lastInstruction: String = ""
    @State private var utteranceDistance: String = ""
    @State private var isStepExited = false
    @State private var range = 20.0

    var body: some View {
        VStack {
            HStack {
                VStack {
                    if let directionSign = directionSign {
                        Image(systemName: directionSign)
                    }
                    if !instruction.contains("Re-calculating the route...") {
                        Text("\(nextStepDistance)")
                            .onChange(of: nextStepDistance) {
                                let distance = getDistanceInNumber(distance: nextStepDistance)
                                if distance <= range && !isStepExited {
                                    startVoiceNavigation(with: instruction)
                                    isStepExited = true
                                }
                            }
                    }
                }
                Spacer()
                Text(instruction)
                    .onAppear {
                        isStepExited = false
                        utteranceDistance = nextStepDistance
                        range = nextStepRange(distance: utteranceDistance)
                        startVoiceNavigation(with: "In \(utteranceDistance), \(instruction)")
                    }
                    .onChange(of: instruction) {
                        isStepExited = false
                        utteranceDistance = nextStepDistance
                        range = nextStepRange(distance: utteranceDistance)
                        startVoiceNavigation(with: "In \(utteranceDistance), \(instruction)")
                    }
                    .padding(10)
                Spacer()
            }
        }
        .padding(.horizontal, 10)
        .background(bgMode == .dark ? Color.black.gradient : Color.white.gradient)
    }

    func startVoiceNavigation(with utterance: String) {
        if instruction.isEmpty || utterance.isEmpty { return }
        if instruction.contains("Re-calculating the route...") {
            synthesizer.stopSpeaking(at: AVSpeechBoundary.immediate)
            return
        }
        let thisUttarance = AVSpeechUtterance(string: utterance)
        lastInstruction = instruction
        if audioSession.category == .playback && audioSession.categoryOptions == .mixWithOthers {
            DispatchQueue.main.async {
                synthesizer.speak(thisUttarance)
            }
        } else {
            setupAudioSession()
            DispatchQueue.main.async {
                synthesizer.speak(thisUttarance)
            }
        }
    }

    func setupAudioSession() {
        do {
            try audioSession.setCategory(AVAudioSession.Category.playback, options: AVAudioSession.CategoryOptions.mixWithOthers)
            try audioSession.setActive(true)
        } catch {
            print("error:\(error.localizedDescription)")
        }
    }

    func nextStepRange(distance: String) -> Double {
        let thisStepDistance = getDistanceInNumber(distance: distance)
        if thisStepDistance != 0 {
            switch thisStepDistance {
            case 0...200:
                if locationDataManager.speed >= 90 { return thisStepDistance / 1.5 } else { return thisStepDistance / 2 }
            case 201...300:
                if locationDataManager.speed >= 90 { return 120 } else { return 100 }
            case 301...500:
                if locationDataManager.speed >= 90 { return 150 } else { return 125 }
            case 501...1000:
                if locationDataManager.speed >= 90 { return 250 } else { return 200 }
            case 1001...10000:
                if locationDataManager.speed >= 90 { return 250 } else { return 200 }
            default:
                if locationDataManager.speed >= 90 { return 250 } else { return 200 }
            }
        }
        return 200
    }

    func getDistanceInNumber(distance: String) -> Double {
        var thisStepDistance = 0.0
        if distance.contains("km") {
            let stepDistanceSplits = distance.split(separator: " ")
            let stepDistanceText = String(stepDistanceSplits[0])
            if let dist = Double(stepDistanceText) {
                thisStepDistance = dist * 1000
            }
        } else {
            let stepDistanceSplits = distance.split(separator: " ")
            let stepDistanceText = String(stepDistanceSplits[0])
            if let dist = Double(stepDistanceText) {
                thisStepDistance = dist
            }
        }
        return thisStepDistance
    }
}

#Preview {
    DirectionHeaderView(directionSign: "", nextStepDistance: "", instruction: "", showDirectionsList: .constant(false), height: .constant(0), locationDataManager: LocationDataManager())
}
2
0
480
Nov ’24
MPRemoteCommandCenter not updating play/pause button to proper state on iOS
So I'm using AVAudioEngine. When playing audio I become the 'now playing' app using the MPNowPlayingInfoCenter/MPRemoteCommandCenter APIs. When configuring MPRemoteCommandCenter I add a play/pause command target via -addTargetWithHandler on the togglePlayPauseCommand property. Now I also have a play/pause button in my app's UI.

When I pause playback from my app's UI (which means I'm the active app, I'm in the foreground), what I do is pause the AVAudioPlayerNode I'm using with AVAudioEngine. I do not stop, reset, etc. the AVAudioEngine; I only pause the player node. My thought process here is that the user just pressed pause and it is very likely that he will hit 'play' to resume playback in the near future, because my app is in the foreground and the user just hit the pause button. Now if my app moves to the background and I receive a memory warning, I presume it'd make sense to tear down the engine or pause it. Perhaps I'm wrong about this?

When I initially hit the play button from my app's UI I also activate my AVAudioSession. I do this in a high-priority NSOperation, since the documentation warns that "we recommend that applications not activate their session from a thread where a long blocking operation will be problematic."

So now I'm playing and I hit pause from my app's UI. Then I quickly bring up the "Now Playing" center and I see I'm the "Now Playing" app, but the play-pause button is showing the pause icon instead of the play icon even though I'm in the paused state. I do set MPNowPlayingInfoCenter's playbackState to MPNowPlayingPlaybackStatePaused when I pause. Not surprisingly this doesn't work; the documentation states it is for macOS only.

So the only way to get MPRemoteCommandCenter to show the "play" image for the play-pause button is to deactivate my AVAudioSession when I pause playback? Since I change the active state of my audio session in an NSOperation (because the documentation recommends against doing it from a thread where a long blocking operation would be problematic), the play-pause toggle in the remote command center won't immediately update, since I'm doing it on another thread. IMO it feels kind of inappropriate for a play-pause button to wait on an NSOperation activating the audio session before updating its UI when I already know my play/paused state; it should update right away like the button in my app does. Wouldn't it be nicer to just use MPNowPlayingInfoCenter's playbackState property on iOS too? If I'm no longer the now playing app/active audio session it doesn't matter, since I'm not in the now playing UI - just ignore it?

Also, is it recommended that I deactivate my audio session explicitly every time the user pauses audio in my app (when I'm in the foreground)? When I do deactivate the audio session I get an error, AVAudioSessionErrorCodeIsBusy (but the button in the now playing center updates to the proper image). I do this:

-(void)pause {
    [self.playerNode pause];
    [self runOperationToDeactivateAudioSession];
    // This does nothing on iOS:
    MPNowPlayingInfoCenter *nowPlayingCenter = [MPNowPlayingInfoCenter defaultCenter];
    nowPlayingCenter.playbackState = MPNowPlayingPlaybackStatePaused;
}

So in -runOperationToDeactivateAudioSession I get the AVAudioSessionErrorCodeIsBusy error. According to the documentation: "Starting in iOS 8, if the session has running I/Os at the time that deactivation is requested, the session will be deactivated, but the method will return NO and populate the NSError with the code property set to AVAudioSessionErrorCodeIsBusy to indicate the misuse of the API." So pausing the player node when pausing isn't enough to meet the deactivation criteria; I guess I have to pause or stop the audio engine. I could probably wait until I receive a scene-went-to-background notification or something before deactivating my audio session (which is async, so the button may not update to the correct image in time).

This seems like a lot of code to have to write to get a play-pause toggle to update, especially in an iPad multi-window scene environment. What's the recommended approach? Should I always pause the AVAudioEngine instead of the player node? Should I always explicitly deactivate my audio session when the user pauses playback from my app's UI, even if I'm in the foreground? I personally like the idea of just being able to set [MPNowPlayingInfoCenter defaultCenter].playbackState = MPNowPlayingPlaybackStatePaused; but maybe that's because that would just make things easier on me. This does feel overcomplicated though. If anyone can share some tips on how I should handle this, I'd appreciate it.
4
0
633
Feb ’25
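On iOS the remote play/pause toggle generally follows the now-playing info (in particular the playback rate) rather than the macOS-only playbackState property, so here is a hedged sketch of what pausing often looks like with AVAudioEngine while keeping the session active; the names are illustrative rather than the poster's actual code:

import AVFoundation
import MediaPlayer

func pausePlayback(playerNode: AVAudioPlayerNode, elapsedSeconds: Double) {
    playerNode.pause()
    // Keep the audio session active; just report an effective rate of 0 to the system.
    var info = MPNowPlayingInfoCenter.default().nowPlayingInfo ?? [:]
    info[MPNowPlayingInfoPropertyElapsedPlaybackTime] = elapsedSeconds
    info[MPNowPlayingInfoPropertyPlaybackRate] = 0.0
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}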
AVAudioRecorder records silence
We have an application using the PTT Framework to record audio messages when the app is backgrounded. Right now we are using AVAudioRecorder for that purpose. The problem is that one specific user has a frequent issue - the recorded audio contains only silence. I've checked almost everything I can imagine but didn't find any possible reason for the issue.

Conditions:
- AVAudioRecorder uses the following configuration:
  [
      AVEncoderAudioQualityKey: AVAudioQuality.low.rawValue,
      AVFormatIDKey: kAudioFormatMPEG4AAC,
      AVNumberOfChannelsKey: 1,
      AVSampleRateKey: 16000.0
  ]
- The app waits for both didBeginTransmitting and didActivate audioSession from PTChannelManager (the audio session has the playback category at that moment).
- The app then changes the AVAudioSession category to playAndRecord.
- The app gets a routeChangeNotification with categoryChange and category = playAndRecord.
- There are no interruption notifications from AVAudioSession during recording.
- There are no error notifications from AVAudioRecorder.

Any idea what exactly I'm doing wrong? Is there anything else I should check? Thanks in advance.
P.S. It looks like recording audio with AudioUnit has the same issue, but let's exclude it from the question for now for simplicity.
3
0
356
Mar ’25
AVAudioEngine. Select input device on macOS
Hello! I'm using AVFoundation to preview video and audio from a selected device, and I'm trying to use AVAudioEngine to preview the audio in real time, but I can't figure out how to select the input device - I can only hear my own microphone in real time. So far I'm using AVCaptureAudioPreviewOutput to hear the audio in real time, but I think it has a delay. On iOS this works easily with AVAudioEngine, but on macOS... bruh.
1
0
402
Mar ’25
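A hedged sketch of one common macOS approach, assuming the AudioDeviceID of the desired input has already been looked up via Core Audio (e.g. AudioObjectGetPropertyData with kAudioHardwarePropertyDevices): the engine's inputNode exposes its underlying AUHAL audio unit, and setting kAudioOutputUnitProperty_CurrentDevice on it points the engine at a specific input device.

import AVFoundation
import AudioToolbox
import CoreAudio

// macOS only: route an AVAudioEngine's input to a specific capture device.
func setInputDevice(_ deviceID: AudioDeviceID, on engine: AVAudioEngine) -> OSStatus {
    guard let unit = engine.inputNode.audioUnit else { return kAudioUnitErr_Uninitialized }
    var device = deviceID
    return AudioUnitSetProperty(unit,
                                kAudioOutputUnitProperty_CurrentDevice,
                                kAudioUnitScope_Global,
                                0,
                                &device,
                                UInt32(MemoryLayout<AudioDeviceID>.size))
}

Tap or connect the inputNode after this call to monitor the selected device in real time.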
PTT Framework has compatibility issue with .voiceChat AVAudioSession mode
As I've mentioned before, our app uses the PTT Framework to record and send audio messages. In one of the modes supported by the app, we use the WebRTC.org library for that purpose. Internally the WebRTC.org library uses a Voice-Processing I/O Unit (kAudioUnitSubType_VoiceProcessingIO subtype) to retrieve audio from the mic. According to https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/voicechat, using a Voice-Processing I/O Unit implicitly enables the .voiceChat AVAudioSession mode (i.e. it looks like it's not possible to use a Voice-Processing I/O Unit without .voiceChat mode). And the problem is the following: when the user starts an outgoing PTT, the PTT Framework plays an audio notification, but with .voiceChat mode enabled that sound plays distorted or doesn't play at all. Questions: Is this a known issue? Is there any way to work around it?
17
0
713
Mar ’25