AVAudioSession

Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under AVAudioSession tag

82 Posts

AVAudioSessionErrorCodeCannotInterruptOthers
We develop a music app and have run into a scenario where there is no way to resume playing music, so we would like to ask how to handle it. For example, when another app plays a video, we pause our music; when that video is closed, we should resume playing. Our implementation listens for AVAudioSessionInterruptionNotification, and when the notification arrives we check for AVAudioSessionInterruptionOptionShouldResume and try to play again, but error 560557684 (AVAudioSessionErrorCodeCannotInterruptOthers) is reported. We are very confused.

```objc
NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:0 error:&error];
[audioSession setActive:YES error:&error];
```

We compared with the Apple Music app and found that Apple Music can resume playing in the same situation. Here is a video of the behavior in our app: https://drive.google.com/file/d/1J94S2kxkEpNvG536yzCnKmE7IN3cGzIJ/view?usp=sharing Here is the Apple Music behavior: https://drive.google.com/file/d/1c1Kdgkn2nhy8SdDvRJAFF2sPvqJ8fL48/view?usp=sharing We want to improve our user experience. How can we do that?
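
In Swift, the interruption handling described above typically looks like the sketch below; pausePlayback() and resumePlayback() are hypothetical stand-ins for the app's real player calls:

```swift
import AVFoundation

// Hypothetical player hooks, standing in for the app's real calls.
func pausePlayback() { /* pause your player */ }
func resumePlayback() { /* resume your player */ }

func handleInterruption(_ notification: Notification) {
    guard let info = notification.userInfo,
          let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }

    switch type {
    case .began:
        pausePlayback()
    case .ended:
        guard let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt else { return }
        if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
            do {
                // This can still throw CannotInterruptOthers (560557684) if the
                // interrupting app left a non-mixable session active; a common
                // mitigation is to retry on the next foreground transition.
                try AVAudioSession.sharedInstance().setActive(true)
                resumePlayback()
            } catch {
                print("Resume failed: \(error)")
            }
        }
    @unknown default:
        break
    }
}

// Registration, e.g. at launch:
let interruptionObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main,
    using: handleInterruption)
```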
0 replies · 0 boosts · 17 views · last activity 1h

Music app stops playing when switching to the background
In our app, which plays music files, playback stops when the user goes to the home screen or switches to another app while the app is running. Our app contains no code that stops playback when it moves to the background. We suspect only some users experience this, while others do not. We usually guide affected users to reboot their device and try again. How can this be addressed in code? Or is this a bug or error in the OS?
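
The usual checklist for this symptom is the session category plus the background mode entitlement. A minimal sketch of that setup, assuming a playback-only app:

```swift
import AVFoundation

// Background playback needs BOTH of these:
//  1. a session category that supports background audio (.playback), and
//  2. "audio" listed under UIBackgroundModes in the target's Info.plist.
func configureBackgroundPlayback() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playback, mode: .default)
        try session.setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
}
```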
1 reply · 0 boosts · 136 views · last activity 5d

AVSpeechSynthesizer doesn't notifyOthersOnDeactivation
Hello, I am building a new iOS app which uses AVSpeechSynthesizer and should be able to mix audio nicely with audio from other apps. AVSpeechSynthesizer seems to handle setting the AVAudioSession to active on its own, but it does not deactivate the audio session. This leads to issues, namely that other audio sources remain "ducked" after AVSpeechSynthesizer is done speaking. I have implemented deactivating the audio session myself, which "works", in that it allows other audio sources to become "un-ducked", but it throws this exception each time even though it appears successful:

Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}

It appears to be a bug in how AVSpeechSynthesizer handles activating/deactivating the audio session. Below is a minimal example which illustrates the problem. It has two buttons: one manually deactivates the audio session, which throws the exception but otherwise works, and the other leaves audio session management to the AVSpeechSynthesizer but does not "un-duck" other audio. If you play some audio from another app (e.g. Music), you'll see that the button which throws/catches an exception successfully ducks/un-ducks the audio, while the one that doesn't attempt to deactivate the session ducks but never un-ducks the audio.

```swift
import SwiftUI
import AVFoundation

struct ContentView: View {
    let workingSynthesizer = UnduckingSpeechSynthesizer()
    let brokenSynthesizer = BrokenSpeechSynthesizer()

    init() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playback, mode: .voicePrompt, options: [.duckOthers])
        } catch {
            print("Setup error info: \(error)")
        }
    }

    var body: some View {
        VStack {
            Button("Works Correctly") {
                workingSynthesizer.speak(text: "Hello planet")
            }
            Text("-------")
            Button("Does not work") {
                brokenSynthesizer.speak(text: "Hello planet")
            }
        }
        .padding()
    }
}

class UnduckingSpeechSynthesizer: NSObject {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    override init() {
        super.init()
        synth.delegate = self
    }

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}

extension UnduckingSpeechSynthesizer: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
        do {
            try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
        } catch {
            // Always throws:
            // Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed"
            print("Deactivate error info: \(error)")
        }
    }
}

class BrokenSpeechSynthesizer {
    var synth = AVSpeechSynthesizer()
    let audioSession = AVAudioSession.sharedInstance()

    func speak(text: String) {
        let utterance = AVSpeechUtterance(string: text)
        synth.speak(utterance)
    }
}
```

(I have a separate issue where the first speech attempt takes a few seconds, but I don't think it's related.)
2 replies · 0 boosts · 176 views · last activity 1w

Help Needed with AVAudioSession in Unity for Consistent Sound Output on iOS Devices
Hello, I hope this message finds you well. I am currently working on a Unity-based iOS application that requires continuous microphone input while also producing sound output. For this we need iOS echo cancellation, so some sounds must be played via the iOS layer with echo cancellation; I manually set up the audio session after the app starts, using the .playAndRecord category of AVAudioSession. However, I am facing an issue where the volume of the sound output is inconsistent across different iOS devices and scenarios. The process is quite simple: for each AudioClip we are about to play via Unity, we copy the buffer data to our iOS Swift layer, which does all the processing and then plays the audio via the native layer. Here are the specific issues I am encountering:

- The volume level of the game sound effects fluctuates between a normal, audible volume and a very low volume.
- The sound output behaves differently depending on whether the app is launched with the device at full volume or muted, and on whether the app is put into the background and brought back to the foreground.
- The volume inconsistency affects my game negatively, as some audio is very hard to hear, regardless of the device or its initial volume state.

I have followed the basic setup for AVAudioSession as per the documentation, but the inconsistencies persist. I'm also aware that Unity uses FMOD to set up the audio routing on iOS; we configure our custom routing after that. We tried tweaking the output volume prior to playing an audio clip so there isn't much discrepancy. This seems to align the output volume, but there are still places where the volume is very low. I've looked at the waveforms in Unity and they all seem consistent; there is no reason the volume should dip.

```swift
private var audioPlayer = AVAudioPlayerNode()

@objc public func Play() {
    audioPlayer.volume = AVAudioSession.sharedInstance().outputVolume * 0.25
    audioPlayer.play()
}
```

We also explored changing the audio session options, but unfortunately nothing changed:

```swift
private func ConfigAudioSession() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker])
        try audioSession.setMode(.spokenAudio)
        try audioSession.setActive(true)
    } catch {
        // Handle the error
    }
}
```

Could anyone provide guidance or suggest best practices to ensure stable, consistent volume output in this scenario? Any advice would be greatly appreciated. Thank you in advance for your help!
0 replies · 0 boosts · 213 views · last activity 1w

Two audio streams from specific built-in microphones?
I have an iPad Pro 12.9". I am looking to make an app which records from two different microphones simultaneously. I want to specify which of the five built-in microphones each audio stream should use - ideally one stream from the microphone on the left side of the iPad, and the other from one of the mics at the top. Is this possible to achieve with the API? The end goal is to run some DSP on the two recordings to estimate the approximate direction a particular sound comes from.
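
As far as AVAudioSession goes, the API exposes a single input route at a time, so two independent streams from two different built-in mics doesn't appear to be directly supported; what it does support is picking which built-in data source feeds that one input stream. A minimal sketch:

```swift
import AVFoundation

// Pick which built-in mic data source feeds the (single) input stream.
func selectBuiltInMic(at location: AVAudioSession.Location) throws {
    let session = AVAudioSession.sharedInstance()
    guard let builtInMic = session.availableInputs?
        .first(where: { $0.portType == .builtInMic }) else { return }

    if let source = builtInMic.dataSources?
        .first(where: { $0.location == location }) {
        try builtInMic.setPreferredDataSource(source)
    }
    try session.setPreferredInput(builtInMic)
}

// Usage: try selectBuiltInMic(at: .upper)  // or .lower
```

For direction finding, another route sometimes suggested is requesting more input channels via setPreferredInputNumberOfChannels and inspecting what the hardware exposes, though a per-mic channel mapping is not guaranteed.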
0 replies · 0 boosts · 177 views · last activity 1w

iOS doesn't call didActivateAudioSession
PLATFORM AND VERSION
iOS
Development environment: Xcode 15.0, macOS 14.4.1, Objective-C
Run-time configuration: iOS 17.2.1

DESCRIPTION OF PROBLEM
I am developing an application that uses NetworkExtension (the VoIP local push function). But iOS sometimes doesn't call didActivateAudioSession after the following sequence. Could you tell me why iOS doesn't call didActivateAudioSession? (I said "sometimes", but once it occurs, it occurs repeatedly.)

```
myApp --- CXStartCallAction ---> iOS
myApp <-- performStartCallAction callback --- iOS
myApp --- AVAudioSession setCategory: AVAudioSessionCategoryPlayAndRecord ---> iOS
myApp --- AVAudioSession setMode: AVAudioSessionModeVoiceChat ---> iOS
myApp <-- didActivateAudioSession callback --- iOS
```

I suspect that myApp cannot acquire the AVAudioSession if another app is already using it.

[QUESTION 1] Is my guess correct? Should I consider another cause?

[QUESTION 2] If my guess is correct, how can I verify whether another app is already using the AVAudioSession? This issue is based on a customer complaint, but the customer says they don't use any other apps.

Best Regards,
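
For comparison, the usual CallKit pattern is to configure the session in the action handler but let the system activate it; calling setActive(true) yourself around CXStartCallAction is sometimes reported to interfere with the didActivateAudioSession callback. A minimal Swift sketch, with CallManager as a hypothetical class:

```swift
import CallKit
import AVFoundation

// Hypothetical CallManager; a sketch of the delegate flow only.
final class CallManager: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        // Stop audio and clean up calls here.
    }

    func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
        // Configure the session, but do NOT call setActive(true):
        // with CallKit, the system activates the session and then
        // invokes provider(_:didActivate:) below.
        let session = AVAudioSession.sharedInstance()
        try? session.setCategory(.playAndRecord, mode: .voiceChat)
        action.fulfill()
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // Start audio I/O only from this callback.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Stop audio I/O.
    }
}
```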
5 replies · 0 boosts · 282 views · last activity 1w

Volume API returns 0
I am working on a VoIP-based PTT app. It uses the 'voip' APNs notification type to learn about new incoming PTT calls. When my app receives a PTT call, the app plays audio, but the call audio is not heard. When checking the phone volume, the API [[AVAudioSession sharedInstance] outputVolume] returns 0, yet the phone volume is clearly not zero: checking it via the side volume buttons shows it above 50%. This behavior is observed with the app both in the foreground and in the background. Why does the API return a zero volume level? Is there any other reason why the app's audio is not heard?
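
A minimal sketch of one thing worth ruling out: reading outputVolume only while the session is active, and observing later changes via KVO (the property is documented as key-value observable):

```swift
import AVFoundation

let session = AVAudioSession.sharedInstance()
try? session.setActive(true)
print("Output volume after activation: \(session.outputVolume)")

// Keep a strong reference to the observation, or it stops firing.
let volumeObservation = session.observe(\.outputVolume, options: [.new]) { _, change in
    print("Output volume changed: \(change.newValue ?? 0)")
}
```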
0 replies · 0 boosts · 169 views · last activity 1w

Input location of AVAudioSession differs between iPhone models
The input location reported by AVAudioSession differs between devices when I use the speaker.

```swift
try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
try session.overrideOutputAudioPort(.speaker)
try session.setActive(true)

let route = session.currentRoute
route.inputs.forEach { input in
    print(input.selectedDataSource?.location)
}
```

On iPhone 11 (iOS 17.5.1) this prints AVAudioSessionLocation: Lower; on iPhone 7 Plus (iOS 15.8.2) it prints AVAudioSessionLocation: Upper. What causes this difference in behavior?
0 replies · 0 boosts · 184 views · last activity 2w

AVAudioSessionErrorCodeCannotInterruptOthers
We check AVAudioSessionInterruptionOptionShouldResume to decide when to restore audio playback. Our app has been live for a long time and has normally been able to resume playback. Recently, however, we have received a lot of user feedback asking why audio won't resume. Investigating, we found that some apps occupy the audio session without actually playing audio. For example, when a user sent a voice message in the WeChat app, we received the notification to resume audio playback, but WeChat was no longer playing audio; our attempt to resume then failed with AVAudioSessionErrorCodeCannotInterruptOthers. We reported the problem to the WeChat team and that case was fixed. But some users still report the issue, and we don't know which app is improperly occupying the audio session, so we don't know where to troubleshoot. We pay close attention to user feedback and hope someone can help us resolve this user-experience problem.
0 replies · 0 boosts · 163 views · last activity 2w

Disable reverb effect in immersive spaces
I'm developing an app where a user can bring a video or content from a WKWebView into an immersive space using SwiftUI attachments on a RealityView. This works just fine, but I'm having some trouble configuring how the audio from the web content should sound in an immersive space. When in windowed mode, content sounds just fine and very natural. The spatial audio effect with head tracking is pronounced and adds depth to content with multichannel or Dolby Atmos audio. When I move the same web view into an immersive space, however, the audio becomes excessively echoey, as if a large amount of reverb has been applied. The spatial audio effect is also diminished, and while still there, is nowhere near as immersive. I've tried the following:

Setting all entities in my space to use channel audio, including the web view attachment:

```swift
for entity in content.entities {
    entity.channelAudio = ChannelAudioComponent()
    entity.ambientAudio = nil
    entity.spatialAudio = nil
}
```

Changing the AVAudioSessionSpatialExperience. I've tried every soundstage size and anchoring strategy; large works the best, but it doesn't remove that reverb:

```swift
let experience = AVAudioSessionSpatialExperience.headTracked(
    soundStageSize: .large,
    anchoringStrategy: .automatic
)
try? AVAudioSession.sharedInstance().setIntendedSpatialExperience(experience)
```

I'm also aware of ReverbComponent in visionOS 2 (which I haven't updated to just yet), but ideally I need a way to configure this for visionOS 1 users too. Am I missing something? Surely there's a way for developers to stop the system from altering the audio and applying these effects? A few of my users have complained that the audio sounds considerably worse in my cinema immersive space than in a window.
2 replies · 0 boosts · 318 views · last activity 2w

My loop makes a click noise each time it starts
The loop plays smoothly in Audacity, but when I run it on the device or in the simulator it clicks on each loop, at varying intensities. I configure the session at the app level:

```swift
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("Setting category session for AVAudioSession failed")
}
```

And then I made my method on my class:

```swift
func playSound(soundId: Int) {
    let sound = ModelData.shared.sounds[soundId]
    if let bundle = Bundle.main.path(forResource: sound.filename, ofType: "flac") {
        let backgroundMusic = NSURL(fileURLWithPath: bundle)
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: backgroundMusic as URL)
            audioPlayer?.prepareToPlay()
            audioPlayer?.numberOfLoops = -1 // loop indefinitely
            audioPlayer?.play()
            isPlayingSounds = true
        } catch {
            print(error)
        }
    }
}
```

Does anyone have any clue? Thanks!

PS: If I use AVQueuePlayer and repeat the item, the click noise disappears (but that's no use, because I would need to repeat it indefinitely without wasting memory); if I use AVPlayerLooper I get silence between loops. All with the same sound file. Idk :/ PS2: The same happens with ALAC files.
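
Not a diagnosis of the click itself, but a commonly suggested alternative for seamless looping is AVAudioEngine with a PCM buffer scheduled using the .loops option, which repeats sample-accurately. A minimal sketch, assuming a local audio file URL:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

// Schedule the whole file as one PCM buffer with the .loops option;
// the buffer repeats with no scheduling gap between iterations.
func playLoop(url: URL) throws {
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()

    player.scheduleBuffer(buffer, at: nil, options: .loops)
    player.play()
}
```

If the click persists even here, the loop points in the file itself may not land on zero crossings, though a file that loops cleanly in Audacity makes that less likely.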
4 replies · 2 boosts · 836 views · last activity 3w

TestFlight build upload with error ITMS-90338: Non-public API usage
Hello, today when we uploaded a new TestFlight Mac Catalyst build we received an email saying the build is invalid:

ITMS-90338: Non-public API usage - The app references non-public symbols in {app name}: _AVCaptureDeviceTypeBuiltInTelephotoCamera, _AVCaptureDeviceTypeBuiltInTrueDepthCamera, _AVCaptureDeviceTypeBuiltInUltraWideCamera, _AVCaptureSessionInterruptionReasonKey, _AVCaptureSessionInterruptionSystemPressureStateKey, _AVCaptureSystemPressureLevelCritical, _AVCaptureSystemPressureLevelFair, _AVCaptureSystemPressureLevelNominal, _AVCaptureSystemPressureLevelSerious, _AVCaptureSystemPressureLevelShutdown. If method names in your source code match the private Apple APIs listed above, altering your method names will help prevent this app from being flagged in future submissions. In addition, note that one or more of the above APIs may be located in a static library that was included with your app. If so, they must be removed. For further information, visit the Technical Support Information at http://developer.apple.com/support/technical/

We've been uploading builds the same way for months, using the same Xcode 15.2 and dependency versions, and we've checked our most recent commits since the last release: nothing changed around AVFoundation, archiving, etc. Did anything change on Apple's side recently? We use Xcode 15.2 to build/archive/upload and xcodebuild to run all commands.
2 replies · 3 boosts · 365 views · last activity 3w

"Microphone Recording Fails When Launching App from Shortcut (Error Code 561015905)"
I'm experiencing an issue with microphone recording in my app when launched from a Shortcut. The app works correctly when launched directly, but launching it through the Shortcut results in the "Session activation failed" error (code 561015905). Here's what I've done so far: My app has microphone permission granted. The startRecording function sets the audio session category to .playAndRecord. I've implemented error handling within startRecording to catch the error code. The Shortcut workflow includes an action to launch the app (no explicit microphone permission request within the Shortcut). xcode version - 15.2 iphone ios version - 17.4.1
1 reply · 0 boosts · 314 views · last activity Jun ’24

AVAudioSession.interruptionNotification only delivered once
In my app, I only get one interruption notification when a phone call comes in, and nothing after that. The app uses AVAudioEngine. Is this a bug? A very simple repro is to just register for the notification, without doing anything else with audio:

```swift
import SwiftUI
import AVFoundation

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .onReceive(NotificationCenter.default.publisher(for: AVAudioSession.interruptionNotification)) { event in
            handleAudioInterruption(event: event)
        }
    }

    private func handleAudioInterruption(event: Notification) {
        print("handleAudioInterruption")
        guard let info = event.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
            print("missing the stuff")
            return
        }
        if type == .began {
            print("interruption began")
        } else if type == .ended {
            print("interruption ended")
            guard let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt else { return }
            if AVAudioSession.InterruptionOptions(rawValue: optionsValue).contains(.shouldResume) {
                print("should resume")
            }
        }
    }
}
```

And do this in the app's init:

```swift
@main
struct InterruptionsApp: App {
    init() {
        try! AVAudioSession.sharedInstance().setCategory(.playback, options: [])
        try! AVAudioSession.sharedInstance().setActive(true)
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
```
2 replies · 0 boosts · 414 views · last activity Jun ’24

Device Volume Changes After Setting AVAudioSession Category
Hi there, I am encountering an issue in my project, which uses a speech recognizer and occasionally plays audio files. The problem arises when I configure the AVAudioSession and enable voice processing: the system volume changes unexpectedly and becomes uncontrollable. Specifically, the volume is excessively loud on iPhone but quite low on iPad.

```swift
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playAndRecord, mode: .default,
                             options: [.defaultToSpeaker, .allowBluetooth, .interruptSpokenAudioAndMixWithOthers])
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
try audioEngine.inputNode.setVoiceProcessingEnabled(true)
try audioEngine.outputNode.setVoiceProcessingEnabled(true)
```

I have provided a sample project here: Sample Project. To reproduce the issue, please follow these steps on a real device:

1. Click "Play recording" to hear the sound at normal volume.
2. Click "Start recording" to set up the category and speech recognizer.
3. Click "Stop recording" to stop the recording.
4. Click "Play recording" again and observe that the sound volume has changed.

Thank you for your assistance.
0 replies · 0 boosts · 316 views · last activity Jun ’24

Check if NSMicrophoneUsageDescription exists
In our third-party SDK we would like to use the microphone (as an optional feature) when the hosting app allows it. According to the docs, requestRecordPermission will crash if no NSMicrophoneUsageDescription exists in the hosting app's Info.plist. Obviously I don't want to crash the app. I would like to check whether the hosting app will allow me to call requestRecordPermission before calling it. Is that possible?
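
One approach that seems to fit here is to read the key from the hosting app's main bundle before asking for permission. A minimal sketch:

```swift
import Foundation
import AVFoundation

// Check the hosting app's Info.plist for the usage description
// before triggering the permission prompt.
let hasMicUsageDescription =
    Bundle.main.object(forInfoDictionaryKey: "NSMicrophoneUsageDescription") != nil

if hasMicUsageDescription {
    AVAudioSession.sharedInstance().requestRecordPermission { granted in
        print("Microphone permission granted: \(granted)")
    }
} else {
    // Quietly disable the SDK's optional microphone feature.
}
```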
1 reply · 0 boosts · 280 views · last activity Jun ’24

USB microphone with high sample rate and AVAudioEngine
Hello, I can't get my head wrapped around the following problem: I have an external USB microphone capable of sample rates of up to 500 kHz. I want to capture the samples and do analysis and display - no playback required. I cannot find a way to run the microphone at its maximum sample rate; I always get 48 kHz. I would like to stick to AVAudioEngine if possible. Any pointers welcome. thx! volker
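
For reference, the usual way to ask for a higher rate is a preference on the session; whether anything above 48 kHz is actually granted appears to be hardware- and route-dependent. A minimal sketch:

```swift
import AVFoundation

// Sample rates on iOS are requested as *preferences*; the system reports
// what it actually granted via session.sampleRate after activation.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .measurement)
    try session.setPreferredSampleRate(500_000)
    try session.setActive(true)
    print("Granted sample rate: \(session.sampleRate)")
} catch {
    print("Session setup failed: \(error)")
}
```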
2 replies · 0 boosts · 450 views · last activity May ’24

Command Center / Dynamic Island missing icons and animations
hello all! I'm setting up a really simple media player in my SwiftUI app. The code is the following:

```swift
import AVFoundation
import MediaPlayer

class AudioPlayerProvider: NSObject { // NSObject needed for the #selector-based observer below
    private var player: AVPlayer

    override init() {
        self.player = AVPlayer()
        super.init()
        self.player.automaticallyWaitsToMinimizeStalling = false
        self.setupAudioSession()
        self.setupRemoteCommandCenter()
    }

    private func setupAudioSession() {
        do {
            try AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
            try AVAudioSession.sharedInstance().setActive(true)
        } catch {
            print("Failed to set up audio session: \(error.localizedDescription)")
        }
    }

    private func setupRemoteCommandCenter() {
        let commandCenter = MPRemoteCommandCenter.shared()
        commandCenter.playCommand.addTarget { [weak self] _ in
            guard let self = self else { return .commandFailed }
            self.play()
            return .success
        }
        commandCenter.pauseCommand.addTarget { [weak self] _ in
            guard let self = self else { return .commandFailed }
            self.pause()
            return .success
        }
    }

    func loadAudio(from urlString: String) {
        guard let url = URL(string: urlString) else { return }
        let asset = AVAsset(url: url)
        let playerItem = AVPlayerItem(asset: asset)
        self.player.pause()
        self.player.replaceCurrentItem(with: playerItem)
        NotificationCenter.default.addObserver(self, selector: #selector(self.streamFinished),
                                               name: .AVPlayerItemDidPlayToEndTime,
                                               object: self.player.currentItem)
    }

    func setMetadata(title: String, artist: String, duration: Double) {
        let nowPlayingInfo = [
            MPMediaItemPropertyTitle: title,
            MPMediaItemPropertyArtist: artist,
            MPMediaItemPropertyPlaybackDuration: duration,
            MPNowPlayingInfoPropertyPlaybackRate: 1.0,
        ] as [String: Any]
        MPNowPlayingInfoCenter.default().nowPlayingInfo = nowPlayingInfo
    }

    @objc private func streamFinished() {
        self.player.seek(to: .zero)
        try? AVAudioSession.sharedInstance().setActive(false)
        MPNowPlayingInfoCenter.default().playbackState = .stopped
    }

    func play() {
        MPNowPlayingInfoCenter.default().playbackState = .playing
        self.player.play()
    }

    func pause() {
        MPNowPlayingInfoCenter.default().playbackState = .paused
        self.player.pause()
    }
}
```

Pretty textbook. The code works when called from views. The player also shows up on the lock screen / Dynamic Island (when in the background), but here lie the problems:

- The play/pause buttons appear in neither the Command Center nor the Dynamic Island. If I tap the position where these buttons should show up, the command works; just the icons are not appearing.
- The waveform animation does not animate when playing.

Many audio apps work just fine, so my code must be lacking something, but I don't know what. What is missing? Thanks in advance!
1 reply · 0 boosts · 377 views · last activity May ’24

AVAudioSessionErrorCodeCannotInterruptOthers
When I receive the interruption-began notification (interruption type AVAudioSessionInterruptionTypeBegan), I pause the music. When I receive the interruption-ended notification (interruption type AVAudioSessionInterruptionTypeEnded), I resume the music. However, I sometimes get the error code AVAudioSessionErrorCodeCannotInterruptOthers (560557684). If some misbehaving app holds on to the audio session, a third-party music app's playback recovery fails with AVAudioSessionErrorCodeCannotInterruptOthers. In this case, can we find out which apps are improperly hogging the audio?
2 replies · 0 boosts · 486 views · last activity May ’24

Getting 561015905 while trying to initiate recording when the app is in the background
I'm trying to periodically start and stop recording while my app is in the background. I implemented it using Timer and DispatchQueue. However, whenever I try to initiate the recording I get this error. The issue does not occur in the foreground. Here is the current state of my app and configuration.

I have added the "Background Modes" capability under Signing & Capabilities, and I checked Audio and Self Care. Here is my Info.plist:

```xml
<plist version="1.0">
<dict>
    <key>UIBackgroundModes</key>
    <array>
        <string>audio</string>
    </array>
    <key>WKBackgroundModes</key>
    <array>
        <string>self-care</string>
    </array>
</dict>
</plist>
```

I also used AVAudioSession with the .record category and activated it. Here is the code snippet:

```swift
func startPeriodicMonitoring() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSession.Category.record, mode: .default, options: [.mixWithOthers])
        try session.setActive(true, options: [])
        print("Session Activated")
        print(session)
        // Start recording.
        measurementTimer = Timer.scheduledTimer(withTimeInterval: measurementInterval, repeats: true) { _ in
            self.startMonitoring()
            DispatchQueue.main.asyncAfter(deadline: .now() + self.recordingDuration) {
                self.stopMonitoring()
            }
        }
        measurementTimer?.fire() // Start immediately
    } catch let error {
        print("Unable to set up the audio session: \(error.localizedDescription)")
    }
}
```

Any thoughts on this? I have tried most of the obvious approaches, but the issue is still there.
3 replies · 0 boosts · 331 views · last activity May ’24