AVAudioSession

Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

88 Posts
Post not yet marked as solved
0 Replies
616 Views
In a VoIP application, when CallKit is enabled and we play a video through AVPlayer, the video content updates frame by frame and the audio of the content is not audible. This issue is observed only on iOS 17. Any idea how we can resolve this?
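For context, a sketch of the CallKit audio handoff this symptom usually points at (an assumption about the cause, not a confirmed fix): during CallKit calls the system owns session activation, so players are normally started only after provider(_:didActivate:) fires.

import AVFoundation
import CallKit

final class CallAudioHandler: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        // Required by CXProviderDelegate: stop all audio and clear call state.
    }

    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        // The system has activated the session on the app's behalf;
        // only now is it safe to start (or resume) the AVPlayer.
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        // Pause playback; the session stays unusable until the next activation.
    }
}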
Posted by
Post not yet marked as solved
5 Replies
1.8k Views
Hello, I'm facing an issue with Xcode 15 and iOS 17: it seems impossible to get AVAudioEngine's audio input node to work on the simulator. inputNode has a 0 ch, 0 kHz input format, and connecting the input node to any other node or installing a tap on it fails systematically. What we tested:
- Everything works fine on iOS simulators <= 16.4, even with Xcode 15.
- Nothing works on the iOS 17.0 simulator with Xcode 15.
- Everything works fine on an iOS 17.0 device with Xcode 15.
More details here: https://github.com/Fesongs/InputNodeFormat
Any idea what's going on? Something I'm missing? Thanks for your help 🙏 Tom
PS: I filed a bug on Feedback Assistant, but it usually takes ages to get any answer, so I'm also trying here 😉
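A defensive check that confirms the symptom at runtime (a diagnostic sketch, not a fix):

import AVFoundation

let engine = AVAudioEngine()
let format = engine.inputNode.inputFormat(forBus: 0)
if format.channelCount == 0 || format.sampleRate == 0 {
    // The iOS 17 simulator reportedly returns a 0 ch / 0 Hz input format here;
    // connecting or tapping the node would fail, so bail out early.
    print("Input node has no valid format: \(format)")
} else {
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { _, _ in
        // process microphone buffers
    }
}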
Posted by
Post not yet marked as solved
0 Replies
444 Views
I'm trying to simply route my audio to the iPhone earpiece. No success, although this looks like it should work. Using iOS 17 on an iPhone 15 Pro. It doesn't throw any errors. Any advice appreciated.

@implementation AudioTogglePlugin

- (void)setAudioMode:(CDVInvokedUrlCommand *)command
{
    NSLog(@"... audiotoggle test ...");

    NSError *err;
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord
                    mode:AVAudioSessionModeVoiceChat
                 options:AVAudioSessionCategoryOptionMixWithOthers
                   error:NULL];
    [session overrideOutputAudioPort:AVAudioSessionPortOverrideNone error:&err];
    [session setActive:YES error:NULL];

    NSLog(@"... audiotoggle error ... %@", err);
    NSLog(@"... audiotoggle route ... %@", [session currentRoute]);
}

@end

The audio route stays on "Speaker":

inputs = (
    "<AVAudioSessionPortDescription: 0x283ae0d50, type = MicrophoneBuiltIn; name = iPhone Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
);
outputs = (
    "<AVAudioSessionPortDescription: 0x283ae0dd0, type = Speaker; name = Speaker; UID = Speaker; selectedDataSource = (null)>"
)>
Posted by
Post not yet marked as solved
1 Reply
522 Views
So I have been able to get my app to record and receive PTT in the foreground successfully, but when the talk button is pressed in the system UI, or when my app is in the background, the audio session is never activated, and hence func channelManager(_ channelManager: PTChannelManager, didDeactivate audioSession: AVAudioSession) never gets called. My question is: is there any step or requirement that might prevent the system UI from activating the audio session, even though the same code works perfectly whenever the app is in the foreground? I appreciate your help in advance.
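For reference, a sketch of the two audio callbacks involved, assuming the remaining required PTChannelManagerDelegate methods (join/leave/transmit) are implemented elsewhere in the class. The documented behavior is that the system activates the session for PTT, including in the background, so the app should not call setActive(true) itself:

import AVFoundation
import PushToTalk

final class ChannelHandler: NSObject {
    func channelManager(_ channelManager: PTChannelManager,
                        didActivate audioSession: AVAudioSession) {
        // Only start the recorder/engine here; in foreground and background alike,
        // this is the first point at which recording is allowed.
    }

    func channelManager(_ channelManager: PTChannelManager,
                        didDeactivate audioSession: AVAudioSession) {
        // Stop the recorder/engine; do not deactivate the session yourself.
    }
}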
Posted by
Post not yet marked as solved
1 Reply
406 Views
Hey, I've been trying to start an audio recording from an intent started from a widget in iOS 17. However, every time I run audioRecorder.record() it just returns false. Is it generally possible to start an audio recording from a widget's intent? In the simulator this actually works, but on device it will not start the recording. Would love any pointers to what might be required to make this work.
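For anyone reproducing this: record() hides the underlying failure behind a Bool, whereas activating the session explicitly throws a descriptive error. A sketch (the settings dictionary and the error type are my own illustrative choices):

import AVFoundation

enum RecordingError: Error {
    case recorderRefusedToStart
}

func startRecording(to url: URL) throws -> AVAudioRecorder {
    // Activating the session explicitly surfaces a real error
    // where record() would only report `false`.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    // Illustrative settings; any valid recorder settings work here.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1,
    ]
    let recorder = try AVAudioRecorder(url: url, settings: settings)
    guard recorder.record() else {
        throw RecordingError.recorderRefusedToStart
    }
    return recorder
}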
Posted by
Post not yet marked as solved
0 Replies
444 Views
I want the audio session to always use the built-in microphone. However, when using the setPreferredInput() method as in this example

private func enableBuiltInMic() {
    // Get the shared audio session.
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let availableInputs = session.availableInputs,
          let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
        print("The device must have a built-in microphone.")
        return
    }

    // Make the built-in microphone input the preferred input.
    do {
        try session.setPreferredInput(builtInMicInput)
    } catch {
        print("Unable to set the built-in mic as the preferred input.")
    }
}

and calling that function once in the initializer, the audio session still switches to the external microphone once one is plugged in. The session's preferredInput is nil again at that point, even though the built-in microphone is still listed in availableInputs. So why is the preferredInput suddenly reset? And when would be the appropriate time to set the preferredInput again? Observing the session's availableInputs did not work, and setting the preferredInput again in the routeChangeNotification handler seems a bad choice, as it's already a bit too late then.
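For diagnosis at least, watching the route-change reason shows exactly when the preference is dropped; a sketch (observation only, it doesn't settle whether re-applying the preferred input there is acceptable):

import AVFoundation

// Keep the token alive for as long as the observation should last.
let routeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: nil,
    queue: .main
) { note in
    guard let raw = note.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: raw) else { return }
    if reason == .newDeviceAvailable {
        // The external mic just appeared; this is the moment at which
        // preferredInput is reportedly nil again.
        print("preferredInput:", AVAudioSession.sharedInstance().preferredInput as Any)
    }
}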
Posted by
Post not yet marked as solved
1 Reply
550 Views
Hello, I have been struggling to resolve the issue in the question above. I can speak an utterance while my iPhone's screen is on, but when my iPhone goes into background mode (screen turned off), it doesn't speak any more. I think it should be possible to play audio or speak an utterance in the background, since I can play music in the background in YouTube. Any help please?
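For readers with the same symptom: background playback generally requires both the "audio" entry in UIBackgroundModes (Info.plist) and an active .playback session. A minimal sketch, assuming that background mode is enabled in the target's capabilities:

import AVFoundation

final class BackgroundSpeaker {
    // Keep the synthesizer alive beyond the call to speak(_:).
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let session = AVAudioSession.sharedInstance()
        do {
            // .playback keeps running when the screen locks,
            // provided the "audio" background mode is enabled.
            try session.setCategory(.playback, mode: .spokenAudio)
            try session.setActive(true)
        } catch {
            print("Session setup failed: \(error)")
        }
        synthesizer.speak(AVSpeechUtterance(string: text))
    }
}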
Posted by
Post not yet marked as solved
1 Reply
593 Views
I've integrated MPVolumeView into my view, and it correctly responds to hardware volume changes as expected. However, once I initiate audio streaming, using AVAudioEngine to capture microphone audio and an AudioUnit for decoding, the MPVolumeView ceases to reflect changes made with the hardware volume buttons. Additionally, even when I adjust the volume using the slider on the MPVolumeView, it doesn't change the system volume. Has anyone else encountered this issue? What might be causing MPVolumeView to stop responding to hardware volume changes once streaming starts? For the AVAudioSession.Mode, I use the default setting, because using .voiceChat permanently prevents MPVolumeView from updating on device volume changes.

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, options: [.allowBluetooth])
    try session.setActive(true)
} catch {
    print(error.localizedDescription)
}
Posted by
Post marked as solved
1 Reply
728 Views
In iOS 17 (21A5326A), audioSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth) does not set the input to Bluetooth. In iOS 16 it does. Here are the steps to reproduce:
1. Create a project with a storyboard.
2. In Info.plist, add NSMicrophoneUsageDescription: Your microphone will be used to record your speech when you press the "Start Recording" button.
3. Put this in ViewController:

import UIKit
import Speech
import AVFAudio

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
    }

    override public func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        AVAudioSession.sharedInstance().requestRecordPermission { granted in }
    }

    func startAudioSession() {
        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(.playAndRecord, mode: .default, options: .allowBluetooth)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
            print(audioSession.currentRoute.description)
        } catch {
        }
    }

    @IBAction func btnTap(_ sender: UIButton) {
        startAudioSession()
    }
}

4. Put a button on Main.storyboard and link it to btnTap.
5. Connect a Bluetooth headset to the iPhone, start the app, and tap the button.
On iOS 16 the current route shows Bluetooth; on iOS 17 the current route shows the speaker.
Posted by
Post not yet marked as solved
0 Replies
345 Views
When I check for other audio using [AVAudioSession sharedInstance].otherAudioPlaying and it returns true, I get the currentRoute. But I've hit an issue: the route's outputs and inputs counts are both > 0. I want to determine whether the sound comes from input devices or output devices. Please help!
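A sketch for dumping both sides of the route, with one caveat worth stating: currentRoute describes your own session's route, while otherAudioPlaying only reports that some other app is producing sound, not where that sound is routed:

import AVFoundation

let route = AVAudioSession.sharedInstance().currentRoute
for input in route.inputs {
    print("input:", input.portType.rawValue, input.portName)
}
for output in route.outputs {
    print("output:", output.portType.rawValue, output.portName)
}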
Posted by
Post not yet marked as solved
0 Replies
661 Views
From an app that reads audio from the built-in microphone, I'm receiving many crash logs where the AVAudioEngine fails to start again after the app was suspended. Basically, I'm calling these two methods in the app delegate's applicationDidBecomeActive and applicationDidEnterBackground methods respectively:

let audioSession = AVAudioSession.sharedInstance()

func startAudio() throws {
    self.audioEngine = AVAudioEngine()
    try self.audioSession.setCategory(.record, mode: .measurement)
    try audioSession.setActive(true)
    self.audioEngine!.inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil, block: { ... })
    self.audioEngine!.prepare()
    try self.audioEngine!.start()
}

func stopAudio() throws {
    self.audioEngine?.stop()
    self.audioEngine?.inputNode.removeTap(onBus: 0)
    self.audioEngine = nil
    try self.audioSession.setActive(false, options: [.notifyOthersOnDeactivation])
}

In the crash logs (iOS 16.6) I'm seeing that this works fine several times as the app is opened and closed, but suddenly the audioEngine.start() call fails with the error

Error Domain=com.apple.coreaudio.avfaudio Code=-10851 "(null)" UserInfo={failed call=err = AUGraphParser::InitializeActiveNodesInInputChain(ThisGraph, *GetInputNode())}

and audioEngine!.inputNode.outputFormat(forBus: 0) is something like <AVAudioFormat 0x282301c70: 2 ch, 0 Hz, Float32, deinterleaved>. Also, right before installing the tap, audioSession.availableInputs contains an entry of type MicrophoneBuiltIn, but audioSession.currentRoute lists no inputs at all. I have not been able to reproduce this situation on my own devices yet. Does anyone have an idea why this is happening?
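One avenue worth ruling out (an assumption, not a diagnosis): if media services were reset while the app was suspended, all audio objects become invalid and the engine must be rebuilt from scratch. AVAudioSession posts a dedicated notification for this:

import AVFoundation

// Keep the token alive for as long as the observation should last.
let resetObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.mediaServicesWereResetNotification,
    object: nil,
    queue: .main
) { _ in
    // Every audio object is invalid after a media-services reset:
    // recreate the AVAudioEngine, reconfigure the session, and
    // reinstall the input tap from scratch.
}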
Posted by
Post not yet marked as solved
0 Replies
691 Views
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote IO audio unit (ioUnit). The unit is initialized and is working fine (meaning that it is regularly called by the system).

os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size = sizeof(os_workgroup);
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0,
                                  &os_workgroup, &os_workgroup_index_size);
    status != noErr) {
    throw runtime_error("AudioUnitGetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
Posted by
Post not yet marked as solved
1 Reply
683 Views
AVSpeechSynthesisVoice.speechVoices() returns voices that are no longer available after upgrading from iOS 16 to iOS 17 (although this has been an issue for a long time, I think). To reproduce:
1. On iOS 16, download one or more enhanced voices under Accessibility > Spoken Content > Voices.
2. Upgrade to iOS 17.
3. Call AVSpeechSynthesisVoice.speechVoices() and note that the voices installed in step 1 are still present, yet they are no longer downloaded and therefore don't work.
There is no property on AVSpeechSynthesisVoice to indicate whether a voice is still available. This is a problem for apps that allow users to choose among the available system voices: I receive many support emails about this after iOS upgrades, and I have to tell users to re-download the voices, which is not obvious to them. I've created a feedback item for this as well (FB12994908).
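For reference, a sketch of the enumeration in question; as noted above, there is no availability flag, so stale enhanced voices are listed alongside working ones:

import AVFoundation

// Lists enhanced voices as reported by the API. After an OS upgrade, some of
// these entries may refer to voices that are no longer actually downloaded.
for voice in AVSpeechSynthesisVoice.speechVoices() where voice.quality == .enhanced {
    print(voice.identifier, voice.name, voice.language)
}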
Posted by
Post not yet marked as solved
0 Replies
575 Views
When recording audio over Bluetooth from AirPods to an iPhone using AVAudioRecorder, the Bluetooth audio codec used is always AAC-ELD, independent of the storage codec selected in the AVAudioRecorder settings. As far as I know, every Bluetooth device must support SBC; hence it should be possible for the AirPods to transmit the recorded audio using the SBC codec instead of AAC-ELD. However, I could not find any resource on how to request this codec using AVAudioRecorder or AVAudioEngine. Is it possible to request SBC at all, and if yes, how?
Posted by
Post not yet marked as solved
1 Reply
814 Views
AVSpeechSynthesizer is not working. It was working perfectly before. Below is my Objective-C code:

- (void)playVoiceMemoforMessageEVO:(NSString *)msg {
    [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil];
    AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
    AVSpeechUtterance *speechutt = [AVSpeechUtterance speechUtteranceWithString:msg];
    speechutt.volume = 1.0f;
    speechutt.rate = 0.3f;
    speechutt.pitchMultiplier = 0.8f;
    speechutt.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
    [synthesizer speakUtterance:speechutt];
}

Please help me to solve this issue.
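One frequent cause of exactly this symptom, offered as an assumption rather than a diagnosis: the synthesizer above is a local variable and can be deallocated before speech starts. A Swift sketch of keeping a strong reference:

import AVFoundation

final class SpeechPlayer {
    // Stored property: the synthesizer must outlive the call to speak(_:),
    // otherwise it is deallocated before any audio is produced.
    private let synthesizer = AVSpeechSynthesizer()

    func play(_ msg: String) {
        let utterance = AVSpeechUtterance(string: msg)
        utterance.rate = 0.3
        utterance.pitchMultiplier = 0.8
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}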
Posted by
Post not yet marked as solved
0 Replies
900 Views
Our app is a game written in Unity, where most of our audio playback is handled by Unity. However, one of our game experiences uses microphone input for speech recognition, so in order to perform echo cancellation (while the game has audio playback), we set up an audio stream from Unity to native Swift code that performs the mixing of the input/output nodes. We found, however, that by streaming the audio buffer through our AVAudioSession:
- The volume of the audio playback comes out differently.
- When capturing a screen recording of the app, the audio played through the AVAudioSession does not get captured at all.
We're looking to figure out what could be causing the discrepancy in playback as well as the capture behavior during screen recordings. We set up the AVAudioSession with this configuration:

AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, options: .mixWithOthers)

with inputNode.setVoiceProcessingEnabled(true) after connecting our IO and mixer nodes. Any suggestions or ideas on what to look out for would be appreciated!
Posted by
Post not yet marked as solved
0 Replies
647 Views
I have a Flutter mobile app, and I'm using the record Flutter package for recording audio. I'm facing an issue while recording audio when the phone is locked.

App behavior: First we start the app and connect it to a Bluetooth device. Then the app waits for a trigger of 1 from the connected device. On receiving the trigger, it starts recording, while the phone is locked and the app is running in the background. I then get this error when AVAudioSession sets the category:

AVAudioSession_iOS.mm:2367 Failed to set category, error: '!int'
Failed to set up audio session: Error Domain=NSOSStatusErrorDomain Code=560557684 "(null)"

My app is for user-security purposes, so it needs to record in the background. Let me know how I can achieve this functionality.
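For context: OSStatus 560557684 is the FourCC '!int', i.e. AVAudioSession.ErrorCode.cannotInterruptOthers, which is raised when a non-mixable session tries to become active while the app is in the background. A sketch of a mixable configuration, assuming mixing with other audio is acceptable for this use case (how the Flutter plugin configures its session is a separate question):

import AVFoundation

// A mixable category avoids the background restriction on sessions
// that would interrupt other apps' audio (assumption, not a confirmed fix).
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord, mode: .default,
                            options: [.mixWithOthers, .allowBluetooth])
    try session.setActive(true)
} catch {
    print("Audio session setup failed: \(error)")
}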
Posted by
Post not yet marked as solved
0 Replies
705 Views
The loop plays smoothly in Audacity, but when I run it on the device or simulator it clicks on each loop, at different intensities. I configure the session at the app level:

let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.playback, mode: .default, options: [.mixWithOthers])
    try audioSession.setActive(true)
} catch {
    print("Setting category session for AVAudioSession failed")
}

And then I have this method in my class:

func playSound(soundId: Int) {
    let sound = ModelData.shared.sounds[soundId]
    if let bundle = Bundle.main.path(forResource: sound.filename, ofType: "flac") {
        let backgroundMusic = NSURL(fileURLWithPath: bundle)
        do {
            audioPlayer = try AVAudioPlayer(contentsOf: backgroundMusic as URL)
            audioPlayer?.prepareToPlay()
            audioPlayer?.numberOfLoops = -1 // loop indefinitely
            audioPlayer?.play()
            isPlayingSounds = true
        } catch {
            print(error)
        }
    }
}

Does anyone have any clue? Thanks!
PS: If I use AVQueuePlayer and repeat the item, the click noise disappears (but that's no use, because I would need to repeat it indefinitely without wasting memory). If I use AVPlayerLooper I get a silence between loops. All with the same sound. Idk :/
PS2: The same happens with ALAC files.
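For comparison, another way to get a seamless loop from a single in-memory buffer is AVAudioPlayerNode's .loops scheduling option; a sketch, assuming the FLAC file decodes through AVAudioFile:

import AVFoundation

enum LoopError: Error { case bufferAllocationFailed }

func loopForever(url: URL) throws -> (AVAudioEngine, AVAudioPlayerNode) {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)

    // Read the whole file into one PCM buffer (fine for short loops).
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else {
        throw LoopError.bufferAllocationFailed
    }
    try file.read(into: buffer)

    engine.connect(player, to: engine.mainMixerNode, format: buffer.format)
    try engine.start()

    // .loops replays the same buffer back-to-back with no scheduling gap.
    player.scheduleBuffer(buffer, at: nil, options: .loops)
    player.play()
    return (engine, player)
}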
Posted by
Post not yet marked as solved
1 Reply
964 Views
I was playing a PCM file (24 kHz, 16-bit) with AudioQueue, mixing it with another sound (192 kHz, 24-bit), called sound2. The AVAudioSession settings are: category AVAudioSessionCategoryPlayback, options AVAudioSessionCategoryOptionMixWithOthers | AVAudioSessionCategoryOptionDuckOthers. When the PCM plays, sound2 should be ducked to a lower volume, as configured. But there is an absolute mute lasting about 0.5 seconds when sound2 ducks, and only after that 0.5 s mute does the ducked sound come through. This only happens in the sound2 situation (192 kHz, 24-bit); if sound2's quality is lower, everything is OK.
Posted by
Post not yet marked as solved
0 Replies
408 Views
I am wondering if it's possible to obtain audio focus when the app is in the background without using the duckOthers option. I tested the Amazon Echo Buds with the Alexa app and found that it can obtain audio focus; I am curious how this is accomplished. I have a BLE device that can connect to my app. After connecting the device and my app, I put my app in the background and play a song from the Spotify app. Then, when I press a button on my BLE device, it sends a BLE command to my app to play music. However, my app cannot obtain audio focus, so the music cannot be played. The only way to make it work is to configure duckOthers. Compared with the Echo Buds: if we do the same steps, they can get audio focus. Is it because they have MFi?

do {
    let options: AVAudioSession.CategoryOptions = [.allowBluetoothA2DP, .defaultToSpeaker, .duckOthers]
    try audioSession.setCategory(.playAndRecord, mode: .spokenAudio, options: options)
    DDLogDebug("\(LOG_TAG) \(#function) setting category: \(audioSession.category.rawValue), " +
               "options: \(audioSession.categoryOptions.rawValue)")
} catch {
    DDLogWarn("\(LOG_TAG) \(#function) Failed to configure audio session: \(error.localizedDescription)")
}
Posted by