AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

84 Posts
Post not yet marked as solved
1 Replies
1.1k Views
I have trouble understanding AVAudioEngine's behaviour when switching audio input sources.

Expected behaviour: when switching input sources, AVAudioEngine's inputNode should adopt the new input source seamlessly.

Actual behaviour: when switching from AirPods to the iPhone speaker, AVAudioEngine stops working. No audio is routed through any more, yet querying engine.isRunning still returns true. When subsequently switching back to AirPods, it still isn't working, but now engine.isRunning returns false. Stopping and starting the engine on a route change does not help. Neither does calling reset(). Disconnecting and reconnecting the input node does not help either. The only thing that reliably helps is discarding the whole engine and creating a new one.

OS: this is on iOS 14, beta 5. I can't test this on previous versions, I'm afraid; I only have one device around.

Code to reproduce: here is a minimal code example. Create a simple app project in Xcode (it doesn't matter whether you choose SwiftUI or Storyboard), give it permission to access the microphone in Info.plist, and create the following file Conductor.swift:

```swift
import AVFoundation

class Conductor {
    static let shared: Conductor = Conductor()

    private let _engine = AVAudioEngine()

    init() {
        // Session
        let session = AVAudioSession.sharedInstance()
        try? session.setActive(false)
        try! session.setCategory(.playAndRecord, options: [.defaultToSpeaker,
                                                           .allowBluetooth,
                                                           .allowAirPlay])
        try! session.setActive(true)

        _engine.connect(_engine.inputNode, to: _engine.mainMixerNode, format: nil)
        _engine.prepare()
    }

    func start() { try? _engine.start() }
}
```

And in AppDelegate, call:

```swift
Conductor.shared.start()
```

This example routes the input straight to the output. If you don't have headphones, it will trigger a feedback loop.

Question: what am I missing here? Is this expected behaviour? If so, it does not seem to be documented anywhere.
Posted
by tcwalther.
Last updated
.
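Given the report above that only recreating the engine restores audio, one possible mitigation (a sketch, not a confirmed fix) is to rebuild the engine whenever the route changes; `rebuildEngine()` below reconstructs the same graph as in the example:

```swift
import AVFoundation

final class RouteChangeObserver {
    private var engine = AVAudioEngine()

    init() {
        // Rebuild the engine whenever a device appears or disappears.
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.routeChangeNotification,
            object: nil,
            queue: .main
        ) { [weak self] notification in
            guard let self = self,
                  let raw = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
                  let reason = AVAudioSession.RouteChangeReason(rawValue: raw)
            else { return }

            switch reason {
            case .newDeviceAvailable, .oldDeviceUnavailable:
                self.rebuildEngine()
            default:
                break
            }
        }
    }

    private func rebuildEngine() {
        engine.stop()
        engine = AVAudioEngine()  // discard and recreate, as described above
        engine.connect(engine.inputNode, to: engine.mainMixerNode, format: nil)
        engine.prepare()
        try? engine.start()
    }
}
```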
Post not yet marked as solved
1 Replies
783 Views
When I change the audio unit from VPIO (VoiceProcessingIO) to RemoteIO, the volume is reduced and cannot be restored to the normal level. There's a workaround described in this post: https://trac.pjsip.org/repos/ticket/1697

I use the workaround in my code, following these steps:

1. Stop and destroy all running IO audio units (RemoteIO and VPIO).
2. Run the workaround code:

```objc
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
[[AVAudioSession sharedInstance] setActive:NO error:nil];
[[AVAudioSession sharedInstance] setActive:YES error:nil];
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
```

3. Set up RemoteIO and run.

And it worked perfectly: the volume of RemoteIO went back to normal. But sadly, on iOS 14 the workaround no longer works at all.
Posted
by drummer.
Last updated
.
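For reference, the same deactivate/reactivate sequence can be written in Swift (a sketch mirroring the PJSIP workaround above; whether it still helps on iOS 14 is exactly the open question):

```swift
import AVFoundation

// Toggle the session through .record and back to .playAndRecord,
// after stopping and disposing of all IO audio units.
func resetSessionAfterVPIO() {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.record)
    try? session.setActive(false)
    try? session.setActive(true)
    try? session.setCategory(.playAndRecord)
}
```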
Post not yet marked as solved
0 Replies
233 Views
Hi, not sure if this is by design or not, but whenever I connect a Bluetooth device to the app, the AU callback stops producing frames for several seconds until the device is connected. I'm building a recording app that uses AVAssetWriter with fragmented segments (HLS buffers). When the callback freezes, it should create a gap in the audio, but for some reason the segment that is created does not contain an audio gap; the audio just "jumps" in timestamps.
Posted
by YYfim.
Last updated
.
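To make the timestamp "jump" concrete, a small helper (a sketch built only on the tap-callback types; it diagnoses the gap rather than fixing it) can track each buffer's expected sample position:

```swift
import AVFoundation

/// Tracks the expected sample position of incoming buffers and reports
/// any discontinuity between consecutive buffers.
final class GapDetector {
    private var nextExpectedSampleTime: AVAudioFramePosition?

    /// Returns the gap length in frames, or nil if the buffer is contiguous.
    func checkBuffer(_ buffer: AVAudioPCMBuffer, at when: AVAudioTime) -> AVAudioFramePosition? {
        defer {
            nextExpectedSampleTime = when.sampleTime + AVAudioFramePosition(buffer.frameLength)
        }
        guard let expected = nextExpectedSampleTime else { return nil }
        let gap = when.sampleTime - expected
        return gap > 0 ? gap : nil
    }
}
```

If the detector reports a positive gap around the Bluetooth connection but the written segment shows none, that would confirm the writer is compacting the timeline rather than preserving the silence.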
Post marked as solved
1 Replies
407 Views
We have noticed that our app sometimes spends too much time in the first call to AVAudioSession.sharedInstance and in [AVAudioSession setCategory:error:], which we call during the app's initialization (in the app delegate's init). I am not sure whether the app is stuck in these calls or they simply take too long to complete. This probably causes the app to be killed by the main-thread watchdog. Would it be safe to move these calls to a separate thread?
Posted
by artium.
Last updated
.
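One way to keep the watchdog away while this is investigated (a sketch; it assumes nothing at launch needs the configured session synchronously) is to configure the session on a serial background queue:

```swift
import AVFoundation

// A dedicated serial queue so session calls never race each other.
let sessionQueue = DispatchQueue(label: "audio.session.setup", qos: .userInitiated)

func configureAudioSessionAsync() {
    sessionQueue.async {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback)
            try session.setActive(true)
        } catch {
            print("Audio session setup failed: \(error)")
        }
    }
}
```

Any later audio work that depends on the session being configured should also be dispatched to (or synchronized with) the same queue.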
Post not yet marked as solved
2 Replies
532 Views
I recently released my first ShazamKit app, but there is one thing that still bothers me. When I started, I followed the steps as documented by Apple here: https://developer.apple.com/documentation/shazamkit/shsession/matching_audio_using_the_built-in_microphone. However, when I ran this on an iPad I got a lot of high-pitched feedback noise with this configuration. I got it to work by commenting out the output node and format and only using the input. But now I want to be able to recognise the song that's playing from the device that has my app open, and I was wondering if I need the output nodes for that, or if I can do something else to prevent the mic feedback from happening.

In short:
- What can I do to prevent feedback from happening?
- Can I use the output of a device to recognise songs, or do I just need to make sure that the microphone can run at the same time as playing music?

Other than that, I really love the ShazamKit API and can highly recommend having a go with it! This is the code as documented in the link above (I just added comments marking what broke it for me):

```swift
func configureAudioEngine() {
    // Get the native audio format of the engine's input bus.
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: 0)

    // THIS CREATES FEEDBACK ON IPAD PRO
    let outputFormat = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 1)

    // Create a mixer node to convert the input.
    audioEngine.attach(mixerNode)

    // Attach the mixer to the microphone input and the output of the audio engine.
    audioEngine.connect(audioEngine.inputNode, to: mixerNode, format: inputFormat)

    // THIS CREATES FEEDBACK ON IPAD PRO
    audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: outputFormat)

    // Install a tap on the mixer node to capture the microphone audio.
    mixerNode.installTap(onBus: 0, bufferSize: 8192, format: outputFormat) { buffer, audioTime in
        // Add captured audio to the buffer used for making a match.
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}
```
Posted Last updated
.
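For comparison, the input-only variant the poster describes (a sketch of "commenting out the output node": nothing is connected to outputNode, so there is no monitoring path to feed back; `addAudio` is the same hand-off to ShazamKit as above) could look like:

```swift
import AVFoundation

let audioEngine = AVAudioEngine()

func configureInputOnlyEngine(addAudio: @escaping (AVAudioPCMBuffer, AVAudioTime) -> Void) {
    let inputNode = audioEngine.inputNode
    let inputFormat = inputNode.inputFormat(forBus: 0)

    // Tap the microphone directly in its native format; with no
    // connection to outputNode, the mic is never played back.
    inputNode.installTap(onBus: 0, bufferSize: 8192, format: inputFormat) { buffer, audioTime in
        addAudio(buffer, audioTime)
    }

    audioEngine.prepare()
    try? audioEngine.start()
}
```

Whether matching works as well at the input's native format as at the 48 kHz mono format in Apple's sample is an open question; a format converter can be inserted before the hand-off if needed.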
Post not yet marked as solved
3 Replies
1.3k Views
I have been using AVAudioEngine to take audio from the mic and send it out over a WebRTC connection. When I use the iPhone's built-in mic, this works as expected. But if I run the app with Bluetooth headphones connected, the engine reports this error when trying to start:

```
[avae] AVAudioEngine.mm:160 Engine@0x2833e1790: could not initialize, error = -10868
[avae] AVAEInternal.h:109 [AVAudioEngineGraph.mm:1397:Initialize: (err = AUGraphParser::InitializeActiveNodesInInputChain(ThisGraph, *GetInputNode())): error -10868
Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error -10868.)
```

I see that error code -10868 is:

```
@constant kAudioUnitErr_FormatNotSupported
Returned if an input or output format is not supported
...
kAudioUnitErr_FormatNotSupported = -10868
```

but that doesn't seem like it can be quite correct. I know that the output format is supported, because the same format works correctly when my headphones are not attached. And I am pretty sure the input format is supported, because I am able to simply hook up headphones InputNode -> Mixer -> headphones OutputNode and correctly hear the audio from the mic. So I can only assume this means the format conversion is not supported.

My questions: Is this a bug? Is there any way to work around it?

Notes: my full audio graph looks like this, where all the "mixers" are just AVAudioMixerNodes:

```
// InputNode (Mic)  -> Mic Mixer ----\
//                                    >-> WebRTC Mixer -> Tap -> WebRTC Framework
// AudioPlayer 1 -> Player Mixer ----/
//
// AudioPlayer 2 -> Player Mixer -----> LocalOutputMixer -> OutputNode (Device Speakers/Headphones)
```

but the issue still happens even if I simplify down to this:

```
// InputNode (Mic) -> Mixer -> Tap -> WebRTC Framework
```

Specifically, it happens when a single mixer node is connected with the following input and output formats.

The input format is:

```
(lldb) po audioEngine.inputNode.inputFormat(forBus: 0).streamDescription.pointee
▿ AudioStreamBasicDescription
  - mSampleRate : 16000.0
  - mFormatID : 1819304813
  - mFormatFlags : 41
  - mBytesPerPacket : 4
  - mFramesPerPacket : 1
  - mBytesPerFrame : 4
  - mChannelsPerFrame : 1
  - mBitsPerChannel : 32
  - mReserved : 0
```

The output format WebRTC expects is:

```
▿ AudioStreamBasicDescription
  - mSampleRate : 48000.0
  - mFormatID : 1819304813
  - mFormatFlags : 12
  - mBytesPerPacket : 2
  - mFramesPerPacket : 1
  - mBytesPerFrame : 2
  - mChannelsPerFrame : 1
  - mBitsPerChannel : 16
  - mReserved : 0
```

My headphones are Jaybird Freedom 2.
Posted Last updated
.
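One possible workaround (a sketch, not a confirmed fix): tap the input node at its native format and do the 16 kHz Float32 → 48 kHz Int16 conversion explicitly with AVAudioConverter, instead of asking the mixer connection to convert. `sendToWebRTC` is a hypothetical hand-off to the WebRTC framework:

```swift
import AVFoundation

func installConvertingTap(on engine: AVAudioEngine,
                          sendToWebRTC: @escaping (AVAudioPCMBuffer) -> Void) {
    let input = engine.inputNode
    let inputFormat = input.inputFormat(forBus: 0)

    // The interleaved 16-bit mono 48 kHz format WebRTC expects.
    guard let targetFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                           sampleRate: 48000,
                                           channels: 1,
                                           interleaved: true),
          let converter = AVAudioConverter(from: inputFormat, to: targetFormat)
    else { return }

    input.installTap(onBus: 0, bufferSize: 4096, format: inputFormat) { buffer, _ in
        let ratio = targetFormat.sampleRate / inputFormat.sampleRate
        let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio)
        guard let converted = AVAudioPCMBuffer(pcmFormat: targetFormat,
                                               frameCapacity: capacity) else { return }
        var error: NSError?
        converter.convert(to: converted, error: &error) { _, outStatus in
            outStatus.pointee = .haveData
            return buffer
        }
        if error == nil {
            sendToWebRTC(converted)
        }
    }
}
```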
Post marked as solved
1 Replies
312 Views
I'm developing an iPhone application and want to play back and record sound from the bottom speaker and microphone without any built-in audio processing. If I use the following code, the sound comes from the top speaker. If I also add `try playbackSession.setCategory(AVAudioSession.Category.multiRoute)`, then the sound comes from both the top and bottom speakers. If I use any mode other than measurement, there is built-in audio processing that I want to avoid.

```swift
playbackSession = AVAudioSession.sharedInstance()
do {
    try playbackSession.overrideOutputAudioPort(AVAudioSession.PortOverride.speaker)
    try playbackSession.setMode(AVAudioSession.Mode.measurement)
} catch {
    print("Playing over the device's speakers failed")
}
```
Posted Last updated
.
Post not yet marked as solved
0 Replies
279 Views
I know that if you want background audio from AVPlayer, you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly. I have all of that squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController. If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the Home Screen, you can't perform the detachment operation, otherwise the PiP display fails. On an iPad, if PiP is active and you lock the device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses. If PiP is inactive and you lock the device, the audio pauses and you have to manually tap play on the Lock Screen controls; this is the same on iPad and iPhone. My questions are:
- Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
- Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)
Posted
by jblaker.
Last updated
.
Post not yet marked as solved
5 Replies
1.1k Views
Hello, in my iOS app most users can hear the AVSpeechSynthesisVoice correctly, but some report that it simply does not work. I haven't been able to reproduce the issue locally, but here is how I use the API:

```swift
let sentence = "the sentence to be told"
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: sentence)
utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate * 1.05
synthesizer.speak(utterance)
```

This works perfectly fine on iOS 13 (tested on most minor versions), on all iOS 14 versions, and on all the devices I could find, but I keep getting reports of people not getting any audio feedback. Do you have any pointers on where to look, or at least on how to reproduce the issue? Thanks
Posted
by MGG9.
Last updated
.
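One thing worth ruling out (a sketch of a possible cause, not a confirmed diagnosis): `AVSpeechSynthesisVoice(language:)` returns nil when no matching voice is installed on the device, which would leave the utterance without the intended voice. Checking availability explicitly:

```swift
import AVFoundation

// Prefer en-GB, but fall back to any installed English voice
// (the fallback policy here is a hypothetical choice).
func makeVoice(preferred language: String = "en-GB") -> AVSpeechSynthesisVoice? {
    if let voice = AVSpeechSynthesisVoice(language: language) {
        return voice
    }
    return AVSpeechSynthesisVoice.speechVoices()
        .first { $0.language.hasPrefix("en") }
}
```

Another common pitfall worth checking: the AVSpeechSynthesizer instance must stay alive for the duration of the speech; if it is a short-lived local variable, it can be deallocated before any audio is produced.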
Post not yet marked as solved
0 Replies
241 Views
Hello, we are designing dual-mode Bluetooth audio earphones. Audio is streamed via Bluetooth Classic (like any other earphones), but we can also connect to the earphones via BLE to get some specific sensor data. For information, the earphones are not MFi. We have also developed a special iOS application that should automatically connect to the earphones via BLE (when the user launches the app) if the earphones are connected to the smartphone over Bluetooth Classic. We are able to find out whether the smartphone is connected to our earphones using AVAudioSession.currentRoute.outputs, and we can scan for BLE peripherals to find our earphones' BLE peripheral. Yet I don't know how to make sure that a found BLE peripheral corresponds to the connected Bluetooth Classic device. What if two pairs of earphones are nearby? Is there any way to check that, so that our application can automatically connect to the BLE peripheral without needing the user to manually select and connect to it? I've heard about Cross Transport Key Derivation in the "What's New in Core Bluetooth" WWDC 2019 session, but I'm not sure whether this would allow me to do what I want. Thank you for your help.
Posted Last updated
.
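For the matching problem, the audio route at least exposes the connected output's name and UID (a sketch; whether the route UID can be correlated with a CoreBluetooth peripheral identifier is exactly the open question above):

```swift
import AVFoundation

// Inspect the current route for Bluetooth outputs.
func connectedBluetoothOutputs() -> [(name: String, uid: String)] {
    let bluetoothTypes: [AVAudioSession.Port] = [.bluetoothA2DP, .bluetoothHFP, .bluetoothLE]
    return AVAudioSession.sharedInstance().currentRoute.outputs
        .filter { bluetoothTypes.contains($0.portType) }
        .map { ($0.portName, $0.uid) }
}
```

Comparing `portName` against the BLE peripheral's advertised name is one heuristic, but it fails when two identically named devices are nearby, which is why a transport-level pairing link such as CTKD is attractive.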
Post not yet marked as solved
0 Replies
242 Views
I'm working on improving VoiceOver support for an application that delivers language tests to end users. The app asks a user to provide a verbal response to test questions, which is recorded by the app. Tests take the form of a timed assessment, so the time span allocated for providing a response is absolutely critical (it cannot be extended). When a test is taken with VoiceOver enabled, there is a chance that a user will activate an element and the VoiceOver readout will be recorded by the app as part of the response, which will impact the user's score. As we didn't find any option to silence or disable VoiceOver during the recording, we minimized the number of active elements during the test (there is only one left). However, there is still a chance that a user will activate it or refresh focus, and the response will be impacted. Do any of you have experience with similar cases? Is there anything we missed? All the accommodation-related recommendations suggest letting users start the recording/playback whenever they are ready, but this is not an option for us due to the nature of the test. Any help appreciated.
Posted
by Avilar.
Last updated
.
Post not yet marked as solved
0 Replies
219 Views
The stereo capture API works fine when no sound is playing back from the speakers, but the recorded volume drops dramatically when AVPlayer is playing any audio file simultaneously, which means stereo capture only works properly when the iPhone is muted. Steps to reproduce:
1. Run the iOS stereo audio capture demo (https://developer.apple.com/documentation/avfaudio/avaudiosession/capturing_stereo_audio_from_built-in_microphones)
2. Use AVPlayer to play an audio file
The recorded audio level is OK before AVPlayer starts to play, but it drops dramatically once AVPlayer starts playing.
Posted Last updated
.
Post not yet marked as solved
1 Replies
1.6k Views
I am a developer at Tencent. We found that after AirPods were upgraded to the new firmware version 4A400, some AirPods microphones exhibit abnormal sound problems, especially on iPhones running iOS 13. Specifically, the sound captured by the AirPods microphones intermittently exhibits noise, broken sound, pitch shift, and tremor, and the clarity and intelligibility are poor. Our users have reported many times that when using Tencent Meeting for video calls, the other participants can't hear their voice, so we tested many iPhones and AirPods and found that they all have some problems. Here are our specific test results:
1. iPhone 11 Pro Max / iOS 13.6.1 / AirPods 2: the sound periodically exhibits noise and tremor, roughly every 20 s; it sounds uncomfortable, and the effect is the same when testing with the system Phone app, FaceTime, a WeChat call, and Zoom.
2. Using the same iPhone as in 1 but replacing the earphones with AirPods Pro, the results are exactly the same as in 1.
3. Using the same AirPods Pro as in 2 and changing the phone to an iPhone X / iOS 13.6, there are occasional discontinuities in the sound, and crackling noises can be heard.
4. Using the same AirPods Pro as in 2 and changing the phone to an iPhone Xs / iOS 13.7: at the beginning there was continuous noise and pitch shifting; after a few minutes of speaking it returned to normal, and the sound then remained normal.
5. Using the same AirPods Pro as in 2 and changing the phone to an iPhone 12 Pro Max / iOS 15.0.2, the sound is completely normal.
Why do the same AirPods, on different iPhones, differ so much in captured sound quality, with the same phenomenon across various VoIP apps, and why do all of them have problems on iOS 13? Does the 4A400 firmware have compatibility issues with iOS 13? We noticed that the previous AirPods hardware sampling rate was 16 kHz, but after upgrading to 4A400 the hardware sampling rate changed to 24 kHz. Is the noise above related to this change of hardware sampling rate? Do I need to modify the Audio Unit parameters to solve these problems? Our app has a very large base of personal and corporate users; when they find a problem with the sound, they give us feedback, which puts more pressure on us. We hope to get a reply from Apple or other developers, thank you!
Posted
by doveshi.
Last updated
.
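On the sampling-rate question: the session's current hardware rate can be inspected, and a preferred rate requested (a sketch; `setPreferredSampleRate` is only a hint, and the system may keep the rate dictated by the Bluetooth codec and firmware):

```swift
import AVFoundation

func logAndPreferSampleRate() {
    let session = AVAudioSession.sharedInstance()
    print("current hardware sample rate: \(session.sampleRate) Hz")

    // Request 24 kHz to match the post-4A400 AirPods rate; the system
    // treats this as a preference, not a guarantee.
    try? session.setPreferredSampleRate(24_000)
    print("rate after request: \(session.sampleRate) Hz")
}
```

Comparing the reported rate on an affected iOS 13 device against an unaffected iOS 15 device would at least show whether the OS is negotiating different hardware rates with the same firmware.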
Post not yet marked as solved
0 Replies
214 Views
When I make a call in my app, I want to stop other apps from using the mic:

```objc
AVAudioSessionCategoryOptions options = AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionDefaultToSpeaker;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:options error:nil];
```

But when I start talking, it can't interrupt WeChat.
Posted Last updated
.
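Note that setting the category by itself does not claim the audio hardware; it is activating a non-mixable session that interrupts other apps' audio. A sketch of the missing activation step (whether the other app resumes its mic use afterwards is up to that app):

```swift
import AVFoundation

func activateCallSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            options: [.allowBluetooth, .defaultToSpeaker])
    // Activation is what actually interrupts other non-mixable sessions.
    try session.setActive(true)
}
```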
Post not yet marked as solved
1 Replies
370 Views
Hello, I am looking for information on TTS and STT. I am aware that it is possible to implement both offline and online. I am interested in knowing whether it is possible to enable on-device TTS and STT for a third-party app even when the device is online. Our use case is: even when the app is online, we wish to do TTS and STT on device and not on Apple's servers (privacy concerns). Please let me know whether this is possible at all, or point me in the right direction. I really appreciate it and look forward to your reply.
Posted Last updated
.
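For the STT half, the Speech framework exposes an on-device switch (a sketch; on-device support depends on the device and the locale, and AVSpeechSynthesizer's TTS already renders on device for installed voices):

```swift
import Speech

// Build a request that is forced to stay on device, even while online.
func makeOnDeviceRequest(locale: Locale = Locale(identifier: "en-US"))
        -> SFSpeechAudioBufferRecognitionRequest? {
    guard let recognizer = SFSpeechRecognizer(locale: locale),
          recognizer.supportsOnDeviceRecognition else { return nil }

    let request = SFSpeechAudioBufferRecognitionRequest()
    request.requiresOnDeviceRecognition = true  // never sent to the server
    return request
}
```

If `supportsOnDeviceRecognition` is false for the target locale, the request would have to fall back to server recognition or fail, so checking it up front is the key privacy gate.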
Post not yet marked as solved
0 Replies
235 Views
In our app we do live streaming and take audio input from the user. We have also added a stepper that allows the user to increase the microphone input gain using AVAudioSession, like so:

```swift
let audioSession = AVAudioSession.sharedInstance()
if audioSession.isInputGainSettable {
    do {
        try audioSession.setInputGain(gain)
    } catch {
        print("error setting input gain")
    }
}
```

And this works fine on my iPad Pro 11 (2nd generation). However, upon further testing we noticed that of our 4 iPads, it works on 2 (both 2nd generation) but not on the other 2 (both 3rd generation). The strange thing is that the input gain is reported as set correctly on all 4 devices: isInputGainSettable is true and setInputGain succeeds, but on those 2 devices there is no change in the audio input volume. Does anyone know why something like this would happen?
Posted
by bojandvcs.
Last updated
.
Post not yet marked as solved
1 Replies
513 Views
I want to create a sort of soundscape in surround sound. Imagine something along the lines of: the user can place the sound of a waterfall to their front right, the sound of frogs croaking to their left, and so on. I have an AVAudioEngine playing a number of AVAudioPlayerNodes, and I'm using an AVAudioEnvironmentNode to simulate their positioning. The positioning seems to work correctly. However, I'd like this to work with head tracking, so that if the user moves their head, the sounds from the players move accordingly. I can't figure out how to do it or find any docs on the subject. Is it possible to make AVAudioEngine output surround sound, and if so, would the tracking just work automagically, the same as it does when playing surround-sound content using AVPlayerItem? If not, is the only way to achieve this effect to use CMHeadphoneMotionManager and manually move the AVAudioEnvironmentNode's listener around?
Posted Last updated
.
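The manual route mentioned above is workable with CMHeadphoneMotionManager feeding the environment node's listener orientation (a sketch; angles are converted from radians to the degrees AVAudio3DAngularOrientation expects, and the sign conventions may need tuning for a particular scene):

```swift
import AVFoundation
import CoreMotion

final class HeadTracker {
    private let motionManager = CMHeadphoneMotionManager()

    func start(updating environment: AVAudioEnvironmentNode) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            let toDegrees = 180.0 / Double.pi
            // Map headphone attitude onto the listener; negating yaw keeps
            // a sound placed to the right staying to the right as the head
            // turns left (flip signs if your scene behaves inversely).
            environment.listenerAngularOrientation = AVAudio3DAngularOrientation(
                yaw: Float(-attitude.yaw * toDegrees),
                pitch: Float(attitude.pitch * toDegrees),
                roll: Float(attitude.roll * toDegrees))
        }
    }

    func stop() { motionManager.stopDeviceMotionUpdates() }
}
```

CMHeadphoneMotionManager requires supported headphones (e.g. recent AirPods) and iOS 14 or later, and the app needs a motion-usage description in Info.plist.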
Post not yet marked as solved
2 Replies
511 Views
I have a music app that can play in the background, using AVQueuePlayer. I'm in the process of adding support for CloudKit sync of the CoreData store, switching from NSPersistentContainer to NSPersistentCloudKitContainer. The initial sync can be fairly large (10,000+ records), depending on how much the user has used the app. The issue I'm seeing is this: ✅ When the app is in the foreground, CloudKit sync uses a lot of CPU, nearly 100% for a long time (this is expected during the initial sync). ✅ If I AM NOT playing music, when I put the app in the background, CloudKit sync eventually stops syncing until I bring the app to the foreground again (this is also expected). ❌ If I AM playing music, when I put the app in the background, CloudKit never stops syncing, which leads the system to terminate the app after a certain amount of time due to high CPU usage. Is there any way to pause the CloudKit sync when the app is in the background or is there any way to mitigate this?
Posted
by jbuckner.
Last updated
.
Post not yet marked as solved
0 Replies
280 Views
What are the possibilities for using "Hey Siri" while an audio call in the app is in progress? Is there any function that triggers when the user says "Hey Siri"? In my case, I am using the PJSIP library for push-to-talk (VoIP) functionality, which requires access to the microphone during the call in both the background and the foreground.
Posted Last updated
.