AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

84 Posts
Post not yet marked as solved
0 Replies
607 Views
Good day community, For more than half a year we have faced a crash with the following call stack:

Crashed: AVAudioSession Notify Thread
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
0  libEmbeddedSystemAUs.dylib  InterruptionListener(void*, unsigned int, unsigned int, void const*)
1  libEmbeddedSystemAUs.dylib  InterruptionListener(void*, unsigned int, unsigned int, void const*)
2  AudioToolbox  AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3  AudioToolbox  HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4  AudioToolbox  ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5  AudioToolbox  ASCallbackReceiver_AudioSessionPingMessage + 632
6  AudioToolbox  _XAudioSessionPingMessage + 44
7  libAudioToolboxUtility.dylib  mshMIGPerform + 264
8  CoreFoundation  __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9  CoreFoundation  __CFRunLoopDoSource1 + 444
10 CoreFoundation  __CFRunLoopRun + 1888
11 CoreFoundation  CFRunLoopRunSpecific + 424
12 AVFAudio  GenericRunLoopThread::Entry(void*) + 156
13 AVFAudio  CAPThread::Entry(CAPThread*) + 204
14 libsystem_pthread.dylib  _pthread_start + 156
15 libsystem_pthread.dylib  thread_start + 8

We use the Wwise audio framework as our audio playback API. We reported the problem to Audiokinetic's support, but it seems the problem is not there. We also used the FMOD sound engine earlier and had the same issue. At this point we see around 100 crash events every day, which makes us upset. It looks like it started with iOS 13. My main problem is that I don't communicate with the AudioToolbox or AVFAudio APIs directly but use third-party sound engines instead. I believe I am not the only one who has faced this problem.
There is also a discussion at https://forum.unity.com/threads/ios-12-crash-audiotoolbox.719675/ The last message deserves special attention: https://zhuanlan.zhihu.com/p/370791950 where Jeffrey Zhuang did some research. This might be helpful for Apple's support team. Any help is highly appreciated. Best regards, Sergey.
Post not yet marked as solved
0 Replies
246 Views
Our app has hit a weird problem in the released version: more and more users get error 561145187 when this code runs:

AudioQueueNewInput(&self->_recordFormat, inputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &self->_audioQueue);

I have searched for several weeks, but nothing has helped. Summing up all affected devices, we found some similarities: it only happens on iPadOS 14.0+, and it occurs when the app starts or wakes from the background (we call the code when the app receives "UIApplicationDidBecomeActiveNotification"). Any idea why this happens?
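Core Audio OSStatus values often pack a four-character code. Decoding 561145187 this way yields "!rec", which matches AVAudioSessionErrorCodeCannotStartRecording — the session could not begin recording, e.g. because it was not active yet when the app returned to the foreground. A minimal pure-Swift decoder (`fourCC` is a hypothetical helper name, not an Apple API):

```swift
// Decode a Core Audio OSStatus into its four-character code.
// Assumes the status packs four ASCII bytes, big-endian.
func fourCC(_ status: Int32) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((n >> $0) & 0xFF) }
    // Fall back to the raw number when a byte is not printable ASCII.
    guard bytes.allSatisfy({ (0x20...0x7E).contains(Int($0)) }) else {
        return "\(status)"
    }
    return String(decoding: bytes, as: UTF8.self)
}

print(fourCC(561145187)) // "!rec"
```

If the code decodes to "!rec", a common mitigation is to call AVAudioSession.sharedInstance().setActive(true) (retrying on failure) before AudioQueueNewInput when handling UIApplicationDidBecomeActiveNotification, since the session may not be reactivated yet at that moment.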
Post not yet marked as solved
6 Replies
1.8k Views
What are the requirements to support the Voice Isolation / Wide Spectrum microphone modes on iOS 15? I see that it's possible to programmatically display the selection menu, but the new options say they are unavailable inside my app. I have a dummy app that creates a standard AVAudioSession and sets the mode to .voiceChat (I have tried a lot of values here) but still can't seem to switch the microphone mode. It also has the VoIP flag enabled on the Capabilities tab. Docs to show the Microphone Mode prompt: https://developer.apple.com/documentation/avfoundation/avcapturedevice/systemuserinterface/microphonemodes
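For reference, a minimal sketch of the API involved, assuming iOS 15+ and supported hardware. Note that microphone modes only become selectable while the app is actually capturing audio through Apple's voice-processing path (for example an active .playAndRecord session using the voice-processing I/O unit); whether Voice Isolation is offered is decided by the system, not the app:

```swift
import AVFoundation

// Sketch: inspect microphone-mode state and show the system picker.
// Assumes iOS 15+; the modes stay "unavailable" unless the device
// supports them and the app is capturing via voice processing.
func showMicrophoneModePicker() {
    // Class properties describing the system-wide microphone mode.
    print("active mode:", AVCaptureDevice.activeMicrophoneMode.rawValue)
    print("preferred mode:", AVCaptureDevice.preferredMicrophoneMode.rawValue)

    // Presents the Control Center microphone-mode selection UI.
    AVCaptureDevice.showSystemUserInterface(.microphoneModes)
}
```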
Post not yet marked as solved
0 Replies
589 Views
I’m getting the following crash when using AVAudioSession with AVAudioEngine. What I don’t understand is why InterruptionListener is listed twice in the stack trace. Does this mean it’s somehow being called again before it has returned? Is this likely to be a concurrency issue?

Crashed: AVAudioSession Notify Thread
0  libEmbeddedSystemAUs.dylib    0x1dbc3333c InterruptionListener(void*, unsigned int, unsigned int, void const*)
1  libEmbeddedSystemAUs.dylib    0x1dbc33270 InterruptionListener(void*, unsigned int, unsigned int, void const*)
2  AudioToolbox                  0x1c86e6484 AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3  AudioToolbox                  0x1c8740798 HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4  AudioToolbox                  0x1c873fec0 ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5  AudioToolbox                  0x1c873f17c ASCallbackReceiver_AudioSessionPingMessage + 632
6  AudioToolbox                  0x1c87ad398 _XAudioSessionPingMessage + 44
7  libAudioToolboxUtility.dylib  0x1c8840430 mshMIGPerform + 264
8  CoreFoundation                0x1bd42b174 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9  CoreFoundation                0x1bd42a880 __CFRunLoopDoSource1 + 444
10 CoreFoundation                0x1bd425634 __CFRunLoopRun + 1888
11 CoreFoundation                0x1bd424ba8 CFRunLoopRunSpecific + 424
12 AVFAudio                      0x1ca1f4a2c GenericRunLoopThread::Entry(void*) + 156
13 AVFAudio                      0x1ca2457a0 CAPThread::Entry(CAPThread*) + 204
14 libsystem_pthread.dylib       0x1bd1c2d98 _pthread_start + 156
15 libsystem_pthread.dylib       0x1bd1c674c thread_start + 8
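The crash above is inside the system's interruption delivery; at the app level, interruptions are normally handled by observing AVAudioSession.interruptionNotification rather than the legacy C listener. A minimal sketch (the pause/resume actions are placeholders for your own engine code):

```swift
import AVFoundation

// Sketch: observe audio-session interruptions (began/ended).
final class InterruptionObserver {
    private var token: NSObjectProtocol?

    init(session: AVAudioSession = .sharedInstance()) {
        token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: session,
            queue: .main
        ) { note in
            guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
            switch type {
            case .began:
                // Pause playback / stop the engine here.
                break
            case .ended:
                let optRaw = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                let options = AVAudioSession.InterruptionOptions(rawValue: optRaw)
                if options.contains(.shouldResume) {
                    // Reactivate the session and resume playback here.
                }
            @unknown default:
                break
            }
        }
    }

    deinit {
        if let token { NotificationCenter.default.removeObserver(token) }
    }
}
```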
Post not yet marked as solved
1 Reply
696 Views
Hello, I've implemented two functions in my view controller (setupRemoteTransportControls() and setupNowPlaying()) and added one function to the AppDelegate, but I'm still unable to see the background audio controls of my app on the lock screen, and the audio interruption function isn't working either. This is a live stream from a URL, as you can see in the code. In the general settings I have enabled background playing. What I would like to do is print the artist, title and album art on the Remote Command Center, but I was stuck just displaying the command center. I'm attaching a link to my code on GitHub, because it is too many characters to paste here: https://github.com/pawelzet/promil_new/blob/main/ViewController.swift Here is the AppDelegate function that I've added:

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    application.beginReceivingRemoteControlEvents()
    // Override point for customization after application launch.
    return true
}
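For lock-screen controls to appear, the audio session generally needs the .playback category and must be active, the app must register remote-command handlers, and now-playing metadata must be published. A hedged sketch of that pattern — the title/artist strings and `player` are placeholders, not taken from the linked project:

```swift
import UIKit
import AVFoundation
import MediaPlayer

// Sketch: register lock-screen transport controls and publish metadata.
// `player` stands in for the AVPlayer playing the live stream.
func setupRemoteTransportControls(player: AVPlayer) {
    // A .playback session is required for lock-screen/Control Center UI.
    try? AVAudioSession.sharedInstance().setCategory(.playback)
    try? AVAudioSession.sharedInstance().setActive(true)

    let center = MPRemoteCommandCenter.shared()
    center.playCommand.addTarget { _ in
        player.play()
        return .success
    }
    center.pauseCommand.addTarget { _ in
        player.pause()
        return .success
    }
}

func setupNowPlaying(title: String, artist: String, artwork: UIImage?) {
    var info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
        // 1.0 = playing; set 0.0 when paused so the UI stays in sync.
        MPNowPlayingInfoPropertyPlaybackRate: 1.0,
        // Live streams have no fixed duration; mark them as such.
        MPNowPlayingInfoPropertyIsLiveStream: true,
    ]
    if let artwork {
        info[MPMediaItemPropertyArtwork] =
            MPMediaItemArtwork(boundsSize: artwork.size) { _ in artwork }
    }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}
```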
Post not yet marked as solved
0 Replies
418 Views
Using AVAudioEngine with an AVAudioPlayerNode which plays stereo sound works perfectly. I even understand that turning on setVoiceProcessingEnabled on the inputNode makes the sound mono. But after I stop the session and the engine, and turn voice processing off, the sound remains mono. This issue is only present with the built-in speakers. This is what the I/O formats of the nodes look like before and after the voice-processing on/off:

Before:
mainMixer input  <AVAudioFormat 0x281896580: 2 ch, 44100 Hz, Float32, non-inter>
mainMixer output <AVAudioFormat 0x2818919f0: 2 ch, 44100 Hz, Float32, non-inter>
outputNode input  <AVAudioFormat 0x281891cc0: 2 ch, 44100 Hz, Float32, non-inter>
outputNode output <AVAudioFormat 0x281891770: 2 ch, 44100 Hz, Float32, non-inter>

After:
mainMixer input  <AVAudioFormat 0x2818acaf0: 2 ch, 44100 Hz, Float32, non-inter>
mainMixer output <AVAudioFormat 0x2818acaa0: 1 ch, 44100 Hz, Float32>
outputNode input  <AVAudioFormat 0x281898820: 1 ch, 44100 Hz, Float32>
outputNode output <AVAudioFormat 0x2818958b0: 2 ch, 44100 Hz, Float32, non-inter>

Sadly, just changing the connection format does not solve anything. I already tried that with the following (it solves the stereo issue on headphones, but not on the built-in speakers):

let format = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 2)!
audioEngine.connect(audioEngine.mainMixerNode, to: audioEngine.outputNode, format: format)
Post not yet marked as solved
0 Replies
940 Views
Something broke in iOS 15 in my app; the same code works fine on iOS 14.8 and below. The actual issue: when I play audio in my app, then go to the notification bar, pause the audio and then play it again from the notification bar itself, the same audio plays twice. One copy resumes from where I paused it and the other plays the same audio from the beginning. When the issue happens, these are the logs I get:

Ignoring setPlaybackState because application does not contain entitlement com.apple.mediaremote.set-playback-state for platform
2021-09-24 21:40:06.597469+0530 BWW[2898:818107] [rr] Response: updateClientProperties<A4F2E21E-9D79-4FFA-9B49-9F85214107FD> returned with error <Error Domain=kMRMediaRemoteFrameworkErrorDomain Code=29 "Could not find the specified now playing player" UserInfo={NSLocalizedDescription=Could not find the specified now playing player}> for origin-iPhone-1280262988/client-com.iconicsolutions.xstream-2898/player-(null) in 0.0078 seconds

I've been stuck with this issue for 2 days; I've tried everything but can't work out why it only happens on iOS 15. Any help will be greatly appreciated.
Post not yet marked as solved
0 Replies
329 Views
Hello, we have an issue which is more than annoying. Ever since we upgraded to Big Sur, our iOS app fails to load in ANY simulator with an iOS version below 14.4. We show a very small video on app start using AVPlayer; when that is commented out the app proceeds, but as soon as the AVPlayer attempts to play something the app comes to a standstill:

2021-09-28 15:15:01.043592+1000 Test_EN[30285:505278] [AudioHAL_Client] HALB_IOBufferManager.cpp:226:GetIOBuffer: HALB_IOBufferManager::GetIOBuffer: the stream index is out of range
2021-09-28 15:15:01.043758+1000 Test_EN[30285:505278] [AudioHAL_Client] HALB_IOBufferManager.cpp:226:GetIOBuffer: HALB_IOBufferManager::GetIOBuffer: the stream index is out of range
2021-09-28 15:15:01.071092+1000 Test_EN[30285:505278] [aqme] 254: AQDefaultDevice (1): output stream 0: null buffer
2021-09-28 15:15:01.071440+1000 Test_EN[30285:505278] [aqme] 1433: EXCEPTION thrown (-50): -
2021-09-28 15:15:15.915705+1000 Test_EN[30285:505150] [aqme] 177: timed out after 15.000s (0 1); suspension count=0 (IOSuspensions: )
2021-09-28 15:15:15.916170+1000 Test_EN[30285:505150] 239: CA_UISoundClientBase::StartPlaying: AddRunningClient failed (status = -66681).

In order to actually test our app I had to install Catalina on another partition and run it there instead. By the looks of it this issue is not new — it was reported over a year ago and there is still no fix. NOTE: The app works fine on real devices.
Post not yet marked as solved
0 Replies
249 Views
I recorded some fan noise using my recording app, which uses AVAudioRecorder. Recently I found that the audio level fades in for the first few seconds, especially on the iPhone 12 series. I tested with the Voice Memos app too and saw the same issue. I'm attaching 2 screenshots: one from an iPhone 12 and the other from an iPhone 11 Pro Max. Could you explain why this happens? And is there any solution to prevent it?
Post marked as solved
1 Reply
647 Views
My app uses the Media Player framework to play music and has the Audio background mode. When a user pauses the music and backgrounds the app (or if the app is already in the background when they pause), the system does not kill the app as long as it has the active AVAudioSession. This means that as long as a user doesn't start playback from another audio app, mine is available in Control Center for a quick resume. I recently implemented the UIBackgroundTask API (specifically UIApplication.shared.beginBackgroundTask/endBackgroundTask) to run a 5-20 second server communication task when the app is paused while in the background. Now, iOS kills my app at the conclusion of that task, despite it still having the active audio session. It is no longer in Control Center, and needs to be launched fresh. Is there anything I can do to prevent the system from killing my app at the conclusion of the background task? I'm starting to get complaints from users that they're having to relaunch the app after every time they pause for more than a few seconds. Thanks!
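For reference, this is roughly the UIBackgroundTask pattern being described. Ending the task promptly when the work finishes is required, but — as the post observes — it does not by itself prevent the system from reconsidering the app's lifetime afterwards. The function and closure names here are placeholders:

```swift
import UIKit

// Sketch of the begin/end background-task pattern described above.
// `syncWithServer` stands in for the 5-20s server communication; it
// calls its completion handler when the work is done.
func runServerTaskInBackground(syncWithServer: @escaping (@escaping () -> Void) -> Void) {
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = UIApplication.shared.beginBackgroundTask(withName: "server-sync") {
        // Expiration handler: the system's grace period is over; end now.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
    syncWithServer {
        // Always balance beginBackgroundTask with endBackgroundTask.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
}
```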
Post not yet marked as solved
0 Replies
306 Views
I'm using AVAudioRecorder to record audio in my Swift app. It records a 30-second clip, does some things with that clip, deletes it, then starts recording a new 30-second clip, and so on. The app works fine doing this in the background; I have the audio background mode enabled. The problem I'm having is that after some time passes (probably around 90 minutes or so), the recording suddenly stops and does not continue. I handle interruptions in my app fine, so when the recording stops for some reason I get a notification telling me that it has been stopped; but when those roughly 90 minutes pass, the recording just stops without any alert. In fact, I enable the recording with a UISwitch. If the recording stops normally due to some interruption, the switch changes to 'off'. But when the recording stops after about 90 minutes, the switch does not change, as if the app never gets notified that the recording session was cancelled. I don't know if this is due to some time restriction on background recording or something like that. I couldn't find any documentation about this problem, so that's why I'm asking for help here.
Post not yet marked as solved
0 Replies
280 Views
What are the possibilities for using "Hey Siri" while an audio call in the app is in progress? Or is there any function that triggers when we say "Hey Siri"? In my case, I am using the PJSIP library for push-to-talk (VoIP) functionality, which requires microphone access during the call in both the background and the foreground.
Post not yet marked as solved
2 Replies
511 Views
I have a music app that can play in the background, using AVQueuePlayer. I'm in the process of adding support for CloudKit sync of the CoreData store, switching from NSPersistentContainer to NSPersistentCloudKitContainer. The initial sync can be fairly large (10,000+ records), depending on how much the user has used the app. The issue I'm seeing is this: ✅ When the app is in the foreground, CloudKit sync uses a lot of CPU, nearly 100% for a long time (this is expected during the initial sync). ✅ If I AM NOT playing music, when I put the app in the background, CloudKit sync eventually stops syncing until I bring the app to the foreground again (this is also expected). ❌ If I AM playing music, when I put the app in the background, CloudKit never stops syncing, which leads the system to terminate the app after a certain amount of time due to high CPU usage. Is there any way to pause the CloudKit sync when the app is in the background or is there any way to mitigate this?
Post not yet marked as solved
1 Reply
513 Views
I want to create a sort of soundscape in surround sound. Imagine something along the lines of: the user can place the sound of a waterfall to their front right, the sound of frogs croaking to their left, etc. I have an AVAudioEngine playing a number of AVAudioPlayerNodes, and I'm using an AVAudioEnvironmentNode to simulate their positioning. The positioning seems to work correctly. However, I'd like this to work with head tracking, so that if the user moves their head the sounds from the players move accordingly. I can't figure out how to do it or find any docs on the subject. Is it possible to make AVAudioEngine output surround sound, and if it can, would the tracking just work automagically the same as it does when playing surround-sound content using AVPlayerItem? If not, is the only way to achieve this effect to use CMHeadphoneMotionManager and manually move the AVAudioEnvironmentNode's listener around?
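The manual approach mentioned at the end can be sketched like this: CMHeadphoneMotionManager (AirPods Pro and similar, iOS 14+) delivers head attitude, which can be mapped onto the environment node's listenerAngularOrientation. The sign convention below is an assumption — counter-rotating the listener so the scene stays world-fixed — and may need flipping in practice:

```swift
import AVFoundation
import CoreMotion

// Sketch: drive the environment node's listener orientation from
// headphone motion. Assumes the player nodes are already connected
// through `environment` in a running AVAudioEngine.
final class HeadTrackedListener {
    private let motion = CMHeadphoneMotionManager()
    private let environment: AVAudioEnvironmentNode

    init(environment: AVAudioEnvironmentNode) {
        self.environment = environment
    }

    func start() {
        guard motion.isDeviceMotionAvailable else { return }
        motion.startDeviceMotionUpdates(to: .main) { [weak self] deviceMotion, _ in
            guard let self, let attitude = deviceMotion?.attitude else { return }
            // Attitude angles are radians; listenerAngularOrientation wants degrees.
            let degrees = { (r: Double) in Float(r * 180.0 / .pi) }
            self.environment.listenerAngularOrientation =
                AVAudio3DAngularOrientation(
                    yaw: -degrees(attitude.yaw),   // assumed sign: keep scene world-fixed
                    pitch: degrees(attitude.pitch),
                    roll: degrees(attitude.roll))
        }
    }

    func stop() { motion.stopDeviceMotionUpdates() }
}
```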
Post not yet marked as solved
0 Replies
235 Views
In our app we do live streaming and take audio input from the user. We have also added a stepper that allows the user to increase the input gain for the microphone using AVAudioSession, like so:

let audioSession = AVAudioSession.sharedInstance()
if audioSession.isInputGainSettable {
    do {
        try audioSession.setInputGain(gain)
    } catch {
        print("error setting input gain")
    }
}

This works fine on my iPad Pro 11 (2nd generation). However, upon further testing we noticed that of our 4 iPads it works on 2 (both 2nd generation) but not on the other 2 (both 3rd generation). The strange thing is that the input gain appears to be set correctly on all 4 devices — isInputGainSettable is true and setInputGain succeeds — but on those 2 devices there is no change in the audio input volume. Does anyone know why something like this would happen?
Post not yet marked as solved
1 Reply
370 Views
Hello, I am looking for information on TTS and STT. I am aware that it is possible to implement both offline and online. I am interested in knowing whether it is possible to force on-device TTS and STT in a third-party app even when the device is online. Our use case is: even while the app is online, we wish to do TTS and STT on device and not on Apple's servers (privacy concerns). Please let me know if this is possible at all, or point me in the right direction. I really appreciate and look forward to your reply.
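For the STT side, Speech framework requests can be pinned to on-device recognition where the device and language support it. A hedged sketch (availability varies by hardware and locale; the helper name is illustrative):

```swift
import Speech

// Sketch: build a speech-recognition request that never leaves the
// device. Returns nil when on-device recognition is unsupported.
func makeOnDeviceRequest(locale: Locale = Locale(identifier: "en-US"))
    -> (SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest)? {
    guard let recognizer = SFSpeechRecognizer(locale: locale),
          recognizer.supportsOnDeviceRecognition else { return nil }
    let request = SFSpeechAudioBufferRecognitionRequest()
    // Forces local processing; recognition fails rather than falling
    // back to the server.
    request.requiresOnDeviceRecognition = true
    return (recognizer, request)
}
```

For TTS, AVSpeechSynthesizer with locally installed voices works without a network connection, though there is no documented per-request "on-device only" switch comparable to requiresOnDeviceRecognition.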
Post not yet marked as solved
1 Reply
1.6k Views
I am a developer at Tencent. We found that after AirPods were upgraded to the new firmware version 4A400, some AirPods microphones may have abnormal sound problems, especially on iPhones running iOS 13. The specific behavior: the sound captured by the AirPods microphones intermittently exhibits noise, broken sound, pitch shift, and tremor, and clarity and intelligibility are poor. Our users have reported many times that during Tencent Meeting video calls others can't hear their voice, so we tested many iPhones and AirPods and found that they all have some problems. Our specific test results:
1. iPhone 11 Pro Max / iOS 13.6.1 / AirPods 2: the sound periodically exhibits noise and tremor about every 20s; it sounds uncomfortable, and the effect is the same when testing with the system phone app, FaceTime, WeChat calls, and Zoom.
2. The same iPhone as in 1, with the earphones replaced by AirPods Pro: exactly the same results as in 1.
3. The same AirPods Pro as in 2, with the phone changed to an iPhone X / iOS 13.6: there are occasional discontinuities in the sound, and crackling noises can be heard.
4. The same AirPods Pro as in 2, with the phone changed to an iPhone Xs / iOS 13.7: at the beginning there was continuous noise and pitch shifting; after a few minutes of speaking it returned to normal, and the sound then stayed normal.
5. The same AirPods Pro as in 2, with the phone changed to an iPhone 12 Pro Max / iOS 15.0.2: the sound is completely normal.

Why do the same AirPods perform so differently on different iPhones, with the same phenomenon across various VoIP apps, and all of the problems on iOS 13? Does the 4A400 firmware have compatibility issues with iOS 13? We noticed that the previous AirPods hardware sampling rate was 16 kHz, but after upgrading to 4A400 the hardware sampling rate changed to 24 kHz.
Is the noise above related to the change in hardware sampling rate? Do I need to modify Audio Unit parameters to solve these problems? Our app has a very large base of personal and corporate users; when they find a problem with the sound they give us feedback, which puts more pressure on us. We hope to get a reply from Apple or other developers. Thank you!
Post not yet marked as solved
0 Replies
214 Views
When I make an in-app call, I want to stop other apps from using the mic:

AVAudioSessionCategoryOptions options = AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionDefaultToSpeaker;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:options error:nil];

But when I start talking, it doesn't interrupt WeChat.