AVAudioSession

Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

AVAudioSession Documentation

Posts under AVAudioSession tag

102 results found
Post not yet marked as solved
31 Views

iOS 15: AVPlayer plays the same audio twice after pausing and resuming from the notification mini player

Something broke in iOS 15 in my app; the same code works fine on iOS 14.8 and below. The actual issue: when I play audio in my app, then go to the notification bar, pause the audio, and resume it from the notification bar itself, the same audio plays twice. One instance resumes from where I paused it, and the other plays the same audio from the beginning. When the issue happens, these are the logs I get:

Ignoring setPlaybackState because application does not contain entitlement com.apple.mediaremote.set-playback-state for platform
2021-09-24 21:40:06.597469+0530 BWW[2898:818107] [rr] Response: updateClientProperties<A4F2E21E-9D79-4FFA-9B49-9F85214107FD> returned with error <Error Domain=kMRMediaRemoteFrameworkErrorDomain Code=29 “Could not find the specified now playing player” UserInfo={NSLocalizedDescription=Could not find the specified now playing player}> for origin-iPhone-1280262988/client-com.iconicsolutions.xstream-2898/player-(null) in 0.0078 seconds

I've been stuck on this issue for two days and have tried everything, but I can't work out why it only happens on iOS 15. Any help will be greatly appreciated.
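One pattern worth checking (a hedged sketch, not a confirmed fix): double playback can happen when the remote play command starts a new playback instead of resuming the existing AVPlayer, or when command handlers get registered more than once. The PlaybackController type and setupRemoteCommandsOnce name below are illustrative, not from the original post.

import AVFoundation
import MediaPlayer

final class PlaybackController {
    static let shared = PlaybackController()
    private let player = AVPlayer()
    private var commandsRegistered = false

    // Register handlers exactly once; duplicate targets can trigger
    // duplicate playback when the lock-screen buttons are used.
    func setupRemoteCommandsOnce() {
        guard !commandsRegistered else { return }
        commandsRegistered = true
        let center = MPRemoteCommandCenter.shared()
        center.playCommand.addTarget { [weak self] _ in
            self?.player.play()   // resume the existing item; do not re-create it
            return .success
        }
        center.pauseCommand.addTarget { [weak self] _ in
            self?.player.pause()
            return .success
        }
    }
}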
Asked by KLucky. Last updated.
Post not yet marked as solved
22 Views

Domain=NSOSStatusErrorDomain Code=561145203

Environment: iPad mini 4, iOS 14.7

Code used:

NSError *error = nil;
[session setActive:YES error:&error];

Logged error:

Error Domain=NSOSStatusErrorDomain Code=561145203 "(null)"
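Codes in NSOSStatusErrorDomain are usually four-character codes packed into an integer. A small decoding sketch: 561145203 unpacks to '!res', which matches AVAudioSessionErrorCodeResourceNotAvailable, and 561145187 (seen in another post under this tag) unpacks to '!rec', AVAudioSessionErrorCodeCannotStartRecording.

import Foundation

// Decode an OSStatus into its four-character code, if it is one.
func fourCC(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [UInt8((n >> 24) & 0xFF), UInt8((n >> 16) & 0xFF),
                 UInt8((n >> 8) & 0xFF), UInt8(n & 0xFF)]
    let printable = bytes.allSatisfy { (0x20...0x7E).contains($0) }
    return printable ? String(bytes: bytes, encoding: .ascii)! : "\(status)"
}

print(fourCC(561145203)) // "!res"
print(fourCC(561145187)) // "!rec"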
Asked. Last updated.
Post not yet marked as solved
28 Views

Audio remains mono after turning voiceProcessing off

Using AVAudioEngine with an AVAudioPlayerNode that plays stereo sound works perfectly. I understand that enabling setVoiceProcessingEnabled on the inputNode makes the sound mono. But after I stop the session and the engine and turn voice processing off, the sound remains mono. The issue only occurs with the built-in speakers. This is what the I/O formats of the nodes look like before and after toggling voice processing on and off:

Before:
mainMixer input <AVAudioFormat 0x281896580: 2 ch, 44100 Hz, Float32, non-inter>
mainMixer output <AVAudioFormat 0x2818919f0: 2 ch, 44100 Hz, Float32, non-inter>
outputNode input <AVAudioFormat 0x281891cc0: 2 ch, 44100 Hz, Float32, non-inter>
outputNode output <AVAudioFormat 0x281891770: 2 ch, 44100 Hz, Float32, non-inter>

After:
mainMixer input <AVAudioFormat 0x2818acaf0: 2 ch, 44100 Hz, Float32, non-inter>
mainMixer output <AVAudioFormat 0x2818acaa0: 1 ch, 44100 Hz, Float32>
outputNode input <AVAudioFormat 0x281898820: 1 ch, 44100 Hz, Float32>
outputNode output <AVAudioFormat 0x2818958b0: 2 ch, 44100 Hz, Float32, non-inter>

Sadly, just changing the connection format does not solve anything. I already tried the following (it fixes the stereo issue on headphones, but not on the built-in speakers):

let format = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 2)!
audioEngine.connect(audioEngine.mainMixerNode, to: audioEngine.outputNode, format: format)
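A workaround sketch, offered as an assumption rather than a verified fix: instead of reconnecting nodes on the live engine, release and rebuild the whole engine after disabling voice processing, so the output hardware format is renegotiated from scratch. EnginePlayer and rebuildAfterVoiceProcessing are hypothetical names.

import AVFoundation

final class EnginePlayer {
    private var engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()

    // Hypothetical recovery path: rebuild the engine after voice
    // processing is switched off, instead of reconnecting nodes.
    func rebuildAfterVoiceProcessing() throws {
        engine.stop()
        try engine.inputNode.setVoiceProcessingEnabled(false)
        engine.detach(playerNode)   // release the node from the old engine
        engine = AVAudioEngine()    // a fresh engine renegotiates hardware formats
        engine.attach(playerNode)
        let stereo = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)!
        engine.connect(playerNode, to: engine.mainMixerNode, format: stereo)
        try engine.start()
    }
}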
Asked. Last updated.
Post not yet marked as solved
178 Views

iOS 15 - Microphone Mode Support

What are the requirements for supporting the Voice Isolation / Wide Spectrum microphone modes on iOS 15? I see that it's possible to programmatically display the selection menu, but the new options say they are unavailable inside my app. I have a dummy app that creates a standard AVAudioSession and sets the mode to .voiceChat (I have tried many values here), but I still can't seem to switch the microphone mode. The app also has the VoIP flag enabled on the Capabilities tab. Docs for showing the Microphone Mode prompt: https://developer.apple.com/documentation/avfoundation/avcapturedevice/systemuserinterface/microphonemodes
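For reference, this is the iOS 15 API surface involved. My understanding, which may be incomplete, is that the modes only become selectable while the app is actively capturing audio through Apple's voice-processing path (for example, an AVAudioEngine input with voice processing enabled), not merely because the session mode is .voiceChat.

import AVFoundation

@available(iOS 15.0, *)
func showMicrophoneModePicker() {
    // Present the system microphone-mode selector.
    AVCaptureDevice.showSystemUserInterface(.microphoneModes)
    // These reflect the user's choice; they are read-only to the app.
    print("preferred mode:", AVCaptureDevice.preferredMicrophoneMode.rawValue)
    print("active mode:", AVCaptureDevice.activeMicrophoneMode.rawValue)
}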
Asked. Last updated.
Post not yet marked as solved
75 Views

MPRemoteCommandCenter - Remote controls on lock screen do not show up

Hello, I've implemented two functions in my view controller (setupRemoteTransportControls() and setupNowPlaying()) and added one function to the AppDelegate, but I'm still unable to see my app's background audio controls on the lock screen, and the audio-interruption function isn't working either. The audio is a live stream from a URL, as you can see in the code, and I have enabled background audio in the project settings. What I would like to do is show the artist, title, and album art in the remote command center, but I'm stuck just getting the command center to display at all. I'm attaching a link to my code on GitHub, because it's too long to paste here: https://github.com/pawelzet/promil_new/blob/main/ViewController.swift

Here is the AppDelegate function I've added:

func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
    application.beginReceivingRemoteControlEvents()
    // Override point for customization after application launch.
    return true
}
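For comparison, a minimal sketch of the two pieces that usually have to be in place for lock-screen controls: registered remote commands and populated now-playing info. This assumes an active .playback audio session and an AVPlayer named player; the function and parameter names are illustrative.

import AVFoundation
import MediaPlayer
import UIKit

func setupRemoteTransportControls(player: AVPlayer) {
    let center = MPRemoteCommandCenter.shared()
    center.playCommand.addTarget { _ in player.play(); return .success }
    center.pauseCommand.addTarget { _ in player.pause(); return .success }
}

func setupNowPlaying(title: String, artist: String, artwork: UIImage?) {
    var info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyArtist: artist,
        MPNowPlayingInfoPropertyIsLiveStream: true  // live stream: no fixed duration
    ]
    if let artwork = artwork {
        info[MPMediaItemPropertyArtwork] =
            MPMediaItemArtwork(boundsSize: artwork.size) { _ in artwork }
    }
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info
}

Without nowPlayingInfo set, the lock screen often shows nothing even when the commands are registered, so setting both together is worth trying first.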
Asked by pawelzet. Last updated.
Post not yet marked as solved
5.3k Views

Simulator crashing with iOS < 14; started after upgrading to Big Sur

I can no longer run my app in the simulator with a version lower than iOS 14. I tried iOS 12, iOS 12.4, and iOS 13.7, and they all crash with the same error. This only started after upgrading to Big Sur; nothing has changed in my code base. This is the code that crashes:

// Provide the audio rendering callback on the unit
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(_audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));
NSAssert1(status == noErr, @"Error setting callback: %d", (int)status);

// Start audio
AudioOutputUnitStart(_audioUnit);

The error "Thread 1: signal SIGABRT" occurs on the last line, AudioOutputUnitStart(_audioUnit). The errors in the console are:

/Library/Audio/Plug-Ins/HAL/JackRouter.plugin/Contents/MacOS/JackRouter: mach-o, but not built for iOS simulator
2020-11-25 15:17:34.229006-0800 PolyNome[20602:879922] Cannot find function pointer New_JackRouterPlugIn for factory <CFUUID 0x600003f11a80> 7CB18864-927D-48B5-904C-CCFBCFBC7ADD in CFBundle/CFPlugIn 0x7fbe247a7390 </Library/Audio/Plug-Ins/HAL/JackRouter.plugin> (bundle, not loaded)
2020-11-25 15:17:34.461372-0800 PolyNome[20602:880389] [AudioHAL_Client] HALB_IOBufferManager.cpp:226:GetIOBuffer: HALB_IOBufferManager::GetIOBuffer: the stream index is out of range
2020-11-25 15:17:34.461528-0800 PolyNome[20602:880389] [AudioHAL_Client] HALB_IOBufferManager.cpp:226:GetIOBuffer: HALB_IOBufferManager::GetIOBuffer: the stream index is out of range
2020-11-25 15:17:34.474317-0800 PolyNome[20602:880389] [aqme] 254: AQDefaultDevice (1): output stream 0: null buffer
2020-11-25 15:17:34.475993-0800 PolyNome[20602:880389] [aqme] 1640: EXCEPTION thrown (-50): -
2020-11-25 15:17:43.424327-0800 PolyNome[20602:879922] RPCTimeout.mm:55:_ReportRPCTimeout: Start: Mach message timeout. Apparently deadlocked. Aborting now.
CoreSimulator 732.18.0.2 - Device: iPhone 8 Plus (796F538B-78DA-4FE7-9005-317621931E88) - Runtime: iOS 12.4 (16G73) - DeviceType: iPhone 8 Plus

As I said, the only thing that changed since it last worked is that I upgraded to Big Sur. It runs fine in the iOS 14 simulator, but not in simulators with a lower iOS version. The audio output of the Simulator is set to Internal Speakers. Any help is much appreciated.
Asked. Last updated.
Post not yet marked as solved
73 Views

AVAudioSession Notify Thread Crash

I’m getting the following crash when using AVAudioSession with AVAudioEngine. What I don’t understand is why InterruptionListener is listed twice in the stack trace. Does this mean it’s somehow being called again before it has returned? Is this likely to be a concurrency issue?

Crashed: AVAudioSession Notify Thread
0  libEmbeddedSystemAUs.dylib    0x1dbc3333c InterruptionListener(void*, unsigned int, unsigned int, void const*)
1  libEmbeddedSystemAUs.dylib    0x1dbc33270 InterruptionListener(void*, unsigned int, unsigned int, void const*)
2  AudioToolbox                  0x1c86e6484 AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3  AudioToolbox                  0x1c8740798 HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4  AudioToolbox                  0x1c873fec0 ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5  AudioToolbox                  0x1c873f17c ASCallbackReceiver_AudioSessionPingMessage + 632
6  AudioToolbox                  0x1c87ad398 _XAudioSessionPingMessage + 44
7  libAudioToolboxUtility.dylib  0x1c8840430 mshMIGPerform + 264
8  CoreFoundation                0x1bd42b174 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9  CoreFoundation                0x1bd42a880 __CFRunLoopDoSource1 + 444
10 CoreFoundation                0x1bd425634 __CFRunLoopRun + 1888
11 CoreFoundation                0x1bd424ba8 CFRunLoopRunSpecific + 424
12 AVFAudio                      0x1ca1f4a2c GenericRunLoopThread::Entry(void*) + 156
13 AVFAudio                      0x1ca2457a0 CAPThread::Entry(CAPThread*) + 204
14 libsystem_pthread.dylib       0x1bd1c2d98 _pthread_start + 156
15 libsystem_pthread.dylib       0x1bd1c674c thread_start + 8
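Not an answer to the stack-trace question, but for anyone handling interruptions at the AVAudioSession level: the supported route is the interruption notification rather than the low-level listener. A minimal sketch, with the onBegan/onEnded callbacks as illustrative names:

import AVFoundation

final class InterruptionObserver {
    private var token: NSObjectProtocol?

    init(onBegan: @escaping () -> Void, onEnded: @escaping (_ shouldResume: Bool) -> Void) {
        token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(), queue: .main
        ) { note in
            guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }
            switch type {
            case .began:
                onBegan()   // pause the engine/player here
            case .ended:
                let opts = note.userInfo?[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
                onEnded(AVAudioSession.InterruptionOptions(rawValue: opts).contains(.shouldResume))
            @unknown default:
                break
            }
        }
    }

    deinit { token.map(NotificationCenter.default.removeObserver) }
}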
Asked. Last updated.
Post marked as solved
344 Views

RemoteIO Glitch With Sound Recognition Feature

My app is using RemoteIO to record audio. It doesn’t do any playback. RemoteIO seems to be broadly compatible with the new Sound Recognition feature in iOS 14, but I’m seeing a glitch when Sound Recognition is first enabled. If my app is started and I initialise RemoteIO, and then turn on Sound Recognition (say via Control Centre), the RemoteIO input callback is not called thereafter, until I tear down the audio unit and set it back up. So something like the following:

1. Launch app.
2. RemoteIO is initialised and working; recording is possible.
3. Turn on Sound Recognition via Settings or the Control Centre widget.
4. Start recording with the already-set-up RemoteIO.
5. The recording callback is never called again. Though no input callbacks are seen, kAudioOutputUnitProperty_IsRunning is reported as true, so the audio unit thinks it is active.
6. Tear down the audio unit.
7. Set up the audio unit again.
8. Recording works. The buffer size has changed, reflecting some effect of the Sound Recognition feature on the audio session.

I also noticed that when Sound Recognition is enabled, I see several (usually 3) AVAudioSession.routeChangeNotifications in quick succession. When Sound Recognition is disabled while RemoteIO is set up, I don’t see this problem. I’m allocating my own buffers, so it’s not a problem with their size. What could be going on here? Am I not handling a route change properly? There doesn’t seem to be a reliable sequence of events I can catch to know when to reset the audio unit. The only fix I’ve found is to hack in a timer that checks for callback activity shortly after starting recording, and resets the audio unit if no callback activity is seen. Better than nothing, but not super reliable.
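A hedged sketch of the notification-driven alternative to the timer hack: observe route changes and media-services resets, coalesce the burst of route-change notifications, and rebuild the unit once afterwards. Whether these fire reliably in the Sound Recognition case is exactly what is in question, so treat this as an experiment. rebuildAudioUnit stands in for the poster's existing teardown/setup code, and the 0.5 s window is an arbitrary choice.

import AVFoundation

final class SessionWatcher {
    private var tokens: [NSObjectProtocol] = []

    init(rebuildAudioUnit: @escaping () -> Void) {
        let session = AVAudioSession.sharedInstance()
        let center = NotificationCenter.default
        var pending: DispatchWorkItem?

        // Sound Recognition toggling produces a burst of route changes;
        // coalesce them so the unit is rebuilt once, after the burst.
        tokens.append(center.addObserver(forName: AVAudioSession.routeChangeNotification,
                                         object: session, queue: .main) { _ in
            pending?.cancel()
            let work = DispatchWorkItem(block: rebuildAudioUnit)
            pending = work
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.5, execute: work)
        })

        // A media-services reset always requires a full rebuild.
        tokens.append(center.addObserver(forName: AVAudioSession.mediaServicesWereResetNotification,
                                         object: session, queue: .main) { _ in rebuildAudioUnit() })
    }

    deinit { tokens.forEach(NotificationCenter.default.removeObserver) }
}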
Asked. Last updated.
Post not yet marked as solved
90 Views

Audio recording failing for longer durations

We are trying to record audio for more than 6 hours. The app works for 2 to 3 hours, but if I record for more than 4 or 5 hours, the recording fails. I have enough space on my device. We are using AVAudioRecorder. Appreciate your help.
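To narrow down where it fails, one option (a diagnostic sketch, not a fix) is to implement the AVAudioRecorderDelegate error callbacks and watch for session interruptions, which can silently stop long recordings:

import AVFoundation

final class RecorderMonitor: NSObject, AVAudioRecorderDelegate {
    // Called if recording stops because encoding failed.
    func audioRecorderEncodeErrorDidOccur(_ recorder: AVAudioRecorder, error: Error?) {
        print("encode error:", error?.localizedDescription ?? "unknown")
    }

    // successfully == false means the system stopped the recording.
    func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder, successfully flag: Bool) {
        print("finished, success:", flag)
    }

    override init() {
        super.init()
        NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(), queue: .main
        ) { note in
            print("interruption:", note.userInfo ?? [:])
        }
    }
}

Assign an instance of this as the recorder's delegate; logging which of these fires around the 4-5 hour mark should reveal whether the system, an interruption, or the encoder ends the recording.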
Asked. Last updated.
Post not yet marked as solved
64 Views

AudioQueue error 561145187

our app meet a wired problem for online version. more and more user get 561145187 when try to call this code: AudioQueueNewInput(&self->_recordFormat, inputBufferHandler, (__bridge void *)(self), NULL, NULL, 0, &self->_audioQueue)" I search for several weeks, but nothing help. we sum up all issues devices, found some similarity: only happens on iPad OS 14.0 + occurred when app started or wake from background (we call the code when app received "UIApplicationDidBecomeActiveNotification") Any Idea why this happens?
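For what it's worth, 561145187 unpacks to the four-character code '!rec', i.e. AVAudioSessionErrorCodeCannotStartRecording: recording cannot start at that moment, which fits the wake-from-background timing. A hedged mitigation sketch, deferring queue creation until session activation succeeds (createQueue is a placeholder for the AudioQueueNewInput call above; the retry count and delay are arbitrary):

import AVFoundation

// Activate the session first; only create the input queue once
// activation succeeds. Retry briefly on wake-from-background.
func startRecordingWhenReady(createQueue: @escaping () -> Void, retries: Int = 3) {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)
        createQueue()
    } catch {
        guard retries > 0 else {
            print("giving up:", error)
            return
        }
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) {
            startRecordingWhenReady(createQueue: createQueue, retries: retries - 1)
        }
    }
}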
Asked by kevin zh. Last updated.
Post not yet marked as solved
268 Views

App crashes on Activation of Display after Background Audio

How can I find out what the problem is? Every time I start audio, listen to it while the iPad/iPhone display is turned off, and then wake the display after 10-15 minutes, the app crashes. Here are the first lines of the crash report:

Hardware Model: iPad8,12
Process: VOH-App [16336]
Path: /private/var/containers/Bundle/Application/5B2CF582-D108-4AA2-B30A-81BA510B7FB6/VOH-App.app/VOH-App
Identifier: com.voiceofhope.VOH
Version: 7 (1.0)
Code Type: ARM-64 (Native)
Role: Non UI
Parent Process: launchd [1]
Coalition: com.voiceofhope.VOH [740]
Date/Time: 2021-08-18 22:51:24.0770 +0200
Launch Time: 2021-08-18 22:36:50.4081 +0200
OS Version: iPhone OS 14.7.1 (18G82)
Release Type: User
Baseband Version: 2.05.01
Report Version: 104

Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_PROTECTION_FAILURE at 0x000000016d2dffb0
VM Region Info: 0x16d2dffb0 is in 0x16d2dc000-0x16d2e0000; bytes after start: 16304  bytes before end: 79
REGION TYPE       START - END         [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
CG raster data    11cad0000-11d814000 [ 13.3M] r--/r-- SM=COW
GAP OF 0x4fac8000 BYTES
---> STACK GUARD  16d2dc000-16d2e0000 [   16K] ---/rwx SM=NUL ... for thread 0
Stack             16d2e0000-16d3dc000 [ 1008K] rw-/rwx SM=PRV  thread 0
Termination Signal: Segmentation fault: 11
Termination Reason: Namespace SIGNAL, Code 0xb
Terminating Process: exc handler [16336]
Triggered by Thread: 0

Thread 0 name: Dispatch queue: com.apple.main-thread
Thread 0 Crashed:
0 libswiftCore.dylib 0x00000001a8028360 swift::MetadataCacheKey::operator==+ 3773280 (swift::MetadataCacheKey) const + 4
1 libswiftCore.dylib 0x00000001a801ab8c _swift_getGenericMetadata+ 3718028 (swift::MetadataRequest, void const* const*, swift::TargetTypeContextDescriptor<swift::InProcess> const*) + 304
2 libswiftCore.dylib 0x00000001a7ffbd00 __swift_instantiateCanonicalPrespecializedGenericMetadata + 36

Here is the full crash report: VOH-App 16.08.21, 20-22.crash
Asked. Last updated.
Post not yet marked as solved
82 Views

iOS AVAudioSession Notify Thread crash.

Good day, community. For more than half a year we have faced a crash with the following call stack:

Crashed: AVAudioSession Notify Thread
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000000000000000
0  libEmbeddedSystemAUs.dylib    InterruptionListener(void*, unsigned int, unsigned int, void const*)
1  libEmbeddedSystemAUs.dylib    InterruptionListener(void*, unsigned int, unsigned int, void const*)
2  AudioToolbox                  AudioSessionPropertyListeners::CallPropertyListeners(unsigned int, unsigned int, void const*) + 596
3  AudioToolbox                  HandleAudioSessionCFTypePropertyChangedMessage(unsigned int, unsigned int, void*, unsigned int) + 1144
4  AudioToolbox                  ProcessDeferredMessage(unsigned int, __CFData const*, unsigned int, unsigned int) + 2452
5  AudioToolbox                  ASCallbackReceiver_AudioSessionPingMessage + 632
6  AudioToolbox                  _XAudioSessionPingMessage + 44
7  libAudioToolboxUtility.dylib  mshMIGPerform + 264
8  CoreFoundation                __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__ + 56
9  CoreFoundation                __CFRunLoopDoSource1 + 444
10 CoreFoundation                __CFRunLoopRun + 1888
11 CoreFoundation                CFRunLoopRunSpecific + 424
12 AVFAudio                      GenericRunLoopThread::Entry(void*) + 156
13 AVFAudio                      CAPThread::Entry(CAPThread*) + 204
14 libsystem_pthread.dylib       _pthread_start + 156
15 libsystem_pthread.dylib       thread_start + 8

We use the Wwise audio framework as our audio playback API. We reported the problem to Audiokinetic's support, but it seems the problem is not there. We also used the FMOD sound engine earlier and had the same issue. At this point we see around 100 crash events every day, which makes us upset. It looks like it started with iOS 13. My main problem is that I don't talk to the AudioToolbox or AVFAudio APIs directly, but use third-party sound engines instead. I believe I am not the only one who has faced this problem. There is also a discussion at https://forum.unity.com/threads/ios-12-crash-audiotoolbox.719675/ The last message there deserves special attention: https://zhuanlan.zhihu.com/p/370791950 where Jeffrey Zhuang did some research. This might be helpful for Apple's support team. Any help is highly appreciated. Best regards, Sergey.
Asked. Last updated.
Post not yet marked as solved
311 Views

ShazamKit during AVCaptureSession - Recognize audio while using camera

Hi, I want to implement ShazamKit in my project, but I have some problems. I use AVCaptureSession to take photos in my app, and I'm unable to use ShazamKit alongside it. I tried three different approaches:

1. Use an AVAudioEngine during my AVCaptureSession, but I didn't get any result from Shazam.
2. Use ShazamKit after stopping my AVCaptureSession, but this causes some problems and some crashes.
3. Use the buffer of my AVCaptureSession to capture audio directly, without AVAudioEngine.

This is the code I use with AVAudioEngine:

try! audioSession.setActive(true, options: .notifyOthersOnDeactivation)
let inputNode = self.audioEngine.inputNode
let recordingFormat = inputNode.outputFormat(forBus: 0)
let audioFormat = recordingFormat // AVAudioFormat(standardFormatWithSampleRate: self.audioEngine.inputNode.outputFormat(forBus: 0).sampleRate, channels: 1)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
    try! self.signatureGenerator.append(buffer, at: nil)
    self.session.matchStreamingBuffer(buffer, at: nil)
}

self.audioEngine.prepare()
try! self.audioEngine.start()

I can go two ways here: pass the AVCaptureSession output to ShazamKit, or use an AVAudioSession after stopping the AVCaptureSession. So I have two questions:

1. Can I use a CMSampleBufferRef from the AVCaptureSession buffer in an SHSession? And if the answer is yes, how?
2. How can I prevent the following error when I want to use an AVAudioSession after stopping my AVCaptureSession?

[aurioc] AURemoteIO.cpp:1117 failed: -10851 (enable 1, outf< 2 ch, 0 Hz, Float32, deinterleaved> inf< 2 ch, 0 Hz, Float32, deinterleaved>)
[avae] AVAEInternal.h:76 required condition is false: [AVAEGraphNode.mm:834:CreateRecordingTap: (IsFormatSampleRateAndChannelCountValid(format))]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'

Thanks
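On question 1: SHSession.matchStreamingBuffer(_:at:) takes an AVAudioPCMBuffer, so the capture output's CMSampleBuffer has to be converted first. A sketch of that conversion, assuming the capture session has an AVCaptureAudioDataOutput delivering linear-PCM samples (shazamSession is an illustrative name):

import AVFoundation
import ShazamKit

// Convert a CMSampleBuffer from AVCaptureAudioDataOutput into an
// AVAudioPCMBuffer that ShazamKit can consume.
func pcmBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
          let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc),
          let format = AVAudioFormat(streamDescription: asbd) else { return nil }
    let frames = AVAudioFrameCount(CMSampleBufferGetNumSamples(sampleBuffer))
    guard let pcm = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames) else { return nil }
    pcm.frameLength = frames
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(
        sampleBuffer, at: 0, frameCount: Int32(frames),
        into: pcm.mutableAudioBufferList)
    return status == noErr ? pcm : nil
}

// In the AVCaptureAudioDataOutputSampleBufferDelegate:
// func captureOutput(_ output: AVCaptureOutput,
//                    didOutput sampleBuffer: CMSampleBuffer,
//                    from connection: AVCaptureConnection) {
//     if let pcm = pcmBuffer(from: sampleBuffer) {
//         shazamSession.matchStreamingBuffer(pcm, at: nil)
//     }
// }

This sidesteps the AVAudioEngine tap entirely, which may also avoid the IsFormatSampleRateAndChannelCountValid crash, since that appears when the tap is installed while the input reports a 0 Hz format.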
Asked by RedSun. Last updated.
Post not yet marked as solved
116 Views

AVAudioSession with allowBluetooth option prevents listening for Bluetooth button events

I have a VoIP call application. I'm trying to add functionality so that when the user has connected a Bluetooth device and hits its button, the call is answered. I added options so that when an incoming call is received, the ringtone also plays through Bluetooth:

let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(AVAudioSession.Category.playAndRecord, mode: .spokenAudio, options: [.defaultToSpeaker, .allowBluetooth, .allowBluetoothA2DP])
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

And I added MPRemoteCommandCenter to listen for Bluetooth events:

let rcCenter = MPRemoteCommandCenter.shared()
rcCenter.nextTrackCommand.isEnabled = false
rcCenter.nextTrackCommand.addTarget { _ in return .success }
rcCenter.previousTrackCommand.isEnabled = false
rcCenter.previousTrackCommand.addTarget { _ in return .success }
rcCenter.togglePlayPauseCommand.isEnabled = true
rcCenter.playCommand.isEnabled = true
rcCenter.pauseCommand.isEnabled = true
rcCenter.stopCommand.isEnabled = true

rcCenter.togglePlayPauseCommand.addTarget { [unowned self] event in
    print("togglePlayPauseCommand")
    return .commandFailed
}
rcCenter.playCommand.addTarget { [unowned self] event in
    print("playCommand")
    return .commandFailed
}
rcCenter.pauseCommand.addTarget { [unowned self] event in
    print("pause")
    return .commandFailed
}
rcCenter.stopCommand.addTarget { [unowned self] event in
    print("stop")
    return .commandFailed
}

When I remove the Bluetooth-related options from audioSession.setCategory, I can receive the events. But when I put them back, the events stop working. I also tried UIResponder, with no success:

UIApplication.shared.becomeFirstResponder()
UIApplication.shared.beginReceivingRemoteControlEvents()

and

override func remoteControlReceived(with event: UIEvent?) {
    if let rc = event?.subtype {
        print("OVER HERE")
    }
}

Any idea will be appreciated.
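One avenue worth exploring (an assumption on my part, not a confirmed answer): for VoIP apps, Bluetooth HFP button presses are generally delivered through CallKit's call-control path rather than MPRemoteCommandCenter, so reporting the incoming call via CXProvider lets the system answer it from the headset. A minimal sketch:

import CallKit

final class CallManager: NSObject, CXProviderDelegate {
    private let provider: CXProvider

    override init() {
        let config = CXProviderConfiguration()   // iOS 14+ initializer
        config.supportsVideo = false
        provider = CXProvider(configuration: config)
        super.init()
        provider.setDelegate(self, queue: nil)
    }

    // Report the incoming call; the headset button can then answer it.
    func reportIncomingCall(uuid: UUID, handle: String) {
        let update = CXCallUpdate()
        update.remoteHandle = CXHandle(type: .generic, value: handle)
        provider.reportNewIncomingCall(with: uuid, update: update) { error in
            if let error = error { print("report failed:", error) }
        }
    }

    func providerDidReset(_ provider: CXProvider) {}

    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Activate the audio session and connect the call here.
        action.fulfill()
    }
}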
Asked by xyzbilal. Last updated.
Post not yet marked as solved
174 Views

AVSpeechSynthesizer - how to run callback onError

I use AVSpeechSynthesizer to pronounce some text in German. Sometimes it works just fine and sometimes it doesn't, for a reason unknown to me. There is no error, because the speak() method doesn't throw, and the only thing I can observe is the following message logged in the console:

_BeginSpeaking: couldn't begin playback

I tried to find an API in AVSpeechSynthesizerDelegate to register a callback for when an error occurs, but I found none. The closest match was this (but it appears to be available only on macOS, not iOS): https://developer.apple.com/documentation/appkit/nsspeechsynthesizerdelegate/1448407-speechsynthesizer?changes=_10

Below is how I initialize and use the speech synthesizer in my app:

class Speaker: NSObject, AVSpeechSynthesizerDelegate {
    class func sharedInstance() -> Speaker {
        struct Singleton {
            static var sharedInstance = Speaker()
        }
        return Singleton.sharedInstance
    }

    let audioSession = AVAudioSession.sharedInstance()
    let synth = AVSpeechSynthesizer()

    override init() {
        super.init()
        synth.delegate = self
    }

    func initializeAudioSession() {
        do {
            try audioSession.setCategory(.playback, mode: .spokenAudio, options: .duckOthers)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
        }
    }

    func speak(text: String, language: String = "de-DE") {
        guard !self.synth.isSpeaking else { return }
        let utterance = AVSpeechUtterance(string: text)
        let voice = AVSpeechSynthesisVoice.speechVoices().filter { $0.language == language }.first!
        utterance.voice = voice
        self.synth.speak(utterance)
    }
}

The audio session initialization runs just once, during app startup. Afterwards, speech is synthesized by running:

Speaker.sharedInstance().speak(text: "Lederhosen")

The problem is that I have no way of knowing whether the speech synthesis succeeded. The UI shows a "speaking" state, but nothing is actually spoken.
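There is indeed no error delegate on iOS, but the didStart callback can stand in for one: if it hasn't fired shortly after speak(), treat it as the "couldn't begin playback" failure. A hedged sketch (the 0.5 s timeout is an arbitrary choice, and SpeechWatchdog is an illustrative name):

import AVFoundation

final class SpeechWatchdog: NSObject, AVSpeechSynthesizerDelegate {
    private var started = false

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didStart utterance: AVSpeechUtterance) {
        started = true
    }

    // Call right after synth.speak(utterance).
    func armTimeout(onFailure: @escaping () -> Void) {
        started = false
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { [weak self] in
            if self?.started == false {
                onFailure()   // playback never began; retry or reset the session
            }
        }
    }
}

Set an instance as the synthesizer's delegate (alongside or instead of Speaker), and in onFailure retry the utterance or re-run initializeAudioSession().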
Asked. Last updated.