Post not yet marked as solved
Hello,
Our beta testers have recently had issues connecting their AirPods to the Apple Watch. We have an AVAudioSession in the watch app and have set up a workout session, etc. Regular Bluetooth headphones have no issues, and we can see the connection through AVAudioSession.routeChangeNotification with them, but not with AirPods for some reason. Does anyone have an idea of what may cause this? Thank you
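For context, this is roughly how we watch for route changes (a minimal sketch; the logging is illustrative, while the notification name and userInfo key are the standard AVFoundation ones):

```swift
import AVFoundation

// Sketch: log every route change so we can see whether the AirPods
// connection ever surfaces as a new route.
let session = AVAudioSession.sharedInstance()

let observer = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: session,
    queue: .main
) { notification in
    // The reason tells us why the route changed (e.g. .newDeviceAvailable).
    if let rawReason = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
       let reason = AVAudioSession.RouteChangeReason(rawValue: rawReason) {
        print("Route change reason: \(reason.rawValue)")
    }
    // Dump the current outputs; AirPods would normally appear as .bluetoothA2DP.
    for output in session.currentRoute.outputs {
        print("Output: \(output.portType.rawValue) – \(output.portName)")
    }
}
```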
Post not yet marked as solved
Here is my situation:
I'm working on an app that has background music. This should obviously be muted by the silent switch. For this reason, I'm using AVAudioSession category ambient.
However, part of the app is a list of audio samples. Each one has a preview button, a big ol' rightwards-facing triangle ▶️. That's a play button, and by tapping it the user is unambiguously indicating they want to hear the audio. But in ambient, those sounds are silenced if the silent switch is enabled; the play button switches to pause and the progress indicator runs as normal, but no sound comes out. To the user, this looks like a bug. I can't even tell them why it's not working, because there's no API that reports the audio is being silenced.
Ok, fine, I can switch the category to playback and we'll hear them… but then we'll also hear the background music suddenly fade in, and then back out after we're done playing the sample.
Alright, so we can pause the BGM before changing the category and playing the sample, then change it back and unpause the BGM. This now works as expected if you have the device on silent.
But these are only samples – sound effects, really – and they should play over the background music, not interrupt it. If we don't have the silent switch enabled, we get the BGM suddenly fading out before playing our sample, and that's almost as bad.
Basically, I feel like I'm being hamstrung by the all-or-nothing nature of the AVAudioSession. There are some sounds I want to play regardless of the position of the silent switch, and others that I want to mute if the silent switch is enabled. But I have no idea how to do that, or if it's even possible.
Here are the options as I see them, and the issues with each of them.
Category | Silent | Ring | Issue
-------- | ------ | ---- | -----
playback | Plays all audio | Plays all audio | Unacceptable; BGM overrides the silent switch
ambient | Plays no audio | Plays all audio | Sample list appears broken when the device is silenced
ambient => playback | Plays BGM and SFX when user presses play button | Plays correctly | Weird fade-in of background music when silenced
ambient => playback, pausing BGM | Plays correctly | BGM pauses before playing SFX | Weird fade-out of background music when not silenced
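The "ambient => playback, pausing BGM" dance from the last row, as a sketch (`bgmPlayer` and `samplePlayer` are hypothetical AVAudioPlayer instances standing in for the app's real players):

```swift
import AVFoundation

// Sketch of switching categories around a sample preview.
// bgmPlayer and samplePlayer are hypothetical placeholders.
func playSample(_ samplePlayer: AVAudioPlayer, pausing bgmPlayer: AVAudioPlayer) throws {
    let session = AVAudioSession.sharedInstance()

    // Pause the BGM so it doesn't fade in under .playback.
    bgmPlayer.pause()

    // .playback ignores the silent switch, so the sample is always audible.
    try session.setCategory(.playback)
    try session.setActive(true)
    samplePlayer.play()
}

func sampleDidFinish(resuming bgmPlayer: AVAudioPlayer) throws {
    // Restore .ambient so the BGM respects the silent switch again.
    try AVAudioSession.sharedInstance().setCategory(.ambient)
    bgmPlayer.play()
}
```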
Can anyone offer any advice, here?
Post not yet marked as solved
I recently encountered a weird crash which only appears in iOS 14.5 and iOS 14.5.1, and which I cannot reproduce.
The crash log only contains code from system frameworks; it didn't contain any code from my app. Does anybody know why? Any information will be appreciated. Below is the stack trace; "TaxiDriverOwner" is my project name.
Date/Time: 2022-04-06 15:20:45.2457 +0800
Launch Time: 2022-04-06 15:15:54.6033 +0800
OS Version: iPhone OS 14.5.1 (18E212)
Release Type: User
Baseband Version: 1.62.11
Report Version: 104
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 0
Thread 0 name:
Thread 0 Crashed:
0 libsystem_kernel.dylib 0x00000001c9be9334 __pthread_kill + 8
1 libsystem_pthread.dylib 0x00000001e760daa0 pthread_kill + 272 (pthread.c:1392)
2 libsystem_c.dylib 0x00000001a4e56b90 abort + 104 (abort.c:110)
3 libsystem_c.dylib 0x00000001a4e56024 __assert_rtn + 292 (assert.c:96)
4 libAXSpeechManager.dylib 0x00000001c8253df8 -[AXSpeechManager _updateAuxiliarySession].cold.2 + 44 (AXSpeechManager.m:360)
5 libAXSpeechManager.dylib 0x00000001c824a070 -[AXSpeechManager _updateAuxiliarySession] + 1672 (AXSpeechManager.m:360)
6 libAXSpeechManager.dylib 0x00000001c824a60c -[AXSpeechManager _handleMediaServicesWereLost:] + 96 (AXSpeechManager.m:517)
7 CoreFoundation 0x000000019ba31534 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__ + 28 (CFNotificationCenter.c:706)
8 CoreFoundation 0x000000019ba314dc ___CFXRegistrationPost_block_invoke + 52 (CFNotificationCenter.c:173)
9 CoreFoundation 0x000000019ba30a48 _CFXRegistrationPost + 440 (CFNotificationCenter.c:198)
10 CoreFoundation 0x000000019ba30408 _CFXNotificationPost + 716 (CFNotificationCenter.c:1071)
11 Foundation 0x000000019cd2867c -[NSNotificationCenter postNotificationName:object:userInfo:] + 64 (NSNotification.m:575)
12 AudioSession 0x00000001a34c4718 -[AVAudioSession privateHandleServerDied] + 196 (AVAudioSession_iOS.mm:2871)
13 AudioSession 0x00000001a34bdaf0 (anonymous namespace)::HandlePropertyListenerCallback(unsigned int, objc_selector*) + 60 (AVAudioSession_iOS.mm:255)
14 libdispatch.dylib 0x000000019b6c1a54 _dispatch_call_block_and_release + 32 (init.c:1466)
15 libdispatch.dylib 0x000000019b6c37ec _dispatch_client_callout + 20 (object.m:559)
16 libdispatch.dylib 0x000000019b6d1c40 _dispatch_main_queue_callback_4CF + 884 (inline_internal.h:2557)
17 CoreFoundation 0x000000019ba501f8 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16 (CFRunLoop.c:1790)
18 CoreFoundation 0x000000019ba4a0d0 __CFRunLoopRun + 2524 (CFRunLoop.c:3118)
19 CoreFoundation 0x000000019ba491c0 CFRunLoopRunSpecific + 600 (CFRunLoop.c:3242)
20 GraphicsServices 0x00000001b3031734 GSEventRunModal + 164 (GSEvent.c:2259)
21 UIKitCore 0x000000019e4b77e4 -[UIApplication _run] + 1072 (UIApplication.m:3269)
22 UIKitCore 0x000000019e4bd054 UIApplicationMain + 168 (UIApplication.m:4740)
23 TaxiDriverOwner 0x00000001002228a0 main + 68 (AppDelegate.swift:17)
24 libdyld.dylib 0x000000019b705cf8 start + 4
Thread 1 name:
Thread 1:
0 libsystem_kernel.dylib 0x00000001c9bc44fc mach_msg_trap + 8
1 libsystem_kernel.dylib 0x00000001c9bc3884 mach_msg + 76 (mach_msg.c:103)
2 CoreFoundation 0x000000019ba4fd10 __CFRunLoopServiceMachPort + 372 (CFRunLoop.c:2641)
3 CoreFoundation 0x000000019ba49bb0 __CFRunLoopRun + 1212 (CFRunLoop.c:2974)
4 CoreFoundation 0x000000019ba491c0 CFRunLoopRunSpecific + 600 (CFRunLoop.c:3242)
5 Foundation 0x000000019cd29fac -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 232 (NSRunLoop.m:377)
6 Foundation 0x000000019cd29e78 -[NSRunLoop(NSRunLoop) runUntilDate:] + 92 (NSRunLoop.m:424)
7 UIKitCore 0x000000019e56c38c -[UIEventFetcher threadMain] + 516 (UIEventFetcher.m:929)
8 Foundation 0x000000019ce9b2fc __NSThread__start__ + 864 (NSThread.m:724)
9 libsystem_pthread.dylib 0x00000001e760cc00 _pthread_start + 320 (pthread.c:881)
10 libsystem_pthread.dylib 0x00000001e7615758 thread_start + 8
Post not yet marked as solved
I've recently received a task that requires evaluating the possibility, as the title says, of recording via the mic on a BLE headset while playing sound via the built-in speaker at the same time on iOS.
I've implemented forcing the audio device to the built-in speaker whenever the BLE headset is connected/disconnected. That works if both the mic and the speaker should be the built-in ones. But after days of searching and experimenting, I found that it is not possible to set the mic and speaker separately. Even specifying the input device on AVAudioEngine is supported only on macOS, not iOS.
Can anyone give me a definitive answer about the possibility of recording via the mic on a BLE headset and playing sound via the built-in speaker at the same time?
Post not yet marked as solved
How can you add a live audio player to an Xcode project with an interactive UI to control the audio, where playback keeps going even if the user exits the app or turns the screen off? Is there a framework or API that will work for this? Thanks! Really need help with this…. 🤩 I have looked everywhere and haven't found something that works….
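For reference, the usual pieces are AVPlayer, a `.playback` audio session, and the audio background mode; a minimal sketch (the stream URL is a placeholder, not a real stream):

```swift
import AVFoundation

// Sketch: a live stream that keeps playing in the background.
// Requires the "audio" entry under UIBackgroundModes in Info.plist.
// The URL is a placeholder.
let player = AVPlayer(url: URL(string: "https://example.com/live.m3u8")!)

func startPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    // .playback keeps audio running when the app is backgrounded
    // or the screen is locked.
    try session.setCategory(.playback, mode: .default)
    try session.setActive(true)
    player.play()
}
```

For the interactive lock-screen controls, MPRemoteCommandCenter and MPNowPlayingInfoCenter (MediaPlayer framework) are the usual companions to this setup.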
Post not yet marked as solved
I'm unclear on how to access the inward facing microphone in the AirPods Pro (not the outward facing one). If this is possible, can you point me in the right direction?
More context: there is a ticking noise coming from a spasm inside someone's ears that I'd like to try canceling for them.
The standard AirPods Pro noise cancellation modes don't have any effect on the sound.
I know latency may be too high to do this on the phone with a custom app, but thought if I could reach the point of that being the problem, then I could experiment with predictive algorithms.
Thank you in advance for ideas or recommendations.
Post not yet marked as solved
Hi,
I'm trying to get this example working on macOS now that SFSpeechRecognizer is available for the platform. A few questions ...
Do I need to make an authorization request of the user if I intend to use "on device recognition"?
When I ask for authorization to use speech recognition the dialog that pops up contains text that's not in my speech recognition usage description indicating that recordings will be sent to Apple's servers. But that is not accurate if I am using on device recognition (as far as I can tell). Is there a way to suppress that language if I am not using online speech recognition?
Is there an updated version of the article I linked to that describes how to accomplish the same thing on macOS instead of iOS? My compiler complains that AVAudioSession is not available on macOS, and I'm not sure how to set things up for passing audio from the microphone to the speech recognizer.
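For reference, my current understanding is that on macOS the AVAudioSession step simply goes away and you tap the engine's input node directly; a sketch (assuming authorization has already been granted):

```swift
import AVFoundation
import Speech

// Sketch: microphone -> SFSpeechRecognizer on macOS; no AVAudioSession needed.
let audioEngine = AVAudioEngine()
let recognizer = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()

func startRecognition() throws {
    // Keep audio off Apple's servers.
    request.requiresOnDeviceRecognition = true

    // Feed microphone buffers into the recognition request.
    let inputNode = audioEngine.inputNode
    let format = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()

    recognizer?.recognitionTask(with: request) { result, error in
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }
}
```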
Thanks :-D
Brian Duffy
Post not yet marked as solved
Could anyone help with this crash? I don't understand the exact reason for it.
Here's the log:
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000000000000
Exception Codes: 0x0000000000000001, 0x0000000000000000
VM Region Info: 0 is not in any region. Bytes before following region: 4303306752
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
UNUSED SPACE AT START
--->
__TEXT 1007f4000-100808000 [ 80K] r-x/r-x SM=COW ...
Exception Note: EXC_CORPSE_NOTIFY
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [9393]
Triggered by Thread: 0
Kernel Triage:
VM - pmap_enter failed with resource shortage
VM - pmap_enter failed with resource shortage
Thread 0 name:
Thread 0 Crashed:
0 libobjc.A.dylib 0x00000001997652d8 lookUpImpOrForward + 76 (objc-runtime-new.h:2123)
1 libobjc.A.dylib 0x00000001997609e4 _objc_msgSend_uncached + 68
2 AVFAudio 0x00000001f3390dd8 tryToSetPlayerSessionListener(AVAudioPlayer*) + 32 (AVAudioPlayer.mm:86)
3 AVFAudio 0x00000001f33e8164 AVAudioPlayerCpp::allocAudioQueue() + 660 (AVAudioPlayerCpp.mm:1535)
4 AVFAudio 0x00000001f33e7b60 AVAudioPlayerCpp::prepareToPlayQueue() + 32 (AVAudioPlayerCpp.mm:813)
5 AVFAudio 0x00000001f33e7a1c AVAudioPlayerCpp::playQueue(AudioTimeStamp const*) + 116 (AVAudioPlayerCpp.mm:922)
6 AVFAudio 0x00000001f33e75f4 AVAudioPlayerCpp::DoAction(unsigned int, unsigned long, void const*) + 168 (AVAudioPlayerCpp.mm:644)
7 AVFAudio 0x00000001f3391864 -[AVAudioPlayer play] + 40 (AVAudioPlayer.mm:544)
8 0x0000000100903ef0 thunk for @escaping @callee_guaranteed () -> () + 20 (<compiler-generated>:0)
9 libdispatch.dylib 0x0000000180ba9924 _dispatch_call_block_and_release + 32 (init.c:1517)
10 libdispatch.dylib 0x0000000180bab670 _dispatch_client_callout + 20 (object.m:560)
11 libdispatch.dylib 0x0000000180bb9b70 _dispatch_main_queue_callback_4CF + 944 (inline_internal.h:2601)
12 CoreFoundation 0x0000000180ef1d84 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16 (CFRunLoop.c:1795)
13 CoreFoundation 0x0000000180eabf5c __CFRunLoopRun + 2540 (CFRunLoop.c:3144)
14 CoreFoundation 0x0000000180ebf468 CFRunLoopRunSpecific + 600 (CFRunLoop.c:3268)
15 GraphicsServices 0x000000019ca4b38c GSEventRunModal + 164 (GSEvent.c:2200)
16 UIKitCore 0x0000000183861088 -[UIApplication _run] + 1100 (UIApplication.m:3493)
17 UIKitCore 0x00000001835df958 UIApplicationMain + 2092 (UIApplication.m:5046)
18 libswiftUIKit.dylib 0x0000000198791fa0 UIApplicationMain(_:_:_:_:) + 104 (UIKit.swift:530)
19 0x000000010080b850 main + 80 (<compiler-generated>:11)
20 0x000000010080b850 $main + 92 (ListeningViewController.swift:0)
21 0x000000010080b850 main + 108
22 dyld 0x0000000100ea9aa4 start + 520 (dyldMain.cpp:879)
Post not yet marked as solved
Hi!
I am making an app that records the screen and microphone on macOS. I specifically need it to run in a subprocess, and for this I use fork(). The problem is that it logs the following error in a loop:
Error:
CMIOHardware.cpp:379:CMIOObjectGetPropertyData Error: 2003332927, failed
2022-03-01 20:19:41.708913+0100 WebCamProj[8051:91700] [] CMIO_Unit_Convertor_VideoToolboxCompressor.cpp:461:rebuildVideoCompressor ### Err -12903
2022-03-01 20:19:41.709213+0100 WebCamProj[8051:91700] [] CMIO_Unit_Convertor_VideoToolboxCompressor.cpp:1958:doCompressBuffer [0x15382da00] EXCEPTION ON ERROR -12903
The example on which I base is the following, it comes from Apple's official website for developers:
https://developer.apple.com/library/archive/qa/qa1740/_index.html
The fragment that executes the operation looks something like this:
pid_t pidScreenRecorder;
if ((pidScreenRecorder = fork()) == 0) {
    // Child process: start the screen recording.
    NSURL *pathUrl = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@%s", NSHomeDirectory(), "/myScreen.mov"]];
    ScreenRecord *screenRec = [[ScreenRecord alloc] init];
    [screenRec startScreenRecording:pathUrl];
}
I do not know why this error occurs only with this API. I have used others, for example AVAudioRecorder, individually and have not had any problem.
Thank you.
Post not yet marked as solved
This just seems like a useful thing to have when rendering audio. For example, let's say you have an effect that pitches audio up/down. That typically requires that you know the sample rate of the incoming audio. The way I do this right now is just to save the sample rate after the AUAudioUnit's render resources have been allocated, but being provided this info on a per-render-callback basis seems more useful.
Another use case is for AUAudioUnit's on the input chain. Since the format for connections must match the hardware format, you can no longer explicitly set the format that you expect the audio to come in at. You can check the sample rate on the AVAudioEngine's input node or the sample rate on the AVAudioSession singleton, but when you are working with the audio from within the render callback, you don't want to be accessing those methods due to the possibility they are blocking calls. This is especially true when using the AVAudioSinkNode where you don't have the ability to set the sample rate before the underlying node's render resources are allocated.
Am I missing something here, or does this actually seem useful?
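For concreteness, my current workaround caches the rate when render resources are allocated (a sketch of an AUAudioUnit subclass; the class name is made up):

```swift
import AudioToolbox
import AVFoundation

// Sketch: cache the sample rate at allocation time so the render block
// never has to touch potentially blocking APIs like AVAudioSession.
class PitchShiftAudioUnit: AUAudioUnit {
    private var cachedSampleRate: Double = 44100

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        // Safe to query formats here; we're not on the render thread yet.
        cachedSampleRate = outputBusses[0].format.sampleRate
    }
}
```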
Post not yet marked as solved
When using VoiceProcessingIO audio unit with voicechat audio session mode to have echo cancellation, I can't play audio in stereo, it only allows mono audio.
How can I enable stereo playback with echo cancellation?
Is this some kind of limitation? It isn't mentioned anywhere in the documentation.
Post not yet marked as solved
I’m developing a voice communication app for the iPad with both playback and record and using AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to have echo cancellation.
When playing audio before initializing the recording audio unit, the volume is high. But if I play the audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low.
It seems like a bug in iOS, any solution or workaround for this? Searching the net I only found this post without any solution: https://developer.apple.com/forums/thread/671836
Post not yet marked as solved
Is there a way for me to programmatically query whether my AVAudioSession is able to play even when the app is minimized or the screen is locked? I need this to debug background audio permissions, as my AVAudioSession keeps getting paused when the app goes into the background, and it resumes once the app comes to the foreground. Moreover, when I try to call setActive on the AVAudioSession in didEnterBackground, it fails with error code 561015905, which suggests it is permission-related.
My Info.plist already has
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
added to it.
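Incidentally, 561015905 decodes as the FourCC '!pla' (AVAudioSessionErrorCodeCannotStartPlaying). A small helper for decoding such OSStatus-style codes:

```swift
// Decode an OSStatus-style error code into its FourCC string,
// e.g. 561015905 -> "!pla" (AVAudioSessionErrorCodeCannotStartPlaying).
func fourCC(_ code: Int) -> String {
    let chars = [24, 16, 8, 0].map {
        Character(UnicodeScalar(UInt8((code >> $0) & 0xFF)))
    }
    return String(chars)
}

print(fourCC(561015905))  // prints "!pla"
```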
Post not yet marked as solved
Hi,
Not sure if this is by design or not, but whenever I connect a Bluetooth device to the app the AU callback stops producing frames for several seconds until the device is connected.
I'm building a recording app that uses AVAssetWriter with fragmented segments (HLS buffers).
When the callback freezes it suppose to create a gap in the audio but for some reason, the segment that is created does not contain an audio gap, and the audio just "jumps" in timestamps.
Post not yet marked as solved
The physical volume button press notification "AVSystemController_SystemVolumeDidChangeNotification" stopped working with the release of iOS 15.
AVAudioSession.outputVolume (https://developer.apple.com/documentation/avfaudio/avaudiosession/1616533-outputvolume?language=objc) seems like the preferred approach, but the limitation is that it doesn't provide a callback when the volume is already at max (press up) or min (press down). Is there any alternative that delivers the callback even when the volume is at max or min?
There is a new notification, "SystemVolumeDidChange", that works similarly to "AVSystemController_SystemVolumeDidChangeNotification". Can you please confirm whether it is a private API, or whether it may be used in apps published on the App Store?
Our use case requires the physical volume button press notification even if the volume is already at max or min. Is there any other alternative approach available?
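For completeness, this is the outputVolume observation in question (a sketch using KVO; as noted, it fires only when the value actually changes, so presses at max or min are invisible to it):

```swift
import AVFoundation

// Sketch: observe hardware volume via KVO on outputVolume.
// The session must be active for the property to update.
let session = AVAudioSession.sharedInstance()
try? session.setActive(true)

// Keep a strong reference to the observation or it is deallocated.
let volumeObservation = session.observe(\.outputVolume, options: [.new]) { _, change in
    if let volume = change.newValue {
        print("Volume changed to \(volume)")
        // No callback arrives here for a press at 1.0 (up) or 0.0 (down),
        // because the value does not change.
    }
}
```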
Post not yet marked as solved
We have noticed that our app sometimes spends too much time in the first call to AVAudioSession.sharedInstance and [AVAudioSession setCategory:error:], which we make during app initialization (in the app delegate's init). I am not sure whether the app is stuck in these calls or they simply take too long to complete.
This probably causes the app to be killed by the main thread watchdog.
Would it be safe to move these calls to a separate thread?
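A sketch of the deferral being asked about (whether the rest of the app can tolerate the session becoming ready asynchronously is exactly the open question):

```swift
import AVFoundation

// Sketch: move the first AVAudioSession touch off the main thread so a
// slow media-server round trip can't trip the main thread watchdog.
let audioSetupQueue = DispatchQueue(label: "audio.session.setup", qos: .userInitiated)

func configureAudioSessionAsync() {
    audioSetupQueue.async {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback)
            try session.setActive(true)
        } catch {
            // Surface the failure somewhere visible; playback attempts
            // made before this completes may need to be deferred too.
            print("Audio session setup failed: \(error)")
        }
    }
}
```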
Post not yet marked as solved
I recently released my first ShazamKit app, but there is one thing that still bothers me.
When I started I followed the steps as documented by Apple right here : https://developer.apple.com/documentation/shazamkit/shsession/matching_audio_using_the_built-in_microphone
However, when I ran this on an iPad I received a lot of high-pitched feedback noise with this configuration. I got it to work by commenting out the output node and format and only using the input.
But now I want to be able to recognise the song that's playing from the device that has my app open, and I was wondering if I need the output nodes for that, or if I can do something else to prevent the mic feedback from happening.
In short:
What can I do to prevent feedback from happening?
Can I use the output of a device to recognise songs, or do I just need to make sure that the microphone can run at the same time as playing music?
Other than that I really love the ShazamKit API and can highly recommend to have a go with it!
This is the code as documented in the above link (I just added the comments of what broke it for me)
func configureAudioEngine() {
    // Get the native audio format of the engine's input bus.
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: 0)

    // THIS CREATES FEEDBACK ON IPAD PRO
    let outputFormat = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 1)

    // Create a mixer node to convert the input.
    audioEngine.attach(mixerNode)

    // Attach the mixer to the microphone input and the output of the audio engine.
    audioEngine.connect(audioEngine.inputNode, to: mixerNode, format: inputFormat)
    // THIS CREATES FEEDBACK ON IPAD PRO
    audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: outputFormat)

    // Install a tap on the mixer node to capture the microphone audio.
    mixerNode.installTap(onBus: 0,
                         bufferSize: 8192,
                         format: outputFormat) { buffer, audioTime in
        // Add captured audio to the buffer used for making a match.
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}
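For comparison, the input-only variant that stopped the feedback for me looks like this (the same snippet with the output connection removed and the tap using the input's native format):

```swift
func configureAudioEngineInputOnly() {
    // Use the input's native format everywhere; with no connection to
    // outputNode, the microphone is never routed back to the speakers.
    let inputFormat = audioEngine.inputNode.outputFormat(forBus: 0)

    audioEngine.attach(mixerNode)
    audioEngine.connect(audioEngine.inputNode, to: mixerNode, format: inputFormat)

    // Tap the mixer to capture microphone audio for matching.
    mixerNode.installTap(onBus: 0,
                         bufferSize: 8192,
                         format: inputFormat) { buffer, audioTime in
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}
```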
Post not yet marked as solved
I know that if you want background audio from AVPlayer, you need to detach your AVPlayer from either your AVPlayerViewController or your AVPlayerLayer, in addition to having your AVAudioSession configured correctly.
I have that all squared away, and background audio is fine until we introduce AVPictureInPictureController or use the PiP behavior baked into AVPlayerViewController.
If you want PiP to behave as expected when you put your app into the background by switching to another app or going to the home screen, you can't perform the detachment operation, otherwise the PiP display fails.
On an iPad, if PiP is active and you lock your device, you continue to get background audio playback. However, on an iPhone, if PiP is active and you lock the device, the audio pauses.
If PiP is inactive and you lock the device, the audio pauses and you have to manually tap play in the lock-screen controls. This is the same on both iPad and iPhone.
My questions are:
Is there a way to keep background-audio playback going when PiP is inactive and the device is locked? (iPhone and iPad)
Is there a way to keep background-audio playback going when PiP is active and the device is locked? (iPhone)