Core Audio

Interact with the audio hardware of a device using Core Audio.

Posts under Core Audio tag

52 Posts

iOS 16 RemoteIO: Input data proc returned inconsistent 2 packets
I am getting an error in iOS 16 that doesn't appear in previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

    Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

This is how the audio format and the callback are set:

    // Set the audio format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 4000;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Set the output callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

Note that the mSampleRate I set is 4000 Hz. In iOS 15 I get a buffer duration (IOBufferDuration) of 0.02322 seconds and 93 frames in each callback. This is expected, because:

    number of frames / buffer duration = sampling rate
    93 / 0.02322 s ≈ 4005 Hz ≈ 4000 Hz

However, in iOS 16 I get the aforementioned error in the callback:

    Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

Since the number of frames equals the number of packets, I am getting only 1 or 2 frames per callback while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal:

    2 frames / 0.02322 s ≈ 86 Hz

which makes no sense for a 4000 Hz stream. The error appears for other sampling rates as well (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 Hz as my sampling rate. I have also tried to set the sampling rate using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed: the sampling rate was still 44100 after calling that function. Any help on this issue would be appreciated.
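A minimal sketch (assuming an AVAudioSession-based setup; names are illustrative, not from the post) of requesting the preferred rate and buffer duration and then checking what the hardware actually granted, since the preferred values are only hints:

    #import <AVFoundation/AVFoundation.h>

    static void ConfigureSession(void) {
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *error = nil;
        [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
        // Both calls below are requests, not guarantees.
        [session setPreferredSampleRate:4000 error:&error];
        [session setPreferredIOBufferDuration:0.02322 error:&error];
        [session setActive:YES error:&error];
        // The hardware may still run at 44100/48000 Hz; if so, resample in
        // software instead of relying on the preferred rate being honored.
        NSLog(@"granted sample rate: %.0f Hz", session.sampleRate);
        NSLog(@"granted IO buffer duration: %.5f s", session.IOBufferDuration);
    }

If the granted rate stays at 44100, converting in software (for example with an AudioConverter between the hardware rate and 4000 Hz) is the usual fallback.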
10 replies · 3 boosts · 4.0k views · Oct ’23
Monitoring Sound Input on Output Devices with the Lowest Possible Latency on Mac and iPhone
I am trying to monitor sound input on an output device with the lowest possible latency on Mac and iPhone. I would like to know if it is possible to send the input buffer to the output device without having to go through the callbacks of both processes, that is, as close as possible to redirecting them in hardware. I am using the Core Audio API, specifically Audio Queue Services, to achieve this. I also use the HAL for configuration, but I would not like to depend too much on the HAL, since I understand it is not accessible from iOS.
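A minimal software-monitoring sketch (not from the post; it assumes the public API is acceptable): routing the input node straight to the output node with AVAudioEngine. Latency is then bounded by the IO buffer duration; true hardware passthrough is not exposed by the public API on iOS.

    #import <AVFoundation/AVFoundation.h>

    static AVAudioEngine *StartMonitoring(NSError **error) {
        AVAudioEngine *engine = [[AVAudioEngine alloc] init];
        AVAudioInputNode *input = engine.inputNode;
        // Connect input directly to output; no intermediate callback needed.
        [engine connect:input
                     to:engine.outputNode
                 format:[input outputFormatForBus:0]];
        if (![engine startAndReturnError:error]) return nil;
        return engine;
    }

On iOS, pairing this with a short preferred IO buffer duration on AVAudioSession is the usual way to push the round trip latency down.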
0 replies · 0 boosts · 450 views · Oct ’23
NSSound causing crashes on Sonoma
We've been doing the following in our app for years without issues:

    [[NSSound soundNamed:@"Basso"] play]

Suddenly we're seeing hundreds of crashes from macOS 14.0 users and we're not sure what's causing this. There are no memory leaks within the app, and all the stack traces are around NSSound:

    0  AudioToolbox       0x1f558   MEDeviceStreamClient::RemoveRunningClient(AQIONodeClient&, bool, bool) + 3096
    1  AudioToolbox       0x1e8fc   AQMEDevice::RemoveRunningClient(AQIONodeClient&, bool) + 108
    2  AudioToolbox       0x1e854   AQMixEngine_Base::RemoveRunningClient(AQIONodeClient&, bool) + 76
    3  AudioToolbox       0xcdd78   AudioQueueObject::StopRunning(AQIONode*, bool) + 244
    4  AudioToolbox       0xcbdd0   AudioQueueObject::Stop(bool, bool, int*) + 736
    5  AudioToolbox       0xf1840   AudioQueueXPC_Server::Stop(unsigned int, bool) + 172
    6  AudioToolbox       0x1418b4  ___ZN20AudioQueueXPC_Bridge4StopEjb_block_invoke + 72
    7  libdispatch.dylib  0x3910    _dispatch_client_callout + 20
    8  libdispatch.dylib  0x130f8   _dispatch_sync_invoke_and_complete_recurse + 64
    9  AudioToolbox       0x141844  AudioQueueXPC_Bridge::Stop(unsigned int, bool) + 184
    10 AudioToolbox       0xa09b0   AQ::API::V2Impl::AudioQueueStop(OpaqueAudioQueue*, unsigned char) + 492
    11 AVFAudio           0xbe12c   AVAudioPlayerCpp::disposeQueue(bool) + 188
    12 AVFAudio           0x341dc   -[AudioPlayerImpl dealloc] + 72
    13 AVFAudio           0x358a0   -[AVAudioPlayer dealloc] + 36
    14 AppKit             0x1b13b4  -[NSAVAudioPlayerSoundEngine dealloc] + 44
    15 AppKit             0x1b132c  -[NSSound dealloc] + 164
    16 libobjc.A.dylib    0xf418    AutoreleasePoolPage::releaseUntil(objc_object**) + 196
    17 libobjc.A.dylib    0xbaf0    objc_autoreleasePoolPop + 260
    18 CoreFoundation     0x3c57c   _CFAutoreleasePoolPop + 32
    19 Foundation         0x30e88   -[NSAutoreleasePool drain] + 140
    20 Foundation         0x31f94   _NSAppleEventManagerGenericHandler + 92
    21 AE                 0xbd8c    _AppleEventsCheckInAppWithBlock + 13808
    22 AE                 0xb6b4    _AppleEventsCheckInAppWithBlock + 12056
    23 AE                 0x4cc4    aeProcessAppleEvent + 488
    24 HIToolbox          0x402d4   AEProcessAppleEvent + 68
    25 AppKit             0x3a29c   _DPSNextEvent + 1440
    26 AppKit             0x80db94  -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 716
    27 AppKit             0x2d43c   -[NSApplication run] + 476
    28 AppKit             0x4708    NSApplicationMain + 880
    29 ???                0x180739058 (Missing)
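A hedged workaround sketch, not a confirmed fix: the crashed frames are inside -[NSSound dealloc] during an autorelease-pool drain, so keeping a strong reference until sound:didFinishPlaying: fires moves the teardown out of the pool. Class and method names here are illustrative.

    #import <AppKit/AppKit.h>

    @interface SoundPlayer : NSObject <NSSoundDelegate>
    @property (nonatomic, strong) NSMutableArray<NSSound *> *activeSounds;
    @end

    @implementation SoundPlayer
    - (void)playSystemSound:(NSString *)name {
        NSSound *sound = [NSSound soundNamed:name];
        if (!sound) return;
        sound.delegate = self;
        if (!self.activeSounds) self.activeSounds = [NSMutableArray array];
        [self.activeSounds addObject:sound];  // strong reference until playback ends
        [sound play];
    }

    - (void)sound:(NSSound *)sound didFinishPlaying:(BOOL)flag {
        [self.activeSounds removeObjectIdenticalTo:sound];
    }
    @end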
1 reply · 0 boosts · 753 views · Oct ’23
macOS echo cancellation (AUVoiceProcessing) trouble with device gain/volume
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, which affects the volume of both the microphone and the speaker. I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
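For reference, a sketch of how the AGC property is normally set on a voice-processing IO unit (the poster reports this did not restore the device volume; it is shown only to make the property usage concrete):

    #import <AudioToolbox/AudioToolbox.h>

    static OSStatus DisableAGC(AudioUnit voiceIOUnit) {
        UInt32 enableAGC = 0;  // 0 disables automatic gain control
        return AudioUnitSetProperty(voiceIOUnit,
                                    kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                    kAudioUnitScope_Global,
                                    0,
                                    &enableAGC,
                                    sizeof(enableAGC));
    }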
3 replies · 0 boosts · 1.5k views · Oct ’23
AudioConverterFillComplexBuffer crash (NativeInt16ToFloat32Scaled_ARM)
Here is the process in my application: Mic -> AVCaptureOutput -> Audio (PCM) -> Audio Encoder -> AAC packet (encoded); Camera -> AVCaptureOutput -> Image -> Video Encoder -> H.264 video packet (encoded). So my app is a movie encoder. The crash happens when the camera is switched (front camera <-> back camera). The crash line is AudioConverterFillComplexBuffer, maybe NativeInt16ToFloat32Scaled_ARM. What does that mean? Why?

    0 AudioCodecs   0x0000000183fbe2bc NativeInt16ToFloat32Scaled_ARM + 132
    1 AudioCodecs   0x0000000183f63708 AppendInputData(void*, void const*, unsigned int*, unsigned int*, AudioStreamPacketDescription const*) + 56
    2 AudioToolbox  0x000000018411aaac CodecConverter::AppendExcessInput(unsigned int&) + 196
    3 AudioToolbox  0x000000018411a59c CodecConverter::EncoderFillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 660
    4 AudioToolbox  0x0000000184124ec0 AudioConverterChain::RenderOutput(CABufferList*, unsigned int, unsigned int&, AudioStreamPacketDescription*) + 116
    5 AudioToolbox  0x0000000184100d98 BufferedAudioConverter::FillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 444
    6 AudioToolbox  0x00000001840d8c9c AudioConverterFillComplexBuffer + 340
    7 MovieEncoder  0x0000000100341fd4 __49-[AACEncoder encodeSampleBuffer:completionBlock:]_block_invoke (AACEncoder.m:247)
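A hedged sketch of one plausible cause (an assumption, not a confirmed diagnosis): switching cameras can make the capture session renegotiate the audio format, and feeding an AudioConverter created for the old ASBD can then crash inside the int16-to-float conversion. Recreating the converter whenever the input ASBD changes guards against this; all names below are illustrative.

    #import <AudioToolbox/AudioToolbox.h>
    #import <CoreMedia/CoreMedia.h>
    #include <string.h>

    static AudioConverterRef sConverter = NULL;
    static AudioStreamBasicDescription sInputASBD;

    static void EnsureConverter(CMSampleBufferRef sampleBuffer,
                                const AudioStreamBasicDescription *outputASBD) {
        CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
        const AudioStreamBasicDescription *asbd =
            fmt ? CMAudioFormatDescriptionGetStreamBasicDescription(fmt) : NULL;
        if (!asbd) return;
        if (sConverter == NULL ||
            memcmp(asbd, &sInputASBD, sizeof(sInputASBD)) != 0) {
            // Input format changed (e.g. after a camera switch): rebuild.
            if (sConverter) AudioConverterDispose(sConverter);
            sInputASBD = *asbd;
            AudioConverterNew(&sInputASBD, outputASBD, &sConverter);
        }
    }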
1 reply · 1 boost · 992 views · Sep ’23
AudioServerPlugInHostRef WriteToStorage and CopyFromStorage after reboot
Dear Sirs, I'm trying to find a way to save and restore some settings of an Audio Server plug-in so that they are available again after a reboot. I came across the functions WriteToStorage and CopyFromStorage, which seem to work correctly, but after a reboot my settings are gone. Am I doing something wrong? Should this storage normally survive a reboot, or is this not the intended way to get persistent settings? And what would be the recommended way if I want to use these settings right from the start, before any user-mode app is started? Thanks and best regards, Johannes
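For reference, a sketch of the two host-storage calls the post refers to (gPlugInHost is assumed to be the AudioServerPlugInHostRef saved from the driver's Initialize call; the key name is illustrative):

    #include <CoreAudio/AudioServerPlugIn.h>

    static AudioServerPlugInHostRef gPlugInHost;  // saved in Initialize()

    static OSStatus SaveSetting(CFPropertyListRef value) {
        // The host may write asynchronously; the value might not hit disk immediately.
        return gPlugInHost->WriteToStorage(gPlugInHost, CFSTR("MySetting"), value);
    }

    static CFPropertyListRef CopySetting(void) {
        CFPropertyListRef value = NULL;
        OSStatus status = gPlugInHost->CopyFromStorage(gPlugInHost,
                                                       CFSTR("MySetting"), &value);
        return (status == kAudioHardwareNoError) ? value : NULL;
    }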
1 reply · 0 boosts · 608 views · Sep ’23
Is AudioOutputUnitStop synchronous?
That is, will my render callback ever be called after AudioOutputUnitStop() returns? In other words, is it safe to free resources used by the render callback, or do I need to add realtime-safe communication between the stopping thread and the callback thread? This question is intended for both the macOS HAL output and iOS Remote IO output units.
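Whatever the answer, a conservative teardown pattern (a sketch, assuming a C render callback; names are illustrative) avoids depending on the stop call being synchronous by gating the callback with an atomic flag:

    #include <AudioToolbox/AudioToolbox.h>
    #include <stdatomic.h>
    #include <string.h>

    static atomic_bool gRendering = true;

    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
        if (!atomic_load(&gRendering)) {
            // Output silence instead of touching resources being freed.
            for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
                memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
            return noErr;
        }
        /* ... normal rendering using shared resources ... */
        return noErr;
    }

    static void StopAndTearDown(AudioUnit unit) {
        atomic_store(&gRendering, false);  // callback stops touching resources
        AudioOutputUnitStop(unit);
        // If Stop turns out not to wait for an in-flight render, add a
        // handshake (e.g. a semaphore signaled from the callback) before
        // freeing anything the callback can still observe.
    }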
5 replies · 0 boosts · 4.8k views · Aug ’23
Core Audio: sine wave gets distorted over time
Hello, I am using Core Audio to output a sine wave at a constant frequency (256 Hz). The problem I have is that the sound starts very nice and pure but gets distorted over time; it feels like there is some sort of cumulative error that gets worse as time goes by. I am using AudioDeviceCreateIOProcID to create a callback, in which I populate the buffer with samples. I only have a single buffer, because my samples are interleaved. The buffer size is always constant (12800 bytes). Samples are floats (from -1 to 1). Here is what I tried in order to identify the reason for the distortion:

- I validated that each subsequent callback starts generating samples at the proper phase, i.e. the one at which the previous callback ended. E.g. if the last sample of the previous callback was 0.8f, then the first sample of the next callback is 0.82f, as expected.
- I wondered if the hardware might be playing the buffer while I am filling it, so I even used a mutex to lock the buffer as I write to it, but it did not change anything. This probably means the buffer passed to the callback by the OS is already safe to write to.
- I inspected the AudioStreamBasicDescription, the buffer size, and how many bytes I write to the buffer; it all matches my expectations.

Any ideas on what might be causing this sound distortion over time?
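A hedged sketch (not the poster's code) of one classic cause of slowly growing distortion in a generated sine: accumulating phase in single precision, where rounding error compounds across callbacks. Keeping the phase in double and wrapping it every sample bounds the error:

    #include <math.h>
    #include <stddef.h>

    static double gPhase = 0.0;  // double, not float

    static void RenderSine(float *out, size_t frames,
                           double frequency, double sampleRate) {
        const double phaseIncrement = 2.0 * M_PI * frequency / sampleRate;
        for (size_t i = 0; i < frames; i++) {
            out[i] = (float)sin(gPhase);
            gPhase += phaseIncrement;
            if (gPhase >= 2.0 * M_PI) gPhase -= 2.0 * M_PI;  // wrap to avoid drift
        }
    }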
3 replies · 0 boosts · 667 views · Aug ’23
Retrieve Audio Workgroup in iOS
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the RemoteIO audio unit (ioUnit). The unit is initialized and working fine (meaning that it is regularly called by the system).

    os_workgroup_t os_workgroup{nullptr};
    uint32_t os_workgroup_index_size = sizeof(os_workgroup);  // in/out: starts as the buffer size
    if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup,
                                      kAudioUnitScope_Global, 0,
                                      &os_workgroup, &os_workgroup_index_size);
        status != noErr) {
        throw runtime_error("AudioUnitGetProperty kAudioOutputUnitProperty_OSWorkgroup"
                            " - Failed with OSStatus: " + to_string(status));
    }

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
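For completeness, a sketch of joining a realtime thread to the retrieved workgroup with os_workgroup_join (the token must be kept for the matching leave call):

    #include <os/workgroup.h>

    static void DoRealtimeWork(os_workgroup_t wg) {
        os_workgroup_join_token_s token;
        if (os_workgroup_join(wg, &token) == 0) {
            /* ... realtime work on this thread ... */
            os_workgroup_leave(wg, &token);
        }
    }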
0 replies · 0 boosts · 785 views · Aug ’23
Developer APIs for AirTag integration in our apps and games?
I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network? The contexts in which I am most excited about using AirTags are:

- Gaming
- Health / fitness-focused apps
- Accessibility features
- Musical and other creative interactions within apps

I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here. Alexander
34 replies · 4 boosts · 27k views · Aug ’23