Core Audio


Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

53 Posts
Post not yet marked as solved
0 Replies
873 Views
We are developing an app that uses external hardware to measure analogue hearing-loop performance. It uses the audio jack on the phone/iPad. With the new hardware on iPads using USB-C, we have noticed that the same input, one through a Lightning adapter and one through a USB-C adapter, produces very different input levels. The USB-C is ~23 dB lower, with the same code and settings. That's almost a 10x difference. Is there any way to control the USB-C adapter? Am I missing something? The code simply uses AVAudioInputNode with a block attached to it via self.inputNode.installTap. We do adjust the gain to 1.0:

let gain: Float = 1.0
try session.setInputGain(gain)

But that still does not help. I wish there were an Apple lab I could go to, to speak to engineers about it.
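A minimal sketch, assuming iOS and an active AVAudioSession, of checking whether the USB-C route actually honors setInputGain; on many USB audio class devices isInputGainSettable is false and the call is silently ignored, leaving the hardware preamp level in control:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playAndRecord)
    try session.setActive(true)
    // Prefer the USB input if present (hypothetical selection logic).
    if let usbInput = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
        try session.setPreferredInput(usbInput)
    }
    if session.isInputGainSettable {
        try session.setInputGain(1.0) // normalized 0.0 ... 1.0
    } else {
        print("Input gain is not settable on this route")
    }
} catch {
    print("Audio session setup failed: \(error)")
}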
Post not yet marked as solved
3 Replies
1.3k Views
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO Procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, which affects the volume of both devices (microphone and speaker). I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
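For reference, a minimal sketch of how disabling the voice-processing AGC is typically written, assuming voiceUnit is an initialized kAudioUnitSubType_VoiceProcessingIO audio unit (some voice-processing properties only take effect when set before AudioUnitInitialize, so the ordering is worth checking):

import AudioToolbox

func setAGCEnabled(_ voiceUnit: AudioUnit, enabled: Bool) -> OSStatus {
    var flag: UInt32 = enabled ? 1 : 0
    // Global scope, element 0; returns noErr on success.
    return AudioUnitSetProperty(voiceUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,
                                &flag,
                                UInt32(MemoryLayout<UInt32>.size))
}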
Post not yet marked as solved
0 Replies
654 Views
I'd like to know whether NullAudio.c is an official SDK sample or not, and the reason the enum and UID values are defined in NullAudio.c rather than in SDK header files. I am trying to use kObjectID_Mute_Output_Master, but it is defined with a different value in each 3rd-party plugin:

kObjectID_Mute_Output_Master = 10 // NullAudio.c
kObjectID_Mute_Output_Master = 9 // https://github.com/ExistentialAudio/BlackHole
kObjectID_Mute_Output_Master = 6 // https://github.com/q-p/SoundPusher

I can build BlackHole and SoundPusher, and these plugins work. This enum should be defined in an SDK header and keep the same value in each SDK version. I'd like to know why 3rd parties define different values. If you know the history of NullAudio.c, please let me know.
Post not yet marked as solved
0 Replies
671 Views
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote IO audio unit (ioUnit). The unit is initialized and is working fine (meaning that it is regularly called by the system).

os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size;
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0, &os_workgroup, &os_workgroup_index_size); status != noErr) {
    throw runtime_error("AudioUnitSetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
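One hedged observation, not a confirmed diagnosis: AudioUnitGetProperty treats its final argument as in/out, so the size must be initialized to the size of the destination before the call, and in the snippet above os_workgroup_index_size is never set. A minimal Swift sketch of the same query with the size pre-set (the object is fetched through a raw pointer slot to stay close to the C ABI; hosts using the AUv3 wrapper can instead read AUAudioUnit's osWorkgroup property directly):

import AudioToolbox

func fetchWorkgroupPointer(from ioUnit: AudioUnit) -> UnsafeMutableRawPointer? {
    var workgroup: UnsafeMutableRawPointer? = nil // receives the os_workgroup_t
    var size = UInt32(MemoryLayout<UnsafeMutableRawPointer?>.size) // in/out: must be pre-set
    let status = AudioUnitGetProperty(ioUnit,
                                      kAudioOutputUnitProperty_OSWorkgroup,
                                      kAudioUnitScope_Global,
                                      0,
                                      &workgroup,
                                      &size)
    return status == noErr ? workgroup : nil
}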
Post marked as solved
2 Replies
581 Views
Hello, I am using Core Audio to output a sine wave at a constant frequency (256 Hz). The problem I have is that the sound starts very nice and pure, but gets distorted over time; it feels like there is some sort of cumulative error which gets worse as time goes by. I am using AudioDeviceCreateIOProcID to create a callback, in which I populate the buffer with samples. I only have a single buffer, because my samples are interleaved. The buffer size is always constant (12800 bytes). Samples are floats (from -1 to 1). Here is what I tried in order to identify the reasons for the distortion:

I validated that each subsequent callback starts generating samples with the proper phase, i.e. the one at which the previous callback ended. E.g. if the last sample from the previous callback was 0.8f, then the first sample in the next callback is going to be 0.82f as expected.
I was wondering if maybe the hardware plays the buffer while I am filling it, so I even used a mutex to lock the buffer as I am writing to it, but it did not do anything at all. This probably means that the buffer that is passed to the callback by the OS is already safe to write to.
I inspected the AudioStreamBasicDescription, the buffer size and how many bytes I write to the buffer - it all matches my expectations.

Any ideas on what might be causing this sound distortion over time?
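A common culprit in this situation is accumulating phase as an ever-growing value (or deriving it from a running sample counter), so floating-point precision degrades as the number grows. A minimal sketch of a drift-free generator that keeps the phase wrapped to [0, 1), offered as a guess rather than a diagnosis (the 48 kHz sample rate is an assumption):

import Foundation

final class SineGenerator {
    private var phase = 0.0
    let frequency = 256.0
    let sampleRate = 48_000.0

    // Fills an interleaved Float32 buffer.
    func fill(_ buffer: UnsafeMutableBufferPointer<Float>, channels: Int) {
        let increment = frequency / sampleRate
        let frames = buffer.count / channels
        for frame in 0..<frames {
            let sample = Float(sin(2.0 * .pi * phase))
            for ch in 0..<channels {
                buffer[frame * channels + ch] = sample
            }
            // Wrap every sample so the accumulator never grows large.
            phase = (phase + increment).truncatingRemainder(dividingBy: 1.0)
        }
    }
}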
Post not yet marked as solved
1 Reply
533 Views
Dear Sirs, I'm trying to find a way to save and restore some settings of an Audio Server Plugin so that they will be available again after a reboot. I came across the functions WriteToStorage and CopyFromStorage, which seem to work correctly, but after a reboot my settings seem to be gone. Am I doing something wrong, and should this storage normally survive a reboot? Or is this not the intended way to have persistent settings? What would be the recommended way if I want to use these settings right from the start, before any user-mode app is started? Thanks and best regards, Johannes
Post not yet marked as solved
1 Reply
627 Views
We've been doing the following in our app for years without issues:

[[NSSound soundNamed:@"Basso"] play]

Suddenly we're seeing hundreds of crashes from macOS 14.0 users and we're not sure what's causing this. There are no memory leaks within the app, and all the stack traces are around NSSound:

0 AudioToolbox 0x1f558 MEDeviceStreamClient::RemoveRunningClient(AQIONodeClient&, bool, bool) + 3096
1 AudioToolbox 0x1e8fc AQMEDevice::RemoveRunningClient(AQIONodeClient&, bool) + 108
2 AudioToolbox 0x1e854 AQMixEngine_Base::RemoveRunningClient(AQIONodeClient&, bool) + 76
3 AudioToolbox 0xcdd78 AudioQueueObject::StopRunning(AQIONode*, bool) + 244
4 AudioToolbox 0xcbdd0 AudioQueueObject::Stop(bool, bool, int*) + 736
5 AudioToolbox 0xf1840 AudioQueueXPC_Server::Stop(unsigned int, bool) + 172
6 AudioToolbox 0x1418b4 ___ZN20AudioQueueXPC_Bridge4StopEjb_block_invoke + 72
7 libdispatch.dylib 0x3910 _dispatch_client_callout + 20
8 libdispatch.dylib 0x130f8 _dispatch_sync_invoke_and_complete_recurse + 64
9 AudioToolbox 0x141844 AudioQueueXPC_Bridge::Stop(unsigned int, bool) + 184
10 AudioToolbox 0xa09b0 AQ::API::V2Impl::AudioQueueStop(OpaqueAudioQueue*, unsigned char) + 492
11 AVFAudio 0xbe12c AVAudioPlayerCpp::disposeQueue(bool) + 188
12 AVFAudio 0x341dc -[AudioPlayerImpl dealloc] + 72
13 AVFAudio 0x358a0 -[AVAudioPlayer dealloc] + 36
14 AppKit 0x1b13b4 -[NSAVAudioPlayerSoundEngine dealloc] + 44
15 AppKit 0x1b132c -[NSSound dealloc] + 164
16 libobjc.A.dylib 0xf418 AutoreleasePoolPage::releaseUntil(objc_object**) + 196
17 libobjc.A.dylib 0xbaf0 objc_autoreleasePoolPop + 260
18 CoreFoundation 0x3c57c _CFAutoreleasePoolPop + 32
19 Foundation 0x30e88 -[NSAutoreleasePool drain] + 140
20 Foundation 0x31f94 _NSAppleEventManagerGenericHandler + 92
21 AE 0xbd8c _AppleEventsCheckInAppWithBlock + 13808
22 AE 0xb6b4 _AppleEventsCheckInAppWithBlock + 12056
23 AE 0x4cc4 aeProcessAppleEvent + 488
24 HIToolbox 0x402d4 AEProcessAppleEvent + 68
25 AppKit 0x3a29c _DPSNextEvent + 1440
26 AppKit 0x80db94 -[NSApplication(NSEventRouting) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 716
27 AppKit 0x2d43c -[NSApplication run] + 476
28 AppKit 0x4708 NSApplicationMain + 880
29 ??? 0x180739058 (Missing)
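The trace shows the crash inside -[NSSound dealloc] during an autorelease pool drain, so one hedged workaround (a guess, not a confirmed fix) is to keep a strong reference to each sound until it reports completion, rather than letting a fire-and-forget instance be deallocated while its audio queue is still tearing down:

import AppKit

final class SoundPlayer: NSObject, NSSoundDelegate {
    private var activeSounds: [NSSound] = []

    func play(named name: String) {
        guard let sound = NSSound(named: name) else { return }
        sound.delegate = self
        activeSounds.append(sound) // strong reference for the duration of playback
        sound.play()
    }

    func sound(_ sound: NSSound, didFinishPlaying flag: Bool) {
        activeSounds.removeAll { $0 === sound }
    }
}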
Post not yet marked as solved
0 Replies
405 Views
I am trying to monitor sound input on an output device with the lowest possible latency on Mac and iPhone. I would like to know if it is possible to send the input buffer to the output device without having to do it through the callbacks of both processes, that is, as close as possible to redirecting them in hardware. I am using the Core Audio API, specifically AudioQueue Services, to achieve this. I also use the HAL for configuration, but I would not like to depend too much on the HAL, since I understand that it is not accessible from iOS.
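For comparison, a minimal sketch of the highest-level way to wire input straight to output and let the system manage the I/O path, assuming AVAudioEngine is acceptable; latency is then governed by the I/O buffer duration rather than by hand-written queue callbacks:

import AVFoundation

let engine = AVAudioEngine()
do {
    // Route the hardware input node directly to the hardware output node.
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.outputNode, format: format)
    try engine.start()
} catch {
    print("Engine failed to start: \(error)")
}

On iOS, AVAudioSession's setPreferredIOBufferDuration(_:) can then be used to push the buffer size, and with it the latency, down.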
Post not yet marked as solved
1 Reply
535 Views
I am trying to migrate an Audio Unit host based on the AUv2 C API to the newer AUv3 API. While the migration itself was relatively straightforward (in terms of getting it to compile), the actual rendering fails at run time with error -10876, aka kAudioUnitErr_NoConnection. The app does not use AUGraph or AVAudioEngine; perhaps that is an issue? Since the AUv3 and AUv2 APIs are bridged in both directions, and the rendering works fine with the v2 API, I would expect there to be some way to make it work via the v3 API, though. Perhaps someone has an idea why (or under which circumstances) the render block throws this error? For context, the app is Mixxx, an open-source DJing application, and here is the full diff of my AUv2 -> v3 migration: https://github.com/fwcd/mixxx/pull/5/files
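As a hedged checklist rather than a diagnosis: with AUAudioUnit, the render block can only be called after render resources have been allocated, and an effect unit additionally needs its input supplied through the pull-input block passed to renderBlock. A minimal sketch of the setup order, assuming audioUnit is an instantiated AUAudioUnit:

import AudioToolbox

func prepare(_ audioUnit: AUAudioUnit) throws -> AURenderBlock {
    audioUnit.maximumFramesToRender = 4096   // must be set before allocation
    try audioUnit.allocateRenderResources()  // without this, rendering fails
    // For effect units, the host must feed input via the pullInputBlock
    // argument each time it invokes the returned render block.
    return audioUnit.renderBlock
}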
Post not yet marked as solved
0 Replies
511 Views
Hello, I used kAudioDevicePropertyDeviceIsRunningSomewhere to check if an internal or external microphone is being used. My code works well for the internal microphone and for microphones connected using a cable. External microphones connected using Bluetooth are not reporting their status: the status is always requested successfully, but it is always reported as inactive. The main relevant parts of my code:

static inline AudioObjectPropertyAddress makeGlobalPropertyAddress(AudioObjectPropertySelector selector)
{
    AudioObjectPropertyAddress address = {
        selector,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster,
    };
    return address;
}

static BOOL getBoolProperty(AudioDeviceID deviceID, AudioObjectPropertySelector selector)
{
    AudioObjectPropertyAddress const address = makeGlobalPropertyAddress(selector);
    UInt32 prop;
    UInt32 propSize = sizeof(prop);
    OSStatus const status = AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
    if (status != noErr) {
        return 0; // this line never gets executed in my tests; the call above always succeeds, but it always gives back "false" status
    }
    return static_cast<BOOL>(prop == 1);
}

...

__block BOOL microphoneActive = NO;
iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
    if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
        microphoneActive = YES;
        *stop = YES;
    }
});

What could cause this, and how could it be fixed? Thank you for your help in advance!
Post marked as solved
6 Replies
693 Views
After updating Xcode to 15, I encountered a crash with UnsafeMutableRawPointer. To recreate the problem I wrote this simple test code:

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        test()
    }

    private func test() {
        var abl = AudioBufferList()
        let capacity = 4096
        let lp1 = UnsafeMutableAudioBufferListPointer(&abl)
        let outputBuffer1 = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
        let outputBuffer2 = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
        // It crashed here
        lp1[0].mData = UnsafeMutableRawPointer(outputBuffer1)
        lp1[0].mNumberChannels = 1
        lp1[0].mDataByteSize = UInt32(capacity)
        lp1[1].mData = UnsafeMutableRawPointer(outputBuffer2)
        lp1[1].mNumberChannels = 1
        lp1[1].mDataByteSize = UInt32(capacity)
        let lp2 = UnsafeMutableAudioBufferListPointer(&abl)
        let data = (
            UnsafeMutablePointer<Int16>.allocate(capacity: 4096),
            packet: 1
        )
        lp2[0].mData = UnsafeMutableRawPointer(data.0)
    }
}

I checked the Xcode 15 Release Notes and found out they did something with pointer default initialization (P1020R1 - Smart pointer creation with default initialization). Is this causing the problem, or am I doing it wrong? It works perfectly fine with Xcode 14.3.1 and below. P.S.: I can't provide the full crash logs because they are company property, but I can provide these:

Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 5

Application Specific Information:
stack buffer overflow

Thread 5 name: Dispatch queue: com.apple.NSXPCConnection.user.endpoint
Thread 5 Crashed:
0 libsystem_kernel.dylib 0x20419ab78 __pthread_kill + 8
1 libsystem_pthread.dylib 0x23de0c3bc pthread_kill + 268
2 libsystem_c.dylib 0x1d780c44c __abort + 128
3 libsystem_c.dylib 0x1d77f7868 __stack_chk_fail + 96

Clearly there is something wrong with the memory address after initializing UnsafeMutableRawPointer from the UnsafeMutablePointer<Int8>.
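For what it's worth, a stack-allocated AudioBufferList only has inline storage for a single AudioBuffer, so writing through lp1[1] overruns the stack regardless of compiler version; Xcode 15's stricter stack checking merely surfaces it. A minimal sketch of the heap-allocating overlay API that sizes the list for the number of buffers, assuming two mono Int8 buffers are wanted:

import AudioToolbox

let capacity = 4096
// Allocates an AudioBufferList with room for two AudioBuffers.
let ablPointer = AudioBufferList.allocate(maximumBuffers: 2)
for i in 0..<2 {
    let outputBuffer = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
    ablPointer[i].mData = UnsafeMutableRawPointer(outputBuffer)
    ablPointer[i].mNumberChannels = 1
    ablPointer[i].mDataByteSize = UInt32(capacity)
}
// When finished: free(ablPointer.unsafeMutablePointer) and deallocate the buffers.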
Post not yet marked as solved
3 Replies
592 Views
I've added a listener block for camera notifications. This works as expected: the listener block is invoked when the camera is activated/deactivated. However, when I call CMIOObjectRemovePropertyListenerBlock to remove the listener block, though the call succeeds, camera notifications are still delivered to the listener block. Since the header file states that this function "Unregisters the given CMIOObjectPropertyListenerBlock from receiving notifications when the given properties change," I'd assume that once it is called, no more notifications would be delivered? Sample code:

#import <Foundation/Foundation.h>
#import <CoreMediaIO/CMIOHardware.h>
#import <AVFoundation/AVCaptureDevice.h>

int main(int argc, const char * argv[]) {
    AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    OSStatus status = -1;
    CMIOObjectID deviceID = 0;
    CMIOObjectPropertyAddress propertyStruct = {0};
    propertyStruct.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere;
    propertyStruct.mScope = kAudioObjectPropertyScopeGlobal;
    propertyStruct.mElement = kAudioObjectPropertyElementMain;

    deviceID = (UInt32)[camera performSelector:NSSelectorFromString(@"connectionID") withObject:nil];

    CMIOObjectPropertyListenerBlock listenerBlock = ^(UInt32 inNumberAddresses, const CMIOObjectPropertyAddress addresses[]) {
        NSLog(@"Callback: CMIOObjectPropertyListenerBlock invoked");
    };

    status = CMIOObjectAddPropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), listenerBlock);
    if (noErr != status) {
        NSLog(@"ERROR: CMIOObjectAddPropertyListenerBlock() failed with %d", status);
        return -1;
    }
    NSLog(@"Monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);

    sleep(10);

    status = CMIOObjectRemovePropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_main_queue(), listenerBlock);
    if (noErr != status) {
        NSLog(@"ERROR: 'AudioObjectRemovePropertyListenerBlock' failed with %d", status);
        return -1;
    }
    NSLog(@"Stopped monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);

    sleep(10);
    return 0;
}

Compiling and running this code outputs:

Monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Stopped monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked

Note the last two log messages showing that the CMIOObjectPropertyListenerBlock is still invoked, even though CMIOObjectRemovePropertyListenerBlock has successfully been invoked. Am I just doing something wrong here? Or is the API broken?
Post not yet marked as solved
0 Replies
546 Views
Hello, I am an audio developer, currently using macOS version 14.1.1. I noticed that after disabling the microphone, the small yellow dot in the Control Center disappears immediately, but the one in the menu bar takes about 20 seconds to disappear. I tested the built-in Voice Memos app and found the same behavior. Our users may be concerned about their privacy being violated, even though the software is not using the microphone at that time. We believe this is a bug, and the microphone icon in the menu bar should disappear immediately after the microphone is no longer in use. Do you have plans to fix this issue in future versions? Additionally, is there any workaround for the current version? As a supplement, we are using CoreAudio API with AudioDeviceStart & AudioDeviceStop, not AudioUnit.
Post not yet marked as solved
2 Replies
1.9k Views
I report here some messages from Apple Community for an untracked bug in macOS Sonoma (from 14.0 to 14.2 beta 4 at the time): https://discussions.apple.com/thread/255214328

Original message 1: I've finally noticed a pattern that occurs rather frequently on macOS Sonoma. I was blaming Bluetooth issues before, but it looks like it's more about audio in general. What happens is that at some point, all audio freezes. The hotkeys for the audio controls show a "Stop" sign, like there are no audio outputs connected, and the taskbar is completely unresponsive: Control Centre shows a spinning circle, and the sidebar is not opening (Spotlight works, though). If you go to System Settings, some menu items will be unresponsive: Sound doesn't open, Bluetooth does not open, and Accessibility and Siri & Spotlight all do not open. Then, a new bug appeared that I've just started to notice recently: the screen flashes as if there's an Accessibility feature enabled that uses a warning flash instead of sound. It appears just randomly, out of nowhere. Immediately after that, sound works normally again. While this is happening, video/audio content in the browser and elsewhere does not work, Tidal shows many random errors, and Firefox just completely hangs when you try to play a video on YouTube. I've tried to stop coreaudiod, and it did restart the daemon, but nothing else happened. The device is a very fresh M1 Max MacBook, and nothing like that was happening on Ventura. I've had audio cracks on another M1 Pro laptop, but this one didn't even have those. P.S. This happened just as I was writing this post, and I had disabled Bluetooth just before. Now the Bluetooth section in Settings is opening, but others are still unresponsive. For reference, I have yabai and BetterSnapTool installed, which modify system behavior, but with system protection enabled. Siri is disabled. I've tried to stop a bunch of random processes when this happened, but none helped so far. This issue has constantly haunted me since I upgraded, and it's extremely annoying.

Original message 2: Yes, I'm thinking it's a combination of Bluetooth and audio issues. I've got all apps that are trying to use audio crashing after I just connect my Bluetooth earbuds. Now I see that coreaudiod is just not running this time: I tried to connect to a Slack huddle, and it just hanged; sound is unresponsive again, and the Settings app is not working as I mentioned before. I checked the Activity Monitor and found that the process that works with audio on macOS (coreaudiod) is not running. I attempted to launch coreaudiod with sudo launchctl load /System/Library/LaunchDaemons/com.apple.audio.coreaudiod.plist, and got "Load failed: 5: Input/output error" as a response. After a while, when I disabled the earbuds, it started again on its own; coreaudiod is running, and the audio controls are working once more.

Original message 3: Just accidentally looked at the Console app while looking for logs for other things, and found out that my coreaudiod is crashing every day, 10 to 50 times, with intervals from 1 second to a couple of hours, around 5 minutes on average. The crash is the following:

Crashed Thread: 18 Dispatch queue: com.apple.audio.device.~:AMS2_StackedOutput:0.event
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000

I also found that avconferenced occasionally fails too, though very rarely. I believe that's the process that connects an iPad as a second screen, and it too fails with SIGSEGV on 0x0, though attempting to read memory at 0 is not exactly a unique bug; maybe just a coincidence. @Flomaster do you use Sidecar by chance?

My message: I too have this problem on my MacBook Pro M2 Pro since upgrading to macOS Sonoma. It mainly occurs with AirPods Pro 2, but I have also had this happen to me using OnePlus Buds. The blockages are the same as you have experienced and, as I often work in video conferences, blocking MS Teams or Google Meet is really becoming a serious problem. Desperate, I tried installing macOS Sonoma 14.2 beta, but none of the updates solved the problem. I even tried a full restore, re-importing the data from Time Machine, but to no avail. Indeed, with beta 4 the problem seems to have worsened, because the AirPods now even struggle to connect.
Post not yet marked as solved
2 Replies
654 Views
I am working on an app that uses Core Audio through the JUCE library for audio. The problem I'm trying to solve is that when the app uses a full-duplex audio interface, such as one from the Focusrite Scarlett series, for output, the app shows a dialog requesting permission to use the microphone. The root cause of the issue is that, by default, Core Audio opens full-duplex devices for both input and output. On previous macOS versions, I was able to work around the problem by disabling the input stream before starting the IOProc, by setting AudioHardwareIOProcStreamUsage to all zero for input. On macOS Sonoma this disables input so that the microphone indicator is not shown, but the permission popup is still shown. What other reasons are there to show the popup? I have noticed that Chrome and Slack have the same problem in that they show the microphone popup when trying to play sounds on the Focusrite, but, for example, Deezer manages without the popup.
Post not yet marked as solved
1 Reply
433 Views
I have created a recording application. If the user switches off the device or kills the application, how can I save the ongoing recording at that point?
Post not yet marked as solved
4 Replies
622 Views
I am trying to get the raw audio data from the system microphone using the AudioToolbox and CoreFoundation frameworks. So far, the logic for writing packets to a file works, but when I try to capture the raw data into a file I am getting white noise. The callback function looks like this:

static void MyAQInputCallback(void *inUserData, AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    MyRecorder *recorder = (MyRecorder *)inUserData;
    if (inNumPackets > 0) {
        CheckError(AudioFileWritePackets(recorder->recordFile, FALSE,
                                         inBuffer->mAudioDataByteSize,
                                         inPacketDesc, recorder->recordPacket,
                                         &inNumPackets, inBuffer->mAudioData),
                   "AudioFileWritePackets failed");
        recorder->recordPacket += inNumPackets;

        int sampleCount = inBuffer->mAudioDataByteSize / sizeof(AUDIO_DATA_TYPE_FORMAT);
        AUDIO_DATA_TYPE_FORMAT* samples = (AUDIO_DATA_TYPE_FORMAT*)inBuffer->mAudioData;
        FILE *fp = fopen(filename, "a");
        for (int i = 0; i < sampleCount; i++) {
            fprintf(fp, "%i;\n", samples[i]);
        }
        fclose(fp);
    }
    if (recorder->running)
        CheckError(AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL),
                   "AudioQueueEnqueueBuffer failed");
}

Some parameters:

NumberRecordBuffers = 3
buffer duration = 0.1
format->mFramesPerPacket = 4096
samplerate = 44100
inNumPackets = 1
recordFormat.mFormatID = kAudioFormatAppleLossless;
recordFormat.mChannelsPerFrame = 1;
recordFormat.mBitsPerChannel = 16;

Is this the correct way to do this? I could not find much information in the documentation. Any help is appreciated. Thank you in advance.
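One hedged observation: with mFormatID = kAudioFormatAppleLossless, the bytes in each queue buffer are an ALAC-compressed bitstream, not Int16 samples, so reinterpreting them as PCM prints noise. A minimal sketch of a linear-PCM format description whose buffer contents really are raw Int16 samples (written in Swift rather than the C of the post above):

import AudioToolbox

var recordFormat = AudioStreamBasicDescription()
recordFormat.mFormatID = kAudioFormatLinearPCM
recordFormat.mSampleRate = 44_100
recordFormat.mChannelsPerFrame = 1
recordFormat.mBitsPerChannel = 16
recordFormat.mBytesPerFrame = 2   // 1 channel * 2 bytes per sample
recordFormat.mBytesPerPacket = 2  // PCM: one frame per packet
recordFormat.mFramesPerPacket = 1
recordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked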