Core Audio


Interact with the audio hardware of a device using Core Audio.

Posts under Core Audio tag

54 Posts

Each post below is listed with its reply count, boost count, view count, and latest activity.

Hosting x86 Audio Units on Silicon Mac
My app encountered problems when trying to open an x86 Audio Unit v2 on an Apple Silicon Mac (although Rosetta is installed). There seems to be an XPC connection issue with the AUHostingService that I don't know how to fix. I observed other host apps opening the same plugins without problems, so there is probably something wrong or incompatible in my code. I noticed that:
- The issue occurs whether or not the app is sandboxed.
- The issue no longer occurs when the app itself runs under Rosetta.
- No error is reported by Core Audio during allocation and initialization of the audio unit. The first errors appear when the unit calls AudioUnitRender from the rendering callback.
- With most x86 plugins, the error is kAudioUnitErr_RenderTimeout on the first call and kAudioComponentErr_InstanceInvalidated on any subsequent call. On the UI side, when the Cocoa view is loaded, it appears briefly, then disappears immediately, leaving its superview empty.
- With another x86 plugin, the Cocoa view loads normally, but Core Audio still emits kAudioUnitErr_NoConnection from AudioUnitRender, whether the view has been loaded or not, and the plugin produces no sound.
I also find these messages in the console (printed in that order):
CLIENT ERROR: RemoteAUv2ViewController does not override - and thus cannot react to catastrophic errors beyond logging them
AUAudioUnit_XPC.mm:641 Crashed AU possible component description: aumu/Helm/Tyte
My app uses the AUv2 API, and I suspect that working with the AUv3 API would spare me these problems. However, considering how my audio system is built (audio units are wrapped in C++ classes and most connections between units are managed on the fly from the rendering callback), it would be a lot of work to convert, and I'm not even sure that everything I do with the AUv2 API would be possible with the AUv3 API. I could possibly find an intermediate solution, but in the immediate future I'm looking for the simplest and fastest possible fix. If I cannot find anything better, I see two fallback options:
- In this part of the documentation: "Beginning with macOS 11, the system loads audio units into a separate process that depends on the architecture or host preference", does "host preference" mean that it would be possible to disable the out-of-process behavior, for example from the app entitlements or Info.plist?
- Otherwise, as a last resort, I could completely disable the use of x86 Audio Units when my app runs under arm64, to at least make things cleaner. But the Audio Component API doesn't give any information about the plugin architecture, so how could I find it? (See the sketch below for one possible check.)
Any tip or idea about this issue will be much appreciated. Thanks in advance!
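A minimal sketch of one way to check a plug-in's architecture, assuming its component bundle can be located on disk (the path below is purely illustrative; the Audio Component API itself does not expose the bundle location):

import Foundation

// Hypothetical bundle path, for illustration only.
let pluginURL = URL(fileURLWithPath: "/Library/Audio/Plug-Ins/Components/SomePlugin.component")

if let archs = Bundle(url: pluginURL)?.executableArchitectures?.map({ $0.intValue }) {
    // Architectures the plug-in binary was built for.
    let hasARM64 = archs.contains(NSBundleExecutableArchitectureARM64)
    let hasX86_64 = archs.contains(NSBundleExecutableArchitectureX86_64)
    print("arm64: \(hasARM64), x86_64: \(hasX86_64)")
}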
Replies: 1 · Boosts: 0 · Views: 35 · Latest activity: 3h
Ventura Hack for FireWire Core Audio Support on Supported MacBook Pro and others...
Hi all, Apple dropping ongoing development for FireWire devices that were supported with the Core Audio driver standard is a catastrophe for a lot of struggling musicians who need to both keep up to date on security updates that come with new OS releases and continue to utilise their hard-earned investments in very expensive and still pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf ignorance of the cries for ongoing support. I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features.
Probably not the first time you gurus have had someone make the logical leap leading to a request for something like this, but I was wondering whether it might somehow be possible to shoe-horn the code used in previous versions of macOS, which allowed the Mac to speak with the audio features of such devices, into the Ventura version of the OS. Would it be possible? Would it involve a lot of work? I don't think I'd be the only person willing to pay for a third-party application or utility that restored this functionality. There have to be hundreds of thousands of people who would be happy to spare some cash to stop their multi-thousand-dollar investment in gear from being so thoughtlessly consigned to the scrap heap.
Any comments or layman-friendly explanations as to why this couldn't happen would be gratefully received! Thanks, em
Replies: 62 · Boosts: 9 · Views: 32k · Latest activity: 23h
AVAudioEngine : Split 1x4 channel bus into 4x1 channel busses?
I'm using a 4-channel USB audio interface with 4 microphones and want to process them through 4 independent effect chains. However, the output from AVAudioInputNode is a single 4-channel bus. How can I split this into 4 mono busses? The following code splits the input into 4 copies and routes them through the effects, but each bus contains all four channels. How can I remap the channels to remove the unwanted channels from each bus? I tried using channelMap on the mixer node, but that had no effect. I'm currently using this code primarily on iOS, but it should be portable between iOS and macOS. It would be possible to do this through a matrix mixer node, but that seems like complete overkill for such a basic operation. I'm already using a matrix mixer to combine the inputs, and it's not well supported in AVAudioEngine.

AVAudioInputNode *inputNode = [engine inputNode];
[inputNode setVoiceProcessingEnabled:NO error:nil];

NSMutableArray *micDestinations = [NSMutableArray arrayWithCapacity:trackCount];
for (i = 0; i < trackCount; i++) {
    fixMicFormat[i] = [AVAudioMixerNode new];
    [engine attachNode:fixMicFormat[i]];
    // And create reverb/compressor and eq the same way...
    [engine connect:reverb[i] to:matrixMixerNode fromBus:0 toBus:i format:nil];
    [engine connect:eq[i] to:reverb[i] fromBus:0 toBus:0 format:nil];
    [engine connect:compressor[i] to:eq[i] fromBus:0 toBus:0 format:nil];
    [engine connect:fixMicFormat[i] to:compressor[i] fromBus:0 toBus:0 format:nil];
    [micDestinations addObject:[[AVAudioConnectionPoint alloc] initWithNode:fixMicFormat[i] bus:0]];
}
AVAudioFormat *inputFormat = [inputNode outputFormatForBus:1];
[engine connect:inputNode toConnectionPoints:micDestinations fromBus:1 format:inputFormat];
Replies: 2 · Boosts: 0 · Views: 207 · Latest activity: 2w
macOS sample for AVAudioEngine recording with playthrough
Hi, I'm still stuck getting a basic record-with-playthrough pipeline to work. Does anyone have a sample of setting up an AVAudioEngine pipeline for recording with playthrough? Playthrough works with AVPlayerNode as input, but not with any microphone input. The docs mention the "enabled state" of the engine's outputNode without explaining the concept, i.e. how to enable an output:
"When the engine renders to and from an audio device, the AVAudioSession category and the availability of hardware determine whether an app performs output. Check the output node's output format (specifically, the hardware format) for a nonzero sample rate and channel count to see if output is in an enabled state."
Well, in my setup the output is NOT enabled, and any attempt to switch (e.g. audioEngine.outputNode.auAudioUnit.setDeviceID(deviceID)) or attach a dedicated device results in exceptions / errors.
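A minimal sketch of the enabled-state check described in the documentation quote above (this only tests whether output is currently enabled; it does not by itself enable it):

import AVFoundation

let engine = AVAudioEngine()
// Per the docs, output is enabled when the output node's hardware-facing format is non-degenerate.
let hardwareFormat = engine.outputNode.outputFormat(forBus: 0)
let outputEnabled = hardwareFormat.sampleRate > 0 && hardwareFormat.channelCount > 0
print("output enabled: \(outputEnabled)")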
Replies: 0 · Boosts: 0 · Views: 214 · Latest activity: 2w
[iOS 26] [PushToTalk] Not receiving microphone PCM sample when Transmission Starts from System UI.
Steps to reproduce:
1. Log in to the application; the app has joined the PTC channel.
2. Push the application to the background and lock the device.
3. From the System UI, press the talk button, which starts a transmit.
The audio session has been activated and the audio unit has been initialized properly. On the terminator side, no media is being played out. The issue is observed consistently on specific models whose audio codec is configured with the Stereo type. More details are added in FB20281626.
Replies: 0 · Boosts: 0 · Views: 98 · Latest activity: 3w
Convert CoreAudio AudioObjectID to IOUSB LocationID
Is there a recommended way on macOS 26 Tahoe to take a Core Audio AudioObjectID and use it to look up the underlying USB LocationID? I previously used the AudioObjectID to query the corresponding DeviceUID with kAudioDevicePropertyDeviceUID. Then I queried for the IOService matching kIOAudioEngineClassName with the property kIOAudioEngineGlobalUniqueIDKey matching the DeviceUID, and I loaded kUSBDevicePropertyLocationID from the result. This fails on macOS 26, because the IO Registry entry for the device belongs to usbaudiod rather than AppleUSBAudioEngine, and usbaudiod does not include a kIOAudioEngineGlobalUniqueIDKey property (or any other property to map it to a Core Audio DeviceUID). My use case here is a piece of audio recording software that allows configuring a set of supported audio devices via USB HID prior to recording. I present the user with a list of Core Audio devices to use, but without a way to look up the underlying USB LocationID, I cannot guarantee that the configured device matches the selected device (e.g. if the user plugged in two identical microphones).
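For reference, a minimal sketch of the first half of the approach described above, reading kAudioDevicePropertyDeviceUID for an AudioObjectID (it is the subsequent IORegistry matching step that no longer works on macOS 26):

import CoreAudio

func deviceUID(for deviceID: AudioObjectID) -> String? {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyDeviceUID,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var uid: CFString? = nil
    var size = UInt32(MemoryLayout<CFString?>.size)
    let status = withUnsafeMutablePointer(to: &uid) { ptr in
        AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, ptr)
    }
    return status == noErr ? (uid as String?) : nil
}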
Replies: 2 · Boosts: 0 · Views: 474 · Latest activity: Sep ’25
AudioQueue Output fails playing audio almost immediately?
On macOS Sequoia, I'm having the hardest time getting this basic audio output to work correctly. I'm compiling in Xcode using C99, and when I run this, I get audio for a split second, and then nothing, indefinitely. Any ideas what could be going wrong? Here's a minimal code example to demonstrate:

#include <AudioToolbox/AudioToolbox.h>
#include <stdint.h>

#define RENDER_BUFFER_COUNT 2
#define RENDER_FRAMES_PER_BUFFER 128
// mono linear PCM audio data at 48kHz
#define RENDER_SAMPLE_RATE 48000
#define RENDER_CHANNEL_COUNT 1
#define RENDER_BUFFER_BYTE_COUNT (RENDER_FRAMES_PER_BUFFER * RENDER_CHANNEL_COUNT * sizeof(f32))

void RenderAudioSaw(float* outBuffer, uint32_t frameCount, uint32_t channelCount)
{
    static bool isInverted = false;
    float scalar = isInverted ? -1.f : 1.f;
    for (uint32_t frame = 0; frame < frameCount; ++frame)
    {
        for (uint32_t channel = 0; channel < channelCount; ++channel)
        {
            // series of ramps, alternating up and down.
            outBuffer[frame * channelCount + channel] = 0.1f * scalar * ((float)frame / frameCount);
        }
    }
    isInverted = !isInverted;
}

AudioStreamBasicDescription coreAudioDesc = { 0 };
AudioQueueRef coreAudioQueue = NULL;
AudioQueueBufferRef coreAudioBuffers[RENDER_BUFFER_COUNT] = { NULL };

void coreAudioCallback(void* unused, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    // 0's here indicate no fancy packet magic
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

int main(void)
{
    const UInt32 BytesPerSample = sizeof(float);
    coreAudioDesc.mSampleRate = RENDER_SAMPLE_RATE;
    coreAudioDesc.mFormatID = kAudioFormatLinearPCM;
    coreAudioDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    coreAudioDesc.mBytesPerPacket = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mFramesPerPacket = 1;
    coreAudioDesc.mBytesPerFrame = RENDER_CHANNEL_COUNT * BytesPerSample;
    coreAudioDesc.mChannelsPerFrame = RENDER_CHANNEL_COUNT;
    coreAudioDesc.mBitsPerChannel = BytesPerSample * 8;

    coreAudioQueue = NULL;
    OSStatus result;
    // most of the 0 and NULL params here are for compressed sound formats etc.
    result = AudioQueueNewOutput(&coreAudioDesc, &coreAudioCallback, NULL, 0, 0, 0, &coreAudioQueue);
    if (result != noErr)
    {
        assert(false == "AudioQueueNewOutput failed!");
        abort();
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        uint32_t bufferSize = coreAudioDesc.mBytesPerFrame * RENDER_FRAMES_PER_BUFFER;
        result = AudioQueueAllocateBuffer(coreAudioQueue, bufferSize, &(coreAudioBuffers[i]));
        if (result != noErr)
        {
            assert(false == "AudioQueueAllocateBuffer failed!");
            abort();
        }
    }

    for (int i = 0; i < RENDER_BUFFER_COUNT; ++i)
    {
        RenderAudioSaw(coreAudioBuffers[i]->mAudioData, RENDER_FRAMES_PER_BUFFER, RENDER_CHANNEL_COUNT);
        coreAudioBuffers[i]->mAudioDataByteSize = coreAudioBuffers[i]->mAudioDataBytesCapacity;
        AudioQueueEnqueueBuffer(coreAudioQueue, coreAudioBuffers[i], 0, 0);
    }

    AudioQueueStart(coreAudioQueue, NULL);
    sleep(10); // some time to hear the audio
    AudioQueueStop(coreAudioQueue, true);
    AudioQueueDispose(coreAudioQueue, true);
    return 0;
}
Replies: 2 · Boosts: 0 · Views: 401 · Latest activity: Sep ’25
dlsym cannot find symbol g_dwILResult when debugging an audio plugin
I am trying to debug the AAX version of my plugin (a MIDI effect) in Pro Tools, but I am getting the following error (Mac console) when attempting to load it: dlsym cannot find symbol g_dwILResult in CFBundle etc. I used Xcode 16.4 to build the plugin. Has anybody come across the same or a similar message? Best, Achillefs Axart Labs
Replies: 2 · Boosts: 0 · Views: 475 · Latest activity: Sep ’25
How to get PID from AudioObjectID on macOS pre Sonoma
I am working on an application to detect when an input audio device is being used. Basically, I want to know which application is using the microphone (built-in or external). This app runs on macOS. For macOS versions starting from Sonoma I can use this code:

int getAudioProcessPID(AudioObjectID process)
{
    pid_t pid;
    if (@available(macOS 14.0, *)) {
        constexpr AudioObjectPropertyAddress prop {
            kAudioProcessPropertyPID,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMain
        };
        UInt32 dataSize = sizeof(pid);
        OSStatus error = AudioObjectGetPropertyData(process, &prop, 0, nullptr, &dataSize, &pid);
        if (error != noErr) {
            return -1;
        }
    } else {
        // Pre-Sonoma code goes here
    }
    return pid;
}

which works. However, kAudioProcessPropertyPID was added in the macOS 14.0 SDK. Does anyone know how to achieve the same functionality on previous versions?
Replies: 1 · Boosts: 0 · Views: 239 · Latest activity: Sep ’25
Potential Documentation Error in kAudioAggregateDevicePropertyTapList Sample Code
Hi, I believe I've found a potential error in the sample code on the documentation page for creating and using a process tap with an aggregate device. The issue is in the section explaining how to add a tap to the aggregate device. I have already filed a Feedback Assistant ticket on this (ID: FB17411663) but haven't heard back for months.

Capturing system audio with Core Audio taps

The sample code for modifying kAudioAggregateDevicePropertyTapList incorrectly uses the tapID as the target AudioObjectID when calling AudioObjectSetPropertyData.

// (Code to get the list and potentially modify listAsArray)
if var listAsArray = list as? [CFString] {
    // ... (modification logic) ...

    // Set the list back on the aggregate device. <--- The comment is correct
    list = listAsArray as CFArray
    _ = withUnsafeMutablePointer(to: &list) { list in
        // INCORRECT: This call uses tapID as the target object.
        AudioObjectSetPropertyData(tapID, &propertyAddress, 0, nil, propertySize, list)
    }
}

kAudioAggregateDevicePropertyTapList is a property that belongs to the aggregate device, not the individual tap. Therefore, to set this property, the AudioObjectSetPropertyData call must target the AudioObjectID of the aggregate device itself. Using tapID as the first argument is logically incorrect for this operation and will not update the aggregate device as intended. Furthermore, the preceding AudioObjectGetPropertyData call to fetch the list also appears to use the incorrect tapID as its target in the sample. The AudioObjectID for both getting and setting this property should be the ID of the aggregate device.

_ = AudioObjectGetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, &propertySize, &list)
_ = AudioObjectSetPropertyData(aggregateDeviceID, &propertyAddress, 0, nil, propertySize, newList)

Thank you!
Replies: 0 · Boosts: 0 · Views: 288 · Latest activity: Sep ’25
Audio Unit v3 host v2 third party plugins
Hi, I have just implemented an Audio Unit v3 host.

AgsAudioUnitPlugin *audio_unit_plugin;
AVAudioUnitComponentManager *audio_unit_component_manager;
NSArray<AVAudioUnitComponent *> *av_component_arr;
AudioComponentDescription description;
guint i, i_stop;

if(!AGS_AUDIO_UNIT_MANAGER(audio_unit_manager)){
    return;
}

audio_unit_component_manager = [AVAudioUnitComponentManager sharedAudioUnitComponentManager];

/* effects */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_Effect;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
    ags_audio_unit_manager_load_component(audio_unit_manager, (gpointer) av_component_arr[i]);
}

/* instruments */
description = (AudioComponentDescription) {0,};
description.componentType = kAudioUnitType_MusicDevice;

av_component_arr = [audio_unit_component_manager componentsMatchingDescription:description];

i_stop = [av_component_arr count];

for(i = 0; i < i_stop; i++){
    ags_audio_unit_manager_load_component(audio_unit_manager, (gpointer) av_component_arr[i]);
}

But this doesn't show me Audio Unit v2 plugins. Why?

regards, Joël
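For comparison, a minimal sketch (in Swift) of enumerating effect components with the older C AudioComponent API, which should list both v2 and v3 components; it may help narrow down whether the behaviour above is specific to AVAudioUnitComponentManager in this setup:

import AudioToolbox

var desc = AudioComponentDescription(componentType: kAudioUnitType_Effect,
                                     componentSubType: 0,
                                     componentManufacturer: 0,
                                     componentFlags: 0,
                                     componentFlagsMask: 0)
var component = AudioComponentFindNext(nil, &desc)
while let current = component {
    var name: Unmanaged<CFString>?
    if AudioComponentCopyName(current, &name) == noErr, let cfName = name?.takeRetainedValue() {
        print(cfName)
    }
    component = AudioComponentFindNext(current, &desc)
}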
Replies: 3 · Boosts: 0 · Views: 340 · Latest activity: Aug ’25
Block microphone and speakers due to security reasons
Hello, As part of developing a DLP system, the microphone and speakers should be blocked. My solution involves muting devices by changing the kAudioDevicePropertyMute property. However, this solution allows the user to unmute the device, and the app must implement a property listener to mute the device again. The problem is that muting takes some time, and the device is temporarily unmuted. Admittedly, it takes less than a second, but it nevertheless appears insecure. Is there an Apple-recommended approach to implement such blocking more securely? Maybe some solution based on IOKit. Thank you in advance, Pavel
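For context, a minimal sketch of the mute-by-property approach the post describes (kAudioDevicePropertyMute on the input scope; the output scope would be used for speakers). This is the mechanism described above as insufficient, not a recommended solution:

import CoreAudio

func setInputMuted(_ deviceID: AudioObjectID, muted: Bool) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyMute,
        mScope: kAudioDevicePropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var value: UInt32 = muted ? 1 : 0
    return AudioObjectSetPropertyData(deviceID, &address, 0, nil,
                                      UInt32(MemoryLayout<UInt32>.size), &value)
}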
Replies: 2 · Boosts: 0 · Views: 89 · Latest activity: Aug ’25
Logic Pro cannot load v3 audio unit with framework compiled with Swift 6
Sequoia 15.4.1 (24E263)
Xcode: 16.3 (16E140)
Logic Pro: 11.2.1
I've been developing a complex audio unit for macOS that works perfectly well in its own bespoke host app and is now well into its beta testing stage. It did take some effort to get it to work well in Logic Pro, however, and all was fine and working well until now. The AU part is an empty app extension with a framework containing its code. The framework contains Swift code for the UI and C code for the DSP parts. When the framework is compiled using the Swift 5 compiler, the AU runs in Logic with no problems. (I should also mention that the AU passes the strictest auval tests.) But when the framework is compiled with Swift 6, Logic Pro cannot load it. Logic displays a message saying the audio unit could not be loaded and to contact the developer. My own host app loads the AU perfectly well with the Swift 6 version, so I know there's nothing wrong with the audio unit. I cannot find any differences in any of the built output files except, of course, the actual binary code in the framework. I've worked for hours on this and cannot find a solution other than to build the framework with Swift 5. (I worked hard to get all the async code updated and working with Swift 6, so I feel a little cheated!) What is happening? Is this a bug in Logic? Is this a bug in the Swift 6 compiler/linker? I'm at the hands-in-the-air, tearing-out-hair stage (once again)!
Replies: 1 · Boosts: 0 · Views: 326 · Latest activity: Jul ’25
How to play Vorbis/OGG files with Swift?
Does anyone have a working example of how to play OGG files with Swift? I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift. I then used it to parse an OGG file successfully. Then I was required to use Objective-C++ to fill the PCM, because this method seems to only be available in C++, and that part hangs my app for a good 40 seconds to several minutes depending on the audio file; it then plays for about 2 seconds and then crashes. I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS; I want to play the files on Mac). I also tried using the Cricket Audio framework below.
https://github.com/sjmerel/ck
It has a Swift example and it can play their proprietary soundbank format, but it is also supposed to play OGG, and it just doesn't do anything when trying to play OGG, as you can see in the posted issue:
https://github.com/sjmerel/ck/issues/3
Right now I believe every player that can play OGGs on Mac is written in Objective-C or C++. Anyway, any help/advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.
Replies: 2 · Boosts: 0 · Views: 4.1k · Latest activity: Jul ’25
Graceful shutdown during background audio playback.
Hello. My team and I think we have an issue where our app is asked to gracefully shut down, followed by a SIGTERM. As we've learned, this is normally not an issue. However, it seems to also be happening while our app (an audio streamer) is actively playing in the background. From our perspective, starting playback indicates strong user intent. We understand that there can be extreme circumstances where the background audio needs to be killed, but should that be considered part of normal operation? We hope that's not the case. All we see in the logs is the graceful shutdown request. We can say with high certainty that it's happening, though, as we know that playback is running within 0.5 seconds of the crash, without any other tracked user interaction. Can you verify whether this is intended behavior, and whether there's something we can do about it from our end? From our logs it doesn't look to be related to either memory usage within the app or the system as a whole. Best, John
Replies: 0 · Boosts: 1 · Views: 75 · Latest activity: Jun ’25
AVAssetResourceLoaderDelegate for radio stream
Hi everyone, I'm trying to use AVAssetResourceLoaderDelegate to handle a live radio stream (e.g. an Icecast/HTTP stream). My goal is to have access to the last 30 seconds of audio data during playback, so I can analyze it for specific audio patterns in near-real-time. I've implemented a custom resource loader that works fine for podcasts and static files, where the file size and content length are known. However, for infinite live streams, my current implementation stops receiving new loading requests after the first one is served. As a result, playback either stalls or fails to continue. Has anyone successfully used AVAssetResourceLoaderDelegate with a continuous radio stream? Or can you suggest a better approach for buffering and analyzing live audio? Any tips, examples, or advice would be appreciated. Thanks!
Replies: 0 · Boosts: 0 · Views: 96 · Latest activity: Jun ’25
AudioUnit may experience silent capture issues on iPadOS 18.4.1 or 18.5.
Among the millions of users of our online product, we have identified through data metrics that the silent-audio-data capture rate on iPadOS 18.4.1 or 18.5 has increased abnormally. However, we are unable to reproduce the issue. Has anyone encountered a similar issue? The parameters we used are as follows:

AudioSession:
category: AVAudioSessionCategoryPlayAndRecord
mode: AVAudioSessionModeDefault
option: 77
preferredSampleRate: 48000.000000
preferredIOBufferDuration: 0.010000

AudioUnit:
format.mFormatID = kAudioFormatLinearPCM;
format.mSampleRate = 48000.0;
format.mChannelsPerFrame = 2;
format.mBitsPerChannel = 16;
format.mFramesPerPacket = 1;
format.mBytesPerFrame = format.mChannelsPerFrame * 16 / 8;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mFormatFlags = kAudioFormatFlagsNativeEndian | kLinearPCMFormatFlagIsPacked | kLinearPCMFormatFlagIsSignedInteger;

component.componentType = kAudioUnitType_Output;
component.componentSubType = kAudioUnitSubType_RemoteIO;
component.componentManufacturer = kAudioUnitManufacturer_Apple;
component.componentFlags = 0;
component.componentFlagsMask = 0;
Replies: 0 · Boosts: 0 · Views: 108 · Latest activity: Jun ’25
SystemAudio Capture API Fails with OSStatus error 1852797029 (kAudioCodecIllegalOperationError)
Issue Description: I'm implementing a system audio capture feature using AudioHardwareCreateProcessTap and AudioHardwareCreateAggregateDevice. The app successfully creates the tap and the aggregate device, but when starting the IO procedure with AudioDeviceStart, it sometimes fails with OSStatus error 1852797029 ("The operation couldn't be completed. (OSStatus error 1852797029.)"). The error occurs inconsistently, which makes it particularly difficult to debug and reproduce.
Questions:
- Has anyone encountered this intermittent "nope" error code (0x6e6f7065) when working with system audio capture?
- Are there specific conditions or system states that might trigger this error sporadically?
- Are there any known workarounds for handling this intermittent failure case?
Any insights or guidance would be greatly appreciated.
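A small sketch of how an OSStatus like 1852797029 can be decoded into its four-character form ("nope" here), which is often the quickest way to match it against the error constants in the Core Audio headers:

func fourCharCode(from status: OSStatus) -> String {
    let value = UInt32(bitPattern: status)
    let bytes = [24, 16, 8, 0].map { UInt8((value >> $0) & 0xFF) }
    // Fall back to the decimal value if the bytes are not printable ASCII.
    guard bytes.allSatisfy({ $0 >= 0x20 && $0 < 0x7F }) else { return "\(status)" }
    return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
}

print(fourCharCode(from: 1852797029)) // "nope"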
Replies: 0 · Boosts: 0 · Views: 104 · Latest activity: May ’25
How to enumerate built-in media devices?
Hello, I need to enumerate built-in media devices (cameras, microphones, etc.). For this purpose, I am using the CoreAudio and CoreMediaIO frameworks. According to the table 'Daemon-Safe Frameworks' in Apple’s TN2083, CoreAudio is daemon-safe. However, the documentation does not mention CoreMediaIO. Can CoreMediaIO be used in a daemon? If not, are there any documented alternatives to detect built-in cameras in a daemon (e.g., via device classes in IOKit)? Thank you in advance, Pavel
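As a partial illustration of the audio side, a minimal sketch of listing Core Audio devices and checking for the built-in transport type (this covers microphones and speakers only; it does not answer the CoreMediaIO daemon-safety question):

import CoreAudio

var address = AudioObjectPropertyAddress(
    mSelector: kAudioHardwarePropertyDevices,
    mScope: kAudioObjectPropertyScopeGlobal,
    mElement: kAudioObjectPropertyElementMain)

var size: UInt32 = 0
_ = AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &size)

var devices = [AudioObjectID](repeating: 0, count: Int(size) / MemoryLayout<AudioObjectID>.size)
_ = AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject), &address, 0, nil, &size, &devices)

for device in devices {
    var transport: UInt32 = 0
    var transportSize = UInt32(MemoryLayout<UInt32>.size)
    var transportAddress = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyTransportType,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    if AudioObjectGetPropertyData(device, &transportAddress, 0, nil, &transportSize, &transport) == noErr,
       transport == kAudioDeviceTransportTypeBuiltIn {
        print("Built-in audio device: \(device)")
    }
}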
Replies: 0 · Boosts: 0 · Views: 79 · Latest activity: May ’25