Core Audio


Interact with the audio hardware of a device using Core Audio.


Posts under Core Audio tag

50 Posts
Post not yet marked as solved
34 Replies
9k Views
I have a USB audio interface that is causing kernel traps and the audio output to "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below). The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload."

I've added a short snippet from the Console logs around the time of the audio skip/dropout. The more complete logs are at this gist: https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105

I've also opened a Feedback Assistant ticket (FB9037528): https://feedbackassistant.apple.com/feedback/9037528

Does anyone know what could be causing this issue? Thanks for any help. Cheers, Flux aka Andy.

Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)

System Software Overview:
System Version: macOS 11.2.3 (20D91)
Kernel Version: Darwin 20.3.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: mycomputername
User Name: myusername
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled

USB interface: Denon DJ DS1

Snippet of Console logs:

error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
...
default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
  HostApplicationDisplayID = "com.apple.Music";
  cause = Unknown;
  deadline = 2615019;
  "input_device_source_list" = Unknown;
  "input_device_transport_list" = USB;
  "input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
  "io_buffer_size" = 512;
  "io_cycle" = 1;
  "is_prewarming" = 0;
  "is_recovering" = 0;
  "issue_type" = overload;
  lateness = "-535";
  "output_device_source_list" = Unknown;
  "output_device_transport_list" = USB;
  "output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
}: (null)
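For anyone trying to correlate these dropouts with HAL activity, here is a small diagnostic sketch (not from the original post) that registers for the HAL's processor-overload notification. Swift is used for brevity, and the default output device is chosen purely for illustration:

import CoreAudio
import Foundation

func watchDefaultOutputOverloads() {
    var device = AudioObjectID(kAudioObjectUnknown)
    var size = UInt32(MemoryLayout<AudioObjectID>.size)
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultOutputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)   // ...ElementMaster on older SDKs
    AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                               &address, 0, nil, &size, &device)

    // kAudioDeviceProcessorOverload fires every time an IO cycle blows its deadline.
    address.mSelector = kAudioDeviceProcessorOverload
    AudioObjectAddPropertyListenerBlock(device, &address, DispatchQueue.main) { _, _ in
        print("HAL processor overload at", Date())
    }
}

This only timestamps the overloads the HAL already reports; the kernel-trap error itself points at the USB driver rather than at anything an application can fix.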
Posted by djflux. Last updated.
Post not yet marked as solved
1 Reply
553 Views
Here is the processing pipeline in my application:

Mic -> AVCaptureOutput -> Audio (PCM) -> Audio Encoder -> AAC Packet (encoded)
Camera -> AVCaptureOutput -> Image -> Video Encoder -> H.264 Video Packet (encoded)

So my app is a movie encoder. The crash happens when the camera is switched (front camera <-> back camera). The crashing call is AudioConverterFillComplexBuffer, apparently inside NativeInt16ToFloat32Scaled_ARM. What does that mean? Why does it happen?

0 AudioCodecs 0x0000000183fbe2bc NativeInt16ToFloat32Scaled_ARM + 132
1 AudioCodecs 0x0000000183f63708 AppendInputData(void*, void const*, unsigned int*, unsigned int*, AudioStreamPacketDescription const*) + 56
2 AudioToolbox 0x000000018411aaac CodecConverter::AppendExcessInput(unsigned int&) + 196
3 AudioToolbox 0x000000018411a59c CodecConverter::EncoderFillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 660
4 AudioToolbox 0x0000000184124ec0 AudioConverterChain::RenderOutput(CABufferList*, unsigned int, unsigned int&, AudioStreamPacketDescription*) + 116
5 AudioToolbox 0x0000000184100d98 BufferedAudioConverter::FillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 444
6 AudioToolbox 0x00000001840d8c9c AudioConverterFillComplexBuffer + 340
7 MovieEncoder 0x0000000100341fd4 __49-[AACEncoder encodeSampleBuffer:completionBlock:]_block_invoke (AACEncoder.m:247)
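One plausible cause: if the front and back cameras are paired with different microphone configurations, the audio format feeding the encoder can change at the moment of the switch, and a converter created for the old format will misread the new buffers. A hedged sketch (class and method names are illustrative, not from the post) of rebuilding the AudioConverter when the incoming format changes:

import AudioToolbox
import CoreMedia

// Sketch: recreate the AudioConverter whenever the capture format changes,
// e.g. when a front/back camera switch also changes the microphone format.
final class ConverterState {
    var converter: AudioConverterRef?
    var inputFormat = AudioStreamBasicDescription()
    var outputFormat: AudioStreamBasicDescription   // the AAC target format

    init(outputFormat: AudioStreamBasicDescription) {
        self.outputFormat = outputFormat
    }

    func rebuildIfNeeded(for sampleBuffer: CMSampleBuffer) {
        guard let description = CMSampleBufferGetFormatDescription(sampleBuffer),
              let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(description)?.pointee
        else { return }
        // Compare the fields that matter for conversion.
        if converter == nil
            || asbd.mSampleRate != inputFormat.mSampleRate
            || asbd.mChannelsPerFrame != inputFormat.mChannelsPerFrame
            || asbd.mFormatFlags != inputFormat.mFormatFlags {
            if let old = converter { AudioConverterDispose(old) }   // drop stale state
            inputFormat = asbd
            var input = asbd
            var newConverter: AudioConverterRef?
            AudioConverterNew(&input, &outputFormat, &newConverter)
            converter = newConverter
        }
    }
}

NativeInt16ToFloat32Scaled_ARM itself is just the codec converting Int16 input to Float32 internally, which is where a stale format description would first blow up.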
Posted. Last updated.
Post not yet marked as solved
1 Reply
94 Views
Dear Sirs, I'm trying to find a way to save and restore some settings of an Audio Server Plugin so that they will be available again after a reboot. I came across the functions WriteToStorage and CopyFromStorage, which seem to work correctly, but after a reboot my settings seem to be gone. Am I doing something wrong, and should this storage normally survive a reboot, or is this not the intended way to have persistent settings? What would be the recommended way if I want to use these settings right from the start, before any user-mode app is started? Thanks and best regards, Johannes
Posted. Last updated.
Post not yet marked as solved
5 Replies
4.2k Views
That is, will my render callback ever be called after AudioOutputUnitStop() returns? In other words, will it be safe to free resources used by the render callback, or do I need to add realtime-safe communication between the stopping thread and the callback thread? This question is intended for both the macOS HAL Output and iOS Remote IO output units.
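A conservative teardown sketch, under the commonly cited assumption that stopping alone does not fence an in-flight render callback, while uninitializing/disposing the unit does (function name is illustrative):

import AudioToolbox

// Sketch: AudioOutputUnitStop stops new render cycles from being scheduled,
// but a callback may still be mid-execution on the render thread when it
// returns. Uninitializing and disposing before freeing callback state is
// the conservative order.
func shutDown(_ unit: AudioUnit) {
    AudioOutputUnitStop(unit)             // stop scheduling new render cycles
    AudioUnitUninitialize(unit)           // commonly relied on to fence the render thread
    AudioComponentInstanceDispose(unit)   // no callbacks can occur after this
    // Only now free buffers/state owned by the render callback.
}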
Posted. Last updated.
Post marked as solved
2 Replies
217 Views
Hello, I am using Core Audio to output a sine wave at a constant frequency (256 Hz). The problem I have is that the sound starts very nice and pure, but gets distorted over time; it feels like there is some sort of cumulative error which gets worse as time goes by. I am using AudioDeviceCreateIOProcID to create a callback, in which I populate the buffer with samples. I only have a single buffer, because my samples are interleaved. The buffer size is always constant (12800 bytes). Samples are floats (from -1 to 1).

Here is what I tried in order to identify the reason for the distortion:

I validated that each subsequent callback starts generating samples at the proper phase, i.e. the one at which the previous callback ended. E.g. if the last sample from the previous callback was 0.8f, then the first sample in the next callback is 0.82f, as expected.

I wondered if the hardware might play the buffer while I am filling it, so I even used a mutex to lock the buffer as I write to it, but that did not change anything at all. This probably means that the buffer passed to the callback by the OS is already safe to write to.

I inspected the AudioStreamBasicDescription, the buffer size, and how many bytes I write to the buffer - it all matches my expectations.

Any ideas on what might be causing this sound distortion over time?
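One classic cause of exactly this symptom (not confirmed from the post) is computing each sample as sin(2π·f·t) with an ever-growing t or sample index in single precision: the sine argument loses precision as it grows, so the tone slowly degrades even though each callback starts at the "correct" phase value. A sketch of the usual fix, an accumulated and wrapped double-precision phase; names are illustrative:

import Foundation

// Sketch: the phase lives in [0, 1) and is wrapped every sample, so
// floating-point error cannot accumulate no matter how long playback runs.
final class SineGenerator {
    private var phase = 0.0                // current phase, in cycles
    private let frequency = 256.0          // Hz
    private let sampleRate = 48_000.0      // use the device's actual rate

    func fill(_ samples: UnsafeMutableBufferPointer<Float>, channelCount: Int) {
        let increment = frequency / sampleRate
        let frameCount = samples.count / channelCount
        for frame in 0..<frameCount {
            let value = Float(sin(2.0 * .pi * phase))
            for channel in 0..<channelCount {
                samples[frame * channelCount + channel] = value   // interleaved
            }
            phase += increment
            if phase >= 1.0 { phase -= 1.0 }   // the wrap that prevents drift
        }
    }
}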
Posted. Last updated.
Post not yet marked as solved
0 Replies
196 Views
Hi! I am working on an audio application on iOS. This is how I retrieve the workgroup from the remote IO audio unit (ioUnit). The unit is initialized and is working fine (meaning that it is regularly called by the system).

os_workgroup_t os_workgroup{nullptr};
uint32_t os_workgroup_index_size;
if (status = AudioUnitGetProperty(ioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 0, &os_workgroup, &os_workgroup_index_size); status != noErr) {
    throw runtime_error("AudioUnitSetProperty kAudioOutputUnitProperty_OSWorkgroup - Failed with OSStatus: " + to_string(status));
}

However, the resulting os_workgroup's value is 0x40, which does not seem correct. No wonder I cannot join any other realtime threads to the workgroup either. The returned status, however, is a solid 0. Can anyone help?
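One thing that stands out in the snippet: os_workgroup_index_size is passed to AudioUnitGetProperty uninitialized, and that in/out size argument must be set to the size of the destination before the call, otherwise the property data may not be written correctly. A hedged sketch of the same query with the size initialized (Swift shown only for illustration; the same one-line fix applies to the C++ above, and ioUnit is assumed to be the initialized remote IO unit):

import AudioToolbox

// Sketch: ioDataSize must be initialized to the size of the out value
// before the call; the post's C++ leaves it uninitialized.
func copyWorkgroup(from ioUnit: AudioUnit) -> os_workgroup_t? {
    var workgroup: os_workgroup_t?
    var dataSize = UInt32(MemoryLayout<os_workgroup_t?>.size)
    let status = AudioUnitGetProperty(ioUnit,
                                      kAudioOutputUnitProperty_OSWorkgroup,
                                      kAudioUnitScope_Global,
                                      0,
                                      &workgroup,
                                      &dataSize)
    // On success, realtime threads can join via os_workgroup_join().
    return status == noErr ? workgroup : nil
}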
Posted. Last updated.
Post not yet marked as solved
30 Replies
23k Views
I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network? The contexts in which I am most excited about using AirTags are:

Gaming
Health / Fitness-focused apps
Accessibility features
Musical and other creative interactions within apps

I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here. Alexander
Posted by alexander. Last updated.
Post marked as solved
23 Replies
7.9k Views
Hi all, Apple dropping on-going development for FireWire devices that were supported with the Core Audio driver standard is a catastrophe for a lot of struggling musicians who need to both keep up to date on the security updates that come with new OS releases, and continue to utilise their hard-earned investments in very expensive and still pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf ignorance of the cries for on-going support.

I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features.

Probably not the first time you gurus have had someone make the logical leap leading to a request for something like this, but I was wondering if it might somehow be possible to shoehorn the code used in previous versions of macOS, which allowed the Mac to speak with the audio features of such devices, into the Ventura version of the OS. Would it be possible? Would it involve a lot of work? I don't think I'd be the only person willing to pay for a third-party application or utility that restored this functionality. There have to be hundreds of thousands of people who would be happy to spare some cash to stop their multi-thousand-dollar investment in gear from being so thoughtlessly resigned to the scrap heap.

Any comments or layman-friendly explanations as to why this couldn't happen would be gratefully received! Thanks, em
Posted by emihatov. Last updated.
Post not yet marked as solved
3 Replies
1.8k Views
I'm using a VoiceProcessingIO audio unit in my VoIP application on Mac. The problem is that, at least since Mojave, AudioComponentInstanceNew blocks for at least 2 seconds. Profiling shows that internally it's waiting on some mutex and then on some message queue. My code to initialize the audio unit is as follows:

OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);

Here's a profiler screenshot showing the two system calls in question. So, is this a bug or intended behavior?
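This doesn't answer whether the wait is intended, but a sketch of the asynchronous alternative keeps the multi-second instantiation off the calling thread; AudioComponentInstantiate is the completion-handler counterpart of AudioComponentInstanceNew (Swift shown, same component description as above):

import AudioToolbox

// Sketch: instantiate asynchronously so the long wait inside
// AudioComponentInstanceNew doesn't block the calling thread.
// This sidesteps the stall rather than explaining it.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

if let component = AudioComponentFindNext(nil, &desc) {
    AudioComponentInstantiate(component, []) { instance, status in
        // Called once the instance is ready.
        guard status == noErr, let unit = instance else { return }
        AudioUnitInitialize(unit)
        // ... configure and start `unit` here ...
    }
}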
Posted by Grishka. Last updated.
Post not yet marked as solved
0 Replies
370 Views
I'd like to know whether NullAudio.c is an official SDK sample or not, and why its enums and UIDs are defined in NullAudio.c rather than in SDK header files. I am trying to use kObjectID_Mute_Output_Master, but it is defined with a different value in each third-party plugin:

kObjectID_Mute_Output_Master = 10 // NullAudio.c
kObjectID_Mute_Output_Master = 9  // https://github.com/ExistentialAudio/BlackHole
kObjectID_Mute_Output_Master = 6  // https://github.com/q-p/SoundPusher

I can build BlackHole and SoundPusher, and these plugins work. This enum should be defined in an SDK header and keep the same value in each SDK version. I'd like to know why third parties define different values. If you know the history of NullAudio.c, please let me know.
Posted by Himadeus. Last updated.
Post not yet marked as solved
1 Reply
331 Views
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I am able to capture the audio stream using IO procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm getting is that when I'm using AUVoiceProcessing the gain of the two devices gets reduced, which affects the volume of both devices (microphone and speaker). I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
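For reference, a sketch of how disabling the voice-processing AGC is usually written; one assumption worth checking is that the property is applied before AudioUnitInitialize, since voice-processing settings changed after initialization may not take effect (vpUnit is illustrative):

import AudioToolbox

// Sketch: disable the voice-processing AGC on the VPIO unit, before
// the unit is initialized.
func disableAGC(on vpUnit: AudioUnit) -> OSStatus {
    var enableAGC: UInt32 = 0
    return AudioUnitSetProperty(vpUnit,
                                kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                kAudioUnitScope_Global,
                                0,   // some samples address the input element (1) here
                                &enableAGC,
                                UInt32(MemoryLayout<UInt32>.size))
}

Some system-level gain reduction is inherent to the voice-processing unit itself (it manages device levels to keep the echo canceller stable), so disabling the AGC may not remove all of it.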
Posted. Last updated.
Post not yet marked as solved
5 Replies
1.2k Views
Hi, so I have a little bit of work left on the Asus Xonar family of audio devices. Thanks to Apple's SamplePCIAudioDriver code and their excellent documentation, Evegeny Gavrilov's kxAudio driver for Mac, and Takashi Iwai's exceptional documentation of the ALSA API, I have something that is ready for testing. The stats look good, but unfortunately this is my second HDAV1.3 Deluxe; the other one is also in the same room, consuming all of my devices with powered audio outputs. No matter, I am in the process of acquiring another Xonar sound card in this family.

Which brings me to my question: what is the benefit of getting an Apple developer account for 99 dollars a year? Will I be able to distribute a beta kext with my signature so that people can test the binary? I don't think others could run a self-signed kext built on one machine on another, correct? So would a developer license allow others to test a binary built on my machine, assuming they're x86? My hope is that the developer program would allow me to test the binaries and solicit input from enthusiast Mac Pro owners worldwide. I then hope to create a new program that will give us the wealth of mixers/controls this fantastic line is capable of providing.
Posted by broly. Last updated.
Post not yet marked as solved
0 Replies
324 Views
We are developing an app that uses external hardware to measure analogue hearing-loop performance. It uses the audio jack on the phone/iPad. With the new hardware on iPad using USB-C, we have noticed that the same input produces very different input levels depending on the adapter: through a Lightning adapter versus a USB-C adapter, the USB-C path is ~23 dB lower with the same code and settings - more than a 10x difference in amplitude. Is there any way to control the USB-C adapter? Am I missing something? The code simply uses AVAudioInputNode and a block attached to it via self.inputNode.installTap. We do adjust the gain to 1.0:

let gain: Float = 1.0
try session.setInputGain(gain)

But that still does not help. I wish there was an Apple lab I could go to, to speak to engineers about it.
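A sketch of checking whether the active input route exposes a gain control at all before calling setInputGain; on routes that don't (which may well include some USB-C adapters - an assumption, not something the post confirms), the call has nothing to act on:

import AVFoundation

// Sketch: input gain is only adjustable when the active route's input
// supports it; many adapters expose no gain control at all.
func configureInputGain() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setActive(true)
    if session.isInputGainSettable {
        try session.setInputGain(1.0)        // normalized 0.0 ... 1.0
    } else {
        print("No input gain control on:",
              session.currentRoute.inputs.map { $0.portName })
    }
}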
Posted by greggj. Last updated.
Post not yet marked as solved
0 Replies
375 Views
A: iPhone SE 2nd generation (iOS 16.5); Bluetooth device used: Shokz OpenRun S803
B: any mobile device

A uses the Bluetooth microphone/speaker and makes a call to B using an iPhone app.
A mutes the headset (the Bluetooth device supports mute in hardware).
While A is muted, B speaks.
A unmutes the headset.
From then on, every time B speaks, B can hear an echo.

Since there is no audio data while the hardware is muted, VPIO gets no reference audio data with which to remove the echo signal. Is there any alternative that resolves this echo in VoIP software using VPIO?
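The app can't stop the headset from muting in hardware, but if you control the mute UX, one workaround is to mute app-side and keep the capture running, so VPIO keeps receiving live frames and its echo canceller stays adapted. A sketch (isMuted and the callback wiring are illustrative):

import AudioToolbox
import Foundation

// Sketch: zero the captured frames inside the input callback while muted -
// VPIO sees silence rather than an absent stream.
func muteInPlace(_ bufferList: UnsafeMutablePointer<AudioBufferList>, isMuted: Bool) {
    guard isMuted else { return }
    for buffer in UnsafeMutableAudioBufferListPointer(bufferList) {
        if let data = buffer.mData {
            memset(data, 0, Int(buffer.mDataByteSize))   // send silence, not absence
        }
    }
}

The VPIO unit's kAUVoiceIOProperty_MuteOutput may also be worth a look: it mutes the processed uplink while the voice processing itself keeps running.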
Posted by ened. Last updated.
Post not yet marked as solved
8 Replies
2.4k Views
I am getting an error in iOS 16 that doesn't appear in previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

This is how the audio format and the callback are set:

// Set the audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 4000;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

AURenderCallbackStruct callbackStruct;
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &callbackStruct, sizeof(callbackStruct));

Note that the mSampleRate I set is 4000 Hz. In iOS 15 I get 0.02322 seconds of buffer duration (IOBufferDuration) and 93 frames in each callback. This is expected, because:

number of frames / buffer duration = sampling rate
93 / 0.02322 ≈ 4000 Hz

However, in iOS 16 I am getting the aforementioned error in the callback. Since the number of frames is equal to the number of packets, I am getting 1 or 2 frames in the callback, while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal:

number of frames / buffer duration = sampling rate
2 / 0.02322 ≈ 86 Hz

That doesn't make any sense. This error appears for different sampling rates (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 as my sampling rate. I have also tried to set the sampling rate by using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed - the sampling rate was still 44100 after calling that function. Any help on this issue would be appreciated.
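Regarding the last paragraph: a sketch of how the preferred-rate request is usually sequenced (category first, then the preference, then activation). The key caveat is that the value is only a preference:

import AVFoundation

// Sketch: setPreferredSampleRate is a request, not a guarantee - the
// session reports what was actually granted. On most iOS hardware the
// device runs at 44100/48000 Hz, so a 4000 Hz client format implies rate
// conversion somewhere in the chain.
func requestLowSampleRate() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord)
    try session.setPreferredSampleRate(4000)   // set preferences before activation
    try session.setActive(true)
    print("granted sample rate:", session.sampleRate)   // often 44100, not 4000
}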
Posted by ndrg. Last updated.
Post not yet marked as solved
2 Replies
4.5k Views
Hi! I am trying to develop an application that uses Core MIDI to instantiate a connection to my MacBook via Audio MIDI Setup. I have created a client within my application, and it shows up under the directory in Audio MIDI Setup on my MacBook. Now I am stuck trying to figure out how to send MIDI from my app to my computer. I have tried MIDISend and using CreateOutputPort. Both are successful in the sense that I don't get zeros when printing to the console, but nothing changes in the DAW when I set the controller and number values to the exact numbers I created in my app. I have a feeling that I am missing a network connection within my app somehow, so that it recognizes my computer as a source, but I have not yet found an effective method to do this. Any information as to how I can get MIDI to send from my app to my DAW on my computer would be greatly appreciated! I am trying to make this for my final project in one of my coding classes. Thanks! -GH
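For the sending side, a minimal sketch of building a packet list and sending one control-change message to an existing destination (client/port names are illustrative; error handling elided):

import CoreMIDI

// Sketch: send CC#7 (volume) = 100 on channel 1 to the first destination.
var client = MIDIClientRef()
MIDIClientCreate("DemoClient" as CFString, nil, nil, &client)
var outPort = MIDIPortRef()
MIDIOutputPortCreate(client, "DemoOut" as CFString, &outPort)

let destination = MIDIGetDestination(0)   // 0 if no destinations exist

var packetList = MIDIPacketList()
var packet = MIDIPacketListInit(&packetList)
let controlChange: [UInt8] = [0xB0, 7, 100]
packet = MIDIPacketListAdd(&packetList, MemoryLayout.size(ofValue: packetList),
                           packet, 0, controlChange.count, controlChange)
MIDISend(outPort, destination, &packetList)

If the goal is for the DAW to list the app as an input, the usual approach is a virtual source - MIDISourceCreate - and then MIDIReceived on that source, rather than MIDISend on an output port.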
Posted. Last updated.
Post not yet marked as solved
0 Replies
383 Views
I am currently working on a project that involves real-time audio processing in my iOS/macOS application. I have been exploring the Audio Unit Hosting API and specifically the AUHAL units for handling audio input and output. My goal is to establish a direct connection between an input AUHAL unit and an output AUHAL unit to achieve seamless real-time audio processing. I've been researching and experimenting with the API, but I haven't been able to find a clear solution or documentation regarding this specific scenario. Has anyone attempted such a configuration or encountered similar requirements? I would greatly appreciate any insights, suggestions, or pointers to relevant documentation that could help me achieve this direct connection between the input and output AUHAL units. Thank you in advance for your time and assistance. Best regards, Yosemite
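As far as I know there is no supported direct AUHAL-to-AUHAL connection, since each AUHAL is driven by its own device's I/O thread. The two standard patterns are: a single AUHAL when input and output are the same device, whose output render callback pulls the input element; or two AUHALs decoupled by a ring buffer when the devices differ. A sketch of the first pattern (Swift; the refCon wiring is illustrative):

import AudioToolbox

// Sketch: single AUHAL with input enabled on element 1 and output on
// element 0, same device for both. The output callback pulls the freshly
// captured input frames and returns them as the output data.
let passThroughCallback: AURenderCallback = { inRefCon, ioActionFlags,
                                              inTimeStamp, _, inNumberFrames,
                                              ioData in
    // inRefCon is assumed to point at the AudioUnit itself.
    let unit = inRefCon.assumingMemoryBound(to: AudioUnit.self).pointee
    return AudioUnitRender(unit, ioActionFlags, inTimeStamp,
                           1,                  // element 1 = AUHAL input side
                           inNumberFrames, ioData!)
}

The callback would be installed with kAudioUnitProperty_SetRenderCallback on the output element; for two distinct devices, Apple's old CAPlayThrough sample shows the two-AUHAL-plus-ring-buffer approach that absorbs the clock drift between them.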
Posted by yosemite. Last updated.