Core Audio


Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

69 Posts
Post not yet marked as solved
6 Replies
1.2k Views
Hi! With Monterey, you can configure an aggregate device with 7.1.4 output and play Dolby Atmos in that format. But my Apple Music is only delivering 5.1. Can I play Apple Music in 7.1.4?
Posted
by DehBaorli.
Last updated
.
Post marked as solved
2 Replies
94 Views
Hi, I want to do some work on an audio unit after a gap of (many) years, and am not clear about the status of v2 AUs. There's no template for them in the current Xcode, and the relevant CoreAudio classes (AUBase and subclasses) are also absent. Have I missed some optional download from Apple that restores these things, or is it all over for v2?
Posted
by timhl.
Last updated
.
Post not yet marked as solved
2 Replies
404 Views
We are getting this error on iOS 16.1 beta 5 that we never saw before in any earlier iOS version:

[as_client] AVAudioSession_iOS.mm:2374 Failed to set category, error: 'what'

I wonder if there is any known workaround for it. iOS 16 has been a nightmare, and a lot of AVFoundation code breaks or becomes unpredictable in behaviour. This issue is new in iOS 16.1.
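For context, a minimal sketch of the call that is failing here, with the thrown error surfaced instead of only appearing in the session's internal log (the category, mode and options below are illustrative placeholders, not taken from the post):

import AVFoundation

do {
    let session = AVAudioSession.sharedInstance()
    // Category/mode/options are placeholders; substitute whatever the app actually needs.
    try session.setCategory(.playback, mode: .default, options: [])
    try session.setActive(true)
} catch {
    // On iOS 16.1 this is where the "Failed to set category" error would surface to the app.
    print("AVAudioSession configuration failed: \(error)")
}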
Posted Last updated
.
Post not yet marked as solved
27 Replies
18k Views
I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network? The contexts in which I am most excited about using AirTags are:
- Gaming
- Health / Fitness-focused apps
- Accessibility features
- Musical and other creative interactions within apps
I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here. Alexander
Posted
by alexander.
Last updated
.
Post not yet marked as solved
2 Replies
1.1k Views
I'm using a VoiceProcessingIO audio unit in my VoIP application on Mac. The problem is, at least since Mojave, AudioComponentInstanceNew blocks for at least 2 seconds. Profiling shows that internally it's waiting on some mutex and then on some message queue. My code to initialize the audio unit is as follows:

OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);

Here's a profiler screenshot showing the two system calls in question. So, is this a bug or an intended behavior?
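If the goal is simply to keep the calling thread responsive, one option (my suggestion, not something confirmed in the thread) is the asynchronous AudioComponentInstantiate call, which delivers the instance in a completion handler; the two-second wait still happens, it just no longer stalls the caller. A minimal Swift sketch:

import AudioToolbox

var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_VoiceProcessingIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

if let component = AudioComponentFindNext(nil, &desc) {
    // The completion handler runs once the (potentially slow) instantiation has finished.
    AudioComponentInstantiate(component, []) { unit, status in
        guard status == noErr, let unit = unit else { return }
        AudioUnitInitialize(unit)
        // Hand the ready unit back to the VoIP pipeline here.
    }
}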
Posted
by Grishka.
Last updated
.
Post not yet marked as solved
1 Reply
356 Views
On newer Macs, the audio input from an external microphone plugged into the headphone jack (on a pair of wired Apple EarPods, for example) does not add any noticeable latency. However, the audio input from the built-in microphone adds considerable (~30 ms) latency. I imagine this is due to system-supplied signal processing that reduces background noise. Is there a way of bypassing this signal processing and reducing the latency? On iOS, there is an AVAudioSessionModeMeasurement mode that disables signal processing and lowers input latency. Is there an equivalent for macOS? FYI, on the 2015 MacBook Pros there is no noticeable added latency on the built-in mic. This issue affects newer computers, including the M1 line.
Posted
by dizman.
Last updated
.
Post not yet marked as solved
3 Replies
4.7k Views
I'm trying to convert a CMSampleBuffer to an AVAudioPCMBuffer instance to be able to perform audio processing in real time. I wrote an optional initialiser for my extension to pass a CMSampleBuffer reference. Unfortunately I simply don't know how to write to the AVAudioPCMBuffer's data. Here is my code so far:

import AVKit

extension AVAudioPCMBuffer {
    static func create(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
        guard let description: CMFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
              let sampleRate: Float64 = description.audioStreamBasicDescription?.mSampleRate,
              let numberOfChannels: Int = description.audioChannelLayout?.numberOfChannels else {
            return nil
        }
        guard let blockBuffer: CMBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
            return nil
        }
        let length: Int = CMBlockBufferGetDataLength(blockBuffer)
        let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate, channels: AVAudioChannelCount(numberOfChannels), interleaved: false)
        let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat!, frameCapacity: AVAudioFrameCount(length))!
        buffer.frameLength = buffer.frameCapacity
        for channelIndex in 0...numberOfChannels - 1 {
            guard let channel: UnsafeMutablePointer<Float> = buffer.floatChannelData?[channelIndex] else {
                return nil
            }
            for pointerIndex in 0...length - 1 {
                let pointer: UnsafeMutablePointer<Float> = channel.advanced(by: pointerIndex)
                pointer.pointee = 100
            }
        }
        return buffer
    }
}

Does anyone know how to convert a CMSampleBuffer to an AVAudioPCMBuffer and back again? I assume there is no other way if I want to interact with AVAudioEngine, am I right?
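For the forward direction, a commonly used sketch (assuming the sample buffer carries linear PCM and its data is ready; this is not code from the thread) builds the AVAudioFormat from the buffer's own stream description and lets CMSampleBufferCopyPCMDataIntoAudioBufferList fill the PCM buffer:

import AVFoundation
import CoreMedia

func makePCMBuffer(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
    guard let description = CMSampleBufferGetFormatDescription(sampleBuffer),
          var asbd = description.audioStreamBasicDescription else { return nil }
    let frameCount = CMSampleBufferGetNumSamples(sampleBuffer)
    guard let format = AVAudioFormat(streamDescription: &asbd),
          let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                        frameCapacity: AVAudioFrameCount(frameCount)) else { return nil }
    buffer.frameLength = AVAudioFrameCount(frameCount)
    // Copies the sample data straight into the PCM buffer's underlying AudioBufferList.
    let status = CMSampleBufferCopyPCMDataIntoAudioBufferList(sampleBuffer,
                                                              at: 0,
                                                              frameCount: Int32(frameCount),
                                                              into: buffer.mutableAudioBufferList)
    return status == noErr ? buffer : nil
}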
Posted
by fsoc.
Last updated
.
Post not yet marked as solved
2 Replies
507 Views
I am getting an error in iOS 16. This error doesn't appear in previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

This is how the audio format and the callback are set:

// Set the audio format
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = 4000;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;

AURenderCallbackStruct callbackStruct;
// Set output callback
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &callbackStruct, sizeof(callbackStruct));

Note that the mSampleRate I set is 4000 Hz. In iOS 15 I get 0.02322 seconds of buffer duration (IOBufferDuration) and 93 frames in each callback. This is expected, because:

number of frames / buffer duration = sampling rate
93 / 0.02322 ≈ 4000 Hz

However, in iOS 16 I am getting the aforementioned error in the callback:

Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

Since the number of frames is equal to the number of packets, I am getting 1 or 2 frames in the callback while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal.

number of frames / buffer duration = sampling rate
2 / 0.02322 ≈ 86 Hz

That doesn't make any sense. This error appears for other sampling rates (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 as my sampling rate. I have also tried to set the sampling rate by using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed. The sampling rate was still 44100 after calling that function. Any help on this issue would be appreciated.
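As a side note on the setPreferredSampleRate(_:) attempt: the preferred values are only requests, so a sketch like the following (the numbers are taken from the post, the category is a placeholder) reads back what the session actually granted, which is what the render callback will be driven by:

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .default, options: []) // placeholder category
    try session.setPreferredSampleRate(4000)                        // a request, not a guarantee
    try session.setPreferredIOBufferDuration(0.02322)
    try session.setActive(true)
} catch {
    print("session setup failed: \(error)")
}
// The hardware commonly stays at 44100/48000 Hz; resample in the app if 4000 Hz is required.
print("granted sample rate: \(session.sampleRate)")
print("granted IO buffer duration: \(session.ioBufferDuration)")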
Posted
by ndrg.
Last updated
.
Post not yet marked as solved
1 Reply
379 Views
I'm having issues with different ARC behaviour on different macOS setups while developing an Audio Server PlugIn. I'm using the kAudioObjectPropertyCustomPropertyInfoList property to declare my custom properties, and one of them is declared as kAudioServerPlugInCustomPropertyDataTypeCFPropertyList to be able to pass a CFDataRef containing arbitrary bytes. I haven't found any documentation regarding the memory management in this case, but I noticed that when I receive new data via AudioObjectSetPropertyData it leaks memory, so I added a CFRelease once I'm done with the data object - the memory leak was gone, and I settled on the assumption that the CoreAudio helper and/or daemon does not do any automatic release of incoming data and it's my responsibility to release it. But then I gave the plugin to a colleague for testing, who has a MacBook Pro (2014) running macOS Big Sur 11.3.1, and the driver started crashing with:

Thread 3 Crashed:: Dispatch queue: com.apple.NSXPCConnection.user.anonymous.81372
0  libobjc.A.dylib         0x00007fff203844af objc_release + 31
1  libobjc.A.dylib         0x00007fff203a220f AutoreleasePoolPage::releaseUntil(objc_object**) + 167
2  libobjc.A.dylib         0x00007fff20384e30 objc_autoreleasePoolPop + 161
3  libxpc.dylib            0x00007fff2022ac24 _xpc_connection_call_event_handler + 56
4  libxpc.dylib            0x00007fff20229a9b _xpc_connection_mach_event + 938
5  libdispatch.dylib       0x00007fff2033a886 _dispatch_client_callout4 + 9
6  libdispatch.dylib       0x00007fff20351aa0 _dispatch_mach_msg_invoke + 444
7  libdispatch.dylib       0x00007fff20340473 _dispatch_lane_serial_drain + 263
8  libdispatch.dylib       0x00007fff203525e2 _dispatch_mach_invoke + 484
9  libdispatch.dylib       0x00007fff20340473 _dispatch_lane_serial_drain + 263
10 libdispatch.dylib       0x00007fff203410c0 _dispatch_lane_invoke + 417
11 libdispatch.dylib       0x00007fff2034abed _dispatch_workloop_worker_thread + 811
12 libsystem_pthread.dylib 0x00007fff204e14c0 _pthread_wqthread + 314
13 libsystem_pthread.dylib 0x00007fff204e0493 start_wqthread + 15

That hints at a possible double release and the use of an autorelease pool. Once I removed the CFRelease, the driver stopped crashing and no memory leaks were happening on his Mac (while they returned on mine). So my question is - whose responsibility is it to release the incoming CFDataRef (and probably also CFStringRef) data in CoreAudio property setters? If that isn't certain, is it possible to detect whether my plugin is called under ARC or not (I can't find any APIs to query the autorelease pool stack - there's just push and pop)? PS. The machines I tested this on are an iMac (2017) and a MacBook Air (2020), both running macOS Monterey 12.5.1.
Posted Last updated
.
Post not yet marked as solved
4 Replies
1.8k Views
I am trying to render audio using AVSampleBufferAudioRenderer but there is no sound coming from my speakers and there is a repeated log message.

[AQ] 405: SSP::Render: CopySlice returned 1

I am creating a CMSampleBuffer from an AudioBufferList. This is the relevant code:

var sampleBuffer: CMSampleBuffer!

try runDarwin(CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                                   dataBuffer: nil,
                                   dataReady: false,
                                   makeDataReadyCallback: nil,
                                   refcon: nil,
                                   formatDescription: formatDescription,
                                   sampleCount: sampleCount,
                                   sampleTimingEntryCount: 1,
                                   sampleTimingArray: &timingInfo,
                                   sampleSizeEntryCount: sampleSizeEntryCount,
                                   sampleSizeArray: sampleSizeArray,
                                   sampleBufferOut: &sampleBuffer))

try runDarwin(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
                                                             blockBufferAllocator: kCFAllocatorDefault,
                                                             blockBufferMemoryAllocator: kCFAllocatorDefault,
                                                             flags: 0,
                                                             bufferList: audioBufferList.unsafePointer))

try runDarwin(CMSampleBufferSetDataReady(sampleBuffer))

I am pretty confident that my audio format description is correct because CMSampleBufferSetDataBufferFromAudioBufferList, which performs a laundry list of validations, returns no error. I tried to reverse-engineer the CopySlice function, but I'm lost without the parameter names.

int ScheduledSlicePlayer::CopySlice(long long, ScheduledSlicePlayer::XScheduledAudioSlice*, int, AudioBufferList&, int, int, bool)

Does anyone have any ideas on what's wrong? For the Apple engineers reading this, can you tell me the parameter names of the CopySlice function so that I can more easily reverse-engineer the function to see what the problem is?
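For comparison, a minimal playback sketch (an assumption about the surrounding setup, not code from the post; makeNextSampleBuffer() is a hypothetical stub standing in for the real source of ready sample buffers). With this renderer, silence is often simply a synchronizer that was never started:

import AVFoundation
import CoreMedia

// Hypothetical source of ready CMSampleBuffers; replace with the real producer.
func makeNextSampleBuffer() -> CMSampleBuffer? { return nil }

let renderer = AVSampleBufferAudioRenderer()
let synchronizer = AVSampleBufferRenderSynchronizer()
synchronizer.addRenderer(renderer)

let queue = DispatchQueue(label: "audio.enqueue")
renderer.requestMediaDataWhenReady(on: queue) {
    while renderer.isReadyForMoreMediaData, let buffer = makeNextSampleBuffer() {
        renderer.enqueue(buffer)
    }
}

// Nothing is audible until the synchronizer's rate becomes non-zero.
synchronizer.setRate(1.0, time: .zero)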
Posted Last updated
.
Post not yet marked as solved
2 Replies
1.3k Views
Hi everyone, I think I solved my problem of cracking and popping sounds. I have a MacBook Pro 16 M1 - 16 GB RAM (sorry in advance for my English: I'm French :-D).
1 - Everything was working fine with the sound when the MBP M1 was running the factory version of macOS. As soon as I updated to Monterey, BOOM! The cracking and popping sounds started. I think I tried everything described on the internet and nothing really solved it.
2 - I tried one last thing: starting the Mac without any extensions. The problems with the sound were suddenly gone. So I figured the problem could be an app running on my Mac.
3 - So I started my Mac in normal mode to manually deactivate a few things running in the background. I found that the app "Magnet" (a non-Apple app I use to organize my windows) could be a troublemaker, because the sound problem was suddenly not as intense as before.
4 - On that basis, I did an experiment with each piece of software I have: I play music from YouTube in Safari, launch an app and work with it for a bit. My conclusion: with software built for Apple silicon (marked "Universal" or "Apple" in Activity Monitor), there is no sound problem at all. If I open an app built for Intel (e.g. Guitar Pro), the crack/popping problem starts at the launch of the app and keeps getting worse and worse. I can reproduce the same thing with every app built for Intel. So my guess is: the problem appears when an Intel app is going through Rosetta (the piece of macOS that lets Intel apps run on Apple silicon Macs).
5 - About the Magnet app: the app is marked as "Universal". My guess is that part of the app is still largely based on Intel code (don't laugh, people, I'm not really an expert like you guys :-), so it's perhaps going through Rosetta to work on my M1 Mac. That would explain why the sound problem never stops while Magnet is open (Magnet works permanently in the background).
Everything seems OK since a few days ago. I will give an update if something new comes up. I hope it helps. Best from Paris
Posted
by Cocochat.
Last updated
.
Post not yet marked as solved
19 Replies
4.4k Views
I have a USB audio interface that is causing kernel traps and the audio output to "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below). The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload." I've added a short snippet from the Console logs around the time of the audio skip/dropout. The more complete logs are at this gist: https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105 I've also opened a Feedback Assistant ticket (FB9037528): https://feedbackassistant.apple.com/feedback/9037528 Does anyone know what could be causing this issue? Thanks for any help. Cheers, Flux aka Andy.

Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)

System Software Overview:
System Version: macOS 11.2.3 (20D91)
Kernel Version: Darwin 20.3.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: mycomputername
User Name: myusername
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled

USB interface: Denon DJ DS1

Snippet of Console logs:
error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
...
default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
  HostApplicationDisplayID = "com.apple.Music";
  cause = Unknown;
  deadline = 2615019;
  "input_device_source_list" = Unknown;
  "input_device_transport_list" = USB;
  "input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
  "io_buffer_size" = 512;
  "io_cycle" = 1;
  "is_prewarming" = 0;
  "is_recovering" = 0;
  "issue_type" = overload;
  lateness = "-535";
  "output_device_source_list" = Unknown;
  "output_device_transport_list" = USB;
  "output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
}: (null)
Posted
by djflux.
Last updated
.
Post not yet marked as solved
1 Reply
270 Views
I want to generate a sample with an AudioFilePlayer unit to play a 6-channel audio file. The issue here is that the AudioFilePlayer unit's callback shows stereo channels in the buffer, even though the file loaded into the file unit has 6 channels. I have tried setting the input and output format for the file unit to the file's stream format (6 channels), but the callback still shows stereo channels in the buffer. I have added my code snippet below.

OSStatus error = AudioFileOpenURL(audioFileURL, kAudioFileReadPermission, 0, &_audioInputFileId);
UInt32 dataSize = sizeof(AudioStreamBasicDescription);
AudioFileGetProperty(_audioInputFileId, kAudioFilePropertyDataFormat, &dataSize, &_fileStreamDescription);

Setting the input and output format for the unit:

UInt32 propertySize = sizeof(_fileStreamDescription);
AudioUnitScope scope = kAudioUnitScope_Input;
OSStatus err = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_StreamFormat, scope, 0, &_fileStreamDescription, propertySize);
OSStatus err = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &_fileStreamDescription, propertySize);

Setting the input and output format here also throws an error. I have added the callback to the file unit using the AudioUnitRenderNotify method, but when I check the buffer in the callback I can see it returning stereo channels.
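As a hedged alternative (my suggestion, not from the thread): if the v2 AudioFilePlayer unit isn't a hard requirement, AVAudioEngine keeps the file's channel count when the player node is connected with the file's own processing format. A sketch:

import AVFoundation

func playSixChannelFile(at url: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let file = try AVAudioFile(forReading: url)

    engine.attach(player)
    // Connecting with file.processingFormat keeps all 6 channels in the player's output.
    engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)

    try engine.start()
    player.scheduleFile(file, at: nil)
    player.play()
    return engine
}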
Posted Last updated
.
Post not yet marked as solved
0 Replies
359 Views
Hi, our CoreAudio server plugin provides the standard kAudioVolumeControlClassID, kAudioMuteControlClassID and kAudioSoloControlClassID, as well as kAudioDataSourceControlClassID. But it looks like controls can be created in a fairly generic way. Given the signal processing capabilities of our device, it could expose many more controls, but is there any application that is able to present such generic controls? Would Audio MIDI Setup.app or AU Lab be able to display them? Any DAW? Thanks, hagen
Posted
by hagen.
Last updated
.
Post marked as solved
1 Reply
522 Views
Summary: I've created an AUAudioUnit to host some third-party signal processing code and am running into an edge-case limitation where I can only process and supply output audio data (from the internalRenderBlock) if it's an exact multiple of a specific number of frames.

More detail: This third-party code ONLY works with exactly 10 ms of data at a time. For example, with 48 kHz audio, it only accepts 480 frames on each call to its processing function. If the AUAudioUnit's internalRenderBlock is called with 1024 as the frame count, I can use the pullInputBlock to get 480 frames, process them, get another 480 frames, and process those, but what should I then do with the remaining 64 frames?

Possible solutions foiled (see the sketch below):
a) There seems to be no way to indicate to the host that I have only consumed 960 frames and will only be supplying 960 frames of output. I thought perhaps the host would notice that the outputData ABL buffers hold fewer frames than the count passed into the internalRenderBlock and advance the timestamp only by that much next time around, but it does not. So all of the audio must be processed before the block returns, which I can only do if the block is asked for exactly a multiple of 10 ms of data.
b) I can't buffer up the "remainder" input and process it on the next internalRenderBlock cycle, because all of the output must be provided on return, as discussed in a).
c) As an alternative, I see no way for the unit to explicitly tell the host how many frames it can process at a time. maximumFramesToRender is the host telling the unit (not the reverse), and either way it's a maximum only, not a minimum as well.

What can I do?
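One common way out (an assumption on my part, not a solution given in the thread) is to accept one extra 10 ms block of latency and adapt the block size yourself: accumulate pulled input in a FIFO, run the third-party processor whenever a full 480-frame block is available, and always answer the render call from a pre-primed output FIFO. A rough sketch of the bookkeeping only; a real unit would use preallocated ring buffers, avoid allocation in the render path, and report the added latency via the unit's latency property:

final class FixedBlockAdapter {
    private let blockSize: Int
    private var inputFIFO: [Float] = []
    private var outputFIFO: [Float]
    private let process: ([Float]) -> [Float]   // the third-party 10 ms processor

    init(blockSize: Int, process: @escaping ([Float]) -> [Float]) {
        self.blockSize = blockSize
        self.process = process
        // Priming with one block of silence buys the latency that makes any frame count servable.
        self.outputFIFO = Array(repeating: 0, count: blockSize)
    }

    func render(input: [Float]) -> [Float] {
        inputFIFO.append(contentsOf: input)
        // Process every complete block that has accumulated so far.
        while inputFIFO.count >= blockSize {
            let block = Array(inputFIFO.prefix(blockSize))
            inputFIFO.removeFirst(blockSize)
            outputFIFO.append(contentsOf: process(block))
        }
        // Always return exactly as many frames as were requested.
        let out = Array(outputFIFO.prefix(input.count))
        outputFIFO.removeFirst(out.count)
        return out + Array(repeating: 0, count: input.count - out.count)
    }
}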
Posted
by swillits.
Last updated
.
Post not yet marked as solved
2 Replies
2.0k Views
Hello! I'm trying to allow hosts to save presets for my AUv3 extension. Currently I save presets as files within the extension itself, but I'd like to support state saving within hosts. My problem is that I can't really reduce my preset format down to parameters in the parameter tree. Can anyone point me in the right direction? I'm happy to clarify anything further.
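One direction worth checking (an assumption, not confirmed in the thread): hosts persist AUv3 state through the audio unit's fullState dictionary, which can carry arbitrary data that never appears in the parameter tree. A minimal sketch, where presetData and the dictionary key are hypothetical names:

import AudioToolbox

class MyAudioUnit: AUAudioUnit {
    // Hypothetical preset payload that cannot be expressed as parameters.
    var presetData = Data()

    override var fullState: [String: Any]? {
        get {
            var state = super.fullState ?? [:]
            state["com.example.presetData"] = presetData   // hypothetical key
            return state
        }
        set {
            super.fullState = newValue
            if let data = newValue?["com.example.presetData"] as? Data {
                presetData = data
            }
        }
    }
}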
Posted Last updated
.
Post not yet marked as solved
2 Replies
530 Views
I see in Crashlytics that a few users are getting this exception when connecting the inputNode to the mainMixerNode in AVAudioEngine:

Fatal Exception: com.apple.coreaudio.avfaudio required condition is false: format.sampleRate == hwFormat.sampleRate

Here is my code:

self.engine = AVAudioEngine()
let format = engine.inputNode.inputFormat(forBus: 0)
// main mixer node is connected to output node by default
engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)

I just want to understand how this error can occur and what the right fix is.
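A sketch of one defensive pattern (an assumption, not a confirmed fix): activate the audio session before reading the input format, and skip the connection while the hardware format is not yet valid, since a stale or zero-rate input format is one common way to trip this assertion:

import AVFoundation

func startEngine(_ engine: AVAudioEngine) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true)

    // Read the format only after the session is active, so it reflects the current hardware.
    let format = engine.inputNode.inputFormat(forBus: 0)
    guard format.sampleRate > 0, format.channelCount > 0 else {
        return // no usable input route yet; retry after a route-change notification
    }
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
    try engine.start()
}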
Posted Last updated
.
Post marked as solved
2 Replies
435 Views
I followed Apple's demo application to create an audio server plugin named NullAudio. However, since the code is written in C with some COM interfaces, it is hard for a Swift programmer. So I want to attach Xcode to the NullAudio plugin to take a look at the method calls. I googled and found that an audio server plugin can apparently be debugged this way: Note that BGMDevice is another audio server plugin they created. So I followed along, and Xcode attaches to the coreaudiod process. But any breakpoint in the code just goes grey. How can I make the breakpoints active?
Posted Last updated
.
Post not yet marked as solved
0 Replies
372 Views
I'm developing an AudioServerPlugin, almost the same as the AudioServerPlugin sample (NullAudio). What I'm trying to do is keep my AudioServerPlugin hidden by implementing the kAudioDevicePropertyIsHidden property to return 1, and have my external application set the kAudioDevicePropertyIsHidden property to 0. This would allow me to control the show/hide status of my AudioServerPlugIn from my application. Here is what I've done:

static OSStatus MyPlugIn_IsDevicePropertySettable(AudioServerPlugInDriverRef inDriver, AudioObjectID inObjectID, pid_t inClientProcessID, const AudioObjectPropertyAddress* inAddress, Boolean* outIsSettable)
{
    switch(inAddress->mSelector)
    {
        ...
        case kAudioDevicePropertyIsHidden:
            *outIsSettable = true;
            break;
    }
    ...
}

And from my application, I retrieve my plug-in's device ID by using kAudioHardwarePropertyTranslateUIDToDevice and call AudioObjectIsPropertySettable with that device ID.

CFStringRef devUID = CFSTR("MyPlugInDevice_UID");
AudioObjectPropertyAddress pa;
pa.mSelector = kAudioHardwarePropertyTranslateUIDToDevice;
pa.mScope = kAudioObjectPropertyScopeGlobal;
pa.mElement = kAudioObjectPropertyElementMain;
AudioDeviceID devID;
UInt32 size = sizeof(AudioDeviceID);
if(AudioObjectGetPropertyData(kAudioObjectSystemObject, &pa, sizeof(CFStringRef), &devUID, &size, &devID) == noErr)
{
    pa.mSelector = kAudioDevicePropertyIsHidden;
    pa.mScope = kAudioObjectPropertyScopeGlobal;
    pa.mElement = kAudioObjectPropertyElementMain;
    Boolean isSettable = false;
    status = AudioObjectIsPropertySettable(devID, &pa, &isSettable);
}

But it keeps returning 0 in isSettable. On the other hand, if I make my plug-in return false for every call of IsSettable, it returns true when the property is kAudioDevicePropertyIsNominalSampleRate. It seems there is some kind of override of my implementation, but I can't find any documentation of it.
Posted
by rickysung.
Last updated
.
Post not yet marked as solved
1 Reply
911 Views
My iOS app using CoreMIDI is able to receive MIDI messages from various keyboards, but I have just received a note from a customer notifying me that my app does not appear to receive MIDI messages from his Casio CT-S1 keyboard. I am stymied on how to diagnose and fix this issue. Clearly there must be something amiss in my CoreMIDI integration. I thought perhaps someone else might have encountered a similar odd situation such as this. If it helps, here is my CoreMIDI integration code. Thanks! Regards, Brad
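A small diagnostic sketch that may help narrow this down (my suggestion, not code from the post): list every source CoreMIDI can see, to confirm whether the CT-S1 is enumerated at all before digging into message handling:

import CoreMIDI

func dumpMIDISources() {
    for index in 0..<MIDIGetNumberOfSources() {
        let source = MIDIGetSource(index)
        var name: Unmanaged<CFString>?
        if MIDIObjectGetStringProperty(source, kMIDIPropertyDisplayName, &name) == noErr,
           let name = name?.takeRetainedValue() {
            // If the Casio never appears here, the problem is enumeration, not parsing.
            print("source \(index): \(name)")
        }
    }
}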
Posted
by bradhowes.
Last updated
.