Core Audio


Interact with the audio hardware of a device using Core Audio.


Posts under Core Audio tag

96 results found
Post not yet marked as solved
264 Views

BTLE CoreMIDI: automatically reconnect

My app successfully receives MIDI from BT MIDI devices when they are enabled through the CABTMIDILocalPeripheralViewController pattern. However, I would like the app to automatically reconnect to these BT MIDI devices in subsequent sessions whenever they are available. Is this possible without requiring the user to go through the connection view controller every time?
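A minimal sketch of one common approach (not an answer from Apple, and it assumes the Bluetooth MIDI device is already advertising and visible to CoreMIDI): persist the endpoint's kMIDIPropertyUniqueID after the first connection, then on later launches scan the current sources for that ID and reconnect your input port. The function and variable names here are illustrative.

```swift
import CoreMIDI

// Sketch only: `inputPort` and `savedUniqueID` are assumed to come from the app's
// existing MIDI setup and from a value persisted in an earlier session.
func reconnectIfAvailable(inputPort: MIDIPortRef, savedUniqueID: MIDIUniqueID) {
    for i in 0..<MIDIGetNumberOfSources() {
        let source = MIDIGetSource(i)
        var uniqueID: MIDIUniqueID = 0
        guard MIDIObjectGetIntegerProperty(source, kMIDIPropertyUniqueID, &uniqueID) == noErr,
              uniqueID == savedUniqueID else { continue }
        // Reconnect without showing the connection view controller again.
        MIDIPortConnectSource(inputPort, source, nil)
    }
}
```

Note that this only re-attaches an endpoint CoreMIDI can already see; it does not by itself re-establish the Bluetooth link.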
Asked by Brambos. Last updated.
Post marked as solved
38 Views

Audio Unit v2: FATAL ERROR:

Hey folks, I've been able to build and run the 'StarterAudioUnitExample' project provided by Apple in Xcode 12.5.1, and run and load it in Logic 10.6.3 on macOS 11.5.2. However, when trying to recreate the same plugin from a blank project, I'm having trouble getting auval or Logic to actually instantiate and load the component. See the auval output below:

validating Audio Unit Tremelo AUv2 by DAVE:
AU Validation Tool
Version: 1.8.0
Copyright 2003-2019, Apple Inc. All Rights Reserved.
Specify -h (-help) for command options
VALIDATING AUDIO UNIT: 'aufx' - 'trem' - 'DAVE'
Manufacturer String: DAVE
AudioUnit Name: Tremelo AUv2
Component Version: 1.0.0 (0x10000)
PASS
TESTING OPEN TIMES:
COLD:
FATAL ERROR: OpenAComponent: result: -1,0xFFFFFFFF
validation result: couldn't be opened

Does anyone, hopefully someone from Apple, know what the error code <FATAL ERROR: OpenAComponent: result: -1,0xFFFFFFFF> actually refers to? I've been Googling for hours, and nothing I have found has worked for me so far. Also, here is the info.plist file too: Anyone that could help steer me in the right direction? Thanks, Dave
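Not an answer to the -1 code itself, but a hedged way to reproduce the failure outside auval and Logic: look the component up by its description and try to open it directly. The four-char codes below are taken from the auval output above; everything else is illustrative.

```swift
import AudioToolbox

// 'aufx' / 'trem' / 'DAVE' as reported by auval; a failing open here should
// surface the same error auval reports from OpenAComponent.
var desc = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,   // 'aufx'
    componentSubType: 0x7472_656D,          // 'trem'
    componentManufacturer: 0x4441_5645,     // 'DAVE'
    componentFlags: 0,
    componentFlagsMask: 0
)

if let component = AudioComponentFindNext(nil, &desc) {
    var instance: AudioComponentInstance?
    let status = AudioComponentInstanceNew(component, &instance)
    print("open status:", status)
    if let instance = instance { AudioComponentInstanceDispose(instance) }
} else {
    print("component not registered at all")
}
```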
Asked. Last updated.
Post not yet marked as solved
6k Views

Developer APIs for AirTag integration in our apps and games?

I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network? The contexts in which I am most excited about using AirTags are:
• Gaming
• Health / Fitness-focused apps
• Accessibility features
• Musical and other creative interactions within apps
I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here. Alexander
Asked by alexander. Last updated.
Post marked as solved
341 Views

RemoteIO Glitch With Sound Recognition Feature

My app is using RemoteIO to record audio. It doesn't do any playback. RemoteIO seems to be broadly compatible with the new Sound Recognition feature in iOS 14, but I'm seeing a glitch when Sound Recognition is first enabled. If my app is started and I initialise RemoteIO, and then turn on Sound Recognition (say via Control Centre), the RemoteIO input callback is not called thereafter, until I tear down the audio unit and set it back up. So something like the following:
• Launch app.
• RemoteIO is initialised and working; can record.
• Turn on Sound Recognition via Settings or the Control Centre widget.
• Start recording with the already-set-up RemoteIO.
• Recording callback is never called again.
• Though no input callbacks are seen, kAudioOutputUnitProperty_IsRunning is reported as true, so the audio unit thinks it is active.
• Tear down the audio unit.
• Set up the audio unit again.
• Recording works.
• Buffer size is changed, reflecting some effect of the Sound Recognition feature on the audio session.
I also noticed that when Sound Recognition is enabled, I see several (usually 3) AVAudioSession.routeChangeNotifications in quick succession. When Sound Recognition is disabled while RemoteIO is set up, I don't see this problem. I'm allocating my own buffers, so it's not a problem with their size. What could be going on here? Am I not handling a route change properly? There doesn't seem to be a reliable sequence of events I can catch to know when to reset the audio unit. The only fix I've found here is to hack in a timer that checks for callback activity shortly after starting recording, and resets the audio unit if no callback activity is seen. Better than nothing, but not super reliable.
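For what it's worth, a sketch of a timer-free variant of the workaround described above, assuming the app already has its own RemoteIO tear-down/set-up code (the commented-out calls are placeholders): watch for route changes and rebuild the unit once the burst of notifications settles.

```swift
import AVFoundation

// Sketch only: debounce the several routeChangeNotifications that arrive when
// Sound Recognition is toggled, then rebuild the RemoteIO unit once.
var rebuildWorkItem: DispatchWorkItem?

NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { _ in
    rebuildWorkItem?.cancel()
    let work = DispatchWorkItem {
        // Hypothetical helpers standing in for the app's own RemoteIO code:
        // tearDownAudioUnit()
        // setUpAudioUnit()
    }
    rebuildWorkItem = work
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.5, execute: work)
}
```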
Asked. Last updated.
Post not yet marked as solved
1.7k Views

Writing to disk from CoreAudio HAL Plugin

I'm writing an audio HAL plugin to send the audio over the network to a device. I can get audio and stream it over a socket just fine, but I wanted to log debug messages to a file rather than to syslog. However, when I try to open a file from within the HAL plugin, I get "Operation not permitted". I tried writing to /tmp as well as to the temporary directory returned by NSTemporaryDirectory(). None of these worked. It seems that the HAL plugin is sandboxed without disk access, and I don't know how to grant those permissions, since the plugin runs inside the coreaudiod process. Does anyone have any experience with writing to disk from an audio HAL plugin?
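One commonly suggested workaround (a sketch under assumptions about your setup, not a fix for the sandbox itself) is to route debug output through unified logging instead of opening files, since coreaudiod's sandbox blocks ad-hoc file writes. Shown in Swift here; the same idea is available from C/C++ via os_log_create. The subsystem string is illustrative.

```swift
import os

// Log from inside the HAL plugin under a custom subsystem...
let log = OSLog(subsystem: "com.example.networkhal", category: "streaming")
os_log("queued %{public}d bytes for the network device", log: log, type: .debug, 4096)

// ...then read it back on the command line, for example:
//   log stream --predicate 'subsystem == "com.example.networkhal"'
```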
Asked by kostyan5. Last updated.
Post marked as solved
112 Views

Audio Unit v3 Setup

I have just begun to start building plugins for Logic using the AUv3 format. After a few teething problems, to say the least, I have a basic plug-in working (integrated with SwiftUI, which is handy), but the install and validation process is still buggy and bugging me! Does the stand-alone app which Xcode generates to create the plugin always have to be running separately from Logic for the AUv3 to be available? Is there no way to have it as a permanently available plugin without running that? If anyone has any links to a decent tutorial, please share; there are very few I can find on YouTube or anywhere else, and the Apple tutorials and examples aren't great.
Asked by Waterboy. Last updated.
Post not yet marked as solved
2.1k Views

AudioOutputUnitStart returns 1852797029

We are using AudioUnit and AUGraph to provide a recording feature for millions of users. For a long time, we have been receiving user feedback about recording failures. Most of the user logs show that AudioOutputUnitStart returns 1852797029 (kAudioCodecIllegalOperationError). It seems that once this error happens, AudioOutputUnitStart will always return this error code until the device is rebooted, and sometimes even rebooting won't do the trick. Does anyone experience this error code or know the possible cause of it?
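1852797029 is a four-character OSStatus ('nope', matching kAudioCodecIllegalOperationError as noted above). A small helper like the sketch below can make codes like this readable when they show up in user logs:

```swift
import Foundation

// Decode an OSStatus into its four-character code, if it is one.
func fourCharCode(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [UInt8((n >> 24) & 0xFF), UInt8((n >> 16) & 0xFF),
                 UInt8((n >> 8) & 0xFF), UInt8(n & 0xFF)]
    guard bytes.allSatisfy({ (0x20...0x7E).contains($0) }),
          let code = String(bytes: bytes, encoding: .ascii) else {
        return "\(status)"
    }
    return "'\(code)'"
}

print(fourCharCode(1852797029))   // 'nope'
```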
Asked by andysheng. Last updated.
Post marked as solved
1.3k Views

Recreating AUParameterTree without destroying the tree object? Is it possible?

I have a situation where at any one time my AudioUnit can be represented by only a few parameters or literally many hundreds. This dynamic situation is under the control of the user, and the maximum number of parameters and their hierarchy cannot be predicted in advance (at least not accurately). When the parameters are to change, I am setting the parameterTree property by creating a new tree with the child nodes and posting KVC notifications:

```objc
// … create childGroups …
[self willChangeValueForKey:@"parameterTree"];
self.parameterTree = [AUParameterTree createTreeWithChildren:childGroups];
[self didChangeValueForKey:@"parameterTree"];
```

Most of the app's user interface and AudioUnit is coded in Swift, the engine is coded in C/C++, with an Objective-C AUAudioUnit class that acts as the go-between, hence the above. However, there is a popular host app that crashes when I do this, and it looks like the host is hanging on to the parameterTree object that it's passed originally when the AU first launches, but never updates it even after the KVC notifications are posted. So after that long explanation… Am I doing this correctly? Or is there a solution that can create and rebuild a tree without making a new AUParameterTree object? If I can do that, the host in question may not crash (although it might anyway because all the parameters have changed). I have posted a code example to the host developer but sadly got a response which gave me the impression he was not prepared to work on a fix. Thanks!
Asked. Last updated.
Post marked as solved
2.2k Views

How to specify bit rate when writing with AVAudioFile

I can currently write, using AVAudioFile, to any of the file formats specified by Core Audio. It can create files in all formats (except one, see below) that can be read into iTunes, QuickTime and other apps and played back. However, some formats appear to be ignoring values in the AVAudioFile settings dictionary. e.g.:
• An MP4 or AAC file will save and write successfully at any sample rate, but any bit rates I attempt to specify are ignored.
• WAV files saved with floating-point data are always converted to Int32 even though I specify float, and even though the PCM buffers I'm using as input and output for sample rate conversion are float on input and output. So the AVAudioFile is taking Float input but converting it to Int for some reason I can't fathom.
• The only crash/exception/failure I see is if I attempt to create an AVAudioFile as WAV/64-bit float… bang, AVAudioFile isn't having that one!
The technique I'm using is:
• Create an AVAudioFile for writing with a settings dictionary.
• Get the processing and file format from the AVAudioFile.
• The client format is always 32-bit Float; the AVAudioFile generally reports its processing format as some other word-sized Float format at the sample rate and size I've specified in the fileFormat.
• Create a converter to convert from the client format to the processing format.
• Process input data through the converter to the file using converter.convert(to:error:withInputFrom:).
So this works… sort of. The files (be they WAV, AIFF, FLAC, MP3, AAC, MP4 etc.) are written out and will play back just fine… but… If the processing word format is Float, in a PCM file like WAV, the AVAudioFile will always report its fileFormat as Int32. And if the file is a compressed format such as MP4/AAC, any bit rates I attempt to specify are just ignored, but the sample rate appears to be respected, as if the converters/encoders just choose a bit rate based on the sample rate. So after all that waffle, I've missed something that's probably meant to be obvious, so my questions are:
• For LPCM float formats, why is Int32 data written even though the AVAudioFile settings dictionary has AVLinearPCMIsFloatKey set to true?
• How do I arrange the setup so that I can specify the bit rate for compressed audio?
The only buffers I actually create are both PCM: the client output buffer, and the AVAudioConverter/AVAudioFile processing buffer. I've attempted using AVAudioCompressedBuffer but haven't had any luck. I hope someone has some clues, because I've spent more hours on this than anyone should ever need to! For my Christmas present I'd like Core Audio to be fully and comprehensively documented, please!
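For reference, a hedged sketch of the settings dictionaries under discussion. Whether AVAudioFile honours AVEncoderBitRateKey or AVLinearPCMIsFloatKey for a given container is exactly the open question above, so this shows the request rather than a guarantee; the output path is illustrative.

```swift
import AVFoundation

// AAC in an .m4a container, asking for an explicit bit rate.
let aacSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100.0,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 192_000
]

// 32-bit float WAV, asking for float PCM rather than Int32.
let wavSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 48_000.0,
    AVNumberOfChannelsKey: 2,
    AVLinearPCMBitDepthKey: 32,
    AVLinearPCMIsFloatKey: true,
    AVLinearPCMIsNonInterleaved: false
]

do {
    let url = URL(fileURLWithPath: "/tmp/out.m4a")   // illustrative path
    let file = try AVAudioFile(forWriting: url, settings: aacSettings)
    // Compare what was asked for with what the file actually reports.
    print(file.fileFormat, file.processingFormat)
} catch {
    print("could not create file: \(error)")
}
```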
Asked. Last updated.
Post not yet marked as solved
153 Views

MIDI 2 (UMP) equivalent for AUScheduleMIDIEventBlock?

We have an audio app that utilises a custom internal audio unit attached to AVAudioEngine to do DSP processing. Currently, MIDI arrives at the input port for the app (created with MIDIDestinationCreateWithProtocol). For MIDI 1 we use AUScheduleMIDIEventBlock to pass the events from the MIDI input to the audio unit. All works well for MIDI 1. So while we ponder on it ourselves, we have some questions to throw into the ether...
a) For MIDI 2 there appears to be no equivalent method to AUScheduleMIDIEventBlock to send UMP to an audio unit?
b) We initially chose the audio unit approach because MIDI and audio processing is all handled neatly, but is this approach essentially redundant? Would it be better to put a tap somewhere on the AVAudioEngine and pass MIDI 2 events directly from the input to the tap? I fear in that case synchronising MIDI to audio nicely would be a pain.
c) Perhaps we should wait until Apple implements a UMP version of AUScheduleMIDIEventBlock?
Asked. Last updated.
Post not yet marked as solved
613 Views

What causes "issue_type = overload" in coreaudiod with USB audio interface?

I have a USB audio interface that is causing kernel traps and the audio output to "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below). The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload." I've added a short snippet from the Console logs around the time of the audio skip/dropout. The more complete logs are at this gist: https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105 I've also opened a Feedback Assistant ticket (FB9037528): https://feedbackassistant.apple.com/feedback/9037528 Does anyone know what could be causing this issue? Thanks for any help. Cheers, Flux aka Andy.

Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.4 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 64 GB
System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)

System Software Overview:
System Version: macOS 11.2.3 (20D91)
Kernel Version: Darwin 20.3.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Computer Name: mycomputername
User Name: myusername
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled

USB interface: Denon DJ DS1

Snippet of Console logs:
error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
...
default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
  HostApplicationDisplayID = "com.apple.Music";
  cause = Unknown;
  deadline = 2615019;
  "input_device_source_list" = Unknown;
  "input_device_transport_list" = USB;
  "input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
  "io_buffer_size" = 512;
  "io_cycle" = 1;
  "is_prewarming" = 0;
  "is_recovering" = 0;
  "issue_type" = overload;
  lateness = "-535";
  "output_device_source_list" = Unknown;
  "output_device_transport_list" = USB;
  "output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
}: (null)
Asked by djflux. Last updated.
Post marked as solved
4.3k Views

CMSampleBufferSetDataBufferFromAudioBufferList returning -12731

I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and stuff) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node as RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is properly set up, as I can hear things working. I am setting the subType to GenericOutput though, so I can do the rendering myself and get back the adjusted audio. I am reading in the audio and I pass the CMSampleBufferRef off to copyBuffer. This puts the audio into a circular buffer that will be read in later.

```objc
- (void)copyBuffer:(CMSampleBufferRef)buf {
    if (_readyForMoreBytes == NO) {
        return;
    }

    AudioBufferList abl;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);

    if (!bytesCopied) {
        _readyForMoreBytes = NO;

        if (size > kRescueBufferSize) {
            NSLog(@"Unable to allocate enought space for rescue buffer, dropping audio frame");
        } else {
            if (rescueBuffer == nil) {
                rescueBuffer = malloc(kRescueBufferSize);
            }
            rescueBufferSize = size;
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
        }
    }

    CFRelease(blockBuffer);
    if (!self.hasBuffer && bytesCopied > 0) {
        self.hasBuffer = YES;
    }
}
```

Next I call processOutput. This will do a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList passed in. Like I said before, if the output is set as RemoteIO this will cause the audio to correctly be played on the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).

```objc
- (CMSampleBufferRef)processOutput {
    if (self.offline == NO) {
        return NULL;
    }

    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp inTimeStamp;
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    UInt32 busNumber = 0;
    UInt32 numberFrames = 512;
    inTimeStamp.mSampleTime = 0;
    UInt32 channelCount = 2;

    AudioBufferList *bufferList = (AudioBufferList *)malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (channelCount - 1));
    bufferList->mNumberBuffers = channelCount;
    for (int j = 0; j < channelCount; j++) {
        AudioBuffer buffer = {0};
        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = numberFrames * sizeof(SInt32);
        buffer.mData = calloc(numberFrames, sizeof(SInt32));
        bufferList->mBuffers[j] = buffer;
    }

    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");

    CMSampleBufferRef sampleBufferRef = NULL;
    CMFormatDescriptionRef format = NULL;
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
    AudioStreamBasicDescription audioFormat = self.audioFormat;
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");

    return sampleBufferRef;
}

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;

    if (p.hasBuffer) {
        int32_t availableBytes;
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);

        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;
        int bytesToRead = MIN(availableBytes, requestedBytesSize);
        memcpy(outSample, bufferTail, bytesToRead);
        TPCircularBufferConsume([p getBuffer], bytesToRead);

        if (availableBytes <= requestedBytesSize * 2) {
            [p setReadyForMoreBytes];
        }
        if (availableBytes <= requestedBytesSize) {
            p.hasBuffer = NO;
        }
    }
    return noErr;
}
```

The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger):

CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
  invalid = NO
  dataReady = NO
  makeDataReadyCallback = 0x0
  makeDataReadyRefcon = 0x0
  formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
    mediaType:'soun'
    mediaSubType:'lpcm'
    mediaSpecific: {
      ASBD: {
        mSampleRate: 44100.000000
        mFormatID: 'lpcm'
        mFormatFlags: 0xc2c
        mBytesPerPacket: 2
        mFramesPerPacket: 1
        mBytesPerFrame: 2
        mChannelsPerFrame: 1
        mBitsPerChannel: 16 }
      cookie: {(null)}
      ACL: {(null)}
    }
    extensions: {(null)}
  }
  sbufToTrackReadiness = 0x0
  numSamples = 512
  sampleTimingArray[1] = {
    {PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
  }
  dataBuffer = 0x0

The buffer list looks like this:

Printing description of bufferList: (AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers: (UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers: (AudioBuffer [1]) mBuffers = {
  [0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}

Really at a loss here, hoping someone can help. Thanks. In case it matters, I am debugging this in the iOS 8.3 simulator and the audio is coming from an mp4 that I shot on my iPhone 6 and then saved to my laptop.
Asked by Odyth. Last updated.
Post not yet marked as solved
1k Views

AVAudioSession activation failed after emergency alert

Hi, I'm developing a VoIP application using CallKit. The AVAudioSession activated by my application has been muted after an emergency alert. This issue happens on Google Duo / Facebook Messenger / Zoom also. The issue is 100% reproducible running on iOS 13.3 ~ 13.5.1. Two kinds of audio interruption, "AVAudioSessionInterruptionTypeBegan" and "AVAudioSessionInterruptionTypeEnded", are posted at the exact same time. My application tries to activate the audio session after the "AVAudioSessionInterruptionTypeEnded" event, but it fails every time.
**** Error Logs ****
Error Domain=NSOSStatusErrorDomain Code=1701737535 "(null)" --> AVAudioSessionErrorCodeMissingEntitlement
Please let me know if you can give me some tip or hint to overcome this issue. Best Regards.
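For context, a minimal sketch of the retry described above (assuming a CallKit app whose session is otherwise configured elsewhere): reactivate on the interruption-ended event and log the failure, which in this report surfaces as AVAudioSessionErrorCodeMissingEntitlement.

```swift
import AVFoundation

// Sketch only: retry activation when the interruption posted after the alert ends.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    guard let raw = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
          AVAudioSession.InterruptionType(rawValue: raw) == .ended else { return }
    do {
        try AVAudioSession.sharedInstance().setActive(true, options: [])
    } catch {
        // This is where the Code=1701737535 (missing entitlement) error shows up.
        print("setActive failed: \(error)")
    }
}
```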
Asked by KI CHEOL. Last updated.
Post not yet marked as solved
194 Views

Issue finding available AUv3 plugins on iOS

Hi, I'm attempting to call AudioComponentFindNext() from an iOS application (built with JUCE) to get a list of all available plugins. I've got an issue whereby the function is only returning the generic system plugins and missing any of the third-party installed plugins. The issue currently shows up when the function is called from within another AUv3 plugin, though I have also seen it from within a normal iOS app (run on an iPad Air 4); at the moment it is working fine from an iOS app. I've tried setting microphone access and inter-app audio capabilities, as I saw that suggested in similar forum posts, though it has not solved my problem. Any advice would be very appreciated. Thanks
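As a point of comparison (not necessarily a fix for visibility from inside another extension), AVAudioUnitComponentManager can be asked for components matching a wildcard description; if third-party AUv3s are registered on the device, they should appear here when queried from a full app.

```swift
import AVFoundation

// Wildcard description: zero subtype/manufacturer matches every effect component.
let anyEffect = AudioComponentDescription(
    componentType: kAudioUnitType_Effect,
    componentSubType: 0,
    componentManufacturer: 0,
    componentFlags: 0,
    componentFlagsMask: 0
)

let found = AVAudioUnitComponentManager.shared().components(matching: anyEffect)
for component in found {
    print(component.manufacturerName, component.name, component.versionString)
}
```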
Asked by Wlpjam. Last updated.
Post not yet marked as solved
289 Views

How to create an AUAudioUnitBusArray with variable number of buses

In the documentation for AUAudioUnitBusArray, there is this passage: "Some audio units (e.g. mixers) support variable numbers of busses, via subclassing." I tried to implement this by subclassing AUAudioUnitBusArray, creating my own internal array to store the buses, and overriding isCountChangeable to return true and setBusCount to increase the number of buses if the count is less than the current count. However, I don't think this will work, because AUAudioUnitBus has several properties that I can't set, such as ownerAudioUnit and index. I would also have to change all the observer functions like addObserver(toAllBusses:forKeyPath:options:context:), which seems overkill for a class that is designed for subclassing. I know about replaceBusses(busArray:), but wouldn't that override the current buses in the bus array, since it's copying them?
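A minimal sketch of the replaceBusses(busArray:) route mentioned at the end, written as an extension so it compiles standalone but intended to be called from your own AUAudioUnit subclass (so inputBusses is the unit's own bus array). The format and count here are illustrative, and whether the copied buses behave as needed is the open question; it does avoid subclassing AUAudioUnitBusArray, though.

```swift
import AVFoundation

extension AUAudioUnit {
    // Sketch: rebuild the input bus array with `count` stereo buses.
    // Meant to be invoked from the unit's own implementation.
    func rebuildInputBusses(count: Int) throws {
        guard let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2) else { return }
        let busses = try (0..<count).map { _ in try AUAudioUnitBus(format: format) }
        // Swap the whole set in one go; ownerAudioUnit/index are managed by the framework.
        inputBusses.replaceBusses(busArray: busses)
    }
}
```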
Asked. Last updated.