Core Audio


Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

71 Posts
Post not yet marked as solved
1 Reply
335 Views
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVPlayerNode. The player node's output format needs to be something that the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports. So the following:

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];
[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be some library way of doing this; otherwise each developer has to redefine it for each platform the code runs on (macOS, iOS, etc.) and keep it updated when it changes. I could not find any constant or function that can produce such a format, ASBD, or settings. The smartest approach I could think of, which does not work:

AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                   2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];

Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
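One possible answer, hedged: AVFoundation never uses the word "canonical", but AVAudioFormat's "standard" initializer produces the deinterleaved float32 format that AVAudioEngine uses internally, which appears to be the modern replacement. A minimal sketch (assuming `format` is the synthesizer's format from the code above):

```objc
// The "standard" format is deinterleaved float32 at the given sample rate —
// AVFoundation's apparent successor to the deprecated canonical format.
AVAudioFormat *standard =
    [[AVAudioFormat alloc] initStandardFormatWithSampleRate:[AVAudioSession sharedInstance].sampleRate
                                                   channels:2];
AVAudioConverter *converter = [[AVAudioConverter alloc] initFromFormat:format
                                                              toFormat:standard];
```

Connecting the player node to the mixer with `standard` (instead of the synthesizer's format) should then avoid the -10868 (kAudioUnitErr_FormatNotSupported) exception.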
Post not yet marked as solved
0 Replies
322 Views
The GarageBand app can import both MIDI and recorded audio files into a single player. My app has the same feature, but I don't know how to implement it. I have tried AVAudioSequencer, but it can only load and play MIDI files. I have tried AVPlayer and AVPlayerItem, but they don't seem to load MIDI files. So how can I combine a MIDI file and an audio file into a single AVPlayerItem, or anything else, to play?
Post not yet marked as solved
0 Replies
232 Views
Hi, to be able to receive IOServiceAddMatchingNotification we need to attach to an appropriate CFRunLoop/IONotificationPort. To avoid race conditions, the matching notification would ideally be serialized with the Core Audio notifications/callbacks. How can this be achieved? Attaching it to the run loop returned by CFRunLoopGetCurrent() does not yield any notifications at all, and attaching to CFRunLoopGetMain() leads to notifications asynchronous to the Core Audio callbacks. There is a set of deprecated AudioHardwareAdd/RemoveRunLoopSource() functions, but apart from being deprecated, at least on Big Sur on Apple Silicon they do not lead to any notifications either. So, how is this supposed to be implemented? Do we really need to introduce locks? Also on the process calls? Wasn't it the purpose of run loops to manage exactly these kinds of situations? And more importantly: where is the documentation? Thanks for any hints, all the best, hagen.
Post not yet marked as solved
0 Replies
277 Views
I know the VoiceProcessingIO audio unit will create an aggregate audio device. But I get the error kAudioUnitErr_InvalidProperty (-10789) when getting the kAudioOutputUnitProperty_OSWorkgroup property on recent macOS Monterey 12.2.1 or Big Sur 11.6.4.

os_workgroup_t workgroup = NULL;
UInt32 sSize = sizeof(os_workgroup_t);
OSStatus sStatus = AudioUnitGetProperty(mAudioUnit, kAudioOutputUnitProperty_OSWorkgroup,
                                        kAudioUnitScope_Global, 1, &workgroup, &sSize);
if (sStatus != noErr) {
    NSLog(@"Error %d", sStatus);
}

The same code works fine on iOS 15.3.1, but not on macOS. Does anyone have a hint for resolving this issue?
Post not yet marked as solved
0 Replies
162 Views
I use ffplay to play a video, and the following error happens:

SDL_OpenAudio (2 channels, 48000 Hz): CoreAudio error (AudioQueueStart): -66680

When I restart the Mac, it can play the video successfully, but only for a while... and then the error appears again.
Post not yet marked as solved
0 Replies
244 Views
How can you add a live audio player to an app in Xcode where the user has an interactive UI to control the audio, and playback keeps going when they exit the app or turn off their device's screen? Is there a framework or API that will work for this? Thanks! Really need help with this…. 🤩 I have looked everywhere and haven't found something that works….
Post not yet marked as solved
0 Replies
190 Views
Since we have to encode/decode the audio stream to/from our audio device anyway, and we are using NEON SIMD to do so, we could just convert it into a stream of floats on the fly. Since floats are the natural CoreAudio data format, we can probably avoid an additional int-float/float-int conversion by CoreAudio this way. Does this make sense? Thanks, hagen
Post not yet marked as solved
0 Replies
244 Views
AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_VoiceProcessingIO, kAudioUnitManufacturer_Apple, 0, 0};
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
OSStatus error = AudioComponentInstanceNew(comp, &myAudioUnit);

In a special case the returned error value is -1. I searched https://www.osstatus.com/ but didn't find relevant info. My question is: what is the meaning of -1 in this case? Is myAudioUnit a nullptr at that point?
Post marked as solved
1 Reply
199 Views
I'm trying to figure out how to set the volume of a CoreAudio AudioUnit. I found the parameter kHALOutputParam_Volume, but I can't find anything about it. I called AudioUnitGetPropertyInfo, and that told me the parameter is 4 bytes long and writable. How can I find out whether it is an Int32, UInt32, Float32, or some other type, and what the acceptable values are and what they mean? Using AudioUnitGetProperty I read it as either Int32 (512) or Float32 (7.17e-43). Is there any documentation on this and other parameters?
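For what it's worth (hedged, from my reading of the AudioToolbox headers): kHALOutputParam_Volume is an AudioUnit *parameter*, not a property, and AudioUnit parameters are uniformly Float32 (AudioUnitParameterValue). Reading it through AudioUnitGetProperty reinterprets the raw bits, which would explain the nonsensical 7.17e-43. A sketch of reading and setting it through the parameter API (assuming `myUnit` is the HAL output unit; the usable range should be 0.0–1.0, but verify with AudioUnitGetParameterInfo):

```swift
import AudioToolbox

var volume: AudioUnitParameterValue = 0  // AudioUnitParameterValue is Float32
var err = AudioUnitGetParameter(myUnit, kHALOutputParam_Volume,
                                kAudioUnitScope_Global, 0, &volume)

// Set to half volume; the final argument is a buffer offset in frames (0 = now).
err = AudioUnitSetParameter(myUnit, kHALOutputParam_Volume,
                            kAudioUnitScope_Global, 0, 0.5, 0)
```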
Post not yet marked as solved
1 Reply
283 Views
I'm writing a macOS audio unit hosting app using the AVAudioUnit and AUAudioUnit APIs. I'm trying to use the NSView cacheDisplay(in:to:) function to capture an image of a plugin's view:

func viewToImage(viewToCapture: NSView) -> NSImage? {
    var image: NSImage? = nil
    if let rep = viewToCapture.bitmapImageRepForCachingDisplay(in: viewToCapture.bounds) {
        viewToCapture.cacheDisplay(in: viewToCapture.bounds, to: rep)
        image = NSImage(size: viewToCapture.bounds.size)
        image!.addRepresentation(rep)
    }
    return image
}

This works OK when a plugin is instantiated using the .loadInProcess option. If the plugin is instantiated using the .loadOutOfProcess option, the resulting bitmapImageRep is blank. I'd much rather be loading plugins out-of-process for the enhanced stability. Is there any trick I'm missing to be able to capture the contents of the NSView from an out-of-process audio unit?
Post not yet marked as solved
0 Replies
184 Views
I need to record 2 stereo AVCaptureDevices into 1 audio track. I can successfully create the aggregate device using AudioHardwareCreateAggregateDevice, but when I record from that device, the resulting audio track has quadraphonic audio. The problem with this is that some players, such as VLC, won't play all the tracks. So I tried forcing the file writer to use 2 channels with a stereo layout (see code below). This doesn't quite work, because all 4 of the channels are mapped to both of the stereo channels; they are not actually stereo. The left/right channels from the aggregate device play in both channels of the resulting file. Code I used to make the track stereo:

var audioOutputSettings = movieFileOutput.outputSettings(for: audioConnection)
audioOutputSettings[AVNumberOfChannelsKey] = 2
var layout = AudioChannelLayout()
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
audioOutputSettings[AVChannelLayoutKey] = NSData(bytes: &layout, length: MemoryLayout.size(ofValue: layout))
movieFileOutput.setOutputSettings(audioOutputSettings, for: audioConnection)

Can anyone help me with this, so that both left channels from the 2 devices play in the left channel, and the same with both right channels?
Post not yet marked as solved
0 Replies
214 Views
I have downloaded the WWDC signal generator example code for the 2019 session 510, "What's New in AVAudioEngine," at link. When I run it in Xcode 13.2 on macOS 12.3 on an M1 Mac mini, line 99

let mainMixer = engine.mainMixerNode

produces 9 lines in the console output:

2022-03-30 21:09:19.288011-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288351-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288385-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288415-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288440-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288467-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288491-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288534-0400 SignalGenerator[52247:995478] throwing -10878
2022-03-30 21:09:19.288598-0400 SignalGenerator[52247:995478] throwing -10878

-10878 is "invalid parameter". But the program seems to run as expected. Can this just be ignored, or does it indicate improper setup?
Post not yet marked as solved
0 Replies
262 Views
When I make a call within our VoIP application (iPadOS app on macOS, M1 MacBook Pro 16"), all is fine. If I make a call with headphones plugged in, all is fine. If I unplug the headphones during the call, the audio just stops working entirely. If I hang up and make the call again, the audio is there with no problems. On iPhone and iPad it works correctly. Where could the problem be?

HALC_ShellDevice::CreateIOContextDescription: failed to get a description from the server
HALC_ProxyIOContext::IOWorkLoop: the server failed to start, Error: 0x6E6F7065
HALC_ProxyIOContext::IOWorkLoop: the server failed to start, Error: 0x6E6F7065
HALC_ProxyIOContext::IOWorkLoop: the server failed to start, Error: 0x6E6F7065
AudioObjectGetPropertyDataSize: no object with given ID 73
AudioObjectHasProperty: no object with given ID 66
AudioObjectHasProperty: no object with given ID 66
AudioObjectHasProperty: no object with given ID 0
[auvp] AUVPAggregate.cpp:4413 Failed to get current tap stream physical format, err=2003332927
AudioObjectGetPropertyDataSize: no object with given ID 66
AudioObjectGetPropertyData: no object with given ID 66
AudioObjectHasProperty: no object with given ID 66
AudioObjectRemovePropertyListener: no object with given ID 66
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 66
AudioObjectGetPropertyDataSize: no object with given ID 73
AudioObjectHasProperty: no object with given ID 66
AudioObjectHasProperty: no object with given ID 66
HALC_ProxySystem::GetObjectInfo: got an error from the server, Error: 560947818 (!obj)
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 66
HALC_ShellObject::HasProperty: there is no proxy object
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6912 error 2003332927 getting input sample rate
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6922 error 2003332927 getting input latency
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6932 error 2003332927 getting input safety offset
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:6944 error 2003332927 getting tap stream input latency
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:6954 error 2003332927 getting tap stream input safety offset
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6965 error 2003332927 getting output sample rate
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:6975 error 2003332927 getting output latency
AudioObjectHasProperty: no object with given ID 66
AudioDeviceDuck: no device with given ID
[auvp] AUVPAggregate.cpp:6985 error 2003332927 getting output safety offset
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 66
AudioObjectsPublishedAndDied: no such owning object
AudioObjectsPublishedAndDied: no such owning object
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 66
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6912 error 2003332927 getting input sample rate
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6922 error 2003332927 getting input latency
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6932 error 2003332927 getting input safety offset
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:6944 error 2003332927 getting tap stream input latency
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:6954 error 2003332927 getting tap stream input safety offset
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPAggregate.cpp:6965 error 2003332927 getting output sample rate
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:6975 error 2003332927 getting output latency
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:6985 error 2003332927 getting output safety offset
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
AudioObjectPropertiesChanged: no such object
[auvp] AUVPAggregate.cpp:2799 AggCompChanged wait failed
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
AudioObjectSetPropertyData: no object with given ID 73
[auvp] AUVPUtilities.cpp:472 SetDeviceMuteState(73) false: (err=560947818)
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
[auvp] AUVPUtilities.cpp:560 SetCFNumberValueForKeyInDescriptionDictionary(73); doesn't support 'cdes'
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
AudioObjectHasProperty: no object with given ID 73
AudioObjectGetPropertyDataSize: no object with given ID 73
AudioObjectHasProperty: no object with given ID 66
AudioObjectHasProperty: no object with given ID 66
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPUtilities.cpp:560 SetCFNumberValueForKeyInDescriptionDictionary(66); doesn't support 'cdes'
AudioObjectHasProperty: no object with given ID 66
AudioObjectHasProperty: no object with given ID 66
[auvp] AUVPAggregate.cpp:3523 VP block error num input channels is unexpected (err=-66784)
[vp] vpStrategyManager.mm:358 Error code 2003332927 reported at GetPropertyInfo
[vp] vpStrategyManager.mm:358 Error code 2003332927 reported at GetPropertyInfo
HALC_ProxySystem::GetObjectInfo: got an error from the server, Error: 560947818 (!obj)
HALC_ShellDevice::RebuildStreamLists: there is no device
[vp] vpStrategyManager.mm:358 Error code 2003332927 reported at GetPropertyInfo
[vp] vpStrategyManager.mm:358 Error code 2003332927 reported at GetPropertyInfo
Post not yet marked as solved
0 Replies
180 Views
In the AudioBufferList extension, there is a comment above the allocate function:

/// The memory should be freed with `free()`.
public static func allocate(maximumBuffers: Int) -> UnsafeMutableAudioBufferListPointer

But when I try to call free on the returned pointer,

free(buffer)

Xcode complains:

Cannot convert value of type 'UnsafeMutableAudioBufferListPointer' to expected argument type 'UnsafeMutableRawPointer?'

How should the pointer be freed? I tried

free(&buffer)

Xcode didn't complain, but when I ran the code, I got an error in the console:

malloc: *** error for object 0x16fdfee70: pointer being freed was not allocated

I know the call to allocate was successful. Thanks, Mark
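A hedged reading of the error: UnsafeMutableAudioBufferListPointer is a wrapper struct, not the heap allocation itself, so `free(&buffer)` passes the address of the local wrapper variable, which was never malloc'd — hence the malloc error. The underlying allocation is reachable through the wrapper's unsafeMutablePointer property, so the header comment presumably means something like:

```swift
import AVFoundation

let abl = AudioBufferList.allocate(maximumBuffers: 2)
// ... fill and use the buffer list ...
free(abl.unsafeMutablePointer)  // frees the underlying AudioBufferList allocation
```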
Post not yet marked as solved
0 Replies
178 Views
I have posted a learning project on GitHub. I have gone through the book "Learning Core Audio" by Adamson & Avila and converted the examples from Objective-C to Swift 5. I hope this helps others trying to learn Core Audio, and if anyone sees issues in my code, please let me know. Thanks, Mark
Post not yet marked as solved
0 Replies
249 Views
I have an audio box that also supports MIDI, but I can't figure out how to access CoreMIDI services from an Audio Server plug-in. On calling MIDIClientCreate(), I see the following log:

com.apple.audio.Core-Audio-Driver-Service: (CoreMIDI) [com.apple.coremidi:client] MIDIClientLib.cpp:258 Couldn't connect to com.apple.midiserver; CoreMIDI will not be usable

I've tried various incantations of the AudioServerPlugIn_MachServices key in the plug-in's Info.plist (per the documentation in AudioServerPlugIn.h), but to no avail. Is there some way to use MIDI from an Audio Server plug-in? If not, how should such hardware be supported?
Post not yet marked as solved
0 Replies
248 Views
macOS CoreAudio buffer playback produces annoying noise between the correct sound. I want to play valid .wav data through the buffer. Why am I playing a .wav? It has valid data. What I'm trying to achieve is to understand how to write correctly to the sound buffer. I'm porting a music engine to macOS.

#include <string.h>
#include <math.h>
#include <unistd.h>
#include <stdio.h>
#include <AudioToolbox/AudioToolbox.h>

FILE *fp;

typedef struct TwavHeader {
    char RIFF[4];
    uint32_t RIFFChunkSize;
    char WAVE[4];
    char fmt[4];
    uint32_t Subchunk1Size;
    uint16_t AudioFormat;
    uint16_t NumOfChan;
    uint32_t SamplesPerSec;
    uint32_t bytesPerSec;
    uint16_t blockAlign;
    uint16_t bitsPerSample;
    char Subchunk2ID[4];
    uint32_t Subchunk2Size;
} TwavHeader;

typedef struct SoundState {
    bool done;
} SoundState;

void auCallback(void *inUserData, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    buffer->mAudioDataByteSize = 1024 * 4;
    int numToRead = buffer->mAudioDataByteSize / sizeof(float) * 2;
    void *p = malloc(numToRead);
    fread(p, numToRead, 1, fp);
    void *myBuf = buffer->mAudioData;
    for (int i = 0; i < numToRead / 2; i++) {
        uint16_t w = *(uint16_t *)&(p[i * sizeof(uint16_t)]);
        float f = ((float)w / (float)0x8000) - 1.0;
        *(float *)&(myBuf[i * sizeof(float)]) = f;
    }
    free(p);
    AudioQueueEnqueueBuffer(queue, buffer, 0, 0);
}

void checkError(OSStatus error)
{
    if (error != noErr) {
        printf("Error: %d", error);
        exit(error);
    }
}

int main(int argc, const char *argv[])
{
    printf("START\n");
    TwavHeader theHeader;
    fp = fopen("/Users/kirillkranz/Documents/mytralala-code/CoreAudioTest/unreal.wav", "r");
    fread(&theHeader, sizeof(TwavHeader), 1, fp);
    printf("%i\n", theHeader.bitsPerSample);

    AudioStreamBasicDescription auDesc = {};
    auDesc.mSampleRate = theHeader.SamplesPerSec;
    auDesc.mFormatID = kAudioFormatLinearPCM;
    auDesc.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
    auDesc.mBytesPerPacket = 8;
    auDesc.mFramesPerPacket = 1;
    auDesc.mBytesPerFrame = 8;
    auDesc.mChannelsPerFrame = 2;
    auDesc.mBitsPerChannel = 32;

    AudioQueueRef auQueue = 0;
    AudioQueueBufferRef auBuffers[2] = {};

    // our persistent state for sound playback
    SoundState soundState = {};
    soundState.done = false;

    OSStatus err;
    // most of the 0 and nullptr params here are for compressed sound formats etc.
    err = AudioQueueNewOutput(&auDesc, &auCallback, &soundState, 0, 0, 0, &auQueue);
    checkError(err);

    // generate buffers holding at most 1/16th of a second of data
    uint32_t bufferSize = auDesc.mBytesPerFrame * (auDesc.mSampleRate / 16);
    err = AudioQueueAllocateBuffer(auQueue, bufferSize, &(auBuffers[0]));
    checkError(err);
    err = AudioQueueAllocateBuffer(auQueue, bufferSize, &(auBuffers[1]));
    checkError(err);

    // prime the buffers
    auCallback(&soundState, auQueue, auBuffers[0]);
    auCallback(&soundState, auQueue, auBuffers[1]);

    // enqueue for playing
    AudioQueueEnqueueBuffer(auQueue, auBuffers[0], 0, 0);
    AudioQueueEnqueueBuffer(auQueue, auBuffers[1], 0, 0);

    // go!
    AudioQueueStart(auQueue, 0);

    char rxChar[10];
    scanf("%s", &rxChar);
    printf("FINISH");
    fclose(fp);

    // be nice even if it doesn't really matter at this point
    if (auQueue) AudioQueueDispose(auQueue, true);
}

What do I do wrong?
Post not yet marked as solved
0 Replies
178 Views
On newer Macs, the audio input from an external microphone plugged into the headphone jack (on a pair of wired Apple EarPods, for example) does not add any noticeable latency. However, the audio input from the built-in microphone adds considerable (~30 ms) latency. I imagine this is due to system-supplied signal processing that reduces background noise. Is there a way of bypassing this signal processing and reducing the latency? On iOS, there is an AVAudioSessionModeMeasurement mode that disables signal processing and lowers input latency. Is there an equivalent for macOS? FYI, on the 2015 MacBook Pros there is no noticeable added latency on the built-in mic. This issue affects newer computers, including the M1 line.
Post not yet marked as solved
1 Reply
239 Views
I have a RemoteIO unit that successfully plays back the microphone samples in realtime via attached headphones. I need to port the same functionality to AVAudioEngine, but I can't seem to make any headway. Here is my code; all I do is connect inputNode to playerNode, which crashes.

var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var engineRunning = false

private func setupAudioSession() {
    var options: AVAudioSession.CategoryOptions = [.allowBluetooth, .allowBluetoothA2DP]
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: options)
        try AVAudioSession.sharedInstance().setAllowHapticsAndSystemSoundsDuringRecording(true)
    } catch {
        MPLog("Could not set audio session category")
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setPreferredSampleRate(Double(44100))
    } catch {
        print("Unable to deactivate Audio session")
    }
    do {
        try audioSession.setActive(true)
    } catch {
        print("Unable to activate AudioSession")
    }
}

private func setupAudioEngine() {
    self.engine = AVAudioEngine()
    self.playerNode = AVAudioPlayerNode()
    self.engine.attach(self.playerNode)
    engine.connect(self.engine.inputNode, to: self.playerNode, format: nil)
    do {
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    engineRunning = true
}

But starting AVAudioEngine causes a crash:

libc++abi: terminating with uncaught exception of type NSException
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: inDestImpl->NumberInputs() > 0 || graphNodeDest->CanResizeNumberOfInputs()'
terminating with uncaught exception of type NSException

How do I get realtime record and playback of mic samples via headphones working?
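One possible reading of the exception (hedged, not an authoritative answer): the "NumberInputs() > 0" condition suggests the graph itself is invalid — AVAudioPlayerNode is a source node with no input busses, so inputNode cannot be connected to it. For plain mic-to-headphones monitoring, no player node is needed at all; a sketch under that assumption:

```swift
import AVFoundation

let engine = AVAudioEngine()

// Route the microphone straight to the output for realtime monitoring.
// Using the input node's own format avoids an implicit (and possibly
// unsupported) format conversion at the connection point.
let inputFormat = engine.inputNode.outputFormat(forBus: 0)
engine.connect(engine.inputNode, to: engine.mainMixerNode, format: inputFormat)

do {
    try engine.start()
} catch {
    print("couldn't start engine: \(error)")
}
```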
Post not yet marked as solved
2 Replies
383 Views
Hi! Now with Monterey, you can configure an aggregate device with 7.1.4 output and play Dolby Atmos in that format. But my Apple Music is only delivering 5.1. Can I play Apple Music in 7.1.4?