Core Audio


Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

71 Posts
Post not yet marked as solved
0 Replies
184 Views
I need to record 2 stereo AVCaptureDevices into one audio track. I can successfully create the aggregate device using AudioHardwareCreateAggregateDevice, but when I record from that device, the resulting audio track is quadraphonic. The problem is that some players, such as VLC, won't play all four channels. So I tried forcing the file writer to use 2 channels with a stereo layout (see code below). This doesn't quite work: all 4 channels get mapped to both stereo channels, so the result isn't actually stereo — the left/right channels from the aggregate device play in both channels of the resulting file.

Code I used to make the track stereo:

```swift
var audioOutputSettings = movieFileOutput.outputSettings(for: audioConnection)
audioOutputSettings[AVNumberOfChannelsKey] = 2
var layout = AudioChannelLayout()
layout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
audioOutputSettings[AVChannelLayoutKey] = NSData(bytes: &layout,
                                                 length: MemoryLayout.size(ofValue: layout))
movieFileOutput.setOutputSettings(audioOutputSettings, for: audioConnection)
```

Can anyone help me get both left channels from the 2 devices to play in the left channel, and likewise both right channels in the right channel?
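One workaround, if capturing the quadraphonic track is acceptable, is to downmix the four captured channels to stereo yourself. A minimal sketch in plain C, assuming the aggregate device delivers interleaved float frames ordered device-1 L/R then device-2 L/R (the function name and channel ordering are illustrative assumptions, not anything from AVFoundation):

```c
#include <assert.h>
#include <stddef.h>

/* Downmix interleaved quadraphonic frames (L1 R1 L2 R2 per frame, i.e. the
 * left/right pair from each of the two source devices) into interleaved
 * stereo. The 0.5 gain keeps the sum from clipping when both sources are
 * loud at the same time. */
static void quad_to_stereo(const float *quad, float *stereo, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        stereo[2 * i + 0] = 0.5f * (quad[4 * i + 0] + quad[4 * i + 2]); /* both lefts  */
        stereo[2 * i + 1] = 0.5f * (quad[4 * i + 1] + quad[4 * i + 3]); /* both rights */
    }
}
```

The same summing could also be done per-frame inside a capture callback before handing buffers to the writer.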
Post not yet marked as solved
1 Reply
283 Views
I'm writing a macOS audio unit hosting app using the AVAudioUnit and AUAudioUnit APIs. I'm trying to use the NSView cacheDisplay(in:to:) function to capture an image of a plugin's view:

```swift
func viewToImage(viewToCapture: NSView) -> NSImage? {
    var image: NSImage? = nil
    if let rep = viewToCapture.bitmapImageRepForCachingDisplay(in: viewToCapture.bounds) {
        viewToCapture.cacheDisplay(in: viewToCapture.bounds, to: rep)
        image = NSImage(size: viewToCapture.bounds.size)
        image!.addRepresentation(rep)
    }
    return image
}
```

This works OK when a plugin is instantiated using the .loadInProcess option. If the plugin is instantiated using the .loadOutOfProcess option, the resulting bitmapImageRep is blank. I'd much rather load plugins out-of-process for the enhanced stability. Is there any trick I'm missing to be able to capture the contents of the NSView from an out-of-process audio unit?
Post marked as solved
1 Reply
198 Views
I'm trying to figure out how to set the volume of a Core Audio AudioUnit. I found the parameter kHALOutputParam_Volume, but I can't find any documentation about it. I called AudioUnitGetPropertyInfo, which told me the value is 4 bytes long and writable. How can I find out whether it's an Int32, UInt32, Float32, or some other type, and what the acceptable values are and mean? I used AudioUnitGetProperty and read it as either Int32 (512) or Float32 (7.17e-43). Is there any documentation on this and other parameters?
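A side note, hedged since the headers are the only documentation I know of: names of the form k...Param_... (as opposed to k...Property_...) are normally AudioUnit *parameters*, read with AudioUnitGetParameter/AudioUnitSetParameter — and AudioUnit parameter values are always Float32, with AudioUnitGetParameterInfo reporting the min/max/default. As for the two readings above, they are consistent with one 4-byte value interpreted two ways: Int32 512 has the bit pattern 0x00000200, and those same bits read as an IEEE-754 Float32 give the subnormal 512 × 2⁻¹⁴⁹ ≈ 7.17e-43 — exactly the value reported. A small platform-independent C sketch of that reinterpretation:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Reinterpret the bit pattern of a 32-bit integer as a Float32.
 * memcpy is the well-defined way to type-pun in C. */
static float bits_as_float(int32_t bits)
{
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

So seeing 512 as an integer alongside a tiny denormal as a float is a strong hint the stored representation is integer-typed, not that there are two different values.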
Post not yet marked as solved
0 Replies
244 Views
```c
AudioComponentDescription desc = {kAudioUnitType_Output,
                                  kAudioUnitSubType_VoiceProcessingIO,
                                  kAudioUnitManufacturer_Apple, 0, 0};
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
OSStatus error = AudioComponentInstanceNew(comp, &myAudioUnit);
```

In one particular case the returned error value is -1. I searched https://www.osstatus.com/ but didn't get relevant info. My question is: what does -1 mean in this case? Is myAudioUnit a nullptr at that point?
Post not yet marked as solved
0 Replies
190 Views
Since we have to encode/decode the audio stream to/from our audio device anyway, and we are using NEON SIMD to do so, we could just convert it into a stream of floats on the fly. Since floats are the natural Core Audio data format, we can probably avoid an additional int-to-float/float-to-int conversion by Core Audio this way. Does this make sense? Thanks, hagen
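For reference, the conversion itself is just a scale by 1/32768 into the [-1.0, 1.0) range that Core Audio's float formats use; and if the client-side stream format handed to the output unit (via kAudioUnitProperty_StreamFormat) is already Float32, Core Audio should have no int/float conversion left to do. A scalar C sketch of the scaling (a NEON version applies the same arithmetic across lanes):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Scale signed 16-bit PCM samples into the [-1.0, 1.0) float range.
 * Dividing by 32768 maps INT16_MIN to exactly -1.0f. */
static void int16_to_float(const int16_t *in, float *out, size_t n)
{
    const float scale = 1.0f / 32768.0f;
    for (size_t i = 0; i < n; i++)
        out[i] = (float)in[i] * scale;
}
```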
Post not yet marked as solved
0 Replies
244 Views
How can you add a live audio player in Xcode with an interactive UI to control the audio, so that the user can exit the app or turn their device screen off and it will keep playing? Is there a framework or API that will work for this? Thanks! I really need help with this… 🤩 I have looked everywhere and haven't found something that works…
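On iOS, background playback is normally enabled by declaring the audio background mode in the app's Info.plist (assuming a UIKit/AVFoundation app — the entry below is the standard key, not anything project-specific):

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```

In addition, the app would set its AVAudioSession category to AVAudioSessionCategoryPlayback and activate the session before playing; for interactive controls on the lock screen, MPRemoteCommandCenter and MPNowPlayingInfoCenter from the MediaPlayer framework are the usual route.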
Post not yet marked as solved
0 Replies
162 Views
I use ffplay to play a video, and the following error happens:

SDL_OpenAudio (2 channels, 48000 Hz): CoreAudio error (AudioQueueStart): -66680

When I restart the Mac, it plays the video successfully, but only for a while… and then the error appears again.
Post not yet marked as solved
0 Replies
277 Views
I know the VoiceProcessingIO audio unit will create an aggregate audio device, but I get the error kAudioUnitErr_InvalidProperty (-10879) when getting the kAudioOutputUnitProperty_OSWorkgroup property on recent macOS Monterey 12.2.1 or Big Sur 11.6.4.

```objc
os_workgroup_t workgroup = NULL;
UInt32 sSize = sizeof(os_workgroup_t);
OSStatus sStatus = AudioUnitGetProperty(mAudioUnit,
                                        kAudioOutputUnitProperty_OSWorkgroup,
                                        kAudioUnitScope_Global, 1,
                                        &workgroup, &sSize);
if (sStatus != noErr) {
    NSLog(@"Error %d", sStatus);
}
```

The same code works fine on iOS 15.3.1 but not on macOS. Do you have any hint to resolve this issue?
Post not yet marked as solved
0 Replies
232 Views
Hi, to be able to receive IOServiceAddMatchingNotification we need to attach to an appropriate CFRunLoop/IONotificationPort. To avoid race conditions, the matching notification would ideally be serialized with the Core Audio notifications/callbacks. How can this be achieved? Attaching it to the runloop returned by CFRunLoopGetCurrent() does not yield any notifications at all; attaching to CFRunLoopGetMain leads to notifications asynchronous to the Core Audio callbacks. There is a set of deprecated AudioHardwareAdd/RemoveRunLoopSource() functions, but apart from their deprecation, at least on Big Sur on Apple Silicon they do not lead to any notifications either. So how is this supposed to be implemented? Do we really need to introduce locks, also around the process calls? Wasn't the purpose of runloops to manage exactly these kinds of situations? And more importantly: where is the documentation? Thanks for any hints, all the best, hagen.
Post not yet marked as solved
0 Replies
322 Views
The GarageBand app can import both a MIDI file and a recorded audio file into a single player to play. My app has the same feature, but I don't know how to implement it. I have tried AVAudioSequencer, but it can only load and play MIDI files. I have tried AVPlayer and AVPlayerItem, but it seems they can't load MIDI files. So how can I combine a MIDI file and an audio file into a single AVPlayerItem, or anything else, to play?
Post not yet marked as solved
1 Reply
335 Views
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode. The player node's output format needs to be something the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something AVAudioMixerNode supports. So the following:

```objc
AVAudioEngine *engine = [[AVAudioEngine alloc] init];
playerNode = [[AVAudioPlayerNode alloc] init];
AVAudioFormat *format = [[AVAudioFormat alloc]
    initWithSettings:utterance.voice.audioFileSettings];
[engine attachNode:self.playerNode];
[engine connect:self.playerNode to:engine.mainMixerNode format:format];
```

throws an exception:

Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be some library way of doing this; otherwise each developer has to redefine it for each platform the code runs on (macOS, iOS, etc.) and keep it updated when it changes. I could not find any constant or function that can produce such a format, ASBD, or settings. The smartest way I could think of, which does not work:

```objc
AudioStreamBasicDescription toDesc;
FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                   2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];
```

Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
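For what it's worth, on current systems the canonical connection format is deinterleaved 32-bit native-endian float, and AVFoundation hands it out directly as the "standard" format via -[AVAudioFormat initStandardFormatWithSampleRate:channels:], which can then feed AVAudioConverter without touching an ASBD at all. For illustration off-platform, here is how the corresponding ASBD fields fall out of that layout — the struct and enum below are local stand-ins mirroring CoreAudioTypes.h, not the real headers:

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-ins for CoreAudioTypes constants so the sketch compiles
 * anywhere; the numeric values follow CoreAudioTypes.h. */
enum {
    kFlagIsFloat          = 1u << 0,
    kFlagIsPacked         = 1u << 3,
    kFlagIsNonInterleaved = 1u << 5,
};

/* A local mirror of AudioStreamBasicDescription, for illustration only. */
typedef struct {
    double   mSampleRate;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket, mFramesPerPacket, mBytesPerFrame;
    uint32_t mChannelsPerFrame, mBitsPerChannel;
} ASBD;

/* Fill in a Float32 LPCM layout. With non-interleaved (deinterleaved)
 * data each buffer carries one channel, so a "frame" is a single 4-byte
 * sample; interleaved data packs all channels into each frame. */
static ASBD float32_lpcm(double sampleRate, uint32_t channels, int interleaved)
{
    ASBD d = {0};
    d.mSampleRate      = sampleRate;
    d.mChannelsPerFrame = channels;
    d.mBitsPerChannel  = 32;
    d.mFormatFlags     = kFlagIsFloat | kFlagIsPacked
                       | (interleaved ? 0 : kFlagIsNonInterleaved);
    d.mBytesPerFrame   = (interleaved ? channels : 1) * sizeof(float);
    d.mFramesPerPacket = 1;
    d.mBytesPerPacket  = d.mBytesPerFrame;
    return d;
}
```

The practical takeaway is that the per-platform guessing in the question shouldn't be necessary: the standard AVAudioFormat initializer is the library-provided canonical format.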
Post not yet marked as solved
2 Replies
345 Views
The very little and outdated 'documentation' Apple shares about Core Audio and Core MIDI server plugins suggests using syslog for logging. At least since Big Sur, syslog doesn't end up anywhere. (So, while you seem to think it's OK not to document your APIs, you could at least remove non-working APIs! Not doing so causes unnecessary and frustrating bug hunting.) Should we replace syslog with unified logging? For debugging purposes only, our plugins write to our own log files. Where can I find suitable locations? Where is this documented? Thanks, hagen.
Post not yet marked as solved
0 Replies
268 Views
```swift
let volumePropertyAddress = AudioObjectPropertyAddress(
    mSelector: kAudioHardwareServiceDeviceProperty_VirtualMainVolume,
    mScope: kAudioDevicePropertyScopeOutput,
    mElement: kAudioObjectPropertyElementMaster
)

let status = AudioObjectSetPropertyData(deviceId, &theAddress, 0, nil, size, &theValue)
```

Then the app freezes. Is it not possible to call the AudioObjectSetPropertyData method on the main thread?
Post not yet marked as solved
0 Replies
367 Views
Hi, I have a problem with an AU host (based on Audio Toolbox/Core Audio, not AVFoundation) when running on macOS 11 or later on Apple Silicon: it crashes after some operations in the GUI. The weird part is that it crashes in the IOThread. Could this be caused by some inappropriate operation in the GUI (e.g. outside the main thread) that affects the IOThread? Sounds quite improbable to me, and I did not find anything suspicious in the code. There are two logs in the debugger:

[AUHostingService Client] connection interrupted.
rt_sender::signal_wait failed: 89
...

And here is the crash log:

Crash log: ...

Thanks, Tomas
Post not yet marked as solved
3 Replies
857 Views
Hello, my app crashes on the new macOS 12.x; it works well on macOS 11 Big Sur. I'm developing an audio app on macOS using AudioUnit, and it sometimes crashes when I switch devices. The relevant API is:

```c
AudioUnitSetProperty(audio_unit, kAudioOutputUnitProperty_CurrentDevice,
                     kAudioUnitScope_Global, kAudioUnitOutputBus,
                     &rnd_id, sizeof(rnd_id));
```

This has troubled me for a month; I can't find the reason or any useful info. Any help will be appreciated. The crash log is:

```
OS Version:          macOS 12.1 (21C51)
Report Version:      12
Bridge OS Version:   6.1 (19P647)
Crashed Thread:      43 schedule-thread
Exception Type:      EXC_BAD_ACCESS (SIGSEGV)
Exception Codes:     KERN_INVALID_ADDRESS at 0x00000a14f8969188
Exception Codes:     0x0000000000000001, 0x00000a14f8969188
Exception Note:      EXC_CORPSE_NOTIFY
Application Specific Information:
objc_msgSend() selector name: copy

Thread 43 Crashed:: schedule-thread
0  libobjc.A.dylib  0x7ff815ef405d objc_msgSend + 29
1  CoreAudio        0x7ff817a237b9 HALC_ShellDevice::_GetPropertyData(unsigned int, AudioObjectPropertyAddress const&, unsigned int, void const*, unsigned int, unsigned int&, void*, unsigned int&, AudioObjectPropertyAddress&, bool&) const + 1133
2  CoreAudio        0x7ff817c57b81 invocation function for block in HALC_ShellObject::GetPropertyData(unsigned int, AudioObjectPropertyAddress const&, unsigned int, void const*, unsigned int, unsigned int&, void*) const + 107
3  CoreAudio        0x7ff817e8a606 HALB_CommandGate::ExecuteCommand(void () block_pointer) const + 98
4  CoreAudio        0x7ff817c56a98 HALC_ShellObject::GetPropertyData(unsigned int, AudioObjectPropertyAddress const&, unsigned int, void const*, unsigned int, unsigned int&, void*) const + 376
5  CoreAudio        0x7ff817b04235 HAL_HardwarePlugIn_ObjectGetPropertyData(AudioHardwarePlugInInterface**, unsigned int, AudioObjectPropertyAddress const*, unsigned int, void const*, unsigned int*, void*) + 349
6  CoreAudio        0x7ff817c16109 HALPlugIn::ObjectGetPropertyData(HALObject const&, AudioObjectPropertyAddress const&, unsigned int, void const*, unsigned int&, void*) const + 59
7  CoreAudio        0x7ff817bd2f5d HALObject::GetPropertyData(AudioObjectPropertyAddress const&, unsigned int, void const*, unsigned int&, void*) const + 461
8  CoreAudio        0x7ff817f2ffca HALDevice::GetPropertyData(AudioObjectPropertyAddress const&, unsigned int, void const*, unsigned int&, void*) const + 644
9  CoreAudio        0x7ff8179809ab AudioObjectGetPropertyData + 275
10 AudioDSP         0x134b6df19 0x13490b000 + 2502425
11 AudioDSP         0x134b69776 0x13490b000 + 2484086
12 AudioDSP         0x134cc4eb4 0x13490b000 + 3907252
13 AudioDSP         0x134cc5f56 0x13490b000 + 3911510
14 AudioDSP         0x134e17b2c 0x13490b000 + 5294892
15 AudioDSP         0x134e0f4d1 0x13490b000 + 5260497
```
Post not yet marked as solved
1 Reply
343 Views
We're trying to join our audio worker threads to a Core Audio HAL audio workgroup, but haven't managed to get this working yet. Here's what we do:

Fetch the audio workgroup handle from the Core Audio device:

```cpp
UInt32 Count = sizeof(os_workgroup_t);
os_workgroup_t pWorkgroup = NULL;
::AudioDeviceGetProperty(SomeCoreAudioDeviceHandle, kAudioUnitScope_Global, 0,
                         kAudioDevicePropertyIOThreadOSWorkgroup,
                         &Count, &pWorkgroup);
```

This succeeds on an M1 Mini for the "Apple Inc.: Mac mini Speakers" on OS X 11.1. The returned handle looks fine as well: [(NSObject*)pWorkgroup debugDescription] returns "{xref = 2, ref = 1, name = AudioHALC Workgroup}"

Join some freshly created worker threads to the workgroup via:

```cpp
os_workgroup_join_token_s JoinToken;
int Result = ::os_workgroup_join(pWorkgroup, &JoinToken);
```

The problem: the Result from os_workgroup_join is always EINVAL (Invalid argument), whatever we do. Both arguments, the workgroup handle and the join token, are definitely valid. And the device hasn't been stopped or reinitialized here, so the workgroup should not be cancelled. Has anyone else managed to get this working? All examples out there seem to successfully use the AUHAL workgroup instead of the audio device HAL API.
Post not yet marked as solved
1 Reply
388 Views
Hi, I've released an open-source AUv3 MIDI processor plugin for iOS and macOS that records and plays MIDI messages in a sample-accurate fashion and doesn't ever apply any quantization. I've tested this plugin with 120 beta testers and everything seemed to work fine. However, now that I've released it, there seems to be a problem in Logic Pro X on some Mac computers with MIDI FX processor plugins that are using Catalyst. You can find my plugin here: http://uwyn.com/mtr/ ... and the source code here: https://github.com/gbevin/MIDITapeRecorder When I trace the AUv3 instantiation, I see Logic Pro X obtaining the internalRenderBlock several times, but then never ever calling it. This means there's no render callback and there are never any MIDI parameter events received. I've talked to the developer of ZOA, which is also a MIDI processor plugin using Catalyst, and he's running into exactly the same problem: https://www.audiosymmetric.com/zoa.html Another developer that's working on a MIDI processor plugin has been trying to track this down for weeks also. When I test this on my M1 Max MacBook Pro, internalRenderBlock is always called; however, on my M1 MacBook Air and Intel 2019 MacBook Pro, it is never called. Any thoughts or ideas to work around this would be really helpful. Thanks!
Post not yet marked as solved
1 Reply
452 Views
Curious whether there is a reliable way for an AUv3 component to identify how many other instances of it are running on a device. For instance, if GarageBand has 4 tracks and all of the tracks use the same AUv3 component, is there a reliable way for each one to obtain a unique index value? Thanks!
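There is no AUv3 API I know of that hands out instance indices, but within a single process a plain atomic counter claimed in each instance's init path gives every instance a unique index; a minimal C sketch (note the caveat: if the host runs each instance in a separate extension process, this counter is not shared, and a cross-process scheme such as an app-group file or POSIX shared memory would be needed instead):

```c
#include <assert.h>
#include <stdatomic.h>

/* Process-wide counter: each plug-in instance claims the next index
 * when it is created. atomic_fetch_add makes this safe even if the
 * host instantiates several instances concurrently. */
static atomic_int g_next_instance_index;

static int claim_instance_index(void)
{
    return atomic_fetch_add(&g_next_instance_index, 1);
}
```

Counting how many instances are *currently* alive would additionally require decrementing a live-count in each instance's deallocation path.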
Post not yet marked as solved
0 Replies
403 Views
Hi, wondering if anyone has found a solution to the automatic volume reduction on the host computer when using the macOS native screen sharing application. The volume reduction makes it nearly impossible to comfortably continue working on the host computer when there is any audio involved. Is there a way to bypass this function? It seems to be the same native function that FaceTime uses to reduce the system audio volume to give priority to the application. Please help save my speakers! Thanks.
Post not yet marked as solved
0 Replies
346 Views
I'm using AVFoundation to access the camera on iPad. But with AVFoundation, CoreMedia is also imported, which in turn imports CoreAudio and CoreVideo. Keeping privacy concerns in mind, is there any way I can ensure that the app is never able to access the microphone or video recording? AVFoundation CoreMedia