Core Audio Types


Use Core Audio Types to interact with audio streams, complex buffers, and audiovisual timestamps that rely on specialized data types.
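As a quick illustration of the specialized types this tag covers, here is a small C sketch, with arbitrary example values, that builds two of the structures named above: an AudioBufferList holding one interleaved stereo buffer, and an AudioTimeStamp carrying a valid sample time.

    #include <CoreAudio/CoreAudioTypes.h>
    #include <stdlib.h>

    // Allocate a buffer list with a single interleaved stereo float buffer.
    static AudioBufferList *MakeStereoBufferList(UInt32 frames) {
        AudioBufferList *abl = malloc(sizeof(AudioBufferList)); // holds one AudioBuffer
        abl->mNumberBuffers = 1;
        abl->mBuffers[0].mNumberChannels = 2;
        abl->mBuffers[0].mDataByteSize   = (UInt32)(frames * 2 * sizeof(Float32));
        abl->mBuffers[0].mData           = calloc(frames * 2, sizeof(Float32));
        return abl;
    }

    // Build a timestamp whose sample-time field is marked valid.
    static AudioTimeStamp MakeSampleTimeStamp(Float64 sampleTime) {
        AudioTimeStamp ts = {0};
        ts.mSampleTime = sampleTime;
        ts.mFlags      = kAudioTimeStampSampleTimeValid;
        return ts;
    }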

Core Audio Types Documentation

Posts under the Core Audio Types tag

8 Posts
Post not yet marked as solved · 3 Replies · 223 Views
I see unexpected behavior when using AudioObjectGetPropertyData() to get the Channel Number Name or the Channel Category Name for the iPhone Microphone or MacBook Pro Microphone audio devices. I am running macOS 14.4 Sonoma on an Intel MacBook Pro 15" 2019.

I have a test program that loops through all audio devices on a system, and all channels on each device. It uses AudioObjectGetPropertyData() to get the device name and manufacturer name, then iterates over the input and output channels getting the Channel Number Name, Channel Name, and Channel Category. I would expect some of these values to be empty CFStrings (as the Channel Name frequently is), or for AudioObjectHasProperty() to return FALSE if the driver does not implement the property. And that is how things behave for most devices... except for the MacBook Pro Microphone and iPhone Microphone devices. There AudioObjectHasProperty() returns TRUE, but an AudioObjectGetPropertyData() call with the exact same AudioObjectPropertyAddress returns the error code 'WHAT'. It took me a little while to realize the error code being returned was 'WHAT', not 'what', so I added a modified checkError() function to capture that and more.

So what surprised me is: if AudioObjectHasProperty() returns TRUE, then I expect the matching AudioObjectGetPropertyData() call to work. And what the heck is 'WHAT'? I assume it is supposed to mean 'what', aka kAudioHardwareUnspecifiedError. Why is the actual error value not returned? Are there other places that return 'WHAT' or capitalized versions of the standard OSStatus Core Audio errors?

The example program is not complex but is too long to post here, so it's on GitHub at https://github.com/Darryl-Ramm/Wot. Here is some output from that program showing the unexpected behavior: output.txt
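For context, here is a minimal C sketch of the kind of per-channel query described above, plus a checkError()-style helper that prints four-character OSStatus codes such as 'WHAT' legibly. The helper and function names are illustrative, not taken from the linked project, and deviceID and channel are assumed to come from an enumeration loop like the one described.

    #include <CoreAudio/CoreAudio.h>
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    // Print an OSStatus as a four-character code when printable, else as an integer.
    static void checkError(OSStatus err, const char *operation) {
        if (err == noErr) return;
        UInt32 big = CFSwapInt32HostToBig((UInt32)err);
        char code[5] = {0};
        memcpy(code, &big, 4);
        if (isprint(code[0]) && isprint(code[1]) && isprint(code[2]) && isprint(code[3]))
            fprintf(stderr, "%s failed: '%s'\n", operation, code);
        else
            fprintf(stderr, "%s failed: %d\n", operation, (int)err);
    }

    // Query one input channel's Channel Number Name, guarded by AudioObjectHasProperty().
    static void queryChannelNumberName(AudioObjectID deviceID, UInt32 channel) {
        AudioObjectPropertyAddress addr = {
            kAudioObjectPropertyElementNumberName,
            kAudioObjectPropertyScopeInput,
            channel                     // element 0 is the whole device; 1..N are channels
        };
        if (!AudioObjectHasProperty(deviceID, &addr)) return;

        CFStringRef name = NULL;
        UInt32 size = sizeof(name);
        OSStatus err = AudioObjectGetPropertyData(deviceID, &addr, 0, NULL, &size, &name);
        checkError(err, "AudioObjectGetPropertyData(ElementNumberName)");
        if (err == noErr && name != NULL) {
            CFShow(name);
            CFRelease(name);
        }
    }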
Posted by dkr. Last updated.
Post not yet marked as solved · 3 Replies · 1.5k Views
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVAudioPlayerNode. The player node's output format needs to be something that the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports. So the following:

    AVAudioEngine *engine = [[AVAudioEngine alloc] init];
    playerNode = [[AVAudioPlayerNode alloc] init];
    AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];
    [engine attachNode:self.playerNode];
    [engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

    Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be some library way of doing this; otherwise each developer will have to redefine it for each platform the code will run on (macOS, iOS, etc.) and keep it updated when it changes. I could not find any constant or function that can make such a format, ASBD, or settings. The smartest way I could think of, which does not work:

    AudioStreamBasicDescription toDesc;
    FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                       2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
    AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];

Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
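One possible approach, sketched below on the assumption that "canonical" here means AVAudioFormat's standard format (deinterleaved 32-bit float): build the target format with initStandardFormatWithSampleRate:channels: and convert with AVAudioConverter. Note that convertToBuffer:fromBuffer:error: only performs conversions that preserve the sample rate, which is why the sketch reuses the source rate; the helper name is hypothetical.

    #import <AVFoundation/AVFoundation.h>

    // Convert a PCM buffer to the platform's standard (deinterleaved float) format
    // at the source buffer's own sample rate.
    static AVAudioPCMBuffer *ConvertToStandardFormat(AVAudioPCMBuffer *inBuffer) {
        AVAudioFormat *outFormat =
            [[AVAudioFormat alloc] initStandardFormatWithSampleRate:inBuffer.format.sampleRate
                                                           channels:inBuffer.format.channelCount];
        AVAudioConverter *converter =
            [[AVAudioConverter alloc] initFromFormat:inBuffer.format toFormat:outFormat];
        AVAudioPCMBuffer *outBuffer =
            [[AVAudioPCMBuffer alloc] initWithPCMFormat:outFormat
                                          frameCapacity:inBuffer.frameLength];
        NSError *error = nil;
        if (![converter convertToBuffer:outBuffer fromBuffer:inBuffer error:&error]) {
            NSLog(@"Conversion failed: %@", error);
            return nil;
        }
        return outBuffer;
    }

The player node could then be connected to the mixer using outFormat (or a nil format, letting the engine negotiate) before scheduling the converted buffer.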
Posted by artium. Last updated.
Post not yet marked as solved · 0 Replies · 301 Views
At least under macOS Sonoma 14.2.1, kAudioFormatFlagIsBigEndian for 24-bit audio doesn't seem to be supported by the Core Audio engine when providing kAudioServerPlugInIOOperationWriteMix streaming buffers for our Core Audio server plug-in. Is that correct and to be expected? Or how should the AudioStreamBasicDescription be filled out on a kAudioStreamPropertyPhysicalFormat request to correctly announce 24-bit big-endian audio to Core Audio? Thanks, hagen.
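For reference, this is one way the ASBD in question might be filled out for packed 24-bit big-endian signed integer PCM; whether the HAL accepts it as a plug-in's physical format is exactly the open question above, and the helper name is illustrative.

    #include <CoreAudio/CoreAudioTypes.h>

    // Describe packed 24-bit big-endian signed integer PCM (3 bytes per sample).
    static AudioStreamBasicDescription MakeBigEndian24BitASBD(Float64 sampleRate, UInt32 channels) {
        AudioStreamBasicDescription asbd = {0};
        asbd.mSampleRate       = sampleRate;
        asbd.mFormatID         = kAudioFormatLinearPCM;
        asbd.mFormatFlags      = kAudioFormatFlagIsBigEndian
                               | kAudioFormatFlagIsSignedInteger
                               | kAudioFormatFlagIsPacked;
        asbd.mBitsPerChannel   = 24;
        asbd.mChannelsPerFrame = channels;
        asbd.mBytesPerFrame    = 3 * channels;   // packed: no padding bytes
        asbd.mFramesPerPacket  = 1;
        asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
        return asbd;
    }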
Posted by hagen. Last updated.
Post not yet marked as solved · 1 Reply · 254 Views
Hi, my 2017 MacBook Air's audio appears to be crackling when using FaceTime; when using YouTube or playing music, the speakers are fine, with no distortion. The Mac is fully up to date, on macOS Monterey 12.7.3. Any help would be great; I have tried adjusting input and output levels.
Posted by ducks29. Last updated.
Post not yet marked as solved · 0 Replies · 377 Views
I have a music player that is able to save and restore AU parameters using the kAudioUnitProperty_ClassInfo property. For non-Apple AUs, this works fine. But for any of the Apple units, the class info can be set only the first time after the audio graph is built. Subsequent sets of the property do not stick, even though the OSStatus code is 0 upon return. Previously this worked fine, but at some point, I am not sure when, the Apple-provided AUs changed their behavior, and this is now causing me problems. Can anyone help shed light on this? Thanks in advance for the help. Jeff Frey
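For context, a minimal sketch of the save/restore path described above, using the C Audio Unit API; the helper names are illustrative. The parameter-listener notification after restoring is the commonly suggested companion to setting kAudioUnitProperty_ClassInfo, though it may not bear on the stickiness issue itself.

    #include <AudioToolbox/AudioToolbox.h>

    // Save: copy the AU's current state as a property list (caller releases).
    static CFPropertyListRef CopyClassInfo(AudioUnit au) {
        CFPropertyListRef classInfo = NULL;
        UInt32 size = sizeof(classInfo);
        OSStatus err = AudioUnitGetProperty(au, kAudioUnitProperty_ClassInfo,
                                            kAudioUnitScope_Global, 0,
                                            &classInfo, &size);
        return (err == noErr) ? classInfo : NULL;
    }

    // Restore: set the saved state back, then tell listeners that any
    // parameter may have changed.
    static OSStatus RestoreClassInfo(AudioUnit au, CFPropertyListRef classInfo) {
        OSStatus err = AudioUnitSetProperty(au, kAudioUnitProperty_ClassInfo,
                                            kAudioUnitScope_Global, 0,
                                            &classInfo, sizeof(classInfo));
        if (err == noErr) {
            AudioUnitParameter anyParam = {
                au, kAUParameterListener_AnyParameter, kAudioUnitScope_Global, 0
            };
            AUParameterListenerNotify(NULL, NULL, &anyParam);
        }
        return err;
    }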
Posted by jfrey3460. Last updated.
Post not yet marked as solved · 1 Reply · 2.0k Views
Hi, I am wondering if anyone has found a solution to the automatic volume reduction on the host computer when using the macOS native screen sharing application. The volume reduction makes it nearly impossible to comfortably continue working on the host computer when there is any audio involved. Is there a way to bypass this function? It seems to be the same native function that FaceTime uses to reduce the system audio volume to give priority to the application. Please help save my speakers! Thanks.
Posted. Last updated.
Post not yet marked as solved · 0 Replies · 732 Views
I found an app that makes a PC ***** and takes audio from other Android devices. However, when I connect with my Mac it doesn't work. I use AirPlay for now, but there are latency and quality problems. I have a Late 2013 MacBook Pro, which uses AirPlay 1; it is slower than the newer version. My Wi-Fi router is in my room, but as I said, I want to connect over Bluetooth. Why does this problem appear, and how can I work on this?
Posted by benian. Last updated.