Core Audio


Interact with the audio hardware of a device using Core Audio.

52 posts under the Core Audio tag
Detect when (internal or external) microphone is being used
Hello, I used kAudioDevicePropertyDeviceIsRunningSomewhere to check whether an internal or external microphone is being used. My code works well for the internal microphone and for microphones connected with a cable. External microphones connected over Bluetooth, however, do not report their status: the query always succeeds, but the status is always reported as inactive. The main relevant parts of my code:

```objc
static inline AudioObjectPropertyAddress makeGlobalPropertyAddress(AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress address = {
        selector,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster,
    };
    return address;
}

static BOOL getBoolProperty(AudioDeviceID deviceID, AudioObjectPropertySelector selector) {
    AudioObjectPropertyAddress const address = makeGlobalPropertyAddress(selector);
    UInt32 prop;
    UInt32 propSize = sizeof(prop);
    OSStatus const status = AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &propSize, &prop);
    if (status != noErr) {
        return 0; // This line never gets executed in my tests. The call above always succeeds, but it always gives back a "false" status.
    }
    return static_cast<BOOL>(prop == 1);
}

// ...

__block BOOL microphoneActive = NO;
iterateThroughAllInputDevices(^(AudioObjectID object, BOOL *stop) {
    if (getBoolProperty(object, kAudioDevicePropertyDeviceIsRunningSomewhere) != 0) {
        microphoneActive = YES;
        *stop = YES;
    }
});
```

What could cause this, and how could it be fixed? Thank you in advance for your help!
Replies: 2 · Boosts: 0 · Views: 612 · Nov ’23
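A possible variation on the polling approach above, sketched under the assumption that the device IDs come from the same input-device iteration: register a property listener so the running-somewhere state is pushed instead of polled. This does not explain the Bluetooth behavior by itself, but it makes it easy to log exactly when (and whether) a Bluetooth device ever reports a change.

```c
#include <CoreAudio/CoreAudio.h>
#include <dispatch/dispatch.h>
#include <stdio.h>

// Sketch: observe kAudioDevicePropertyDeviceIsRunningSomewhere instead of
// polling it. deviceID is assumed to come from an input-device iteration
// like the poster's iterateThroughAllInputDevices().
static OSStatus watchRunningSomewhere(AudioObjectID deviceID, dispatch_queue_t queue)
{
    const AudioObjectPropertyAddress address = {
        kAudioDevicePropertyDeviceIsRunningSomewhere,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    return AudioObjectAddPropertyListenerBlock(deviceID, &address, queue,
        ^(UInt32 inNumberAddresses, const AudioObjectPropertyAddress *inAddresses) {
            UInt32 running = 0;
            UInt32 size = sizeof(running);
            // Re-read the property when notified and log the new state.
            if (AudioObjectGetPropertyData(deviceID, &address, 0, NULL, &size, &running) == noErr) {
                printf("device 0x%x running somewhere: %u\n", (unsigned)deviceID, (unsigned)running);
            }
        });
}
```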
UnsafeMutableRawPointer with Xcode 15
After updating Xcode to 15, I encountered a crash with UnsafeMutableRawPointer. To recreate the problem, I wrote this simple test code:

```swift
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        test()
    }

    private func test() {
        var abl = AudioBufferList()
        let capacity = 4096
        let lp1 = UnsafeMutableAudioBufferListPointer(&abl)
        let outputBuffer1 = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
        let outputBuffer2 = UnsafeMutablePointer<Int8>.allocate(capacity: capacity)
        // It crashed here
        lp1[0].mData = UnsafeMutableRawPointer(outputBuffer1)
        lp1[0].mNumberChannels = 1
        lp1[0].mDataByteSize = UInt32(capacity)
        lp1[1].mData = UnsafeMutableRawPointer(outputBuffer2)
        lp1[1].mNumberChannels = 1
        lp1[1].mDataByteSize = UInt32(capacity)
        let lp2 = UnsafeMutableAudioBufferListPointer(&abl)
        let data = (
            UnsafeMutablePointer<Int16>.allocate(capacity: 4096),
            packet: 1
        )
        lp2[0].mData = UnsafeMutableRawPointer(data.0)
    }
}
```

I checked the Xcode 15 release notes and found that something changed around pointer default initialization (P1020R1 - Smart pointer creation with default initialization). Is this causing the problem, or am I doing it wrong? It works perfectly fine with Xcode 14.3.1 and below.

P.S. I can't provide the full crash logs because they're company property, but I can provide this:

```
Exception Type:  EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note:  EXC_CORPSE_NOTIFY
Triggered by Thread: 5

Application Specific Information:
stack buffer overflow

Thread 5 name: Dispatch queue: com.apple.NSXPCConnection.user.endpoint
Thread 5 Crashed:
0  libsystem_kernel.dylib   0x20419ab78  __pthread_kill + 8
1  libsystem_pthread.dylib  0x23de0c3bc  pthread_kill + 268
2  libsystem_c.dylib        0x1d780c44c  __abort + 128
3  libsystem_c.dylib        0x1d77f7868  __stack_chk_fail + 96
```

Clearly there is something wrong with the memory address after initializing the UnsafeMutableRawPointer from the UnsafeMutablePointer<Int8>.
Replies: 6 · Boosts: 1 · Views: 823 · Nov ’23
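The "stack buffer overflow" abort is consistent with the indexing in the sample: the C definition of AudioBufferList declares mBuffers[1], so a bare `AudioBufferList()` only carries storage for a single AudioBuffer, and writing `lp1[1]` runs past the end of the stack variable. Xcode 15's toolchain appears to catch this where earlier versions silently let it through. In Swift, `AudioBufferList.allocate(maximumBuffers:)` exists for exactly this reason; a minimal sketch of the equivalent safe allocation in C:

```c
#include <CoreAudio/CoreAudioTypes.h>
#include <stddef.h>
#include <stdlib.h>

// Sketch: allocate an AudioBufferList with real storage for numBuffers
// entries, instead of relying on the one-element mBuffers[1] declaration.
static AudioBufferList *createAudioBufferList(UInt32 numBuffers, UInt32 bytesPerBuffer)
{
    size_t size = offsetof(AudioBufferList, mBuffers) + numBuffers * sizeof(AudioBuffer);
    AudioBufferList *abl = (AudioBufferList *)calloc(1, size);
    if (!abl) return NULL;
    abl->mNumberBuffers = numBuffers;
    for (UInt32 i = 0; i < numBuffers; i++) {
        abl->mBuffers[i].mNumberChannels = 1;
        abl->mBuffers[i].mDataByteSize = bytesPerBuffer;
        abl->mBuffers[i].mData = malloc(bytesPerBuffer);
    }
    return abl;
}
```

The matching teardown would free each mData pointer and then the list itself.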
FB13398940: Removing a CMIOObjectPropertyListenerBlock ...doesn't do anything?
I've added a listener block for camera notifications. This works as expected: the listener block is invoked when the camera is activated/deactivated. However, when I call CMIOObjectRemovePropertyListenerBlock to remove the listener block, the call succeeds but camera notifications are still delivered to the listener block. Since the header file states this function "Unregisters the given CMIOObjectPropertyListenerBlock from receiving notifications when the given properties change," I'd assume that once it is called, no more notifications would be delivered? Sample code:

```objc
#import <Foundation/Foundation.h>
#import <CoreMediaIO/CMIOHardware.h>
#import <AVFoundation/AVCaptureDevice.h>

int main(int argc, const char * argv[]) {
    AVCaptureDevice* camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    OSStatus status = -1;
    CMIOObjectID deviceID = 0;

    CMIOObjectPropertyAddress propertyStruct = {0};
    propertyStruct.mSelector = kAudioDevicePropertyDeviceIsRunningSomewhere;
    propertyStruct.mScope = kAudioObjectPropertyScopeGlobal;
    propertyStruct.mElement = kAudioObjectPropertyElementMain;

    deviceID = (UInt32)[camera performSelector:NSSelectorFromString(@"connectionID") withObject:nil];

    CMIOObjectPropertyListenerBlock listenerBlock = ^(UInt32 inNumberAddresses, const CMIOObjectPropertyAddress addresses[]) {
        NSLog(@"Callback: CMIOObjectPropertyListenerBlock invoked");
    };

    status = CMIOObjectAddPropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), listenerBlock);
    if(noErr != status) {
        NSLog(@"ERROR: CMIOObjectAddPropertyListenerBlock() failed with %d", status);
        return -1;
    }

    NSLog(@"Monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);

    status = CMIOObjectRemovePropertyListenerBlock(deviceID, &propertyStruct, dispatch_get_main_queue(), listenerBlock);
    if(noErr != status) {
        NSLog(@"ERROR: 'AudioObjectRemovePropertyListenerBlock' failed with %d", status);
        return -1;
    }

    NSLog(@"Stopped monitoring %@ (uuid: %@ / %x)", camera.localizedName, camera.uniqueID, deviceID);
    sleep(10);
    return 0;
}
```

Compiling and running this code outputs:

```
Monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
Stopped monitoring FaceTime HD Camera (uuid: 3F45E80A-0176-46F7-B185-BB9E2C0E436A / 21)
Callback: CMIOObjectPropertyListenerBlock invoked
Callback: CMIOObjectPropertyListenerBlock invoked
```

Note the last two log messages showing that the CMIOObjectPropertyListenerBlock is still invoked, even though CMIOObjectRemovePropertyListenerBlock has successfully been invoked. Am I just doing something wrong here? Or is the API broken?
Replies: 3 · Boosts: 0 · Views: 731 · Dec ’23
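One detail worth ruling out before calling the API broken, offered purely as an assumption: the sample adds the listener on a global queue but removes it with dispatch_get_main_queue(). Keeping the queue and block identical across add and remove is cheap to test. A minimal sketch:

```c
#include <CoreMediaIO/CMIOHardware.h>
#include <Block.h>
#include <stdio.h>

// Sketch: hold on to the exact queue and block used at registration and pass
// the SAME pair to the remove call. Whether the queue mismatch in the posted
// sample is the actual cause is an assumption to verify.
static dispatch_queue_t listenerQueue;
static CMIOObjectPropertyListenerBlock listenerBlock;

static OSStatus startListening(CMIOObjectID deviceID, const CMIOObjectPropertyAddress *address)
{
    listenerQueue = dispatch_queue_create("camera.listener", DISPATCH_QUEUE_SERIAL);
    listenerBlock = Block_copy(^(UInt32 count, const CMIOObjectPropertyAddress addresses[]) {
        printf("property changed\n");
    });
    return CMIOObjectAddPropertyListenerBlock(deviceID, address, listenerQueue, listenerBlock);
}

static OSStatus stopListening(CMIOObjectID deviceID, const CMIOObjectPropertyAddress *address)
{
    // Same queue, same block as the add call.
    return CMIOObjectRemovePropertyListenerBlock(deviceID, address, listenerQueue, listenerBlock);
}
```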
Microphone icon in menu bar does not disappear
Hello, I am an audio developer, currently using macOS version 14.1.1. I noticed that after disabling the microphone, the small yellow dot in Control Center disappears immediately, but the one in the menu bar takes about 20 seconds to disappear. I tested the built-in Voice Memos app and found the same behavior. Our users may be concerned that their privacy is being violated, even though the software is not using the microphone at that time. We believe this is a bug, and the microphone icon in the menu bar should disappear immediately after the microphone is no longer in use. Do you have plans to fix this issue in future versions? Additionally, is there any workaround for the current version? For additional context, we are using the Core Audio API with AudioDeviceStart and AudioDeviceStop, not AudioUnit.
Replies: 0 · Boosts: 0 · Views: 717 · Nov ’23
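There is no documented way to control the indicator's timing, but one thing worth verifying on the app side, offered only as an assumption to test, is that the input is fully torn down: AudioDeviceStop() leaves the IOProc registered with the device, while destroying it removes the last reference. A minimal teardown sketch:

```c
#include <CoreAudio/CoreAudio.h>

// Sketch: stop AND destroy the IOProc created earlier with
// AudioDeviceCreateIOProcID(). Whether the full teardown shortens the
// menu-bar indicator delay is an assumption to verify.
static void stopCapture(AudioObjectID deviceID, AudioDeviceIOProcID procID)
{
    AudioDeviceStop(deviceID, procID);
    AudioDeviceDestroyIOProcID(deviceID, procID);
}
```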
Audio stops working on macOS Sonoma
I'm reporting here some messages from Apple Community about an untracked bug in macOS Sonoma (from 14.0 to 14.2 beta 4 at the time): https://discussions.apple.com/thread/255214328

Original message 1: I've finally noticed a pattern that occurs rather frequently on macOS Sonoma. I was blaming Bluetooth issues before, but it looks like it's more about audio in general. What happens is that at some point, all audio freezes. The hotkeys for the audio controls show a "Stop" sign, as if no audio outputs were connected, and the menu bar is completely unresponsive: Control Centre shows a spinning circle, and the sidebar does not open (Spotlight works, though). If you go to System Settings, some menu items are unresponsive: Sound doesn't open, Bluetooth doesn't open, and Accessibility and Siri & Spotlight don't open either. Then there's a new bug I've only started to notice recently: the screen flashes as if an Accessibility feature were enabled that flashes a warning instead of playing a sound. It appears randomly, out of nowhere, and immediately afterwards sound works normally again. While this is happening, video/audio content in the browser and elsewhere does not work: Tidal shows many random errors, and Firefox completely hangs when you try to play a video on YouTube. I've tried to stop coreaudiod, which restarted the daemon, but nothing else happened. The device is a very fresh M1 Max MacBook, and nothing like this happened on Ventura. I've had audio cracks on another M1 Pro laptop, but this one didn't even have those.

P.S. This happened again just as I was writing this post, and I had disabled Bluetooth just before. Now the Bluetooth section in Settings opens, but the others are still unresponsive. For reference: I have yabai and BetterSnapTool installed, which modify system behavior, but with system protection enabled. Siri is disabled. I've tried stopping a bunch of random processes when this happened, but none helped so far. This issue has constantly haunted me since I upgraded, and it's extremely annoying.

Original message 2: Yes, I'm thinking it's a combination of Bluetooth and audio issues. All apps that try to use audio crash after I simply connect my Bluetooth earbuds. Now I see that coreaudiod is just not running this time: I tried to connect to a Slack Huddle, and it just hung; sound is unresponsive again, and the Settings app is not working as I mentioned before. I checked Activity Monitor and found that the process that handles audio on macOS (coreaudiod) is not running. I attempted to launch coreaudiod with sudo launchctl load /system/library/launchdaemons/com.apple.audio.coreaudiod.plist and got "Load failed: 5: Input/output error" as a response. After a while, when I disconnected the earbuds, it started again on its own; coreaudiod is running, and the audio controls are working once more.

Original message 3: I just accidentally looked at the Console app while searching for other logs, and found that my coreaudiod is crashing repeatedly, 10 to 50 times every day, at intervals ranging from 1 second to a couple of hours (around 5 minutes on average). The crash is the following:

```
Crashed Thread:  18  Dispatch queue: com.apple.audio.device.~:AMS2_StackedOutput:0.event
Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000000
```

I also found that avconferenced occasionally fails too, though very rarely. I believe that's the process that connects an iPad as a second screen, and it too fails with SIGSEGV on 0x0 (though attempting to read memory at 0 is not a unique bug; maybe it's just a coincidence). @Flomaster, do you use Sidecar by chance?

My message: I too have this problem on my MacBook Pro M2 Pro since upgrading to macOS Sonoma. It mainly occurs with AirPods Pro 2, but I have also had it happen using OnePlus Buds. The blockages are the same as described above, and since I often work in video conferences, MS Teams or Google Meet freezing is really becoming a serious problem. Desperate, I tried installing the macOS Sonoma 14.2 beta, but none of the updates solved the problem. I even tried a full restore, re-importing the data from Time Machine, to no avail. Indeed, with beta 4 the problem seems to have worsened, because the AirPods now even struggle to connect.
Replies: 2 · Boosts: 3 · Views: 2.6k · Jan ’24
Avoiding microphone permission popup on macOS Sonoma
I am working on an app that uses Core Audio through the JUCE library for audio. The problem I'm trying to solve is that when the app uses a full-duplex audio interface, such as one from the Focusrite Scarlett series, for output, the app shows a dialog requesting permission to use the microphone. The root cause of the issue is that, by default, Core Audio opens full-duplex devices for both input and output. On previous macOS versions, I was able to work around the problem by disabling the input stream before starting the IOProc, by setting AudioHardwareIOProcStreamUsage to all zero for input. On macOS Sonoma this disables input so that the microphone indicator is not shown, but the permission popup is still shown. What other reasons are there to show the popup? I have noticed that Chrome and Slack have the same problem: they show the microphone popup when trying to play sounds on the Focusrite. Deezer, for example, manages without the popup.
Replies: 2 · Boosts: 0 · Views: 821 · Dec ’23
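For reference, the pre-Sonoma workaround described above looks roughly like the following sketch: build an AudioHardwareIOProcStreamUsage covering every input stream, leave every mStreamIsOn entry zero, and set it on the input scope before starting the IOProc. The stream-count query and error handling are simplified, and, per the post, this no longer suppresses the permission prompt on Sonoma:

```c
#include <CoreAudio/CoreAudio.h>
#include <stddef.h>
#include <stdlib.h>

// Sketch: mark every input stream of an IOProc as unused. The struct is
// variable-length, so it must be sized for the device's input stream count.
static OSStatus disableInputStreams(AudioObjectID deviceID, AudioDeviceIOProcID procID)
{
    AudioObjectPropertyAddress streamsAddr = {
        kAudioDevicePropertyStreams, kAudioObjectPropertyScopeInput, kAudioObjectPropertyElementMain
    };
    UInt32 size = 0;
    OSStatus status = AudioObjectGetPropertyDataSize(deviceID, &streamsAddr, 0, NULL, &size);
    if (status != noErr) return status;
    UInt32 numStreams = size / sizeof(AudioObjectID);

    size_t usageSize = offsetof(AudioHardwareIOProcStreamUsage, mStreamIsOn)
                     + numStreams * sizeof(UInt32);
    AudioHardwareIOProcStreamUsage *usage = calloc(1, usageSize);
    usage->mIOProc = (void *)procID;
    usage->mNumberStreams = numStreams; // all mStreamIsOn[] entries stay 0 (off)

    AudioObjectPropertyAddress usageAddr = {
        kAudioDevicePropertyIOProcStreamUsage, kAudioObjectPropertyScopeInput, kAudioObjectPropertyElementMain
    };
    // Apply before AudioDeviceStart(); the change takes effect at the next start.
    status = AudioObjectSetPropertyData(deviceID, &usageAddr, 0, NULL, (UInt32)usageSize, usage);
    free(usage);
    return status;
}
```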
Access raw audio data with AudioQueueBuffer
I am trying to get the raw audio data from the system microphone using the AudioToolbox and CoreFoundation frameworks. So far, the packet-writing-to-file logic works, but when I try to capture the raw data into a file I get white noise. The callback function looks like this:

```c
static void MyAQInputCallback(void *inUserData,
                              AudioQueueRef inQueue,
                              AudioQueueBufferRef inBuffer,
                              const AudioTimeStamp *inStartTime,
                              UInt32 inNumPackets,
                              const AudioStreamPacketDescription *inPacketDesc)
{
    MyRecorder *recorder = (MyRecorder *)inUserData;
    if (inNumPackets > 0) {
        CheckError(AudioFileWritePackets(recorder->recordFile, FALSE,
                                         inBuffer->mAudioDataByteSize, inPacketDesc,
                                         recorder->recordPacket, &inNumPackets,
                                         inBuffer->mAudioData),
                   "AudioFileWritePackets failed");
        recorder->recordPacket += inNumPackets;

        int sampleCount = inBuffer->mAudioDataByteSize / sizeof(AUDIO_DATA_TYPE_FORMAT);
        AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT *)inBuffer->mAudioData;
        FILE *fp = fopen(filename, "a");
        for (int i = 0; i < sampleCount; i++) {
            fprintf(fp, "%i;\n", samples[i]);
        }
        fclose(fp);
    }
    if (recorder->running)
        CheckError(AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL),
                   "AudioQueueEnqueueBuffer failed");
}
```

Some parameters:

```
NumberRecordBuffers = 3
buffer duration = 0.1
format->mFramesPerPacket = 4096
samplerate = 44100
inNumPackets = 1
recordFormat.mFormatID = kAudioFormatAppleLossless;
recordFormat.mChannelsPerFrame = 1;
recordFormat.mBitsPerChannel = 16;
```

Is this the correct way to do this? I could not find much information in the documentation. Any help is appreciated. Thank you in advance.
Replies: 4 · Boosts: 0 · Views: 814 · Dec ’23
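One plausible cause, offered as an assumption rather than a confirmed diagnosis: kAudioFormatAppleLossless fills the queue buffers with compressed ALAC packets, so interpreting mAudioData as an array of PCM samples prints noise, while AudioFileWritePackets still produces a valid file because it writes the packets untouched. With a linear PCM format, the buffers do contain raw samples. A sketch:

```c
#include <CoreAudio/CoreAudioTypes.h>

// Sketch: a 16-bit mono linear PCM record format whose AudioQueue buffers
// hold raw samples directly, unlike the compressed Apple Lossless format.
static AudioStreamBasicDescription makePCMRecordFormat(void)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mSampleRate       = 44100.0;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mBytesPerFrame    = (fmt.mBitsPerChannel / 8) * fmt.mChannelsPerFrame;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame * fmt.mFramesPerPacket;
    return fmt;
}
```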
ClassInfo Audio Unit Property not being set
I have a music player that is able to save and restore AU parameters using the kAudioUnitProperty_ClassInfo property. For non-Apple AUs, this works fine. But for any of the Apple units, the class info can be set only the first time after the audio graph is built; subsequent sets of the property do not stick, even though the returned OSStatus code is 0. Previously this worked fine, but at some point (I'm not sure when) the Apple-provided AUs changed their behavior, and this is now causing me problems. Can anyone help shed light on this? Thanks in advance for the help. Jeff Frey
Replies: 0 · Boosts: 0 · Views: 561 · Jan ’24
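For context, a minimal sketch of a preset restore via kAudioUnitProperty_ClassInfo, paired with the global parameter-change notification that is conventionally sent after a restore; whether the notification has any bearing on the sticking behavior described above is an assumption:

```c
#include <AudioToolbox/AudioToolbox.h>

// Sketch: restore a saved CFPropertyListRef preset onto an AudioUnit and
// notify any parameter listeners that the unit's state changed.
static OSStatus restoreClassInfo(AudioUnit unit, CFPropertyListRef preset)
{
    OSStatus status = AudioUnitSetProperty(unit, kAudioUnitProperty_ClassInfo,
                                           kAudioUnitScope_Global, 0,
                                           &preset, sizeof(preset));
    if (status == noErr) {
        // Broadcast "any parameter may have changed" after the restore.
        AudioUnitParameter param = { unit, kAUParameterListener_AnyParameter,
                                     kAudioUnitScope_Global, 0 };
        AUParameterListenerNotify(NULL, NULL, &param);
    }
    return status;
}
```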
CoreAudio server plugin: 24bit big endian audio streams
At least under macOS Sonoma 14.2.1, kAudioFormatFlagIsBigEndian for 24-bit audio doesn't seem to be supported by the Core Audio engine when providing kAudioServerPlugInIOOperationWriteMix streaming buffers from our Core Audio server plugin. Is that correct and to be expected? Or how should the AudioStreamBasicDescription be filled out on a kAudioStreamPropertyPhysicalFormat request to correctly announce 24-bit big-endian audio to Core Audio? Thanks, hagen.
Replies: 0 · Boosts: 0 · Views: 498 · Feb ’24
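For comparison, a sketch of how a packed 24-bit physical format's AudioStreamBasicDescription is commonly filled out, with the endianness flag parameterized; whether the engine accepts the big-endian variant is exactly the open question above, so this only pins down the rest of the fields:

```c
#include <CoreAudio/CoreAudioTypes.h>
#include <stdbool.h>

// Sketch: packed 24-bit signed integer PCM, 3 bytes per sample per channel.
static AudioStreamBasicDescription make24BitPhysicalFormat(Float64 sampleRate,
                                                           UInt32 channels,
                                                           bool bigEndian)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = sampleRate;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    if (bigEndian) fmt.mFormatFlags |= kAudioFormatFlagIsBigEndian;
    fmt.mBitsPerChannel   = 24;
    fmt.mChannelsPerFrame = channels;
    fmt.mBytesPerFrame    = 3 * channels; // packed 24-bit frames
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
    return fmt;
}
```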
Audio Server Plugin entitlements and communication
I am currently planning a multi-component software system that consists of an Audio Server plugin and an application for user interaction. I have very little experience with IPC/XPC and its performance implications, so I hope I can find a little guidance here.

The Audio Server plugin publishes a number of multi-channel output devices on which it should perform computations and pass the result on to a different Core Audio device. My concerns here are:

- Can the plugin directly access other Core Audio devices for audio output, or is this prohibited by the sandboxing? If it cannot, would relaying the audio data via XPC be a good idea in terms of low-latency stability?
- Can I use Metal compute from within the Audio Server plugin? I have not found any information about Metal-related sandboxing entitlements, and I am also concerned about performance implications, as above.

Regarding the user interface application, I would like to know:

- Whether a process that has not been started by launchd can communicate with the Audio Server plugin using XPC. If not, would a user agent instead of an app be a better choice? Or are there other communication channels that would work with sandboxing?

Thank you very much! Andreas
Replies: 0 · Boosts: 0 · Views: 616 · Feb ’24
Audio Workgroups: Aux threads joined to workgroup executed on E-Cores when App is in background
We develop virtual instruments for Mac/AU and are trying to get our AU plugins and our standalone player to work with Audio Workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads are running on E-cores, even though they are properly joined to the processing thread's workgroup. This leads to a lot of audible dropouts because deadlines are no longer met. The processing thread itself stays on a P-core, but it has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
Replies: 0 · Boosts: 0 · Views: 481 · Mar ’24
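For reference, the documented join/leave pattern for an auxiliary real-time thread looks like the sketch below; it matches what the post says is already being done, and does not by itself prevent E-core placement when the app is in the background:

```c
#include <os/workgroup.h>

// Sketch: an auxiliary DSP thread joins the render thread's workgroup for
// the duration of its deadline-bound work. wg is assumed to be the
// os_workgroup_t obtained from the host or the device's IO thread.
static void auxiliaryThreadBody(os_workgroup_t wg)
{
    os_workgroup_join_token_s token;
    if (os_workgroup_join(wg, &token) == 0) {
        // ... perform deadline-bound DSP work here ...
        os_workgroup_leave(wg, &token);
    }
}
```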
With the macOS Sonoma 14.4 update there are no rights to relaunch coreaudiod
Some of our installers have suddenly become broken for users running the latest version of macOS. I found that the reason is that we install a Core Audio HAL driver, and because I wanted to avoid a system reboot, I relaunched the coreaudiod daemon from a pkg post-install script:

```
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
```

With the OS update, the command fails if the computer has SIP enabled (which is the default):

```
sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted
```

It would be super nice if either the change could be reverted, or if people in my situation could learn a workaround for hot-plugging (and unplugging) such a HAL driver.
Replies: 5 · Boosts: 1 · Views: 3.3k · Apr ’24
Issue with kAudioProcessPropertyDevices property
The Core Audio framework has a process-class property, kAudioProcessPropertyDevices, which is used to obtain an array of AudioObjectIDs that represent the devices currently used by the process for output. But this property behaves incorrectly: if a process switches from one microphone to another while streaming, this property returns the output device ID as the input device ID.

Steps to reproduce:

1. Run FaceTime.
2. Select "MacBook Pro Microphone" as the input device from the FaceTime menu.
3. Select "MacBook Pro Speaker" as the output device from the FaceTime menu.
4. Start a call.
5. Get kAudioProcessPropertyDevices for the input scope: returns ID1, "MacBook Pro Microphone" [CORRECT].
6. Get kAudioProcessPropertyDevices for the output scope: returns ID2, "MacBook Pro Speaker" [CORRECT].
7. Change the input device in the FaceTime menu to any other microphone ("AirPods Pro", ID3).
8. Get kAudioProcessPropertyDevices for the input scope: returns ID2, "MacBook Pro Speaker", but should be ID3, "AirPods Pro" [INCORRECT].
9. Get kAudioProcessPropertyDevices for the output scope: returns ID2, "MacBook Pro Speaker" [CORRECT].

Monitoring the property change for kAudioProcessPropertyDevices could provide a means to track audio-streaming processes, but its current flaw renders it unusable. So I'm curious whether the macOS developers plan to address this issue in future releases, or whether anyone can come up with a reliable alternative for identifying processes and the associated audio devices being used for playback or recording.
Replies: 0 · Boosts: 1 · Views: 382 · Mar ’24
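A sketch of the query being described, for anyone who wants to reproduce the report; the process object ID is assumed to be obtained elsewhere (for example from the system object's process list), and the scope is kAudioObjectPropertyScopeInput or kAudioObjectPropertyScopeOutput:

```c
#include <CoreAudio/CoreAudio.h>
#include <stdlib.h>

// Sketch: read the devices a process object uses in a given scope.
// Returns the device count; *outDevices must be freed by the caller.
static UInt32 copyProcessDevices(AudioObjectID processObject,
                                 AudioObjectPropertyScope scope,
                                 AudioObjectID **outDevices)
{
    AudioObjectPropertyAddress addr = {
        kAudioProcessPropertyDevices, scope, kAudioObjectPropertyElementMain
    };
    UInt32 size = 0;
    if (AudioObjectGetPropertyDataSize(processObject, &addr, 0, NULL, &size) != noErr)
        return 0;
    *outDevices = (AudioObjectID *)malloc(size);
    if (AudioObjectGetPropertyData(processObject, &addr, 0, NULL, &size, *outDevices) != noErr)
        return 0;
    return size / sizeof(AudioObjectID);
}
```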
Is AudioObjectShow() supposed to work? On what audio objects?
Hi! Hopefully a simple question. I just reached for AudioObjectShow() to help debug stuff, and it does not appear to work on audio devices or audio streams: it prints nothing for them. It does work on kAudioObjectSystemObject; I've not explored what else it does or doesn't work on. I could not find any other posts about this. Is it expected to work? On all audio objects? I'm on macOS 14.4.

Here is a simple demo. AudioObjectShow() prints out info for kAudioObjectSystemObject, but then prints nothing as we loop through the audio devices in the system (and likewise for all streams on all these devices, not shown here).

```c
#include <CoreAudio/AudioHardware.h>
#include <stdio.h>
#include <stdlib.h>

static const AudioObjectPropertyAddress devicesAddress = {
    kAudioHardwarePropertyDevices,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

static const AudioObjectPropertyAddress nameAddress = {
    kAudioObjectPropertyName,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMain
};

int main(int argc, const char *argv[])
{
    UInt32 size;
    AudioObjectID *audioDevices;

    AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size);
    audioDevices = (AudioObjectID *)malloc(size);
    AudioObjectGetPropertyData(kAudioObjectSystemObject, &devicesAddress, 0, NULL, &size, audioDevices);
    UInt32 nDevices = size / sizeof(AudioObjectID);

    printf("--- AudioObjectShow(kAudioObjectSystemObject):\n");
    AudioObjectShow(kAudioObjectSystemObject);

    for (int i = 0; i < nDevices; i++) {
        printf("-------------------------------------------------\n");
        printf("audioDevices[%d] = 0x%x\n", i, audioDevices[i]);

        AudioObjectGetPropertyDataSize(audioDevices[i], &nameAddress, 0, NULL, &size);
        CFStringRef cfString = NULL; // GetPropertyData fills in the CFStringRef itself
        AudioObjectGetPropertyData(audioDevices[i], &nameAddress, 0, NULL, &size, &cfString);
        CFShow(cfString);

        // Does AudioObjectShow() give us anything?
        printf("--- AudioObjectShow(audioDevices[%d]=0x%x):\n", i, audioDevices[i]);
        AudioObjectShow(audioDevices[i]);
        printf("---\n");
    }
}
```

Start of output:

```
AudioObjectID: 0x1
Class: Audio System Object
Name: The Audio System Object
-------------------------------------------------
audioDevices[0] = 0xd2
Darryl’s iPhone Microphone
--- AudioObjectShow(audioDevices[0]=0xd2):
---
-------------------------------------------------
audioDevices[1] = 0x41
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[1]=0x41):
---
-------------------------------------------------
audioDevices[2] = 0x3b
LG UltraFine Display Audio
--- AudioObjectShow(audioDevices[2]=0x3b):
---
-------------------------------------------------
audioDevices[3] = 0x5d
BlackHole 16ch
--- AudioObjectShow(audioDevices[3]=0x5d):
---
-------------------------------------------------
```
Replies: 1 · Boosts: 0 · Views: 461 · Mar ’24
wot WHAT is what? Unexpected CoreAudio get property errors/behaviors
I see unexpected behavior when using AudioObjectGetPropertyData() to get the Channel Number Name or the Channel Category Name for the iPhone Microphone or MacBook Pro Microphone audio devices. I am running macOS 14.4 Sonoma on an Intel MacBook Pro 15" 2019.

I have a test program that loops through all audio devices on a system and all channels on each device. It uses AudioObjectGetPropertyData() to get the device name and manufacturer name, and then iterates over the input and output channels, getting the Channel Number Name, Channel Name, and Channel Category. I would expect some of these values (as the Channel Name frequently is) to be empty CFStrings, or for others to return FALSE from AudioObjectHasProperty() if the driver does not implement the property. And that is how things behave for most devices...

...except for the MacBook Pro Microphone and iPhone Microphone devices. There, AudioObjectHasProperty() returns TRUE, but then an AudioObjectGetPropertyData() call with the exact same AudioObjectPropertyAddress returns the error code 'WHAT'. It took me a little while to realize the error code being returned was 'WHAT', not 'what', and I added a modified checkError() function to capture that and more.

So what surprised me is:

- If AudioObjectHasProperty() returns TRUE, then I expect the matching AudioObjectGetPropertyData() call to work.
- What the heck is 'WHAT'? I assume it is supposed to mean 'what', a.k.a. kAudioHardwareUnspecifiedError. Why is the actual error value not returned? Are there other places that return 'WHAT' or capitalized versions of these standard OSStatus Core Audio errors?

The example program is not complex but is too long for here, so it's on GitHub at https://github.com/Darryl-Ramm/Wot

Here is some output from that program showing the unexpected behavior: output.txt
Replies: 3 · Boosts: 0 · Views: 515 · Mar ’24
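A small helper along the lines of the modified checkError() mentioned above: print an OSStatus as a four-character code only when all four bytes are printable, which makes 'WHAT' versus 'what' visible at a glance:

```c
#include <MacTypes.h>
#include <stdio.h>

// Sketch: render an OSStatus as a four-char code when possible, otherwise
// fall back to the plain decimal value.
static void printOSStatus(OSStatus status)
{
    UInt32 u = (UInt32)status;
    char c[5] = {
        (char)(u >> 24), (char)(u >> 16), (char)(u >> 8), (char)u, 0
    };
    int printable = 1;
    for (int i = 0; i < 4; i++) {
        if (c[i] < 32 || c[i] > 126) printable = 0;
    }
    if (printable) printf("OSStatus '%s' (%d)\n", c, (int)status);
    else           printf("OSStatus %d\n", (int)status);
}
```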
Failed to start VoiceProcessingIO AudioUnit on Vision Pro (visionOS 1.1.1)
Hello, we are trying to use audio calling functionality on visionOS, with no success since the visionOS update. We do not use CallKit for this flow. We set the AVAudioSession as follows:

```objc
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                        mode:AVAudioSessionModeVoiceChat
                     options:(AVAudioSessionCategoryOptionAllowBluetooth |
                              AVAudioSessionCategoryOptionAllowBluetoothA2DP |
                              AVAudioSessionCategoryOptionMixWithOthers)
                       error:&error_];
```

We create our audio unit as follows:

```objc
AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;

AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),
                 "couldn't create a new instance of Apple Voice Processing IO.");

UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Input, audioUnitElementIOInput,
                                      &one_, sizeof(one_)),
                 "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Output, audioUnitElementIOOutput,
                                      &one_, sizeof(one_)),
                 "could not enable output on Apple Voice Processing IO");

IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);
bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel, CAStreamBasicDescription::kPCMFormatInt16, isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input, 0, &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Output, 1, &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the output client format on Apple Voice Processing IO");

UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0, &maxFramesPerSlice_, sizeof(UInt32)),
                 "couldn't set max frames per slice on Apple Voice Processing IO");
UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0, &maxFramesPerSlice_, &propSize_),
                 "couldn't get max frames per slice on Apple Voice Processing IO");

AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Output, 0,
                                      &renderCallbackStruct_, sizeof(renderCallbackStruct_)),
                 "couldn't set render callback on Apple Voice Processing IO");

AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                                      kAudioUnitScope_Input, 0,
                                      &inputCallbackStruct_, sizeof(inputCallbackStruct_)),
                 "couldn't set render callback on Apple Voice Processing IO");
```

And as soon as we try to start the AudioUnit, we get the following error:

```
PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]:
Error Domain=com.apple.coreaudio.phase Code=1346924646
"failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6"
UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}
```

We do not use the PHASE framework on our side, and the error is not clear to us, nor documented anywhere. We also tried an AudioUnit that only does speaker output, which works perfectly, but as soon as we try to record from an AudioUnit, the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording. We suppose that somehow, inside PHASE, a VoIP IO audio unit is running that we cannot stop or kill when we try to create our own, and it stops the whole flow. It used to work on visionOS 1.0.1. Regards, Summit-tech
Replies: 0 · Boosts: 0 · Views: 537 · Mar ’24