I'd like to know:
Let's say there's a backgrounded app which has microphone access, such as Signal, SoundHound, or Shazam. It's established that these apps are allowed to record audio in the user's environment even after being backgrounded, seemingly for as long as they want, and even to upload that sound data.
But can they ALSO continue recording even while another app that is in the foreground is using the microphone, such as the Phone app or Signal?
AudioUnit
Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.
Posts under the AudioUnit tag (30 posts)
I've been generating new Audio Unit Extension apps with Xcode 16 (and newer), and although they generally work at first, it is easy (though I'm not sure how to trigger it reliably) to get an app into a state where it can no longer instantiate the audio unit. Generally the call to AVAudioUnit.findComponent fails and SimplePlayEngine hits the fatalError("Failed to find component with type...").
In the most recent project, merely adding files to the extension (without making any use of them) caused it to go off the rails.
If I "Archive" the app+plugin, there is no audio unit extension in the bundle.
If I switch to the audio unit extension and build it, it's fine. If I look at the build folder in Library/Developer/Xcode/project_folder, the extension_name.appex is there.
Any ideas? If I can coax an unmodified audio unit extension project to exhibit this behavior I'll attach it here. Right now what I have has code I don't want to share.
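In the meantime, here is a minimal diagnostic sketch (the all-zero description acts as a wildcard; substitute the extension's actual type/subtype/manufacturer) that lists what the component system has actually registered, which can help tell a packaging problem apart from a lookup problem:

import AudioToolbox
import AVFAudio

// A wildcard description (all zeros) matches every registered audio unit component;
// look for the extension's type/subtype/manufacturer in the output.
let wildcard = AudioComponentDescription(componentType: 0,
                                         componentSubType: 0,
                                         componentManufacturer: 0,
                                         componentFlags: 0,
                                         componentFlagsMask: 0)
for component in AVAudioUnitComponentManager.shared().components(matching: wildcard) {
    print(component.name, component.manufacturerName, component.audioComponentDescription)
}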
I have a weird case that shouldn't happen according to https://forums.developer.apple.com/forums/thread/706390
I have an audio unit which runs in FCP and I want it to launch a sandboxed app as a child process.
If I sign the child app with just "com.apple.security.app-sandbox" entitlement it crashes with SYSCALL_SET_PROFILE error.
According to the article referenced above: "This indicates that the process tried to setup its sandbox profile but that failed, in this case because it already has a sandbox profile."
This makes sense because audio units run in a sandboxed environment (in AUHostingService process).
So I added "com.apple.security.inherit" to the entitlements plist and now I get "Process is not in an inherited sandbox." error.
According to the article referenced above: "Another cause of a trap within _libsecinit_appsandbox is when a nonsandboxed process runs another program as a child process and that other program’s executable has the com.apple.security.app-sandbox and com.apple.security.inherit entitlements. That is, the child process wants to inherit its sandbox from its parent but there’s nothing to inherit."
And this doesn't make sense at all. The first error indicates the child process is trying to create a sandboxed environment within a parent sandboxed environment, while the second error indicates there is no parent sandboxed environment...
I specifically checked the child process has "com.apple.security.app-sandbox" and "com.apple.security.inherit" entitlements only.
If I remove all entitlements from the child process it launches and runs fine from the audio unit plugin. And if I remove "com.apple.security.inherit" but leave "com.apple.security.app-sandbox" I can successfully launch the app in standalone mode (in Finder).
For testing purposes I use a simple Hello World desktop application generated by Xcode (Obj-C).
Does anybody have an idea what could cause such weird behavior?
Hi!
I am creating an aumi AUv3 extension and I am trying to achieve simultaneous connections to multiple other AVAudioNodes. I would like to know whether it is possible to route the MIDI to different outputs inside the render process of the AUv3.
I am using connectMIDI(_:to:format:eventListBlock:) to connect the output of the AUv3 to multiple AVAudioNodes. However, when I send MIDI out of the AUv3, it gets sent to all of the AVAudioNodes connected to it. I can't seem to find any documentation on how to route the MIDI to only one of the connected nodes. Is this possible?
For an upcoming update of one of my apps, I’m facing an issue:
The .rate parameter of an AVAudioUnitTimePitch allows me to slow down an audio track without any issues: setting .rate to 0.7 or 0.8 results in almost perfect playback without changing pitch.
However, whenever the .rate parameter is greater than 1 (e.g. 1.1 or 1.15), I start to hear audio artifacts (“fluttering”) in the audio output, which is not so nice (even at .overlap = 32).
Intuitively, I’d have thought that speeding up the file would produce fewer artifacts than slowing it down?
I’ve tried different sample rates (44.1 kHz and 48 kHz), but same result.
Grateful for any input on this 🙏
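For context, a minimal sketch of the kind of graph in question; the file path and parameter values are illustrative:

import AVFAudio

// Minimal playback graph with a time-pitch unit; rates above 1.0 are where the
// artifacts described above show up.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let timePitch = AVAudioUnitTimePitch()
timePitch.rate = 1.15     // > 1.0 plays faster without changing pitch
timePitch.overlap = 32    // higher overlap should, in theory, reduce artifacts

engine.attach(player)
engine.attach(timePitch)

let file = try AVAudioFile(forReading: URL(fileURLWithPath: "/path/to/track.m4a"))
engine.connect(player, to: timePitch, format: file.processingFormat)
engine.connect(timePitch, to: engine.mainMixerNode, format: file.processingFormat)

try engine.start()
player.scheduleFile(file, at: nil, completionHandler: nil)
player.play()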
I’m working on a memo app that records audio from the iPhone’s microphone (and other devices like MacBook or iPad) and processes it in 10-second chunks at a target sample rate of 16 kHz. However, I’ve encountered limitations with installTap in AVAudioEngine, which doesn’t natively support configuring a target sample rate on the mic input (the default being 44.1 kHz).
To address this, I tried using AVAudioMixerNode to downsample the mic input directly. Although everything seems correctly configured, no audio is recorded—just a flat signal with zero levels. There are no errors, and all permissions are granted, so it seems like an issue with downsampling rather than the mic setup itself.
To make progress, I implemented a workaround that resamples each chunk delivered by installTap (every 50 ms in my case) with AVAudioConverter. While this works, it can introduce artifacts at the beginning and end of each chunk, likely because each chunk is converted separately instead of being downsampled continuously.
Here are the key issues and questions I have:
1. Can we change the mic input sample rate directly using AVAudioSession or another native API in AVAudio? Setting up the desired sample rate initially would be ideal for my use case.
2. Are there alternatives to installTap for recording audio at a different sample rate or for continuously downsampling the live input without chunk-based artifacts?
This issue seems longstanding, as noted in a 2018 forum post:
https://forums.developer.apple.com/forums/thread/111726
Any guidance on configuring or processing mic input at a lower sample rate in real-time would be greatly appreciated. Thank you!
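For reference, a rough sketch of the tap-plus-converter workaround described above, with a single AVAudioConverter reused across buffers so that, at least in principle, its internal resampling state carries over between chunks. The 16 kHz mono target format and buffer sizes are assumptions:

import AVFAudio

let engine = AVAudioEngine()
let input = engine.inputNode
let inputFormat = input.outputFormat(forBus: 0)            // typically 44.1 or 48 kHz
let targetFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                 sampleRate: 16_000,
                                 channels: 1,
                                 interleaved: false)!
// One converter, kept alive across taps, so resampling state is not reset per chunk.
let converter = AVAudioConverter(from: inputFormat, to: targetFormat)!

input.installTap(onBus: 0, bufferSize: 2048, format: inputFormat) { buffer, _ in
    let ratio = targetFormat.sampleRate / inputFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 1
    guard let output = AVAudioPCMBuffer(pcmFormat: targetFormat, frameCapacity: capacity) else { return }

    // Feed exactly one input buffer per conversion call.
    var consumed = false
    let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
        if consumed {
            outStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return buffer
    }
    var error: NSError?
    let status = converter.convert(to: output, error: &error, withInputFrom: inputBlock)
    if status == .error { return }
    // `output` now holds ~16 kHz mono audio for this chunk; accumulate or process it here.
}

try engine.start()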
Somehow I have a corrupted audio plugin authentication problem. I’m on a silicon Mac M1 and two audio plugins that were installed and working will now not authenticate. The vendors both are unable to troubleshoot and I think the issue is a corrupted low level file. One product authenticates correctly when I created a new user but another plugin only authenticates on the original user account and not on the newly created user. Reinstalling the plugins and the Mac OS does not fix the issue. Any thoughts?
I'm running into an issue where in some cases, when the AUHostingServiceXPC_arrow process is shut down by Logic, the process is terminated abruptly without calling AP_Close on all of the plugins hosted in the process. In our case, we have filesystem resources we need to clean up, and having stale files around from the last run can cause issues in new sessions, so this leak is having some pretty gnarly effects.
I can reproduce the issue using only Apple sample plugins, and it seems to be triggered by a timeout. If I have two different AU plugins in the session, and I add a 1 second sleep to the destructor of one of the sample plugins, Logic will force terminate the process and the remaining destructors are not called (even for the plugins without the 1 second sleep).
Is there a way to avoid this behavior? Or to safely clean up our plugin even if other plugins in the session take a second to tear down?
After more than a week invested in converting a bunch of audio unit projects into app + appex + framework, they are all now correctly loaded in-process in the demo host app that is part of Xcode's template.
However, Logic Pro adamantly refuses to load them in-process.
Does Logic Pro simply not do that ever, or is there some hint or configuration my plugins need to provide to enable that? If it is unsupported, will it be supported in some future version of Logic?
The entire point of investing that week was performance, which is moot if it is impossible to test the impact of loading in-process in a real-world usage scenario.
When we tested the audio quality of our VoIP app, we found that when an iOS 18.0 device played through AirPods Pro 2, we could hear noise similar to peak clipping and distortion, especially when the source audio was loud and high-pitched. Here is the device information we tested:
Model: iPhone 16 Pro Max, iPhone 15 Pro
System version: iOS 18.0 (22A3354)
Bluetooth headset model: AirPods Pro 2
Bluetooth firmware version: 6F8
We tested multiple apps (including phone calls, FaceTime, Zoom, WeChat, Tencent Meeting), and they all had the above noise problem.
We also found two phenomena:
If we use the same iOS 18 device to connect HUAWEI FreeBuds Pro or FreeBuds 2, there is no such noise problem;
If we use an iOS 17 device to connect to the same AirPods Pro 2 for testing, there is no such noise problem;
Therefore, we suspect a compatibility problem between iOS 18.0 and AirPods firmware 6F8. The firmware version of our AirPods Pro 2 is 6F8, which was released on June 26, while iOS 18.0 was released on September 16, so perhaps they are not fully compatible. We hope that a subsequent firmware update can fix this problem.
Hi!
I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents.
If I toggle the isMuted property of a track, it mutes instantly when changed to true. However, after setting it back to false, the events only trigger on the next round of the loop and not immediately. Is this intended behaviour, and is there some way to get the events to trigger immediately after setting isMuted back to false?
SoundRecognition causes Input/Output callbacks to have varying Buffer sizes and introduces Glitching
Hello,
We have noticed an issue with SoundRecognition that causes glitching with our AudioUnit setup in Smule.
Input and output frame sizes are inconsistent.
Input frame size does not match [AVAudioSession sharedInstance].IOBufferDuration
My best guess is that SoundRecognition influences the input frame size and not the output frame size.
To reproduce use the example app here:
https://github.com/MarkoGill/SoundRecognitionBug
Hardware/OS
iPhone 14 Pro on iOS 18 -> Experiences the problem
iPhone 11 on iOS 18 -> Experiences the problem
iPhone 15 on iOS 18 -> Not experiencing the problem
Reproduction Steps
Enable Sound Recognition (Settings > Accessibility > Sound Recognition > On)
Enable a Sound for detection (Sounds > Dog > On)
Open the example app with headset (it routes input to output)
Notice glitching occurs
Check the logs. Record and Playback buffer sizes vary
Example Log:
AU input sample rate: 48000.000000
AU output sample rate: 48000.000000
hardware sample rate: 48000.000000
hardware buffer size: 1104.000000
updated record frame counts: 1024
updated playback frame counts: 1104
Notes:
You can disable Sound Recognition, restart the app, and playback behaves correctly.
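For reference, a small diagnostic sketch comparing the session's nominal buffer size with what the render callbacks report; the numbers in the comments mirror the log above, and the per-callback counts would come from the AU render procs:

import AVFAudio

// Nominal frames per I/O cycle according to the session configuration.
let session = AVAudioSession.sharedInstance()
let nominalFrames = session.ioBufferDuration * session.sampleRate
print("nominal frames per callback: \(nominalFrames)")   // e.g. ~1104 at 48 kHz
// With Sound Recognition enabled, the input callback has been observed to deliver a
// different count (e.g. 1024) than the output callback, which is what causes the glitches.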
I've got a bunch of AudioUnit projects approaching release, and am attempting to build an installer for them.
All are based on the AudioUnit template in Xcode 14.
What actually governs how the system detects an AudioUnit? The instructions I have seen say that the built .appex should be renamed with the extension .component and installed into /Library/Audio/Plug-Ins/Components/. Great: I am able to build a signed installer that does exactly that (i.e. it strips out the built Application project that is part of the AudioUnit template but useless to, say, a Logic Pro user) and installs the .appex that declares the plugin and embeds a Framework containing the actual code (so it can be loaded in-process).
auval -l does not show it after running the installer, nor does Console show anything logged suggesting that it was found but malformed or something like that.
Meanwhile, simply building the project causes auval -l to show an install of it in the build directory, and I have noticed that if I delete that, auval -l still shows the plugin installed, but now at the location where I exported an archive of the project (!!). What black magic is this?
However, after deleting both the recent build and the archive and then running the installer, there is no indication that AudioComponentRegistry even sees the copy in one of the two locations actually documented as valid install locations for an AudioUnit.
I have, however, installed one free third-party AUv3 which did install into /Library/Audio/Plug-Ins/Components/.
Am I misunderstanding something about how this works? Is there some string other than AudioComponentRegistry I should filter on in Console that might provide a clue why my AudioUnit installed there is not picked up? Must I ship the semi-pointless Application that is part of the Xcode template project, so that whatever magical mechanism detects the plugin when I build can work its magic on end users' machines?
Or could the problem be that the Framework with the actual code lives under Contents/Frameworks inside the audio unit, rather than being installed independently into /Library/Frameworks?
I've got a bunch of Audio Units I've been developing over the last few months - in Xcode 14 based on the Audio Unit template that ships in Xcode.
The DSP heavy-lifting is all done in Rust libraries, which I build for Intel and Apple Silicon, merge using lipo and build XCFrameworks from. There are several of these - one with the DSP code, and several others used from the UI - a mix of SwiftUI and Cocoa.
Getting the integration of Rust, Objective C, C++ and Swift working and automated took a few weeks (my first foray into C++ since the 1990s), but has been solid, reliable and working well - everything loads in Logic Pro and Garage Band and works.
But now I'm attempting the (woefully underdocumented) process of making 13 audio unit projects able to be loaded in-process - which means moving all of the actual code into a Framework. And therein lies the problem:
None of the xcframeworks are modular, which leads to the dreaded "Include of non-modular header inside framework module". Importing them directly into the app extension project with "Allow Non-modular Includes in Framework Modules" enabled works fine - but in a framework this seems to be verboten.
So, the obvious solution would be to add a module map to each xcframework, and then, poof, they're modular.
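(For reference, that module map is just a plain Clang module map placed alongside the headers the xcframework bundles - a minimal sketch, with the module name and header name being assumptions:)

// module.modulemap - illustrative only; module and header names are assumptions
module RustDSP {
    header "rust_dsp.h"
    export *
}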
BUT... due to a peculiar limitation of Xcode's build system I've spent days searching for a workaround for, it is only possible to have ONE xcframework containing a module.modulemap file in any project. More than that and xcodebuild will try to clobber them with each other and the build will fail. And there appears to be NO WAY to name the file anything other than module.modulemap inside an xcframework and have it be detected. So I cannot modularize my frameworks, because there is more than one of them.
How the heck does one work around this? A custom module map file somewhere (that the build should find and understand applies to four xcframeworks - how?)? Something else?
I've seen one dreadful workaround - https://medium.com/@florentmorin/integrating-swift-framework-with-non-modular-libraries-d18098049e18 - given that I'm generating a lot of the C and Objective C code for the audio in Rust, I suppose I could write a tool that parses the header files and generates Objective C code that imports each framework and declares one method for every single Rust call. But it seems to me there has to be a correct way to do this without jumping through such hoops.
Thoughts?
Hello! The new lower latency support for AirPods in Game Mode is impressive, but I'm not sure of the best way to handle the transition into/out of Game Mode while audio is playing. In order to lower the latency, the system appears to drop some number of samples, with the result being a good deal less latency. My use case is macOS where it's easier to switch in/out of the fullscreen game (a simple swipe left), thus causing more issues for Game Mode since the audio is playing the entire time. It would be nice if offscreen games could remain in game mode, but I understand not wanting to give developers that control.
Are there any best practices for avoiding or masking the audio glitch caused by this skip-ahead? Is there a system event I can receive to know when Game Mode is about to be enabled or disabled, where I could perhaps fade out the audio? My callback checks the inTimestamp->mSampleTime value to detect gaps, but it only rarely detects a Game Mode gap, even though the audio skip-ahead always happens.
BTW, I am currently only developing on macOS (15.0) and I'm working at a low level with AudioUnit callbacks and a SpatialMixer. I am not currently using any higher-level audio APIs.
And here are a few questions I don't necessarily expect answers to, but it doesn't hurt to ask: Are there any additional technical details about how this latency reduction works, or exactly how much of a reduction is achieved (or, said another way, how many samples are dropped)? How much does this affect AirPods battery life? And finally, is there a way to query the actual latency value? I check the value for kAudioDevicePropertyLatency but it seems to always report 160ms for AirPods. Thanks!
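For reference, a sketch of one way to approximate total output latency by summing the device's Core Audio latency components; obtaining the AirPods device ID and the output scope are assumptions here, and the per-stream latency (kAudioStreamPropertyLatency on the device's output stream) would add on top:

import CoreAudio

// Queries a UInt32 frame-count property on the device's output scope.
func queryFrames(_ deviceID: AudioObjectID, _ selector: AudioObjectPropertySelector) -> UInt32 {
    var address = AudioObjectPropertyAddress(
        mSelector: selector,
        mScope: kAudioObjectPropertyScopeOutput,
        mElement: kAudioObjectPropertyElementMain)
    var value: UInt32 = 0
    var size = UInt32(MemoryLayout<UInt32>.size)
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &size, &value) == noErr else { return 0 }
    return value
}

// Rough total: device latency + safety offset + I/O buffer size, all in frames.
func approximateOutputLatencyFrames(for deviceID: AudioObjectID) -> UInt32 {
    return queryFrames(deviceID, kAudioDevicePropertyLatency)
        + queryFrames(deviceID, kAudioDevicePropertySafetyOffset)
        + queryFrames(deviceID, kAudioDevicePropertyBufferFrameSize)
}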
Hello everyone,
I'm new to Core Audio and still haven't found my footing. I'm learning how to capture audio from the default device using Audio Units. On my MacBook, the default audio input is mono, but when I write a piece of code to capture audio using AUHAL, I'm discovering that I need to provide an AudioBufferList with two channels, not one. Also, when I try to capture audio from an audio interface with 20 audio inputs, I must provide an AudioBufferList with two channels, and not with 20 channels.
To investigate the issue, I wrote a small diagnostic program, which opens the default audio device and probes it for the number of channels. Depending on which way I probe, I get different results: when I probe the stream format, I'm told there is 1 channel, but when I probe the input audio unit, I'm told there are 2 input channels.
Here's my program to demonstrate the issue:
// InputDeviceChannels.m
// Compile with:
// clang -framework CoreAudio -framework AudioToolbox -framework CoreFoundation -framework AudioUnit -o InputDeviceChannels InputDeviceChannels.m
//
// On my system, this prints:
// Device Name: MacBook Pro Microphone
// Number of Channels (Stream Format): 1
// Number of Elements (Element Count): 2
#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>
#import <CoreAudio/CoreAudio.h>
#import <Foundation/Foundation.h>
void printDeviceInfo(AudioUnit audioUnit) {
  UInt32 size;
  OSStatus err;

  AudioStreamBasicDescription streamFormat;
  size = sizeof(streamFormat);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 1,
                             &streamFormat, &size);
  if (err != noErr) {
    printf("Error getting stream format\n");
    exit(1);
  }
  int numChannels = streamFormat.mChannelsPerFrame;

  UInt32 elementCount;
  size = sizeof(elementCount);
  err = AudioUnitGetProperty(audioUnit, kAudioUnitProperty_ElementCount, kAudioUnitScope_Input, 0,
                             &elementCount, &size);
  if (err != noErr) {
    printf("Error getting element count\n");
    exit(1);
  }

  printf("Number of Channels (Stream Format): %d\n", numChannels);
  printf("Number of Elements (Element Count): %d\n", elementCount);
}

void printDeviceName(AudioDeviceID deviceID) {
  UInt32 size;
  OSStatus err;
  CFStringRef deviceName = NULL;
  size = sizeof(deviceName);
  err = AudioObjectGetPropertyData(
      deviceID,
      &(AudioObjectPropertyAddress){kAudioDevicePropertyDeviceNameCFString,
                                    kAudioObjectPropertyScopeGlobal,
                                    kAudioObjectPropertyElementMain},
      0, NULL, &size, &deviceName);
  if (err != noErr) {
    printf("Error getting device name\n");
    exit(1);
  }
  char deviceNameStr[256];
  if (!CFStringGetCString(deviceName, deviceNameStr, sizeof(deviceNameStr),
                          kCFStringEncodingUTF8)) {
    printf("Error converting device name to C string\n");
    exit(1);
  }
  CFRelease(deviceName);
  printf("Device Name: %s\n", deviceNameStr);
}

int main(int argc, const char *argv[]) {
  @autoreleasepool {
    OSStatus err;

    // Get the default input device ID
    AudioDeviceID input_device_id = kAudioObjectUnknown;
    {
      UInt32 property_size = sizeof(input_device_id);
      AudioObjectPropertyAddress input_device_property = {
          kAudioHardwarePropertyDefaultInputDevice,
          kAudioObjectPropertyScopeGlobal,
          kAudioObjectPropertyElementMain,
      };
      err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &input_device_property, 0, NULL,
                                       &property_size, &input_device_id);
      if (err != noErr || input_device_id == kAudioObjectUnknown) {
        printf("Error getting default input device ID\n");
        exit(1);
      }
    }

    // Print the device name using the input device ID
    printDeviceName(input_device_id);

    // Open audio unit for the input device
    AudioComponentDescription desc = {kAudioUnitType_Output, kAudioUnitSubType_HALOutput,
                                      kAudioUnitManufacturer_Apple, 0, 0};
    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioUnit audioUnit;
    err = AudioComponentInstanceNew(component, &audioUnit);
    if (err != noErr) {
      printf("Error creating AudioUnit\n");
      exit(1);
    }

    // Enable IO for input on the AudioUnit and disable output
    UInt32 enableInput = 1;
    UInt32 disableOutput = 0;
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input,
                               1, &enableInput, sizeof(enableInput));
    if (err != noErr) {
      printf("Error enabling input on AudioUnit\n");
      exit(1);
    }
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output,
                               0, &disableOutput, sizeof(disableOutput));
    if (err != noErr) {
      printf("Error disabling output on AudioUnit\n");
      exit(1);
    }

    // Set the current device to the input device
    err = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_CurrentDevice,
                               kAudioUnitScope_Global, 0, &input_device_id, sizeof(input_device_id));
    if (err != noErr) {
      printf("Error setting device for AudioUnit\n");
      exit(1);
    }

    // Initialize AudioUnit
    err = AudioUnitInitialize(audioUnit);
    if (err != noErr) {
      printf("Error initializing AudioUnit\n");
      exit(1);
    }

    // Print device info
    printDeviceInfo(audioUnit);

    // Clean up
    AudioUnitUninitialize(audioUnit);
    AudioComponentInstanceDispose(audioUnit);
  }
  return 0;
}
It prints:
Device Name: MacBook Pro Microphone
Number of Channels (Stream Format): 1
Number of Elements (Element Count): 2
I tried to set the number of channels to 1 on the input unit, but it didn’t change anything. After calling setNumberOfChannels(1, audioUnit), I’m still getting the same output.
Note 1: I know that I can ignore one channel, etc, etc. My purpose here is not to "somehow get it to work", I already did that. My purpose is to understand the API, so that I'll be able to write code that handles any number of audio inputs.
Note 2: I already read a bunch of documentation, especially this here: https://developer.apple.com/library/archive/technotes/tn2091/ - perhaps the channel map could help here, but I can’t make sense of it - I tried to use it based on my understanding but I only got the -50 OSStatus.
How should I understand this? Is it that the audio unit is an abstraction layer that automatically converts mono input into stereo input? Can I ask AUHAL to provide the same number of input channels as the audio device has?
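For comparison, here is a sketch (in Swift, names illustrative) of asking the HAL itself for a device's input channel count via its stream configuration, which reflects the hardware layout (1 for the built-in mic, 20 for the interface) rather than AUHAL's default per-element format:

import CoreAudio

// Sums the channel counts of all input streams of the given device.
func inputChannelCount(of deviceID: AudioObjectID) -> Int {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioDevicePropertyStreamConfiguration,
        mScope: kAudioObjectPropertyScopeInput,
        mElement: kAudioObjectPropertyElementMain)
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(deviceID, &address, 0, nil, &dataSize) == noErr else { return 0 }
    let rawList = UnsafeMutableRawPointer.allocate(byteCount: Int(dataSize),
                                                   alignment: MemoryLayout<AudioBufferList>.alignment)
    defer { rawList.deallocate() }
    guard AudioObjectGetPropertyData(deviceID, &address, 0, nil, &dataSize, rawList) == noErr else { return 0 }
    let bufferList = UnsafeMutableAudioBufferListPointer(rawList.assumingMemoryBound(to: AudioBufferList.self))
    return bufferList.reduce(0) { $0 + Int($1.mNumberChannels) }
}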
Hello macOS gurus, I am writing an AUv3 plug-in and wanted to add support for additional formats such as CLAP and VST3. These plug-ins must reside in an appropriate folder /Library/Audio/Plug-Ins/ or ~/Library/Audio/Plug-Ins/. The typical way these are delivered is with old school installers.
I have been experimenting with delivering these formats in a sandboxed app. I was using the com.apple.security.temporary-exception.files.absolute-path.read-write entitlement to place a symlink in the system folder that points to my CLAP and VST3 plug-ins in the bundle. Everything was working very nicely until I realized that, on my Mac, I had changed the permissions on these folders from the system defaults.
The problem is that when the folder has the original system permissions, my attempt to place the symlink fails, even with the temporary exception entitlement.
Here's the code I'm using with systemPath = "/Library/Audio/Plug-Ins/VST3/"
static func symlinkToBundle(fileName: String, fileExt: String, from systemPath: String) throws {
    guard let bundlePath = Bundle.main.resourcePath?.appending("/\(fileName).\(fileExt)") else {
        // The guard must exit the scope; surface the missing resource to the caller.
        throw CocoaError(.fileNoSuchFile)
    }
    // The symlink needs a full path inside the destination folder (systemPath is
    // assumed to end with "/"), not just the folder path itself.
    let linkPath = systemPath.appending("\(fileName).\(fileExt)")
    let fileManager = FileManager.default
    do {
        try fileManager.createSymbolicLink(atPath: linkPath, withDestinationPath: bundlePath)
    } catch {
        print(error.localizedDescription)
        throw error
    }
}
So the question is ... Is there a way to reliably place this symlink in /Library/... from a sandboxed app using the temporary exception entitlements? I understand there will probably be issues with App Review but for now I am just trying to explore my options.
Thanks.
Hello,
My app works as an AUv3 plugin.
I am interested in copying/pasting the Logic Pro chord track.
After I copy the chord track in Logic Pro and read UIPasteboard.general in the app, I can see:
["LogicPasteBoardMarker": <OS_dispatch_data: data[0x3024599c0] = { leaf, size = 1, buf = 0x10a758000 }>]
How can I access this data? Thank you.
I connect two AVAudioNodes by using
- (void)connectMIDI:(AVAudioNode *)sourceNode to:(AVAudioNode *)destinationNode format:(AVAudioFormat * __nullable)format eventListBlock:(AUMIDIEventListBlock __nullable)tapBlock
and add an AUMIDIEventListBlock tap block to it to capture the MIDI events.
Both AUAudioUnits of the AVAudioNodes involved in this connection are set to use MIDI 1.0 UMP events:
[[avAudioUnit AUAudioUnit] setHostMIDIProtocol:(kMIDIProtocol_1_0)];
But all the MIDI voice channel events received are automatically converted to UMP MIDI 2.0 format. Is there something else I need to set so that the tap receives MIDI 1.0 UMPs?
(Note: My app can handle MIDI 2.0, so it is not really a problem. So this question is mainly to find out if I forgot to set the protocol somewhere...).
Thanks!!
Hello,
We have been trying to use audio-calling functionality on visionOS with no success since the visionOS update. We do not use CallKit for this flow.
We set the AudioSession as follows:
[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord mode:AVAudioSessionModeVoiceChat options: (AVAudioSessionCategoryOptionAllowBluetooth | AVAudioSessionCategoryOptionAllowBluetoothA2DP | AVAudioSessionCategoryOptionMixWithOthers) error:&error_];
We create our audio unit as follows:
AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;
AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),"couldn't create a new instance of Apple Voice Processing IO.");
UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, audioUnitElementIOInput, &one_, sizeof(one_)), "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, audioUnitElementIOOutput, &one_, sizeof(one_)), "could not enable output on Apple Voice Processing IO");
IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);
bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel, CAStreamBasicDescription::kPCMFormatInt16, isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &_ioFormat, sizeof(self.ioFormat)), "couldn't set the output client format on Apple Voice Processing IO");
UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, sizeof(UInt32)), "couldn't set max frames per slice on Apple Voice Processing IO");
UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maxFramesPerSlice_, &propSize_), "couldn't get max frames per slice on Apple Voice Processing IO");
AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &renderCallbackStruct_, sizeof(renderCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");
AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback, kAudioUnitScope_Input, 0, &inputCallbackStruct_, sizeof(inputCallbackStruct_)), "couldn't set render callback on Apple Voice Processing IO");
And as soon as we try to start the AudioUnit we have the following error:
PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]: Error Domain=com.apple.coreaudio.phase Code=1346924646 "failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6" UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}
We do not use the PHASE framework on our side, and the error is not clear to us nor documented anywhere.
We also tried using an AudioUnit that only does speaker output, which works perfectly, but as soon as we try to record from an AudioUnit, the start fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording.
We suppose that somehow inside PHASE a VoIP I/O audio unit is running that we cannot stop/kill when we try to create our own, which breaks the whole flow.
It used to work on visionOS 1.0.1.
Regards,
Summit-tech