AudioUnit


Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.

AudioUnit Documentation

Posts under AudioUnit tag

37 Posts
Post not yet marked as solved
0 Replies
312 Views
Hello, We are trying to use audio-calling functionality on visionOS, with no success since the visionOS update. We do not use CallKit for this flow. We set up the AVAudioSession as follows:

[sessionInstance setCategory:AVAudioSessionCategoryPlayAndRecord
                        mode:AVAudioSessionModeVoiceChat
                     options:(AVAudioSessionCategoryOptionAllowBluetooth |
                              AVAudioSessionCategoryOptionAllowBluetoothA2DP |
                              AVAudioSessionCategoryOptionMixWithOthers)
                       error:&error_];

We create our AudioUnit as follows:

AudioComponentDescription desc_;
desc_.componentType = kAudioUnitType_Output;
desc_.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc_.componentManufacturer = kAudioUnitManufacturer_Apple;
desc_.componentFlags = 0;
desc_.componentFlagsMask = 0;

AudioComponent comp_ = AudioComponentFindNext(NULL, &desc_);
IMSXThrowIfError(AudioComponentInstanceNew(comp_, &_audioUnit),
                 "couldn't create a new instance of Apple Voice Processing IO.");

UInt32 one_ = 1;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Input, audioUnitElementIOInput,
                                      &one_, sizeof(one_)),
                 "could not enable input on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_EnableIO,
                                      kAudioUnitScope_Output, audioUnitElementIOOutput,
                                      &one_, sizeof(one_)),
                 "could not enable output on Apple Voice Processing IO");

IMSTagLogInfo(kIMSTagAudio, @"Rate: %ld", _rate);

bool isInterleaved = _channel == 2 ? true : false;
self.ioFormat = CAStreamBasicDescription(_rate, _channel, CAStreamBasicDescription::kPCMFormatInt16, isInterleaved);
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Input, 0, &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the input client format on Apple Voice Processing IO");
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_StreamFormat,
                                      kAudioUnitScope_Output, 1, &_ioFormat, sizeof(self.ioFormat)),
                 "couldn't set the output client format on Apple Voice Processing IO");

UInt32 maxFramesPerSlice_ = 4096;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0, &maxFramesPerSlice_, sizeof(UInt32)),
                 "couldn't set max frames per slice on Apple Voice Processing IO");

UInt32 propSize_ = sizeof(UInt32);
IMSXThrowIfError(AudioUnitGetProperty(self.audioUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                      kAudioUnitScope_Global, 0, &maxFramesPerSlice_, &propSize_),
                 "couldn't get max frames per slice on Apple Voice Processing IO");

AURenderCallbackStruct renderCallbackStruct_;
renderCallbackStruct_.inputProc = playbackCallback;
renderCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioUnitProperty_SetRenderCallback,
                                      kAudioUnitScope_Output, 0, &renderCallbackStruct_, sizeof(renderCallbackStruct_)),
                 "couldn't set render callback on Apple Voice Processing IO");

AURenderCallbackStruct inputCallbackStruct_;
inputCallbackStruct_.inputProc = recordingCallback;
inputCallbackStruct_.inputProcRefCon = (__bridge void *)self;
IMSXThrowIfError(AudioUnitSetProperty(self.audioUnit, kAudioOutputUnitProperty_SetInputCallback,
                                      kAudioUnitScope_Input, 0, &inputCallbackStruct_, sizeof(inputCallbackStruct_)),
                 "couldn't set input callback on Apple Voice Processing IO");

As soon as we try to start the AudioUnit, we get the following error:

PhaseIOImpl.mm:1514 phaseextio@0x107a54320: failed to start IO directions 0x3, num IO streams [1, 1]: Error Domain=com.apple.coreaudio.phase Code=1346924646 "failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6" UserInfo={NSLocalizedDescription=failed to pause/resume stream 6B273F5B-D6EF-41B3-8460-0E34B00D10A6}

We do not use the PHASE framework on our side, and the error is neither clear to us nor documented anywhere. We also tried an AudioUnit that only does speaker output, which works perfectly; but as soon as we try to record from an AudioUnit, starting fails as well, with the error AVAudioSessionErrorCodeCannotStartRecording. We suppose that somewhere inside PHASE a VoIP I/O audio unit is running that we cannot stop or kill when we try to create our own, and that this blocks the whole flow. It used to work on visionOS 1.0.1. Regards, Summit-tech
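A note for readers hitting the same wall (an assumption on my part, not a confirmed fix): the phaseextio log line suggests visionOS mediates voice-processing I/O through PHASE, and one ordering that matters is configuring and activating the AVAudioSession before the voice-processing unit is created and started. A minimal Swift sketch of that ordering:

import AVFAudio

// Minimal sketch: activate the session first, then build the
// kAudioUnitSubType_VoiceProcessingIO unit and call AudioOutputUnitStart().
func activateCallSession() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.allowBluetooth, .allowBluetoothA2DP, .mixWithOthers])
    try session.setActive(true)
    // ...now create and start the audio unit exactly as in the
    // Objective-C code above.
}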
Post marked as solved
4 Replies
1.5k Views
Some of our installers suddenly became broken for users running the latest version of macOS. I found the reason: we install a Core Audio HAL driver, and because I wanted to avoid a system reboot, I relaunched the Core Audio daemon from a pkg post-install script:

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod

With the OS update, this command fails if the computer has SIP enabled (which is the default):

sudo launchctl kickstart -kp system/com.apple.audio.coreaudiod
Password:
Could not kickstart service "com.apple.audio.coreaudiod": 1: Operation not permitted

It would be super nice if either the change could be reverted, OR people in my situation could learn a workaround for hot-plugging (and unplugging) such a HAL driver.
Post not yet marked as solved
0 Replies
264 Views
We develop virtual instruments for Mac/AU and are trying to get our AU plug-ins and our standalone player to work with Audio Workgroups. When the standalone app or Logic Pro is in the foreground and active, all is well and as expected. However, when the app or Logic Pro is not in focus, all my auxiliary threads run on E-cores, even though they are properly joined to the processing thread's workgroup (see the sketch below). This leads to a lot of audible dropouts because deadlines are no longer met. The processing thread itself stays on a P-core, but it has to wait for the other threads to finish. How can I opt out of this behaviour? Our users certainly have use cases where they expect the player to run smoothly even though they currently have a different app in focus.
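For reference, the join/leave pattern on each auxiliary thread looks roughly like the following. This is a sketch rather than our exact code; it assumes AUAudioUnit's osWorkgroup property and that the C functions from os/workgroup.h import into Swift as written, and it reproduces the setup described above rather than fixing the E-core problem:

import AudioToolbox

// Sketch: join an auxiliary DSP thread to the plug-in's render workgroup
// for one processing cycle. `auAudioUnit` is the AUv3's AUAudioUnit.
func processOnAuxiliaryThread(auAudioUnit: AUAudioUnit) {
    guard let workgroup = auAudioUnit.osWorkgroup else { return }
    var token = os_workgroup_join_token_s()
    guard os_workgroup_join(workgroup, &token) == 0 else { return }
    defer { os_workgroup_leave(workgroup, &token) }
    // ...deadline-bound DSP work for this render cycle...
}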
Post not yet marked as solved
0 Replies
296 Views
Hi everybody, I'm trying to use the multiple inputs of a USB device with AVAudioEngine. My aim is to connect different inputNode channels to two or more different audio nodes (e.g., mixers). I'm able to get a specific input channel from the engine's inputNode with

OSStatus err = AudioUnitSetProperty(avEngine.inputNode.audioUnit, kAudioOutputUnitProperty_ChannelMap, kAudioUnitScope_Output, 1, outputChannelMap, propSize);

but this changes the routing for the whole input node, and therefore for all destination mixer nodes. How can I send channel 1 of the inputNode to mixerNode1 and channel 2 to another mixerNode2?
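For reference, here is the same property call in Swift — a sketch of what's described above, not a fix. The map is one Int32 per destination channel on the input node's output bus (a source channel index, or -1 to silence), and, as the post notes, it applies to the node as a whole rather than per downstream connection:

import AVFAudio
import AudioToolbox

// Sketch: apply a channel map on the inputNode's output bus
// (element 1 is the input side of an I/O unit).
func applyChannelMap(engine: AVAudioEngine, map: [Int32]) -> OSStatus {
    guard let unit = engine.inputNode.audioUnit else { return kAudioUnitErr_Uninitialized }
    var channelMap = map   // e.g. [0, 1], or [-1, 1] to silence channel 0
    return AudioUnitSetProperty(unit,
                                kAudioOutputUnitProperty_ChannelMap,
                                kAudioUnitScope_Output,
                                1,
                                &channelMap,
                                UInt32(MemoryLayout<Int32>.stride * channelMap.count))
}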
Post not yet marked as solved
1 Reply
342 Views
Is this an uncaught C++ exception that could have originated from my code, or something else? (This report is from a tester.) Also, why can't the crash reporter tell you anything about what exception wasn't caught? (Per the instructions here, you'll need to rename the attached .txt to .ips to view the crash report.) Thanks! AudulusAU-2024-02-14-020421.txt
Post not yet marked as solved
0 Replies
384 Views
Hello, I am working on an AUv3 extension project using SwiftUI in Xcode and have encountered a peculiar issue when implementing a simple alert on Mac Catalyst. The code is straightforward; it's merely an alert triggered by a button within a SwiftUI view. Here's the relevant portion of the code:

import SwiftUI

struct SwiftAUv3ExtensionMainView: View {
    var parameterTree: ObservableAUParameterGroup
    @State var showingAlert = false

    var body: some View {
        VStack {
            ParameterSlider(param: parameterTree.global.gain)
            Button(action: { showingAlert = true }, label: {
                Text("Button")
            })
        }
        .alert("Alert", isPresented: $showingAlert, actions: {}, message: {
            Text("Message")
        })
    }
}

The problem arises when this alert is displayed and subsequently closed. Upon closing the alert, the cursor turns into a spinning rainbow and the app freezes for several seconds. Additionally, Xcode's console displays the warning:

-[NSWindow makeKeyWindow] called on _NSAlertPanel which returned NO from -[NSWindow canBecomeKeyWindow].

I am looking for insights or solutions to address this issue. Has anyone else experienced similar problems with SwiftUI alerts in AUv3 extension projects, especially when using Mac Catalyst? Any advice or suggestions would be greatly appreciated. Thank you.
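In case it helps someone hitting the same freeze: one workaround to try (an assumption, not a confirmed fix) is to bypass SwiftUI's .alert on Catalyst and present a UIKit alert from the extension's view controller:

import UIKit

// Hypothetical workaround sketch: present the alert through UIKit from the
// AUv3's view controller instead of SwiftUI's .alert modifier.
func showAlert(from viewController: UIViewController) {
    let alert = UIAlertController(title: "Alert",
                                  message: "Message",
                                  preferredStyle: .alert)
    alert.addAction(UIAlertAction(title: "OK", style: .default))
    viewController.present(alert, animated: true)
}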
Post not yet marked as solved
0 Replies
341 Views
This can be reproduced easily with Xcode's generated AUv3-Extension projects. For MIDI Processor type AUv3-Extensions, the contextName property is only set once, during initialization, when the unit is added as a MIDI FX within Logic Pro, but not after changing the track's name manually. For Music Effect type AUv3-Extensions, contextName is set initially when added as an Audio FX within Logic Pro and is also updated as expected after changing the track's name manually. Am I missing something, or is this a Logic Pro bug? Thanks, Tobias
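For anyone wanting to watch for updates rather than read the value once, a KVO sketch follows. Note the assumption baked in: it only fires if the host actually pushes a new value, which for MIDI Processor extensions in Logic is exactly what seems not to happen.

import AudioToolbox

// Sketch: observe host-driven contextName changes on the AUAudioUnit.
// Assumes the property is KVO-compliant when set by the host.
var contextNameObservation: NSKeyValueObservation?

func watchContextName(of auAudioUnit: AUAudioUnit) {
    contextNameObservation = auAudioUnit.observe(\.contextName, options: [.initial, .new]) { unit, _ in
        print("contextName: \(unit.contextName ?? "<nil>")")
    }
}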
Post not yet marked as solved
1 Reply
550 Views
I tried the same code on iOS 17 and iOS 16 with Address Sanitizer enabled; on iOS 17 it crashes. Why? Can anyone help me?

AudioComponent comp = {0};
AudioComponentDescription compDesc = {0};
compDesc.componentType = kAudioUnitType_Output;
compDesc.componentSubType = kAudioUnitSubType_RemoteIO;
compDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
compDesc.componentFlags = 0;
compDesc.componentFlagsMask = 0;
comp = AudioComponentFindNext(NULL, &compDesc);
if (comp == NULL) {
    assert(false);
}

AudioUnit tempAudioUnit;
OSStatus osResult = AudioComponentInstanceNew(comp, &tempAudioUnit);
if (osResult != noErr) {
    assert(false);
}
Post not yet marked as solved
0 Replies
511 Views
I am trying to migrate an Audio Unit host based on the AUv2 C API to the newer AUv3 API. While the migration itself was relatively straightforward (in terms of getting it to compile), the actual rendering fails at run time with error -10876, a.k.a. kAudioUnitErr_NoConnection. The app does not use AUGraph or AVAudioEngine; perhaps that is an issue? Since the AUv3 and AUv2 APIs are bridged in both directions and the rendering works fine with the v2 API, I would expect there to be some way to make it work via the v3 API too. Perhaps someone has an idea why (or under which circumstances) the render block throws this error? For context, the app is Mixxx, an open-source DJing application, and here is the full diff of my AUv2 -> AUv3 migration: https://github.com/fwcd/mixxx/pull/5/files
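Not Mixxx's actual code, but a minimal sketch of driving an AUAudioUnit's renderBlock directly, without AUGraph or AVAudioEngine. The detail that commonly produces kAudioUnitErr_NoConnection in this setup is pulling an effect's input bus without supplying a pullInputBlock (or without the host otherwise providing input):

import AudioToolbox
import AVFAudio

// Sketch: render one buffer from an AUv3-bridged unit by hand.
func renderOnce(au: AUAudioUnit, format: AVAudioFormat, frames: AUAudioFrameCount) throws {
    au.maximumFramesToRender = frames
    try au.allocateRenderResources()

    let output = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frames)!
    var flags = AudioUnitRenderActionFlags()
    var timestamp = AudioTimeStamp()
    timestamp.mSampleTime = 0
    timestamp.mFlags = .sampleTimeValid

    let status = au.renderBlock(&flags, &timestamp, frames, 0,
                                output.mutableAudioBufferList) { _, _, frameCount, bus, inputData in
        // An effect pulls its input here; passing nil instead of this
        // block (or never providing input) is a typical way to end up
        // with kAudioUnitErr_NoConnection (-10876).
        return noErr
    }
    print("render status: \(status)")
}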
Post not yet marked as solved
2 Replies
569 Views
Hi, I have an app that has been developed with AudioUnit RemoteIO and render callbacks. The app has been performing fine, except on iOS 17 with devices like the iPhone 14 or iPhone 15. On the iPhone 14, the same app (a metronome app) was performing fine with iOS 16; when the customer updated to iOS 17, the audio suddenly became glitchy, with ghost sounds and artifacts. This does not happen on an iPhone 11 Pro with iOS 17 (works fine!). However, I have been able to reproduce it on an iPhone 15 Pro with iOS 17. It works OK at lower BPM, but when the BPM goes over a certain threshold the audio starts getting glitchy. The audio buffers are precomputed, so the render callback is relatively straightforward. Has anyone else seen this kind of issue on an iPhone 14/iPhone 15 running iOS 17? I'm following up with Apple on this, but thought I would see if others are facing similar issues with their apps. Thanks, Sridhar
Post not yet marked as solved
0 Replies
449 Views
I'm battling with Audio Workgroups on macOS. I've got it working for standalone apps, getting the workgroup from the HAL/device, and for AUv2/AUv3 plugins. I can verify that my plugin/app's processing threads are executing together with the main workgroup thread, using P-cores. So far so good! Now, I'm trying to get this working over IPC with my ***** app. From the documentation, I figured that I can get the mach port from the main audio workgroup (in my Audio Unit) using the os_workgroup_copy_port call. Then I pass this port over IPC to my ***** process, where I want to create a new workgroup from this mach port (which should be slaved to the master workgroup), using the os_workgroup_create_with_port call. However, when doing this, I get an access violation error in my external process. In my test case, I'm hosting an AUv2 in the AUXPC_arrow process (with Logic) and sending the mach port id over to my ***** app, which is also signed with what I think are the appropriate entitlements for accessing mach ports:

com.apple.security.temporary-exception.mach-lookup.global-name

Now, the question is: should this automagically allow me to use a mach port owned by the AUXPC process? Does that process ALSO have to use some specific entitlement? I of course cannot change the entitlements of Apple's bundles. Many thanks for any assistance.
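For readers who haven't seen the calls in question, this is roughly the shape of the handoff. A sketch only: the workgroup name and IPC channel are placeholders, it assumes the os/workgroup.h C functions import into Swift as written, and it does not resolve the access violation described above:

import AudioToolbox

// Sketch, plug-in side: extract a mach port from the render workgroup.
func sendableWorkgroupPort(from auAudioUnit: AUAudioUnit) -> mach_port_t? {
    guard let workgroup = auAudioUnit.osWorkgroup else { return nil }
    var port: mach_port_t = 0
    guard os_workgroup_copy_port(workgroup, &port) == 0 else { return nil }
    return port   // send this value over your IPC channel
}

// Sketch, receiving side: recreate a workgroup slaved to the original.
func adoptWorkgroup(receivedPort: mach_port_t) {
    let remote = os_workgroup_create_with_port("com.example.aux-render", receivedPort)
    // ...join worker threads to `remote` with os_workgroup_join(...)...
    _ = remote
}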
Post not yet marked as solved
2 Replies
822 Views
I have some visualisation-heavy AUv3s, and the goal is not to perform graphics-intensive tasks if the plugin window is not open inside the host app (such as Logic Pro). On iOS, this is easily accomplished via the viewWillAppear etc. overrides. But on macOS, it seems these overrides are not called every time the user opens or closes the plugin window in the host application. I did try some alternate methods, like traversing the view/controller hierarchy or making use of the window property, to no avail. What substitute mechanism could I use to determine the visibility status of an AUv3 on macOS? Thanks in advance, Zoltan
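A hedged sketch of one possible substitute, approximating open/close detection by watching window membership and occlusion instead of viewWillAppear; all names here are illustrative, and I can't confirm every host drives these signals:

import AppKit

final class VisibilityAwareView: NSView {
    private var occlusionObserver: NSObjectProtocol?

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        if let occlusionObserver { NotificationCenter.default.removeObserver(occlusionObserver) }
        guard let window = self.window else {
            setRendering(false)   // detached from any window: stop drawing
            return
        }
        occlusionObserver = NotificationCenter.default.addObserver(
            forName: NSWindow.didChangeOcclusionStateNotification,
            object: window, queue: .main) { [weak self] _ in
                guard let self, let window = self.window else { return }
                self.setRendering(window.occlusionState.contains(.visible))
        }
        setRendering(window.occlusionState.contains(.visible))
    }

    private func setRendering(_ on: Bool) {
        // start/stop display links, timers, Metal drawing, etc.
    }
}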
Post not yet marked as solved
1 Reply
573 Views
While playing sound, I need to create an AudioUnit to record from the microphone at the same time. In order to use echo cancellation, I chose the kAudioUnitSubType_VoiceProcessingIO subtype to initialize the AudioUnit. It works well on iOS 16 and below, but on iOS 17 the playback volume decreases while playing and recording at the same time. Thank you for your help; I hope to see your suggestions.
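One avenue worth investigating (hedged — I haven't confirmed it fixes this exact report): iOS 17 added a ducking configuration property for the voice-processing unit, kAUVoiceIOProperty_OtherAudioDuckingConfiguration, which controls how much other audio is ducked while the unit records. A Swift sketch:

import AudioToolbox

// Sketch: ask the voice-processing unit for minimal ducking of other audio.
// `voiceIOUnit` is assumed to be your kAudioUnitSubType_VoiceProcessingIO instance.
func minimizeDucking(voiceIOUnit: AudioUnit) -> OSStatus {
    var config = AUVoiceIOOtherAudioDuckingConfiguration(
        mEnableAdvancedDucking: false,
        mDuckingLevel: .min)
    return AudioUnitSetProperty(voiceIOUnit,
                                kAUVoiceIOProperty_OtherAudioDuckingConfiguration,
                                kAudioUnitScope_Global,
                                0,
                                &config,
                                UInt32(MemoryLayout<AUVoiceIOOtherAudioDuckingConfiguration>.size))
}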
Post not yet marked as solved
0 Replies
455 Views
Is it still possible to use MAP_JIT in mmap and execute said code pages in AUs? When I try to operate on a page with the proper flags, which worked in a prior version of macOS, I get a SIGBUS. I am working on getting a minimal example up and running, but this just doesn't seem right to me.
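For context, a sketch of the pattern in question. One assumption worth stating: under the hardened runtime, MAP_JIT requires the com.apple.security.cs.allow-jit entitlement on the host process, which an AU loaded into a third-party host cannot control — a plausible source of the SIGBUS described above.

import Darwin

func allocateJITPage() -> UnsafeMutableRawPointer? {
    let size = sysconf(_SC_PAGESIZE)
    let ptr = mmap(nil, size,
                   PROT_READ | PROT_WRITE | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANON | MAP_JIT,
                   -1, 0)
    guard ptr != MAP_FAILED else { return nil }
    pthread_jit_write_protect_np(0)   // writable (per-thread, Apple silicon)
    // ...emit machine code into ptr...
    pthread_jit_write_protect_np(1)   // back to executable
    return ptr
}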
Post not yet marked as solved
1 Reply
562 Views
I'm trying to build an audio unit that can load another one. The first step, listing and getting the components, only works in the example code if and when the audio unit is loaded by its accompanying main app. But when loaded inside Logic Pro, for example, the listed components are limited to Apple-manufactured ones. On another forum, although not for an app extension, someone indicated that the solution was to enable Inter-App Audio, which is now deprecated. I've tried all three methods of AVAudioUnitComponentManager to get the component list (see the sketch below). Please advise. Many thanks, Zoltan
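For context, the straightforward query looks like this — a sketch that lists the effects visible to the current process, which, per the behavior described above, may be only Apple's when running inside a host's extension environment:

import AVFAudio

// Sketch: enumerate installed effect components. A zeroed field in the
// description acts as a wildcard.
func listEffectComponents() {
    var wildcard = AudioComponentDescription()
    wildcard.componentType = kAudioUnitType_Effect
    let components = AVAudioUnitComponentManager.shared().components(matching: wildcard)
    for component in components {
        print(component.manufacturerName, "-", component.name)
    }
}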
Post not yet marked as solved
1 Reply
512 Views
Hi guys, I'm building an audio unit and I need to add a 3rd-party framework. Validating the AU with Logic's plug-in manager fails, saying it could not find the framework: it is neither in /System/Library/Frameworks nor under @rpath. Then it complained that @rpath expansion violates security policy. Logic will still load the plugin fine if I force it to, though, which seems weird. Then I replaced the @rpath reference in the Audio Unit with install_name_tool. That worked, but it told me it had to break code signing. When I re-ran auval, the "not found" errors went away, but the plugin still could not be loaded, supposedly because of the broken signing. By the way, I could only get detailed information about auval's complaints if I ran Logic, and thus the scanning process, inside the debugger. If I did the same thing outside the debugger, auval would only say something like 'could not load, result code 0xSomething' without giving me any details. In both cases Logic would still load the plugin if forced to. What should I do here? Cheers and thanks in advance :-)
Post not yet marked as solved
0 Replies
517 Views
Some users reported issues when using our audio units with a recent version of Logic Pro. After investigation, it seems there is an issue where a modal won't appear if the plug-in view is not open in Logic with AUHostingServices. Trying a JUCE example plugin on the latest version, it also fails to run a modal while the view is not open: the system call returns an NSModalResponseAbort code. I'm nearly sure this is a recent regression in AUHostingServices, but maybe it has always been there. Could you help us find a solution, or at least a workaround? Thanks in advance, best regards
Post not yet marked as solved
5 Replies
1.3k Views
I'm the developer of a small utility for Mac called "MusicDeviceHost": https://apps.apple.com/us/app/musicdevicehost/id1261046263?mt=12 As the name suggests, it is a host application for audio units (music device components). See also "Using Sound Canvas VA with QMidi": https://youtu.be/F9C4BiBR A problem occurs while trying to authorize the "Sound Canvas VA" component; Roland Cloud Manager (v3.0.3) returns the following error:

"Authorization Error - RM Service not connected. Error Connecting to Roland Cloud Manager Service"

I guess the error is caused by some permission being denied to the sandboxed version of the application. The NOT sandboxed version of MDH actually works flawlessly. I am using the following entitlements:

com.apple.security.app-sandbox
com.apple.security.network.client

So connecting to the service should work, because com.apple.security.network.client is enabled. Roland says: "Cloud Manager isn't supported in a sandboxed environment." But as far as I can see, MainStage and other sandboxed apps work fine... So what is the right answer? Is there someone out there with the same issue? Thanks for helping :)
Post not yet marked as solved
0 Replies
441 Views
We developed an app on macOS that needs to record audio data via AudioUnit. But if the user chooses the "Voice Isolation" microphone mode, all the high-frequency audio data is lost. We tried, but found the system no longer delivers the original audio data. Can anybody help?
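As far as I know there is no API to opt an individual recording out of the user's system-wide microphone mode. What an app can do — a sketch using the AVCaptureDevice microphone-mode API — is detect Voice Isolation and steer the user to the system picker:

import AVFoundation

// Sketch: detect the user's Voice Isolation mode and surface the system
// picker so they can switch back to Standard for full-bandwidth capture.
func checkMicrophoneMode() {
    if AVCaptureDevice.preferredMicrophoneMode == .voiceIsolation {
        AVCaptureDevice.showSystemUserInterface(.microphoneModes)
    }
}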