AudioUnit


Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.

AudioUnit Documentation

Posts under AudioUnit tag

37 Posts
Post marked as solved
13 Replies
1.8k Views
I'm developing a sandboxed application with Xcode which allows the user to open and work with Audio Unit plugins. Working with a beta tester who has a lot of AUs on his laptop running macOS 12.5.1, we encountered some weird crashes while opening some plugins (Krotos, Flux Audio, Sound Toys, etc.). The message we got was in French; I'll translate it, but the original English version could be a little bit different: Impossible to open “NSCreateObjectFileImageFromMemory-p47UEwps” because the developer cannot be verified. After this first warning, a Fatal Error 100001 message opens and the plugin seems crashed (but not the host).

I easily found some music application users encountering similar issues on the web. From what I read, this error is related to new security rules introduced in macOS 12, and indeed some of these plugins work normally when tested on an older system. I also read that some (insecure) Hardened Runtime entitlements should be able to fix this issue, especially the Allow Unsigned Executable Memory Entitlement, whose documentation says: In rare cases, an app might need to override or patch C code, use the long-deprecated NSCreateObjectFileImageFromMemory (which is fundamentally insecure), or use the DVDPlayback framework. Add the Allow Unsigned Executable Memory Entitlement to enable these use cases. Otherwise, the app might crash or behave in unexpected ways.

Unfortunately, checking this option didn't fix the issue. So what I tried next was to add Disable Executable Memory Protection (no more success), and finally Allow DYLD Environment Variables and Allow Execution of JIT-compiled Code: none of them solved my problem. I really don't see what else to do, while I'm sure that a solution exists, because the same plugins work perfectly in other applications (Logic, Ableton Live). Any help would be greatly appreciated. Thanks!
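For reference, the Xcode options named above map to the following Hardened Runtime keys in the host application's .entitlements file. This is only a sketch of the configuration the poster describes having tried (the sandbox key is included because the host is sandboxed); in this particular case enabling them did not resolve the crashes.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
    <true/>
    <key>com.apple.security.cs.disable-executable-page-protection</key>
    <true/>
    <key>com.apple.security.cs.allow-dyld-environment-variables</key>
    <true/>
    <key>com.apple.security.cs.allow-jit</key>
    <true/>
</dict>
</plist>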
Posted by -dp. Last updated.
Post not yet marked as solved
0 Replies
457 Views
Is it still possible to use MAP_JIT in mmap and execute said code pages in AUs? When I try to operate on a page with the proper flags, which worked in a prior version of macOS, I get a SIGBUS. I am working on getting a minimal example up and running, but this just doesn't seem right to me.
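For context, a minimal sketch of the pattern in question (this assumes the hosting process carries the com.apple.security.cs.allow-jit entitlement; MAP_JIT's value is written out as a local constant in case the macro is not imported into Swift):

import Darwin

let MAP_JIT_FLAG: Int32 = 0x0800            // MAP_JIT from <sys/mman.h>
let pageSize = Int(getpagesize())

// Ask for an executable, JIT-eligible mapping.
guard let page = mmap(nil, pageSize,
                      PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANON | MAP_JIT_FLAG,
                      -1, 0),
      page != UnsafeMutableRawPointer(bitPattern: -1) else {
    fatalError("mmap(MAP_JIT) failed, errno \(errno)")
}

// On Apple silicon a MAP_JIT region is either writable or executable per thread, never both:
pthread_jit_write_protect_np(0)             // make it writable on this thread
// ... copy generated machine code into `page` here ...
pthread_jit_write_protect_np(1)             // make it executable again before jumping into it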
Posted by sternj. Last updated.
Post not yet marked as solved
1 Reply
514 Views
Hi guys, I'm building an Audio Unit and I need to add a 3rd-party framework. Validating the AU with Logic's plugin manager fails, saying it could not find the framework: it is neither in /System/Library/Frameworks nor under @rpath. It then complained that the @rpath expansion violates the security policy. Logic will still load the plugin fine if I force it to use it, though, which seems weird. I then replaced the @rpath reference in the Audio Unit using install_name_tool. That worked, but it told me it had to break the code signature. When I re-ran auval, the "not found" errors went away, but the plugin still could not be loaded, supposedly because of the broken signing. By the way, I could only get detailed information about auval's complaints if I ran Logic, and thus the scanning process, inside the debugger. If I did the same thing outside the debugger, auval would only say something like 'could not load, result code 0xSomething' without any details. In both cases Logic would still load the plugin if forced to. What should I do here? Cheers and thanks in advance :-)
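Rewriting the load path and repairing the signature generally have to happen together; a rough sketch of that sequence (framework name, install path, signing identity, and the plugin's auval type/subtype/manufacturer codes are all placeholders):

# Replace the @rpath reference with an absolute path to the framework (hypothetical names).
install_name_tool -change \
  "@rpath/ThirdParty.framework/Versions/A/ThirdParty" \
  "/Library/Frameworks/ThirdParty.framework/Versions/A/ThirdParty" \
  "MyPlugin.component/Contents/MacOS/MyPlugin"

# install_name_tool invalidates the code signature, so re-sign the bundle afterwards.
codesign --force --sign "Developer ID Application: Example Dev (TEAMID)" "MyPlugin.component"

# Then re-run validation with the plugin's own codes.
auval -v aufx Subt Manu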
Posted. Last updated.
Post not yet marked as solved
0 Replies
519 Views
Some users reported issues when using our Audio Units with recent versions of Logic Pro. After investigation, it seems there is an issue where a modal won't appear if the plugin view is not open in Logic when the plugin runs in AUHostingService. Trying with a JUCE example plugin on the latest version, it also fails to run a modal while the view is not open; the system call returns an NSModalResponseAbort code. I'm nearly sure this is a recent regression in AUHostingService, but maybe it has always been there. Could you help us find a solution, or at least a workaround? Thanks in advance, best regards
Posted. Last updated.
Post not yet marked as solved
0 Replies
445 Views
We developed an app on macOS that needs to record audio data through an AudioUnit. But if the user chooses the "Voice Isolation" microphone mode, all the high-frequency audio data is lost. We tried, but we found that the system no longer delivers the original audio data. Can anybody help?
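There is no API to switch the microphone mode on the user's behalf, but an app can at least detect the Voice Isolation setting and bring up the system picker so the user can change it back. A small sketch, assuming macOS 12 or later:

import AVFoundation

// The microphone mode is chosen by the user in Control Center; apps can only read it
// and present the system UI that lets the user change it.
if AVCaptureDevice.activeMicrophoneMode == .voiceIsolation {
    // Opens the Control Center mic-mode picker so the user can switch back to Standard.
    AVCaptureDevice.showSystemUserInterface(.microphoneModes)
}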
Posted. Last updated.
Post not yet marked as solved
0 Replies
538 Views
I am developing a multi-threaded instrument plugin for Audio Unit v2. This is about a software synthesizer that has been proven to work on Intel Macs and has been ported to run natively on Apple silicon. I have a problem when using Logic Pro on Apple silicon Macs.

Steps to reproduce: plug the software synthesizer into an instrument track, make sure no track other than the one you created exists, and put it in recording mode. When these steps are followed, the performance meter in Logic Pro shows that the load is concentrated on one specific core and far exceeds the total load seen when the load is divided. This load occurs continuously and is resolved when another track is created and that track is selected.

It is understandable, as a design decision, that the load is concentrated on a particular core. However, the magnitude of the load is abnormal; in fact, when the peak exceeds 100%, it leads to audible noise. Also, in this case the Activity Monitor included with macOS does not show any increase in the usage of a specific CPU core, and the Time Profiler included with Xcode did not identify any location that took a large amount of time. We examined various experimental programs and found a positive correlation between the frequency of thread switches in the multi-threaded sections and the peak of this CPU spike. A mutex is used for the thread switches.

In summary, we speculate that performance gets worse when multi-threaded processing is done on a single core. Is there any solution to this problem at the developer level, or at the customer level of Logic Pro?

Symptom environment: MacBook Pro 16-inch (2021), Apple M1 Max, macOS 12.6.3, 32 GB memory, Logic Pro 10.7.9, built-in speakers, audio buffer size 32 samples.

[Attached images: the performance meter before the symptoms occurred, and the performance meter with the symptoms after entering the recording condition.]
Posted by makotom. Last updated.
Post not yet marked as solved
1 Reply
1.8k Views
I'm developing a voice communication app for the iPad with both playback and recording, and I'm using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation. When playing audio before initializing the recording audio unit, the volume is high. But if I play the audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low. It seems like a bug in iOS; is there any solution or workaround for this? Searching the net I only found this post without any solution: https://developer.apple.com/forums/thread/671836
Posted by ObCG. Last updated.
Post not yet marked as solved
1 Reply
2.2k Views
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.

let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0:  1 ch,  44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50:  3 ch,  44100 Hz, Float32, deinterleaved>

Is this expected? How can I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file. But when voice processing messes with the channel layout, I cannot rely on this anymore.
Posted by smialek. Last updated.
Post not yet marked as solved
0 Replies
855 Views
To my knowledge, you can use the AudioUnitSetProperty function to set the kAUVoiceIOProperty_VoiceProcessingEnableAGC property to disable AGC in AUv2. However, there is no equivalent functionality available in AUv3; the closest option I found is the shouldBypassEffect property. How do I disable AGC using the AUv3 API?
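For reference, this is roughly the AUv2-level call being described (a minimal sketch; vpioUnit is assumed to be an already-created Voice-Processing I/O unit):

import AudioToolbox

// `vpioUnit` is assumed to be an AudioUnit created with componentSubType
// kAudioUnitSubType_VoiceProcessingIO.
var enableAGC: UInt32 = 0                    // 0 = disable automatic gain control
let status = AudioUnitSetProperty(vpioUnit,
                                  kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                  kAudioUnitScope_Global,
                                  0,
                                  &enableAGC,
                                  UInt32(MemoryLayout<UInt32>.size))
assert(status == noErr, "failed to disable AGC: \(status)")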
Posted by yycking. Last updated.
Post not yet marked as solved
2 Replies
839 Views
Hi, I've noticed that on Mac, when creating an AudioUnit with VoiceProcessingIO, it ducks all the system audio. Is there a way to avoid it? This is how I create it:

AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
AudioComponent outputComponent = AudioComponentFindNext(NULL, &ioUnitDescription);
result = AudioComponentInstanceNew(outputComponent, &m_audioUnit);
Posted by zvba. Last updated.
Post marked as solved
2 Replies
875 Views
I'm experimenting with getting my AUv3 plugins working correctly on iOS and macOS using Catalyst. I'm having trouble getting the plugin windows to look right in Logic Pro X on macOS. My plugin is designed to look right in GarageBand's minimal 'letterbox' layout (1024x335, roughly a 3:1 aspect ratio). I have implemented supportedViewConfigurations to help the host choose the best display dimensions. On iOS this works, although Logic Pro for iPad doesn't seem to call supportedViewConfigurations at all; only GarageBand does. On macOS, Logic Pro does call supportedViewConfigurations but only provides oversized screen sizes, making the plugin look awkward. I can also remove the supportedViewConfigurations method on macOS, but this introduces other issues. I guess my question boils down to this: how do I tell Logic Pro X on macOS what the optimal window size of my plugin is, using Mac Catalyst?
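For context, supportedViewConfigurations is the AUAudioUnit override being discussed; a rough sketch of reporting which host-offered sizes suit a letterbox-style UI (the 400-point height threshold is an arbitrary example, not a recommendation):

import Foundation
import AudioToolbox
import CoreAudioKit

class MyAudioUnit: AUAudioUnit {
    // The host passes in the view sizes it can offer; return the indices the UI renders well at.
    override func supportedViewConfigurations(
        _ availableViewConfigurations: [AUAudioUnitViewConfiguration]) -> IndexSet {
        var supported = IndexSet()
        for (index, configuration) in availableViewConfigurations.enumerated() {
            // A 0 x 0 configuration conventionally means "any size"; otherwise prefer short, wide layouts.
            let anySize = configuration.width == 0 && configuration.height == 0
            if anySize || configuration.height <= 400 {
                supported.insert(index)
            }
        }
        return supported
    }
}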
Posted by Brambos. Last updated.
Post not yet marked as solved
0 Replies
587 Views
A: iPhone SE 2nd generation (iOS 16.5), Bluetooth device used: Shokz OpenRun S803. B: any mobile device.

A uses the Bluetooth microphone/speaker and makes a call to B using an iPhone app. A mutes the headset (the Bluetooth device supports muting in hardware). While A is muted, B speaks. A then unmutes the headset. From that point on, every time B speaks, B hears an echo. Since there is no audio data while the hardware is muted, VPIO has no reference audio data with which to remove the echo signal. Is there any alternative to resolve this echo in VoIP software using VPIO?
Posted by ened. Last updated.
Post not yet marked as solved
0 Replies
1k Views
I have an app that supports AUv3 and can output both audio and MIDI. It has the Audio Unit type set to "aumu", which is a 'music generator'. This has worked well in iOS hosts until now (like AUM or ApeMatrix), because they allow selecting this type of Audio Unit both as an audio generator and as a MIDI sender. So someone who wants to use it as an audio generator can, same as someone who only wants to use it as a MIDI effect. Logic Pro on the iPad has changed this, since it doesn't show the unit under the MIDI FX selection. Is there any configuration to bypass this, or do I have to change to a MIDI FX AUv3 ('aumi') for it to show up there?
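For reference, the two four-character types being weighed here correspond to these AudioToolbox constants (shown only to illustrate the type codes; in an AUv3 extension the type is declared in the AudioComponents entry of the extension's Info.plist):

import AudioToolbox

// 'aumu' – music generator / instrument: produces audio and may also send MIDI.
let musicGeneratorType = kAudioUnitType_MusicDevice
// 'aumi' – MIDI processor: produces no audio, which is what Logic lists under MIDI FX.
let midiProcessorType = kAudioUnitType_MIDIProcessor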
Posted. Last updated.
Post not yet marked as solved
1 Reply
2k Views
Hi, wondering if anyone has found a solution to the automatic volume reduction on the host computer when using the macOS native screen sharing application. The volume reduction makes it nearly impossible to comfortably continue working on the host computer when any audio is involved. Is there a way to bypass this behavior? It seems to be the same native function that FaceTime uses to reduce the system audio volume and give priority to the application. Please help save my speakers! Thanks.
Posted. Last updated.
Post not yet marked as solved
0 Replies
462 Views
I created an Audio Unit extension app from the Xcode template. When I run the project in the simulator it works, but when I try to run the project on my real device the program crashes like this. How can I solve this problem? Please tell me, thank you.
Posted. Last updated.
Post not yet marked as solved
0 Replies
457 Views
I found a lot of information, but I still have no idea how to create a custom AudioUnit (not AVAudioUnit or AUAudioUnit). I mean, is there any way I can make my own AudioUnit like this? I do know the one below is an AudioUnit provided by Apple; now I want to make my own AudioUnit in the same way.

AudioComponentDescription audioDesc;
audioDesc.componentType = kAudioUnitType_Output;
audioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
audioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
audioDesc.componentFlags = 0;
audioDesc.componentFlagsMask = 0;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &audioDesc);
AudioComponentInstanceNew(inputComponent, &audioUnit);

If you have any idea, please tell me, thank you!
Posted. Last updated.
Post not yet marked as solved
0 Replies
752 Views
I found an app that makes the PC ***** and takes audio from other Android devices. However, when I connect with my Mac it doesn't work. I use AirPlay for now, but there are latency and quality problems. I have a Late 2013 MacBook Pro, which uses AirPlay 1 technology; it is slower than the newer one. My Wi-Fi router is in my room, but as I said, I want to connect via Bluetooth. Why does this problem appear, and how can I work on this?
Posted by benian. Last updated.