AudioUnit


Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.

AudioUnit Documentation

Posts under AudioUnit tag

37 Posts
Post not yet marked as solved
2 Replies
838 Views
Hi, I've noticed that on the Mac, when creating an audio unit with VoiceProcessingIO, it ducks all the system audio. Is there a way to avoid this? This is how I create it:

AudioComponentDescription ioUnitDescription;
ioUnitDescription.componentType = kAudioUnitType_Output;
ioUnitDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
ioUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
ioUnitDescription.componentFlags = 0;
ioUnitDescription.componentFlagsMask = 0;
AudioComponent outputComponent = AudioComponentFindNext(NULL, &ioUnitDescription);
result = AudioComponentInstanceNew(outputComponent, &m_audioUnit);
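If the app can target macOS 14 (or iOS 17) or later, one avenue worth trying is the voice-processing ducking configuration property. The Swift sketch below assumes the kAUVoiceIOProperty_OtherAudioDuckingConfiguration property and the Swift spelling of its fields and level constant; verify them against AudioUnitProperties.h before relying on it.

import AudioToolbox

// Sketch: ask the VoiceProcessingIO unit to duck other system audio as little
// as possible. `unit` stands for the instance created with AudioComponentInstanceNew.
func minimizeOtherAudioDucking(on unit: AudioUnit) -> OSStatus {
    var config = AUVoiceIOOtherAudioDuckingConfiguration(
        mEnableAdvancedDucking: true,  // duck only while voice activity is detected
        mDuckingLevel: .min)           // mildest ducking level (Swift spelling assumed)
    return AudioUnitSetProperty(unit,
                                kAUVoiceIOProperty_OtherAudioDuckingConfiguration,
                                kAudioUnitScope_Global, 0,
                                &config,
                                UInt32(MemoryLayout.size(ofValue: config)))
}

On earlier systems there does not appear to be a supported way to prevent the ducking that VoiceProcessingIO applies.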
Posted
by
Post not yet marked as solved
1 Replies
2k Views
Hi, Wondering if anyone has found a solution to the automatic volume reduction on the host computer when using the native macOS Screen Sharing application. The volume reduction makes it nearly impossible to comfortably continue working on the host computer when there is any audio involved. Is there a way to bypass this function? It seems to be the same native function that FaceTime uses to reduce the system audio volume to give priority to the application. Please help save my speakers! Thanks.
Posted
by
Post not yet marked as solved
1 Replies
1.8k Views
I’m developing a voice communication app for the iPad with both playback and recording, using an AudioUnit of subtype kAudioUnitSubType_VoiceProcessingIO to get echo cancellation. When playing audio before initializing the recording audio unit, the volume is high. But if I play the audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low. It seems like a bug in iOS; is there any solution or workaround for this? Searching the net I only found this post, without any solution: https://developer.apple.com/forums/thread/671836
Posted
by
Post not yet marked as solved
1 Replies
1.1k Views
Hi, This topic is about Workgroups. I create child processes and I'd like to communicate an os_workgroup_t to my child processes so they can join the workgroup as well. As far as I understand, the os_workgroup_t value is local to the process. I've found that one can use os_workgroup_copy_port() and os_workgroup_create_with_port(), but I'm not familiar with ports at all, and I wonder what the minimal effort to achieve that would be. Thank you very much! Alex
Posted
by
Post not yet marked as solved
1 Replies
2.2k Views
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.

let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0:  1 ch,  44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50:  3 ch,  44100 Hz, Float32, deinterleaved>

Is this expected? How can I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file. But when voice processing messes with the channel layout, I cannot rely on this anymore.
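One way to probe what those channels are is to look at the format's channel layout. This is only a sketch continuing the snippet above, and it may or may not report meaningful labels for an aggregate device:

let vpFormat = inputNode.outputFormat(forBus: 0)
if let layout = vpFormat.channelLayout {
    // Print the layout tag and the number of per-channel descriptions, if any.
    print("layout tag:", layout.layoutTag,
          "descriptions:", layout.layout.pointee.mNumberChannelDescriptions)
} else {
    print("no channel layout; \(vpFormat.channelCount) unlabeled channel(s)")
}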
Posted
by
Post not yet marked as solved
2 Replies
837 Views
Hi, I'm having trouble saving user presets from within an Audio Unit plugin. Saving user presets works well from the host, but I get an error when trying to save them from the plugin. I'm not using a parameter tree, but instead using fullState's getter and setter for saving and retrieving a dictionary with the state. With some simplified parameters it looks something like this:

var gain: Double = 0.0
var frequency: Double = 440.0
private var currentState: [String: Any] = [:]

override var fullState: [String: Any]? {
    get {
        // Save the current state
        currentState["gain"] = gain
        currentState["frequency"] = frequency
        // Return the preset state
        return ["myPresetKey": currentState]
    }
    set {
        // Extract the preset state
        currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        // Set the Audio Unit's properties
        gain = currentState["gain"] as? Double ?? 0.0
        frequency = currentState["frequency"] as? Double ?? 440.0
    }
}

This works perfectly well for storing user presets when saved from the host. When trying to save them from the plugin, to be able to reuse them across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM. I could not find any documentation for what the missing key is about and how I can get around this. Any ideas?
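One pattern that may address the missing-key complaint (a sketch, not a confirmed fix): instead of replacing the whole dictionary, start from the dictionary super.fullState already provides (it typically contains the standard preset keys a host expects) and merge the custom entries into it:

override var fullState: [String: Any]? {
    get {
        // Keep the standard keys AUAudioUnit provides and add our own entries.
        var state = super.fullState ?? [:]
        state["myPresetKey"] = ["gain": gain, "frequency": frequency]
        return state
    }
    set {
        // Let the superclass restore its part of the state first.
        super.fullState = newValue
        let custom = newValue?["myPresetKey"] as? [String: Any] ?? [:]
        gain = custom["gain"] as? Double ?? 0.0
        frequency = custom["frequency"] as? Double ?? 440.0
    }
}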
Posted
by
Post not yet marked as solved
3 Replies
1k Views
Back at the start of January this year we filed a bug report that still hasn't been acknowledged. Before we file a code-level support ticket, we would love to know if anyone else out there has experienced anything similar. We have read the documentation (repeatedly) and searched the web, and still found no solution; this issue looks like it could be a bug in the system (or in our code) rather than proper behaviour.

The app is a host for a v3 audio unit which is itself a workstation that can host other audio units. The host and audio unit are both working well in all areas other than this issue. Note: this is not running on Catalyst; it is a native macOS app (not iOS).

The problem is that when an AUv3 is hosted out of process (on the Mac) and then goes to fetch a list of all the other available installed audio units, the list returned from the system does not contain any of the other v3 audio units in the system. It only contains v2. We see this issue if we load our AU out of process in our own bespoke host, and also when it loads into Logic Pro, which gives no choice but to load out of process. This means that, as it stands at the moment, when we release the app our users will have limited functionality in Logic Pro, and possibly by then other hosts too. In our own host we can load our hosting AU in-process, and then it can find and use all the available units, both v2 and v3. So no issue there, but sadly, when loaded into the only other host that can do anything like this (Logic Pro at the time of posting), it won't be able to use v3 units, which is quite a serious limitation.

SUMMARY: A v3 audio unit is hosted out of process. The audio unit fetches the list of available audio units on the system. v3 audio units are not provided in the list; only v2 are presented.

EXPECTED: In some ways this seems to be the opposite of the behaviour we would expect. We would expect to see and host ALL the other units installed on the system. "Out of process" suggests the safer of the two options, so this looks like it could be related to some kind of sandboxing issue. But sadly we cannot work out a solution, hence this report. Is Quinn “The Eskimo” still around? Can you help us out here?
Posted
by
Post not yet marked as solved
0 Replies
752 Views
I found an app that makes a PC ***** and takes audio from other Android devices. However, when I connect with my Mac it doesn't work. I use AirPlay for now, but there are latency and quality problems. I have a Late 2013 MacBook Pro, which uses AirPlay 1; it is slower than the newer version. My Wi-Fi router is in my room, but as I said, I want to connect with Bluetooth. Why does this problem appear, and how can I work on this?
Posted
by
Post not yet marked as solved
0 Replies
455 Views
I found a lot of information, but I still have no idea how to create a custom AudioUnit (not AVAudioUnit or AUAudioUnit). I mean, is there any way I can make my own AudioUnit like this? I do know this is an AudioUnit provided by Apple; now I want to make my own AudioUnit in this way.

AudioComponentDescription audioDesc;
audioDesc.componentType = kAudioUnitType_Output;
audioDesc.componentSubType = kAudioUnitSubType_RemoteIO;
audioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
audioDesc.componentFlags = 0;
audioDesc.componentFlagsMask = 0;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &audioDesc);
AudioComponentInstanceNew(inputComponent, &audioUnit);

If you have any idea, please tell me. Thank you!
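As a point of comparison, if the underlying goal is to run your own audio code inside an I/O unit like the RemoteIO instance above, the usual route with the plain C AudioUnit API is to attach a render callback rather than define a new component. This Swift sketch makes that concrete; the function name is made up, and the callback body just writes silence where your own samples would go.

import AudioToolbox
import Darwin

// Sketch: make the RemoteIO unit pull audio from our own code by installing a
// render callback on its input scope. `audioUnit` stands for the instance
// obtained from AudioComponentInstanceNew above.
func installRenderCallback(on audioUnit: AudioUnit) -> OSStatus {
    var callback = AURenderCallbackStruct(
        inputProc: { (_, _, _, _, _, ioData) -> OSStatus in
            // Fill ioData with your samples; this placeholder writes silence.
            if let buffers = UnsafeMutableAudioBufferListPointer(ioData) {
                for buffer in buffers {
                    if let data = buffer.mData {
                        memset(data, 0, Int(buffer.mDataByteSize))
                    }
                }
            }
            return noErr
        },
        inputProcRefCon: nil)
    return AudioUnitSetProperty(audioUnit,
                                kAudioUnitProperty_SetRenderCallback,
                                kAudioUnitScope_Input, 0,
                                &callback,
                                UInt32(MemoryLayout<AURenderCallbackStruct>.size))
}

Creating a genuinely new, discoverable component goes through either an AUv3 extension or AUAudioUnit.registerSubclass, which this post explicitly wants to avoid, so the callback route is usually the practical answer for app-internal processing.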
Posted
by
Post not yet marked as solved
0 Replies
460 Views
I created an audio unit extension app from the Xcode template. Running the project in the Simulator succeeds, but when I try to run the project on my real device the program crashes like this. How do I solve this problem? Please tell me, thank you.
Posted
by
Post marked as solved
2 Replies
874 Views
I'm experimenting with getting my AUv3 plugins working correctly on iOS and macOS using Catalyst. I'm having trouble getting the plugin windows to look right in Logic Pro X on macOS. My plugin is designed to look right in GarageBand's minimal 'letterbox' layout (1024x335, or roughly a 3:1 aspect ratio). I have implemented supportedViewConfigurations to help the host choose the best display dimensions. On iOS this works, although Logic Pro for iPad doesn't seem to call supportedViewConfigurations at all; only GarageBand does. On macOS, Logic Pro does call supportedViewConfigurations but only provides oversized screen sizes, making the plugins look awkward. I can also remove the supportedViewConfigurations method on macOS, but this introduces other issues. I guess my question boils down to this: how do I tell Logic Pro X on macOS what the optimal window size of my plugin is, using Mac Catalyst?
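For reference, here is a minimal Swift sketch of the supportedViewConfigurations override. It is not a fix for Logic only offering oversized configurations, just an illustration of how a plugin typically filters what the host offers; the 2.5 width-to-height threshold is an arbitrary stand-in for the 1024x335 design.

override func supportedViewConfigurations(
    _ available: [AUAudioUnitViewConfiguration]) -> IndexSet {
    var supported = IndexSet()
    for (index, config) in available.enumerated() {
        // A 0x0 configuration means the host lets the plugin pick its own size;
        // otherwise prefer wide, letterbox-like configurations.
        if config.height == 0 || config.width / config.height >= 2.5 {
            supported.insert(index)
        }
    }
    return supported
}

Since this method can only choose among the configurations the host proposes, it cannot force a window size on a host that never offers a suitable one, which seems consistent with the behaviour described above.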
Posted
by
Post not yet marked as solved
0 Replies
1k Views
I have an app that supports AUv3 and can output both audio and MIDI information. It has the Audio Unit type set to "aumu", which is a 'music generator'. This has worked well in iOS hosts until now (like AUM or ApeMatrix), because they allow the selection of this type of Audio Unit both as an audio generator and as a MIDI sender. So someone who wants to use it as an audio generator can, and so can someone who only wants to use it as a MIDI FX. Logic Pro on the iPad has changed this, since the app does not appear under the MIDI FX selection. Is there any configuration to bypass this, or do I have to change to a MIDI FX AUv3 ('aumi') for it to show up there?
Posted
by
Post not yet marked as solved
0 Replies
585 Views
Device A: iPhone SE 2nd generation (iOS 16.5), Bluetooth headset: Shokz OpenRun S803. Device B: any mobile device.

A uses the Bluetooth microphone/speaker and makes a call to B using an iPhone app. Mute A's headphone (the Bluetooth device supports mute in hardware). While A is muted, B speaks. Then unmute A's headphone. From then on, every time B speaks, B hears an echo. Since there is no audio data while the hardware mute is engaged, VPIO gets no reference data with which to remove the echo signal. Is there any alternative to resolve this echo in VoIP software using VPIO?
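One workaround often suggested for this pattern (a sketch under assumptions, not a verified fix): keep the microphone path running and mute the uplink in software inside the VoiceProcessingIO unit, so the echo canceller keeps receiving both the reference and the mic signal while the user is "muted". The property is assumed to be kAUVoiceIOProperty_MuteOutput, and the scope/element values below are worth checking against the headers.

import AudioToolbox

// Sketch: mute the processed microphone output in software instead of relying
// on the headset's hardware mute. `vpioUnit` is the app's VoiceProcessingIO unit.
func setUplinkMuted(_ muted: Bool, on vpioUnit: AudioUnit) -> OSStatus {
    var value: UInt32 = muted ? 1 : 0
    return AudioUnitSetProperty(vpioUnit,
                                kAUVoiceIOProperty_MuteOutput,
                                kAudioUnitScope_Global, 0,
                                &value,
                                UInt32(MemoryLayout<UInt32>.size))
}

Whether the headset's hardware mute can be intercepted at all is a separate question; this only helps if the app performs the muting itself.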
Posted
by
Post not yet marked as solved
0 Replies
854 Views
To my knowledge, you can use the AudioUnitSetProperty function to set the kAUVoiceIOProperty_VoiceProcessingEnableAGC property to disable AGC in AUv2. However, there is no equivalent functionality available in AUv3. The closest option I found is the shouldBypassEffect property. How do I disable AGC using the AUv3 API?
Posted
by
Post not yet marked as solved
3 Replies
1.2k Views
Hi community, I'm developing an application for macOS and I need to capture the mic audio stream. Currently, using Core Audio in Swift, I'm able to capture the audio stream using IO Procs, and I have applied AUVoiceProcessing to prevent echo from the speaker device. I was able to connect the audio unit and perform the echo cancellation. The problem I'm getting is that when I'm using AUVoiceProcessing, the gain of the two devices gets reduced, which affects the volume of both devices (microphone and speaker). I have tried to disable the AGC using the property kAUVoiceIOProperty_VoiceProcessingEnableAGC, but the results are the same. Is there any option to disable the gain reduction, or is there a better approach to get the echo cancellation working?
Posted
by
Post not yet marked as solved
0 Replies
538 Views
I am developing a multi-threaded instrument plugin for Audio Unit v2. This is about a software synthesizer that has been proven to work on Intel Macs and has been converted to Apple Silicon native. I have a problem when I use Logic Pro on Apple Silicon Macs.

Plug the created software synthesizer into an instrument track. Make sure no track other than the one you created exists. Put the track in recording mode.

When the above steps are followed, the performance meter in Logic Pro shows that the load is concentrated on one specific core and far exceeds the total load when the load is divided. This load occurs continuously and is resolved when another track is created and selected. It is understandable as a specification that the load is concentrated on a particular core; however, the magnitude of the load is abnormal. In fact, when the peak exceeds 100%, it leads to audible noise. Also, in this case, the Activity Monitor included with macOS does not show any increase in the usage of a specific CPU core, and the Time Profiler included with Xcode did not identify any location that took a large amount of time. We have examined various experimental programs and found a positive correlation between the frequency of thread switches in the multi-threaded areas and the peak of this CPU spike. A mutex is used for the thread switch.

In summary, we speculate that performance seems to be worse when multi-threaded processing is done on a single core. Is there any solution to this problem at the developer level, or at the customer level in Logic Pro?

Environment: MacBook Pro 16-inch 2021, CPU: Apple M1 Max, OS: macOS 12.6.3, Memory: 32 GB, Logic Pro 10.7.9, built-in speakers, audio buffer size: 32 samples.

(Screenshots: the performance meter before the symptoms occurred, and a view of the performance meter with symptoms after entering the recording condition.)
Posted
by
Post marked as solved
13 Replies
1.7k Views
I'm developing a sandboxed application with Xcode which allows the user to open and work with Audio Unit plugins. Working with a beta tester who has a lot of AUs on their laptop running macOS 12.5.1, we encountered some weird crashes while opening some plugins (Krotos, Flux Audio, Sound Toys, etc.). The message we got was in French; I have translated it, but the original English version could be slightly different: Impossible to open “NSCreateObjectFileImageFromMemory-p47UEwps” because the developer can not be verified. After this first warning, a Fatal Error 100001 message opens and the plugin appears crashed (but not the host). I easily found music application users encountering similar issues on the web. From what I read, this error is related to new security rules introduced in macOS 12, and indeed, some of these plugins work normally when tested on an older system. I also read that some (insecure) Hardened Runtime entitlements should be able to fix this issue, especially Allow Unsigned Executable Memory, whose documentation says: In rare cases, an app might need to override or patch C code, use the long-deprecated NSCreateObjectFileImageFromMemory (which is fundamentally insecure), or use the DVDPlayback framework. Add the Allow Unsigned Executable Memory Entitlement to enable these use cases. Otherwise, the app might crash or behave in unexpected ways. Unfortunately, checking this option didn't fix the issue. So what I tried next was to add Disable Executable Memory Protection (no more success), and finally Allow DYLD Environment Variables and Allow Execution of JIT-compiled Code: none of them solved my problem. I really don't see what else to do, while I'm sure a solution exists, because the same plugins work perfectly in other applications (Logic, Ableton Live). Any help would be greatly appreciated. Thanks!
Posted
by
-dp
Post not yet marked as solved
0 Replies
445 Views
We developed an app on macOS that needs to record audio data via AudioUnit. But if the user chooses the "Voice Isolation" microphone mode, all the high-frequency audio data is lost. We tried, but we found that the system no longer provides the original audio data. Can anybody help?
Posted
by
Post not yet marked as solved
5 Replies
1.3k Views
I'm the developer of a small utility for Mac called "MusicDeviceHost". https://apps.apple.com/us/app/musicdevicehost/id1261046263?mt=12 As the name suggests, it is a host application for audio units (music device components). See also "Using Sound Canvas VA with QMidi": https://youtu.be/F9C4BiBR A problem occurs while trying to authorize the "Sound Canvas VA" component: Roland Cloud Manager (v3.0.3) returns the following error: “Authorization Error - RM Service not connected. Error Connecting to Roland Cloud Manager Service.” I guess the error is caused by some permission being denied to the sandboxed version of the application; the NOT sandboxed version of MDH actually works flawlessly. I am using the following entitlements: com.apple.security.app-sandbox and com.apple.security.network.client. So connecting to the service should work, because com.apple.security.network.client is enabled. At Roland, they say: "Cloud Manager isn't supported in a sandboxed environment." But as far as I can see, MainStage and other sandboxed apps work fine... So what is the right answer? Is there someone out there with the same issue? Thanks for helping :)
Posted
by
Post not yet marked as solved
0 Replies
519 Views
Some users reported issues when using our audio units with recent versions of Logic Pro. After investigation, it seems there is an issue where a modal won't appear if the plugin view is not open in Logic with AUHostingServices. Trying with a JUCE example plugin on the latest version, it also fails to run a modal while the view is not open. The system call returns an NSModalResponseAbort code. I'm nearly sure this is a recent regression in AUHostingServices, but maybe it has always been there. Could you help us find a solution, or at least a workaround? Thanks in advance, Best regards
Posted
by