AudioUnit


Create audio unit extensions and add sophisticated audio manipulation and processing capabilities to your app using AudioUnit.

AudioUnit Documentation

Posts under AudioUnit tag

48 Posts
Post not yet marked as solved
0 Replies
244 Views
How can you add a live audio player in Xcode where the user has an interactive UI to control the audio, and playback keeps going when they exit the app or turn their device's screen off? Is there a framework or API that will work for this? Thanks! I really need help with this… 🤩 I have looked everywhere and haven't found anything that works…
Posted Last updated
.
Post not yet marked as solved
0 Replies
289 Views
I'm trying to create a simple pure MIDI audio unit (AUv3) that could act as a pipe between, for example, an AVMusicTrack (played by an AVAudioSequencer) and an AVAudioUnitSampler. I used the default audio extension template generated by Xcode 13.2.1 and modified just a few things:

My audio unit has type kAudioUnitType_MIDIProcessor (aumi). From what I read it's the right candidate, and the only one I can connect to a sampler; otherwise the app crashes with:

    Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: graphNodeSrc->IsMIDIProcessorNode()'

Maybe I'm missing something here, any suggestion?

I overrode the handleMIDIEvent() function in my AU's DSP kernel just to print something when I receive MIDI events:

    void handleMIDIEvent(const AUMIDIEvent &midiEvent) override {
        cout << "MIDI Event" << endl;
    }

I declared MIDIOutputNames properly.

My goal is to follow the MIDI playing context from this AU and edit some note messages depending on the context. My host provides AUMIDIOutputEventBlock, AUHostTransportStateBlock and AUHostMusicalContextBlock so that the AU can read the state and context and output MIDI to the host.

If I make my AU a kAudioUnitType_MusicDevice (aumu), I do receive note events from the music track (even though I cannot connect my AU to the sampler, as it's not a MIDI processor). But as a MIDI processor, I don't. Any clue why this is?
Posted
by PMNewzik.
Last updated
.
Post not yet marked as solved
0 Replies
277 Views
I know the VoiceProcessingIO audio unit will create an aggregate audio device. But I get the error kAudioUnitErr_InvalidProperty (-10789) when getting the kAudioOutputUnitProperty_OSWorkgroup property on recent macOS Monterey 12.2.1 or Big Sur 11.6.4.

    os_workgroup_t workgroup = NULL;
    UInt32 sSize;
    OSStatus sStatus;
    sSize = sizeof(os_workgroup_t);
    sStatus = AudioUnitGetProperty(mAudioUnit, kAudioOutputUnitProperty_OSWorkgroup, kAudioUnitScope_Global, 1, &workgroup, &sSize);
    if (sStatus != noErr) {
        NSLog(@"Error %d", sStatus);
    }

The same code works fine on iOS 15.3.1 but not on macOS. Do you have any hints to resolve this issue?
Posted
by ened.
Last updated
.
Post not yet marked as solved
0 Replies
261 Views
Could someone explain what 'MALLOC_NANO' is? A snippet from the crash report it was contained in:

    Crashed Thread: 50 myThread 0x1722ac000 - 0x17244ffff
    Exception Type: EXC_BAD_ACCESS (SIGBUS)
    Exception Codes: KERN_PROTECTION_FAILURE at 0x0000600015c9e340
    Exception Note: EXC_CORPSE_NOTIFY
    Termination Signal: Bus error: 10
    Termination Reason: Namespace SIGNAL, Code 0xa
    Terminating Process: exc handler [35195]

    VM Regions Near 0x600015c9e340:
        MALLOC_NANO 600008000000-600010000000 [128.0M] rw-/rwx SM=PRV
    --> MALLOC_NANO 600010000000-600018000000 [128.0M] rw-/rwx SM=SHM
        MALLOC_NANO 600018000000-600020000000 [128.0M] rw-/rwx SM=PRV

I'm guessing the root cause of this crash is use of a deallocated object, but I'd really like to know more about MALLOC_NANO, as I couldn't find much information elsewhere. Much appreciated!
Posted
by remd.
Last updated
.
Post not yet marked as solved
0 Replies
277 Views
When using the VoiceProcessingIO audio unit with the voice chat audio session mode to get echo cancellation, I can't play audio in stereo; it only allows mono audio. How can I enable stereo playback with echo cancellation? Is this some kind of limitation? It isn't mentioned anywhere in the documentation.
Posted
by ObCG.
Last updated
.
Post not yet marked as solved
0 Replies
324 Views
I’m developing a voice communication app for the iPad, with both playback and recording, using an AudioUnit of type kAudioUnitSubType_VoiceProcessingIO to get echo cancellation. When playing audio before initializing the recording audio unit, the volume is high. But if I play the audio after initializing the audio unit, or when switching to RemoteIO and then back to VPIO, the playback volume is low. It seems like a bug in iOS; is there any solution or workaround for this? Searching the net I only found this post, without any solution: https://developer.apple.com/forums/thread/671836
Posted
by ObCG.
Last updated
.
Post not yet marked as solved
1 Reply
784 Views
When I change the audio unit from VPIO (VoiceProcessingIO) to RemoteIO, the volume is reduced and cannot be restored to the normal level. There's a workaround in this post: https://trac.pjsip.org/repos/ticket/1697

I use the workaround in my code, following these steps:

1. Stop and destroy all the running IO audio units (RemoteIO and VPIO).
2. Run the workaround code:

    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryRecord error:nil];
    [[AVAudioSession sharedInstance] setActive:NO error:nil];
    [[AVAudioSession sharedInstance] setActive:YES error:nil];
    [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

3. Set up RemoteIO and run.

And it worked perfectly! The volume of RemoteIO returned to normal. But sadly, on iOS 14 the workaround does not work at all.
Posted
by drummer.
Last updated
.
Post not yet marked as solved
0 Replies
233 Views
Hi, not sure if this is by design or not, but whenever I connect a Bluetooth device to the app, the AU callback stops producing frames for several seconds until the device is connected. I'm building a recording app that uses AVAssetWriter with fragmented segments (HLS buffers). When the callback freezes it is supposed to create a gap in the audio, but for some reason the segment that is created does not contain an audio gap, and the audio just "jumps" in timestamps.
Posted
by YYfim.
Last updated
.
Post not yet marked as solved
0 Replies
185 Views
I have an audio unit whose parameters I can retrieve and save in Logic Pro X. The values are boolean, float and integer. What I want to do is be able to save a string value, just like what Logic Pro X is doing with its MIDI FX plugin Scripter. Is this possible?
Posted Last updated
.
Post not yet marked as solved
1 Reply
389 Views
Hi, I've released an open-source AUv3 MIDI processor plugin for iOS and macOS that records and plays MIDI messages in a sample-accurate fashion and doesn't ever apply any quantization. I've tested this plugin with 120 beta testers and everything seemed to work fine. However, now that I've released it, there seems to be a problem in Logic Pro X on some Mac computers with MIDI FX processor plugins that are using Catalyst. You can find my plugin here: http://uwyn.com/mtr/ ... and the source code here: https://github.com/gbevin/MIDITapeRecorder When I trace the AUv3 instantiation, I see Logic Pro X obtaining the internalRenderBlock several times, but then never calling it. This means there's no render callback and there are never any MIDI parameter events received. I've talked to the developer of ZOA, which is also a MIDI processor plugin using Catalyst, and he's running into exactly the same problem: https://www.audiosymmetric.com/zoa.html Another developer who's working on a MIDI processor plugin has been trying to track this down for weeks also. When I test this on my M1 Max MacBook Pro, internalRenderBlock is always called; however, on my M1 MacBook Air and Intel 2019 MacBook Pro, it is never called. Any thoughts or ideas to work around this would be really helpful. Thanks!
Posted
by gbevin.
Last updated
.
Post not yet marked as solved
0 Replies
369 Views
Hi, I have a problem with an AU host (based on Audio Toolbox/Core Audio, not AVFoundation) when running on macOS 11 or later on Apple Silicon – it crashes after some operations in the GUI. The weird thing is, it crashes in the IOThread. Could this be caused by some inappropriate operation in the GUI (e.g. outside the main thread) that affects the IOThread? Sounds quite improbable to me, and I did not find anything suspicious in the code. There are two logs in the debugger:

    [AUHostingService Client] connection interrupted.
    rt_sender::signal_wait failed: 89
    ...

And here is the crash log: ... Thanks, Tomas
Posted
by Audified.
Last updated
.
Post not yet marked as solved
0 Replies
482 Views
The voice call in our game has this usage scenario: when the player is not wearing a headset or Bluetooth, we use VPIO mode to turn on the microphone; conversely, when the player is wearing a headset or Bluetooth, RemoteIO mode is used to turn on the microphone. So we have a switch between VPIO and RemoteIO.

When we only use RemoteIO (the player has been wearing headphones or Bluetooth since the game started), the output volume does not change before and after the microphone is switched on. When we only use VPIO mode (the player has not been wearing headphones or Bluetooth since the game started), the output volume is reduced a little after turning on the microphone (because turning on the microphone causes the phone to enter VPIO and echo cancellation is turned on). The output volume in both of the above scenarios is normal.

However, when we switch between VPIO and RemoteIO (the player didn't wear headphones or Bluetooth at the beginning, and put them on in the middle of the game), we encountered a problem: as long as VPIO mode has been used before RemoteIO mode, the output volume in RemoteIO mode after turning on the microphone will be the same as in VPIO mode (normally the output volume in RemoteIO should be greater than in VPIO mode); turning off the microphone makes the output volume return to normal.

What confuses me: my understanding is that when we use RemoteIO, the phone should not apply suppression-like speech algorithms, so when we only use RemoteIO the output volume does not change. When we use VPIO, the phone applies echo algorithms, and perhaps dynamic compression and some gain processing; at that point the output volume is reduced to better handle the echo. This behavior is normal.

However, when I switch between VPIO and RemoteIO, it seems that when I use RemoteIO (after the VPIO resources are released), some of the previous VPIO algorithm processing is still in place (maybe dynamic compression or a gain algorithm), which finally leads to the output volume under RemoteIO being the same as under VPIO. And this only happens on iOS 14; previous versions are normal (any time you enter RemoteIO mode, the volume does not change).

I want to know: is this iOS 14 behavior (switching between VPIO and RemoteIO causes the volume of RemoteIO to decrease) normal? If it is not normal, how can we solve it?
Posted
by bomff.
Last updated
.
Post not yet marked as solved
0 Replies
405 Views
Hi, wondering if anyone has found a solution to the automatic volume reduction on the host computer when using the macOS native screen sharing application. The volume reduction makes it nearly impossible to comfortably continue working on the host computer when there is any audio involved. Is there a way to bypass this function? It seems to be the same native function that FaceTime uses to reduce the system audio volume to give priority to the application. Please help save my speakers! Thanks.
Posted Last updated
.
Post marked as solved
3 Replies
1.1k Views
I'm having trouble getting my iPad app / AUv3 synth working on macOS via Mac Catalyst. The app works fine in standalone mode but the DAWs aren't able to load it. The errors from both GarageBand and Logic Pro are too cryptic to decipher what's going wrong. This is on Big Sur.

This is from Logic Pro's auval:

    validating Audio Unit Mela 2 by Nikolozi:
        AU Validation Tool
        Version: 1.8.0
        Copyright 2003-2019, Apple Inc. All Rights Reserved.
        Specify -h (-help) for command options
    --------------------------------------------------
    VALIDATING AUDIO UNIT: 'aumu' - 'Mel2' - 'NKLZ'
    --------------------------------------------------
    Manufacturer String: Nikolozi
    AudioUnit Name: Mela 2
    Component Version: 1.6.0 (0x10600)
    * * PASS
    --------------------------------------------------
    TESTING OPEN TIMES:
    COLD:
    FATAL ERROR: OpenAComponent: result: 4,0x4
    validation result: couldn’t be opened

From the GarageBand logs I have this (it happens when I try to load the synth as an instrument plug-in):

    2021-06-13 10:23:12.078357+0400 GarageBand[99801:5732544] [lifecycle] [u 589AF1E2-2BE5-451F-A613-EC9BA71325E9:m (null)] [com.nikolozi.Mela.InstrumentExtension(1.0)] Failed to start plugin; pkd returned an error: Error Domain=PlugInKit Code=4 "RBSLaunchRequest error trying to launch plugin com.nikolozi.Mela.InstrumentExtension(589AF1E2-2BE5-451F-A613-EC9BA71325E9): Error Domain=RBSRequestErrorDomain Code=5 "Launch failed." UserInfo={NSLocalizedFailureReason=Launch failed., NSUnderlyingError=0x7faa9232ca40 {Error Domain=NSPOSIXErrorDomain Code=153 "Unknown error: 153" UserInfo={NSLocalizedDescription=Launchd job spawn failed with error: 153}}}" UserInfo={NSLocalizedDescription=RBSLaunchRequest error trying to launch plugin com.nikolozi.Mela.InstrumentExtension(589AF1E2-2BE5-451F-A613-EC9BA71325E9): Error Domain=RBSRequestErrorDomain Code=5 "Launch failed." UserInfo={NSLocalizedFailureReason=Launch failed., NSUnderlyingError=0x7faa9232ca40 {Error Domain=NSPOSIXErrorDomain Code=153 "Unknown error: 153" UserInfo={NSLocalizedDescription=Launchd job spawn failed with error: 153}}}}
    2021-06-13 10:23:12.078518+0400 GarageBand[99801:5732544] [plugin] Unable to acquire process assertion in beginUsing: with plugin identifier: com.nikolozi.Mela.InstrumentExtension, killing plugin
    2021-06-13 10:23:12.078814+0400 GarageBand[99801:5732544] [plugin] PlugInKit error in beginUsing: with plugin identifier: com.nikolozi.Mela.InstrumentExtension, killing plugin
    2021-06-13 10:23:12.153420+0400 GarageBand[99801:5730475] Failed to instantiate AU. Description: RBSLaunchRequest error trying to launch plugin com.nikolozi.Mela.InstrumentExtension(589AF1E2-2BE5-451F-A613-EC9BA71325E9): Error Domain=RBSRequestErrorDomain Code=5 "Launch failed." UserInfo={NSLocalizedFailureReason=Launch failed., NSUnderlyingError=0x7faa9232ca40 {Error Domain=NSPOSIXErrorDomain Code=153 "Unknown error: 153" UserInfo={NSLocalizedDescription=Launchd job spawn failed with error: 153}}} Reason: (null)

I've tried Apple's sample code AUv3Filter, turned on Mac Catalyst for the AUv3Filter iOS target, and it runs fine in Logic Pro. I'm not sure what's incompatible in my code that fails to work as an AUv3 with Mac Catalyst. Any known issues for the Mac Catalyst + AUv3 combo that I should be aware of / investigate?
Posted Last updated
.
Post not yet marked as solved
0 Replies
279 Views
I created a multi-timbral instrument application based on multiple AVAudioUnitSampler instances (one per MIDI channel), wrapped in a custom AVSampler class. I want to expose it also as an AUv3. I followed some articles and samples: I put the view controller and other classes in a framework target, and created an AudioUnit extension target (with a dummy/empty class file, as I have no implementation to provide). In the extension's Info.plist (NSExtensionAttributes) I added AudioComponentBundle (pointing to the AUFramework) and an AudioComponents item with a factoryFunction (pointing to $(PRODUCT_MODULE_NAME).MultiSamplerViewController) and the aumu type. I also added NSExtensionPrincipalClass pointing to AUFramework.MultiSamplerViewController.

In the shared MultiSamplerViewController I implemented:

    - (AUAudioUnit *)createAudioUnitWithComponentDescription:(AudioComponentDescription)desc error:(NSError **)error {
        return [[[multiSampler engine] outputNode] AUAudioUnit];
    }

It also contains an - (id)initWithCoder:(NSCoder*)decoder method that instantiates the wrapping MultiSampler and starts an enclosed MidiManager.

The host application target runs fine; however, the AU extension plugin isn't listed in GarageBand (even after running the host application once). The target platform is iPad. I added code to load the appex plugin bundle, however it doesn't seem to be enough to register the plugin. Also, I cannot use AUAudioUnit's registerSubclass, as I have no concrete AU implementation class (could I pass [[[multiSampler engine] outputNode] AUAudioUnit]?). I'm in the same configuration as an application built on the AudioKit framework (which originally wrapped AVAudioUnitSampler and now uses a custom implementation).
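For reference, a sketch of the general shape such an extension Info.plist takes (the bundle identifier, subtype and manufacturer codes below are made up; the class and type names mirror the post). Note that AudioComponentBundle sits inside NSExtensionAttributes, while NSExtensionPrincipalClass is a sibling of NSExtensionAttributes under NSExtension:

```xml
<!-- Sketch only: identifiers and codes are illustrative -->
<key>NSExtension</key>
<dict>
    <key>NSExtensionPointIdentifier</key>
    <string>com.apple.AudioUnit-UI</string>
    <key>NSExtensionPrincipalClass</key>
    <string>AUFramework.MultiSamplerViewController</string>
    <key>NSExtensionAttributes</key>
    <dict>
        <key>AudioComponentBundle</key>
        <string>com.example.AUFramework</string>
        <key>AudioComponents</key>
        <array>
            <dict>
                <key>type</key><string>aumu</string>
                <key>subtype</key><string>msmp</string>
                <key>manufacturer</key><string>Demo</string>
                <key>name</key><string>Demo: MultiSampler</string>
                <key>version</key><integer>65536</integer>
                <key>factoryFunction</key>
                <string>$(PRODUCT_MODULE_NAME).MultiSamplerViewController</string>
                <key>sandboxSafe</key><true/>
                <key>tags</key>
                <array><string>Synthesizer</string></array>
            </dict>
        </array>
    </dict>
</dict>
```

Hosts only list components whose AudioComponents entry is complete (type, subtype, manufacturer, name, version, factoryFunction), and the containing app must have been launched on the device at least once for the system to register the appex.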
Posted
by cjed.
Last updated
.
Post not yet marked as solved
1 Reply
1.6k Views
I am a developer at Tencent. We found that after AirPods upgraded to the new firmware version 4A400, some AirPods microphones may have abnormal sound problems, especially on iPhones running iOS 13. The specific symptoms: the sound captured by the AirPods microphones intermittently exhibits noise, broken sound, pitch shift and tremor, and the audibility and intelligibility are poor. Our users have reported that many times when using Tencent Meeting for video calls the others can't hear their voice, so we tested many iPhones and AirPods and found that they all have some problems. Here are our specific test results:

1. iPhone 11 Pro Max / iOS 13.6.1 / AirPods 2: the sound periodically exhibits noise and tremor about every 20 s; it sounds uncomfortable, and when testing with the system phone / FaceTime / WeChat calls / Zoom, the effect is the same.
2. The same iPhone as in 1, with the earphones replaced by AirPods Pro: the results are exactly the same as in 1.
3. The same AirPods Pro as in 2, with the phone changed to an iPhone X / iOS 13.6: there are occasional discontinuities in the sound, and crackling noises can be heard.
4. The same AirPods Pro as in 2, with the phone changed to an iPhone Xs / iOS 13.7: at the beginning there was continuous noise and pitch shifting; after a few minutes of speaking it returned to normal, and then the sound remained normal.
5. The same AirPods Pro as in 2, with the phone changed to an iPhone 12 Pro Max / iOS 15.0.2: the sound is completely normal.

Why do the same AirPods, on different iPhones, differ so much in captured sound quality, with the same phenomenon across various VoIP apps, and all of the problems on iOS 13? Does the 4A400 firmware have compatibility issues with the iOS 13 system? We noticed that the previous AirPods hardware sampling rate was 16 kHz, but after upgrading to 4A400 the hardware sampling rate changed to 24 kHz.
Is the above noise related to the change of the hardware sampling rate? Do I need to modify the Audio Unit parameters to solve the above problems? Our app has a very large group of personal and corporate users; when they find that there is a problem with the sound, they give us feedback, which puts more pressure on us. We hope to get a reply from Apple or other developers, thank you!
Posted
by doveshi.
Last updated
.
Post marked as solved
1 Reply
532 Views
I have just begun building plugins for Logic using the AUv3 format. After a few teething problems, to say the least, I have a basic plugin working (integrated with SwiftUI, which is handy), but the install and validation process is still buggy and bugging me! Does the standalone app which Xcode generates to create the plugin always have to be running separately from Logic for the AUv3 to be available? Is there no way to have it as a permanently available plugin without running that? If anyone has any links to a decent tutorial, please share... there are very few I can find on YouTube or anywhere else, and the Apple tutorials and examples aren't great.
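Two command-line checks that may help see whether the system has registered the extension at all (the validation codes below are placeholders, not real identifiers):

```shell
# List the AUv3 UI app extensions PluginKit currently knows about
pluginkit -m -p com.apple.AudioUnit-UI

# List every Audio Unit the component system can see
auval -a

# Validate one specific unit by type/subtype/manufacturer
# (placeholders -- substitute your own codes)
auval -v aumu Xmpl Demo
```

In my understanding, the extension is registered when its containing app is installed or launched once; it does not need to keep running alongside Logic. If pluginkit doesn't list the appex, no host ever will.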
Posted
by Waterboy.
Last updated
.
Post not yet marked as solved
5 Replies
468 Views
I have been trying for the last day and a half to get an AU .component bundle to notarise, with no success. The bundle in question is an AU implementation of a .vst3 plugin, for which I also have a bundle that notarises without any issue. The AU implementation is achieved with the Steinberg AU wrapper library, which implements an AU (v2) component that loads the VST3 plugin implementation contained in a .vst3 bundle under the Resources folder of the AU component bundle. I code sign and package the bundle with:

    codesign --force --options runtime --timestamp --sign "Developer ID Application: ***" HBDynamicsAU.component/Contents/Resources/plugin.vst3/Contents/MacOS/HBDynamicsVST
    codesign --force --options runtime --timestamp --sign "Developer ID Application: ***" HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU
    codesign --force --timestamp --sign "Developer ID Application: ***" HBDynamicsAU.component
    productbuild --component HBDynamicsAU.component /Library/Audio/Plug-Ins/Components --root ./presets/AU /Library/Audio/Presets/HarBal/Dynamics/ --timestamp --sign "Developer ID Installer: ***" HBDynamicsAU-1.0.2.intel.64.pkg

Checking the output of the signing process with:

    pkgutil --check-signature HBDynamicsAU-1.0.2.intel.64.pkg

yields:

    Package "HBDynamicsAU-1.0.2.intel.64.pkg":
    Status: signed by a certificate trusted by Mac OS X
    Certificate Chain:
    1. Developer ID Installer: Paavo Jumppanen (***)
       SHA1 fingerprint: B8 BD FF DC 43 1A 6B 25 BE 39 21 F2 B5 D1 3F C2 D7 B6 0B 1F
    2. Developer ID Certification Authority
       SHA1 fingerprint: 3B 16 6C 3B 7D C4 B7 51 C9 FE 2A FA B9 13 56 41 E3 88 E1 86
    3. Apple Root CA
       SHA1 fingerprint: 61 1E 5B 66 2C 59 3A 08 FF 58 D1 4A E2 24 52 D1 98 DF 6C 60

which looks fine to me. Also doing:

    spctl -a -vvv -t install HBDynamicsAU-1.0.2.intel.64.pkg

yields:

    HBDynamicsAU-1.0.2.intel.64.pkg: accepted
    source=Developer ID
    origin=Developer ID Installer: ***

which again looks fine. Now, if I notarise it, I get the following outcome in the log:

    logFormatVersion: 1
    jobId: "9127060e-ea60-4044-82e8-ba6a7cd234c6"
    status: "Invalid"
    statusSummary: "Archive contains critical validation errors"
    statusCode: 4000
    archiveFilename: "HBDynamicsAU-1.0.2.intel.64.pkg"
    uploadDate: "2021-10-04T03:46:59Z"
    sha256: "17dc1ba78e55349501913ee31648a49850aa996d0c822131cf7625096f5d827c"
    ticketContents: null
    issues:
      0:
        severity: "error"
        code: null
        path: "HBDynamicsAU-1.0.2.intel.64.pkg/com.har_bal.HarBal.dynamics_1.0.2.au.pkg Contents/Payload/Library/Audio/Plug-Ins/Components/HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU"
        message: "The signature of the binary is invalid."
        docUrl: null
        architecture: "x86_64"

The file it is complaining about is the AU wrapper dynamic library, HBDynamicsAU. If I now check that in my bundle (i.e. before it was packaged) I get:

    codesign --verify --verbose -r- HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU
    HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU: valid on disk
    HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU: satisfies its Designated Requirement

so it looks OK. To check that nothing is wrong with the package, I installed the plugin by double clicking on the .pkg file in Finder and then ran the codesign check on the installed plugin:

    codesign --verify --verbose -r- /Library/Audio/Plug-Ins/Components/HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU
    /Library/Audio/Plug-Ins/Components/HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU: valid on disk
    /Library/Audio/Plug-Ins/Components/HBDynamicsAU.component/Contents/MacOS/HBDynamicsAU: satisfies its Designated Requirement

which looks perfectly fine as well. So what gives? Why does it fail notarisation when every check you are told to do to debug issues says it is OK? This looks like a bug in the process, but I have no idea. The limited information returned through the log is not enough to isolate the problem. How do I fix this, and indeed, can I fix this?

regards, Paavo.

PS - To check that it wasn't a problem with the actual dylib, I swapped the libraries around in the package so the VST3 one was where the AU one was and vice versa, and ran the notarisation again; it again said the library under HBDynamicsAU.component/Contents/MacOS was the problem (which is now the VST3 one, because I swapped them), so it clearly has nothing to do with the library itself.
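One thing worth ruling out (a sketch, not a verified fix): the final bundle-level codesign pass in the post omits --options runtime, and because codesign run against a bundle (re-)signs its main executable, that last --force pass can replace the hardened-runtime signature that the earlier step applied to HBDynamicsAU. A re-signing pass that keeps the hardened runtime at every level might look like this (certificate name and paths as in the post):

```shell
CERT="Developer ID Application: ***"

# Innermost first: the nested VST3 bundle, then the AU component bundle.
# Signing the bundles (rather than bare Mach-O files) also re-seals the
# nested structure that notarisation inspects.
codesign --force --options runtime --timestamp --sign "$CERT" \
    HBDynamicsAU.component/Contents/Resources/plugin.vst3
codesign --force --options runtime --timestamp --sign "$CERT" \
    HBDynamicsAU.component

# Strict verification catches some problems that a plain --verify does not.
codesign --verify --strict --deep --verbose=2 HBDynamicsAU.component
```

These commands obviously only run on macOS with the Developer ID certificates installed; the point is the ordering and the consistent --options runtime.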
Posted
by PaavoJ.
Last updated
.
Post not yet marked as solved
0 Replies
329 Views
When I run my AUv3 synth inside a host on an iPad, under certain conditions I'm receiving repeated MIDI events. I'm still figuring out what the exact trigger is; it could be a CPU overload, not sure yet. Thought I'd ask here if anyone else has ideas as to what might be going on. After inspecting the passed-in variable AURenderEvent* realtimeEventListHead, it looks like it contains MIDI events that were already handled in previous callbacks. These MIDI events have timestamps that are older than the passed-in AudioTimeStamp* timestamp. I sometimes receive the same events something like 8 times in a row, i.e. in 8 render callbacks, and these events all have the same timestamp. So I'm not sure why I'm receiving them again. Could the system be assuming they weren't handled and be sending them again? I'm on iOS 15. (Not sure if this is also happening on iOS 14; I don't have an iOS 14 device to test on.) I reproduced this issue in both AUM and GarageBand.
Posted Last updated
.