Core Audio


Interact with the audio hardware of a device using Core Audio.

Core Audio Documentation

Posts under Core Audio tag

54 Posts
Post not yet marked as solved
1 Replies
883 Views
Here is the processing pipeline in my application:

Mic -> AVCaptureOutput -> Audio (PCM) -> Audio Encoder -> AAC Packet (encoded)
Camera -> AVCaptureOutput -> Image -> Video Encoder -> H.264 Video Packet (encoded)

So my app is a movie encoder. The crash happens when the camera is switched (front camera <-> back camera). The crashing call is AudioConverterFillComplexBuffer, apparently inside NativeInt16ToFloat32Scaled_ARM. What does that mean, and why does it happen?

0 AudioCodecs    0x0000000183fbe2bc NativeInt16ToFloat32Scaled_ARM + 132
1 AudioCodecs    0x0000000183f63708 AppendInputData(void*, void const*, unsigned int*, unsigned int*, AudioStreamPacketDescription const*) + 56
2 AudioToolbox   0x000000018411aaac CodecConverter::AppendExcessInput(unsigned int&) + 196
3 AudioToolbox   0x000000018411a59c CodecConverter::EncoderFillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 660
4 AudioToolbox   0x0000000184124ec0 AudioConverterChain::RenderOutput(CABufferList*, unsigned int, unsigned int&, AudioStreamPacketDescription*) + 116
5 AudioToolbox   0x0000000184100d98 BufferedAudioConverter::FillBuffer(unsigned int&, AudioBufferList&, AudioStreamPacketDescription*) + 444
6 AudioToolbox   0x00000001840d8c9c AudioConverterFillComplexBuffer + 340
7 MovieEncoder   0x0000000100341fd4 __49-[AACEncoder encodeSampleBuffer:completionBlock:]_block_invoke (AACEncoder.m:247)
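Editor's note: a minimal sketch (Swift, with hypothetical names such as prepareConverter and converterInputFormat) of one defensive approach, assuming the crash comes from the microphone's stream format changing during the camera switch while the AudioConverter still expects the old layout. It compares each incoming buffer's ASBD against the format the converter was created with and rebuilds the converter when it changes; it is not a confirmed fix.

    import AudioToolbox
    import CoreMedia

    var converter: AudioConverterRef?
    var converterInputFormat = AudioStreamBasicDescription()
    var aacOutputFormat = AudioStreamBasicDescription()   // AAC output ASBD, configured elsewhere

    func prepareConverter(for sampleBuffer: CMSampleBuffer) {
        guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer),
              let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(desc)?.pointee else { return }

        // Rebuild the converter if the incoming PCM format no longer matches the
        // format it was created with (e.g. sample rate or channel count changed).
        if converter == nil
            || asbd.mSampleRate != converterInputFormat.mSampleRate
            || asbd.mChannelsPerFrame != converterInputFormat.mChannelsPerFrame {
            if let existing = converter { AudioConverterDispose(existing) }
            converterInputFormat = asbd
            var inputFormat = asbd
            AudioConverterNew(&inputFormat, &aacOutputFormat, &converter)
        }
    }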
Posted
by
Post not yet marked as solved
7 Replies
12k Views
Hi, We recently started using AVAudioEngine in our app, and are receiving reports of crashes in the wild, specifically:

com.apple.coreaudio.avfaudio required condition is false: hwFormat

These crashes are occurring on all iOS versions, including the latest (10.0.2 14A456). The crashes are always on a background thread. Here is one example stack trace:

(CoreFoundation + 0x0012f1c0) __exceptionPreprocess
(libobjc.A.dylib + 0x00008558) objc_exception_throw
(CoreFoundation + 0x0012f090) +[NSException raise:format:arguments:]
(AVFAudio + 0x00016788) AVAE_RaiseException(NSString*, ...)
(AVFAudio + 0x0008f168) AVAudioIOUnit::_GetHWFormat(unsigned int, unsigned int*)
(AVFAudio + 0x0008ee64) ___ZN13AVAudioIOUnit22IOUnitPropertyListenerEPvP28OpaqueAudioComponentInstancejjj_block_invoke_2
(libdispatch.dylib + 0x000011fc) _dispatch_call_block_and_release
(libdispatch.dylib + 0x000011bc) _dispatch_client_callout
(libdispatch.dylib + 0x0000f440) _dispatch_queue_serial_drain
(libdispatch.dylib + 0x000049a4) _dispatch_queue_invoke
(libdispatch.dylib + 0x00011388) _dispatch_root_queue_drain
(libdispatch.dylib + 0x000110e8) _dispatch_worker_thread3
(libsystem_pthread.dylib + 0x000012c4) _pthread_wqthread
(libsystem_pthread.dylib + 0x00000db0) start_wqthread

When this crash occurs, the main thread is generally responding to an audio route change. Here is one example stack trace:

(libsystem_kernel.dylib + 0x0000116c) mach_msg_trap
(libsystem_kernel.dylib + 0x00000fd8) mach_msg
(libdispatch.dylib + 0x0001cac0) _dispatch_mach_msg_send
(libdispatch.dylib + 0x0001c214) _dispatch_mach_send_drain
(libdispatch.dylib + 0x0001d414) _dispatch_mach_send_push_and_trydrain
(libdispatch.dylib + 0x000174a8) _dispatch_mach_send_msg
(libdispatch.dylib + 0x000175cc) dispatch_mach_send_with_result
(libxpc.dylib + 0x00002c80) _xpc_connection_enqueue
(libxpc.dylib + 0x00003cf0) xpc_connection_send_message_with_reply
(MediaRemote + 0x00011edc) MRMediaRemoteServiceGetPickedRouteHasVolumeControl
(MediaPlayer + 0x0009d57c) -[MPAVRoutingController _pickableRoutesDidChangeNotification:]
(CoreFoundation + 0x000c9228) __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__
(CoreFoundation + 0x000c892c) _CFXRegistrationPost
(CoreFoundation + 0x000c86a8) ___CFXNotificationPost_block_invoke
(CoreFoundation + 0x00137b98) -[_CFXNotificationRegistrar find:object:observer:enumerator:]
(CoreFoundation + 0x0000abf0) _CFXNotificationPost
(Foundation + 0x000066b8) -[NSNotificationCenter postNotificationName:object:userInfo:]
(MediaServices + 0x00003100) -[MSVDistributedNotificationObserver _handleDistributedNotificationWithNotifyToken:]
(MediaServices + 0x00002f10) __78-[MSVDistributedNotificationObserver initWithDistributedName:localName:queue:]_block_invoke
(libsystem_notify.dylib + 0x00009ea4) ___notify_dispatch_local_notification_block_invoke
(libdispatch.dylib + 0x000011fc) _dispatch_call_block_and_release
(libdispatch.dylib + 0x000011bc) _dispatch_client_callout
(libdispatch.dylib + 0x00005d68) _dispatch_main_queue_callback_4CF
(CoreFoundation + 0x000dcf28) __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
(CoreFoundation + 0x000dab14) __CFRunLoopRun
(CoreFoundation + 0x00009044) CFRunLoopRunSpecific
(GraphicsServices + 0x0000c194) GSEventRunModal
(UIKit + 0x0007b624) -[UIApplication _run]
(UIKit + 0x0007635c) UIApplicationMain

This looks like the same issue as https://forums.developer.apple.com/message/36184#36184

One potential complication is that we are calling AVAudioEngine methods off of the main thread. My belief is that this is safe - but I can't find any official reference to confirm that it is. We find that AVAudioEngine method calls can block, which is why we moved the work off the main thread. We are listening for audio engine configuration change notifications and handling them similarly to the AVAudioEngine sample code.

I have attempted to reproduce this issue locally by performing various actions in combination (receiving phone calls, suspending the app, plugging in or unplugging headphones, etc.) with no luck. Any thoughts on what conditions might be triggering this exception? Hopefully, I can at least narrow down a set of conditions to allow me to reproduce the crash in a controlled environment.

Thanks,
Rob
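Editor's note: for reference, a minimal sketch (Swift) of configuration-change handling in the spirit of the AVAudioEngine sample code mentioned above. Connecting with a nil format lets the engine resolve the hardware format at connection time instead of reusing a cached one; this is an assumption-laden sketch, not a confirmed fix for the hwFormat assertion.

    import AVFoundation

    final class PlaybackEngine {
        private let engine = AVAudioEngine()
        private let player = AVAudioPlayerNode()
        private var configObserver: Any?

        init() {
            engine.attach(player)
            // A nil format makes the engine pick the connection format itself, so it
            // tracks the current hardware format rather than a stale one.
            engine.connect(player, to: engine.mainMixerNode, format: nil)

            configObserver = NotificationCenter.default.addObserver(
                forName: .AVAudioEngineConfigurationChange,
                object: engine,
                queue: nil
            ) { [weak self] _ in
                self?.handleConfigurationChange()
            }
        }

        private func handleConfigurationChange() {
            // Rebuild connections and restart after a route/format change; a stale
            // hardware format here is one plausible path to the hwFormat exception.
            engine.stop()
            engine.connect(player, to: engine.mainMixerNode, format: nil)
            do {
                try engine.start()
            } catch {
                print("Engine restart failed: \(error)")
            }
        }
    }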
Posted
by
Post not yet marked as solved
4 Replies
1.9k Views
Hello, I am having problems getting my AUv3 instrument with an inputBus to work. As a standalone app (with the SimplePlayEngine of the sample code integrated) it seems to work fine, and the plugin also passes the auval test without errors. But when I try to use the plugin in a host application (like GarageBand / Logic / the host of the sample code) I can't get any output; the internalRenderBlock is not even being called. I narrowed it down to the inputBusses property, so it seems that I am doing something wrong with setting up the input bus.

To reproduce, take the InstrumentDemo of the Apple sample code, and in the init method initialize an inputBusBuffer and create an inputBusArray with the bus of the inputBusBuffer. Set the inputBusArray as the return value for the inputBusses property, call allocateRenderResources on the inputBusBuffer in allocateRenderResourcesAndReturnError (and deallocateRenderResources in the deallocateRenderResources call). All of this is done analogously to the inputBus setup in the FilterDemo example. I also explicitly set the channelCapabilities to stereo in, stereo out.

Omitting the further processing in the internalRenderBlock, shouldn't this work to the point that internalRenderBlock is getting called? It is getting called in the app, and auval validation succeeds, but it is not being called in any host. Am I missing something here? Any help will be much appreciated!
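Editor's note: a condensed sketch (Swift) of the input-bus wiring described above, modeled on the FilterDemo pattern; the class and property names are illustrative and this is not a verified fix for hosts skipping internalRenderBlock.

    import AVFoundation
    import AudioToolbox

    class InstrumentAudioUnit: AUAudioUnit {
        private var inputBus: AUAudioUnitBus!
        private var inputBusArray: AUAudioUnitBusArray!
        private var outputBusArray: AUAudioUnitBusArray!

        override init(componentDescription: AudioComponentDescription,
                      options: AudioComponentInstantiationOptions) throws {
            try super.init(componentDescription: componentDescription, options: options)

            // Stereo float format for both the input and output busses.
            let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
            inputBus = try AUAudioUnitBus(format: format)
            inputBusArray = AUAudioUnitBusArray(audioUnit: self, busType: .input,
                                                busses: [inputBus])
            outputBusArray = AUAudioUnitBusArray(audioUnit: self, busType: .output,
                                                 busses: [try AUAudioUnitBus(format: format)])
        }

        // The host queries these; returning the arrays built above mirrors FilterDemo.
        override var inputBusses: AUAudioUnitBusArray { inputBusArray }
        override var outputBusses: AUAudioUnitBusArray { outputBusArray }
    }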
Posted
by
Post not yet marked as solved
3 Replies
2.1k Views
I'm using a VoiceProcessingIO audio unit in my VoIP application on Mac. The problem is that, at least since Mojave, AudioComponentInstanceNew blocks for at least 2 seconds. Profiling shows that internally it's waiting on some mutex and then on some message queue. My code to initialize the audio unit is as follows:

    OSStatus status;
    AudioComponentDescription desc;
    AudioComponent inputComponent;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    inputComponent = AudioComponentFindNext(NULL, &desc);
    status = AudioComponentInstanceNew(inputComponent, &unit);

Here's a profiler screenshot showing the two system calls in question. So, is this a bug or intended behavior?
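Editor's note: not an answer to why the call blocks, but one way to keep the calling thread responsive while it does. AudioToolbox also offers an asynchronous AudioComponentInstantiate; the sketch below (Swift) assumes the long wait itself is unavoidable and simply moves it off the caller.

    import AudioToolbox

    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_VoiceProcessingIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    if let component = AudioComponentFindNext(nil, &desc) {
        // The completion handler runs once the (possibly slow) instantiation finishes,
        // so the calling thread is not blocked for those ~2 seconds.
        AudioComponentInstantiate(component, []) { unit, status in
            guard status == noErr, let unit = unit else { return }
            // Configure properties, then initialize the unit here.
            _ = AudioUnitInitialize(unit)
        }
    }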
Posted
by
Post not yet marked as solved
2 Replies
4.9k Views
Hi! I am trying to develop an application that uses Core MIDI to establish a connection to my MacBook via Audio MIDI Setup. I have created a client within my application, and it shows up under the directory in Audio MIDI Setup on my MacBook. Now I am stuck trying to figure out how to send MIDI from my app to my computer. I have tried MIDISend and using CreateOutputPort. Both are successful in the sense that I don't get zeros when printing to the console, but nothing changes in the DAW when I set the controller and number values to the exact numbers I created in my app.

I have a feeling that I am missing a network connection within my app somehow so that it recognizes my computer as a source, but I have not yet found an effective method to do this. Any information as to how I can get MIDI to send from my app to my DAW on my computer would be greatly appreciated! I am trying to make this for my final project in one of my coding classes. Thanks! -GH
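Editor's note: a minimal sketch (Swift) of one common pattern: publish a virtual source from the app and push packets through it with MIDIReceived. The DAW then has to be told to listen to that source (or to the network/USB session carrying it). The endpoint names and CC numbers below are placeholders.

    import CoreMIDI

    var client = MIDIClientRef()
    _ = MIDIClientCreate("MyMIDIApp" as CFString, nil, nil, &client)

    // A virtual source appears as a MIDI input to other software; on the Mac it can be
    // carried over a network session or USB configured in Audio MIDI Setup.
    var virtualSource = MIDIEndpointRef()
    _ = MIDISourceCreate(client, "MyMIDIApp Out" as CFString, &virtualSource)

    // Send a control change: controller 1 (mod wheel), value 64, on channel 1.
    var packetList = MIDIPacketList()
    let firstPacket = MIDIPacketListInit(&packetList)
    var ccBytes: [UInt8] = [0xB0, 1, 64]
    _ = MIDIPacketListAdd(&packetList, MemoryLayout<MIDIPacketList>.size,
                          firstPacket, 0, ccBytes.count, &ccBytes)
    _ = MIDIReceived(virtualSource, &packetList)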
Posted
by
Post not yet marked as solved
5 Replies
4.7k Views
That is, will my render callback ever be called after AudioOutputUnitStop() returns? In other words, will it be safe to free resources used by the render callback, or do I need to add realtime-safe communication between the stopping thread and the callback thread? This question is intended for both macOS HAL output and iOS Remote IO output units.
Posted
by
Post not yet marked as solved
1 Replies
2.3k Views
Does anyone have a working example of how to play OGG files with Swift? I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift, and I then used it to parse an OGG file successfully. Then I was required to use Objective-C++ to fill the PCM buffer, because this method seems to only be available in C++, and that part hangs my app for a good 40 seconds to several minutes depending on the audio file; it then plays for about 2 seconds and then crashes. I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS - I want to play the files on Mac).

I also tried using the Cricket Audio framework below: https://github.com/sjmerel/ck It has a Swift example and it can play their proprietary soundbank format, but it is also supposed to play OGG, and it just doesn't do anything when trying to play OGG, as you can see in the posted issue: https://github.com/sjmerel/ck/issues/3

Right now I believe every player that can play OGG on Mac is written in Objective-C or C++. Anyway, any help/advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.
Posted
by
Post not yet marked as solved
36 Replies
14k Views
I have a USB audio interface that is causing kernel traps and the audio output to "skip" or drop out every few seconds. This behavior occurs with a completely fresh install of Catalina as well as Big Sur with the stock Music app on a 2019 MacBook Pro 16 (full specs below). The Console logs show coreaudiod got an error from a kernel trap, a "USB Sound assertion" in AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644, and the Music app "skipping cycle due to overload." I've added a short snippet from Console logs around the time of the audio skip/drop out. The more complete logs are at this gist: https://gist.github.com/djflux/08d9007e2146884e6df1741770de5105 I've also opened a Feedback Assistant ticket (FB9037528): https://feedbackassistant.apple.com/feedback/9037528 Does anyone know what could be causing this issue? Thanks for any help. Cheers, Flux aka Andy.

Hardware Overview:
  Model Name: MacBook Pro
  Model Identifier: MacBookPro16,1
  Processor Name: 8-Core Intel Core i9
  Processor Speed: 2.4 GHz
  Number of Processors: 1
  Total Number of Cores: 8
  L2 Cache (per Core): 256 KB
  L3 Cache: 16 MB
  Hyper-Threading Technology: Enabled
  Memory: 64 GB
  System Firmware Version: 1554.80.3.0.0 (iBridge: 18.16.14347.0.0,0)

System Software Overview:
  System Version: macOS 11.2.3 (20D91)
  Kernel Version: Darwin 20.3.0
  Boot Volume: Macintosh HD
  Boot Mode: Normal
  Computer Name: mycomputername
  User Name: myusername
  Secure Virtual Memory: Enabled
  System Integrity Protection: Enabled

USB interface: Denon DJ DS1

Snippet of Console logs:
  error 21:07:04.848721-0500 coreaudiod HALS_IOA1Engine::EndWriting: got an error from the kernel trap, Error: 0xE00002D7
  default 21:07:04.848855-0500 Music HALC_ProxyIOContext::IOWorkLoop: skipping cycle due to overload
  default 21:07:04.857903-0500 kernel USB Sound assertion (Resetting engine due to error returned in Read Handler) in /AppleInternal/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleUSBAudio/AppleUSBAudio-401.4/KEXT/AppleUSBAudioDevice.cpp at line 6644
  ...
  default 21:07:05.102746-0500 coreaudiod Audio IO Overload inputs: 'private' outputs: 'private' cause: 'Unknown' prewarming: no recovering: no
  default 21:07:05.102926-0500 coreaudiod CAReportingClient.mm:508 message {
    HostApplicationDisplayID = "com.apple.Music";
    cause = Unknown;
    deadline = 2615019;
    "input_device_source_list" = Unknown;
    "input_device_transport_list" = USB;
    "input_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
    "io_buffer_size" = 512;
    "io_cycle" = 1;
    "is_prewarming" = 0;
    "is_recovering" = 0;
    "issue_type" = overload;
    lateness = "-535";
    "output_device_source_list" = Unknown;
    "output_device_transport_list" = USB;
    "output_device_uid_list" = "AppleUSBAudioEngine:Denon DJ:DS1:000:1,2";
  }: (null)
Posted
by
Post not yet marked as solved
30 Replies
26k Views
I'm very excited about the new AirTag product and am wondering if there will be any new APIs introduced in iOS 14.5+ to allow developers to build apps around them outside the context of the Find My network? The contexts in which I am most excited about using AirTags are:
- Gaming
- Health / fitness-focused apps
- Accessibility features
- Musical and other creative interactions within apps
I haven't been able to find any mention of APIs. Thanks in advance for any information that is shared here. Alexander
Posted
by
Post not yet marked as solved
1 Replies
2k Views
Hi, I'm wondering if anyone has found a solution to the automatic volume reduction on the host computer when using the native macOS screen sharing application. The volume reduction makes it nearly impossible to comfortably continue working on the host computer when there is any audio involved. Is there a way to bypass this function? It seems to be the same native function that FaceTime uses to reduce the system audio volume to give priority to the application. Please help save my speakers! Thanks.
Posted
by
Post not yet marked as solved
3 Replies
1.5k Views
I receive a buffer from [AVSpeechSynthesizer convertToBuffer:fromBuffer:] and want to schedule it on an AVPlayerNode. The player node's output format needs to be something that the next node can handle, and as far as I understand most nodes can handle a canonical format. The format provided by AVSpeechSynthesizer is not something that AVAudioMixerNode supports. So the following:

    AVAudioEngine *engine = [[AVAudioEngine alloc] init];
    playerNode = [[AVAudioPlayerNode alloc] init];
    AVAudioFormat *format = [[AVAudioFormat alloc] initWithSettings:utterance.voice.audioFileSettings];
    [engine attachNode:self.playerNode];
    [engine connect:self.playerNode to:engine.mainMixerNode format:format];

throws an exception:

    Thread 1: "[[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 \"(null)\""

I am looking for a way to obtain the canonical format for the platform so that I can use AVAudioConverter to convert the buffer. Since different platforms have different canonical formats, I imagine there should be some library way of doing this. Otherwise each developer will have to redefine it for each platform the code will run on (macOS, iOS, etc.) and keep it updated when it changes. I could not find any constant or function which can make such a format, ASBD, or settings. The smartest way I could think of, which does not work:

    AudioStreamBasicDescription toDesc;
    FillOutASBDForLPCM(toDesc, [AVAudioSession sharedInstance].sampleRate,
                       2, 16, 16, kAudioFormatFlagIsFloat, kAudioFormatFlagsNativeEndian);
    AVAudioFormat *toFormat = [[AVAudioFormat alloc] initWithStreamDescription:&toDesc];

Even the provided example for iPhone, in the documentation linked above, uses kAudioFormatFlagsAudioUnitCanonical and AudioUnitSampleType, which are deprecated. So what is the correct way to do this?
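Editor's note: one possibility, sketched here in Swift (the original code is Objective-C), assuming the goal is simply "whatever the mixer wants": ask the mixer for its own output format, or build the equivalent deinterleaved 32-bit float format with AVAudioFormat's standard initializer, and convert the synthesizer buffers with AVAudioConverter. The speechFormat value below is a stand-in for the format of the buffer actually returned by AVSpeechSynthesizer.

    import AVFoundation

    let engine = AVAudioEngine()
    let playerNode = AVAudioPlayerNode()
    engine.attach(playerNode)

    // The mixer's output format is the platform's deinterleaved 32-bit float
    // ("standard") format; AVAudioFormat(standardFormatWithSampleRate:channels:)
    // produces the same kind of format explicitly.
    let mixerFormat = engine.mainMixerNode.outputFormat(forBus: 0)
    engine.connect(playerNode, to: engine.mainMixerNode, format: mixerFormat)

    // Convert the synthesizer's buffers to the mixer format before scheduling them.
    let speechFormat = AVAudioFormat(standardFormatWithSampleRate: 22_050, channels: 1)! // stand-in
    let converter = AVAudioConverter(from: speechFormat, to: mixerFormat)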
Posted
by
Post not yet marked as solved
1 Replies
1.2k Views
In the AudioBufferList extension, there is a comment above the allocate function:

    /// The memory should be freed with `free()`.
    public static func allocate(maximumBuffers: Int) -> UnsafeMutableAudioBufferListPointer

But when I try to call free on the returned pointer,

    free(buffer)

Xcode complains: Cannot convert value of type 'UnsafeMutableAudioBufferListPointer' to expected argument type 'UnsafeMutableRawPointer?'

How should the pointer be freed? I tried

    free(&buffer)

Xcode didn't complain, but when I ran the code, I got an error in the console:

    malloc: *** error for object 0x16fdfee70: pointer being freed was not allocated

I know the call to allocate was successful. Thanks, Mark
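Editor's note: one reading of that comment, sketched below in Swift: UnsafeMutableAudioBufferListPointer is only a wrapper struct, so free() wants the underlying AudioBufferList pointer it wraps, not the wrapper value or its address. This is an interpretation, not documented guidance.

    import AudioToolbox

    let buffer = AudioBufferList.allocate(maximumBuffers: 2)
    // ... fill and use the buffer list ...

    // Free the underlying allocation, not the wrapper struct itself.
    free(buffer.unsafeMutablePointer)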
Posted
by
Post not yet marked as solved
1 Replies
1.1k Views
Hi, This topic is about Workgroups. I create child processes and I'd like to communicate an os_workgroup_t to my child processes so they can join the workgroup as well. As far as I understand, the os_workgroup_t value is local to the process. I've found that one can use os_workgroup_copy_port() and os_workgroup_create_with_port(), but I'm not familiar at all with ports, and I wonder what the minimal effort to achieve that would be. Thank you very much! Alex
Posted
by
Post not yet marked as solved
9 Replies
3.6k Views
I am getting an error in iOS 16 that doesn't appear in previous iOS versions. I am using RemoteIO to play back live audio at 4000 Hz. The error is the following:

    Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

This is how the audio format and the callback are set:

    // Set the audio format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 4000;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    AURenderCallbackStruct callbackStruct;
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

Note that the mSampleRate I set is 4000 Hz. In iOS 15 I get a buffer duration (IOBufferDuration) of 0.02322 seconds and 93 frames in each callback. This is expected, because:

    number of frames / buffer duration = sampling rate
    93 / 0.02322 ≈ 4000 Hz

However, in iOS 16 I am getting the aforementioned error in the callback:

    Input data proc returned inconsistent 2 packets for 186 bytes; at 2 bytes per packet, that is actually 93 packets

Since the number of frames is equal to the number of packets, I am getting 1 or 2 frames in the callback while the buffer duration is still 0.02322 seconds. This didn't affect the playback of the "raw" signal, but it did affect the playback of the "processed" signal:

    number of frames / buffer duration = sampling rate
    2 / 0.02322 ≈ 86 Hz

That doesn't make any sense. This error appears for different sampling rates (8000, 16000, 32000), but not for 44100. However, I would like to keep 4000 as my sampling rate. I have also tried to set the sampling rate by using the setPreferredSampleRate(_:) function of AVAudioSession, but the attempt didn't succeed: the sampling rate was still 44100 after calling that function. Any help on this issue would be appreciated.
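Editor's note: not a fix for the iOS 16 behavior, but for completeness, a sketch (Swift) of requesting the session configuration up front. The hardware will usually refuse 4000 Hz and stay at 44.1/48 kHz, in which case the Remote IO unit converts between the hardware rate and the 4000 Hz client format set above.

    import AVFoundation

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.setPreferredSampleRate(4000)          // a request, not a guarantee
        try session.setPreferredIOBufferDuration(0.02322)
        try session.setActive(true)
        print("hardware sample rate:", session.sampleRate) // typically still 44100 or 48000
    } catch {
        print("session configuration failed:", error)
    }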
Posted
by
Post marked as solved
34 Replies
14k Views
Hi all, Apple dropping ongoing development for FireWire devices that were supported with the Core Audio driver standard is a catastrophe for a lot of struggling musicians who need to both keep up to date on the security updates that come with new OS releases and continue to utilise their hard-earned investments in very expensive and still pristine audio devices that have been reduced to e-waste by Apple's seemingly tone-deaf ignorance of the cries for ongoing support.

I have one of said audio devices, and I'd like to keep using it while keeping my 2019 Intel MacBook Pro up to date with the latest security updates and OS features. Probably not the first time you gurus have had someone make the logical leap leading to a request for something like this, but I was wondering if it might somehow be possible to shoehorn the code used in previous versions of macOS, which allowed the Mac to speak with the audio features of such devices, into the Ventura version of the OS. Would it be possible? Would it involve a lot of work? I don't think I'd be the only person willing to pay for a third-party application or utility that restored this functionality. There have to be hundreds of thousands of people who would be happy to spare some cash to stop their multi-thousand-dollar investment in gear being so thoughtlessly resigned to the scrap heap. Any comments or layman-friendly explanations as to why this couldn't happen would be gratefully received! Thanks, em
Posted
by
Post not yet marked as solved
6 Replies
1.9k Views
Hi, so I have a little bit of work left on the Asus Xonar family of audio devices. Thanks to Apple's sample PCI audio driver code and their excellent documentation, Evegeny Gavrilov's kxAudio driver for Mac, and Takashi Iwai's exceptional documentation of the ALSA API, I have something that is ready for testing. The stats look good, but unfortunately this is my second HDAV1.3 Deluxe; the other one is also in the same room, consuming all of my devices with powered audio outputs. No matter, I am in the process of acquiring another Xonar sound card in this family.

Which brings me to my question: what is the benefit of getting an Apple developer account for 99 dollars a year? Will I be able to distribute a beta kext with my signature that will allow people to test the binary? I don't think others could run a self-signed kext built on one machine on another, correct? So would a developer license allow others to test a binary built on my machine, assuming they're x86? My hope is that the developer program would allow me to test the binaries and solicit input from enthusiast Mac Pro owners worldwide. I then hope to create a new program that will give us the wealth of mixers/controls this fantastic line is capable of providing.
Posted
by
Post not yet marked as solved
2 Replies
836 Views
Hi, I'm having trouble saving user presets in the plugin for Audio Units. This works well for saving the user presets in the host, but I get an error when trying to save them in the plugin. I'm not using a parameter tree, but instead using the fullState getter and setter for saving and retrieving a dictionary with the state. With some simplified parameters it looks something like this:

    var gain: Double = 0.0
    var frequency: Double = 440.0
    private var currentState: [String: Any] = [:]

    override var fullState: [String: Any]? {
        get {
            // Save the current state
            currentState["gain"] = gain
            currentState["frequency"] = frequency
            // Return the preset state
            return ["myPresetKey": currentState]
        }
        set {
            // Extract the preset state
            currentState = newValue?["myPresetKey"] as? [String: Any] ?? [:]
            // Set the Audio Unit's properties
            gain = currentState["gain"] as? Double ?? 0.0
            frequency = currentState["frequency"] as? Double ?? 440.0
        }
    }

This works perfectly well for storing user presets when saved in the host. When trying to save them in the plugin to be able to reuse them across hosts, I get the following error in the interface: "Missing key in preset state map". Note that I am testing mostly in AUM. I could not find any documentation for what the missing key is about and how I can get around it. Any ideas?
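Editor's note: one guess at the missing key, sketched below in Swift: merge with whatever super.fullState already contains (component identification and preset metadata) instead of replacing the whole dictionary. This is a hedged workaround, not a documented requirement.

    override var fullState: [String: Any]? {
        get {
            // Start from the base class state so the host/plugin preset machinery keeps
            // the keys it expects, then add this unit's own entries under one key.
            var state = super.fullState ?? [:]
            state["myPresetKey"] = ["gain": gain, "frequency": frequency]
            return state
        }
        set {
            // Hand the full dictionary back to the base class, then pull out our entries.
            super.fullState = newValue
            if let preset = newValue?["myPresetKey"] as? [String: Any] {
                gain = preset["gain"] as? Double ?? 0.0
                frequency = preset["frequency"] as? Double ?? 440.0
            }
        }
    }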
Posted
by
Post not yet marked as solved
0 Replies
751 Views
I found an app that makes a PC ***** and takes audio from other Android devices. However, when I connect with my Mac it doesn't work. I use AirPlay for now, but there are latency and quality problems. I have a late-2013 MacBook Pro, which uses AirPlay 1 technology; it is slower than the newer version. My Wi-Fi router is in my room, but as I said, I want to connect over Bluetooth. Why does this problem appear, and how can I work on it?
Posted
by
Post not yet marked as solved
0 Replies
542 Views
I am currently working on a project that involves real-time audio processing in my iOS/macOS application. I have been exploring the Audio Unit Hosting API and specifically the AUHAL units for handling audio input and output. My goal is to establish a direct connection between an input AUHAL unit and an output AUHAL unit to achieve seamless real-time audio processing. I've been researching and experimenting with the API, but I haven't been able to find a clear solution or documentation regarding this specific scenario. Has anyone attempted such a configuration or encountered similar requirements? I would greatly appreciate any insights, suggestions, or pointers to relevant documentation that could help me achieve this direct connection between the input and output AUHAL units. Thank you in advance for your time and assistance. Best regards, Yosemite
Posted
by