AVAudioEngine


Use a group of connected audio node objects to generate and process audio signals and perform audio input and output.

AVAudioEngine Documentation

Posts under AVAudioEngine tag

78 Posts
Post not yet marked as solved
1 Replies
313 Views
I work on a video conferencing application that makes use of AVAudioEngine and the videoChat AVAudioSession.Mode. This past Friday, an internal user reported an "audio cutting in and out" issue with their new iPhone 14 Pro, and I was able to reproduce the issue later that day on my iPhone 14 Pro Max. No other iOS devices running iOS 16 are exhibiting this issue.
I have narrowed down the root cause to the videoChat AVAudioSession.Mode after changing line 53 of the ViewController.swift file in Apple's "Using Voice Processing" sample project (https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing) from:
try session.setCategory(.playAndRecord, options: .defaultToSpeaker)
to:
try session.setCategory(.playAndRecord, mode: .videoChat, options: .defaultToSpeaker)
This only causes issues on my iPhone 14 Pro Max device, not on my iPhone 13 Pro Max, so it seems specific to the new iPhones only.
I am also seeing the following logged to the console on either device, which appears to be specific to iOS 16, but I am not sure whether it is related to the videoChat issue:
2022-09-19 08:23:20.087578-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:71  Invalid input size for property 1684431725
2022-09-19 08:23:20.087605-0700 AVEchoTouch[2388:1474002] [as] ATAudioSessionPropertyManager.mm:225  Invalid input size for property 1684431725
I am assuming 1684431725 is 'dfcm', but I am not sure which audio session property that might be.
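For readers wondering about the property code in those logs: the four-character code can be recovered from the integer by unpacking it byte by byte, and 1684431725 does come out as 'dfcm'. A small sketch (the function name is my own):

import Foundation

// Unpack a Core Audio four-character code from its UInt32 form.
// 1684431725 == 0x6466636D, which is "dfcm" -- consistent with the guess above.
func fourCharCode(_ value: UInt32) -> String {
    let bytes: [UInt8] = [UInt8((value >> 24) & 0xFF),
                          UInt8((value >> 16) & 0xFF),
                          UInt8((value >> 8) & 0xFF),
                          UInt8(value & 0xFF)]
    return String(bytes: bytes, encoding: .ascii) ?? "????"
}

print(fourCharCode(1684431725)) // "dfcm"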
Posted Last updated
.
Post not yet marked as solved
1 Replies
871 Views
I tried to change my code from
let urlString = Bundle.main.path(forResource: "X", ofType: "mp3")!
to
let urlString = Bundle.main.path(forResource: "X", ofType: "caf")!
after I had already uploaded the file. But it throws a fatal error: "Unexpectedly found nil while unwrapping an Optional value". I double-checked the filename and it is correct, and the .caf file does play.
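A small diagnostic sketch (my own, assuming the file is literally named X.caf): avoid the force unwrap and log what the bundle can actually see, since nil here usually means the file is not part of the app target or the name/extension does not match exactly:

import Foundation

if let url = Bundle.main.url(forResource: "X", withExtension: "caf") {
    print("Found:", url.path)
} else {
    // Check Target Membership for the file in Xcode and the exact (case-sensitive) filename.
    print("X.caf is not visible in the main bundle")
}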
Posted Last updated
.
Post not yet marked as solved
0 Replies
217 Views
I'm trying to send the audio track in my WebRTC app. I create the audio track like this:
private var factory: RTCPeerConnectionFactory = RTCPeerConnectionFactory(encoderFactory: RTCDefaultVideoEncoderFactory(), decoderFactory: RTCDefaultVideoDecoderFactory())
....
let audioTrack = factory.audioTrack(withTrackId: "ARDAMSa0")
if mediaStream == nil {
    self.mediaStream = self.factory.mediaStream(withStreamId: "0")
}
self.mediaStream.addAudioTrack(audioTrack)
Then I add it to the peer connection the usual way. The problem is that my audio starts with a delay of about 2 seconds: I hear it on the other peer almost 2 seconds late, while the video starts immediately. The connection is perfect. I see this error log:
[aurioc] AURemoteIO.cpp:1128 failed: -66635 (enable 3, outf< 1 ch, 48000 Hz, Int16> inf< 1 ch, 48000 Hz, Int16>)
In the SDP offer I see that the audio codec is "opus", channels=2, clockRate=48000, which seems completely fine.
Questions: Is the opus codec a problem for iOS? Are there any extra RTCAudioSession steps I have to take to be able to start the audio immediately? I tried:
RTCAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)
I also tried setting RTCAudioSession to various categories. Nothing helps; there is still a lag at the start, and I'm not even sure what it could be connected with. I'm testing on iPhone 11 and 12, iOS 15.6.1, and have tried multiple WebRTC branches (m88, m90+), all with the same result. Any ideas appreciated. Thanks
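A speculative idea, not a confirmed fix: configure and activate the audio session for voice chat before the peer connection is created, so the audio unit does not have to be brought up mid-call. How this interacts with RTCAudioSession's own configuration is also an assumption on my part:

import AVFoundation

// Speculative sketch: warm up the session before creating the peer connection.
func warmUpAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .voiceChat,
                                options: [.allowBluetooth, .defaultToSpeaker])
        try session.setActive(true)
    } catch {
        print("Audio session setup failed:", error)
    }
}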
Posted Last updated
.
Post not yet marked as solved
0 Replies
152 Views
Hi, I have multiple audio files and want to decide which channel goes to which output. For example, how do I route four 2-channel audio files to an 8-channel output? Also, if I have an AVAudioPlayerNode playing a 2-channel track through headphones, can I flip the channels on the output for playback, i.e. swap left and right? I have read the following thread, which seeks to do something similar, but it is from 2012 and I do not quite understand how it would work today. Many thanks, I am a bit stumped.
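This doesn't answer the 8-channel routing part, but for the left/right flip one brute-force option (a hedged sketch, with a helper name of my own) is to read the file into a PCM buffer, swap the channel data, and schedule the buffer instead of the file:

import AVFoundation

// Sketch: swap left and right before scheduling. Loads the whole file into memory,
// so it is only reasonable for short tracks.
func scheduleFlipped(file url: URL, on player: AVAudioPlayerNode) throws {
    let file = try AVAudioFile(forReading: url)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: AVAudioFrameCount(file.length)) else { return }
    try file.read(into: buffer)

    if let data = buffer.floatChannelData, buffer.format.channelCount == 2 {
        for i in 0..<Int(buffer.frameLength) {
            let tmp = data[0][i]      // left sample
            data[0][i] = data[1][i]   // left <- right
            data[1][i] = tmp          // right <- old left
        }
    }
    player.scheduleBuffer(buffer, completionHandler: nil)
}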
Posted
by jaolan.
Last updated
.
Post not yet marked as solved
0 Replies
221 Views
Hello, it seems like AVSpeechSynthesisVoice.speechVoices() now returns [] instead of a list of voices, which means that no speech voices are available. This makes any speech synthesis on iOS 16 impossible. Is anyone else experiencing this? Have you found a way to address it? Thanks
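A small diagnostic sketch (my own, not from the post): log what the API returns and fall back to requesting a voice for a specific language, which may still succeed even when the full list comes back empty:

import AVFoundation

let voices = AVSpeechSynthesisVoice.speechVoices()
print("speechVoices() returned \(voices.count) voices")

let utterance = AVSpeechUtterance(string: "Hello")
// Fall back to a per-language lookup if the list is empty.
utterance.voice = voices.first ?? AVSpeechSynthesisVoice(language: "en-US")

// In real code, keep a strong reference to the synthesizer until speech finishes.
let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)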
Posted
by MGG9.
Last updated
.
Post not yet marked as solved
1 Replies
227 Views
So I have successfully triggered a PTT notification, but when I try to play audio – any audio – it doesn't play. It seems to be an issue with setting up my AVAudioSession. If I do not set it up, the sound plays (outside of didActivateAudioSession, such as in viewDidLoad), so I know it's not the audio playing code. For some reason the AVAudioSession is not allowing me to play sound, even when I use the "playAndRecord" category and include the "mix" option.
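A hedged sketch of the usual shape of this setup (an assumption on my part, not confirmed for the PushToTalk framework): configure the category and mode up front, but do not activate the session yourself, and only start playback once the system hands you the activated session in didActivateAudioSession:

import AVFoundation

// Sketch: category/mode configuration only; activation is left to the framework.
func configurePTTAudioSession() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.mixWithOthers])
        // Deliberately no setActive(true) here -- the system activates the session for PTT.
    } catch {
        print("Category setup failed:", error)
    }
}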
Posted Last updated
.
Post marked as solved
1 Replies
328 Views
Summary: I've created an AUAudioUnit to host some third-party signal processing code and am running into an edge-case limitation where I can only process and supply output audio data (from the internalRenderBlock) if it's an exact multiple of a specific number of frames.
More Detail: This third-party code ONLY works with exactly 10ms of data at a time. For example, with 48kHz audio, it only accepts 480 frames on each processing call. If the AUAudioUnit's internalRenderBlock is called with 1024 frames as the frame count, I can use the pullInputBlock to get 480 frames, process them, another 480 frames, and process those, but what should I then do with the remaining 64 frames?
Possible Solutions Foiled:
a) It seems there's no way to indicate to the host that I have only consumed 960 frames and will only be supplying 960 frames of output. I thought perhaps the host would observe that the outputData ABL buffers hold less than the frame count passed into the internalRenderBlock and appropriately advance the timestamp only by that much the next time around, but it does not. So all of the audio must be processed before the block returns, which I can only do if the block is asked to handle an exact multiple of 10ms of data.
b) I can't buffer up the "remainder" input and process it on the next internalRenderBlock cycle, because all of the output must be provided on return, as discussed in (a).
c) As an alternative, I see no way to have the unit explicitly indicate to the host how many frames the unit can process at a time. maximumFramesToRender is the host telling the unit (not the reverse), and either way it's a maximum only, not a minimum as well.
What can I do?
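One workaround that is sometimes used in this situation (my own suggestion, not something stated in the thread) is to keep an internal FIFO: always consume and emit exactly the number of frames the host asks for, run the third-party code in fixed 480-frame chunks as enough input accumulates, and report the added delay through the unit's latency property. A single-channel sketch of just the bookkeeping (Swift arrays are used for brevity and are not real-time safe; a real render block would use a preallocated ring buffer):

final class FixedBlockFIFO {
    private let blockSize: Int
    private var inputFIFO: [Float] = []
    private var outputFIFO: [Float]

    /// Frames of delay this scheme introduces; an AUAudioUnit subclass would
    /// report blockSize / sampleRate seconds via its `latency` property.
    var latencyFrames: Int { blockSize }

    init(blockSize: Int = 480) {
        self.blockSize = blockSize
        // Pre-fill one block of silence so the output request can always be satisfied.
        self.outputFIFO = [Float](repeating: 0, count: blockSize)
    }

    /// `process` stands in for the third-party routine: it takes exactly
    /// `blockSize` frames and returns exactly `blockSize` frames.
    func render(input: [Float], process: ([Float]) -> [Float]) -> [Float] {
        inputFIFO.append(contentsOf: input)
        while inputFIFO.count >= blockSize {
            let chunk = Array(inputFIFO.prefix(blockSize))
            inputFIFO.removeFirst(blockSize)
            outputFIFO.append(contentsOf: process(chunk))
        }
        // Because of the pre-filled block, outputFIFO always holds at least input.count frames.
        let output = Array(outputFIFO.prefix(input.count))
        outputFIFO.removeFirst(output.count)
        return output
    }
}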
Posted
by swillits.
Last updated
.
Post not yet marked as solved
2 Replies
369 Views
We have developed an iOS app using WebRTC and ARKit for video communication, with an option to toggle between headphones and speaker. We are seeing an app crash when a Bluetooth headset is connected and the user toggles between speaker and headphones repeatedly. While toggling, the console also logs: ARSessionDelegate is retaining 11 ARFrames. This can lead to future camera frames being dropped.
Here is the crash:
Fatal Exception: com.apple.coreaudio.avfaudio required condition is false: nil == owningEngine || GetEngine() == owningEngine
Here are the code snippets:
func headphoneOn() -> Bool {
    audioSession.lockForConfiguration()
    var bluetoothPort: AVAudioSessionPortDescription? = nil
    var headphonePort: AVAudioSessionPortDescription? = nil
    var inbuiltMicPort: AVAudioSessionPortDescription? = nil
    if let availableInputs = AVAudioSession.sharedInstance().availableInputs {
        for inputDevice in availableInputs {
            if inputDevice.portType == .builtInMic {
                inbuiltMicPort = inputDevice
            } else if inputDevice.portType == .headsetMic || inputDevice.portType == .headphones {
                headphonePort = inputDevice
            } else if inputDevice.portType == .bluetoothHFP || inputDevice.portType == .bluetoothA2DP || inputDevice.portType == .bluetoothLE {
                bluetoothPort = inputDevice
            }
        }
    }
    do {
        try audioSession.overrideOutputAudioPort(AVAudioSession.PortOverride.none)
        if let bluetoothPort = bluetoothPort {
            try audioSession.setPreferredInput(bluetoothPort)
            try audioSession.setMode(AVAudioSession.Mode.voiceChat.rawValue)
            audioSession.unlockForConfiguration()
            return true
        } else if let headphonePort = headphonePort {
            try audioSession.setPreferredInput(headphonePort)
            try audioSession.setMode(AVAudioSession.Mode.voiceChat.rawValue)
            audioSession.unlockForConfiguration()
            return true
        } else if let inbuiltMicPort = inbuiltMicPort {
            try audioSession.setMode(AVAudioSession.Mode.default.rawValue)
            try audioSession.setPreferredInput(inbuiltMicPort)
            audioSession.unlockForConfiguration()
            return true
        } else {
            audioSession.unlockForConfiguration()
            return false
        }
    } catch let error as NSError {
        debugPrint("#configureAudioSessionToSpeaker Error \(error.localizedDescription)")
        audioSession.unlockForConfiguration()
        return false
    }
}
func speakerOn() -> Bool {
    audioSession.lockForConfiguration()
    var inbuiltSpeakerPort: AVAudioSessionPortDescription? = nil
    if let availableInputs = AVAudioSession.sharedInstance().availableInputs {
        for inputDevice in availableInputs {
            if inputDevice.portType == .builtInSpeaker {
                inbuiltSpeakerPort = inputDevice
            }
        }
    }
    do { ///Audio Session: Set on Speaker
        if let inbuiltSpeakerPort = inbuiltSpeakerPort {
            try audioSession.setPreferredInput(inbuiltSpeakerPort)
        }
        try audioSession.setMode(AVAudioSession.Mode.videoChat.rawValue)
        try audioSession.overrideOutputAudioPort(.speaker)
        audioSession.unlockForConfiguration()
        return true
    } catch let error as NSError {
        debugPrint("#configureAudioSessionToSpeaker Error \(error.localizedDescription)")
        audioSession.unlockForConfiguration()
        return false
    }
}
Any help to resolve this crash would be appreciated.
Posted Last updated
.
Post not yet marked as solved
2 Replies
320 Views
I see in Crashlytics that a few users are getting this exception when connecting the inputNode to the mainMixerNode in AVAudioEngine:
Fatal Exception: com.apple.coreaudio.avfaudio required condition is false: format.sampleRate == hwFormat.sampleRate
Here is my code:
self.engine = AVAudioEngine()
let format = engine.inputNode.inputFormat(forBus: 0)
//main mixer node is connected to output node by default
engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)
I just want to understand how this error can occur and what the right fix is.
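I believe this assertion usually fires when the format passed to connect no longer matches the current hardware input format, for example when the route changes between reading the format and connecting, or when there is no usable input at all. As a hedged sketch using the names from the post, one defensive pattern is to refuse to connect while the reported format is invalid:

let format = engine.inputNode.inputFormat(forBus: 0)
// A zero sample rate or zero channel count means no usable input right now
// (inactive session or a category without input), so don't connect yet.
if format.sampleRate > 0 && format.channelCount > 0 {
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
} else {
    // Activate the session with .playAndRecord or .record first, then retry.
}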
Posted Last updated
.
Post not yet marked as solved
0 Replies
243 Views
I am at the beginning of a voice recording app. I store incoming voice data into a buffer array and write 50 of them to a file. The code works fine (Sample One). However, I would like the recorded files to be smaller, so in Sample Two I try to add an AVAudioMixerNode to downsample the audio. But that code sample gives me two errors. The first error occurs when I call audioEngine.attach(downMixer); the debugger gives me nine of these errors: throwing -10878. The second error is a crash when I try to write to audioFile. Of course they might all be related, so I am looking to get the mixer included successfully first. I do need help, as I am just trying to piece these together from tutorials, and when it comes to audio, I know less than anything else.
Sample One
//these two lines are in the init of the class that contains this function...
node = audioEngine.inputNode
recordingFormat = node.inputFormat(forBus: 0)
func startRecording() {
    audioBuffs = []
    x = -1
    node.installTap(onBus: 0, bufferSize: 8192, format: recordingFormat, block: { [self] (buffer, _) in
        x += 1
        audioBuffs.append(buffer)
        if x >= 50 {
            audioFile = makeFile(format: recordingFormat, index: fileCount)
            mainView?.setLabelText(tag: 3, text: "fileIndex = \(fileCount)")
            fileCount += 1
            for i in 0...49 {
                do {
                    try audioFile!.write(from: audioBuffs[i]);
                } catch {
                    mainView?.setLabelText(tag: 4, text: "write error")
                    stopRecording()
                }
            }
            ...cleanup buffer code
        }
    })
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        print ("oh catch \(error)")
    }
}
Sample Two
//these two lines are in the init of the class that contains this function
node = audioEngine.inputNode
recordingFormat = node.inputFormat(forBus: 0)
func startRecording() {
    audioBuffs = []
    x = -1
    // new code
    let format16KHzMono = AVAudioFormat.init(commonFormat: AVAudioCommonFormat.pcmFormatInt16, sampleRate: 11025.0, channels: 1, interleaved: true)
    let downMixer = AVAudioMixerNode()
    audioEngine.attach(downMixer)
    // installTap on the mixer rather than the node
    downMixer.installTap(onBus: 0, bufferSize: 8192, format: format16KHzMono, block: { [self] (buffer, _) in
        x += 1
        audioBuffs.append(buffer)
        if x >= 50 {
            // use a different format in creating the audioFile
            audioFile = makeFile(format: format16KHzMono!, index: fileCount)
            mainView?.setLabelText(tag: 3, text: "fileIndex = \(fileCount)")
            fileCount += 1
            for i in 0...49 {
                do {
                    try audioFile!.write(from: audioBuffs[i]);
                } catch {
                    stopRecording()
                }
            }
            ...cleanup buffers...
        }
    })
    let format = node.inputFormat(forBus: 0)
    // new code
    audioEngine.connect(node, to: downMixer, format: format) //use default input format
    audioEngine.connect(downMixer, to: audioEngine.outputNode, format: format16KHzMono) //use new audio format
    downMixer.outputVolume = 0.0
    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch let error {
        print ("oh catch \(error)")
    }
}
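For what it's worth, -10878 is kAudioUnitErr_InvalidParameter, which is consistent with the mixer rejecting the Int16 format; mixer connections generally want the standard deinterleaved Float32 format. One alternative (a sketch under my own assumptions, not a drop-in replacement for the code above) is to keep the tap in the hardware format and downsample each buffer with AVAudioConverter before writing it out:

import AVFoundation

func installDownsamplingTap(on engine: AVAudioEngine) {
    let input = engine.inputNode
    let hwFormat = input.outputFormat(forBus: 0)
    guard let targetFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                           sampleRate: 11025,
                                           channels: 1,
                                           interleaved: true),
          let converter = AVAudioConverter(from: hwFormat, to: targetFormat) else { return }

    input.installTap(onBus: 0, bufferSize: 8192, format: hwFormat) { buffer, _ in
        // Size the output buffer for the sample-rate ratio, with a little headroom.
        let ratio = targetFormat.sampleRate / hwFormat.sampleRate
        let capacity = AVAudioFrameCount(Double(buffer.frameLength) * ratio) + 16
        guard let outBuffer = AVAudioPCMBuffer(pcmFormat: targetFormat, frameCapacity: capacity) else { return }

        var consumed = false
        let status = converter.convert(to: outBuffer, error: nil) { _, outStatus in
            if consumed { outStatus.pointee = .noDataNow; return nil }
            consumed = true
            outStatus.pointee = .haveData
            return buffer
        }
        guard status != .error else { return }
        // outBuffer now holds 11.025 kHz mono Int16 audio; write it to an AVAudioFile
        // created with targetFormat.settings (e.g. via a makeFile-style helper).
    }
}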
Posted
by SergioDCQ.
Last updated
.
Post not yet marked as solved
0 Replies
201 Views
I am using AVAudioConverter in Objective-C. AVAudioConverter has bitRate and bitRateStrategy parameters defined. The default bitRateStrategy value is AVAudioBitRateStrategy_LongTermAverage. If I set bitRateStrategy to AVAudioBitRateStrategy_Constant there is no change. The bitRate variable works correctly.
AVAudioConverter *audioConverter = [[AVAudioConverter alloc] initFromFormat:inputFormat toFormat:outputFormat];
audioConverter.bitRate = 128000;
audioConverter.bitRateStrategy = AVAudioBitRateStrategy_Constant;
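In case it helps, a quick Swift sketch of what I would check (my own guess, not a documented requirement): the set of valid bit rates depends on the chosen strategy, so setting the strategy first and then picking a value from applicableEncodeBitRates at least rules out an unsupported combination. inputFormat and outputFormat are assumed to exist, with outputFormat being a compressed format such as AAC:

import AVFoundation

// Diagnostic sketch: set the strategy before the bit rate and inspect what the
// converter says is applicable for that strategy.
if let converter = AVAudioConverter(from: inputFormat, to: outputFormat) {
    converter.bitRateStrategy = AVAudioBitRateStrategy_Constant
    print("Applicable bit rates:", converter.applicableEncodeBitRates ?? [])
    converter.bitRate = 128000
}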
Posted
by pavele.
Last updated
.
Post not yet marked as solved
1 Replies
466 Views
This just seems like a useful thing to have when rendering audio. For example, let's say you have an effect that pitches audio up or down. That typically requires knowing the sample rate of the incoming audio. The way I do this right now is to save the sample rate after the AUAudioUnit's render resources have been allocated, but being provided this info on a per-render-callback basis seems more useful.
Another use case is AUAudioUnits on the input chain. Since the format for connections must match the hardware format, you can no longer explicitly set the format that you expect the audio to come in at. You can check the sample rate on the AVAudioEngine's input node or on the AVAudioSession singleton, but when you are working with the audio from within the render callback, you don't want to be accessing those due to the possibility that they are blocking calls. This is especially true when using AVAudioSinkNode, where you don't have the ability to set the sample rate before the underlying node's render resources are allocated.
Am I missing something here, or does this actually seem useful?
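For reference, a sketch of the caching pattern described above, assuming a custom AUAudioUnit subclass and that the host fetches internalRenderBlock after allocateRenderResources() (class and property names are my own):

import AVFoundation
import AudioToolbox

class MyAudioUnit: AUAudioUnit {
    private var cachedSampleRate: Double = 44100

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        // Read the negotiated sample rate once, outside the render path.
        cachedSampleRate = outputBusses[0].format.sampleRate
    }

    override var internalRenderBlock: AUInternalRenderBlock {
        // Captured by value, so nothing potentially blocking is touched while rendering.
        let sampleRate = cachedSampleRate
        return { _, _, _, _, _, _, _ in
            _ = sampleRate // use sampleRate in the DSP here
            return noErr
        }
    }
}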
Posted Last updated
.
Post not yet marked as solved
1 Replies
2.6k Views
Hi, I am building a realtime drum practice tool which listens to the player's practice and provides visual feedback on their accuracy. I use AVAudioSourceNode and AVAudioSinkNode for playing audio and for listening to the practice. Precise timing is the most important part of our app. To optimise audio latency I set preferredIOBufferDuration to 64/48000 sec (~1.33 ms). These preferences work fine with built-in or wired audio devices, where we can easily estimate the actual audio latency. However, we would also like to support Apple AirPods (or other Bluetooth earbuds), but it seems to be impossible to predict the actual audio latency.
let bufferSize: UInt32 = 64
let sampleRate: Double = 48000
let bufferDuration = TimeInterval(Double(bufferSize) / sampleRate)
try? session.setCategory(AVAudioSession.Category.playAndRecord, options: [.defaultToSpeaker, .mixWithOthers, .allowBluetoothA2DP])
try? session.setPreferredSampleRate(Double(sampleRate))
try? session.setPreferredIOBufferDuration(bufferDuration)
try? session.setActive(true, options: .notifyOthersOnDeactivation)
I use an iPhone 12 mini and AirPods 2 for testing. (The input always has to be the phone's built-in mic.)
let input = session.inputLatency // 2.438ms
let output = session.outputLatency // 160.667ms
let buffer = session.ioBufferDuration // 1.333ms
let estimated = input + output + buffer * 2 // 165.771
session.outputLatency returns about 160 ms for my AirPods. With the basic calculation above I can estimate a latency of 165.771 ms, but when I measure the actual latency (the time difference between the heard and played sound) I get significantly different values. If I connect my AirPods and start playing immediately, the actual measured latency is about 215-220 ms at first, but it continuously decreases over time. After about 20-30 minutes of measuring, the actual latency is around 155-160 ms (just like the value the session returns). However, if I have been using my AirPods for a while before I start the measurement, the actual latency starts from about 180 ms (and decreases over time the same way). On older iOS devices these differences are even larger. It feels like the Bluetooth connection needs to "warm up" or something.
My questions would be:
Is there any way to have a relatively constant audio latency with Bluetooth devices? I thought maybe it depends on the actual bandwidth, but I couldn't find anything on this topic. Can bandwidth change over time? Can I control it?
I guess AirPods use the AAC codec. Is there any way to force them to use SBC? Does the SBC codec work with lower latency?
What is the best audio engine setting to support Bluetooth devices with the lowest latency?
Any other suggestions? Thank you
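Not an answer to the codec questions, but one small thing that may help (my own suggestion): re-read the session's reported latencies whenever the route changes, so the estimate at least follows whatever value the system currently reports for the Bluetooth output:

import AVFoundation

let session = AVAudioSession.sharedInstance()
// Recompute the round-trip estimate on every route change.
let routeChangeObserver = NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: session,
    queue: .main) { _ in
        let total = session.inputLatency + session.outputLatency + session.ioBufferDuration * 2
        print("Estimated round-trip latency: \(total * 1000) ms")
}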
Posted Last updated
.
Post marked as solved
1 Replies
243 Views
I am trying to distinguish the differences in volume between background noise and someone speaking, in Swift. Previously, I had come across a tutorial which had me looking at the power levels in each channel. It came out as the code listed in Sample One, which I call within the installTap closure. It was OK, but the variance between the background and the intended voice to record wasn't that great. Sure, it could have been the math used to calculate it, but since I have no experience with audio data, it was like reading another language. Then I came across another demo. Its code was much simpler, and the difference in values between background noise and speaking voice was much greater, and therefore much more detectable. It's listed here in Sample Two, which I also call within the installTap closure. My issue here is wanting to understand what is happening in the code. In all my experience with other languages, voice was something I never dealt with before, so this is way over my head. I'm not looking for someone to explain this to me line by line, but if someone could let me know where I can find decent documentation so I can better grasp what is going on, I would appreciate it. Thank you
Sample One
func audioMetering(buffer: AVAudioPCMBuffer) {
    // buffer.frameLength = 1024
    let inNumberFrames: UInt = UInt(buffer.frameLength)
    if buffer.format.channelCount > 0 {
        let samples = (buffer.floatChannelData![0])
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel0 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0)
        self.averagePowerForChannel1 = self.averagePowerForChannel0
    }
    if buffer.format.channelCount > 1 {
        let samples = buffer.floatChannelData![1]
        var avgValue: Float32 = 0
        vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
        var v: Float = -100
        if avgValue != 0 {
            v = 20.0 * log10f(avgValue)
        }
        self.averagePowerForChannel1 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1)
    }
}
Sample Two
private func getVolume(from buffer: AVAudioPCMBuffer, bufferSize: Int) -> Float {
    guard let channelData = buffer.floatChannelData?[0] else {
        return 0
    }
    let channelDataArray = Array(UnsafeBufferPointer(start: channelData, count: bufferSize))
    var outEnvelope = [Float]()
    var envelopeState: Float = 0
    let envConstantAtk: Float = 0.16
    let envConstantDec: Float = 0.003
    for sample in channelDataArray {
        let rectified = abs(sample)
        if envelopeState < rectified {
            envelopeState += envConstantAtk * (rectified - envelopeState)
        } else {
            envelopeState += envConstantDec * (rectified - envelopeState)
        }
        outEnvelope.append(envelopeState)
    }
    // 0.007 is the low pass filter to prevent
    // getting the noise entering from the microphone
    if let maxVolume = outEnvelope.max(), maxVolume > Float(0.015) {
        return maxVolume
    } else {
        return 0.0
    }
}
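For anyone trying to decode these: Sample One computes the mean magnitude of the samples with vDSP_meamgv, converts it to decibels with 20 * log10, and then low-pass filters the result so the meter doesn't jump around; Sample Two is an envelope follower whose attack and decay constants control how quickly the reported level rises and falls, which is why it separates speech from background more cleanly. A roughly equivalent RMS-in-decibels helper (my own sketch, not from either tutorial):

import Accelerate
import AVFoundation

// RMS level of channel 0 in decibels (0 dBFS = full scale); -100 stands in for silence.
func rmsPower(of buffer: AVAudioPCMBuffer) -> Float {
    guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else { return -100 }
    var rms: Float = 0
    vDSP_rmsqv(samples, 1, &rms, vDSP_Length(buffer.frameLength))
    return rms > 0 ? 20 * log10f(rms) : -100
}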
Posted
by SergioDCQ.
Last updated
.
Post not yet marked as solved
0 Replies
295 Views
I have the following code to connect the inputNode to the mainMixerNode of AVAudioEngine:
public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)
    //main mixer node is connected to output node by default
    engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)
    do {
        engine.prepare()
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    engineRunning = true
}
But I am seeing a crash in the Crashlytics dashboard (which I can't reproduce):
Fatal Exception: com.apple.coreaudio.avfaudio required condition is false: IsFormatSampleRateAndChannelCountValid(format)
Before calling setupAudioEngine I make sure the AVAudioSession category is not playback, where the mic is not available. The function is called where the audio route change notification is handled, and I check this condition specifically. Can someone tell me what I am doing wrong?
Fatal Exception: com.apple.coreaudio.avfaudio
0 CoreFoundation 0x99288 __exceptionPreprocess
1 libobjc.A.dylib 0x16744 objc_exception_throw
2 CoreFoundation 0x17048c -[NSException initWithCoder:]
3 AVFAudio 0x9f64 AVAE_RaiseException(NSString*, ...)
4 AVFAudio 0x55738 AVAudioEngineGraph::_Connect(AVAudioNodeImplBase*, AVAudioNodeImplBase*, unsigned int, unsigned int, AVAudioFormat*)
5 AVFAudio 0x5cce0 AVAudioEngineGraph::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
6 AVFAudio 0xdf1a8 AVAudioEngineImpl::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
7 AVFAudio 0xe0fc8 -[AVAudioEngine connect:to:format:]
8 MyApp 0xa6af8 setupAudioEngine + 701 (MicrophoneOutput.swift:701)
9 MyApp 0xa46f0 handleRouteChange + 378 (MicrophoneOutput.swift:378)
10 MyApp 0xa4f50 @objc MicrophoneOutput.handleRouteChange(note:)
11 CoreFoundation 0x2a834 __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__
12 CoreFoundation 0xc6fd4 ___CFXRegistrationPost_block_invoke
13 CoreFoundation 0x9a1d0 _CFXRegistrationPost
14 CoreFoundation 0x408ac _CFXNotificationPost
15 Foundation 0x1b754 -[NSNotificationCenter postNotificationName:object:userInfo:]
16 AudioSession 0x56f0 (anonymous namespace)::HandleRouteChange(AVAudioSession*, NSDictionary*)
17 AudioSession 0x5cbc invocation function for block in avfaudio::AVAudioSessionPropertyListener(void*, unsigned int, unsigned int, void const*)
18 libdispatch.dylib 0x1e6c _dispatch_call_block_and_release
19 libdispatch.dylib 0x3a30 _dispatch_client_callout
20 libdispatch.dylib 0x11f48 _dispatch_main_queue_drain
21 libdispatch.dylib 0x11b98 _dispatch_main_queue_callback_4CF
22 CoreFoundation 0x51800 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
23 CoreFoundation 0xb704 __CFRunLoopRun
24 CoreFoundation 0x1ebc8 CFRunLoopRunSpecific
25 GraphicsServices 0x1374 GSEventRunModal
26 UIKitCore 0x514648 -[UIApplication _run]
27 UIKitCore 0x295d90 UIApplicationMain
28 libswiftUIKit.dylib 0x30ecc UIApplicationMain(_:_:_:_:)
29 MyApp 0xc358 main (WhiteBalanceUI.swift)
30 ??? 0x104b1dce4 (Missing)
Posted Last updated
.
Post not yet marked as solved
0 Replies
412 Views
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.
let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0:  1 ch,  44100 Hz, Float32>
try! inputNode.setVoiceProcessingEnabled(true)
print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50:  3 ch,  44100 Hz, Float32, deinterleaved>
Is this expected? How can I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file. But when voice processing messes with the channel layout, I cannot rely on this anymore.
Posted
by smialek.
Last updated
.
Post marked as solved
2 Replies
470 Views
Hi there, whenever I want to use the microphone for my ShazamKit app while connected to AirPods, my app crashes with an "Invalid input sample rate." message. I've tried multiple formats but keep getting this crash. Any pointers would be really helpful.
func configureAudioEngine() {
    do {
        try audioSession.setCategory(.playAndRecord, options: [.mixWithOthers, .defaultToSpeaker, .allowAirPlay, .allowBluetoothA2DP, .allowBluetooth])
        try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
    } catch {
        print(error.localizedDescription)
    }
    guard let engine = audioEngine else { return }
    let inputNode = engine.inputNode
    let inputNodeFormat = inputNode.inputFormat(forBus: 0)
    let audioFormat = AVAudioFormat(
        standardFormatWithSampleRate: inputNodeFormat.sampleRate,
        channels: 1
    )
    // Install a "tap" in the audio engine's input so that we can send buffers from the microphone to the signature generator.
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioFormat) { buffer, audioTime in
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}
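One thing that stands out (my reading, not a confirmed answer): the session is deactivated with setActive(false) and the tap uses a one-channel format built from inputFormat(forBus:), which may not match what the Bluetooth HFP route actually delivers. A hedged variant, assumed to live in the same class as the method above, that activates the session and lets the tap use the node's own output format:

func configureAudioEngineAlternative() throws {
    // Assumption: same audioSession/audioEngine properties and addAudio helper as in the post.
    try audioSession.setCategory(.playAndRecord,
                                 options: [.mixWithOthers, .defaultToSpeaker, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

    guard let engine = audioEngine else { return }
    // Using the node's own output format (or passing nil) avoids a mismatch with
    // the hardware sample rate when AirPods switch the route to HFP.
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, audioTime in
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}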
Posted Last updated
.
Post marked as solved
1 Replies
381 Views
Hi there, I'm building an audio app for iOS and ran into a very weird bug when using multiple AVAudioUnitSampler instances to play different instruments at the same time. I narrowed down the repro to:
let sampler = AVAudioUnitSampler()
self.audioEngine.attach(sampler)
self.audioEngine.connect(sampler, to: self.audioEngine.mainMixerNode, format: nil)
try! sampler.loadInstrument(at: Bundle.main.url(forResource: "SteinwayPiano-v1", withExtension: "aupreset")!)
let sampler2 = AVAudioUnitSampler()
self.audioEngine.attach(sampler2)
self.audioEngine.connect(sampler2, to: self.audioEngine.mainMixerNode, format: nil)
try! sampler2.loadInstrument(at: Bundle.main.url(forResource: "Rhodes-v1", withExtension: "aupreset")!)
I get the following errors on the console (real device and simulator):
2022-07-02 21:27:28.329147+0200 soundboard[23592:612358] [default]     ExtAudioFile.cpp:193  about to throw -42: open audio file
2022-07-02 21:27:28.329394+0200 soundboard[23592:612358] [default]      FileSample.cpp:52  about to throw -42: FileSample::LoadFromURL: ExtAudioFileOpenURL
2022-07-02 21:27:28.330206+0200 soundboard[23592:612358]     SampleManager.cpp:434  Failed to load sample 'Sounds/Rhodes/A_050__D3_4-ST.wav -- file:///Users/deermichel/Library/Developer/CoreSimulator/Devices/79ACB2AD-5155-4798-8E96-649964CB274E/data/Containers/Bundle/Application/A10C7802-3F4B-445D-A390-B6A84D8071EE/soundboard.app/': error -42
2022-07-02 21:27:28.330414+0200 soundboard[23592:612358] [default]     SampleManager.cpp:435  about to throw -42: Failed to load sample
Sometimes the app crashes after that, but most importantly, sampler2 won't work. Now, if I only create one of the samplers, it works as expected. Also, if both samplers reference the same aupreset, it works. Only if I try to load different samples do I end up in undefined behavior. Adding audioEngine.detach(sampler) before creating sampler2 doesn't solve the issue either - it will fail to load the samples with the same error. However, when deferring the removal of sampler and the creation of sampler2 a little bit, it magically starts working (though that's not what I want, because I need both samplers simultaneously):
let sampler = AVAudioUnitSampler()
self.audioEngine.attach(sampler)
self.audioEngine.connect(sampler, to: self.audioEngine.mainMixerNode, format: nil)
try! sampler.loadInstrument(at: Bundle.main.url(forResource: "SteinwayPiano-v1", withExtension: "aupreset")!)
DispatchQueue.main.async {
    self.audioEngine.detach(sampler)
    let sampler2 = AVAudioUnitSampler()
    self.audioEngine.attach(sampler2)
    self.audioEngine.connect(sampler2, to: self.audioEngine.mainMixerNode, format: nil)
    try! sampler2.loadInstrument(at: Bundle.main.url(forResource: "Rhodes-v1", withExtension: "aupreset")!)
}
My samples are linked to the bundle as a folder alias - and I have a feeling that there is some exclusive lock... however I don't have the source code to debug the console errors further. Any help is appreciated, have a good one :)
Posted Last updated
.
Post not yet marked as solved
0 Replies
389 Views
I'm extending an AudioUnit to generate multi-channel output, and trying to write a unit test using AVAudioEngine. My test installs a tap on the AVAudioNode's output bus and ensures the output is not silence. This works for stereo. I've currently got:
auto avEngine = [[AVAudioEngine alloc] init];
[avEngine attachNode:avAudioUnit];
auto format = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100. channels:channelCount];
[avEngine connect:avAudioUnit to:avEngine.mainMixerNode format:format];
where avAudioUnit is my AU. So it seems I need to do more than simply setting the channel count for the format when connecting, because after this code, [avAudioUnit outputFormatForBus:0].channelCount is still 2. Printing the graph yields:
AVAudioEngineGraph 0x600001e0a200: initialized = 1, running = 1, number of nodes = 3
******** output chain ********
node 0x600000c09a80 {'auou' 'ahal' 'appl'}, 'I'
inputs = 1
(bus0, en1) <- (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
node 0x600000c09e00 {'aumx' 'mcmx' 'appl'}, 'I'
inputs = 1
(bus0, en1) <- (bus0) 0x600000c14300, {'augn' 'brnz' 'brnz'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
outputs = 1
(bus0, en1) -> (bus0) 0x600000c09a80, {'auou' 'ahal' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
node 0x600000c14300 {'augn' 'brnz' 'brnz'}, 'I'
outputs = 1
(bus0, en1) -> (bus0) 0x600000c09e00, {'aumx' 'mcmx' 'appl'}, [ 2 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
So AVAudioEngine just silently ignores whatever channel counts I pass to it. If I do:
auto numHardwareOutputChannels = [avEngine.outputNode outputFormatForBus:0].channelCount;
NSLog(@"hardware output channels %d\n", numHardwareOutputChannels);
I get 30, because I have an audio interface connected. So I would think AVAudioEngine would support this. I've also tried setting the format explicitly on the connection between the mainMixerNode and the outputNode, to no avail.
Posted
by Audulus.
Last updated
.