AVAudioNode


Use the AVAudioNode abstract class for audio generation, processing, or as an I/O block.

AVAudioNode Documentation

Posts under AVAudioNode tag

21 Posts
Post not yet marked as solved
1 Reply
61 Views
I see in Crashlytics that a few users are getting this exception when connecting the inputNode to the mainMixerNode in AVAudioEngine:

Fatal Exception: com.apple.coreaudio.avfaudio
required condition is false: format.sampleRate == hwFormat.sampleRate

Here is my code:

self.engine = AVAudioEngine()
let format = engine.inputNode.inputFormat(forBus: 0)
// main mixer node is connected to output node by default
engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)

I just want to understand how this error can occur and what the right fix is.
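The sample rate in the exception is the hardware rate at the moment of the connect, so a mismatch typically means the format was captured before the session or route had settled. The following is a minimal defensive sketch, not a confirmed diagnosis for this crash; connectInput(on:) is a hypothetical helper name.

import AVFoundation

func connectInput(on engine: AVAudioEngine) {
    // Configure and activate the session before touching inputNode, then read
    // the hardware format immediately before connecting.
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.playAndRecord, mode: .default)
    try? session.setActive(true)

    let hwFormat = engine.inputNode.inputFormat(forBus: 0)

    // A 0 Hz / 0-channel format means no usable input right now (e.g. mid route change).
    guard hwFormat.sampleRate > 0, hwFormat.channelCount > 0 else { return }

    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: hwFormat)
}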
Posted Last updated
.
Post marked as solved
4 Replies
1.1k Views
I have a working AUv3 AUAudioUnit app extension but I had to work around a strange issue: I found that the internalRenderBlock value is fetched and invoked before allocateRenderResources() is called. I have not found any documentation stating that this would be the case, and intuitively it does not make any sense. Is there something I am doing in my code that would be causing this to be the case? Should I *force* a call to allocateRenderResources() if it has not been called before internalRenderBlock is fetched? Thanks! Brad
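One defensive pattern, sketched under the assumption that a host may pull internalRenderBlock before allocating resources: have the block capture only a reference to state that allocateRenderResources() fills in, and report an error until that state exists. MyAudioUnit and RenderState are hypothetical names, and returning kAudioUnitErr_Uninitialized here is an illustrative choice rather than documented required behaviour.

import AVFoundation
import AudioToolbox

final class RenderState {
    // Filled in by allocateRenderResources(); read from the render block.
    var isPrepared = false
}

class MyAudioUnit: AUAudioUnit {          // hypothetical subclass
    private let renderState = RenderState()

    override func allocateRenderResources() throws {
        try super.allocateRenderResources()
        // ... allocate DSP buffers, cache formats ...
        renderState.isPrepared = true
    }

    override var internalRenderBlock: AUInternalRenderBlock {
        let state = renderState           // capture the reference, not self
        return { _, _, frameCount, _, outputData, _, _ in
            guard state.isPrepared else {
                // The host asked for render before resources were allocated.
                return kAudioUnitErr_Uninitialized
            }
            // ... real rendering into outputData ...
            return noErr
        }
    }
}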
Posted
by bradhowes.
Last updated
.
Post not yet marked as solved
1 Reply
326 Views
This just seems like a useful thing to have when rendering audio. For example, let's say you have an effect that pitches audio up/down. That typically requires that you know the sample rate of the incoming audio. The way I do this right now is just to save the sample rate after the AUAudioUnit's render resources have been allocated, but being provided this info on a per-render-callback basis seems more useful.

Another use case is for AUAudioUnits on the input chain. Since the format for connections must match the hardware format, you can no longer explicitly set the format that you expect the audio to come in at. You can check the sample rate on the AVAudioEngine's input node or on the AVAudioSession singleton, but when you are working with the audio from within the render callback, you don't want to be accessing those methods due to the possibility that they are blocking calls. This is especially true when using AVAudioSinkNode, where you don't have the ability to set the sample rate before the underlying node's render resources are allocated.

Am I missing something here, or does this actually seem useful?
Posted Last updated
.
Post not yet marked as solved
0 Replies
93 Views
I have the following code to connect inputNode to mainMixerNode of AVAudioEngine:

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)
    // main mixer node is connected to output node by default
    engine.connect(self.engine.inputNode, to: self.engine.mainMixerNode, format: format)
    do {
        engine.prepare()
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    engineRunning = true
}

But I am seeing a crash in the Crashlytics dashboard (which I can't reproduce):

Fatal Exception: com.apple.coreaudio.avfaudio
required condition is false: IsFormatSampleRateAndChannelCountValid(format)

Before calling setupAudioEngine I make sure the AVAudioSession category is not playback (where the mic is not available). The function is called where the audio route change notification is handled, and I check this condition specifically. Can someone tell me what I am doing wrong?

Fatal Exception: com.apple.coreaudio.avfaudio
0  CoreFoundation      0x99288    __exceptionPreprocess
1  libobjc.A.dylib     0x16744    objc_exception_throw
2  CoreFoundation      0x17048c   -[NSException initWithCoder:]
3  AVFAudio            0x9f64     AVAE_RaiseException(NSString*, ...)
4  AVFAudio            0x55738    AVAudioEngineGraph::_Connect(AVAudioNodeImplBase*, AVAudioNodeImplBase*, unsigned int, unsigned int, AVAudioFormat*)
5  AVFAudio            0x5cce0    AVAudioEngineGraph::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
6  AVFAudio            0xdf1a8    AVAudioEngineImpl::Connect(AVAudioNode*, AVAudioNode*, unsigned long, unsigned long, AVAudioFormat*)
7  AVFAudio            0xe0fc8    -[AVAudioEngine connect:to:format:]
8  MyApp               0xa6af8    setupAudioEngine + 701 (MicrophoneOutput.swift:701)
9  MyApp               0xa46f0    handleRouteChange + 378 (MicrophoneOutput.swift:378)
10 MyApp               0xa4f50    @objc MicrophoneOutput.handleRouteChange(note:)
11 CoreFoundation      0x2a834    __CFNOTIFICATIONCENTER_IS_CALLING_OUT_TO_AN_OBSERVER__
12 CoreFoundation      0xc6fd4    ___CFXRegistrationPost_block_invoke
13 CoreFoundation      0x9a1d0    _CFXRegistrationPost
14 CoreFoundation      0x408ac    _CFXNotificationPost
15 Foundation          0x1b754    -[NSNotificationCenter postNotificationName:object:userInfo:]
16 AudioSession        0x56f0     (anonymous namespace)::HandleRouteChange(AVAudioSession*, NSDictionary*)
17 AudioSession        0x5cbc     invocation function for block in avfaudio::AVAudioSessionPropertyListener(void*, unsigned int, unsigned int, void const*)
18 libdispatch.dylib   0x1e6c     _dispatch_call_block_and_release
19 libdispatch.dylib   0x3a30     _dispatch_client_callout
20 libdispatch.dylib   0x11f48    _dispatch_main_queue_drain
21 libdispatch.dylib   0x11b98    _dispatch_main_queue_callback_4CF
22 CoreFoundation      0x51800    __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
23 CoreFoundation      0xb704     __CFRunLoopRun
24 CoreFoundation      0x1ebc8    CFRunLoopRunSpecific
25 GraphicsServices    0x1374     GSEventRunModal
26 UIKitCore           0x514648   -[UIApplication _run]
27 UIKitCore           0x295d90   UIApplicationMain
28 libswiftUIKit.dylib 0x30ecc    UIApplicationMain(_:_:_:_:)
29 MyApp               0xc358     main (WhiteBalanceUI.swift)
30 ???                 0x104b1dce4 (Missing)
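One way to harden this, sketched on the assumption that the route-change handler can run while the input is momentarily unavailable (for example right after the mic disappears): validate the hardware format and bail out instead of letting AVAudioEngine raise. This is a revised version of the setupAudioEngine() above, reusing the same engine and engineRunning properties.

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)

    // IsFormatSampleRateAndChannelCountValid fails for a 0 Hz / 0-channel format,
    // which is what the input node reports when no input is currently active.
    guard format.sampleRate > 0, format.channelCount > 0 else {
        engineRunning = false
        return
    }

    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
    do {
        engine.prepare()
        try engine.start()
        engineRunning = true
    } catch {
        print("error couldn't start engine: \(error)")
        engineRunning = false
    }
}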
Posted Last updated
.
Post not yet marked as solved
0 Replies
148 Views
I've noticed that enabling voice processing on AVAudioInputNode changes the node's format - most noticeably the channel count.

let inputNode = avEngine.inputNode
print("Format #1: \(inputNode.outputFormat(forBus: 0))")
// Format #1: <AVAudioFormat 0x600002bb4be0: 1 ch, 44100 Hz, Float32>

try! inputNode.setVoiceProcessingEnabled(true)

print("Format #2: \(inputNode.outputFormat(forBus: 0))")
// Format #2: <AVAudioFormat 0x600002b18f50: 3 ch, 44100 Hz, Float32, deinterleaved>

Is this expected? How can I interpret these channels? My input device is an aggregate device where each channel comes from a different microphone. I then record each channel to a separate file. But when voice processing messes with the channel layout, I can no longer rely on this.
Posted
by smialek.
Last updated
.
Post not yet marked as solved
0 Replies
139 Views
Every time the AVAudioSession is re-activated (after being deactivated) and the audio engine is restarted (calling stop() and then play()), the output from the AVAudioPlayerNode seems to ignore the audio session category until the player node is explicitly connected again using:

audioEngine.connect(playerNode, to: audioEngine.outputNode, format: audioFile.processingFormat)

The documentation regarding this behavior is not clear, so I would like to clarify the following:

Should audioEngine.connect be called every time the AVAudioSession is activated?
Should audioEngine.connect be called after the audio engine is stopped (audioEngine.stop())?
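One pattern that may help, sketched under the assumption that the engine's output is reconfigured when the session is reactivated: observe AVAudioEngineConfigurationChange and rebuild the connection there, rather than relying on the old graph surviving. This reuses audioEngine, playerNode, and audioFile from the snippet above and assumes an NSObject-based owner; handleConfigurationChange is a hypothetical handler name.

NotificationCenter.default.addObserver(
    self,
    selector: #selector(handleConfigurationChange(_:)),
    name: .AVAudioEngineConfigurationChange,
    object: audioEngine
)

@objc func handleConfigurationChange(_ note: Notification) {
    // The engine stops when its configuration changes; re-establish the graph
    // and restart it before scheduling anything new.
    audioEngine.connect(playerNode,
                        to: audioEngine.outputNode,
                        format: audioFile.processingFormat)
    do {
        try audioEngine.start()
        playerNode.play()
    } catch {
        print("Failed to restart engine: \(error)")
    }
}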
Posted Last updated
.
Post not yet marked as solved
0 Replies
197 Views
I am trying to mix the audio from 2 different hardware audio devices together in real-time and record the result. Does anybody have any idea how to do this? This is on macOS.

Things I have tried and why they didn't work:

Adding 2 audio AVCaptureDevices to an AVCaptureMovieFileOutput or AVAssetWriter. This results in a file that has 2 audio tracks, which doesn't work for me for various reasons. Sure, I can mix them together with an AVAssetExportSession, but it needs to be real-time.

Programmatically creating an aggregate device and recording that as an AVCaptureDevice. This "sort of" works, but it always results in a recording with strange channel issues. For example, if I combine a 1-channel mic and a 2-channel device, I get a recording with 3-channel audio (L R C). If I make an aggregate out of 2 stereo devices, I get a recording with quadraphonic sound (L R Ls Rs), which won't even play back on some players. If I always force it to stereo, all stereo tracks get turned to mono for some reason.

Programmatically creating an aggregate device and trying to use it in an AVAudioEngine. I've had multiple problems with this, but the main one is that when the aggregate device is an input node, it only reports the format of its main device, and no sub-devices. And I can't force it to be 3 or 4 channels without errors.

Using an AVCaptureSession to output the sample buffers of both devices, then converting and putting those samples into their own AVPlayerNodes, and mixing those AVPlayerNodes in an AVAudioEngine mixer. This actually works, but the resulting audio lags so far behind real-time that it is unusable. If I record a webcam video along with the audio, the lip-sync is off by about half a second.

I really need help with this. If anybody has a way to do this, let me know. Some caveats that have also been tripping me up:

The hardware devices that need to be recorded might not be the default input device for the system. The MBP built-in mic might be the default device, but I need to record 2 other devices and disclose the built-in mic.

The devices usually don't have the same audio format. I might be mixing an lpcm mono int16 interleaved with an lpcm stereo float32 non-interleaved.

It absolutely has to be real-time and 1 single audio track.

It shouldn't be this hard, right?
Posted
by nitro805.
Last updated
.
Post not yet marked as solved
0 Replies
171 Views
Hi, I'm trying to send audio data via UDP. I am using Network.framework, so to use the send method on NWConnection the data must be of type Data or conform to DataProtocol. To satisfy those conditions, I have implemented a method to convert from AVAudioPCMBuffer to Data:

func makeDataFromPCMBuffer(buffer: AVAudioPCMBuffer, time: AVAudioTime) -> Data {
    let audioBuffer = buffer.audioBufferList.pointee.mBuffers
    let data: Data!
    data = .init(bytes: audioBuffer.mData!, count: Int(audioBuffer.mDataByteSize))
    return data
}

The implementation above is referenced from this post. The problem is that the converted data is too big to fit in a UDP datagram, and the error below occurs when I try to send it. I have found that the initial size of the buffer is already too big to fit in maximumDatagramSize. Below is the code regarding the buffer:

let tapNode: AVAudioNode = mixerNode
let format = tapNode.outputFormat(forBus: 0)
tapNode.installTap(onBus: 0, bufferSize: 4096, format: format, block: { (buffer, time) in
    // size of buffer: AVAudioPCMBuffer is 19200 already.
    let bufferData = self.makeDataFromPCMBuffer(buffer: buffer, time: time)
    sharedConnection?.sendRecordedBuffer(buffer: bufferData)
})

I need to reduce the size of the AVAudioPCMBuffer to fit in a UDP datagram, but I can't find the right way to do it. What would be the best way to make the data fit in a datagram? I thought of dividing the data in half, but this is UDP, so I'm not sure how to handle the data when part of it has been lost. So I'm trying to make the AVAudioPCMBuffer fit in a datagram. Any help would be very appreciated!
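One common approach, sketched here rather than taken from the thread: keep the tap as it is, but split the encoded bytes into datagram-sized chunks, each prefixed with a small header so the receiver can reassemble or drop incomplete frames. The 1,200-byte payload size and the header layout are assumptions, not values from Network.framework.

import Foundation

/// Hypothetical packetizer: splits one PCM buffer's bytes into UDP-sized chunks.
func packetize(_ audioData: Data, frameIndex: UInt32, maxPayload: Int = 1200) -> [Data] {
    var packets: [Data] = []
    var offset = 0
    var chunkIndex: UInt16 = 0
    let totalChunks = UInt16((audioData.count + maxPayload - 1) / maxPayload)

    while offset < audioData.count {
        let end = min(offset + maxPayload, audioData.count)
        var header = Data()
        // 8-byte header: frame index, chunk index, chunk count (big-endian).
        withUnsafeBytes(of: frameIndex.bigEndian) { header.append(contentsOf: $0) }
        withUnsafeBytes(of: chunkIndex.bigEndian) { header.append(contentsOf: $0) }
        withUnsafeBytes(of: totalChunks.bigEndian) { header.append(contentsOf: $0) }
        packets.append(header + audioData.subdata(in: offset..<end))
        offset = end
        chunkIndex += 1
    }
    return packets
}

The receiver can reassemble a frame only when all of its chunks arrive and drop the frame otherwise, which is usually acceptable for real-time audio. Alternatively, requesting a smaller bufferSize in installTap (it is treated as a request, not a guarantee) or compressing with AVAudioConverter before sending reduces the payload in the first place.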
Posted
by EricKwon.
Last updated
.
Post not yet marked as solved
0 Replies
191 Views
Hi, I'm creating a process to read an existing audio file, add an effect using AVAudioEngine, and then save it as another audio file. However, with the following method using an AVAudioPlayerNode, the save process must wait until the end of playback.

import UIKit
import AVFoundation

class ViewController: UIViewController {

    let engine = AVAudioEngine()
    let playerNode = AVAudioPlayerNode()
    let reverbNode = AVAudioUnitReverb()

    override func viewDidLoad() {
        super.viewDidLoad()
        do {
            let url = URL(fileURLWithPath: Bundle.main.path(forResource: "original", ofType: "mp3")!)
            let file = try AVAudioFile(forReading: url)

            // playerNode
            engine.attach(playerNode)

            // reverbNode
            reverbNode.loadFactoryPreset(.largeChamber)
            reverbNode.wetDryMix = 5.0
            engine.attach(reverbNode)

            engine.connect(playerNode, to: reverbNode, format: file.processingFormat)
            engine.connect(reverbNode, to: engine.mainMixerNode, format: file.processingFormat)

            playerNode.scheduleFile(file, at: nil, completionCallbackType: .dataPlayedBack) { [self] _ in
                reverbNode.removeTap(onBus: 0)
            }

            // start
            try engine.start()
            playerNode.play()

            let url2 = URL(fileURLWithPath: fileInDocumentsDirectory(filename: "changed.wav"))
            let outputFile = try! AVAudioFile(forWriting: url2, settings: playerNode.outputFormat(forBus: 0).settings)
            reverbNode.installTap(onBus: 0, bufferSize: AVAudioFrameCount(reverbNode.outputFormat(forBus: 0).sampleRate), format: reverbNode.outputFormat(forBus: 0)) { (buffer, when) in
                do {
                    try outputFile.write(from: buffer)
                } catch let error {
                    print(error)
                }
            }
        } catch {
            print(error.localizedDescription)
        }
    }

    func getDocumentsURL() -> NSURL {
        let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0] as NSURL
        return documentsURL
    }

    func fileInDocumentsDirectory(filename: String) -> String {
        let fileURL = getDocumentsURL().appendingPathComponent(filename)
        return fileURL!.path
    }
}

Is there a way to complete the writing without waiting for the playback to finish? Ideally, the write would complete in the time dictated by CPU and storage performance. It seems that reverbNode.installTap(...) { (buffer, when) in ... } in the code is processed in parallel with the current playback position, so I would like to dramatically improve the processing speed. Best regards.
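AVAudioEngine's offline manual rendering mode is designed for this kind of faster-than-real-time processing: the engine pulls audio as fast as the CPU allows instead of on the device clock. A condensed sketch follows; the URLs, the 4096-frame maximum, and the .caf/LPCM output settings are placeholders, and statuses other than .success are skipped for brevity.

import AVFoundation

func exportWithReverb(from sourceURL: URL, to outputURL: URL) throws {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let reverb = AVAudioUnitReverb()

    let sourceFile = try AVAudioFile(forReading: sourceURL)
    let format = sourceFile.processingFormat

    engine.attach(player)
    engine.attach(reverb)
    reverb.loadFactoryPreset(.largeChamber)
    reverb.wetDryMix = 5.0
    engine.connect(player, to: reverb, format: format)
    engine.connect(reverb, to: engine.mainMixerNode, format: format)

    // Switch to offline rendering *before* starting the engine.
    try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)
    try engine.start()
    player.scheduleFile(sourceFile, at: nil)
    player.play()

    let outputFile = try AVAudioFile(forWriting: outputURL, settings: format.settings)
    let buffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                  frameCapacity: engine.manualRenderingMaximumFrameCount)!

    while engine.manualRenderingSampleTime < sourceFile.length {
        let framesLeft = sourceFile.length - engine.manualRenderingSampleTime
        let framesToRender = min(AVAudioFrameCount(framesLeft), buffer.frameCapacity)
        let status = try engine.renderOffline(framesToRender, to: buffer)
        if status == .success {
            try outputFile.write(from: buffer)
        }
    }
    player.stop()
    engine.stop()
}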
Posted Last updated
.
Post not yet marked as solved
0 Replies
239 Views
I am trying to use AVAudioEngine to listen to mic samples and play them simultaneously via an external speaker or headphones (assuming they are attached to the iOS device). I tried the following using AVAudioPlayerNode and it works, but there is too much delay in the audio playback. Is there a way to hear the sound in real time, without delay? I wonder why the scheduleBuffer API has so much delay.

var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var audioEngineRunning = false

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.inputFormat(forBus: 0)
    playerNode = AVAudioPlayerNode()
    engine.attach(playerNode)
    self.mixer = engine.mainMixerNode
    engine.connect(self.playerNode, to: self.mixer, format: playerNode.outputFormat(forBus: 0))
    engine.inputNode.installTap(onBus: 0, bufferSize: 4096, format: format, block: { buffer, time in
        self.playerNode.scheduleBuffer(buffer, completionHandler: nil)
    })
    do {
        engine.prepare()
        try self.engine.start()
        audioEngineRunning = true
        self.playerNode.play()
    } catch {
        print("error couldn't start engine")
        audioEngineRunning = false
    }
}
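A lower-latency alternative, offered as a sketch rather than a documented fix for this exact case: skip the tap-plus-player round trip (a 4096-frame tap buffer alone is roughly 90 ms at 44.1 kHz) and connect the input node straight to the mixer, while asking the session for a smaller I/O buffer. The 5 ms buffer duration and the function name are illustrative.

import AVFoundation

func makeMonitoringEngine() throws -> AVAudioEngine {
    // A short I/O buffer reduces round-trip latency (a request, not a guarantee).
    try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(0.005)

    let engine = AVAudioEngine()
    let format = engine.inputNode.inputFormat(forBus: 0)

    // Route mic -> main mixer -> output directly; no tap, no scheduleBuffer.
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)

    engine.prepare()
    try engine.start()
    return engine
}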
Posted Last updated
.
Post not yet marked as solved
0 Replies
232 Views
I am using AVAudioSession with the playAndRecord category as follows:

private func setupAudioSessionForRecording() {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setPreferredSampleRate(Double(48000))
    } catch {
        NSLog("Unable to deactivate Audio session")
    }
    let options: AVAudioSession.CategoryOptions = [.allowAirPlay, .mixWithOthers]
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: options)
    } catch {
        NSLog("Could not set audio session category \(error)")
    }
    do {
        try audioSession.setActive(true)
    } catch {
        NSLog("Unable to activate AudioSession")
    }
}

Next I use AVAudioEngine to repeat what I say into the microphone through external speakers (on a TV connected to the iPhone with an HDMI cable).

//MARK:- AudioEngine
var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var audioEngineRunning = false

public func setupAudioEngine() {
    self.engine = AVAudioEngine()
    engine.connect(self.engine.inputNode, to: self.engine.outputNode, format: nil)
    do {
        engine.prepare()
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    audioEngineRunning = true
}

public func stopAudioEngine() {
    engine.stop()
    audioEngineRunning = false
}

The issue is that I hear a kind of reverb/humming noise after I speak for a few seconds, and it keeps getting amplified and repeated. If I use a RemoteIO unit instead, no such noise comes out of the speakers. I am not sure if my setup of AVAudioEngine is correct. I have tried all kinds of AVAudioSession configurations but nothing changes. A sample recording of the background speaker noise is posted in the Stack Overflow thread here: https://stackoverflow.com/questions/72170548/echo-when-using-avaudioengine-over-hdmi#comment127514327_72170548
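If the noise is acoustic feedback (the mic picking up the TV's speakers), one thing worth trying, offered as an assumption rather than a confirmed fix: enable the built-in voice processing (echo cancellation) on the input node before starting the engine. Note that this changes the node formats, as other posts under this tag describe. The Monitor container type is hypothetical.

import AVFoundation

final class Monitor {
    let engine = AVAudioEngine()
    var audioEngineRunning = false

    func start() {
        do {
            // Enables echo cancellation (AEC) on the I/O unit; must be set
            // before the engine starts.
            try engine.inputNode.setVoiceProcessingEnabled(true)
        } catch {
            print("could not enable voice processing: \(error)")
        }
        engine.connect(engine.inputNode, to: engine.outputNode, format: nil)
        do {
            engine.prepare()
            try engine.start()
            audioEngineRunning = true
        } catch {
            print("error couldn't start engine")
        }
    }
}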
Posted Last updated
.
Post not yet marked as solved
1 Reply
278 Views
I have a RemoteIO unit that successfully plays back the microphone samples in real time via attached headphones. I need to port the same functionality to AVAudioEngine, but I can't seem to get a head start. Here is my code; all I do is connect inputNode to playerNode, which crashes.

var engine: AVAudioEngine!
var playerNode: AVAudioPlayerNode!
var mixer: AVAudioMixerNode!
var engineRunning = false

private func setupAudioSession() {
    var options: AVAudioSession.CategoryOptions = [.allowBluetooth, .allowBluetoothA2DP]
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: AVAudioSession.Mode.default, options: options)
        try AVAudioSession.sharedInstance().setAllowHapticsAndSystemSoundsDuringRecording(true)
    } catch {
        MPLog("Could not set audio session category")
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setActive(false)
        try audioSession.setPreferredSampleRate(Double(44100))
    } catch {
        print("Unable to deactivate Audio session")
    }
    do {
        try audioSession.setActive(true)
    } catch {
        print("Unable to activate AudioSession")
    }
}

private func setupAudioEngine() {
    self.engine = AVAudioEngine()
    self.playerNode = AVAudioPlayerNode()
    self.engine.attach(self.playerNode)
    engine.connect(self.engine.inputNode, to: self.playerNode, format: nil)
    do {
        try self.engine.start()
    } catch {
        print("error couldn't start engine")
    }
    engineRunning = true
}

But starting AVAudioEngine causes a crash:

libc++abi: terminating with uncaught exception of type NSException
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: inDestImpl->NumberInputs() > 0 || graphNodeDest->CanResizeNumberOfInputs()'
terminating with uncaught exception of type NSException

How do I get real-time record and playback of mic samples via headphones working?
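The exception complains that the destination node has no input busses, and AVAudioPlayerNode is a source-only node, so nothing can be connected into it. Two sketches of graphs that do accept mic input, under the assumption that simple monitoring is the goal; the function names are hypothetical.

import AVFoundation

// Option 1: direct monitoring - no player node, lowest latency.
func setupDirectMonitoring(engine: AVAudioEngine) {
    let format = engine.inputNode.inputFormat(forBus: 0)
    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: format)
}

// Option 2: keep the player node, but point it at the mixer and feed it
// buffers captured from a tap on the input node.
func setupTapMonitoring(engine: AVAudioEngine, playerNode: AVAudioPlayerNode) {
    let format = engine.inputNode.outputFormat(forBus: 0)
    engine.attach(playerNode)
    engine.connect(playerNode, to: engine.mainMixerNode, format: format)
    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        playerNode.scheduleBuffer(buffer, completionHandler: nil)
    }
}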
Posted Last updated
.
Post not yet marked as solved
1 Reply
379 Views
I am working on a music application where multiple wav files are scheduled within a time frame. Everything is working perfectly except one scenario: there is a small beep when scheduling the player node again. For example, one.wav is playing on playerNode1, and when I reschedule to second.wav after 2 seconds there is a small beep. I have tried to stop the node after checking the isPlaying condition, but it still isn't working. Am I doing anything wrong here?

if playerNode.isPlaying {
    playerNode.stop()
}
playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
playerNode.play()

I am using the same player node for performance reasons, as there are 24 wav files that need to be played in 1 minute, so there is no point in keeping all the player nodes. How can I stop the beep while rescheduling a new audio file on the same player node? I have shared a link below demonstrating the issue:

https://drive.google.com/file/d/1FjZtLUj_wUp0LQPyjIwfJNy67HWUlt0I/view?usp=sharing

The expected result is song continuity.
Posted Last updated
.
Post not yet marked as solved
2 Replies
583 Views
I recently released my first ShazamKit app, but there is one thing that still bothers me. When I started, I followed the steps as documented by Apple here: https://developer.apple.com/documentation/shazamkit/shsession/matching_audio_using_the_built-in_microphone However, when I ran this on an iPad I received a lot of high-pitched feedback noise with this configuration. I got it to work by commenting out the output node and format and only using the input. But now I want to be able to recognise the song that's playing from the device that has my app open, and I was wondering whether I need the output nodes for that, or whether I can do something else to prevent the mic feedback from happening.

In short:
What can I do to prevent feedback from happening?
Can I use the output of a device to recognise songs, or do I just need to make sure that the microphone can run at the same time as playing music?

Other than that I really love the ShazamKit API and can highly recommend having a go with it!

This is the code as documented in the above link (I just added comments on what broke it for me):

func configureAudioEngine() {
    // Get the native audio format of the engine's input bus.
    let inputFormat = audioEngine.inputNode.inputFormat(forBus: 0)

    // THIS CREATES FEEDBACK ON IPAD PRO
    let outputFormat = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 1)

    // Create a mixer node to convert the input.
    audioEngine.attach(mixerNode)

    // Attach the mixer to the microphone input and the output of the audio engine.
    audioEngine.connect(audioEngine.inputNode, to: mixerNode, format: inputFormat)
    // THIS CREATES FEEDBACK ON IPAD PRO
    audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: outputFormat)

    // Install a tap on the mixer node to capture the microphone audio.
    mixerNode.installTap(onBus: 0, bufferSize: 8192, format: outputFormat) { buffer, audioTime in
        // Add captured audio to the buffer used for making a match.
        self.addAudio(buffer: buffer, audioTime: audioTime)
    }
}
Posted Last updated
.
Post marked as solved
1 Reply
549 Views
I'm developing a game that will use speech recognition to execute various commands. I am using code from Apple's Recognizing Speech in Live Audio documentation page. When I run this in a Swift Playground, it works just fine. However, when I make a SpriteKit game application (basic setup from Xcode's "New Project" menu option), I get the following error:

required condition is false: IsFormatSampleRateAndChannelCountValid(hwFormat)

Upon further research, it appears that my input node has no channels. The following is the relevant portion of my code, along with debug output:

let inputNode = audioEngine.inputNode
print("Number of inputs: \(inputNode.numberOfInputs)") // 1
print("Input Format: \(inputNode.inputFormat(forBus: 0))") // <AVAudioFormat 0x600001bcf200: 0 ch, 0 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved>

let channelCount = inputNode.inputFormat(forBus: 0).channelCount
print("Channel Count: \(channelCount)") // 0 <== Agrees with the inputFormat output listed previously

// Configure the microphone input.
print("Number of outputs: \(inputNode.numberOfOutputs)") // 1
let recordingFormat = inputNode.outputFormat(forBus: 0)
print("Output Format: \(recordingFormat)") // <AVAudioFormat 0x600001bf3160: 2 ch, 44100 Hz, Float32, non-inter>

inputNode.installTap(onBus: 0, bufferSize: 256, format: recordingFormat, block: audioTap) // <== This is where the error occurs.
// NOTE: 'audioTap' is a function defined in this class. Using this defined function instead of an inline, anonymous function.

The code snippet is included in the game's AppDelegate class (which includes import statements for Cocoa, AVFoundation, and Speech), and executes during its applicationDidFinishLaunching function. I'm having trouble understanding why the Playground works but the game app doesn't. Do I need to do something specific to get the application to recognize the microphone?

NOTE: This is for macOS, NOT iOS. While the "How To" documentation cited earlier indicates iOS, Apple stated at WWDC19 that it is now supported on macOS.

NOTE: I have included the NSSpeechRecognitionUsageDescription key in the application's plist, and successfully acknowledged the authorization request for the microphone.
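A common reason for a 0-channel, 0 Hz input format in a sandboxed macOS app, offered as an assumption rather than a confirmed diagnosis for this post: the app cannot actually access the microphone, whether because the App Sandbox "Audio Input" capability (com.apple.security.device.audio-input entitlement) is missing, NSMicrophoneUsageDescription is absent, or capture authorization was never granted. A quick check before touching the input node:

import AVFoundation

func ensureMicrophoneAccess(then start: @escaping () -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .audio) {
    case .authorized:
        start()
    case .notDetermined:
        AVCaptureDevice.requestAccess(for: .audio) { granted in
            if granted { DispatchQueue.main.async(execute: start) }
        }
    default:
        // Denied or restricted: the input node will report an empty format.
        print("Microphone access not granted")
    }
}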
Posted Last updated
.
Post not yet marked as solved
4 Replies
1.1k Views
Hi all, I'm using AVAudioEngine to play multiple nodes at various times (like GarageBand, for example). So far I have managed to play the various files at the right time using this code:

DispatchQueue.global(qos: .background).async {
    AudioManager.instance.audioEngine.attach(AudioManager.instance.mixer)
    AudioManager.instance.audioEngine.connect(AudioManager.instance.mixer, to: AudioManager.instance.audioEngine.outputNode, format: nil)

    // !important - start the engine *before* setting up the player nodes
    try! AudioManager.instance.audioEngine.start()

    for audioFile in data {
        // Create and attach the audioPlayer node for this file
        let audioPlayer = AVAudioPlayerNode()
        AudioManager.instance.audioEngine.attach(audioPlayer)
        AudioManager.instance.nodes.append(audioPlayer)
        // Notice the output is the mixer in this case
        AudioManager.instance.audioEngine.connect(audioPlayer, to: AudioManager.instance.mixer, format: nil)
        let fileUrl = audioFile.audio.fileUrl
        if let file: AVAudioFile = try? AVAudioFile.init(forReading: fileUrl) {
            let time = audioFile.start > 0 ? AudioManager.instance.secondsToAVAudioTime(hostTime: mach_absolute_time(), time: Double(audioFile.start / CGFloat.secondsToPoints)) : nil
            audioPlayer.scheduleFile(file, at: time, completionHandler: nil)
            audioPlayer.play(at: time)
        }
    }
}

Basically, my data object contains structs that have a reference to an audio fileURL and the start position at which it should begin. That works great. Now I would like to export all these tracks, mixed into a single file, and save it to the user's Documents directory. How can I achieve this? Thanks for your help.
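One way to capture the mix, sketched with assumed names matching the snippet above (AudioManager.instance.mixer, etc.): install a tap on the mixer that feeds the output and write every buffer to an AVAudioFile in Documents while the players run. For a faster-than-real-time export, the engine's offline manual rendering mode is the other option.

import AVFoundation

/// Hypothetical helper: writes everything a mixer produces to a file in Documents.
func startCapturingMix(from mixer: AVAudioMixerNode) throws -> AVAudioFile {
    let format = mixer.outputFormat(forBus: 0)
    let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let exportURL = docs.appendingPathComponent("mixdown.caf")   // file name is a placeholder
    let outputFile = try AVAudioFile(forWriting: exportURL, settings: format.settings)

    mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        do { try outputFile.write(from: buffer) } catch { print(error) }
    }
    return outputFile
}

// Possible usage with the mixer from the post, once scheduling is set up:
// let file = try startCapturingMix(from: AudioManager.instance.mixer)
// ... play ...
// AudioManager.instance.mixer.removeTap(onBus: 0)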
Posted
by radada.
Last updated
.
Post not yet marked as solved
0 Replies
477 Views
I have an AVMutableAudioMix and use MTAudioProcessingTap to process the audio data. But after I pass the buffer to AVAudioEngine and render it with renderOffline, the audio has no effects... How can I do this? Any ideas?

Here is the code for the MTAudioProcessingTapProcessCallback:

var callback = MTAudioProcessingTapCallbacks(
    version: kMTAudioProcessingTapCallbacksVersion_0,
    clientInfo: UnsafeMutableRawPointer(Unmanaged.passUnretained(self.engine).toOpaque()),
    init: tapInit,
    finalize: tapFinalize,
    prepare: tapPrepare,
    unprepare: tapUnprepare) { tap, numberFrames, flags, bufferListInOut, numberFramesOut, flagsOut in

    guard MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut) == noErr else {
        preconditionFailure()
    }
    let storage = MTAudioProcessingTapGetStorage(tap)
    let engine = Unmanaged<Engine>.fromOpaque(storage).takeUnretainedValue()
    // render the audio with effect
    engine.render(bufferPtr: bufferListInOut, numberOfFrames: numberFrames)
}

And here is the Engine code:

class Engine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let pitchEffect = AVAudioUnitTimePitch()
    let reverbEffect = AVAudioUnitReverb()
    let rateEffect = AVAudioUnitVarispeed()
    let volumeEffect = AVAudioUnitEQ()
    let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 2, interleaved: false)!

    init() {
        engine.attach(player)
        engine.attach(pitchEffect)
        engine.attach(reverbEffect)
        engine.attach(rateEffect)
        engine.attach(volumeEffect)

        engine.connect(player, to: pitchEffect, format: format)
        engine.connect(pitchEffect, to: reverbEffect, format: format)
        engine.connect(reverbEffect, to: rateEffect, format: format)
        engine.connect(rateEffect, to: volumeEffect, format: format)
        engine.connect(volumeEffect, to: engine.mainMixerNode, format: format)

        try! engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: 4096)

        reverbEffect.loadFactoryPreset(AVAudioUnitReverbPreset.largeRoom2)
        reverbEffect.wetDryMix = 100
        pitchEffect.pitch = 2100

        try! engine.start()
        player.play()
    }

    func render(bufferPtr: UnsafeMutablePointer<AudioBufferList>, numberOfFrames: CMItemCount) {
        let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: 4096)!
        buffer.frameLength = AVAudioFrameCount(numberOfFrames)
        buffer.mutableAudioBufferList.pointee = bufferPtr.pointee
        self.player.scheduleBuffer(buffer) {
            try! self.engine.renderOffline(AVAudioFrameCount(numberOfFrames), to: buffer)
        }
    }
}
Posted
by luckysmg.
Last updated
.
Post not yet marked as solved
1 Reply
640 Views
I have a kind of trivial request: I need to seek during sound playback. The problem is I don't have a local sound file; I have a pointer to the sound instead (as well as other params) from (my internal) native lib. There is a method that I use in order to convert an UnsafeRawPointer to an AVAudioPCMBuffer:

...
var byteCount: Int32 = 0
var buffer: UnsafeMutableRawPointer?

defer {
    buffer?.deallocate()
    buffer = nil
}

if audioReader?.getAudioByteData(byteCount: &byteCount, data: &buffer) ?? false && buffer != nil {
    let audioFormat = AVAudioFormat(standardFormatWithSampleRate: Double(audioSampleRate), channels: AVAudioChannelCount(audioChannels))!
    if let pcmBuf = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: AVAudioFrameCount(byteCount)) {
        let monoChannel = pcmBuf.floatChannelData![0]
        pcmFloatData = [Float](repeating: 0.0, count: Int(byteCount))

        //>>> Convert UnsafeMutableRawPointer to [Int16] array
        let int16Ptr: UnsafeMutablePointer<Int16> = buffer!.bindMemory(to: Int16.self, capacity: Int(byteCount))
        let int16Buffer: UnsafeBufferPointer<Int16> = UnsafeBufferPointer(start: int16Ptr, count: Int(byteCount) / MemoryLayout<Int16>.size)
        let int16Arr: [Int16] = Array(int16Buffer)
        //<<<

        // Int16 ranges from -32768 to 32767 -- we want to convert and scale these to Float values between -1.0 and 1.0
        var scale = Float(Int16.max) + 1.0
        vDSP_vflt16(int16Arr, 1, &pcmFloatData[0], 1, vDSP_Length(int16Arr.count)) // Int16 to Float
        vDSP_vsdiv(pcmFloatData, 1, &scale, &pcmFloatData[0], 1, vDSP_Length(int16Arr.count)) // divide by scale

        memcpy(monoChannel, pcmFloatData, MemoryLayout<Float>.size * Int(int16Arr.count))
        pcmBuf.frameLength = UInt32(int16Arr.count)

        usagePlayer.setupAudioEngine(with: audioFormat)
        audioClip = pcmBuf
    }
}
...

So, at the end of the method you can see the line audioClip = pcmBuf, where the prepared pcmBuf is assigned to the local variable. Then, what I need is just to start it like this:

...
/* player is AVAudioPlayerNode */
player.scheduleBuffer(buf, at: nil, options: .loops)
player.play()
...

and that is it, now I can hear the sound. But let's say I need to seek forward by 10 seconds. To do this I would stop() the player node and set a new AVAudioPCMBuffer, but this time with an offset of 10 seconds. The problem is there is no method for an offset, neither on the player node side nor on AVAudioPCMBuffer. For example, if I were working with a file (instead of a buffer), I could use this method:

player.scheduleSegment(
    file,
    startingFrame: seekFrame,
    frameCount: frameCount,
    at: nil
)

There at least you can use the startingFrame: seekFrame and frameCount: frameCount params. But in my case I don't use a file, I use a buffer, and the problem is that there are no such params in the buffer API. It looks like I can't implement seek logic if I use AVAudioPCMBuffer. What am I doing wrong?
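One workaround, a minimal sketch not taken from the thread: since the full clip lives in memory, build a new AVAudioPCMBuffer that starts at the desired frame by copying the tail of the original buffer's channel data, then schedule that. It assumes a non-interleaved Float32 format, as in the code above; subBuffer(of:from:) is a hypothetical helper, and audioClip/player are the names from the post.

import AVFoundation

/// Hypothetical helper: returns a copy of `source` starting at `startFrame`.
func subBuffer(of source: AVAudioPCMBuffer, from startFrame: AVAudioFramePosition) -> AVAudioPCMBuffer? {
    guard startFrame >= 0, startFrame < AVAudioFramePosition(source.frameLength) else { return nil }
    let remaining = source.frameLength - AVAudioFrameCount(startFrame)
    guard let out = AVAudioPCMBuffer(pcmFormat: source.format, frameCapacity: remaining) else { return nil }
    out.frameLength = remaining

    for ch in 0..<Int(source.format.channelCount) {
        // Float32, deinterleaved: one contiguous plane per channel.
        let src = source.floatChannelData![ch] + Int(startFrame)
        memcpy(out.floatChannelData![ch], src, Int(remaining) * MemoryLayout<Float>.size)
    }
    return out
}

// Seek 10 seconds forward (sample rate taken from the buffer's format):
let seekFrame = AVAudioFramePosition(10 * audioClip.format.sampleRate)
if let tail = subBuffer(of: audioClip, from: seekFrame) {
    player.stop()
    player.scheduleBuffer(tail, at: nil, options: .loops)
    player.play()
}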
Posted Last updated
.
Post not yet marked as solved
1 Reply
563 Views
I want to create a sort of soundscape in surround sound. Imagine something along the lines of the user being able to place the sound of a waterfall to their front right and the sound of frogs croaking to their left, etc. I have an AVAudioEngine playing a number of AVAudioPlayerNodes, and I'm using AVAudioEnvironmentNode to simulate the positioning of these. The positioning seems to work correctly. However, I'd like this to work with head tracking, so that if the user moves their head the sounds from the players move accordingly. I can't figure out how to do it or find any docs on the subject. Is it possible to make AVAudioEngine output surround sound, and if so, would the tracking just work automagically, the same as it does when playing surround sound content using AVPlayerItem? If not, is the only way to achieve this effect to use CMHeadphoneMotionManager and manually move the AVAudioEnvironmentNode's listener around?
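A manual approach, sketched as one possible direction rather than a confirmed answer: use CMHeadphoneMotionManager (CoreMotion, supported headphones only) to receive head-pose updates and map the yaw/pitch/roll onto the environment node's listenerAngularOrientation. The sign convention and the helper name are assumptions.

import AVFoundation
import CoreMotion

let headTracker = CMHeadphoneMotionManager()

/// Hypothetical helper: steers the environment node's listener from headphone motion.
func startHeadTracking(for environmentNode: AVAudioEnvironmentNode) {
    guard headTracker.isDeviceMotionAvailable else { return }
    headTracker.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        // Radians -> degrees; invert so sources stay fixed in the room as the head turns.
        environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(
            yaw:   -Float(attitude.yaw)   * 180 / .pi,
            pitch: -Float(attitude.pitch) * 180 / .pi,
            roll:  -Float(attitude.roll)  * 180 / .pi
        )
    }
}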
Posted Last updated
.
Post not yet marked as solved
0 Replies
446 Views
Using AVAudioEngine with an AVAudioPlayerNode which plays stereo sound works perfectly. I even understand that turning on setVoiceProcessingEnabled on the inputNode turns the sound mono. But after I stop the session and the engine and turn voice processing off, the sound remains mono. This issue is only present with the built-in speakers. This is what the I/O formats of the nodes look like before and after the voice-processing on/off:

Before:
mainMixer input: <AVAudioFormat 0x281896580: 2 ch, 44100 Hz, Float32, non-inter>
mainMixer output: <AVAudioFormat 0x2818919f0: 2 ch, 44100 Hz, Float32, non-inter>
outputNode input: <AVAudioFormat 0x281891cc0: 2 ch, 44100 Hz, Float32, non-inter>
outputNode output: <AVAudioFormat 0x281891770: 2 ch, 44100 Hz, Float32, non-inter>

After:
mainMixer input: <AVAudioFormat 0x2818acaf0: 2 ch, 44100 Hz, Float32, non-inter>
mainMixer output: <AVAudioFormat 0x2818acaa0: 1 ch, 44100 Hz, Float32>
outputNode input: <AVAudioFormat 0x281898820: 1 ch, 44100 Hz, Float32>
outputNode output: <AVAudioFormat 0x2818958b0: 2 ch, 44100 Hz, Float32, non-inter>

Sadly, just changing the connection does not solve anything. I already tried that with the following (it solves the stereo issue on headphones, but not on the built-in speakers):

let format = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 2)!
audioEngine.connect(audioEngine.mainMixerNode, to: audioEngine.outputNode, format: format)
Posted Last updated
.