AVAudioEngine


Use a group of connected audio node objects to generate and process audio signals and perform audio input and output.

AVAudioEngine Documentation

Posts under AVAudioEngine tag

55 Posts
Post not yet marked as solved
0 Replies
87 Views
https://developer.apple.com/videos/play/wwdc2023/10235/ - In this WWDC session, at 3:19, Apple introduces the "other audio ducking" feature. In iOS 17 we can control the amount of other-audio ducking through AVAudioEngine. Is this also possible with AVAudioSession? We are using an AVAudioSession for a VoIP call while concurrently playing a video through an AVPlayer, but the AVPlayer's volume is considerably low. Does anyone have any ideas on how to achieve the level of control that AVAudioEngine offers?
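
For reference, a minimal sketch of the AVAudioEngine-side ducking control mentioned in the session, assuming iOS 17 and an input node with voice processing enabled; the ducking level chosen here is illustrative, and this does not answer whether AVAudioSession exposes an equivalent knob.

    import AVFAudio

    let engine = AVAudioEngine()
    // Ducking of other audio is a voice-processing feature, so enable that first.
    try engine.inputNode.setVoiceProcessingEnabled(true)

    // Control how much "other audio" (e.g. an AVPlayer elsewhere) is ducked.
    var ducking = AVAudioVoiceProcessingOtherAudioDuckingConfiguration()
    ducking.enableAdvancedDucking = true
    ducking.duckingLevel = .min   // duck other audio as little as possible (.min, .mid, .max, .default)
    engine.inputNode.voiceProcessingOtherAudioDuckingConfiguration = ducking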
Posted by
Post not yet marked as solved
0 Replies
122 Views
I'm using AVAudioEngine to play AVAudioPCMBuffers. I'd like to synchronize some events with the playback, for example: if the audio's frame position is >= some point and less than another point, trigger some code. So I'm looking at:

    - (void)installTapOnBus:(AVAudioNodeBus)bus
                 bufferSize:(AVAudioFrameCount)bufferSize
                     format:(AVAudioFormat * __nullable)format
                      block:(AVAudioNodeTapBlock)tapBlock;

I have the frame positions calculated in advance (all necessary computations are done before the audio is scheduled), so I just need to fire code at certain points during playback:

    [playerNode installTapOnBus:bus
                     bufferSize:bufferSize
                         format:format
                          block:^(AVAudioPCMBuffer * _Nonnull buffer, AVAudioTime * _Nonnull when) {
        // Inspect current audio here and fire...
    }];

    [playerNode scheduleBuffer:fullBuffer
                        atTime:startTime
                       options:0
        completionCallbackType:AVAudioPlayerNodeCompletionDataPlayedBack
             completionHandler:^(AVAudioPlayerNodeCompletionCallbackType callbackType) {
        // some code is here, not important to this question.
    }];

The problem I'm having is figuring out where in the full buffer I am within the tap block. The tap block passes chunks, not the full audio buffer. I tried using the block's when parameter to calculate the frame position relative to the entire audio, but I have been unsuccessful so far; I'm assuming when is relative to the buffer passed to the tap block, not to the entire buffer I scheduled. Not installing a tap and just using a timer before scheduling my full buffer has given me good results, but I'd rather avoid a timer if possible and use sample time.
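
A sketch (in Swift, though the post uses Objective-C) of one way to recover an absolute position inside the tap, assuming the tap is installed on the playerNode from the post: AVAudioPlayerNode can translate the tap's node time into player time, whose sampleTime counts frames from the start of playback, i.e. the same timeline as precomputed frame positions. The trigger values below are hypothetical.

    // Hypothetical, precomputed frame positions (relative to playback start).
    let triggerStart: AVAudioFramePosition = 44_100
    let triggerEnd: AVAudioFramePosition = 88_200

    playerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { [weak playerNode] buffer, when in
        guard let playerNode = playerNode,
              let playerTime = playerNode.playerTime(forNodeTime: when) else { return }
        // playerTime.sampleTime is expressed in frames since the player began playing.
        let framePosition = playerTime.sampleTime
        if framePosition >= triggerStart && framePosition < triggerEnd {
            // fire the event for this region
        }
    }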
Post not yet marked as solved
1 Replies
372 Views
Hello everyone, I'm relatively new to iOS development, and I'm currently working on a Flutter plugin package. I want to use the AVFAudio package to load instrument sounds from an SF2 file into different channels. Specifically, I'd like to load individual instruments from the SF2 file onto separate channels. However, I've been struggling to find a way to achieve this. Could someone guide me on how to load SF2 instrument sounds into different channels using AVFAudio? I've tried various combinations of parameters (program number, soundbank MSB, and soundbank LSB), but none seem to work. If anyone has experience with AVFAudio and SF2 files, I'd greatly appreciate your help. Perhaps there's a proven approach or a way to determine the correct values for these parameters? Should I use a soundfont editor to inspect specific values within the SF2 file? Thank you in advance for any assistance! Best regards, Melih
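
A minimal sketch of the usual AVAudioEngine route, assuming one AVAudioUnitSampler per "channel" (the sampler is effectively a single-instrument node, so separate instruments generally mean separate sampler nodes). The file name, program numbers, and notes are illustrative; a soundfont editor will show the actual program/bank values inside a given SF2.

    import AVFAudio
    import AudioToolbox

    let engine = AVAudioEngine()
    let pianoSampler = AVAudioUnitSampler()
    let bassSampler = AVAudioUnitSampler()
    engine.attach(pianoSampler)
    engine.attach(bassSampler)
    engine.connect(pianoSampler, to: engine.mainMixerNode, format: nil)
    engine.connect(bassSampler, to: engine.mainMixerNode, format: nil)

    // Hypothetical SF2 bundled with the app.
    let sf2URL = Bundle.main.url(forResource: "Instruments", withExtension: "sf2")!

    // Each sampler loads one instrument (program) from the same sound bank.
    try pianoSampler.loadSoundBankInstrument(at: sf2URL,
                                             program: 0,    // e.g. a piano preset
                                             bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                             bankLSB: UInt8(kAUSampler_DefaultBankLSB))
    try bassSampler.loadSoundBankInstrument(at: sf2URL,
                                            program: 33,    // e.g. a bass preset
                                            bankMSB: UInt8(kAUSampler_DefaultMelodicBankMSB),
                                            bankLSB: UInt8(kAUSampler_DefaultBankLSB))

    try engine.start()
    pianoSampler.startNote(60, withVelocity: 100, onChannel: 0)
    bassSampler.startNote(36, withVelocity: 100, onChannel: 0)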
Posted by
Post not yet marked as solved
0 Replies
258 Views
Hi everybody, I'm trying to use the multiple inputs of a USB device with AVAudioEngine. My aim is to connect different inputNode channels to two or more different audio nodes (e.g. mixers). I'm able to get a specific input channel from the engine's inputNode with:

    OSStatus err = AudioUnitSetProperty(avEngine.inputNode.audioUnit,
                                        kAudioOutputUnitProperty_ChannelMap,
                                        kAudioUnitScope_Output,
                                        1,
                                        outputChannelMap,
                                        propSize);

but this changes the routing for the whole input node and therefore for all the destination mixer nodes. How can I send channel 1 of the inputNode to mixerNode1 and channel 2 to another mixerNode2?
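
For context, AVAudioEngine can at least fan a source out to several destinations via AVAudioConnectionPoint, as sketched below. This only duplicates the full multichannel signal to both mixers; choosing individual channels per destination (the actual question) would still need something like a channel map on each receiving node, which this sketch does not attempt.

    import AVFAudio

    let engine = AVAudioEngine()
    let mixer1 = AVAudioMixerNode()
    let mixer2 = AVAudioMixerNode()
    engine.attach(mixer1)
    engine.attach(mixer2)

    let inputFormat = engine.inputNode.outputFormat(forBus: 0)
    // Fan the input node out to both mixers; each connection carries all input channels.
    engine.connect(engine.inputNode,
                   to: [AVAudioConnectionPoint(node: mixer1, bus: 0),
                        AVAudioConnectionPoint(node: mixer2, bus: 0)],
                   fromBus: 0,
                   format: inputFormat)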
Posted by
Post not yet marked as solved
0 Replies
388 Views
Hi, I'm new to AVAudioEngine (and macOS programming in general). I'm trying to mix microphone audio with ScreenCaptureKit audio using AVAudioEngine without playing it back. I've created an AVAudioPlayerNode and schedule buffers in my SCStream handler:

    playerNode.scheduleBuffer(samples)

and have connected the playerNode to the mainMixerNode:

    audioEngine.connect(audioEngine.inputNode, to: audioEngine.mainMixerNode, format: micFormat)
    audioEngine.connect(playerNode, to: audioEngine.mainMixerNode, format: format)

The problem is that mainMixerNode plays the audio through the speaker, creating a feedback loop. How can I prevent the mixer output from being played back? Also: is this the best way of mixing microphone input with some other input? I ran into AVAudioEngine's manual rendering mode, which seems like the way to go for mixing audio without playing it back, but I couldn't figure out how to connect microphone input to the AVAudioEngine in manual rendering mode.
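
A rough sketch of the manual-rendering route, under the assumption that mic samples are handed to the engine manually: in manual rendering mode the engine is detached from the hardware, so nothing reaches the speakers and the inputNode pulls from a block you supply instead of the microphone. pendingMicBuffer is a hypothetical staging buffer filled by whatever captures the mic (e.g. a second realtime engine or AVCaptureSession).

    import AVFAudio

    let engine = AVAudioEngine()
    let playerNode = AVAudioPlayerNode()   // fed from the SCStream handler, as in the post
    engine.attach(playerNode)

    let renderFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
    try engine.enableManualRenderingMode(.offline, format: renderFormat, maximumFrameCount: 4096)

    var pendingMicBuffer: AVAudioPCMBuffer?   // hypothetical: filled by the mic capture code
    _ = engine.inputNode.setManualRenderingInputPCMFormat(renderFormat) { _ in
        pendingMicBuffer?.audioBufferList
    }

    engine.connect(engine.inputNode, to: engine.mainMixerNode, format: renderFormat)
    engine.connect(playerNode, to: engine.mainMixerNode, format: renderFormat)
    try engine.start()

    // Pull mixed audio; nothing is played back.
    let mixed = AVAudioPCMBuffer(pcmFormat: renderFormat, frameCapacity: 4096)!
    let status = try engine.renderOffline(4096, to: mixed)
    print("rendered 4096 frames, success:", status == .success)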
Posted by
Post not yet marked as solved
0 Replies
271 Views
I have a PCM audio buffer (AVAudioPCMFormatInt16). When I try to play it using AVAudioPlayerNode / AVAudioEngine an exception is thrown:

    [[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr]: returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868

(related thread: https://forums.developer.apple.com/forums/thread/700497?answerId=780530022#780530022) If I convert the buffer to AVAudioPCMFormatFloat32, playback works. My questions are:
1. Does AVAudioEngine / AVAudioPlayerNode require AVAudioPCMBuffer to be in the Float32 format?
2. Is there a way I can configure it to accept another format instead for my application?
3. If 1 is YES, is this documented anywhere?
4. If 1 is YES, is this required format subject to change at any point?
Thanks! I was looking to watch the "AVAudioEngine in Practice" session video from WWDC 2014, but I can't find it anywhere (https://forums.developer.apple.com/forums/thread/747008).
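
Not a statement of what AVAudioEngine requires, but a sketch of the conversion that reportedly makes playback work: AVAudioConverter turning the Int16 buffer into the deinterleaved Float32 "standard" format at the same sample rate. int16Buffer is the hypothetical source buffer from the post.

    import AVFAudio

    func makeFloat32Copy(of int16Buffer: AVAudioPCMBuffer) throws -> AVAudioPCMBuffer {
        // "Standard" format = deinterleaved Float32 at the same rate and channel count.
        let floatFormat = AVAudioFormat(standardFormatWithSampleRate: int16Buffer.format.sampleRate,
                                        channels: int16Buffer.format.channelCount)!
        let converter = AVAudioConverter(from: int16Buffer.format, to: floatFormat)!
        let floatBuffer = AVAudioPCMBuffer(pcmFormat: floatFormat,
                                           frameCapacity: int16Buffer.frameLength)!
        // Same sample rate on both sides, so the simple buffer-to-buffer convert is enough.
        try converter.convert(to: floatBuffer, from: int16Buffer)
        return floatBuffer
    }

    // Usage: playerNode.scheduleBuffer(try makeFloat32Copy(of: int16Buffer), completionHandler: nil)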
Post marked as solved
1 Replies
359 Views
I am writing code to monitor the incoming audio levels in visionOS. It works properly in the simulator, but gets an error on the device. Curious if anyone has any tips. I took out some of the code so it's a bit shorter; it fails in setupAudioEngine when I try to start the engine, with this error:

    Error starting audio engine: The operation couldn’t be completed. (com.apple.coreaudio.avfaudio error 561145187.)

Thanks in advance! Here is my code:

    class AudioInputMonitor: ObservableObject {
        private var audioEngine: AVAudioEngine?
        @Published var inputLevel: Float = 0

        init() {
            requestMicrophonePermission()
        }

        private func requestMicrophonePermission() {
            AVAudioApplication.requestRecordPermission { granted in
                DispatchQueue.main.async {
                    if granted {
                        self.setupAudioSessionAndEngine()
                    } else {
                        print("Microphone permission not granted")
                        // Handle the case where permission is not granted
                    }
                }
            }
        }

        private func setupAudioSessionAndEngine() {
            do {
                let audioSession = AVAudioSession.sharedInstance()
                try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [])
                try audioSession.setActive(true)
                self.setupAudioEngine()
            } catch {
                print("Failed to set up the audio session: \(error)")
            }
        }

        private func setupAudioEngine() {
            audioEngine = AVAudioEngine()
            guard let inputNode = audioEngine?.inputNode else {
                print("Failed to get the audio input node")
                return
            }
            let recordingFormat = inputNode.outputFormat(forBus: 0)
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer, _) in
                self?.analyzeAudio(buffer: buffer)
            }
            do {
                try audioEngine?.start()
            } catch {
                print("Error starting audio engine: \(error.localizedDescription)")
            }
        }

        private func analyzeAudio(buffer: AVAudioPCMBuffer) {
            // removed to be brief
        }

        func stopMonitoring() {
            // removed to be brief
        }
    }
Posted by
Post not yet marked as solved
0 Replies
393 Views
Hi, I'm relatively new to iOS development and kindly ask for some feedback on a strategy to achieve the desired behavior in my app.

My question: What would be the best strategy for sound effect playback with precise timing when the app is in the background? Is this even possible?

Context: I created a basic countdown timer app (targeting iOS 17 with Swift/SwiftUI). Countdown sessions can last 30-60 minutes. When the timer is started it progresses through a series of sub-intervals and plays a short sound for each one. I used AVAudioPlayer and everything works fine when the app is in the foreground. I'm considering switching to AVAudioEngine because precise timing is very important, and the AIs tell me it would have better precision. I'm already setting "App plays audio or streams audio/video using AirPlay" in my Info.plist, and have configured:

    AVAudioSession.sharedInstance().setCategory(.playback, mode: .default, options: .mixWithOthers)

Curiously, when testing on my iPhone 13 mini, sounds sometimes still play when the app is in the background, but not always.

What I've considered:
- Background Tasks: Would they make any sense for this use case? Seemingly not, if the allowed time is short and limited by the system.
- Pre-scheduling all sounds: Not sure this would even work, and it seems like a lot of memory would be needed (could be hundreds of intervals); see the scheduling sketch just after this post.
- ActivityKit alerts: Works, but with a ~50 ms delay, which is too long for my purposes.
- Pre-rendering all SFX to one large audio file: Seems like a lot of work and processing time, and probably not worth it.

I hope there's a better solution. I'd really appreciate any feedback.
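
On the pre-scheduling idea specifically, a sketch of sample-accurate scheduling with AVAudioPlayerNode, assuming an engine and player that are already attached, connected, and started, and a hypothetical short clickBuffer. This only shows the timing mechanism; it says nothing about whether the engine is allowed to keep running in the background, which is the real question here.

    let sampleRate = player.outputFormat(forBus: 0).sampleRate

    // Anchor "now" on the player's sample timeline, then schedule each cue at an
    // exact frame offset from that anchor.
    if let nodeTime = player.lastRenderTime,
       let anchor = player.playerTime(forNodeTime: nodeTime) {
        for interval in 1...10 {
            let offsetSeconds = Double(interval) * 30.0   // e.g. a cue every 30 seconds
            let when = AVAudioTime(sampleTime: anchor.sampleTime + AVAudioFramePosition(offsetSeconds * sampleRate),
                                   atRate: sampleRate)
            player.scheduleBuffer(clickBuffer, at: when, options: [], completionHandler: nil)
        }
    }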
Posted by
Post not yet marked as solved
1 Replies
367 Views
My project uses an AVAudioEngine with a very simple setup: a speech recognizer running on a tap on the engine's input, with separate AVAudioPlayerNodes handling playback.

    try session.setCategory(.playAndRecord, mode: .default, options: [])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
    try session.setAllowHapticsAndSystemSoundsDuringRecording(true)

    filePlayerNode   ---> engine.mainMixerNode
    bufferPlayerNode ---> engine.mainMixerNode
    engine.mainMixerNode ---> engine.outputNode

    // bufferPlayer.scheduleBuffer() is called on its own queue

The input works fine: the buffers can be collected into a file that plays back correctly, and the recognizer works fine too. But when I try to play the live audio by sending the buffers to the bufferPlayer, on this or another device, the audio plays at a very low volume, sometimes with severe distortion. If I lower the sample rate via AVAudioConverter, the distortion gets worse. I've tried experimenting with the AVAudioSession category options, using separate AVAudioEngines, and much, much more, yet I still haven't figured this out. It's gotten to the point where I've fixed almost all the arcane and minor issues in my audio system, yet I still can't play back my voice properly. The ability to play and record simultaneously is a basic feature of phones; in speaker mode, a phone doesn't need to behave like a walkie-talkie. It seems inconceivable that the relatively new AVAudioEngine doesn't have an implementation for this, since the main issue (feedback) can be dealt with by a simple primitive circuit. Live video chat apps like FaceTime wouldn't be possible without it, yet to my surprise I found no answers online (what I did find were articles explaining how to write a file while playback is occurring). Is there truly no way to do this with AVAudioEngine? Am I missing something fundamental? Any pointers would be greatly appreciated.
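
For what it's worth, here is the graph described above expressed in code, as a sketch: the buffer player is connected using the input format so tapped mic buffers can be scheduled on it directly, with the mixer handling any conversion to the output format. The session setup from the post is assumed to have run already.

    import AVFAudio
    import Speech

    let engine = AVAudioEngine()
    let recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    let filePlayerNode = AVAudioPlayerNode()
    let bufferPlayerNode = AVAudioPlayerNode()
    engine.attach(filePlayerNode)
    engine.attach(bufferPlayerNode)

    let inputFormat = engine.inputNode.outputFormat(forBus: 0)
    engine.connect(filePlayerNode, to: engine.mainMixerNode, format: nil)
    engine.connect(bufferPlayerNode, to: engine.mainMixerNode, format: inputFormat)
    // mainMixerNode is implicitly connected to outputNode when the engine starts.

    engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { buffer, _ in
        recognitionRequest.append(buffer)                               // recognition path
        bufferPlayerNode.scheduleBuffer(buffer, completionHandler: nil) // live playback path
    }

    try engine.start()
    filePlayerNode.play()
    bufferPlayerNode.play()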
Posted by wmk
Post not yet marked as solved
0 Replies
330 Views
I'm exploring speech-to-commands processing for a game, and would like to try a baseline of voice recognition within that so two people in close proximity can interact without interfering with each other's voice commands to the system. (It's for an accessible game idea.)
Posted by
Post not yet marked as solved
0 Replies
356 Views
Hi everyone, I was working on some code that involves recording audio with AVAudioEngine and got an issue that just crashes the app:

    EXC_BREAKPOINT
    Exception 6, Code 1, Subcode 4304279688
    +0x009888 AudioRecordModule.setupAudioEngine
    +0x009788 AudioRecordModule.setupAudioEngine
    +0x00c5bc AudioRecordModule.handleConfigurationChange

Below is the relevant code in the Recorder class.

    public class AudioRecordModule: Module {
        private var audioEngine: AVAudioEngine?

        private func startRecording(options recordingOptions: RecordingOptions) {
            try AVAudioSession.sharedInstance().setCategory(.playAndRecord, options: .mixWithOthers)
            try AVAudioSession.sharedInstance().setActive(true)
            outputFormat = AVAudioFormat(
                commonFormat: recordingOptions.bitDepth == 32 ? .pcmFormatInt32 : .pcmFormatInt16,
                sampleRate: Double(recordingOptions.sampleRate),
                channels: AVAudioChannelCount(recordingOptions.channels),
                interleaved: true
            )!
            let fileUri = URL(string: recordingOptions.fileUri)!
            let formatSettings: [String: Any] = [
                AVFormatIDKey: kAudioFormatMPEG4AAC,
                AVSampleRateKey: recordingOptions.sampleRate,
                AVNumberOfChannelsKey: recordingOptions.channels,
                AVEncoderBitRateStrategyKey: AVAudioBitRateStrategy_Constant,
                AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue,
            ]
            self.recordedFile = try AVAudioFile(
                forWriting: fileUri,
                settings: formatSettings,
                commonFormat: outputFormat.commonFormat,
                interleaved: outputFormat.isInterleaved
            )
            if !hadSetupNotification {
                setupNotifications()
            }
        }

        func handleConfigurationChange() {
            DispatchQueue.main.async {
                self.releaseAudioEngine()
                self.setupAudioEngine()
                if self.state == "recording" {
                    // we could attempt to keep recording
                    do {
                        try self.audioEngine?.start()
                    } catch {
                        self.internalPauseRecording()
                        self.sendInterruptEvent()
                    }
                }
            }
        }

        func setupNotifications() {
            nc.addObserver(
                forName: Notification.Name.AVAudioEngineConfigurationChange,
                object: nil,
                queue: nil
            ) { [weak self] _ in
                guard let weakself = self else {
                    return
                }
                if weakself.state != "inactive" {
                    weakself.handleConfigurationChange()
                }
            }
        }

        private func setupAudioEngine() {
            self.audioEngine = nil
            let audioEngine = AVAudioEngine()
            self.audioEngine = audioEngine
            let inputNode = audioEngine.inputNode
            let inputFormat = inputNode.inputFormat(forBus: 0)
            let converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
                do {
                    let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
                        outStatus.pointee = AVAudioConverterInputStatus.haveData
                        return buffer
                    }
                    let frameCapacity = AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate)
                    let outputBuffer = AVAudioPCMBuffer(
                        pcmFormat: self.outputFormat,
                        frameCapacity: frameCapacity
                    )!
                    var error: NSError?
                    converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
                    if let error = error {
                        throw error
                    } else {
                        try self.recordedFile?.write(from: outputBuffer)
                    }
                } catch {
                    print(error)
                }
            }
        }

        private func releaseAudioEngine() {
            if let audioEngine = self.audioEngine {
                audioEngine.inputNode.removeTap(onBus: 0)
                audioEngine.stop()
            }
            audioEngine = nil
        }
    }

Besides that, the recording module works normally; it is just the configuration change that it does not handle well. I understand that when the configuration changes, I need to re-initialize the audio engine to get the correct input format (since the new configuration/audio device can have a different sample rate and so on). If I don't do that, the app also crashes, presumably due to the mismatch. AVAudioRecorder is not an option for me. Thank you for your help.
Posted by
Post not yet marked as solved
1 Replies
417 Views
I have a custom USB device that includes a microphone. I can see the microphone on macOS when I plug in the device, so I know that it is working with the kernel and AV subsystems. I can enumerate and reference the microphone using AVCaptureDevice, but I have not been able to figure out how to use this device reference with AVAudioEngine. I'm trying to accomplish two things with this microphone:
1. Stream audio from the microphone and have it rendered to the speakers on my MacBook Pro.
2. Capture sound data from the microphone and forward it to a live streaming API.
To my mind, from what I've read, I need AVAudioEngine to do this, but I'm having trouble determining from the documentation just how to go about it on macOS. There seems to be a lot more information for iOS or iPadOS, but since USB-C support is sparsely documented on those operating systems, I'm focusing on the desktop (macOS) for now. Can I convert an AVCaptureDevice into an audio input for AVAudioEngine? If not, how can I accomplish what I'm trying to do using whatever is available in AVFoundation?
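
Not a confirmed recipe, but one commonly used macOS route bypasses AVCaptureDevice entirely and points the engine's input node at the device through its underlying AUHAL audio unit, assuming the USB mic's CoreAudio AudioDeviceID has already been resolved through the HAL (e.g. by enumerating kAudioHardwarePropertyDevices).

    import AVFAudio
    import AudioToolbox
    import CoreAudio

    /// Make the engine's input node read from the given device instead of the default input.
    func select(inputDevice deviceID: AudioDeviceID, for engine: AVAudioEngine) -> Bool {
        guard let audioUnit = engine.inputNode.audioUnit else { return false }
        var device = deviceID
        let status = AudioUnitSetProperty(audioUnit,
                                          kAudioOutputUnitProperty_CurrentDevice,
                                          kAudioUnitScope_Global,
                                          0,
                                          &device,
                                          UInt32(MemoryLayout<AudioDeviceID>.size))
        return status == 0   // 0 == noErr
    }

    // After that, connecting inputNode to mainMixerNode covers speaker monitoring,
    // and a tap on inputNode can feed the live-streaming API.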
Posted by
Post not yet marked as solved
1 Replies
800 Views
My app listens for the verbal commands "Roll" and "Skip". It was working well until I used it while listening to a podcast in another app. I am getting a crash with the error:

    Thread 1: "required condition is false: IsFormatSampleRateAndChannelCountValid(format)"

It crashes when I am playing audio from Snipd (a podcast app) or the Apple Podcasts app. When I am playing audio from YouTube or Apple Music it does not crash. This is the code for when I start listening for the commands:

    // MARK: - Speech Recognition
    func startListening() {
        do {
            try configureAudioSession()
            createRecognitionRequest()
            try prepareAudioEngine()
        } catch {
            print("Audio Engine error: \(error.localizedDescription)")
        }
    }

    private func configureAudioSession() throws {
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playAndRecord, mode: .measurement,
                                     options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    }

    private func createRecognitionRequest() {
        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
        guard let recognitionRequest = recognitionRequest else { return }
        recognitionRequest.shouldReportPartialResults = true
        recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: handleRecognitionResult)
    }

    private func prepareAudioEngine() throws {
        let inputNode = audioEngine.inputNode
        inputNode.removeTap(onBus: 0)
        let inputFormat = inputNode.inputFormat(forBus: 0)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { [weak self] (buffer, _) in
            self?.recognitionRequest?.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()
        isActuallyListening = true
    }

Thanks
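
One defensive pattern (not a confirmed fix) for this particular assertion: the message names IsFormatSampleRateAndChannelCountValid, so checking the tap format before installing the tap avoids the hard crash if the input format comes back as 0 Hz / 0 channels. This is a variant of the post's prepareAudioEngine() using the same class members; InvalidFormatError is a hypothetical error type.

    struct InvalidFormatError: Error {}

    private func prepareAudioEngine() throws {
        let inputNode = audioEngine.inputNode
        inputNode.removeTap(onBus: 0)
        let inputFormat = inputNode.inputFormat(forBus: 0)
        // Bail out (or retry after reactivating the session) instead of crashing
        // on an invalid input format.
        guard inputFormat.sampleRate > 0, inputFormat.channelCount > 0 else {
            throw InvalidFormatError()
        }
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { [weak self] buffer, _ in
            self?.recognitionRequest?.append(buffer)
        }
        audioEngine.prepare()
        try audioEngine.start()
        isActuallyListening = true
    }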
Posted by
Post not yet marked as solved
0 Replies
468 Views
As the title suggests, I am using AVAudioEngine for speech recognition input and AVAudioPlayer for sound output. Apple says in this talk (https://developer.apple.com/videos/play/wwdc2019/510) that the setVoiceProcessingEnabled function usefully cancels the speaker output picked up by the mic. I set voice processing on the input and output nodes. It seems to work, however the volume is low, even when the system volume is turned up. Any solution to this would be much appreciated.
Posted by
Post not yet marked as solved
0 Replies
425 Views
How can I record audio in a keyboard extension? I've enabled microphone support by enabling "RequestsOpenAccess". When I try to record, I get the error below in the console. This doesn't make sense, as Apple's docs seem to say that microphone access is allowed with Full Keyboard Access. What is the point of enabling the microphone if the app cannot access the data from the microphone?

    -CMSUtilities- CMSUtility_IsAllowedToStartRecording: Client sid:0x2205e, XXXXX(17965), 'prim' with PID 17965 was NOT allowed to start recording because it is an extension and doesn't have entitlements to record audio.
Posted by
Post not yet marked as solved
1 Replies
461 Views
Basically, for this iPhone app I want to be able to record from either the built-in microphone or a connected USB audio device while simultaneously playing back processed audio on connected AirPods. It's a pretty simple AVAudioEngine setup that includes a couple of effects units. The category is set to .playAndRecord with the .allowBluetooth and .allowBluetoothA2DP options added. With no attempt to set the preferred input and AirPods connected, the AirPods mic is used and output also goes to the AirPods. If I call setPreferredInput with either the built-in mic or a USB audio device, I get input as desired, but output then always goes to the speaker. I don't really see a good explanation for this, and overrideOutputAudioPort does not really seem to have suitable options. Testing this on an iPhone 14 Pro.
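
For reference, a sketch of the session setup the post describes, preferring the USB input when present; this just illustrates the calls involved and is not a fix for the output being rerouted to the speaker.

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .default,
                            options: [.allowBluetooth, .allowBluetoothA2DP])
    try session.setActive(true)

    // Prefer the USB device's input if present, otherwise the built-in mic.
    let inputs = session.availableInputs ?? []
    if let usb = inputs.first(where: { $0.portType == .usbAudio }) {
        try session.setPreferredInput(usb)
    } else if let builtIn = inputs.first(where: { $0.portType == .builtInMic }) {
        try session.setPreferredInput(builtIn)
    }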
Posted by
Post not yet marked as solved
2 Replies
721 Views
Hello, I set up stereo audio recording (both audio and video are recorded), and the audio quality seems lower than what the native Camera app produces (configured for stereo). Using Console to check the log, I found a difference between the Camera app and mine regarding MXSessionMode (of mediaserverd): the Camera app reports MXSessionMode = SpatialRecording, while mine reports MXSessionMode = VideoRecording. How can I configure my capture session to get MXSessionMode = SpatialRecording? Any suggestion? Best regards
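
For context, a sketch of a typical stereo-capture configuration on the built-in mics. This only covers the stereo setup the post mentions; whether any of it changes mediaserverd's MXSessionMode is exactly the open question, and the polar-pattern and orientation values are illustrative.

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .videoRecording, options: [])

    if let builtInMic = session.availableInputs?.first(where: { $0.portType == .builtInMic }),
       let stereoSource = builtInMic.dataSources?.first(where: {
           $0.supportedPolarPatterns?.contains(.stereo) == true
       }) {
        try stereoSource.setPreferredPolarPattern(.stereo)
        try session.setPreferredInput(builtInMic)
        try session.setPreferredInputOrientation(.portrait)
    }
    try session.setActive(true)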
Posted by
Post not yet marked as solved
0 Replies
559 Views
In a VoIP application, when CallKit is enabled and we try to play a video through AVPlayer, the video content updates only frame by frame and the audio of the content is not audible. This issue is observed only in iOS 17. Any idea how we can resolve this?
Posted by