I am experiencing an issue while recording audio using AVAudioEngine with the installTap method. I convert the AVAudioPCMBuffer to Data and send it to a UDP server. However, when I receive the Data and play it back, there is continuous crackling noise during playback.
I am sending the audio data using the library at https://github.com/mindAndroid/swift-rtp, creating an RTP packet and sending it.
Please help me resolve this issue. I have attached the code reference that I am currently using.
Thank you.
ViewController.swift
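For reference, here is a minimal sketch of a lossless buffer-to-Data round trip, assuming a mono or interleaved Float32 format on both ends; the helper names are illustrative and not from the attached project. Crackling on the receiving side often comes down to the reassembled buffer's byte count or format not matching the original (or to lost/reordered UDP packets), so the sketch sizes the payload from frameLength rather than from the buffer's capacity.
import AVFoundation
// Hedged sketch: helpers for a lossless PCM round trip, assuming a mono or
// interleaved Float32 format. Names are illustrative, not from the post.
func data(from buffer: AVAudioPCMBuffer) -> Data? {
    guard let channelData = buffer.floatChannelData else { return nil }
    // Use frameLength (valid audio), not frameCapacity, to size the payload.
    let bytesPerFrame = Int(buffer.format.streamDescription.pointee.mBytesPerFrame)
    return Data(bytes: channelData[0], count: Int(buffer.frameLength) * bytesPerFrame)
}
func pcmBuffer(from data: Data, format: AVAudioFormat) -> AVAudioPCMBuffer? {
    let bytesPerFrame = Int(format.streamDescription.pointee.mBytesPerFrame)
    let frameCount = AVAudioFrameCount(data.count / bytesPerFrame)
    guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount),
          let channelData = buffer.floatChannelData,
          !data.isEmpty else { return nil }
    buffer.frameLength = frameCount
    data.withUnsafeBytes { raw in
        UnsafeMutableRawPointer(channelData[0])
            .copyMemory(from: raw.baseAddress!, byteCount: data.count)
    }
    return buffer
}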
AVAudioEngine
Use a group of connected audio node objects to generate and process audio signals and perform audio input and output.
Posts under AVAudioEngine tag
Hello, I'm fairly new to AVAudioEngine and I'm trying to connect two mono nodes as the left/right inputs to a stereo node. I was successful in splitting the input audio into two mono nodes using AVAudioConnectionPoint and channelMap.
But I can't figure out how to connect them back to a stereo node.
I'll post the code I have so far. The use case for this is that I'm trying to process the left/right channels with separate audio units.
Any ideas?
let monoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate, channels: 1)!
let leftInputMixer = AVAudioMixerNode()
let rightInputMixer = AVAudioMixerNode()
let leftOutputMixer = AVAudioMixerNode()
let rightOutputMixer = AVAudioMixerNode()
let channelMixer = AVAudioMixerNode()
[leftInputMixer, rightInputMixer, leftOutputMixer,
rightOutputMixer, channelMixer].forEach { engine.attach($0) }
let leftConnectionR = AVAudioConnectionPoint(node: leftInputMixer, bus: 0)
let rightConnectionR = AVAudioConnectionPoint(node: rightInputMixer, bus: 0)
plugin.leftInputMixer = leftInputMixer
plugin.rightInputMixer = rightInputMixer
plugin.leftOutputMixer = leftOutputMixer
plugin.rightOutputMixer = rightOutputMixer
plugin.channelMixer = channelMixer
leftInputMixer.auAudioUnit.channelMap = [0]
rightInputMixer.auAudioUnit.channelMap = [1]
engine.connect(previousNode, to: [leftConnectionR, rightConnectionR], fromBus: 0, format: monoFormat)
// Process right channel, pass through left channel
engine.connect(rightInputMixer, to: plugin.audioUnit, format: monoFormat)
engine.connect(plugin.audioUnit, to: rightOutputMixer, format: monoFormat)
engine.connect(leftInputMixer, to: leftOutputMixer, format: monoFormat)
// Mix back to stereo?
engine.connect(leftOutputMixer, to: channelMixer, format: stereoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: stereoFormat)
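Not an authoritative answer, but here is one way I'd sketch the final merge instead of the two stereo-format connections above, assuming a stereoFormat definition like the one shown (it isn't defined in the original snippet): feed both output mixers into the channel mixer in mono and pan them hard left and right.
// Hedged sketch of one way to fold the two mono chains back into stereo.
// `stereoFormat` is an assumed definition; it is not shown in the snippet above.
let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: nativeFormat.sampleRate,
                                 channels: 2)!
engine.connect(leftOutputMixer, to: channelMixer, format: monoFormat)
engine.connect(rightOutputMixer, to: channelMixer, format: monoFormat)
// AVAudioMixerNode adopts AVAudioMixing, so pan applies to its connection downstream.
leftOutputMixer.pan = -1.0
rightOutputMixer.pan = 1.0
engine.connect(channelMixer, to: engine.mainMixerNode, format: stereoFormat)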
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The audio format has a sample rate of 48000 Hz, and each buffer has 480 samples.
I noticed when calling
audioPlayerNode.scheduleBuffer(audioBuffer)
The memory keeps increasing at a rate of about 0.1 MB per second, and at around 4 minutes the node seems to be full of buffers and does a hard reset, at which point the audio stops temporarily and the memory drops. See the attached screenshot.
However, if I call
audioPlayerNode.scheduleBuffer(audioBuffer, at: nil, options: .interrupts)
The memory growth is gone, but the audio is broken (it sounds like it has been shortened).
Below is the full code snippet. Does anyone know how to fix it?
@Observable
final class MyAudioPlayer {
private var audioEngine: AVAudioEngine = .init()
private var audioPlayerNode: AVAudioPlayerNode = .init()
private var audioFormat: AVAudioFormat?
init() {
audioEngine.attach(audioPlayerNode)
audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
try? AVAudioSession.sharedInstance().setActive(true)
audioEngine.prepare()
try? audioEngine.start()
audioPlayerNode.play()
}
// more code...
/// callback every frame
private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
guard let buf,
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 48000, channels: 2, interleaved: false),
let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
else { return }
audioBuffer.frameLength = AVAudioFrameCount(samples)
if let data = audioBuffer.floatChannelData {
for channel in 0 ..< Int(format.channelCount) {
for frame in 0 ..< Int(audioBuffer.frameLength) {
data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
}
}
}
// memory leak here
audioPlayerNode.scheduleBuffer(audioBuffer)
}
}
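One hedged way to bound the growth is backpressure: count the buffers in flight and stop (or drop) once a cap is reached, decrementing in scheduleBuffer's completion handler. The sketch below shows members that could be added to a class like MyAudioPlayer above; the cap of 8 buffers is an illustrative assumption.
// Hedged sketch: members for a class like MyAudioPlayer. The cap is illustrative.
private let maxQueuedBuffers = 8
private var queuedBuffers = 0   // touched only from the audio callback's thread

private func schedule(_ audioBuffer: AVAudioPCMBuffer) {
    guard queuedBuffers < maxQueuedBuffers else { return }   // drop rather than pile up
    queuedBuffers += 1
    audioPlayerNode.scheduleBuffer(audioBuffer,
                                   completionCallbackType: .dataPlayedBack) { [weak self] _ in
        // Called on an internal queue; hop back to the callback's thread in real code.
        self?.queuedBuffers -= 1
    }
}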
I'm building a streaming app on visionOS that can play sound from audio buffers each frame. The source audio buffer has 2 channels and is in a Float32 interleaved format.
However, when setting up the AVAudioFormat with interleaved set to true, the app crashes with a memory issue:
AURemoteIO::IOThread (35): EXC_BAD_ACCESS (code=1, address=0x3)
But if I set up the AVAudioFormat with interleaved set to false and manually fill the AVAudioPCMBuffer, it plays audio as expected.
Could you please help me fix it? Below is the code snippet.
@Observable
final class MyAudioPlayer {
private var audioEngine: AVAudioEngine = .init()
private var audioPlayerNode: AVAudioPlayerNode = .init()
private var audioFormat: AVAudioFormat?
init() {
audioEngine.attach(audioPlayerNode)
audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: nil)
try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .default)
try? AVAudioSession.sharedInstance().setActive(true)
audioEngine.prepare()
try? audioEngine.start()
audioPlayerNode.play()
}
// more code...
/// This crashes
private func audioFrameCallback_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
guard let buf,
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: true),
let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
else { return }
audioBuffer.frameLength = AVAudioFrameCount(samples)
if let data = audioBuffer.floatChannelData?[0] {
data.update(from: buf, count: samples * Int(format.channelCount))
}
audioPlayerNode.scheduleBuffer(audioBuffer)
}
/// This works
private func audioFrameCallback_Non_Interleaved(buf: UnsafeMutablePointer<Float>?, samples: Int) {
guard let buf,
let format = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 480000, channels: 2, interleaved: false),
let audioBuffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(samples))
else { return }
audioBuffer.frameLength = AVAudioFrameCount(samples)
if let data = audioBuffer.floatChannelData {
for channel in 0 ..< Int(format.channelCount) {
for frame in 0 ..< Int(audioBuffer.frameLength) {
data[channel][frame] = buf[frame * Int(format.channelCount) + channel]
}
}
}
audioPlayerNode.scheduleBuffer(audioBuffer)
}
}
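A hedged workaround sketch, assuming a 48 kHz stereo source: keep the player node connected in the standard (deinterleaved) format and convert each interleaved buffer with AVAudioConverter before scheduling. These would be members of a class like MyAudioPlayer; the format values are assumptions, not taken from the crash report.
// Hedged sketch: members for a class like MyAudioPlayer. Formats are assumptions.
private let interleavedFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                              sampleRate: 48_000,
                                              channels: 2,
                                              interleaved: true)!
private let deinterleavedFormat = AVAudioFormat(standardFormatWithSampleRate: 48_000,
                                                channels: 2)!
private lazy var converter = AVAudioConverter(from: interleavedFormat,
                                              to: deinterleavedFormat)!

private func scheduleInterleaved(_ source: AVAudioPCMBuffer) {
    guard let output = AVAudioPCMBuffer(pcmFormat: deinterleavedFormat,
                                        frameCapacity: source.frameLength) else { return }
    do {
        // Same sample rate on both sides, so the simple buffer-to-buffer API is enough.
        try converter.convert(to: output, from: source)
        audioPlayerNode.scheduleBuffer(output)
    } catch {
        print("Conversion failed: \(error)")
    }
}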
Hello.
We are trying to get the audio volume from the microphone.
We have 2 questions.
1. Can anyone tell me about AVAudioEngine.InputNode.volume?
AVAudioEngine.InputNode.volume
We expected it to return 0 in silence and a float value up to 1.0 depending on the volume, but it looks like 1.0 (the default value) is returned at all times.
In which cases does it return, say, 0.5 or 0?
Sample code is below. Microphone works correctly.
// instance member
private var engine: AVAudioEngine!
private var node: AVAudioInputNode!
// start method
self.engine = .init()
self.node = engine.inputNode
engine.prepare()
try! engine.start()
// volume getter
print(self.node.volume)
2. What is the best practice for getting the audio volume from the microphone?
Requirements:
Without AVAudioRecorder; we use the engine for streaming audio.
It should withstand high-frequency access.
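For question 2, the usual approach I'd sketch (not an authoritative answer) is to install a tap on the input node and compute an RMS level per buffer yourself; inputNode.volume appears to be a settable mixing gain rather than a measured level, which would explain the constant 1.0.
import AVFoundation
// Hedged sketch: metering the microphone with a tap and an RMS calculation.
let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
    guard let samples = buffer.floatChannelData?[0] else { return }
    let frameCount = Int(buffer.frameLength)
    var sum: Float = 0
    for i in 0..<frameCount {
        sum += samples[i] * samples[i]
    }
    let rms = (sum / Float(max(frameCount, 1))).squareRoot()
    let decibels = 20 * log10(Double(max(rms, .leastNonzeroMagnitude)))   // 0 dBFS = full scale
    print(decibels)   // feed this into your streaming/UI code instead
}

engine.prepare()
try? engine.start()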
Testing info
device: iPhone XR
OS version: iOS 18
Best Regards.
The new iPhone 16 supports spatial audio recordings in the camera app when recording videos. Is it possible to also record spatial audio without video, and is it possible for 3rd party developers to do so? If so, how do I need to configure AVAudioSession and/or AVAudioEngine to record spatial audio in my audio recording app on iPhone 16?
Hi community,
I'm trying to set up an AVAudioFormat with AVAudioPCMFormatInt16, but I get an error:
AVAEInternal.h:125 [AUInterface.mm:539:SetFormat: ([[busArray objectAtIndexedSubscript:(NSUInteger)element] setFormat:format error:&nsErr])] returned false, error Error Domain=NSOSStatusErrorDomain Code=-10868 "(null)"
If I understand error code -10868 correctly, the format is not supported. But how can I use the PCM Int16 format? Here is my method:
- (void)setupAudioDecoder:(double)sampleRate audioChannels:(double)audioChannels {
if (self.isRunning) {
return;
}
self.audioEngine = [[AVAudioEngine alloc] init];
self.audioPlayerNode = [[AVAudioPlayerNode alloc] init];
[self.audioEngine attachNode:self.audioPlayerNode];
AVAudioChannelCount channelCount = (AVAudioChannelCount)audioChannels;
self.audioFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatInt16
sampleRate:sampleRate
channels:channelCount
interleaved:YES];
NSLog(@"Audio Format: %@", self.audioFormat);
NSLog(@"Audio Player Node: %@", self.audioPlayerNode);
NSLog(@"Audio Engine: %@", self.audioEngine);
// Error on this line
[self.audioEngine connect:self.audioPlayerNode to:self.audioEngine.mainMixerNode format:self.audioFormat];
/**NSError *error = nil;
if (![self.audioEngine startAndReturnError:&error]) {
NSLog(@"Error initializing the audio engine: %@", error);
return;
}
[self.audioPlayerNode play];
self.isRunning = YES;*/
}
Also, I see that the audioEngine does not seem to be running:
Audio Engine:
________ GraphDescription ________
AVAudioEngineGraph 0x600003d55fe0: initialized = 0, running = 0, number of nodes = 1
Has anyone already used this format with AVAudioFormat?
Thank you !
Here is the demo from Apple's site
This issue is specific to iOS 18.
When running this demo and there is a gap in speaking, recognitionTask(with:resultHandler:) provides only the text spoken after the gap, not the concatenation of the old text and the newly spoken text.
Hi!
I have an AVAudioSequencer with some AVMusicTracks that are filled with AVParameterEvents.
If I toggle the isMuted property of a track, it will instantly mute when changed to true. However, after setting isMuted back to false, the events only trigger on the next round of the loop and not instantly. Is this intended behaviour, and is there some way to get the events to trigger immediately after toggling isMuted to false?
AVAudioEngine and AVAudioSession
Welcome! I will start off with the term AVAudioEngineImpl::Initialize(NSError**).
Why? I want to make it possible for those who run into this issue to find this post through search engines!
This is a short breakdown based on what I observed while trying to use these two components. It's not a guide that goes into all the details.
If you're trying to figure out how to fix a crash, you may find a common way to fix it in this post!
Is it possible to use AVAudioEngine and AVAudioSession together?
The answer is yes.
But you will face challenges with it, mostly with AVAudioEngine. Whatever you're trying to do, it will take a lot of testing. I don't know how it is when debugging through an IDE, but with just the built .app on an iPhone it will take some testing, or a lot of testing.
Something that helped me fix a crash was this: https://developer.apple.com/documentation/avfaudio/audio_engine/audio_units/using_voice_processing
This example Project by Apple, uses both AVAudioEngine and AVAudioSession.
How can I fix AVAudioEngineImpl::Initialize(NSError**) ?
I think this depends. If you're lucky and have a crash log, you may find clues, but the stack trace sometimes doesn't really help either.
I will mention common cases that I encountered though.
inputNode
https://developer.apple.com/documentation/avfaudio/avaudioengine/1386063-inputnode
You need an inputNode apparently. You need to access it or else I think there won't be one. And if there isn't one, AVAudioEngine.start will most likely crash.
The audio engine creates a singleton on demand when first accessing this variable.
Doing this has prevented this common issue for me.
.prepare deallocates and can cause a crash if you restart your AudioEngine
Another issue I faced was handling .prepare wrong. You don't need .prepare. But if you use installTap or other things, I think you need it.
Here is a common thing to note.
If you had previously initialized the inputNode, it could be gone after using .prepare.
You have to ensure you're accessing AVAudioEngine.inputNode (or whatever node you need) again before calling .start().
The Voice Processing Project, does this by creating a Managing Controller for AVAudioEngine with a sort of "setup" function, which ensures that everything is ready, before .prepare and .start get called.
AVAudioSession's setCategory
You have to experiment with it. The crashes can be very weird. Sometimes your App will only crash once, and then only after you install it again, or if you start it up.
You are actually able to use .setActive and .setCategory with AVAudioEngine. Just do not try to call .setActive(false) before you've stopped the AudioEngine, as it will fail.
Sometimes I'd run into an issue with .setActive(true), so you really have to experiment to see whether leaving that part out resolves the issue or not.
try session.setCategory(.multiRoute, mode: .default, options: [.defaultToSpeaker, .mixWithOthers])
Experiment with it. But these .multiRoute and .mixWithOthers have allowed me to use AVAudioEngine to make a test recording. And I can even switch the Data Sources and Polar Patterns without any issues.
Sometimes you can get away without setting .setActive at all. Not sure if AVAudioEngine does it automatically.
Short Summary
If you use .prepare and then .stop, make sure to initialize things like .inputNode before calling .prepare and .start again. (THIS CAN BE DIFFERENT)
Only call .setActive(false) after you have called .stop. Otherwise I believe it has no chance to succeed.
AVAudioSession's setCategory is important. Ensure you use .multiRoute or experiment with all the modes.
If you manage to solve your crash, you'll be able to indeed change the Data Sources and Polar Patterns and more!
Check isRunning before calling .start; this will save you from another crash. If you call .start while the engine is already running, I think try/catch won't save you here; you have to ensure you're not starting it twice.
I hope that this short breakdown will help you resolve your crash. If you get deeper into AVAudioEngine and AVAudioSession, you'll probably face more crashes that I have yet to figure out how to solve. I have a lot of trouble putting my testing app on my iPhone, so I am sorry if this guide didn't cover every detail.
A HUGE tip from me is to check the documentation. For example, when I read the documentation for inputNode I learned why my app crashed: it's because I never accessed and initialized one.
The Developer Documentation can be a bit of a labyrinth, and I strongly recommend reading up on every property you access if you believe it causes issues. I also recommend finding example projects like the voice processing one, as there aren't any code examples in the documentation.
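To make the ordering above concrete, here is a minimal sketch of the setup/teardown sequence described in this post; the category, options, buffer size, and tap handling are assumptions for a simple recording use case, not a definitive recipe.
import AVFoundation
// Hedged sketch of the setup/teardown order described above.
final class EngineController {
    private let engine = AVAudioEngine()
    private let session = AVAudioSession.sharedInstance()

    func start() throws {
        try session.setCategory(.playAndRecord, mode: .default,
                                options: [.defaultToSpeaker, .mixWithOthers])
        try session.setActive(true)

        // Access inputNode before starting so the engine creates it on demand.
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            // Handle the buffer here (write to a file, analyze it, etc.).
            _ = buffer
        }

        engine.prepare()
        if !engine.isRunning {          // avoid starting the engine twice
            try engine.start()
        }
    }

    func stop() throws {
        engine.inputNode.removeTap(onBus: 0)
        engine.stop()                   // stop the engine first...
        try session.setActive(false)    // ...then deactivate the session
    }
}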
If I have a Bluetooth speaker connected and installTap is called on the input node, the callback fires for 1-2 seconds and then stops. I don't see any route change or other notification handler called in between.
engine.inputNode.removeTap(onBus: 0)
engine.inputNode.installTap(
onBus: 0,
bufferSize: 4096,
format: format
) { buffer, _ in
// 3
guard let channelData = buffer.floatChannelData else {
return
}
// This callback fails after some time.
}
Not sure if this is expected, but I noticed some other applications seem to work fine.
If I disconnect the Bluetooth device, my input works fine.
Also, I have no issues with output on the speaker.
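One thing worth trying (a hedged sketch, not a confirmed fix) is observing AVAudioSession.routeChangeNotification to see whether the Bluetooth input route is being switched out from under the tap, and reinstalling the tap with the new input format when it is.
import AVFoundation
// Hedged sketch: log route changes to see if the input route is dropping.
NotificationCenter.default.addObserver(
    forName: AVAudioSession.routeChangeNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { notification in
    guard let info = notification.userInfo,
          let rawReason = info[AVAudioSessionRouteChangeReasonKey] as? UInt,
          let reason = AVAudioSession.RouteChangeReason(rawValue: rawReason) else { return }
    switch reason {
    case .oldDeviceUnavailable, .newDeviceAvailable, .categoryChange:
        // Reinstall the tap with the new input format here if needed.
        print("Route changed: \(reason)")
    default:
        break
    }
}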
Calls to ExtAudioFileRead are throwing OSStatus 561145203 (AVAudioSessionErrorCodeResourceNotAvailable) on iOS and iPadOS 18 -- earlier versions of iOS have not exhibited this behavior. This is a longstanding code path that has seen a spike of these error codes since iOS 18's release.
The following is also printed to the Xcode 16 console:
watchOS 11: My recording playback gets paused when I turn my wrist down.
This was not happening in previous versions.
In my app I made a recording. When I play it back in my app, it plays fine in debug mode (when Xcode is connected), so I could not debug the problem.
Otherwise, playback is automatically paused (when my wrist is down or the inactivity time has elapsed).
I want it to keep playing.
Hi everyone,
I’m working on a project that involves streaming audio over WebSockets, and I need to compress the audio to reduce bandwidth usage. I’m currently using AVAudioEngine to capture and process audio in PCM format (AVAudioPCMBuffer), but I want to compress the buffer into Opus (or another efficient codec) before sending it over the network.
Has anyone worked with compressing an AVAudioPCMBuffer into Opus format within a tap on the inputNode, or could you recommend the best approach for compressing the PCM buffer into a different format? I haven’t been able to find a working solution for this.
Any advice or code examples would be greatly appreciated!
Thanks in advance,
Ondřej
--
My current code without the compression:
inputNode.installTap(onBus: .zero, bufferSize: 1440, format: nil) { [weak self] buffer, time in
guard let self else {
return
}
// 1. Send data
// a) Convert the buffer into the desired format
if let outputBuffer = buffer.convert(toFormat: Self.websocketInputFormat) {
// b) Use the converted buffer
// TODO: compress it into a different format
if let data = outputBuffer.convertToData() {
self.sendAudio(data)
}
}
// 2. Get sound level
self.visualizeRecorderBuffer(buffer)
}
func convert(toFormat outputFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
let outputFrameCapacity = AVAudioFrameCount(
round(Double(frameLength) * (outputFormat.sampleRate / format.sampleRate))
)
guard
let outputBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: outputFrameCapacity),
let converter = AVAudioConverter(from: format, to: outputFormat)
else {
return nil
}
converter.convert(to: outputBuffer, error: nil) { packetCount, status in
status.pointee = .haveData
return self
}
return outputBuffer
}
static private let websocketInputFormat = AVAudioFormat(
commonFormat: .pcmFormatInt16,
sampleRate: 16000,
channels: 1,
interleaved: false
)!
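I haven't verified this end to end, but one avenue is Apple's built-in Opus codec via AVAudioConverter: convert the PCM buffer into an AVAudioCompressedBuffer whose format uses kAudioFormatOpus. The sample-rate and frames-per-packet values below are assumptions and the system encoder may reject some configurations; packet framing for the WebSocket transport is also not shown.
import AVFoundation
import AudioToolbox
// Hedged sketch: encoding PCM into Opus packets with AVAudioConverter.
func makeOpusConverter(from pcmFormat: AVAudioFormat) -> (AVAudioConverter, AVAudioFormat)? {
    var asbd = AudioStreamBasicDescription(
        mSampleRate: 16_000,
        mFormatID: kAudioFormatOpus,
        mFormatFlags: 0,
        mBytesPerPacket: 0,        // variable-size packets
        mFramesPerPacket: 320,     // assumed 20 ms packets at 16 kHz
        mBytesPerFrame: 0,
        mChannelsPerFrame: 1,
        mBitsPerChannel: 0,
        mReserved: 0)
    guard let opusFormat = AVAudioFormat(streamDescription: &asbd),
          let converter = AVAudioConverter(from: pcmFormat, to: opusFormat) else { return nil }
    return (converter, opusFormat)
}

func encode(_ buffer: AVAudioPCMBuffer, with converter: AVAudioConverter, to opusFormat: AVAudioFormat) -> Data? {
    let outBuffer = AVAudioCompressedBuffer(format: opusFormat,
                                            packetCapacity: 8,
                                            maximumPacketSize: converter.maximumOutputPacketSize)
    var consumed = false
    var error: NSError?
    converter.convert(to: outBuffer, error: &error) { _, outStatus in
        if consumed {
            outStatus.pointee = .noDataNow
            return nil
        }
        consumed = true
        outStatus.pointee = .haveData
        return buffer
    }
    guard error == nil, outBuffer.byteLength > 0 else { return nil }
    return Data(bytes: outBuffer.data, count: Int(outBuffer.byteLength))
}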
Hi! I'm developing a music player app that interchanges between ApplicationMusicPlayer and AVAudioEngine. I'm facing an issue when switching from playback via ApplicationMusicPlayer to AVAudioEngine while the app is in background. Based on testing, it seems like the issue has to do with being unable to set audio focus in background, causing error AVAudioSessionErrorCodeCannotInterruptOthers.
I would like to check if ApplicationMusicPlayer has its own audio focus separated from the app's own audio focus. If it is, is there anything that I can do to ensure that ApplicationMusicPlayer returns focus to the app?
(I notice that the issue does not occur when moving playback from AVAudioEngine to ApplicationMusicPlayer; I'm not sure why the opposite direction does not work.)
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
I'm experiencing stuttering every time I record something with my iOS app on iOS 18 beta. The code ran fine on previous iOS versions.
The stuttering occurs for the first 2 seconds. Here's an example:
https://soundcloud.com/thomas-walther-219010679/ios-18-stuttering
The way I set up AVAudioEngine and AVAudioSession was vetted quite thoroughly during sessions at WWDC '23. Here is how the engine and the tap is configured:
let engine = AVAudioEngine()
let recorderNode = AVAudioMixerNode()
engine.attach(recorderNode)
engine.connect(engine.mainMixerNode, to: engine.outputNode, format: engine.outputNode.inputFormat(forBus: 0))
engine.connect(recorderNode, to: engine.mainMixerNode, format: recordingOutputFormat)
engine.connect(engine.inputNode, to: recorderNode, format: engine.inputNode.inputFormat(forBus: 0))
let bufferSize: AVAudioFrameCount = 4096
recorderNode.installTap(onBus: 0, bufferSize: bufferSize, format: nil) { [weak self] buffer, time in
guard let self = self else { return }
do {
// Write recording to disk
try audioFile.write(buffer)
} catch {
// ...
}
}
I tried setting a different buffer size, but with no luck. I also can't see any hangs in Instruments. Do you have any pointers on how to debug this?
I need to find a way to allow recording from the mic while outputting two different sound streams to two different devices (speaker and headphones).
I've done a fair bit of reading around using AVAudioSession.Category.multiRoute but haven't found any modern examples. @theanalogkid posted a nice example using Obj-C nine years ago, but others have noted that the code isn't readily translatable to Swift.
To make matters worse, this is one of the very few examples on how to properly use multirouting. The official documentation is lacking, to say the least, and the WWDC 2012 session is, well, old enough to attend middle school and be a Taylor Swift fan, but definitely not in Swift. The few relevant forum posts here are spread over this middle schooler's life span and likely outdated, with most having no responses other than the poster's own plightful echo. They don't paint a pretty picture of .multiroute's health, with a recent poster noting that volume buttons don't work in this mode, contacting DTS and finding that there's no fix; another finding that it just doesn't work for certain devices, etc.
Audio is giving me enough of a headache so I'd like to avoid slogging through this if possible. .multiRoute feels like the developer mode of AVAudioSession, but without documentation.
tl;dr - Without using .multiRoute, is there a way to allow an app to output to two different devices while simultaneously recording audio? If .multiRoute is the only way to achieve this, can someone give me a quick rundown of how this category works?
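A hedged partial rundown of .multiRoute, limited to the part I'm reasonably sure of: activate the category and enumerate the aggregate hardware channels reported by the current route. Mapping particular audio onto particular channels (for example, 0-1 to headphones and 2-3 to the built-in speaker) then has to be done on the output side and is not shown here.
import AVFoundation
// Hedged sketch: activating .multiRoute and listing the combined output channels.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.multiRoute, options: [.mixWithOthers])
try? session.setActive(true)

for output in session.currentRoute.outputs {
    print(output.portType.rawValue, output.portName)
    for channel in output.channels ?? [] {
        // Channel numbers show where each destination sits in the aggregate output.
        print("  channel \(channel.channelNumber): \(channel.channelName)")
    }
}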
I am developing a visionOS app that captions speech in real environments. Currently, I am using Apple's built-in speech recognizer. However, when I was testing the app with a Vision Pro, the device seemed to only pick up the user's voice (in other words, the voice of the wearer of the Vision Pro). For example, when the speech recognition task is running and another person in front of me is talking, the system does not pick up their speech well.
I tried to set the AVAudioSession to be equally sensitive to all directions:
private func configureAudioSession() {
do {
try audioSession.setCategory(.record, mode: .measurement)
try audioSession.setActive(true)
if #available(visionOS 1.0, *) {
let availableDataSources = audioSession.availableInputs?.first?.dataSources
if let omniDirectionalSource = availableDataSources?.first(where: {$0.preferredPolarPattern == .omnidirectional}) {
try audioSession.setInputDataSource(omniDirectionalSource)
}
}
} catch {
print("Failed to set up audio session: \(error)")
}
}
And here is how I set up the speech recognition and configure the microphone inputs:
private func startSpeechRecognition(completion: @escaping (String) -> Void) {
do {
// Cancel the previous task if it's running.
if let recognitionTask = recognitionTask {
recognitionTask.cancel()
self.recognitionTask = nil
}
// The AudioSession is already active, creating input node.
let inputNode = audioEngine.inputNode
try inputNode.setVoiceProcessingEnabled(false)
// Create and configure the speech recognition request
recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
guard let recognitionRequest = recognitionRequest else { fatalError("Unable to create a recognition request") }
recognitionRequest.shouldReportPartialResults = true
// Keep speech recognition data on device
if #available(iOS 13, *) {
recognitionRequest.requiresOnDeviceRecognition = true
}
// Create a recognition task for speech recognition session.
// Keep a reference to the task so that it can be canceled.
recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest) { result, error in
// var isFinal = false
if let result = result {
// Update the recognizedText
completion(result.bestTranscription.formattedString)
} else if let error = error {
completion("Recognition error: \(error.localizedDescription)")
}
if error != nil || result?.isFinal == true {
// Stop recognizing speech if there is a problem
self.audioEngine.stop()
inputNode.removeTap(onBus: 0)
self.recognitionRequest = nil
self.recognitionTask = nil
}
}
// Configure the microphone input
let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
self.recognitionRequest?.append(buffer)
}
audioEngine.prepare()
try audioEngine.start()
} catch {
completion("Audio engine could not start: \(error.localizedDescription)")
}
}
Description:
I am developing a recording-only application that supports background recording using AVAudioEngine. The app segments the recording into 60-second files for further processing. For example, a 10-minute recording results in ten 60-second files.
Problem:
The application functions as expected in the background. However, after the app receives an interruption (such as a phone call) and the interruption ends, I can successfully restart the recording. The problem arises when the app then transitions to the background; it fails to restart the recording. Specifically, after ending the call and transitioning the app to the background, the app encounters an error and is unable to restart AVAudioSession and AVAudioEngine. The only resolution is to close and restart the app, which is not ideal for user experience.
Steps to Reproduce:
1. Start recording using AVAudioEngine.
2. The app records and saves 60-second segments.
3. Receive an interruption (e.g., an incoming phone call).
4. End the call.
5. Transition the app to the background.
6. Transition the app to the foreground and the session will be activated again.
7. Attempt to restart the recording.
Expected Behavior:
The app should resume recording seamlessly after the interruption and background transition.
Actual Behavior:
The app fails to restart AVAudioSession and AVAudioEngine, resulting in a continuous error. The recording cannot be resumed without closing and reopening the app.
How I’m Starting the Recording:
Configuration:
internal func setAudioSessionCategory() {
do {
try audioSession.setCategory(
.playAndRecord,
mode: .default,
options: [.defaultToSpeaker, .mixWithOthers, .allowBluetooth]
)
} catch {
debugPrint(error)
}
}
internal func setAudioSessionActivation() {
if UIApplication.shared.applicationState == .active {
do {
try audioSession.setPrefersNoInterruptionsFromSystemAlerts(true)
try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
if audioSession.isInputGainSettable {
try audioSession.setInputGain(1.0)
}
try audioSession.setPreferredIOBufferDuration(0.01)
try setBuiltInPreferredInput()
} catch {
debugPrint(error)
}
}
}
Starting AVAudioEngine:
internal func setupEngine() {
if callObserver.onCall() { return }
inputNode = audioEngine.inputNode
audioEngine.attach(audioMixer)
audioEngine.connect(inputNode, to: audioMixer, format: AVAudioFormat.validInputAudioFormat(inputNode))
}
internal func beginRecordingEngine() {
audioMixer.removeTap(onBus: 0)
audioMixer.installTap(onBus: 0, bufferSize: 1024, format: AVAudioFormat.validInputAudioFormat(inputNode)) { [weak self] buffer, _ in
guard let self = self, let file = self.audioFile else { return }
write(file, buffer: buffer)
}
audioEngine.prepare()
do {
try audioEngine.start()
recordingTimer = Timer.scheduledTimer(withTimeInterval: recordingInterval, repeats: true) { [weak self] _ in
self?.handleRecordingInterval()
}
} catch {
debugPrint(error)
}
}
On the try audioEngine.start() call, I receive error code 561145187 in the catch block.
Logs/Error Messages:
• Error code: 561145187
Request:
I would appreciate any guidance or solutions to ensure the app can resume recording after interruptions and background transitions without requiring a restart.
Thank you for your assistance.
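For completeness, here is a hedged sketch of an interruption observer around the code above; restarting from the background may still be refused by the system, so this only illustrates the usual notification flow, not a guaranteed fix.
import AVFoundation
// Hedged sketch: wiring up AVAudioSession's interruption notification.
func observeInterruptions(for engine: AVAudioEngine, session: AVAudioSession = .sharedInstance()) {
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: session,
        queue: .main
    ) { notification in
        guard let info = notification.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue) else { return }
        switch type {
        case .began:
            // The system has paused the engine; tear down timers/taps here if needed.
            break
        case .ended:
            let optionsValue = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
            if options.contains(.shouldResume) {
                try? session.setActive(true, options: .notifyOthersOnDeactivation)
                try? engine.start()
            }
        @unknown default:
            break
        }
    }
}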