Hi all,
I have been quite stumped on this behavior for a little while now, so I thought it best to share here and see if someone more experienced with AVAudioEngine / AVAudioSession can weigh in.
Right now I have an AVAudioEngine that I am using for voice chat, handing it buffers to play. This works perfectly until route changes start to occur, which cause the AVAudioEngine to reset itself, which in turn stops all players attached to the engine.
Once an AVAudioPlayerNode gets stopped this way (but also any other time it is stopped), all samples that were scheduled for playback get purged. Where this becomes confusing for me is that the completion handler gets called every time, regardless of whether the sound was actually played.
Is there a reliable way to know if a sample needs to be rescheduled after a player has been reset?
I am not quite sure what my observer of AVAudioEngineConfigurationChange needs to be doing in my case, as this engine only handles output. All input goes through a separate engine for simplicity.
Currently I am storing a queue of samples as they get sent to the AVAudioPlayerNode for playback, and in the completion handler I check whether the player isPlaying. If it is playing, I assume the sound was actually played; if not, I leave the sample in the queue and assume that an observer of the route change or the configuration change will notice the queued samples and reschedule them.
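For concreteness, this is roughly what I'm considering for that observer; pendingBuffers and playerNode are my own names, and I'm not sure this is the intended pattern:

    NotificationCenter.default.addObserver(
        forName: .AVAudioEngineConfigurationChange,
        object: engine,
        queue: .main
    ) { [weak self] _ in
        guard let self = self else { return }
        // The engine stops itself on a configuration change; restart it first.
        if !self.engine.isRunning {
            do { try self.engine.start() } catch {
                print("Engine restart failed: \(error)")
                return
            }
        }
        // Anything still in the queue was purged when the player stopped,
        // so schedule it again.
        self.playerNode.play()
        for buffer in self.pendingBuffers {
            self.playerNode.scheduleBuffer(buffer, completionHandler: nil)
        }
    }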
Thanks for any feedback!
AVAudioSession
Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.
Posts under the AVAudioSession tag (80 posts)
I'm building a React Native call application using the following combination of libraries:
https://github.com/react-native-webrtc/react-native-callkeep
https://github.com/react-native-webrtc/react-native-webrtc
https://github.com/react-native-webrtc/react-native-voip-push-notification
When I press the speaker button on the call screen displayed by CallKit and switch it ON, the speaker button on the call screen reverts to OFF after a few seconds.
However, when the button display reverts to OFF, the actual audio route does not return to the earpiece; the audio continues to come out of the speaker unchanged.
Could you please advise on what might cause the speaker button display to revert, and whether there are any potential solutions?
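For reference, the native-side override I've been experimenting with is AVAudioSession's output port override; a minimal sketch (it does not stop CallKit from reverting the button state):

    let session = AVAudioSession.sharedInstance()
    do {
        // Route the call audio to the built-in speaker.
        try session.overrideOutputAudioPort(.speaker)
    } catch {
        print("Speaker override failed: \(error)")
    }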
I want the audio session to always use the built-in microphone. However, when using the setPreferredInput() method like in this example
private func enableBuiltInMic() {
    // Get the shared audio session.
    let session = AVAudioSession.sharedInstance()

    // Find the built-in microphone input.
    guard let availableInputs = session.availableInputs,
          let builtInMicInput = availableInputs.first(where: { $0.portType == .builtInMic }) else {
        print("The device must have a built-in microphone.")
        return
    }

    // Make the built-in microphone input the preferred input.
    do {
        try session.setPreferredInput(builtInMicInput)
    } catch {
        print("Unable to set the built-in mic as the preferred input.")
    }
}
and calling that function once in the initializer,
the audio session still switches to the external microphone once one is plugged in.
The session's preferredInput is nil again at that point, even though the built-in microphone is still listed in availableInputs.
So,
why is the preferredInput suddenly reset?
when would be the appropriate time to set the preferredInput again?
Observing the session's availableInputs did not work, and setting the preferredInput again in the routeChangeNotification handler seems like a bad choice, as it is already a bit too late by then.
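For completeness, this is the route-change variant I tried before concluding it fires too late (a sketch that reuses the enableBuiltInMic() above):

    NotificationCenter.default.addObserver(
        forName: AVAudioSession.routeChangeNotification,
        object: nil,
        queue: .main
    ) { [weak self] notification in
        guard let reasonValue = notification.userInfo?[AVAudioSessionRouteChangeReasonKey] as? UInt,
              let reason = AVAudioSession.RouteChangeReason(rawValue: reasonValue),
              reason == .newDeviceAvailable else { return }
        // Re-assert the built-in mic after an external mic appears.
        self?.enableBuiltInMic()
    }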
I have an iPadOS M-processor application with two different running configurations.
In config1, the shared AVAudioSession is configured for .videoChat mode using the built-in microphone. The input/output nodes of the AVAudioEngine are configured with voice processing enabled. The built-in mic is formatted for 1 channel at 48 kHz.
In config2, the shared AVAudioSession is configured for .measurement mode using an external USB microphone. The input/output nodes of the AVAudioEngine are configured with voice processing disabled. The external mic is formatted for 2 channels at 44.1 kHz.
I've written a configuration manager designed to safely switch between these two configurations. It works by stopping AVAudioEngine and detaching all but the input and output nodes, updating the shared audio session for the desired mic and sample rates, and setting voice processing to either true or false as required by the configuration. Finally the new audio graph is constructed by attaching the appropriate nodes, connecting them, and re-starting AVAudioEngine.
I'm experiencing what I believe is a race condition between switching voice processing on or off and then trying to re-build and start the new audio graph. Even though the notifications I dump to the console indicate that my requested input and sample-rate settings are in place, I crash when trying to start the audio engine because the sample rate is wrong. Investigating further, it looks like the switch from remote I/O to voice-processing I/O, or vice versa, has not actually completed yet. I introduced a 100 ms delay and that seems to help, but it is obviously not a reliable way to build software that must work consistently.
How can I make sure that what are apparently asynchronous configuration changes to the shared audio session and the input/output nodes have completed before I go on?
I tried using route change notifications from the shared AVAudioSession, but these lie. They say my preferred mic input and sample-rate settings are in place, but when I dump the AVAudioEngine graph to the debugger console, I still see the wrong sample rate assigned to the input/output nodes. The graph also shows the wrong AU nodes; that is, VPIO is still in place when RIO should be, or vice versa.
How can I make the switch reliable without arbitrary time delays?
Is my configuration manager approach appropriate (question for Apple engineers)?
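For reference, the workaround I'm evaluating instead of the fixed 100 ms delay polls the engine's input format until it matches the requested hardware settings. It's still a timeout loop, so I don't consider it a real fix (ConfigError is my own type):

    func waitForInputSampleRate(_ sampleRate: Double, timeout: TimeInterval = 1.0) async throws {
        let deadline = Date().addingTimeInterval(timeout)
        while Date() < deadline {
            // inputFormat(forBus:) reflects the hardware side of the current I/O unit.
            if audioEngine.inputNode.inputFormat(forBus: 0).sampleRate == sampleRate {
                return
            }
            try await Task.sleep(nanoseconds: 10_000_000) // poll every 10 ms
        }
        throw ConfigError.audioGraphNotReady
    }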
Hi,
We've noticed that this issue occurs more frequently after upgrading to iOS 18.4.1 and can result in one-way audio.
Our app uses CallKit with WebRTC to establish VoIP connections.
However, on iOS 18.4.1, CallKit no longer triggers:
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession)
We're currently comparing the occurrence rate across different iOS versions to better understand the impact.
Could you please help analyze the root cause of this issue?
I'm currently working on an iOS app plus a custom keyboard extension, and I'm hoping to get some insight into how best to architect a workflow where the keyboard acts as a remote trigger for dictation, while the main app handles the actual microphone recording and transcription.
I know that third-party keyboards are sandboxed and can’t access the microphone directly, so the pattern I’m following is similar to what Wispr Flow appears to be doing:
What I'm Trying to Build
The user taps a mic button in the custom keyboard (installed system-wide).
This triggers the main app to open, start a recording session, and send the audio to my transcription endpoint (not using Speech.framework).
Once the transcription result is ready, it's stored in an App Group shared container.
The keyboard extension polls for or receives the transcribed text and inserts it into the current input field via textDocumentProxy.insertText(...).
Key Questions
Triggering App Dictation from Keyboard
Is there a clean system-native way to transition from the keyboard to the main app and back?
Are there best practices around:
Preventing jarring transitions (keyboard disappearing)?
Letting the app record in the background once triggered (see below)?
Keeping the App Alive During Audio Recording
Once the main app is opened to handle the dictation:
I want it to record audio continuously (sometimes for up to a minute or two), send it to an external transcription API, and return the result.
However, from what I have read, iOS aggressively suspends or kills apps that are not in the foreground or haven't requested the correct background modes, especially if background tasks run for longer than 30 seconds.
Even with audio enabled in Background Modes and a live AVAudioEngine session, I find that the app is sometimes paused or killed after a few seconds, especially when the user switches back to the app where they want to type.
But apps like Wispr Flow seem to manage this well. Their flow allows the user to record voice and insert it seamlessly without the app being terminated mid-recording.
So:
✅ How do I prevent my app from being killed/suspended while it's recording, especially when the user switches back to the original app?
Do I need:
A background AVAudioSession hack?
Audio playback tricks (e.g. silent audio)?
A workaround using CallKit (some apps seem to use it for persistent audio sessions)?
Something else Apple allows but doesn't document clearly (or I am just a bad searcher)?
Returning Text to the Keyboard Extension
I’m using UserDefaults(suiteName:) in the App Group to pass the transcription result. Is that still the recommended approach?
Would it be better to use a shared file (for larger data or richer metadata)?
Are there any timing issues I should be aware of, e.g. race conditions or stale reads?
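For context, the hand-off I have today is roughly the following; the suite name and keys are placeholders:

    let defaults = UserDefaults(suiteName: "group.com.example.dictation")!

    // App side: publish the transcription together with a monotonically
    // increasing id, so the keyboard can tell fresh results from stale ones.
    func publishTranscription(_ text: String) {
        let nextId = defaults.integer(forKey: "transcriptionId") + 1
        defaults.set(text, forKey: "transcriptionText")
        defaults.set(nextId, forKey: "transcriptionId")
    }

    // Keyboard side: poll, and only consume ids we haven't seen before.
    func pollTranscription(lastSeenId: Int) -> (id: Int, text: String)? {
        let id = defaults.integer(forKey: "transcriptionId")
        guard id > lastSeenId, let text = defaults.string(forKey: "transcriptionText") else { return nil }
        return (id, text)
    }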
I’m using the shared instance of AVAudioSession. After activating it with .setActive(true), I observe the outputVolume, and it correctly reports the device’s volume.
However, after deactivating the session using .setActive(false), changing the volume, and then reactivating it again, the outputVolume returns the previous volume (before deactivation), not the current device volume. The correct volume is only reported after the user manually changes it again using physical buttons or Control Center, which triggers the observer.
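For reference, the observation itself is plain KVO on the shared session; a sketch of what I have (volumeObservation is a stored NSKeyValueObservation property):

    let session = AVAudioSession.sharedInstance()
    // Keep a strong reference to the observation or it is deallocated immediately.
    volumeObservation = session.observe(\.outputVolume, options: [.initial, .new]) { _, change in
        print("outputVolume:", change.newValue ?? -1)
    }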
What I need is a way to retrieve the actual current device volume immediately after reactivating the audio session, even on the second and subsequent activations.
Disabling and re-enabling the audio session is essential to how my application functions.
I’ve tested this behavior with my colleagues, and the issue is consistently reproducible on iOS 18.0.1, iOS 18.1, iOS 18.3, iOS 18.5 and iOS 18.6.2. On devices running iOS 17.6.1 and iOS 16.0.3, outputVolume correctly reflects the current volume immediately after calling .setActive(true) multiple times.
Hello,
I'm trying to determine the best/recommended AVAudioSession configuration (i.e category, mode, and options) for the following use-case.
Essentially, I'd like to switch between periods of playing an audio file and recognizing speech. The audio file is typically speech, and I don't intend for playback and speech recognition to occur simultaneously. I'd like for the user to still be able to interact with Siri, and I'd like it to work with CarPlay, where navigation prompts can occur.
I would assume the category to use is .playAndRecord, but I'm not sure whether it's better to set that once for the entire lifecycle, or to use .playback for audio file playback and then switch to .playAndRecord for speech recognition. I'm also not sure about the best mode and options to set. Any suggestions would be appreciated.
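For context, the single lifetime configuration I'm currently experimenting with looks like this; the mode and options are somewhat arbitrary choices on my part:

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(
        .playAndRecord,
        mode: .spokenAudio,
        options: [.allowBluetooth, .duckOthers]
    )
    try session.setActive(true)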
Thanks.
Hi everyone,
I’m testing audio recording on an iPhone 15 Plus using AVFoundation.
Here’s a simplified version of my setup:
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: 8000,
    AVNumberOfChannelsKey: 1,
    AVLinearPCMBitDepthKey: 16,
    AVLinearPCMIsFloatKey: false
]
audioRecorder = try AVAudioRecorder(url: fileURL, settings: settings)
audioRecorder?.record()
When I check the recorded file’s sample rate, it logs:
Actual sample rate: 8000.0
However, when I inspect the hardware sample rate:
try session.setCategory(.playAndRecord, mode: .default)
try session.setActive(true)
print("Hardware sample rate:", session.sampleRate)
I consistently get:
Hardware sample rate: 48000.0
My questions are:
Is the iPhone mic actually capturing at 8 kHz, or is it recording at 48 kHz and then downsampling to 8 kHz internally?
Is there any way to force the hardware to record natively at 8 kHz?
If not, what’s the recommended approach for telephony-quality audio (true 8 kHz) on iOS devices?
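For what it's worth, I also tried asking the session for 8 kHz, which the hardware does not appear to honor:

    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    // This is a request, not a guarantee; on my device the hardware stays
    // at 48 kHz and the 8 kHz file appears to be produced by downsampling.
    try session.setPreferredSampleRate(8000)
    try session.setActive(true)
    print("Hardware sample rate after request:", session.sampleRate)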
Thanks in advance for your guidance!
Hello,
I’m new here. I'm developing an iOS app and I’d like to know whether it is possible to detect if a phone call is being recorded by another app running in the background.
I’ve already reviewed the documentation for CallKit and AVAudioSession, but I couldn’t find anything related. My expectation was that iOS might provide some callback or API to indicate if a call is being recorded (third-party apps), but so far I haven’t found a way.
My questions are:
Does iOS expose any API to detect if a call is being recorded?
If not, is there any indirect, Apple's policy compliant method (e.g., microphone usage events) that can be relied upon?
Or is this something that iOS explicitly prevents for privacy reasons?
I'm expecting solutions that align with Apple's policies and would be accepted under the App Store Review Guidelines.
Thanks in advance for any guidance.
In iOS 26 beta 5, AVAudioSessionCategoryOptionAllowBluetooth is marked as deprecated since iOS 8, even though this option was not deprecated in iOS 18.6. I think this is a mistake and the deprecation actually happens in iOS 26. Am I right?
It seems that the substitute for this option is AVAudioSessionCategoryOptionAllowBluetoothHFP. The documentation does not make clear whether the behavior is exactly the same or whether any difference should be expected. Has anyone used this option in iOS 26? Should I expect any difference from the current behavior of AVAudioSessionCategoryOptionAllowBluetooth?
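For now I'm planning to guard the option by OS version; a sketch, assuming .allowBluetoothHFP is only available from the iOS 26 SDK onwards:

    var options: AVAudioSession.CategoryOptions = []
    if #available(iOS 26.0, *) {
        options.insert(.allowBluetoothHFP)  // assumed new name in the iOS 26 SDK
    } else {
        options.insert(.allowBluetooth)
    }
    try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .voiceChat, options: options)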
Thank you.
Two issues:
No matter what I set in
try audioSession.setPreferredSampleRate(x)
the sample rate on both iOS and macOS is always 48000 when the output goes through the speaker, and 24000 when my AirPods connect to an iPhone/iPad.
Now, I'm checking the current output loudness to animate a 3D character, using
mixerNode.installTap(onBus: 0, bufferSize: y, format: nil) { [weak self] buffer, time in
    Task { @MainActor in
        // calculate rms and animate character accordingly
    }
}
but any buffer size under 4800 is simply ignored, and the buffers I get are always 4800 frames.
This is fine when the sample rate is 48000, as 10 updates per second lead to decent visual results.
But when the AirPods connect, the sample rate is 24000, which means only 5 updates per second, so the character animation looks lame.
My AVAudioEngine setup is the following:
audioEngine.connect(playerNode, to: pitchShiftEffect, format: format)
audioEngine.connect(pitchShiftEffect, to: mixerNode, format: format)
audioEngine.connect(mixerNode, to: audioEngine.outputNode, format: nil)
Now, I'd be fine with the outputNode running at whatever rate it needs, as long as my tap gets at least 10 buffers per second.
PS: Specifying my preferred format in the tap
let format = AVAudioFormat(standardFormatWithSampleRate: 48_000, channels: 2)!
mixerNode.installTap(onBus: 0, bufferSize: y, format: format)
doesn't change anything either.
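The workaround I'm considering is to accept whatever buffer size the tap delivers and compute RMS over sub-chunks of it, so the number of loudness values per second no longer depends on the hardware sample rate. A sketch (animate(rms:) is my own method, and the values still arrive in bursts, so they would need to be spread over time for smooth animation):

    mixerNode.installTap(onBus: 0, bufferSize: 4800, format: nil) { [weak self] buffer, _ in
        guard let self = self, let channel = buffer.floatChannelData?[0] else { return }
        let frames = Int(buffer.frameLength)
        // One RMS value per ~100 ms of audio, regardless of sample rate.
        let chunk = max(1, Int(buffer.format.sampleRate / 10))
        var start = 0
        while start < frames {
            let end = min(start + chunk, frames)
            var sum: Float = 0
            for i in start..<end { sum += channel[i] * channel[i] }
            let rms = sqrt(sum / Float(end - start))
            Task { @MainActor in self.animate(rms: rms) }
            start = end
        }
    }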
Context:
I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup.
I am not activating the audio session manually. The audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate.
I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking.
Issue
When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.
Details:
The audio session is already active and configured with .playAndRecord.
The input tap is already installed when the engine is started.
When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio.
Assumptions / Current Setup
Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio.
However, there seems to be a delay before valid input is delivered to the tap, occurring only when switching from a receive state to simultaneously talking.
Questions
Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine?
Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio?
Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer?
Would using separate engines for input and output improve responsiveness?
I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately without the delay seen in my current implementation.
Relevant Code Snippets
Engine Setup
func setup() {
    let input = audioEngine.inputNode
    do {
        try input.setVoiceProcessingEnabled(true)
    } catch {
        print("Could not enable voice processing \(error)")
        return
    }
    input.isVoiceProcessingAGCEnabled = false

    let output = audioEngine.outputNode
    let mainMixer = audioEngine.mainMixerNode
    audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(beepNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(mainMixer, to: output, format: outputFormat)

    // Initialize converters
    converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
    f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)!

    audioEngine.prepare()
}
Input Tap Installation
func installTap() {
    guard AudioHandler.shared.checkMicrophonePermission() else {
        print("Microphone not granted for recording")
        return
    }
    guard !isInputTapped else {
        print("[AudioEngine] Input is already tapped!")
        return
    }

    let input = audioEngine.inputNode
    let microphoneFormat = input.inputFormat(forBus: 0)
    let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)!
    let desiredFormat = outputFormat
    let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate)

    input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: microphoneFormat) { [weak self] buffer, when in
        guard let self = self else { return }

        // Output buffer: 1920 frames at 16kHz
        guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat, frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return }
        outputBuffer.frameLength = outputBuffer.frameCapacity

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = .haveData
            return buffer
        }
        var error: NSError?
        let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
        if converterResult != .haveData {
            DebugLogger.shared.print("Downsample error \(converterResult)")
        } else {
            self.handleDownsampledBuffer(outputBuffer)
        }
    }
    isInputTapped = true
}
After updating to iOS 18.5, we've observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using an AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (18.4 and below). We'd like to confirm whether there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.
func configureForVoIPCall() throws {
    try setCategory(
        .playAndRecord, mode: .voiceChat,
        options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
    try setActive(true)
}
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.
➡️ The problem:
My app plays continuous audio in both foreground and background states. If the user starts recording video using the iOS Camera app, the app's audio, which is still playing in the background, gets captured in the video. This is obviously unintended behavior.
Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins.
So far, I haven’t found a reliable way to detect when video recording starts if ‘Allow Audio Playback’ is ON.
➡️ What I’ve tried:
— AVAudioSession.interruptionNotification → doesn’t fire
— devicesChangedEventStream → not triggered
I don't want to request mic permission (the app doesn't use the mic). Also, disabling the app's background audio playback isn't an option, as it is a crucial part of the user experience.
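One more thing on my list is the secondary audio hint notification; I don't know yet whether it fires while 'Allow Audio Playback' is ON, but for completeness (pausePlayback() is my own method):

    NotificationCenter.default.addObserver(
        forName: AVAudioSession.silenceSecondaryAudioHintNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard let typeValue = notification.userInfo?[AVAudioSessionSilenceSecondaryAudioHintTypeKey] as? UInt,
              let type = AVAudioSession.SilenceSecondaryAudioHintType(rawValue: typeValue),
              type == .begin else { return }
        // Another app's primary audio became active; stop our playback.
        pausePlayback()
    }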
➡️ What I need:
A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access, so I can stop audio and avoid unintentional overlap with the user's recordings.
Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated.
Thanks.
I'm developing a safety-critical monitoring app that needs to fetch data from government APIs every 30 minutes and trigger emergency audio alerts for threshold violations.
The app must work reliably in background since users depend on it for safety alerts even while sleeping.
Main Challenge: iOS background limitations seem to prevent consistent 30-minute intervals. Standard BGTaskScheduler tasks and timers get suspended after a few minutes in the background.
Question: What's the most reliable approach to ensure consistent 30-minute background monitoring for a safety-critical app where missed alerts could have serious consequences?
Are there special entitlements or frameworks for emergency/safety applications?
The app needs to function like an alarm clock: working reliably even when backgrounded, with emergency audio override capabilities.
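For reference, the refresh scheduling I have now is the standard BGTaskScheduler flow; the identifier and handler are illustrative, and in my testing the 30-minute interval is only a request the system may ignore:

    import BackgroundTasks

    // Must be called before the app finishes launching; the identifier must
    // also be listed in Info.plist under BGTaskSchedulerPermittedIdentifiers.
    func registerRefreshTask() {
        BGTaskScheduler.shared.register(forTaskWithIdentifier: "com.example.monitor.refresh", using: nil) { task in
            handleRefresh(task: task as! BGAppRefreshTask)  // handleRefresh is my own function
        }
    }

    // Called after launch and at the end of each refresh to queue the next one.
    func scheduleRefresh() {
        let request = BGAppRefreshTaskRequest(identifier: "com.example.monitor.refresh")
        request.earliestBeginDate = Date(timeIntervalSinceNow: 30 * 60) // earliest, not guaranteed
        try? BGTaskScheduler.shared.submit(request)
    }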
Hello all,
I have an application that uses a ReplayKit broadcast extension to record the entire screen, together with microphone sound, to a file on iOS. Up until some months ago everything was smooth; currently I am facing the following issue.
If another app uses the microphone, I lose the sound and it never comes back. To debug this, I added a log to processSampleBuffer that records a line each time it receives a buffer of type .audioMic. I start the recording and everything works as expected. Later on, I go into an app that uses the microphone, and from then on I get no logs for audioMic. I stop using the microphone in that app, but the sound never comes back. As a result, my video file ends up with no sound at all.
In the same context, I also noticed that even with the Photos app broadcast extension, if you start recording a video and then use the speech-to-text feature of the keyboard, the audio gets spliced: while STT is on there is no sound, and the audio from after STT stops is joined directly to the audio from before STT started. So I guess this is something general.
Also in the same research, I saw that the Google Meet app does not allow any other app to use the microphone while you are in a meeting (even STT is grayed out).
I would like to know my options here. What can I do to get a valid video file with sound? How can I prevent other apps from using the microphone while my app is recording? Is there an entitlement for this? How does Google Meet do it?
P.S. I have added an observer for session interruptions; the .began type fires, but .ended never does, so I cannot actually set the AVAudioSession active again.
In my navigation CarPlay app I am needing to capture voice input and process that into text. Is there a built in way to do this in CarPlay?
I did not find one, so I used the following, but I am running into issues where the AVAudioSession throws an error when I try to set active to false after I have captured the audio.
public func startRecording(completionHandler: @escaping (_ completion: String?) -> ()) throws {
    // Cancel the previous task if it's running.
    if let recognitionTask = self.recognitionTask {
        recognitionTask.cancel()
        self.recognitionTask = nil
    }

    // Configure the audio session for the app.
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.record, mode: .default, options: [.duckOthers, .interruptSpokenAudioAndMixWithOthers])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
    let inputNode = self.audioEngine.inputNode

    // Create and configure the speech recognition request.
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let recognitionRequest = self.recognitionRequest else { fatalError("Unable to create a SFSpeechAudioBufferRecognitionRequest object") }
    recognitionRequest.shouldReportPartialResults = true

    // Keep speech recognition data on device
    recognitionRequest.requiresOnDeviceRecognition = true

    // Create a recognition task for the speech recognition session.
    // Keep a reference to the task so that it can be canceled.
    self.recognitionTask = self.speechRecognizer.recognitionTask(with: recognitionRequest) { result, error in
        var isFinal = false

        if let result = result {
            // Update the text view with the results.
            let textResult = result.bestTranscription.formattedString
            isFinal = result.isFinal
            let confidence = result.bestTranscription.segments[0].confidence
            if confidence > 0.0 {
                isFinal = true
                completionHandler(textResult)
            }
        }

        if error != nil || isFinal {
            // Stop recognizing speech if there is a problem.
            self.audioEngine.stop()
            do {
                try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
            } catch {
                print(error)
            }
            inputNode.removeTap(onBus: 0)
            self.recognitionRequest = nil
            self.recognitionTask = nil
            if error != nil {
                completionHandler(nil)
            }
        }
    }

    // Configure the microphone input.
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        self.recognitionRequest?.append(buffer)
    }

    self.audioEngine.prepare()
    try self.audioEngine.start()
}
Again, is there a built-in method to capture audio in CarPlay? If not, why would the AVAudioSession throw this error?
Error Domain=NSOSStatusErrorDomain Code=560030580 "Session deactivation failed" UserInfo={NSLocalizedDescription=Session deactivation failed}
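One thing I noticed while digging: 560030580 is the FourCC '!act', which appears to correspond to AVAudioSessionErrorCodeIsBusy, i.e. the session is still doing I/O when it is deactivated. The teardown order I'm testing now removes the tap before deactivating; a sketch:

    self.audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    // Deactivate only after the engine has stopped its I/O.
    do {
        try audioSession.setActive(false, options: .notifyOthersOnDeactivation)
    } catch {
        print(error)
    }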
Hello everyone,
I’m new to Swift development and have been working on an audio module that plays a specific sound at regular intervals - similar to a workout timer that signals switching exercises every few minutes.
Following AVFoundation documentation, I’m configuring my audio session like this:
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playback,
    mode: .default,
    options: [.interruptSpokenAudioAndMixWithOthers, .duckOthers]
)
self.engine.attach(self.player)
self.engine.connect(self.player, to: self.engine.outputNode, format: self.audioFormat)
try? session.setActive(true)
When it’s time to play cues, I schedule playback on a DispatchQueue:
// scheduleAudio uses DispatchQueue
self.scheduleAudio(at: interval.start) {
    do {
        try audio.engine.start()
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    } catch {
        print("Audio activation failed: \(error)")
    }
}
This works perfectly in the foreground. But once the app goes into the background, the scheduled callback runs, yet the audio engine fails to start, resulting in an error with code 561015905.
Interestingly, if the app is already playing audio before going to the background, the scheduled sounds continue to play as expected.
I have added the required background audio mode to my Info.plist by including the UIBackgroundModes key with the value audio.
Is there anything else I should configure? What is the best practice to play periodic audio when the app runs in the background? How do apps like turn-by-turn navigation handle continuous audio playback in the background?
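For what it's worth, 561015905 is the FourCC '!pla', which appears to be AVAudioSessionErrorCodeCannotStartPlaying; my working theory is that a backgrounded app with a mixable session can't start audio from scratch. The variant I'm testing keeps the engine running from the foreground and only schedules buffers in the callback:

    self.scheduleAudio(at: interval.start) {
        // The engine must already be running; starting it here fails in the background.
        guard audio.engine.isRunning else {
            print("Engine not running; cannot start playback from the background")
            return
        }
        audio.node.play()
        for sample in interval.samples {
            audio.node.scheduleBuffer(sample.buffer, at: AVAudioTime(hostTime: sample.hostTime))
        }
    }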
Any advice or pointers would be greatly appreciated!
Hello.
My team and I think we have an issue where our app is asked to gracefully shut down, followed by a SIGTERM. As we've learned, this is normally not an issue. However, it also seems to be happening while our app (an audio streamer) is actively playing in the background.
From our perspective, starting playback indicates strong user intent. We understand that there can be extreme circumstances where background audio needs to be killed, but should it be considered part of normal operation? We hope that's not the case.
All we see in the logs is the graceful shutdown request. We can say with high certainty that it's happening, though, as we know that playback was running within 0.5 seconds of the crash, without any other tracked user interaction.
Can you verify whether this is intended behavior, and whether there's something we can do about it from our end? From our logs it doesn't look to be related to memory usage, either within the app or the system as a whole.
Best,
John