Hi, hoping for guidance on a long-running bug in our app.
The problem
We have a transcription app on iPhone 17 Pro Max running iOS 26.5. Recording flow uses AVAudioEngine.installTap(onBus:) to capture PCM into a JS bridge for streaming to a remote transcription service. A parallel AVAudioRecorder writes the same audio to disk as backup.
When the user starts a recording and locks the phone, iOS terminates our process with SIGKILL at approximately 50 seconds of continuous background time, despite:
- UIBackgroundModes includes audio (verified in the shipping IPA's Info.plist)
- AVAudioSession.setCategory(.playAndRecord, mode: .default) is active
- AVAudioEngine is running, with installTap producing PCM buffers right up to the moment of death
- UIApplication.backgroundTimeRemaining returns Double.greatestFiniteMagnitude at applicationDidEnterBackground (verified in our event log)
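For concreteness, the recording path is roughly this shape (a simplified sketch with error handling trimmed; names like RecordingPath and the bridge hand-off are ours):

```swift
import AVFoundation

// Simplified sketch of our recording path.
final class RecordingPath {
    private let engine = AVAudioEngine()

    func start() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default)
        try session.setActive(true)

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Each PCM buffer is handed to the JS bridge for streaming transcription.
        input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
            // bridge.send(buffer)
        }
        try engine.start()
    }
}
```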
No AVAudioSession.interruptionNotification is delivered before the kill. iOS terminates the process cleanly with no warning event to our observer.
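For completeness, our interruption observer looks roughly like this (sketch; the event-log call is ours), and its handler is never reached before the kill:

```swift
import AVFoundation

// We observe session interruptions; this never fires before the SIGKILL.
let token = NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: .main
) { note in
    let typeValue = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt
    let type = typeValue.flatMap(AVAudioSession.InterruptionType.init(rawValue:))
    // eventLog.append("interruption: \(String(describing: type))")  // never logged pre-kill
}
```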
Evidence
Our Swift observer module writes an event log to disk on every system event. On relaunch we ship it to our crash reporter. Excerpt from a recent kill on iOS 26.5 / build 2.1.32:
T=0.000s session-start (engineRunning: true)
T=57.199s app-will-resign-active (bufferCallbackCount: 22)
T=58.913s app-did-enter-background (backgroundTimeRemaining: infinity, bufferCallbackCount: 39)
[no further audio events captured]
[Swift heartbeat written every 5s for next ~46 seconds]
T~105s Process SIGKILLed (heartbeat last-alive: 09:31:01.597Z)
Background time before kill: ~46 seconds. engineRunning was true and bufferCallbackCount was still incrementing at the moment the event log stopped capturing - the audio engine was alive and feeding buffers when iOS terminated us.
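The heartbeat itself is nothing exotic - roughly this shape (simplified; the class and file layout are ours):

```swift
import Foundation

// Simplified sketch of the 5-second heartbeat writer.
final class Heartbeat {
    private let timer: DispatchSourceTimer

    init(fileURL: URL) {
        timer = DispatchSource.makeTimerSource(queue: .global(qos: .utility))
        timer.schedule(deadline: .now(), repeating: 5.0)
        timer.setEventHandler {
            let line = "heartbeat \(Date().timeIntervalSince1970)\n"
            if let data = line.data(using: .utf8),
               let handle = try? FileHandle(forWritingTo: fileURL) {
                handle.seekToEndOfFile()
                handle.write(data)
                try? handle.close()
            }
        }
        timer.resume()
    }

    deinit { timer.cancel() }
}
```

The last-alive timestamp in the crash report comes from the final line this writer manages to flush before termination.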
What we've tried (35 documented attempts)
Hopefully not all relevant but listing for completeness:
- Various AVAudioSession category/mode/options combinations (.default, .measurement, .voiceChat modes; .mixWithOthers, .defaultToSpeaker, .allowBluetoothHFP options)
- Parallel AVAudioRecorder writing a .caf file as a "real recording app" signal
- SFSpeechRecognizer with requiresOnDeviceRecognition = true consuming PCM in-process (50s request rotation)
- BGContinuedProcessingTask with Progress.completedUnitCount reporting monotonic progress every 5 seconds
- Live Activity (ActivityKit) with NSSupportsLiveActivitiesFrequentUpdates = true
- Live Activity update pushes via APNs (confirmed these wake the widget extension only, not the host app)
- Silent device-token APNs background pushes (confirmed iOS's ~5/day rate limit)
- CallKit fake call (CXProvider + CXCallController) - works, but creates the green-pill UI, which our product can't ship
- WebRTC peer connection with an active media stream (via react-native-webrtc loopback)
- UIBackgroundModes: voip declaration (without CallKit)
- beginBackgroundTask + engine bounce (Apple's own guidance says don't; our test confirmed it's actively harmful)
- CLLocationManager background updates
All of them die at ~50s of background time. None survive.
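One representative configuration from the category/options sweep, for reference (sketch; the option names are the ones listed above):

```swift
import AVFoundation

// One of the many category/mode/options combinations we tried.
let session = AVAudioSession.sharedInstance()
try session.setCategory(
    .playAndRecord,
    mode: .default,
    options: [.mixWithOthers, .defaultToSpeaker, .allowBluetoothHFP]
)
try session.setActive(true)
```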
What works on the same device
Three App Store transcription apps survive indefinite background recording on our exact device + iOS version. We have inspected their IPAs (Mach-O LC_LOAD_DYLIB analysis + embedded entitlement extraction):
Otter (com.aisense.otter) - UIBackgroundModes: audio + fetch + processing + remote-notification. Uses OneSignal-driven Live Activity push tokens + NotificationServiceExtension. No CallKit, PushKit, or WebRTC.
Granola (com.granola.ios-prod) - has UIBackgroundModes: voip, but the voip entry is for their separate outbound-phone-call feature (TwilioVoice + CallKit, living in their PhoneCalls.framework). The recording path uses ONLY AVAudioRecorder + PlayAndRecord + ModeDefault + a Live Activity with frequentPushesEnabled. Zero PushKit anywhere in the bundle.
Transcribe Speech to Text by DENIVIP (ru.denivip.transcribe) - the smallest API surface: UIBackgroundModes: audio + remote-notification only. AVAudioEngine + .playAndRecord + .default + SFSpeechRecognizer consuming PCM. No CallKit, PushKit, BGTask, Live Activity, WebRTC, or VoIP.
Three apps, three different mechanisms, all working. We've implemented bits of all three approaches in our app and still die at 50s.
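For reference, the minimal declared surface of the DENIVIP app, as reconstructed from our entitlement/Info.plist extraction (formatting ours), is just:

```xml
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
    <string>remote-notification</string>
</array>
```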
Apple Voice Memos (system app, private entitlements) also survives indefinite recording on the same device.
Questions
- What is the supported API path for indefinite background microphone-only recording on iOS 26.5? Voice Memos and competitor apps clearly accomplish this - what's the missing piece?
- Why does UIApplication.backgroundTimeRemaining return Double.greatestFiniteMagnitude at applicationDidEnterBackground when the process is terminated ~50 seconds later? Is the meaning of this property changing in iOS 26?
- What causes the iOS 26 process scheduler to revoke the audio-mode background runtime classification? No AVAudioSession.interruptionNotification is delivered before SIGKILL. Where can we observe the classification change?
- Does iOS 26 distinguish "audio recording with no audible output" from "audio recording with audible output (e.g. a media playback session)"? If so, what is the supported API to register as a recording-only background-audio app?
- Does BGContinuedProcessingTask (new in iOS 26) actually extend background CPU time for an app that is also using UIBackgroundModes: audio and an active AVAudioSession, or is it for finish-what-you-started bursts only (per WWDC 2025 session 227)?
Any guidance - even pointers to specific WWDC sessions, sample code, or technotes - would be hugely appreciated. We've spent 40+ hours on this and want to know what the supported path looks like in iOS 26.
Happy to share more event-log data, IPA inspection notes, or build a focused Xcode reproduction if helpful.
Thanks!