I’m seeing consistent failures using SoundAnalysis live classification when my app moves to the background.
Setup
- iOS 17.x
- AVAudioEngine mic capture
- SNAudioStreamAnalyzer
- SNClassifySoundRequest(classifierIdentifier: .version1)
- UIBackgroundModes = audio
- AVAudioSession category .record / .playAndRecord, session active
- Audio capture and level metering keep working in the background (mic indicator stays on)
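For context, the pipeline is wired roughly like the sketch below. This is a minimal reconstruction, not my exact code: the class name, queue label, and buffer size are illustrative, and the session category mirrors the .playAndRecord variant listed above.

```swift
import AVFoundation
import SoundAnalysis

final class LiveSoundClassifier {
    private let engine = AVAudioEngine()
    private var analyzer: SNAudioStreamAnalyzer?
    private var observer: SNResultsObserving?
    // Dedicated queue keeps analysis work off the audio render thread
    private let analysisQueue = DispatchQueue(label: "sound-analysis")

    func start(with observer: SNResultsObserving) throws {
        // Session setup; Info.plist also declares UIBackgroundModes = audio
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .default, options: [])
        try session.setActive(true)

        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        // Stream analyzer driving the built-in classifier
        let streamAnalyzer = SNAudioStreamAnalyzer(format: format)
        let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
        try streamAnalyzer.add(request, withObserver: observer)
        analyzer = streamAnalyzer
        self.observer = observer

        // Forward mic buffers to SoundAnalysis
        input.installTap(onBus: 0, bufferSize: 8192, format: format) { [weak self] buffer, when in
            self?.analysisQueue.async {
                self?.analyzer?.analyze(buffer, atAudioFramePosition: when.sampleTime)
            }
        }

        try engine.start()
    }
}
```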
Issue
As soon as the app enters the background or the screen locks:
- SoundAnalysis starts failing every second with Error Domain=com.apple.SoundAnalysis Code=2 (SNErrorCode.operationFailed)
- Audio capture itself continues normally
- When the app returns to foreground, classification immediately resumes without restarting the engine/analyzer
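For completeness, the results observer is trivial. In the foreground request(_:didProduce:) fires continuously; once backgrounded, only the failure callback is hit. A minimal sketch (class name and log strings are mine):

```swift
import SoundAnalysis

final class ClassificationObserver: NSObject, SNResultsObserving {
    func request(_ request: SNRequest, didProduce result: SNResult) {
        // Fires as expected while the app is in the foreground
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("Heard \(top.identifier), confidence \(top.confidence)")
    }

    func request(_ request: SNRequest, didFailWithError error: Error) {
        // In the background this fires about once per second with:
        // Error Domain=com.apple.SoundAnalysis Code=2 (operationFailed)
        print("Analysis failed: \(error)")
    }

    func requestDidComplete(_ request: SNRequest) {
        print("Analysis request completed")
    }
}
```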
Question
Is live background sound classification with the built-in SoundAnalysis classifier officially unsupported, or known to fail in the background?
If so, is a custom Core ML model the only supported approach for background detection?
Or is there a required configuration I’m missing to keep SNClassifySoundRequest(.version1) running in the background?
Thanks for any clarification.