I'm trying to use the new Speech framework for streaming transcription on macOS 26.3, and I can reproduce a failure with SpeechAnalyzer.start(inputSequence:).
What is working:
- SpeechAnalyzer + SpeechTranscriber
- offline path using start(inputAudioFile:finishAfterFile:)
- same Spanish WAV file transcribes successfully and returns a coherent final result
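For reference, a minimal sketch of the working offline path. The locale, filename, module options, and result-collection loop are my repro details, not anything prescribed by the docs, so treat this as illustrative rather than canonical:

```swift
import Speech
import AVFoundation

// Offline path that works: transcribe a whole WAV in one call.
// Locale, options, and file path are placeholders from my repro.
let transcriber = SpeechTranscriber(locale: Locale(identifier: "es-ES"),
                                    transcriptionOptions: [],
                                    reportingOptions: [],
                                    attributeOptions: [])
let analyzer = SpeechAnalyzer(modules: [transcriber])

let file = try AVAudioFile(forReading: URL(fileURLWithPath: "sample.wav"))

// Drain the module's results concurrently so nothing is dropped
// while the file is being analyzed.
let resultsTask = Task {
    for try await result in transcriber.results {
        print(result.text)
    }
}

try await analyzer.start(inputAudioFile: file, finishAfterFile: true)
try await resultsTask.value
```

This path returns a coherent Spanish transcript for the same WAV that fails in stream mode below.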
What is not working:
- SpeechAnalyzer + SpeechTranscriber
- stream path using start(inputSequence:)
- same WAV, replayed as AnalyzerInput(buffer:bufferStartTime:)
- fails as soon as replay starts with:
  _GenericObjCError domain=Foundation._GenericObjCError code=0 detail=nilError
I also tried:
- DictationTranscriber instead of SpeechTranscriber
- no realtime pacing during replay
Both still fail in stream mode with the same error.
So this does not look like a ScreenCaptureKit issue or a Python integration issue; I have reduced it to a pure Swift CLI repro.
Environment:
- macOS 26.3 (25D122)
- Xcode 26.3
- Swift 6.2.4
- Apple Silicon Mac
Has anyone here gotten SpeechAnalyzer.start(inputSequence:) working reliably on macOS 26.x?
If so, I'd be interested in any workaround or any detail that differs from the obvious setup:
- prepareToAnalyze(in:)
- bestAvailableAudioFormat(...)
- AnalyzerInput(buffer:bufferStartTime:)
- replaying a known-good WAV in chunks
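For completeness, here is roughly what that setup looks like in my repro. The chunk size, file path, timestamp bookkeeping, and the fact that I read in the file's own processing format (converting to the analyzer's format via AVAudioConverter, elided here) are my choices, not anything the documentation mandates:

```swift
import Speech
import AVFoundation
import CoreMedia

// Stream path that fails: same module setup as the offline case.
let transcriber = SpeechTranscriber(locale: Locale(identifier: "es-ES"),
                                    transcriptionOptions: [],
                                    reportingOptions: [.volatileResults],
                                    attributeOptions: [])
let analyzer = SpeechAnalyzer(modules: [transcriber])

guard let format = await SpeechAnalyzer.bestAvailableAudioFormat(compatibleWith: [transcriber]) else {
    fatalError("no compatible audio format")
}
try await analyzer.prepareToAnalyze(in: format)

let (inputSequence, continuation) = AsyncStream<AnalyzerInput>.makeStream()

// With start(inputSequence:), the _GenericObjCError nilError shows up
// once buffers begin arriving.
try await analyzer.start(inputSequence: inputSequence)

// Replay the known-good WAV in ~0.5 s chunks.
// (In the real repro each buffer is converted to `format` with
// AVAudioConverter before being yielded; omitted for brevity.)
let file = try AVAudioFile(forReading: URL(fileURLWithPath: "sample.wav"))
let frames = AVAudioFrameCount(file.processingFormat.sampleRate / 2)
var startTime = CMTime.zero
while file.framePosition < file.length {
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: frames) else { break }
    try file.read(into: buffer, frameCount: frames)
    continuation.yield(AnalyzerInput(buffer: buffer, bufferStartTime: startTime))
    startTime = CMTimeAdd(startTime,
                          CMTime(value: CMTimeValue(buffer.frameLength),
                                 timescale: CMTimeScale(buffer.format.sampleRate)))
}
continuation.finish()
```

If your working setup differs from this in any detail (ordering of prepareToAnalyze and start, buffer format, timestamp handling), that difference is exactly what I'm hoping to hear about.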
I already filed Feedback Assistant: FB22149971