I started playing with transcription of audio files on macOS today, on the latest beta of Xcode and the latest beta of Tahoe. Transcription itself works really well, but for some reason the majority of the results contain no audioTimeRange. I got 22 single-word results with time ranges, spread out across a total file of 53 minutes.
Is there something I can do to improve this? To my understanding, I have followed sample code and instructions very closely, but the SwiftTranscriptionSampleApp and other examples I've seen lead me to believe I should be getting a lot more time ranges than I actually do.
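For reference, this is roughly how I'm configuring the transcriber, following the sample app. I'm quoting the option and type names from memory, so please treat the exact spellings as assumptions on my part:
import Speech

// Placeholder helper; the real setup lives inside my transcription pipeline.
@available(macOS 26.0, iOS 26.0, *)
func makeTranscriber(locale: Locale) -> SpeechTranscriber {
    SpeechTranscriber(
        locale: locale,
        transcriptionOptions: [],
        reportingOptions: [.volatileResults],
        attributeOptions: [.audioTimeRange]  // as I understand it, this is what attaches a time range to each result
    )
}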
Speech
Recognize spoken words in recorded or live audio using Speech.
Posts under Speech tag
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files.
Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing.
I've stripped my class down to the following:
import Speech

// (Class declaration trimmed in my stripped-down version; the name here is just a placeholder.)
final class Transcriber {
    private var isAuthed = false

    // I call this in a .task {} in my SwiftUI View
    public func requestSpeechRecognizerPermission() {
        SFSpeechRecognizer.requestAuthorization { authStatus in
            Task {
                self.isAuthed = authStatus == .authorized
            }
        }
    }

    public func transcribe(from url: URL) {
        guard isAuthed else { return }
        let locale = Locale(identifier: "en-US")
        let recognizer = SFSpeechRecognizer(locale: locale)
        let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
        // the behaviour occurs whether I set this to true or not; I recently set
        // it to true to see if it made a difference
        recognizer?.supportsOnDeviceRecognition = true
        recognitionRequest.shouldReportPartialResults = true
        recognitionRequest.addsPunctuation = true
        recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in
            guard let result = result else { return }
            if result.isFinal {
                // speechRecognitionMetadata is not nil
            } else {
                // speechRecognitionMetadata is nil
            }
        }
    }
}
Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio and they change to accurate values when isFinal is true.
The transcription otherwise "works", in that I get transcription text before isFinal and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values.
The context here is that I'm trying to generate a transcription whose spoken sections I can highlight as the audio plays, and I'm starting to think I'm simply trying to use the Speech framework in a way it doesn't work. I did get my concept working by pre-processing the audio (i.e. running it through until isFinal and saving the results I need to JSON), but doing even a rougher version of it 'on the fly', which requires segments to have the right timestamp/duration before isFinal, is perhaps impossible?
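For reference, here's roughly the pre-processing step I settled on: wait for isFinal, then persist the per-word timings to JSON. WordTiming is just a placeholder type I made up for this sketch:
import Foundation
import Speech

struct WordTiming: Codable {
    let text: String
    let timestamp: TimeInterval
    let duration: TimeInterval
}

func saveTimings(from result: SFSpeechRecognitionResult, to url: URL) throws {
    guard result.isFinal else { return }  // segment timings only become reliable here
    let timings = result.bestTranscription.segments.map { segment in
        WordTiming(text: segment.substring, timestamp: segment.timestamp, duration: segment.duration)
    }
    let data = try JSONEncoder().encode(timings)
    try data.write(to: url)
}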
While testing the “Bringing advanced speech-to-text capabilities to your app” sample app, which demonstrates the iOS 26 SpeechAnalyzer, I noticed that the language model for the English locale appeared to be already downloaded. Checking the documentation for AssetInventory, I found that the language model can indeed be preinstalled on the system.
Can someone from the dev team share more info about what assets are preinstalled by the system? For example, can we safely assume that the English language model will almost certainly be already preinstalled by the OS if the phone has the English locale?
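For context, this is roughly how I understand the sample app resolves assets before transcribing; I'm recalling the names from memory, so treat them as assumptions rather than confirmed API:
import Speech

@available(iOS 26.0, *)
func ensureAssetsInstalled(for transcriber: SpeechTranscriber) async throws {
    // My understanding is that no installation request is returned here when the
    // locale's model is already preinstalled or previously downloaded.
    if let request = try await AssetInventory.assetInstallationRequest(supporting: [transcriber]) {
        try await request.downloadAndInstall()
    }
}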
I tried to combine a speech detector and a speech transcriber into one analyzer, but the speech detector is not accepted as a speech module. Please help me.
I have Xcode 16, I've set the minimum deployment target to 17.5, and I'm using import Speech.
Nevertheless, Xcode can't find it.
At ChatGPT's urging I tried going back to Xcode 15.3, but that won't work with Sequoia.
Am I misunderstanding something?
Here's how I am trying to use it:
if templateItems.isEmpty {
    templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
    items = templateItems.compactMap { $0.itemName?.components(separatedBy: " ") }.flatMap { $0 }
    let phrases = extractContextualWords(from: templateItems)
    Task {
        do {
            // 1. Get your items and extract words
            templateItems = dbControl?.getAllItems(templateName: templateName) ?? []
            let phrases = extractContextualWords(from: templateItems)
            // 2. Build the custom model and export it
            let modelURL = try await buildCustomLanguageModel(from: phrases)
            // 3. Prepare the model (STATIC method)
            try await SFSpeechRecognizer.prepareCustomLanguageModel(at: modelURL)
            // ✅ Ready to use in recognition request
            print("✅ Model prepared at: \(modelURL)")
            // Save modelURL to use in Step 5 (speech recognition)
            // e.g., self.savedModelURL = modelURL
        } catch {
            print("❌ Error preparing model: \(error)")
        }
    }
}
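For reference, the flow I was trying to adapt comes from the iOS 17 custom language model material, which goes through SFSpeechLanguageModel rather than SFSpeechRecognizer. I'm writing the signatures from memory, so take this as a sketch only:
import Speech

// Sketch only; the client identifier and URLs are placeholders.
func prepareAndAttachCustomModel(assetURL: URL,
                                 languageModelURL: URL,
                                 to request: SFSpeechAudioBufferRecognitionRequest) async throws {
    let configuration = SFSpeechLanguageModel.Configuration(languageModel: languageModelURL)
    try await SFSpeechLanguageModel.prepareCustomLanguageModel(
        for: assetURL,
        clientIdentifier: "com.example.myapp",
        configuration: configuration
    )
    request.requiresOnDeviceRecognition = true   // custom language models run on device
    request.customizedLanguageModel = configuration
}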
I'm experimenting with the new SpeechTranscriber in macOS/iOS 26, transcribing speech from a prerecorded mp4 file. Speed and quality are amazing!
I've told the transcriber to include time indexes. Each run is always exactly one word, which can be very useful. When I look at the indexes the end of one run is always identical to the start of the next run, even if there's a pause.
I'd like to identify pauses, perhaps to generate something like phrases for subtitling. With each run of text flowing straight into the next, I can't do this other than by using punctuation, which might be rather rough.
Any suggestions on detecting pauses, or getting that kind of metadata from the transcriber? I've sketched the rough grouping I have in mind after the sample below.
Here's a short sample, showing each run with the start, end, and characters in the run:
105.9 --> 107.04 I
107.04 --> 107.16 think
107.16 --> 108.0 more
108.0 --> 108.42 lighting
108.42 --> 108.6 is
108.6 --> 108.72 definitely
108.72 --> 109.2 needed,
109.2 --> 109.92 downtown.
109.98 --> 110.4 My
110.4 --> 110.52 only
110.52 --> 110.7 question
110.7 --> 111.06 is,
111.06 --> 111.48 poll
111.48 --> 111.78 five,
111.78 --> 111.84 that
111.84 --> 112.08 you're
112.08 --> 112.38 increasing
112.38 --> 112.5 the
112.5 --> 113.34 50,000?
113.4 --> 113.58 Where
113.58 --> 113.88 exactly
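For what it's worth, here's the rough grouping I have in mind: treat trailing punctuation, or any nonzero gap between one run's end and the next run's start, as a phrase boundary. Run is just a placeholder type for this sketch, not something from the framework:
struct Run {
    let start: Double
    let end: Double
    let text: String
}

/// Groups word-level runs into phrases, breaking on trailing punctuation or on any
/// gap of at least `minimumGap` seconds between consecutive runs.
func phrases(from runs: [Run], minimumGap: Double = 0.05) -> [[Run]] {
    var result: [[Run]] = []
    var current: [Run] = []
    for run in runs {
        if let last = current.last,
           (run.start - last.end) >= minimumGap
            || last.text.hasSuffix(".") || last.text.hasSuffix("?") || last.text.hasSuffix("!") {
            result.append(current)
            current = []
        }
        current.append(run)
    }
    if !current.isEmpty {
        result.append(current)
    }
    return result
}
On the sample above, that would split after "downtown." and after "50,000?", which is roughly the phrase structure I'd want for subtitles.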
When I create an SFSpeechRecognizer object, I find that SFLocalSpeechRecognitionClient remains in memory and never gets released.
You can create a demo with a single UIButton whose touch action is
SFSpeechRecognizer(locale: Locale(identifier: "zh_CN"))
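For example, a minimal repro along these lines is what I'm using (a bare UIKit view controller; the class name is just a placeholder):
import UIKit
import Speech

final class RecognizerLeakDemoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let button = UIButton(type: .system)
        button.setTitle("Create recognizer", for: .normal)
        button.frame = CGRect(x: 40, y: 120, width: 240, height: 44)
        button.addTarget(self, action: #selector(createRecognizer), for: .touchUpInside)
        view.addSubview(button)
    }

    @objc private func createRecognizer() {
        // The recognizer itself is discarded immediately, yet SFLocalSpeechRecognitionClient
        // still shows up in the memory graph afterwards.
        _ = SFSpeechRecognizer(locale: Locale(identifier: "zh_CN"))
    }
}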
Subject: Assistance Needed with Enabling Speech Recognition Entitlement for iOS App
Hi everyone,
I’m seeking guidance regarding the Speech Recognition entitlement for my iOS app, which uses Capacitor. We submitted a request for our app to Apple Developer Support four days ago, but have not yet received a response.
🧩 Summary of the issue:
Our app uses the Capacitor speech recognition plugin (@capacitor-community/speech-recognition) to listen for native voice input on iOS.
We have added both of the required keys in Info.plist:
NSSpeechRecognitionUsageDescription
NSMicrophoneUsageDescription
We previously had a duplicate microphone key, which caused the system to silently skip the permission request. After removing the duplicate, we did briefly see the microphone permission prompt appear.
However, in our most recent builds, the app launches without any prompts, even on a fresh install. The plugin reports:
available = true
permissionStatus = granted
Despite this, no speech input is ever received, and the listener returns nothing.
We believe the app is functioning correctly at a code level (plugin loads, no errors, correct Info.plist), but suspect the missing Speech Recognition entitlement is blocking actual access to the speech system.
🔎 What we need help with:
How can we confirm whether the Speech Recognition entitlement is enabled for our App ID?
If it’s not enabled, is there a way to escalate or re-submit the request? Our app is currently stuck until this entitlement is granted.
Thank you for your time and any guidance you can offer!
I am building an iOS app with the App ID: com.echo.eyes.app
I have a paid Apple Developer membership and have followed all correct procedures, including:
Adding com.apple.developer.speech-recognition manually to the App.entitlements file
Setting Info.plist keys for microphone and speech permissions
Assigning my Apple Developer Team to the project
Setting App/App.entitlements under Code Signing Entitlements
Despite all this, Xcode automatic signing fails, and I receive the error:
Provisioning profile 'iOS Team Provisioning Profile: com.echo.eyes.app' doesn't include the com.apple.developer.speech-recognition entitlement.
I am unable to add the entitlement via the Capabilities section, and no method I try will allow provisioning to succeed.
Please update this App ID to include the required entitlement in the provisioning profile. This issue is preventing all voice recognition functionality.
Thank you.
How do I add Speech Recognition under the + Capability list in Xcode? There is no "Speech Recognition" option in the list.
Hello,
I recently enrolled in the Apple Developer Program and created an App ID with the bundle ID com.echo.eyes.voice.
I am trying to enable Speech Recognition in the App ID capabilities list, but the option does not appear — even after waiting over a week since my membership was activated.
I’ve already:
Confirmed my Apple Developer account is active
Checked the Identifiers section in the Developer portal
Tried editing the App ID, but Speech Recognition is not listed
Contacted both Developer Support and Developer Technical Support (Case #102594089120), but was told to post here for help
My app uses Capacitor + the @capacitor-community/speech-recognition plugin. I need the com.apple.developer.speech-recognition entitlement to appear so I can use native voice input in iOS.
I would really appreciate help from an Apple engineer or anyone who has faced this issue.
Thank you,
— Daniel Colyer
Hello, I’ve followed all the steps you recommended and confirmed that the entitlement is correctly added in Xcode, but the provisioning profile still fails. I believe the issue is that my App ID com.echo.eyes.app is missing the com.apple.developer.speech-recognition entitlement on Apple’s end.
Could you please manually add this entitlement to my App ID, or guide me on how to get it attached? I’ve already added it locally and confirmed the error in Xcode is due to it not being in the provisioning profile.
When a new application runs on the iOS 18.4 simulator and tries to access the Speech framework, prompting a request for authorisation to use Speech Recognition, the application will crash if the user taps Allow. The same issue occurs in the visionOS 2.4 simulator.
Using Swift 6. Report Identifier: FB17686186
/// Checks speech recognition availability and requests necessary permissions.
@MainActor
func checkAvailabilityAndPermissions() async {
    logger.debug("Checking speech recognition availability and permissions...")

    // 1. Verify that the speechRecognizer instance exists
    guard let recognizer = speechRecognizer else {
        logger.error("Speech recognizer is nil - speech recognition won't be available.")
        reportError(.configurationError(description: "Speech recognizer could not be created."), context: "checkAvailabilityAndPermissions")
        self.isAvailable = false
        return
    }

    // 2. Check recognizer availability (might change at runtime)
    if !recognizer.isAvailable {
        logger.error("Speech recognizer is not available for the current locale.")
        reportError(.configurationError(description: "Speech recognizer not available."), context: "checkAvailabilityAndPermissions")
        self.isAvailable = false
        return
    }
    logger.trace("Speech recognizer exists and is available.")

    // 3. Request Speech Recognition Authorization
    // IMPORTANT: Add `NSSpeechRecognitionUsageDescription` to Info.plist
    let speechAuthStatus = SFSpeechRecognizer.authorizationStatus()
    logger.debug("Current Speech Recognition authorization status: \(speechAuthStatus.rawValue)")

    if speechAuthStatus == .notDetermined {
        logger.info("Requesting speech recognition authorization...")
        // Use structured concurrency to wait for permission result
        let authStatus = await withCheckedContinuation { continuation in
            SFSpeechRecognizer.requestAuthorization { status in
                continuation.resume(returning: status)
            }
        }
        logger.debug("Received authorization status: \(authStatus.rawValue)")

        // Now handle the authorization result
        let speechAuthorized = (authStatus == .authorized)
        handleAuthorizationStatus(status: authStatus, type: "Speech Recognition")

        // If speech is granted, now check microphone
        if speechAuthorized {
            await checkMicrophonePermission()
        }
    } else {
        // Already determined, just handle it
        let speechAuthorized = (speechAuthStatus == .authorized)
        handleAuthorizationStatus(status: speechAuthStatus, type: "Speech Recognition")

        // If speech is already authorized, check microphone
        if speechAuthorized {
            await checkMicrophonePermission()
        }
    }
}
Here is the demo from Apple's site
This issue is specific to iOS 18.
When running this demo, if there is a gap in speaking, recognitionTask(with:resultHandler:) starts returning only the text spoken after the gap rather than the concatenation of the old text and the newly spoken text.
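A workaround sketch I've been experimenting with (not from the Apple sample; the heuristic is naive, since partial results can legitimately rewrite earlier words, so a length-based check may work better in practice): keep the text built up before the apparent restart and prepend it to each new partial, so the UI still shows the full concatenation.
import Speech

final class TranscriptAccumulator {
    private var committedText = ""      // text kept from before an apparent restart
    private var lastPartial = ""        // most recent partial from the recognizer
    private(set) var displayText = ""   // what the UI should show

    /// Call from the recognitionTask result handler with each SFSpeechRecognitionResult.
    func handle(_ result: SFSpeechRecognitionResult) {
        let partial = result.bestTranscription.formattedString
        // If the new partial no longer extends the previous one, assume the recognizer
        // restarted after a gap and commit what we had so far (naive prefix check).
        if !lastPartial.isEmpty, !partial.hasPrefix(lastPartial) {
            committedText = displayText
        }
        lastPartial = partial
        displayText = committedText.isEmpty ? partial : committedText + " " + partial
    }
}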
I know there have been issues with SFSpeechRecognizer in iOS 17+ inside the simulator. I'm running into issues with speech not being recognised inside the visionOS 2.4 simulator as well (likely because it borrows from iOS frameworks). Just wondering if anyone has any workarounds or advice for this simulator issue. I can't test on device because I don't have an Apple Vision Pro.
Using Swift 6 on Xcode 16.3. Below are the console logs & the code that I am using.
Console Logs
BACKGROUND SPATIAL TAP (hit BackgroundTapPlane)
SpeechToTextManager.startRecording() called
[0x15388a900|InputElement #0|Initialize] Number of channels = 0 in AudioChannelLayout does not match number of channels = 2 in stream format.
iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
iOSSimulatorAudioDevice-22270-1: Abandoning I/O cycle because reconfig pending
SpeechToTextManager.startRecording() completed successfully and recording is active.
GameManager.onTapToggle received. speechToTextManager.isAvailable: true, speechToTextManager.isRecording: true
GameManager received tap toggle callback. Tapped Object: None
BACKGROUND SPATIAL TAP (hit BackgroundTapPlane)
GESTURE MANAGER - User is already recording, stopping recording
SpeechToTextManager.stopRecording() called
GameManager.onTapToggle received. speechToTextManager.isAvailable: true, speechToTextManager.isRecording: false
Audio data size: 134400 bytes
Recognition task error: No speech detected <---
Code
private(set) var isRecording: Bool = false
private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
private var recognitionTask: SFSpeechRecognitionTask?

@MainActor
func startRecording() async throws {
    logger.debug("SpeechToTextManager.startRecording() called")
    guard !isRecording else {
        logger.warning("Cannot start recording: Already recording.")
        throw AppError.alreadyRecording
    }

    currentTranscript = ""
    processingError = nil
    audioBuffer = Data()
    isRecording = true

    do {
        try await configureAudioSession()
        try await Task.detached { [weak self] in
            guard let self = self else {
                throw AppError.internalError(description: "SpeechToTextManager instance deallocated during recording setup.")
            }
            try await self.audioProcessor.configureAudioEngine()

            let (recognizer, request) = try await MainActor.run { () -> (SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest) in
                guard let result = self.createRecognitionRequest() else {
                    throw AppError.configurationError(description: "Speech recognition not available or SFSpeechRecognizer initialization failed.")
                }
                return result
            }

            await MainActor.run {
                self.recognitionRequest = request
            }

            await MainActor.run {
                self.recognitionTask = recognizer.recognitionTask(with: request) { [weak self] result, error in
                    guard let self = self else { return }
                    if let error = error {
                        // WE ENTER INTO THIS BLOCK, ALWAYS
                        self.logger.error("Recognition task error: \(error.localizedDescription)")
                        self.processingError = .speechRecognitionError(description: error.localizedDescription)
                        return
                    }
                    . . .
                }
            }
            . . .
        }.value
    } catch {
        . . .
    }
}

@MainActor
func stopRecording() {
    logger.debug("SpeechToTextManager.stopRecording() called")
    guard isRecording else {
        logger.debug("Not recording, nothing to do")
        return
    }
    isRecording = false

    Task.detached { [weak self] in
        guard let self = self else { return }
        await self.audioProcessor.stopEngine()
        let finalBuffer = await self.audioProcessor.getAudioBuffer()
        await MainActor.run {
            self.recognitionRequest?.endAudio()
            self.recognitionTask?.cancel()
        }
        . . .
    }
}
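The channel-count warning in the logs makes me suspect a format mismatch between my tap and the simulator's input device. For reference, this is roughly the tap I'd try inside configureAudioEngine(), using the input node's own output format instead of a hard-coded one (my real AudioProcessor details are omitted; treat this as a sketch):
import AVFoundation
import Speech

func installTap(on engine: AVAudioEngine, feeding request: SFSpeechAudioBufferRecognitionRequest) throws {
    let inputNode = engine.inputNode
    // Ask the node for its current format rather than assuming a channel count or
    // sample rate; the simulator's input device can report a different layout
    // than a physical device.
    let format = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }
    engine.prepare()
    try engine.start()
}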
When a new application runs on the visionOS 2.4 simulator and tries to access the Speech framework, prompting a request for authorisation to use Speech Recognition, the application freezes.
Using Swift 6.
Report Identifier: FB17666252
@MainActor
func checkAvailabilityAndPermissions() async {
    logger.debug("Checking speech recognition availability and permissions...")

    // 1. Verify that the speechRecognizer instance exists
    guard let recognizer = speechRecognizer else {
        logger.error("Speech recognizer is nil - speech recognition won't be available.")
        reportError(.configurationError(description: "Speech recognizer could not be created."), context: "checkAvailabilityAndPermissions")
        self.isAvailable = false
        return
    }

    // 2. Check recognizer availability (might change at runtime)
    if !recognizer.isAvailable {
        logger.error("Speech recognizer is not available for the current locale.")
        reportError(.configurationError(description: "Speech recognizer not available."), context: "checkAvailabilityAndPermissions")
        self.isAvailable = false
        return
    }
    logger.trace("Speech recognizer exists and is available.")

    // 3. Request Speech Recognition Authorization
    // IMPORTANT: Add `NSSpeechRecognitionUsageDescription` to Info.plist
    let speechAuthStatus = SFSpeechRecognizer.authorizationStatus() // FAILS HERE
    logger.debug("Current Speech Recognition authorization status: \(speechAuthStatus.rawValue)")

    if speechAuthStatus == .notDetermined {
        logger.info("Requesting speech recognition authorization...")
        // Use structured concurrency to wait for permission result
        let authStatus = await withCheckedContinuation { continuation in
            SFSpeechRecognizer.requestAuthorization { status in
                continuation.resume(returning: status)
            }
        }
        logger.debug("Received authorization status: \(authStatus.rawValue)")

        // Now handle the authorization result
        let speechAuthorized = (authStatus == .authorized)
        handleAuthorizationStatus(status: authStatus, type: "Speech Recognition")

        // If speech is granted, now check microphone
        if speechAuthorized {
            await checkMicrophonePermission()
        }
    } else {
        let speechAuthorized = (speechAuthStatus == .authorized)
        handleAuthorizationStatus(status: speechAuthStatus, type: "Speech Recognition")

        // If speech is already authorized, check microphone
        if speechAuthorized {
            await checkMicrophonePermission()
        }
    }
}
Hey there! I'm facing an issue on iOS 18 and newer where Spotlight search doesn't show my app in the results. On older versions it works. Here is my code:
func configureUserActivitity(with id: String, keywords: [String]) {
    let activity = NSUserActivity(activityType: id)
    activity.contentAttributeSet = self.defaultAttributeSet
    activity.isEligibleForSearch = true
    activity.keywords = Set(keywords)
    activity.becomeCurrent()
    self.userActivity = activity
}
I can't find any reason why it no longer works. Maybe I should report a bug?
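In the meantime I'm considering indexing through CoreSpotlight directly instead of relying on the activity's contentAttributeSet, roughly like this (the domain identifier is a placeholder):
import CoreSpotlight
import UniformTypeIdentifiers

func indexItem(id: String, title: String, keywords: [String]) {
    let attributes = CSSearchableItemAttributeSet(contentType: .text)
    attributes.title = title
    attributes.keywords = keywords

    let item = CSSearchableItem(
        uniqueIdentifier: id,
        domainIdentifier: "com.example.items",
        attributeSet: attributes
    )
    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error {
            print("Spotlight indexing failed: \(error)")
        }
    }
}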
I'm able to get text-to-speech output written to an audio file using the following code on iOS 12 (iPhone 8):
audioFile = try AVAudioFile(
    forWriting: saveToURL,
    settings: pcmBuffer.format.settings,
    commonFormat: .pcmFormatInt16,
    interleaved: false)
where pcmBuffer.format.settings is:
[AVAudioFileTypeKey: kAudioFileMP3Type,
 AVSampleRateKey: 48000,
 AVEncoderBitRateKey: 128000,
 AVNumberOfChannelsKey: 2,
 AVFormatIDKey: kAudioFormatLinearPCM]
However, this code does not work when I run the app on iOS 18 on an iPhone 13 Pro Max. The audio file is created, but it doesn't sound right: it has a lot of static and the speech sounds very low-pitched.
Can anyone give me a hint or an answer?
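One thing I plan to try, in case the settings mismatch is the culprit: create the AVAudioFile from each buffer's own format instead of forcing a 48 kHz stereo Int16 layout. A sketch of that idea (error handling trimmed; the class name is a placeholder):
import AVFoundation

final class SpeechFileWriter {
    private let synthesizer = AVSpeechSynthesizer()
    private var audioFile: AVAudioFile?

    func speak(_ text: String, writingTo saveToURL: URL) {
        let utterance = AVSpeechUtterance(string: text)
        synthesizer.write(utterance) { [weak self] buffer in
            guard let self,
                  let pcmBuffer = buffer as? AVAudioPCMBuffer,
                  pcmBuffer.frameLength > 0 else { return }
            do {
                // Create the file lazily from the synthesizer's delivered format so the
                // sample rate and sample type match what is actually written.
                if self.audioFile == nil {
                    self.audioFile = try AVAudioFile(
                        forWriting: saveToURL,
                        settings: pcmBuffer.format.settings,
                        commonFormat: pcmBuffer.format.commonFormat,
                        interleaved: pcmBuffer.format.isInterleaved)
                }
                try self.audioFile?.write(from: pcmBuffer)
            } catch {
                print("Failed to write synthesized audio: \(error)")
            }
        }
    }
}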
Hope it's okay to post here - I haven't gotten resolution anywhere else. Apple's iOS Live Captions is supposed to turn speech into written text, either from audio on the phone (works like a charm!) or via the microphone (think of a meeting in a conference room). The microphone option doesn't work anywhere, anytime on a new iPhone 14 purchased in November 2024. Anyone out there want to fix this and help a lot of people who have trouble hearing? I'm part of an entire generation that didn't know we were supposed to protect our hearing at concerts and clubs and, worse, thought it was cool to snag a spot by the speakers...
I have created an app where you can speak using SFSpeechRecognizer; it recognizes your speech as text, translates it, and then speaks it back using speech synthesis. All locales for SFSpeechRecognizer, and switching between them, work fine while the app is in the foreground. But after I turn off the screen (the app is still running; I just turned the screen off) and try to create a new recognitionTask, the recognition task receives this error: User denied access to speech recognition. The weird thing is that this only happens with some languages: the error occurs with the Croatian or Hungarian locale for speech recognition but not with the English or Spanish locale.
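What I plan to test next, on the guess that the failing locales fall back to server-based recognition (which might be restricted once the screen is off), is forcing on-device recognition where the locale supports it. A rough sketch of that check, with my actual request setup simplified:
import Speech

func makeRequest(for locale: Locale) -> SFSpeechAudioBufferRecognitionRequest? {
    guard let recognizer = SFSpeechRecognizer(locale: locale) else { return nil }
    let request = SFSpeechAudioBufferRecognitionRequest()
    // Croatian or Hungarian may not support on-device recognition, which could explain
    // why only those locales fail with the screen off; this check just makes that
    // dependency explicit rather than fixing it.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }
    request.shouldReportPartialResults = true
    return request
}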