Apple provides an API (AVSpeechUtterance/AVSpeechSynthesizer) for rendering TTS speech to an audio file. Alternatively, a user might record a video of TTS playback and use that recording.
I wonder what the permitted scope of use is if I use this TTS audio to make YouTube, TikTok, or other commercial videos.
Is commercial use prohibited entirely?
Can I use it commercially with attribution?
Can I use it commercially without any attribution?
Is there a difference in the commercial-use license between Siri voices and the regular TTS voices?
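For context, the file-rendering path in question can be sketched with AVSpeechSynthesizer.write(_:toBufferCallback:); the output location and error handling below are illustrative assumptions.

import AVFoundation

// Minimal sketch: render an utterance to a Core Audio file. Keep the
// synthesizer alive (e.g. as a property) until synthesis finishes.
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello from Apple TTS.")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")

let outputURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("tts-output.caf")
var audioFile: AVAudioFile?

synthesizer.write(utterance) { buffer in
    guard let pcm = buffer as? AVAudioPCMBuffer, pcm.frameLength > 0 else {
        return   // a zero-length buffer signals the end of synthesis
    }
    do {
        if audioFile == nil {
            audioFile = try AVAudioFile(forWriting: outputURL,
                                        settings: pcm.format.settings)
        }
        try audioFile?.write(from: pcm)
    } catch {
        print("Failed to write TTS buffer: \(error)")
    }
}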
Speech
Recognize spoken words in recorded or live audio using Speech.
Posts under Speech tag
Hi all!
I have been working on a web speech recognition service using the Web Speech API. This service is intended to work on smartphones, primarily Chrome on Android and Safari (or WebKit WebView) on iOS.
In my specific use case, I need to set the properties continuous = true and interimResults = true. However, I have noticed that interimResults = true does not always work as expected in WebKit.
I understand that this setting should provide fast, native, on-device speech recognition with isFinal = false. However, at times, the recognition becomes throttled and slow, yielding isFinal = true and switching to cloud-based recognition.
To confirm whether the recognition is cloud-based, I tested it by disabling the internet connection before starting speech recognition. In some cases, recognition fails entirely, which suggests that requiresOnDeviceRecognition = false is being applied. (Reference: SFSpeechRecognitionRequest.requiresOnDeviceRecognition)
I believe this is not the expected behavior when setting interimResults = true. I have researched the native services used by the Web Speech API on iOS devices, and the following links seem relevant:
• SFSpeechRecognizer
• SFSpeechRecognitionRequest.shouldReportPartialResults
• SFSpeechRecognizer.supportsOnDeviceRecognition
• Recognizing speech in live audio
• Apple Developer Forums Discussion
I found that setRequiresOnDeviceRecognition and setShouldReportPartialResults appear to be set correctly, but apparently they do not work as expected:
WebKit Source Code
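For reference, a minimal sketch of the native configuration those Web Speech properties are expected to map to (interimResults → shouldReportPartialResults, plus an optional on-device requirement so no cloud fallback occurs):

import Speech

// The native mapping in question: partial (isFinal == false) hypotheses come
// from shouldReportPartialResults; requiring on-device recognition makes the
// request fail instead of silently falling back to the server.
let request = SFSpeechAudioBufferRecognitionRequest()
request.shouldReportPartialResults = true

if let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
   recognizer.supportsOnDeviceRecognition {
    request.requiresOnDeviceRecognition = true
}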
Recently I updated to Xcode 14.0. I am building an iOS app to convert recorded audio into text. I got an exception while testing the application from the simulator (iOS 16.0).
[SpeechFramework] -[SFSpeechRecognitionTask handleSpeechRecognitionDidFailWithError:]_block_invoke Ignoring subsequent recongition error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"
Error Domain=kAFAssistantErrorDomain Code=1107 "(null)"
I need to know what these error codes mean and why this error occurred.
How to uninstall/delete Voice Control on macOS so that I can test my app for the case when the initial use of Voice Control causes it to be downloaded from Apple? Is there a folder in the macOS System or Library to delete to force a re-download of Voice Control?
My macOS app uses the older NSSpeechRecognizer to handle speech commands, but using NSSpeechRecognizer requires authorization via [SFSpeechRecognizer requestAuthorization...]. I do this, and on a macOS system it can trigger a download of Voice Control, the macOS feature. An alert appears with:
"A 390 MB download is required to use speech recognition features in MyApp. You may need to quit and open MyApp again after download completes."
Hello! I would like to use the Speech framework in my App Playground for this year's challenge, but I still can't tell whether using it respects the rule of "not relying on a network connection". Here's why:
The Speech framework can use on-device speech recognition – no internet connection needed ✅.
But it can ask to download one of Apple's native language packages to use for this on-device recognition – and to get that, you need to be connected to the internet ❌.
When I try to add the Speech Recognition capability to my App Playground, its description says: "Required to perform speech recognition using Apple's servers." (screenshot is attached). Does this mean I won't be able to use on-device recognition in my App Playground – and therefore only the online version of this framework is available, so I can't use it to participate in the challenge successfully❓
If possible, could you please make this clearer? This framework is crucial for my App Playground and I really need it to work.
Thanks for your help in advance! And have a good day!
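For what it's worth, a quick runtime check of on-device availability can be sketched like this (the locale choice is illustrative):

import Speech

// Sketch: check whether on-device recognition is supported for a locale; this,
// plus a downloaded language asset, is what offline use depends on.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
print(recognizer?.supportsOnDeviceRecognition == true
    ? "On-device recognition available"
    : "Server-based recognition only")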
Hi, Apple engineers.
Hoping you can reply to this one.
We're developing a text-to-speech app. Everything went well until iOS was upgraded to 18.
AVSpeechSynthesisVoice(language: "zh-CN") runs well under iOS 16 and iOS 17: it speaks Mandarin correctly.
In iOS 18, we noticed that Siri's language setting interferes with AVSpeechSynthesisVoice: it plays Cantonese instead of Mandarin.
The Siri language settings that affect AVSpeechSynthesisVoice this way:
Chinese (Cantonese - China mainland)
Chinese (Cantonese - Hong Kong)
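A possible workaround sketch (an assumption, not a confirmed fix): pin a Mandarin voice explicitly from the installed voices instead of resolving by language code alone, so the Siri language setting has no say in voice selection.

import AVFoundation

// Hedged workaround: enumerate installed voices and pick a zh-CN (Mandarin)
// voice explicitly. Whether this sidesteps the iOS 18 behavior is an assumption.
let mandarin = AVSpeechSynthesisVoice.speechVoices()
    .first { $0.language == "zh-CN" }

let utterance = AVSpeechUtterance(string: "你好，世界")
utterance.voice = mandarin ?? AVSpeechSynthesisVoice(language: "zh-CN")

let synthesizer = AVSpeechSynthesizer()   // keep a strong reference in real code
synthesizer.speak(utterance)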
I've been checking SFSpeechRecognitionMetadata to determine the end of a sentence when using voice recognition.
Yet it doesn't detect small pauses, only large ones, so I've transcribed basically an entire paragraph before moving on to the next one.
Besides implementing your own timer, are there other ways to detect more natural pauses marking the end of a sentence, similar to the browser's Web Speech recognition? Since that works in Safari, I assume there should be an equivalent feature available in macOS.
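For what it's worth, the timer approach mentioned above can be sketched like this (illustrative names, not Apple API): restart a short debounce timer on every partial result and treat silence past the threshold as a sentence boundary.

import Speech

// Hedged sketch: debounce partial results; if no new hypothesis arrives within
// pauseThreshold seconds, treat that as the end of a sentence. Call handle(_:)
// from the main thread so the Timer is scheduled on a live run loop.
final class PauseDetector {
    private var timer: Timer?
    private let pauseThreshold: TimeInterval = 0.8   // tune to taste

    func handle(_ result: SFSpeechRecognitionResult, onPause: @escaping (String) -> Void) {
        timer?.invalidate()
        let text = result.bestTranscription.formattedString
        timer = Timer.scheduledTimer(withTimeInterval: pauseThreshold, repeats: false) { _ in
            onPause(text)   // no new words arrived for pauseThreshold seconds
        }
    }
}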
I'm making a Safari extension for learning languages. I need speech synthesis for any language the user chooses to learn.
I initially tried to make this work within JavaScript, but Safari 18 doesn't reliably list voices for all languages on the web SpeechSynthesis API as described here: https://stackoverflow.com/questions/79179072/how-do-you-use-a-japanese-voice-with-speechsynthesis-in-safari-ios-18
As a workaround, I've had to use AVSpeechSynthesizer in SafariWebExtensionHandler (NSExtensionRequestHandling implementation for the extension). This works in the simulator but not on a real device. I've found this note from Apple in a StackOverflow reply:
"Safari extensions are very short-lived, hence not fit for audio playback or speech synthesis. Not being able to validate an app extension in Xcode with a manually-added plist entry for background audio is the designed behavior. The general recommendation is to synthesize speech using JavaScript in conjunction with the Web Speech API."
Unfortunately, the suggestion to use the Web Speech API is unsuitable as I just explained.
Is there a way to set up a background process in the host app that can do speech synthesis? The app extension would need a way to communicate with this process, and start it if it's not running. Is that possible?
Hi,
I'd like to develop an app which keeps running speech recognition even after going into the background. I know I can accomplish this using the audio background mode and processing the audio there, but I'm not sure whether this workaround would be accepted into the App Store, given the processing limitations that apply in the background.
How can I accomplish this while still being compliant with Apple's privacy policy and other restrictions?
Thanks,
Marek
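For context, a sketch of the workaround being described (this assumes the "audio" background mode is enabled for the target; the App Review implications are exactly what's in question):

import AVFoundation

// Keeping an active record session is what lets audio processing continue in
// the background once the "audio" background mode is enabled.
do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.record, mode: .measurement)
    try session.setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    print("Audio session setup failed: \(error)")
}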
Description:
I have encountered an issue with SFSpeechRecognizer on iOS 18.0. During live dictation, if a natural pause (e.g., 1-2 seconds) is introduced, the previously transcribed text is cleared, and the transcription starts over. This behavior makes it difficult to use the API for real-time speech recognition scenarios where pauses are expected.
Steps to Reproduce:
Open Apple's demo app "SpokenWord".
Start the dictation process using SFSpeechRecognizer.
Speak a few words, pause for 1-2 seconds, and then continue speaking.
Observe that the previously transcribed text is truncated, and the transcription starts anew.
Expected Behavior: The transcription should continue appending new results to the previous ones after a natural pause, maintaining a seamless user experience.
Observed Behavior: After a pause, the transcription resets, clearing previously transcribed text.
Impact: This behavior makes the SFSpeechRecognizer API unreliable for scenarios requiring continuous speech recognition with intermittent pauses.
Additional Information:
iOS Version: 18.0
Device: [Specify your device, e.g., iPhone 13 Pro]
Speech Recognizer Locale: [Specify locale, e.g., en-US]
App Behavior: Issue persists in both Apple's demo app ('SpokenWord') and custom implementations.
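A hedged workaround sketch for the reset (the accumulation variable and restart hook are illustrative, not Apple API): keep finalized text yourself so a pause-triggered reset doesn't lose earlier speech.

import Speech

// Accumulate finalized text across resets; display `accumulated` plus the
// current task's partial results for a continuous transcript.
var accumulated = ""

func handleResult(_ result: SFSpeechRecognitionResult?) {
    guard let result = result else { return }
    if result.isFinal {
        accumulated += result.bestTranscription.formattedString + " "
        // Restart the recognition task here so dictation continues.
    }
}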
Hello all,
I'm working on a project that involves listening to a person speak from a script, and I want to stop and then restart the recognitionTask between sections so I don't keep the recognitionTask running longer than it needs to. I'd also like to be able to flush the current input between sections, so the input from the previous section doesn't roll over into the next one.
This is based on the sample code for SFSpeechRecognizer so there's a chance I might be misunderstanding something.
private func restartRecording() {
    // Tear down the current session...
    let inputNode = audioEngine.inputNode
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    recognitionRequest?.endAudio()
    recordingStarted = false
    recognitionTask?.cancel()
    // ...then immediately start a new one.
    do {
        try startRecording()
    } catch {
        print("Oopsie.")
    }
}
When I run this, the recognition task doesn't restart. Any ideas?
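A hedged guess at a fix, modeled on the teardown order in Apple's SpokenWord sample (an assumption, not a verified diagnosis): discard the old request and task before restarting, so startRecording() creates fresh ones instead of reusing a cancelled pair.

private func restartRecording() {
    // Finish the audio stream and cancel the in-flight task first...
    recognitionRequest?.endAudio()
    recognitionTask?.cancel()
    recognitionTask = nil        // ...and discard both, so startRecording()
    recognitionRequest = nil     // builds fresh ones.

    audioEngine.stop()
    audioEngine.inputNode.removeTap(onBus: 0)
    recordingStarted = false

    do {
        try startRecording()
    } catch {
        print("Failed to restart recording: \(error)")
    }
}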
Can anyone please guide me on how to use SFCustomLanguageModelData.CustomPronunciation?
I am following the below example from WWDC23
https://wwdcnotes.com/documentation/wwdcnotes/wwdc23-10101-customize-ondevice-speech-recognition/
To use this kind of custom pronunciation, we need the X-SAMPA string of the specific word.
There are tools available on the web to do this:
Word to IPA: https://openl.io/
IPA to X-SAMPA: https://tools.lgm.cl/xsampa.html
But these tools do not seem to produce the same kind of X-SAMPA strings used in the demo; for example, "Winawer" is converted to "w I n aU @r" there.
The online tools instead give "/wI"nA:w@r/".
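For reference, the WWDC23-style usage looks roughly like this (the identifier below is an illustrative placeholder; the phoneme string is the space-separated X-SAMPA form from the session, which is what the online converters don't emit):

import Speech

// Custom language model with a custom pronunciation, as shown in WWDC23
// session 10101. The identifier is an illustrative placeholder.
let data = SFCustomLanguageModelData(
    locale: Locale(identifier: "en_US"),
    identifier: "com.example.customLM",
    version: "1.0"
) {
    SFCustomLanguageModelData.CustomPronunciation(
        grapheme: "Winawer",
        phonemes: ["w I n aU @r"]    // space-separated X-SAMPA, per the demo
    )
}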
I see this error in the debugger:
#FactoryInstall Unable to query results, error: 5
IPCAUClient.cpp:129 IPCAUClient: bundle display name is nil
Error in destroying pipe Error Domain=NSCocoaErrorDomain Code=4099 "The connection from pid 5476 on anonymousListener or serviceListener was invalidated from this process." UserInfo={NSDebugDescription=The connection from pid 5476 on anonymousListener or serviceListener was invalidated from this process.}
on this function:
func speakItem() {
    let utterance = AVSpeechUtterance(string: item.toString())
    utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
    try? AVAudioSession.sharedInstance().setCategory(.playback)
    utterance.rate = 0.3
    let synthesizer = AVSpeechSynthesizer()
    synthesizer.speak(utterance)
}
When running without the debugger, it will (usually) speak once, and then it won't speak again unless I tap the button that calls this function many times.
I know AVSpeech has problems that Apple has long been aware of, but I'm wondering if anyone has a workaround. I was thinking there might be a way to call the destructor for AVSpeechUtterance and generate a new object each time speech is needed, but utterance.deinit() shows: "Deinitializers cannot be accessed".
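One workaround sketch (an assumption, not a confirmed diagnosis): a synthesizer created inside the function can be deallocated before it finishes speaking, which would match the "speaks once, then silence" symptom, so holding one long-lived instance may help.

import AVFoundation

// Hedged workaround: hold one long-lived synthesizer instead of creating a
// fresh one per call, so it isn't deallocated mid-utterance.
final class Speaker {
    private let synthesizer = AVSpeechSynthesizer()   // lives as long as Speaker

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        utterance.rate = 0.3
        synthesizer.speak(utterance)
    }
}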
I was testing SFSpeechRecognition on my real device running the iOS 18.2 beta, and found that when the result's isFinal field is true, the result itself does not contain the entire conversation's transcription. I came across some blog posts saying this was fixed in an 18.1 beta; is that not the case for the 18.2 beta?
Example code:
recognitionTask = recognizer.recognitionTask(with: request) { [weak self] result, error in
    guard let self = self else { return }
    if let error = error {
        DispatchQueue.main.async {
            self.errorMessage = "Transcription failed: \(error.localizedDescription)"
            self.isTranscribing = false
        }
    } else if let result = result, result.isFinal {
        // HERE! bestTranscription should cover the whole session,
        // but only contains the trailing portion.
    }
}