Recognize spoken words in recorded or live audio using Speech.

Posts under Speech tag

53 Posts

AVSpeechUtterance problem
I see this error in the debugger:

    #FactoryInstall Unable to query results, error: 5
    IPCAUClient.cpp:129 IPCAUClient: bundle display name is nil
    Error in destroying pipe Error Domain=NSCocoaErrorDomain Code=4099 "The connection from pid 5476 on anonymousListener or serviceListener was invalidated from this process." UserInfo={NSDebugDescription=The connection from pid 5476 on anonymousListener or serviceListener was invalidated from this process.}

on this function:

    func speakItem() {
        let utterance = AVSpeechUtterance(string: item.toString())
        utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        utterance.rate = 0.3
        let synthesizer = AVSpeechSynthesizer()
        synthesizer.speak(utterance)
    }

When running without the debugger, it will (usually) speak once, then it won't speak again unless I tap the button that calls this function many times. I know AVSpeech has problems that Apple has long been aware of, but I'm wondering if anyone has a workaround. I was thinking there might be a way to call the destructor for AVSpeechUtterance and generate a new object each time speech is needed, but utterance.deinit() shows "Deinitializers cannot be accessed".
Replies: 0 · Boosts: 0 · Views: 66 · Activity: 1d
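One frequently reported cause of AVSpeechSynthesizer going silent is that the synthesizer is created as a local constant and can be deallocated before (or while) it speaks. A minimal sketch that keeps a single synthesizer alive for the lifetime of the owning object instead (this is not a confirmed fix for the IPCAUClient/XPC messages above, just a common workaround for the "speaks once, then stops" symptom):

    import AVFoundation

    final class Speaker {
        // Retained for the lifetime of the object, not the function call,
        // so the synthesizer cannot be deallocated mid-utterance.
        private let synthesizer = AVSpeechSynthesizer()

        func speak(_ text: String) {
            try? AVAudioSession.sharedInstance().setCategory(.playback)
            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = AVSpeechSynthesisVoice(language: "en-GB")
            utterance.rate = 0.3
            synthesizer.speak(utterance)
        }
    }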
iOS 18.2 beta (22C5131E) Speech Recognition Discards Previously Transcribed Audio
I was testing SFSpeechRecognition on my real device running the iOS 18.2 beta, and found that even when the result's isFinal field is true, the result does not contain the entire conversation's transcription. I came across some blog posts saying this was fixed in an 18.1 beta; is that not the case for the 18.2 beta? Example code:

    recognitionTask = recognizer.recognitionTask(with: request) { [weak self] result, error in
        guard let self = self else { return }
        if let error = error {
            DispatchQueue.main.async {
                self.errorMessage = "Transcription failed: \(error.localizedDescription)"
                self.isTranscribing = false
            }
        } else if let result = result, result.isFinal {
            // HERE!
        }
    }
Replies: 4 · Boosts: 0 · Views: 161 · Activity: 2d
AVSpeechUtterance Mandarin voice output replaced by Siri language setting after upgrading to iOS 18
Hi Apple engineers, hoping you can reply to this one. We're developing a text-to-speech app. Everything went well until iOS was upgraded to 18. AVSpeechSynthesisVoice(language: "zh-CN") runs well under iOS 16 and iOS 17 and speaks Mandarin correctly. On iOS 18, we noticed that Siri's language setting interferes with AVSpeechSynthesisVoice: it plays Cantonese instead of Mandarin. Siri language settings that affect AVSpeechSynthesisVoice this way:

    Chinese (Cantonese - China mainland)
    Chinese (Cantonese - Hong Kong)
Replies: 1 · Boosts: 2 · Views: 193 · Activity: 1w
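A possible workaround for the post above (untested against the iOS 18 behaviour described; the string "你好" is just a placeholder): enumerate the installed voices and assign a Mandarin "zh-CN" voice explicitly, rather than relying only on the language-based initializer:

    import AVFoundation

    let synthesizer = AVSpeechSynthesizer()   // retain this somewhere long-lived

    func mandarinVoice() -> AVSpeechSynthesisVoice? {
        // "zh-CN" voices are Mandarin; "zh-HK"/Cantonese voices are what the
        // Siri setting appears to be substituting.
        AVSpeechSynthesisVoice.speechVoices().first { $0.language == "zh-CN" }
    }

    let utterance = AVSpeechUtterance(string: "你好")
    utterance.voice = mandarinVoice()
    synthesizer.speak(utterance)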
Batch transcribe from file fails on all but the last, async problem?
I am attempting to do batch transcription of audio files exported from Voice Memos, and I am running into an interesting issue. If I only transcribe a single file it works every time, but if I try to batch it, only the last one works, and the others fail with "No speech detected". I assumed it must be something about concurrency, so I implemented what I think should remove any chance of transcriptions running in parallel. And with a mocked-up unit of work, everything looked good. So I added the transcription back in, and:

1. It still fails on all but the last file. This happens whether I am processing 10 files or just 2.
2. It no longer processes in order; any file can be the last one that succeeds. And it seems not to be related to file size. I have had paragraph-sized notes finish last, but also a single short sentence that finishes last.

I left the mocked processFile() for reference. Any insights would be greatly appreciated.

    import Speech
    import SwiftUI

    struct ContentView: View {
        @State private var processing: Bool = false
        @State private var fileNumber: String?
        @State private var fileName: String?
        @State private var files: [URL] = []

        let locale = Locale(identifier: "en-US")
        let recognizer: SFSpeechRecognizer?

        init() {
            self.recognizer = SFSpeechRecognizer(locale: self.locale)
        }

        var body: some View {
            VStack {
                if files.count > 0 {
                    ZStack {
                        ProgressView()
                        Text(fileNumber ?? "-")
                            .bold()
                    }
                    Text(fileName ?? "-")
                } else {
                    Image(systemName: "folder.badge.minus")
                    Text("No audio files found")
                }
            }
            .onAppear {
                files = getFiles()
                Task {
                    await processFiles()
                }
            }
        }

        private func getFiles() -> [URL] {
            do {
                let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
                let path = documentsURL.appendingPathComponent("Voice Memos").absoluteURL
                let contents = try FileManager.default.contentsOfDirectory(at: path, includingPropertiesForKeys: nil, options: [])
                let files = (contents.filter { $0.pathExtension == "m4a" }).sorted { url1, url2 in
                    url1.path < url2.path
                }
                return files
            } catch {
                print(error.localizedDescription)
                return []
            }
        }

        private func processFiles() async {
            var fileCount = files.count
            for file in files {
                fileNumber = String(fileCount)
                fileName = file.lastPathComponent
                await processFile(file)
                fileCount -= 1
            }
        }

        // private func processFile(_ url: URL) async {
        //     let seconds = Double.random(in: 2.0...10.0)
        //     await withCheckedContinuation { continuation in
        //         DispatchQueue.main.asyncAfter(deadline: .now() + seconds) {
        //             continuation.resume()
        //             print("\(url.lastPathComponent) \(seconds)")
        //         }
        //     }
        // }

        private func processFile(_ url: URL) async {
            let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
            recognitionRequest.requiresOnDeviceRecognition = false
            recognitionRequest.shouldReportPartialResults = false

            await withCheckedContinuation { continuation in
                recognizer?.recognitionTask(with: recognitionRequest) { (transcriptionResult, error) in
                    guard transcriptionResult != nil else {
                        print("\(url.lastPathComponent.uppercased())")
                        print(error?.localizedDescription ?? "")
                        return
                    }
                    if ((transcriptionResult?.isFinal) == true) {
                        if let finalText: String = transcriptionResult?.bestTranscription.formattedString {
                            print("\(url.lastPathComponent.uppercased())")
                            print(finalText)
                        }
                    }
                }
                continuation.resume()
            }
        }
    }
Replies: 0 · Boosts: 0 · Views: 140 · Activity: 2w
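One thing worth checking in the code above: continuation.resume() is called immediately after starting the recognition task, so the async loop does not actually wait for each transcription to finish, and several tasks end up in flight at once. A sketch of a processFile variant that resumes only when a final result or an error arrives (names mirror the post; the hasResumed guard and the recognizer parameter are additions for the sketch):

    import Speech

    // Drop-in variant of processFile from the post above: the continuation is
    // resumed inside the recognition callback, not right after starting the task,
    // so each file is fully transcribed before the next one starts.
    func transcribe(_ url: URL, using recognizer: SFSpeechRecognizer?) async {
        let request = SFSpeechURLRecognitionRequest(url: url)
        request.requiresOnDeviceRecognition = false
        request.shouldReportPartialResults = false

        await withCheckedContinuation { continuation in
            var hasResumed = false  // resume the continuation exactly once
            recognizer?.recognitionTask(with: request) { result, error in
                if let error = error {
                    print("\(url.lastPathComponent.uppercased()): \(error.localizedDescription)")
                    if !hasResumed { hasResumed = true; continuation.resume() }
                    return
                }
                if let result = result, result.isFinal {
                    print("\(url.lastPathComponent.uppercased())")
                    print(result.bestTranscription.formattedString)
                    if !hasResumed { hasResumed = true; continuation.resume() }
                }
            }
            // If the recognizer failed to initialize, resume immediately so the loop can continue.
            if recognizer == nil { continuation.resume() }
        }
    }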
Strange error in com.apple.speech.localspeechrecognition that doesn't affect output?
While using Swift's speech recognition capabilities I get the error below. However, the app successfully transcribes the audio file, so I am not sure how worried I need to be. I would also like to know whether, when that error occurred, the app went to the internet to transcribe the file (yes, requiresOnDeviceRecognition is set to false). What does this error mean, and how much do I need to worry about it?

    Received an error while accessing com.apple.speech.localspeechrecognition service: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"
Replies: 0 · Boosts: 0 · Views: 200 · Activity: 2w
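Not an answer to what code 1101 means, but to at least confirm whether on-device recognition is available for the locale in question, SFSpeechRecognizer exposes supportsOnDeviceRecognition; a quick check (the "en-US" locale is just an example):

    import Speech

    if let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")) {
        // isAvailable reflects current availability; supportsOnDeviceRecognition
        // reports whether the device/locale can recognize without the network.
        print("Available:", recognizer.isAvailable)
        print("Supports on-device recognition:", recognizer.supportsOnDeviceRecognition)
    }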
VoiceOver needs to support CFBundleSpokenName
VoiceOver does not support the plist property CFBundleSpokenName. This is wrong and should be fixed. Ultimately the issue I am dealing with is that our app name is UWCU, and instead of VoiceOver pronouncing each letter, it tries to read this as a word and horribly butchers our organization's/app's name. Alternatives such as using U.W.C.U. and U W C U are not acceptable.

@Apple, I know your first response is going to be "no, it is working perfectly," but quite frankly you are wrong. I know you feel strongly about this, given your response in posts like this: https://forums.developer.apple.com/forums/thread/734545?answerId=760084022

HOWEVER, with iOS 18, your argument that "VoiceOver should read what's on the screen" doesn't hold water anymore. With iOS 18, you, Apple, have added a new feature that lets users customize their home screens and completely remove the names of apps. Here's your own guide: https://support.apple.com/guide/iphone/customize-apps-and-widgets-on-the-home-screen-iph385473442/ios

Quoted from your guide: "Make the icons bigger: Tap Large. (In large size, the names of the apps disappear.)"

With large icons and VoiceOver turned on, VoiceOver still reads the app name even though it has disappeared from the screen. So your own argument, "VoiceOver should read the text as it appears on the screen," is invalid, because there is NO text on the screen.

If you can't tell, I'm pretty peeved about all this. There's a reason why screen readers support ARIA attributes to help deliver the right accessible experience. It's a simple ask for VoiceOver to do the same thing.
Replies: 3 · Boosts: 0 · Views: 474 · Activity: 3w
SFCustomLanguageModelData.CustomPronunciation and X-SAMPA string conversion
Can anyone please guide me on how to use SFCustomLanguageModelData.CustomPronunciation? I am following the example from WWDC23: https://wwdcnotes.com/documentation/wwdcnotes/wwdc23-10101-customize-ondevice-speech-recognition/

When using this kind of custom pronunciation, we need the X-SAMPA string for the specific word. There are tools available on the web for this:

Word to IPA: https://openl.io/
IPA to X-SAMPA: https://tools.lgm.cl/xsampa.html

But these tools do not seem to produce the same kind of X-SAMPA strings used in the demo. For example, the demo converts "Winawer" to "w I n aU @r", while the online tools give "/wI"nA:w@r/".
Replies: 0 · Boosts: 1 · Views: 180 · Activity: 4w
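For reference, a sketch of the custom-pronunciation setup along the lines of the WWDC23 sample linked above; the identifier, phrase, and output URL are placeholders, the X-SAMPA string is the one from the demo, and the exact signatures should be checked against the sample project:

    import Speech

    let data = SFCustomLanguageModelData(
        locale: Locale(identifier: "en_US"),
        identifier: "com.example.chess",   // placeholder identifier
        version: "1.0"
    ) {
        SFCustomLanguageModelData.CustomPronunciation(
            grapheme: "Winawer",
            phonemes: ["w I n aU @r"]      // X-SAMPA, as in the WWDC23 demo
        )
        SFCustomLanguageModelData.PhraseCount(phrase: "Play the Winawer variation", count: 10)
    }

    // Export the data so it can be prepared for a recognition request later.
    try await data.export(to: URL(filePath: "/var/tmp/CustomLMData.bin"))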
"AVSpeechSynthesisVoice" choppy at start.
So, I'm trying to create my own text-to-speech setup. The problem I'm having is that whenever I do a test run, the speech gets a bit choppy at the start, kind of skipping over maybe a word or a few characters. A few details: I've essentially built a separate class for handling the speech events. AVSpeechSynthesizer is set up as a private variable for the class, so I don't expect deallocation to be the issue, especially since it's a problem at the start. I've also got a queue set up, for what it's worth, so that shouldn't be a problem. I'd appreciate any advice.
Replies: 2 · Boosts: 0 · Views: 159 · Activity: 4w
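A mitigation that sometimes helps with a clipped start (an assumption, not a documented fix): activate the audio session ahead of time and "warm up" the synthesizer with a silent utterance at launch, so the first real utterance doesn't pay the engine startup cost. A minimal sketch:

    import AVFoundation

    final class SpeechManager {
        private let synthesizer = AVSpeechSynthesizer()

        init() {
            // Configure and activate the session before the first real utterance.
            try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt)
            try? AVAudioSession.sharedInstance().setActive(true)

            // Speak a short, muted utterance up front to warm up the engine.
            let warmUp = AVSpeechUtterance(string: " ")
            warmUp.volume = 0
            synthesizer.speak(warmUp)
        }

        func speak(_ text: String) {
            synthesizer.speak(AVSpeechUtterance(string: text))
        }
    }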
SFSpeechRecognizer is broken on iOS 18
Hello, I noticed that SFSpeechRecognizer is broken on iOS 18. During a recognition task, it keeps dropping the recognized text on every pause. For example, if you say "how are you fine", it will drop the "how are you" part and only give you "fine" as the result.

Say "how are you <pause> fine":

    // iOS 17 ✅ (perfect final result)
    How
    How are
    How are you
    How are you.
    How are you. Fine.

    // iOS 18 ❌
    How
    How are
    How are you
    How are you
    Fine

(On iOS 18 the text before the pause is dropped, and it fails to recognize the punctuation.)

Reproducing the issue:
Download the official sample project.
Run it on an iOS 18 device or simulator.
Say "how are you fine".
Only "fine" will be displayed.
Replies: 4 · Boosts: 4 · Views: 686 · Activity: Oct ’24
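Until this is fixed, a common workaround (an assumption based on the behaviour described above, not an Apple-recommended pattern) is to treat each "final" result as a segment and accumulate the segments yourself, so text from before a pause is not lost:

    import Speech

    final class TranscriptAccumulator {
        private(set) var transcript = ""

        // Call from the recognition task's result handler, e.g.:
        //   if let result { accumulator.handle(result) }
        func handle(_ result: SFSpeechRecognitionResult) {
            guard result.isFinal else { return }
            // On iOS 18 each pause can produce its own "final" result,
            // so append instead of replacing.
            let segment = result.bestTranscription.formattedString
            transcript += (transcript.isEmpty ? "" : " ") + segment
        }
    }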
Speech Recognition Problem in iOS 18.0
It looks like Apple has added some new API(s) to SFSpeechRecognition. My app, which is currently listed on the App Store, features speech recognition. Yet, trying to use it under iOS 18.0 throws errors:

    -[SFSpeechRecognitionTask localSpeechRecognitionClient:speechRecordingDidFail:]_block_invoke Ignoring subsequent local speech recording error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"

What happens is that after several words are transcribed and displayed, the next sentence results in the previous words disappearing. That's probably what the "Ignoring subsequent local speech recording error" portion of the error text means. The problem occurs ONLY when the app is running under iOS 18.0. Even when it's compiled in Xcode 16.0 using iOS 17.5, everything works fine. Any suggestions?
Replies: 35 · Boosts: 8 · Views: 3.1k · Activity: Oct ’24
Voice to Text on a Beta platform
I'm writing an app that uses on-device voice-to-text for recognising scientific terms. It works fine on my phone, but now in beta my first tester cannot make it work. All the permission requests are working: under Privacy & Security, Microphone and Speech Recognition are both now enabled on the target device, where the user granted the app permission. Is there something else I'm missing? Incidentally, my phone, the target phone, and my Xcode are all fully up to date. Thanks.
Replies: 0 · Boosts: 0 · Views: 375 · Activity: Aug ’24
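A couple of runtime checks that can be logged on the tester's build to narrow things down (a sketch; the "en-GB" locale is only an example, and this covers just the obvious permission and availability paths):

    import Speech
    import AVFoundation

    func checkSpeechPrerequisites() {
        // Speech recognition authorization (also triggers the prompt if undetermined).
        SFSpeechRecognizer.requestAuthorization { status in
            print("Speech recognition authorized:", status == .authorized)
        }

        // Microphone permission.
        AVAudioSession.sharedInstance().requestRecordPermission { granted in
            print("Microphone permission granted:", granted)
        }

        // Recognizer availability for the locale the app actually uses.
        if let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-GB")) {
            print("Recognizer available:", recognizer.isAvailable)
            print("Supports on-device recognition:", recognizer.supportsOnDeviceRecognition)
        }
    }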
What methods in what Framework to separate an audio file into two files?
I'm having trouble using SFSpeechRecognizer and SFSpeechRecognitionTask to show me the words from an audio file. I found a solution on Stack Overflow suggesting I separate the audio file into smaller pieces. How would I do that programmatically using Swift for a macOS app Xcode project? I would prefer not to separate the file into smaller files; I will submit another post with more information on that.
Replies: 3 · Boosts: 0 · Views: 453 · Activity: Aug ’24
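One way to do the split without leaving AVFoundation (a sketch, not necessarily what the linked answer had in mind): export a time range of the original asset with AVAssetExportSession, once per segment. Function name and error handling here are illustrative only:

    import AVFoundation

    // Exports the portion of `sourceURL` between `start` and `end` (in seconds) to `outputURL`.
    func exportSegment(of sourceURL: URL, from start: Double, to end: Double,
                       to outputURL: URL, completion: @escaping (Error?) -> Void) {
        let asset = AVURLAsset(url: sourceURL)
        guard let session = AVAssetExportSession(asset: asset,
                                                 presetName: AVAssetExportPresetAppleM4A) else {
            completion(NSError(domain: "export", code: -1))
            return
        }
        session.outputURL = outputURL
        session.outputFileType = .m4a
        session.timeRange = CMTimeRange(
            start: CMTime(seconds: start, preferredTimescale: 600),
            end: CMTime(seconds: end, preferredTimescale: 600)
        )
        session.exportAsynchronously {
            completion(session.error)
        }
    }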
IPCAUClient.cpp:139 IPCAUClient: can't connect to server (-66748) <0x104309130>
When using AVSpeechSynthesizer(), I get an error after a couple of seconds: "IPCAUClient.cpp:139 IPCAUClient: can't connect to server (-66748) <0x104309130>", and then it speaks the text. The second time I call speak, there is no delay or error and it speaks immediately. Where do this error and delay come from, and how can I resolve them?

Initialization code:

    self.audioSession = AVAudioSession.sharedInstance()

    // 2) handle audio session first, before trying to read the text
    do {
        try audioSession.setCategory(.playback, mode: .voicePrompt, options: .duckOthers)
        try audioSession.setActive(false)
    } catch let error {
        Logger.model.debug("❓\(error.localizedDescription)")
    }

    speechSynthesizer = AVSpeechSynthesizer()
    speechSynthesizer.usesApplicationAudioSession = true

Speak code:

    let utterance = AVSpeechUtterance(string: text)
    utterance.preUtteranceDelay = 0.1
    utterance.rate = 0.5
    utterance.pitchMultiplier = 0.75
    utterance.prefersAssistiveTechnologySettings = false
    self.speechSynthesizer.speak(utterance)

The last statement gives this error message!
Replies: 1 · Boosts: 4 · Views: 535 · Activity: Aug ’24
iOS Sound Recognition: to what extent can developers access Apple's built-in sound recognition?
Hi, I am currently developing an app whose core functionality relies on detecting user laughter in the background. In our early stages we noticed Apple's built-in Sound Recognition functionality. At its core, I am guessing that Sound Recognition requires permission from the user to access the microphone 24/7. Currently, using the conventional avenue of background audio recording, a yellow indicator is present at the top of the iPhone screen indicating recording; this is not the case for Sound Recognition. If all sound processing/recognition is kept on-device, is there any way to avoid the yellow dot and achieve laughter detection in a way similar to how Apple's Sound Recognition does it? From the Sound Recognition settings interface accessible to the user in the Settings app, the only detectable "people" sounds are baby crying, coughing, and shouting. Is it also possible to add laughter to this list somehow? Thank you in advance.
Replies: 2 · Boosts: 0 · Views: 450 · Activity: Aug ’24
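The system Sound Recognition feature in Settings isn't extensible by third-party apps, but the SoundAnalysis framework ships a built-in classifier whose label set includes a laughter class. A sketch of running it on a microphone stream (any in-app microphone capture will still show the recording indicator, as far as I know, and the exact label string should be confirmed via the request's knownClassifications):

    import AVFoundation
    import SoundAnalysis

    final class LaughterDetector: NSObject, SNResultsObserving {
        private let engine = AVAudioEngine()
        private var analyzer: SNAudioStreamAnalyzer?

        func start() throws {
            let format = engine.inputNode.outputFormat(forBus: 0)
            let analyzer = SNAudioStreamAnalyzer(format: format)
            self.analyzer = analyzer

            // Built-in sound classifier (iOS 15+); print(request.knownClassifications)
            // to confirm the exact laughter label on a given OS version.
            let request = try SNClassifySoundRequest(classifierIdentifier: .version1)
            try analyzer.add(request, withObserver: self)

            engine.inputNode.installTap(onBus: 0, bufferSize: 8192, format: format) { buffer, when in
                analyzer.analyze(buffer, atAudioFramePosition: when.sampleTime)
            }
            try engine.start()
        }

        func request(_ request: SNRequest, didProduce result: SNResult) {
            guard let result = result as? SNClassificationResult,
                  let top = result.classifications.first else { return }
            if top.identifier == "laughter" && top.confidence > 0.8 {
                print("Laughter detected")
            }
        }
    }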
Configuring Apple Vision Pro's microphones to effectively pick up other speakers' voices
I am developing a visionOS app that captions speech in real environments. Currently, I am using Apple's built-in speech recognizer. However, when I was testing the app with a Vision Pro, the device seemed to pick up only the user's voice (in other words, the voice of the wearer of the Vision Pro). For example, when the speech recognition task is running and another person in front of me is talking, the system does not pick up the speech well.

I tried to set the AVAudioSession to be equally sensitive in all directions:

    private func configureAudioSession() {
        do {
            try audioSession.setCategory(.record, mode: .measurement)
            try audioSession.setActive(true)
            if #available(visionOS 1.0, *) {
                let availableDataSources = audioSession.availableInputs?.first?.dataSources
                if let omniDirectionalSource = availableDataSources?.first(where: { $0.preferredPolarPattern == .omnidirectional }) {
                    try audioSession.setInputDataSource(omniDirectionalSource)
                }
            }
        } catch {
            print("Failed to set up audio session: \(error)")
        }
    }

And here is how I set up the speech recognition and configure the microphone inputs:

    private func startSpeechRecognition(completion: @escaping (String) -> Void) {
        do {
            // Cancel the previous task if it's running.
            if let recognitionTask = recognitionTask {
                recognitionTask.cancel()
                self.recognitionTask = nil
            }

            // The AudioSession is already active, creating input node.
            let inputNode = audioEngine.inputNode
            try inputNode.setVoiceProcessingEnabled(false)

            // Create and configure the speech recognition request
            recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
            guard let recognitionRequest = recognitionRequest else {
                fatalError("Unable to create a recognition request")
            }
            recognitionRequest.shouldReportPartialResults = true

            // Keep speech recognition data on device
            if #available(iOS 13, *) {
                recognitionRequest.requiresOnDeviceRecognition = true
            }

            // Create a recognition task for speech recognition session.
            // Keep a reference to the task so that it can be canceled.
            recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest) { result, error in
                // var isFinal = false
                if let result = result {
                    // Update the recognizedText
                    completion(result.bestTranscription.formattedString)
                } else if let error = error {
                    completion("Recognition error: \(error.localizedDescription)")
                }

                if error != nil || result?.isFinal == true {
                    // Stop recognizing speech if there is a problem
                    self.audioEngine.stop()
                    inputNode.removeTap(onBus: 0)
                    self.recognitionRequest = nil
                    self.recognitionTask = nil
                }
            }

            // Configure the microphone input
            let recordingFormat = inputNode.outputFormat(forBus: 0)
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
                self.recognitionRequest?.append(buffer)
            }

            audioEngine.prepare()
            try audioEngine.start()
        } catch {
            completion("Audio engine could not start: \(error.localizedDescription)")
        }
    }
Replies: 0 · Boosts: 0 · Views: 483 · Activity: Jul ’24
Is there a memory leak in localspeechrecognition?
Hello! I have noticed this in Sonoma and in the betas for Sequoia, on the ARM variants. I am using the example from https://github.com/sveinbjornt/hear?tab=readme-ov-file in an attempt to cobble together an all-in-one transcription and low-level grammar checker utilizing LanguageTool. I have noticed that the RAM usage, specifically the swap, just keeps on climbing while it is processing an audio file. It is... quite amazing to see exactly how much swap the dang thing can use in a pinch. Frighteningly so, considering the Mini I am using only has 256 GB of storage. Throw an eight-hour MP3 audiobook at the process and see for yourself. I am aware that local speech recognition wasn't really designed with the idea that people would be throwing audio files at it, so it is understandable that it wouldn't be equipped to gracefully handle this situation. I am a novice programmer here. Seriously, this is my first major stab at programming since dabbling with QBasic back in elementary school. Thus, this question: if there is a memory leak, is there a way to shunt the swap being used by the app to an external drive? I am willing to take the performance hit if it keeps the internal SSD from paying the ferryman sooner than expected due to excessive swap usage. Thanks!
Replies: 2 · Boosts: 0 · Views: 413 · Activity: Jul ’24