Speech Recognizer Transcription in Scrumdinger - Stops too early?

I found the Scrumdinger sample application really helpful in understanding SwiftUI, but I have a question about the transcription example.

Whether I start from the "StartingProject" and work through the tutorial section, or use the "Completed" project, the speech transcription works, but only for a few seconds.

Is this a side effect of something else in the project? Should I expect a complete transcription of everything said while MeetingView is presented?

This was done on Xcode 13.4 and Xcode 14 beta 4, with iOS 15 and iOS 16 (beta 4).

Thanks for any assistance!

  • After adding some basic print debugging to the recognitionHandler, I can see that I was just exiting the view too soon: when recognition is done over the Internet, it can take quite a while for results to come back. So if the user ends the scrum too early, it looks like you lose anything that had not been transcribed yet. (Rough sketch of the debugging below.)
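For reference, here is roughly where the prints went. This is a sketch of the sample's recognitionHandler(result:error:) with my print calls added; properties like audioEngine and the speak(_:) helper come from the sample's SpeechRecognizer and may differ slightly between project versions.

```swift
private func recognitionHandler(result: SFSpeechRecognitionResult?, error: Error?) {
    let receivedFinalResult = result?.isFinal ?? false
    let receivedError = error != nil

    // Debug logging: watching these made it obvious that partial results
    // can take a while to arrive when recognition runs over the network.
    if let result = result {
        print("Partial result (isFinal: \(result.isFinal)): \(result.bestTranscription.formattedString)")
    }
    if let error = error {
        print("Recognition error: \(error.localizedDescription)")
    }

    if receivedFinalResult || receivedError {
        audioEngine?.stop()
        audioEngine?.inputNode.removeTap(onBus: 0)
    }

    if let result = result {
        speak(result.bestTranscription.formattedString)
    }
}
```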


Replies

In case anyone else comes across this: choosing a different quality of service when setting up the DispatchQueue in transcribe() improves performance considerably. I set it to .default and it then happily transcribed over the Internet at near real time on an iPhone 12.

It's unclear whether this would be the optimal choice in a production application, but for exploring the transcription capabilities of iOS it seems a better choice than .background. A sketch of the change is below.
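The change is just the qos argument on the queue created in transcribe(). This sketch assumes the dispatch-queue-based version of SpeechRecognizer from the sample; helper names like prepareEngine(), speakError(_:), and reset() come from that project and may differ in other versions.

```swift
private func transcribe() {
    // Raising the queue's quality of service from .background to .default
    // made partial results arrive at near real time in my testing.
    DispatchQueue(label: "Speech Recognizer Queue", qos: .default).async { [weak self] in
        guard let self = self, let recognizer = self.recognizer, recognizer.isAvailable else {
            self?.speakError(RecognizerError.recognizerIsUnavailable)
            return
        }

        do {
            let (audioEngine, request) = try Self.prepareEngine()
            self.audioEngine = audioEngine
            self.request = request
            self.task = recognizer.recognitionTask(with: request,
                                                   resultHandler: self.recognitionHandler(result:error:))
        } catch {
            self.reset()
            self.speakError(error)
        }
    }
}
```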

I am not sure if I just did something dumb, but I found this thread because, when I tested the app, it was not transcribing at all. I made the suggested change to .default and now it works fine.