Error thrown while using the speech recognition service in my app

Recently I updated to Xcode 14.0. I am building an iOS app to convert recorded audio into text. I got an exception while testing the application from the simulator (iOS 16.0).

[SpeechFramework] -[SFSpeechRecognitionTask handleSpeechRecognitionDidFailWithError:]_block_invoke Ignoring subsequent recongition error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)" Error Domain=kAFAssistantErrorDomain Code=1107 "(null)"

I'd like to know what these error codes mean and why this error occurred.

Replies

while testing the application from the simulator

Do you see this problem when testing on a real device?

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"


I'm getting this error as well.

My usage:

  • feed an MP3 file into the speech recognizer (configuration sketched below)
  • requiresOnDeviceRecognition is set to true
  • addsPunctuation is set to true, but setting it to false doesn't fix anything
  • app is run in the foreground

Hardware: iPhone 11
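
For reference, a minimal sketch of that configuration (the file URL is a hypothetical placeholder; error handling trimmed):

import Speech

let url = URL(fileURLWithPath: "/path/to/audio.mp3")  // placeholder
let recognizer = SFSpeechRecognizer()
let request = SFSpeechURLRecognitionRequest(url: url)
request.requiresOnDeviceRecognition = true
request.addsPunctuation = true  // toggling this made no difference

recognizer?.recognitionTask(with: request) { result, error in
    if let result {
        print(result.bestTranscription.formattedString)
    } else if let error {
        print("Recognition failed: \(error)")  // the 1101/1107 errors surface here
    }
}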

Workaround:

  • Restart the device. Quitting the app is not good enough.
  • Split large audio into smaller chunks

Why I believe it's not the audio itself: after I restart the device, the exact same audio transcribes successfully.

What I haven't tried: using audio buffers instead of feeding it a file.

Digging a bit deeper:

  • CPU usage is low
  • Memory usage is low (so it's not an OOM on my app's side)
  • using [weak self] or [unowned self] in recognizer?.recognitionTask(with: request) makes no difference. I know from the Lock Screen remote-control case that using weak self will cause a crash.
  • the closure passed to recognizer?.recognitionTask(with: request) gets called between 74 and 130 times before the error is thrown. In the calls before the error, a transcription is present, but metadata and transcription segment durations are not. However, I've seen one case where metadata and the transcription came through a couple of times before the error (see the logging sketch after this list).
  • setting locale manually does not fix it
  • request.shouldReportPartialResults = false does not fix it
  • recognizer?.defaultTaskHint = .dictation does not fix it
  • recognizer?.queue = OperationQueue() does not fix it
  • using delegate vs closure for task callback did not fix it
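
How I'm observing that callback behavior (a sketch; request and recognizer are set up as in the usage list above):

var callCount = 0
recognizer?.recognitionTask(with: request) { result, error in
    callCount += 1
    if let result {
        print("call \(callCount):",
              "metadata:", result.speechRecognitionMetadata as Any,
              "segment durations:", result.bestTranscription.segments.map(\.duration))
    }
    if let error {
        print("error after \(callCount) calls: \(error)")
    }
}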

Ideal resolution from Apple:

  • Send a more helpful error so we know what we did wrong
  • Fix it if it's a bug in the Speech framework, which seems likely given that restarting the device clears it
  • Let us know if this is a priority for a fix (so that we can plan accordingly)

Hi everyone — I’ve got this error as well (on macOS) and was able to fix it.

It was a Sandbox issue: I'd pointed to an mp3 file that my app just did not have access to. So I recommend using FileManager to check whether the audio file you're trying to transcribe is readable (a quick check sketched below).
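
Something along these lines (audioURL is a hypothetical placeholder):

import Foundation

let audioURL = URL(fileURLWithPath: "/path/to/audio.mp3")  // placeholder
if FileManager.default.isReadableFile(atPath: audioURL.path) {
    // safe to hand to SFSpeechURLRecognitionRequest
} else {
    print("No read access to \(audioURL.path); check sandbox entitlements or security-scoped bookmarks")
}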

Quinn,

We have these issues on real devices. Can you provide any sort of guidance on how to troubleshoot 1101 and 1107 errors from the recognizer?

Thanks in advance

Can you provide any sort of guidance on how to troubleshoot 1101 and 1107 errors from the recognizer?

Not really. I looked up these errors in the kAFAssistantErrorDomain domain and they both seem to be related to XPC communication problems with the service that backs this API [1]. Something has caused that service to fail, which has caused the XPC communication to fail, which is triggering this error.

Do you see any relevant looking crash reports on that device? So, not a crash in your app but a crash in a system process that’s correlated in time with the error you’re seeing and could plausibly be involved.


Oh, I want to come back to this snippet from JonMercer’s post upthread:

Ideal resolution from Apple:

This gives the impression that you think that DevForums is a formal support channel, which it is not. You have two options for formal support:

If you do report a bug, please post your bug number, just for the record.

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

[1] XPC implies IPC, so I’m not talking about off-device communication.

Quinn,

Thanks for the reply. We unfortunately do not have a crash report or a log from the device in question and we've never been able to reproduce the issues locally. If we receive either we'll report back.

[1] XPC implies IPC, so I’m not talking about off-device communication.

Can you clarify what you mean by "off-device communication"?

Thanks,

  • Aaron

We unfortunately do not have a crash report or a log from the device in question and we've never been able to reproduce the issues locally.

OK.

In situations like this what you want is a sysdiagnose log. It’s best to take the sysdiagnose log immediately after seeing the problem. One option here is to look for that specific error and prompt the user to capture the log and send it to you. You might want to restrict that feature to your beta testers (-:
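
For example, here's a sketch of how you might detect that specific failure in the task callback (assuming the error arrives as an NSError in kAFAssistantErrorDomain, as the logs upthread suggest):

recognizer?.recognitionTask(with: request) { result, error in
    if let error = error as NSError?,
       error.domain == "kAFAssistantErrorDomain",
       error.code == 1101 || error.code == 1107 {
        // Prompt your beta testers to capture a sysdiagnose and send it in.
    }
}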

In this case, however, a sysdiagnose log taken long after the fact might still capture actionable information, namely the crash reports from any system processes.

For more information about sysdiagnose logs, see our Bug Reporting > Profiles and Logs page.

Can you clarify what you mean by "off-device communication"?

When dealing with speech APIs it’s important to distinguish between:

  • On-device processing, where all the work is done on the user’s device

  • Off-device processing, where some of the work is done on a server

That comment was intended to clarify that XPC is an IPC mechanism. IPC stands for inter-process communication, which is on-device communication. So, the immediate cause of the failure is something happening on the user’s device, not a problem talking to a server out there on the Internet.
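
If you want to guarantee on-device processing, here's a sketch of opting in to it (the file URL is a placeholder):

let recognizer = SFSpeechRecognizer()
let request = SFSpeechURLRecognitionRequest(url: URL(fileURLWithPath: "/path/to/audio.mp3"))  // placeholder
if recognizer?.supportsOnDeviceRecognition == true {
    request.requiresOnDeviceRecognition = true  // all work stays on the device
}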

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

Quinn,

Thanks for all the clarification. We can try to get a sysdiagnose from a beta tester. If that is successful we can proceed w/ analysis and filing a bug report if anything useful is found.

RE: on/off device processing. Thanks for the clarification.

This has been very helpful for us and we appreciate your time answering questions.

  • You're welcome.


I'm getting the same error. It works fine on an iPad and an iPhone 12, but doesn't work on an iPhone 14?

[SpeechFramework] -[SFSpeechRecognitionTask handleSpeechRecognitionDidFailWithError:]_block_invoke Ignoring subsequent recongition error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"

Quinn, are there any updates? The speech recognition service still doesn't work on a real device, while it does work in the simulator, as described in https://stackoverflow.com/questions/59786548/sfspeechrecognizer-fails-with-error-kafassistanterrordomain-code-1107.

Update: changing

let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))

to

let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-GB"))

worked for me. It shouldn't make a big difference though.

  • This switch made it start working on my iPhone. And I am based in the UK. Coincidence? No idea.


> Are there any updates

Per my comments above, this seems like a bug. There have been no bug numbers posted on this thread, so it’s hard to check on its status, or even know whether a bug was actually filed O-:

Share and Enjoy

Quinn “The Eskimo!” @ Developer Technical Support @ Apple
let myEmail = "eskimo" + "1" + "@" + "apple.com"

I was seeing a similar issue with Xcode 15.0 and the iPhone 15 Pro simulator on iOS 17.0. Speech recognition worked once after an Erase All Content and Settings, but then emitted 1101 error messages to the console even after rebooting the simulator. I never saw the 1107 error.

For me the solution was to not even instantiate the SFSpeechRecognizer until after calling SFSpeechRecognizer.requestAuthorization(). I had been instantiating the object, asking for permission, then starting to call additional APIs; apparently I wasn't asking for permission early enough.
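
Roughly, the reordering I tried looks like this (a sketch, not my exact code):

SFSpeechRecognizer.requestAuthorization { status in
    guard status == .authorized else { return }
    DispatchQueue.main.async {
        let recognizer = SFSpeechRecognizer()  // created only after authorization
        // ... configure the request and start recognitionTask(with:) here ...
    }
}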

Nope I'm wrong. Still a problem on iOS 17.0 simulator and devices up to and including iOS 17.0.3. Doesn't seem to be a problem on iPhone 14 Pro simulator running iOS 16.4.

Ok I was able to resolve my confusion with SFSpeechRecognizer. What I found was:

  • SFSpeechRecognizer no longer works in the simulator on iOS 17 at all. It works fine on the iOS 16.4 simulator, but on 17.0 it returns error 1107 and a nil transcript in the completion block of recognitionTask(with:resultHandler:). This is regardless of whether requiresOnDeviceRecognition is set to true or false.
  • SFSpeechRecognizer works fine on device in iOS 17, but emits spurious error 1101 messages in the console during the recognition process. These logs don't seem to negatively affect performance or the resulting transcript. Again, this is regardless of whether requiresOnDeviceRecognition is set to true or false.
  • I think the limitation of only working on a physical device is known to Apple ... the sample app described in https://developer.apple.com/documentation/speech/recognizing_speech_in_live_audio mentions this caveat: "The sample app doesn’t run in Simulator, so you need to run it on a physical device with iOS 17 or later, or iPadOS 17 or later." This is a little vague, but does add weight to the idea that simulator usage is not supported.
  • Interesting. Thanks for sharing your findings.


@geneg1 I got the simulator to work but I'm seeing a gazillion error messages as well. How did you separate instantiating the SFSpeechRecognizer and calling SFSpeechRecognizer.requestAuthorization()? I'm performing both in the initialiser of the SpeechRecognizer actor and can't seem to separate them.

init() {
    recognizer = SFSpeechRecognizer()
    guard recognizer != nil else {
        transcribe(RecognizerError.nilRecognizer)
        return
    }

    Task {
        do {
            guard await SFSpeechRecognizer.hasAuthorizationToRecognize() else {
                throw RecognizerError.notAuthorizedToRecognize
            }
            guard await AVAudioSession.sharedInstance().hasPermissionToRecord() else {
                throw RecognizerError.notPermittedToRecord
            }
        } catch {
            transcribe(error)
        }
    }
}
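
For what it's worth, one possible way to separate them (a sketch reusing the RecognizerError cases, the recognizer property, the transcribe(_:) helper, and the hasAuthorizationToRecognize()/hasPermissionToRecord() extensions from the snippet above, all assumed): keep init() free of Speech API calls and do both the permission checks and the instantiation in an explicit async setup step.

func prepare() async {
    guard await SFSpeechRecognizer.hasAuthorizationToRecognize() else {
        transcribe(RecognizerError.notAuthorizedToRecognize)
        return
    }
    guard await AVAudioSession.sharedInstance().hasPermissionToRecord() else {
        transcribe(RecognizerError.notPermittedToRecord)
        return
    }
    recognizer = SFSpeechRecognizer()  // first Speech framework call happens here
    if recognizer == nil {
        transcribe(RecognizerError.nilRecognizer)
    }
}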