Recognize spoken words in recorded or live audio using Speech.

Speech Documentation

Posts under Speech tag

68 Posts
Post not yet marked as solved
1 Replies
2.5k Views
AVSpeechSynthesizer.speak works fine while the screen is unlocked, but produces no speech while the device is locked.

do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.playAndRecord, mode: .default, options: .defaultToSpeaker)
    try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
} catch {
    print("audioSession properties weren't set because of an error.")
}
let utterance = AVSpeechUtterance(string: voiceOutdata)
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
let synth = AVSpeechSynthesizer()
synth.speak(utterance)
defer {
    disableAVSession()
}

Error log in the locked state:

[AXTTSCommon] Failure starting audio queue alp!
[AXTTSCommon] _BeginSpeaking: couldn't begin playback
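A minimal sketch of one direction for this (an assumption about the setup, not a confirmed fix): for speech to continue with the screen locked, the app generally needs the "Audio, AirPlay, and Picture in Picture" background mode enabled and an audio session category that permits background playback, for example .playback rather than .playAndRecord with .defaultToSpeaker.

import AVFoundation

// Keep a strong reference; a local synthesizer may be deallocated mid-speech.
let synthesizer = AVSpeechSynthesizer()

// Assumes the Audio background mode is enabled in the target's capabilities;
// without it, audio generally stops when the device locks.
func speakWhileLocked(_ text: String) {
    do {
        // .playback allows audio to keep running when the screen locks.
        try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio, options: [])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Audio session setup failed: \(error)")
    }
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}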
Post not yet marked as solved
1 Replies
2.4k Views
Are these error codes documented anywhere? Error codes 203 and 1110 seem to happen regularly. I think they mean the following:
203: some limit reached (happens very regularly when using server-side speech recognition, less often when using on-device recognition)
1110: no speech detected
I have now gotten a new one: 1107. No idea what that means.
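For what it's worth, these codes arrive as NSError values in the recognition task's result handler, so they can at least be logged together with their domain (typically kAFAssistantErrorDomain). A small sketch, assuming a recognizer and request are already set up:

import Speech

func runRecognition(recognizer: SFSpeechRecognizer, request: SFSpeechRecognitionRequest) {
    recognizer.recognitionTask(with: request) { result, error in
        if let error = error as NSError? {
            // Logs e.g. domain: kAFAssistantErrorDomain, code: 203 / 1107 / 1110 ...
            print("Speech recognition failed - domain: \(error.domain), code: \(error.code)")
            return
        }
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}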
Post not yet marked as solved
3 Replies
1.2k Views
It's a little unclear to me whether I get a list of available or installed voices on the device. If an app requests a voice that is available but not installed, what happens? Is it up to the app to install the missing voice, or does iOS do this automatically? Also, for some languages both a male and a female voice are not offered. What's the reason for that?
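A small sketch of how the output of speechVoices() can be inspected on a given device; this only lists what the API returns, and whether that equals "installed" is exactly the open question here:

import AVFoundation

// Group the reported voices by language and print name + gender for each,
// to see which languages come back with only one gender.
let voicesByLanguage = Dictionary(grouping: AVSpeechSynthesisVoice.speechVoices(), by: { $0.language })
for (language, voices) in voicesByLanguage.sorted(by: { $0.key < $1.key }) {
    for voice in voices {
        let gender: String
        switch voice.gender {
        case .male: gender = "male"
        case .female: gender = "female"
        default: gender = "unspecified"
        }
        print("\(language): \(voice.name) (\(gender)), quality: \(voice.quality.rawValue)")
    }
}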
Post not yet marked as solved
1 Replies
993 Views
We are building an online book-reading app that starts a group video call (we use the Agora SDK for the call). When the call is joined, one member reads the book aloud and the spoken words are highlighted on the other members' screens. For recording and recognizing the text we use SFSpeechRecognizer, but as soon as CallKit and the video call start, SFSpeechRecognizer fails every time it tries to record audio. Can you please suggest a way to record audio during the video call?

//
//  Speech.swift
//  Edsoma
//
//  Created by Kapil on 16/02/22.
//

import Foundation
import AVFoundation
import Speech

protocol SpeechRecognizerDelegate {
    func didSpoke(speechRecognizer: SpeechRecognizer, word: String?)
}

class SpeechRecognizer: NSObject {

    private let speechRecognizer = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US")) //1
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()
    var delegate: SpeechRecognizerDelegate?
    static let shared = SpeechRecognizer()
    var isOn = false

    func setup() {
        speechRecognizer?.delegate = self //3

        SFSpeechRecognizer.requestAuthorization { (authStatus) in //4
            var isButtonEnabled = false

            switch authStatus { //5
            case .authorized:
                isButtonEnabled = true
            case .denied:
                isButtonEnabled = false
                print("User denied access to speech recognition")
            case .restricted:
                isButtonEnabled = false
                print("Speech recognition restricted on this device")
            case .notDetermined:
                isButtonEnabled = false
                print("Speech recognition not yet authorized")
            @unknown default:
                break
            }

            OperationQueue.main.addOperation() {
                // self.microphoneButton.isEnabled = isButtonEnabled
            }
        }
    }

    func transcribeAudio(url: URL) {
        // create a new recognizer and point it at our audio
        let recognizer = SFSpeechRecognizer()
        let request = SFSpeechURLRecognitionRequest(url: url)

        // start recognition!
        recognizer?.recognitionTask(with: request) { [unowned self] (result, error) in
            // abort if we didn't get any transcription back
            guard let result = result else {
                print("There was an error: \(error!)")
                return
            }
            // if we got the final transcription back, print it
            if result.isFinal {
                // pull out the best transcription...
                print(result.bestTranscription.formattedString)
            }
        }
    }

    func startRecording() {
        isOn = true
        let inputNode = audioEngine.inputNode
        if recognitionTask != nil {
            inputNode.removeTap(onBus: 0)
            self.audioEngine.stop()
            self.recognitionRequest = nil
            self.recognitionTask = nil
            DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1) {
                self.startRecording()
            }
            return
            debugPrint("****** recognitionTask != nil *************")
        }

        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSession.Category.multiRoute)
            try audioSession.setMode(AVAudioSession.Mode.measurement)
            try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        } catch {
            print("audioSession properties weren't set because of an error.")
        }

        recognitionRequest = SFSpeechAudioBufferRecognitionRequest()

        guard let recognitionRequest = recognitionRequest else {
            fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
        }

        recognitionRequest.shouldReportPartialResults = true
        recognitionRequest.taskHint = .search

        recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
            var isFinal = false

            if result != nil {
                self.delegate?.didSpoke(speechRecognizer: self, word: result?.bestTranscription.formattedString)
                debugPrint(result?.bestTranscription.formattedString)
                isFinal = (result?.isFinal)!
            }

            if error != nil {
                debugPrint("Speech Error ====>", error)
                inputNode.removeTap(onBus: 0)
                self.audioEngine.stop()
                self.recognitionRequest = nil
                self.recognitionTask = nil
                if BookReadingSettings.isSTTEnable {
                    DispatchQueue.main.asyncAfter(deadline: DispatchTime.now() + 1) {
                        self.startRecording()
                    }
                }
                // self.microphoneButton.isEnabled = true
            }
        })

        // let recordingFormat = AVAudioFormat.init(commonFormat: .pcmFormatFloat32, sampleRate: <#T##Double#>, interleaved: <#T##Bool#>, channelLayout: <#T##AVAudioChannelLayout#>) //inputNode.outputFormat(forBus: 0)
        inputNode.removeTap(onBus: 0)
        let sampleRate = AVAudioSession.sharedInstance().sampleRate
        let recordingFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)
        inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
            self.recognitionRequest?.append(buffer)
        }

        audioEngine.prepare()

        do {
            try audioEngine.start()
        } catch {
            print("audioEngine couldn't start because of an error.")
        }
        debugPrint("Say something, I'm listening!")
        //textView.text = "Say something, I'm listening!"
    }

    /*
    func stopRecording() {
        isOn = false
        debugPrint("Recording stopped")
        self.audioEngine.stop()
        recognitionTask?.cancel()
        let inputNode = audioEngine.inputNode
        inputNode.removeTap(onBus: 0)
        self.recognitionRequest = nil
        self.recognitionTask = nil
    }
    */

    func stopRecording() {
        isOn = false
        debugPrint("Recording stopped")
        let inputNode = audioEngine.inputNode
        inputNode.removeTap(onBus: 0)
        self.audioEngine.stop()
        recognitionTask?.cancel()
        self.recognitionRequest = nil
        self.recognitionTask = nil
    }
}

extension SpeechRecognizer: SFSpeechRecognizerDelegate {
}
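One direction that may be worth trying (a sketch under assumptions, not a verified fix): during an active CallKit/VoIP call the audio session is largely driven by the call, so the .multiRoute category with .measurement mode used in startRecording() above may be rejected. Configuring .playAndRecord with the .voiceChat mode and the .mixWithOthers option at least asks the system to share the session rather than take it over:

import AVFoundation

// Hypothetical session setup for tapping the microphone while a VoIP call is active.
// Whether recognition actually succeeds still depends on how the call SDK owns the mic.
func configureSessionForCall() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord,
                            mode: .voiceChat,
                            options: [.mixWithOthers, .allowBluetooth, .defaultToSpeaker])
    try session.setActive(true, options: .notifyOthersOnDeactivation)
}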
Post not yet marked as solved
3 Replies
2.1k Views
I have updated to macOS Monterey and my code for SFSpeechRecognizer just broke. I get this error if I try to configure an offline speech recognizer for macOS:

Error Domain=kLSRErrorDomain Code=102 "Failed to access assets" UserInfo={NSLocalizedDescription=Failed to access assets, NSUnderlyingError=0x6000003c5710 {Error Domain=kLSRErrorDomain Code=102 "No asset installed for language=es-ES" UserInfo={NSLocalizedDescription=No asset installed for language=es-ES}}}

Here is a code snippet from a demo project:

private func process(url: URL) throws {
    speech = SFSpeechRecognizer.init(locale: Locale(identifier: "es-ES"))
    speech.supportsOnDeviceRecognition = true
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = true
    request.shouldReportPartialResults = false
    speech.recognitionTask(with: request) { result, error in
        guard let result = result else {
            if let error = error {
                print(error)
                return
            }
            return
        }
        if let error = error {
            print(error)
            return
        }
        if result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}

I have tried with different languages (es-ES, en-US) and it gives the same error each time. Any idea how to install these assets or how to fix this?
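A small defensive sketch (an assumed workaround, not a documented fix for the missing assets): gate requiresOnDeviceRecognition on supportsOnDeviceRecognition so the request can fall back to server recognition instead of failing when no on-device asset is installed for the locale.

import Speech

private func processWithFallback(url: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "es-ES")) else { return }
    let request = SFSpeechURLRecognitionRequest(url: url)
    // Only force on-device recognition when the recognizer reports support for it;
    // otherwise let the request go to the server instead of failing on missing assets.
    request.requiresOnDeviceRecognition = recognizer.supportsOnDeviceRecognition
    request.shouldReportPartialResults = false
    recognizer.recognitionTask(with: request) { result, error in
        if let error = error {
            print(error)
            return
        }
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}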
Post not yet marked as solved
12 Replies
5.8k Views
Setting a voice for AVSpeechSynthesizer leads to a heap buffer overflow. Turn on Address Sanitizer in Xcode 14 beta and run the following code. Is anybody else experiencing this problem, and is there any workaround?

let synthesizer = AVSpeechSynthesizer()
var synthVoice: AVSpeechSynthesisVoice?

func speak() {
    let voices = AVSpeechSynthesisVoice.speechVoices()

    for voice in voices {
        if voice.name == "Daniel" {    // select e.g. Daniel voice
            synthVoice = voice
        }
    }

    let utterance = AVSpeechUtterance(string: "Test 1 2 3")

    if let synthVoice = synthVoice {
        utterance.voice = synthVoice
    }

    synthesizer.speak(utterance) // AddressSanitizer: heap-buffer-overflow
}
Post not yet marked as solved
19 Replies
6.9k Views
Recently I updated to Xcode 14.0. I am building an iOS app to convert recorded audio into text. I got the following error while testing the application in the simulator (iOS 16.0):

[SpeechFramework] -[SFSpeechRecognitionTask handleSpeechRecognitionDidFailWithError:]_block_invoke Ignoring subsequent recongition error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"
Error Domain=kAFAssistantErrorDomain Code=1107 "(null)"

I would like to know what these error codes mean and why the error occurred.
Post not yet marked as solved
4 Replies
2.3k Views
Using the write method from AVSpeechSynthesizer produces the following error:

[AXTTSCommon] TTSPlaybackEnqueueFullAudioQueueBuffer: error -66686 enqueueing buffer

This issue was first seen on iOS 16. More information and a code snippet: https://stackoverflow.com/questions/73716508/play-audio-buffers-generated-by-avspeechsynthesizer-directly
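For context, this is roughly the write(_:toBufferCallback:) pattern in question; a minimal sketch of how the API is typically called, with the assumption that the callback delivers AVAudioPCMBuffer instances and that an empty buffer signals the end of synthesis:

import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello world")

// Collect the synthesized audio instead of playing it through the default route.
synthesizer.write(utterance) { buffer in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer else { return }
    if pcmBuffer.frameLength == 0 {
        // Synthesis appears to be finished.
        return
    }
    // Hand the buffer to an AVAudioEngine player node, a file writer, etc.
    print("Received \(pcmBuffer.frameLength) frames")
}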
Post marked as solved
3 Replies
917 Views
I'm trying to create a list from which users can pick the voice they want to use in my app. The code below works for the US, but if I change my locale settings to any other country, it fails to load any available voices even though I have downloaded voices for other countries.

func voices() -> [String] {
    AVSpeechSynthesisVoice.speechVoices().filter { $0.language == NSLocale.current.voiceLanguage }.map { $0.name }
    // AVSpeechSynthesisVoice.speechVoices().map { $0.name }
}

If I list all available voices, I can select the voices for other countries that are loaded.
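A sketch of one possible cause and workaround (assuming voiceLanguage in the snippet above resolves to a full locale identifier): voice.language is a BCP-47 code such as "en-US", so an exact string comparison against the current locale can fail for region variants like "en-GB" vs "en-US". Matching on the language code alone is more forgiving:

import AVFoundation

func voices() -> [String] {
    // Compare only the language part ("en", "de", ...) so, for example,
    // "en-GB" voices still match when the device locale is en-IE.
    let languageCode = Locale.current.languageCode ?? "en"
    return AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.language.hasPrefix(languageCode) }
        .map { $0.name }
}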
Post not yet marked as solved
2 Replies
1.7k Views
Crash - 1:
Fatal Exception: NSRangeException
0  CoreFoundation    0x9e38     __exceptionPreprocess
1  libobjc.A.dylib   0x178d8    objc_exception_throw
2  CoreFoundation    0x1af078   -[__NSCFString characterAtIndex:].cold.1
3  CoreFoundation    0x1a44c    -[CFPrefsPlistSource synchronize]
4  UIKitCore         0x1075f68  -[UIPredictionViewController predictionView:didSelectCandidate:]
5  TextInputUI       0x2461c    -[TUIPredictionView _didRecognizeTapGesture:]
6  UIKitCore         0xbe180    -[UIGestureRecognizerTarget _sendActionWithGestureRecognizer:]
7  UIKitCore         0x42c050   _UIGestureRecognizerSendTargetActions
8  UIKitCore         0x1a5a18   _UIGestureRecognizerSendActions
9  UIKitCore         0x86274    -[UIGestureRecognizer _updateGestureForActiveEvents]
10 UIKitCore         0x132348   _UIGestureEnvironmentUpdate
11 UIKitCore         0x9ba418   -[UIGestureEnvironment _deliverEvent:toGestureRecognizers:usingBlock:]
12 UIKitCore         0xf6df4    -[UIGestureEnvironment _updateForEvent:window:]
13 UIKitCore         0xfb760    -[UIWindow sendEvent:]
14 UIKitCore         0xfaa20    -[UIApplication sendEvent:]
15 UIKitCore         0xfa0d8    __dispatchPreprocessedEventFromEventQueue
16 UIKitCore         0x141e00   __processEventQueue
17 UIKitCore         0x44a4f0   __eventFetcherSourceCallback
18 CoreFoundation    0xd5f24    CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION
19 CoreFoundation    0xe22fc    __CFRunLoopDoSource0
20 CoreFoundation    0x661c0    __CFRunLoopDoSources0
21 CoreFoundation    0x7bb7c    __CFRunLoopRun
22 CoreFoundation    0x80eb0    CFRunLoopRunSpecific
23 GraphicsServices  0x1368     GSEventRunModal
24 UIKitCore         0x3a1668   -[UIApplication _run]
25 UIKitCore         0x3a12cc   UIApplicationMain

============================================================

Crash - 2:
Crashed: com.apple.root.background-qos
0 libobjc.A.dylib          0x1c20    objc_msgSend + 32
1 UIKitCore                0xb0e0d8  __37-[UIDictationConnection cancelSpeech]_block_invoke + 152
2 libdispatch.dylib        0x24b4    _dispatch_call_block_and_release + 32
3 libdispatch.dylib        0x3fdc    _dispatch_client_callout + 20
4 libdispatch.dylib        0x15b8c   _dispatch_root_queue_drain + 684
5 libdispatch.dylib        0x16284   _dispatch_worker_thread2 + 164
6 libsystem_pthread.dylib  0xdbc     _pthread_wqthread + 228
7 libsystem_pthread.dylib  0xb98     start_wqthread + 8

============================================================

I encountered the two keyboard-related crashes in iOS 16.x, but I cannot reproduce them. Can anyone tell me what is going on and how to fix them? Please let me know.
Post not yet marked as solved
1 Replies
886 Views
[catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
[AXTTSCommon] Invalid rule:
[AXTTSCommon] Invalid rule:
[AXTTSCommon] File file:///var/MobileAsset/AssetsV2/com_apple_MobileAsset_Trial_Siri_SiriTextToSpeech/purpose_auto/20700159d3b64fc92fc033a3e3946535bd231e4b.asset/AssetData/vocalizer-user-dict.dat contained data that was not null terminated
Post not yet marked as solved
0 Replies
552 Views
I'm looking at the SFSpeechRecognitionRequest object, but the documentation says not to use it directly and to use SFSpeechAudioBufferRecognitionRequest instead: https://developer.apple.com/documentation/speech/sfspeechrecognitionrequest I want to work with contextualStrings, but I'm not seeing that as an option on the other objects. Do I ignore this warning and use it anyway?
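For what it's worth, contextualStrings is declared on the SFSpeechRecognitionRequest base class, so the concrete subclasses inherit it; a minimal sketch (the strings themselves are arbitrary examples):

import Speech

// contextualStrings is inherited from SFSpeechRecognitionRequest,
// so it can be set on the buffer- or URL-based subclasses directly.
let request = SFSpeechAudioBufferRecognitionRequest()
request.contextualStrings = ["SFSpeechRecognizer", "CallKit", "AVSpeechSynthesizer"]
request.shouldReportPartialResults = true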
Post not yet marked as solved
5 Replies
1.8k Views
We recently started working on getting an iOS app to work on Macs with Apple Silicon as a "Designed for iPhone" app and are having issues with speech synthesis. Specifically, voices returned by AVSpeechSynthesisVoice.speechVoices() do not all work on the Mac. When we build an utterance and attempt to speak, the synthesizer falls back on a default voice and says some very odd text about voice parameters (that is not in the utterance speech text) before it does say the intended speech.

Here is some sample code to set up the utterance and speak:

func speak(_ text: String, _ settings: AppSettings) {
    let utterance = AVSpeechUtterance(string: text)
    if let voice = AVSpeechSynthesisVoice(identifier: settings.selectedVoiceIdentifier) {
        utterance.voice = voice
        print("speak: voice assigned \(voice.audioFileSettings)")
    } else {
        print("speak: voice error")
    }
    utterance.rate = settings.speechRate
    utterance.pitchMultiplier = settings.speechPitch
    do {
        let audioSession = AVAudioSession.sharedInstance()
        try audioSession.setCategory(.playback, mode: .default, options: .duckOthers)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        self.synthesizer.speak(utterance)
        return
    } catch let error {
        print("speak: Error setting up AVAudioSession: \(error.localizedDescription)")
    }
}

When running the app on the Mac, this is the kind of error we get with "com.apple.eloquence.en-US.Rocko" as the selectedVoiceIdentifier:

speak: voice assgined [:]
2023-05-29 18:00:14.245513-0700 A.I.[9244:240554] [aqme] AQMEIO_HAL.cpp:742 kAudioDevicePropertyMute returned err 2003332927
2023-05-29 18:00:14.410477-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.412837-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.413774-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.414661-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.415544-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.416384-0700 A.I.[9244:240554] Could not retrieve voice [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null)
2023-05-29 18:00:14.416804-0700 A.I.[9244:240554] [AXTTSCommon] Audio Unit failed to start after 5 attempts.
2023-05-29 18:00:14.416974-0700 A.I.[9244:240554] [AXTTSCommon] VoiceProvider: Could not start synthesis for request SSML Length: 140, Voice: [AVSpeechSynthesisProviderVoice 0x6000033794f0] Name: Rocko, Identifier: com.apple.eloquence.en-US.Rocko, Supported Languages ( "en-US" ), Age: 0, Gender: 0, Size: 0, Version: (null), converted from tts request [TTSSpeechRequest 0x600002c29590] <speak><voice name="com.apple.eloquence.en-US.Rocko">How much wood would a woodchuck chuck if a wood chuck could chuck wood?</voice></speak> language: en-US footprint: premium rate: 0.500000 pitch: 1.000000 volume: 1.000000
2023-05-29 18:00:14.428421-0700 A.I.[9244:240360] [VOTSpeech] Failed to speak request with error: Error Domain=TTSErrorDomain Code=-4010 "(null)". Attempting to speak again with fallback identifier: com.apple.voice.compact.en-US.Samantha

When we run AVSpeechSynthesisVoice.speechVoices(), "com.apple.eloquence.en-US.Rocko" is absolutely in the list, but it fails to speak properly. Notice that the line:

print("speak: voice assigned \(voice.audioFileSettings)")

shows:

speak: voice assigned [:]

The .audioFileSettings being empty seems to be a common factor for the voices that do not work properly on the Mac. For voices that do work, we see this kind of output and values in the .audioFileSettings:

speak: voice assigned ["AVFormatIDKey": 1819304813, "AVLinearPCMBitDepthKey": 16, "AVLinearPCMIsBigEndianKey": 0, "AVLinearPCMIsFloatKey": 0, "AVSampleRateKey": 22050, "AVLinearPCMIsNonInterleaved": 0, "AVNumberOfChannelsKey": 1]

So we added a function to check the .audioFileSettings for each voice returned by AVSpeechSynthesisVoice.speechVoices():

// The voices are set in init():
var voices = AVSpeechSynthesisVoice.speechVoices()
...
func checkVoices() {
    DispatchQueue.global().async { [weak self] in
        guard let self = self else { return }
        let checkedVoices = self.voices.map { ($0.0, $0.0.audioFileSettings.count) }
        DispatchQueue.main.async {
            self.voices = checkedVoices
        }
    }
}

That looks simple enough, and it does work to identify which voices have no data in their .audioFileSettings. But we have to run it asynchronously because on a real iPhone device it takes more than 9 seconds and produces a tremendous amount of error spew to the console:

2023-06-02 10:56:59.805910-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:56:59.971435-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.122976-0700 A.I.[17186:910118] [catalog] Query for com.apple.MobileAsset.VoiceServices.VoiceResources failed: 2
2023-06-02 10:57:00.144430-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 11006 (Can't compile rule): regularExpression=\Oviedo(?=, (\x1b\\pause=\d+\\)?Florida)\b, message=unrecognized character follows \, characterPosition=1
2023-06-02 10:57:00.147993-0700 A.I.[17186:910116] [AXTTSCommon] MauiVocalizer: 16038 (Resource load failed): component=ttt/re, uri=, contentType=application/x-vocalizer-rettt+text, lhError=88602000
2023-06-02 10:57:00.148036-0700 A.I.[17186:910116] [AXTTSCommon] Error loading rules: 2147483648
... This goes on and on and on ...

There must be a better way?
Post not yet marked as solved
0 Replies
893 Views
AVSpeechSynthesizer delay issue with "[AXTTSCommon] Invalid rule:" logs. I play text with AVSpeechSynthesizer and the sound comes out normally, but when I call speak(_ utterance: AVSpeechUtterance), the log

[AXTTSCommon] Invalid rule:

is printed two or three times and the speech starts late. The delay is usually around 0.6 seconds, and depending on the user, device, or situation it can exceed 1 second. The log carries no additional information, and it appears again the same way on later calls:

[AXTTSCommon] Invalid rule:
[AXTTSCommon] Invalid rule:

This issue occurs on iOS 16.0 and later. Is there any way to fix it? If not, I would like to know whether there is a way to reduce the startup delay after the speak(_ utterance: AVSpeechUtterance) call, even if the log is still printed.
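One workaround sometimes suggested for first-utterance latency (a sketch only, and an assumption that the delay here is ordinary cold-start voice loading rather than something tied to the "Invalid rule" log): keep a single long-lived AVSpeechSynthesizer and prewarm it at launch with a silent utterance so the voice is loaded before the first real call to speak.

import AVFoundation

final class Speaker {
    // One long-lived synthesizer; recreating it per utterance adds startup cost.
    private let synthesizer = AVSpeechSynthesizer()

    init() {
        // Prewarm: speak a near-empty utterance at volume 0 so the voice loads early.
        let warmup = AVSpeechUtterance(string: " ")
        warmup.volume = 0
        synthesizer.speak(warmup)
    }

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)
    }
}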
Post not yet marked as solved
2 Replies
1.3k Views
I am using SFSpeechRecognizer to perform speech recognition, but I am getting the following error:

[SpeechFramework] -[SFSpeechRecognitionTask localSpeechRecognitionClient:speechRecordingDidFail:]_block_invoke Ignoring subsequent local speech recording error: Error Domain=kAFAssistantErrorDomain Code=1101 "(null)"

Setting requiresOnDeviceRecognition to false works correctly, but previously it also worked with true, with no error. The value of supportsOnDeviceRecognition was true, so the device reports that it supports on-device speech recognition. iPad Pro 11-inch, iOS 16.5. Is this expected behavior?
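A defensive sketch to stay functional while this is unresolved (purely an assumption that the 1101 failure is specific to the local recognition path, not documented behavior): retry once without the on-device requirement when the on-device task fails.

import Speech

func recognize(url: URL, requireOnDevice: Bool = true) {
    guard let recognizer = SFSpeechRecognizer() else { return }
    let request = SFSpeechURLRecognitionRequest(url: url)
    request.requiresOnDeviceRecognition = requireOnDevice
    recognizer.recognitionTask(with: request) { result, error in
        if let error = error as NSError?, requireOnDevice {
            // Retry via server-based recognition if the local path errors out.
            print("On-device recognition failed (code \(error.code)); retrying without it")
            recognize(url: url, requireOnDevice: false)
            return
        }
        if let result = result, result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}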
Post not yet marked as solved
2 Replies
1.3k Views
I need a simple text-to-speech avatar in my iOS app. iOS already has Memojis ready to go, but I cannot find anything in the dev docs on how to access Memojis to use as a tool in app development. Am I missing something? Also, can anyone point me to any resources besides the Apple docs for using AVSpeechSynthesizer?
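I'm not aware of a public API for driving Memoji from an app, but for the speech side, a minimal AVSpeechSynthesizer example looks like this (a sketch; the language and rate are arbitrary choices):

import AVFoundation

// Keep a strong reference for the duration of speech.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

speak("Hello from AVSpeechSynthesizer")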
Post not yet marked as solved
2 Replies
789 Views
Hi everyone, I might need some help with on-device recognition. It seems that the speech recognition task will discard whatever it has transcribed once a new sentence starts (or what it believes is a new sentence) during a single audio session, when requiresOnDeviceRecognition is set to true. This doesn't happen with requiresOnDeviceRecognition set to false. System environment: macOS 14 with Xcode 15, deploying to iOS 17. Thank you all!
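A workaround sketch under the assumption that the on-device path delivers a final result for each "sentence" before it starts over; if it does not, you would instead snapshot the longest partial text seen so far. The idea is to accumulate finalized text yourself and treat the partial results as belonging only to the segment in progress:

import Speech

var transcript = ""       // everything finalized so far
var currentSegment = ""   // partial text of the sentence in progress

// Result handler for recognitionTask(with:resultHandler:), assuming a recognizer
// and request are already configured elsewhere.
func handle(result: SFSpeechRecognitionResult?, error: Error?) {
    guard let result = result else { return }
    currentSegment = result.bestTranscription.formattedString
    if result.isFinal {
        // Fold the finished sentence into the running transcript so it is not
        // lost when the recognizer starts over on the next sentence.
        transcript += (transcript.isEmpty ? "" : " ") + currentSegment
        currentSegment = ""
    }
    print(transcript.isEmpty ? currentSegment : transcript + " " + currentSegment)
}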
Post not yet marked as solved
20 Replies
5.4k Views
I see a lot of crashes on iOS 17 beta regarding some problem with "Text To Speech". Does anybody have a clue why TTS crashes? Is anybody else seeing the same problem?

Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x000000037f729380
Exception Codes: 0x0000000000000001, 0x000000037f729380
VM Region Info: 0x37f729380 is not in any region. Bytes after previous region: 3748828033 Bytes before following region: 52622617728
REGION TYPE          START - END          [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
MALLOC_NANO          280000000-2a0000000  [512.0M] rw-/rwx SM=PRV ---> GAP OF 0xd20000000 BYTES
commpage (reserved)  fc0000000-1000000000 [ 1.0G]  ---/--- SM=NUL ...(unallocated)
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [36389]
Triggered by Thread: 9
.....
Thread 9 name:
Thread 9 Crashed:
0  libobjc.A.dylib    0x000000019eeff248 objc_retain_x8 + 16
1  AudioToolboxCore   0x00000001b2da9d80 auoop::RenderPipeUser::~RenderPipeUser() + 112 (AUOOPRenderPipePool.mm:400)
2  AudioToolboxCore   0x00000001b2e110b4 -[AUAudioUnit_XPC internalDeallocateRenderResources] + 92 (AUAudioUnit_XPC.mm:904)
3  AVFAudio           0x00000001bfa4cc04 AUInterfaceBaseV3::Uninitialize() + 60 (AUInterface.mm:524)
4  AVFAudio           0x00000001bfa894bc AVAudioEngineGraph::PerformCommand(AUGraphNodeBaseV3&, AVAudioEngineGraph::ENodeCommand, void*, unsigned int) const + 772 (AVAudioEngineGraph.mm:3317)
5  AVFAudio           0x00000001bfa93550 AVAudioEngineGraph::_Uninitialize(NSError**) + 132 (AVAudioEngineGraph.mm:1469)
6  AVFAudio           0x00000001bfa4b50c AVAudioEngineImpl::Stop(NSError**) + 396 (AVAudioEngine.mm:1081)
7  AVFAudio           0x00000001bfa4b094 -[AVAudioEngine stop] + 48 (AVAudioEngine.mm:193)
8  TextToSpeech       0x00000001c70b3c5c __55-[TTSSynthesisProviderAudioEngine renderSpeechRequest:]_block_invoke + 1756 (TTSSynthesisProviderAudioEngine.m:613)
9  libdispatch.dylib  0x00000001ae4b0740 _dispatch_call_block_and_release + 32 (init.c:1519)
10 libdispatch.dylib  0x00000001ae4b2378 _dispatch_client_callout + 20 (object.m:560)
11 libdispatch.dylib  0x00000001ae4b990c _dispatch_lane_serial_drain + 748 (queue.c:3885)
12 libdispatch.dylib  0x00000001ae4ba470 _dispatch_lane_invoke + 432 (queue.c:3976)
13 libdispatch.dylib  0x00000001ae4c5074 _dispatch_root_queue_drain_deferred_wlh + 288 (queue.c:6913)
14 libdispatch.dylib  0x00000001ae4c48e8 _dispatch_workloop_worker_thread + 404 (queue.c:6507)
...
Thread 9 crashed with ARM Thread State (64-bit):
x0: 0x0000000283309360  x1: 0x0000000000000000  x2: 0x0000000000000000  x3: 0x00000002833093c0
x4: 0x00000002833093c0  x5: 0x0000000101737740  x6: 0x0000000000000013  x7: 0x00000000ffffffff
x8: 0x0000000283309360  x9: 0x3c788942d067009a  x10: 0x0000000101547000 x11: 0x0000000000000000
x12: 0x00000000000007fb x13: 0x00000000000007fd x14: 0x000000001ee24020 x15: 0x0000000000000020
x16: 0x0000b1037f729360 x17: 0x000000037f729360 x18: 0x0000000000000000 x19: 0x0000000000000000
x20: 0x00000001016a8de8 x21: 0x0000000283e21d00 x22: 0x0000000283b3f1f8 x23: 0x0000000283098000
x24: 0x00000001bfb4fc35 x25: 0x00000001bfb4fc43 x26: 0x000000028033a688 x27: 0x0000000280c93090
x28: 0x0000000000000000 fp: 0x000000016fc86490  lr: 0x00000001b2da9d80
sp: 0x000000016fc863e0  pc: 0x000000019eeff248  cpsr: 0x1000
esr: 0x92000006 (Data Abort) byte read Translation fault
Post not yet marked as solved
1 Replies
838 Views
Hi, I'm creating a text-to-speech app to read PDF/EPUB files aloud. It was working fine on older iOS versions, but on iOS 16.5 it stops working in the background: speaking stops after just a couple of minutes, which is not what I want. I would really appreciate a solution for this!