Dive into the technical aspects of audio on your device, including codecs, format support, and customization options.

Audio Documentation

Posts under Audio subtopic

Post | Replies | Boosts | Views | Activity

AVMIDIPlayer.play() function crashes on iOS 18
It only occurs on iOS 18+. Backtrace attached below.

Exception Codes: 0x0000000000000000, 0x0000000000000000
Termination Reason: SIGNAL 6 Abort trap: 6
Terminating Process: NoteKeys [24384]
Triggered by Thread: 0

Last Exception Backtrace:
0 CoreFoundation 0x1a2d4c7cc __exceptionPreprocess + 164 (NSException.m:249)
1 libobjc.A.dylib 0x1a001f2e4 objc_exception_throw + 88 (objc-exception.mm:356)
2 CoreFoundation 0x1a2e47748 +[NSException raise:format:] + 128 (NSException.m:0)
3 AVFAudio 0x1bd41f4c8 -[AVMIDIPlayer play:] + 300 (AVMIDIPlayer.mm:145)
4 NoteKeys 0x1023c0670 SoundGenerator.playData() + 20 (SoundGenerator.swift:170)
5 NoteKeys 0x1023c0670 EditViewController.playBtnTapped(startIndex:) + 940 (EditViewController.swift:2034)
6 NoteKeys 0x1024497fc specialized Keyboard.playBtnTapped(sender:) + 1904 (Keyboard.swift:1249)
7 NoteKeys 0x10244631c Keyboard.playBtnTapped(sender:) + 4 (<compiler-generated>:0)
8 NoteKeys 0x10244631c @objc Keyboard.playBtnTapped(sender:) + 48
9 UIKitCore 0x1a58739cc -[UIApplication sendAction:to:from:forEvent:] + 100 (UIApplication.m:5816)
10 UIKitCore 0x1a58738a4 -[UIControl sendAction:to:forEvent:] + 112 (UIControl.m:942)
11 UIKitCore 0x1a58736f4 -[UIControl _sendActionsForEvents:withEvent:] + 324 (UIControl.m:1013)
12 UIKitCore 0x1a5fe8d8c -[UIButton _sendActionsForEvents:withEvent:] + 124 (UIButton.m:4198)
13 UIKitCore 0x1a5fea5a0 -[UIControl touchesEnded:withEvent:] + 400 (UIControl.m:692)
14 UIKitCore 0x1a57bb9ac -[UIWindow _sendTouchesForEvent:] + 852 (UIWindow.m:3318)
15 UIKitCore 0x1a57bb3d8 -[UIWindow sendEvent:] + 2964 (UIWindow.m:3641)
16 UIKitCore 0x1a564fb70 -[UIApplication sendEvent:] + 376 (UIApplication.m:12972)
17 UIKitCore 0x1a565009c __dispatchPreprocessedEventFromEventQueue + 1048 (UIEventDispatcher.m:2686)
18 UIKitCore 0x1a5659f3c __processEventQueue + 5696 (UIEventDispatcher.m:3044)
19 UIKitCore 0x1a5552c60 updateCycleEntry + 160 (UIEventDispatcher.m:133)
20 UIKitCore 0x1a55509d8 _UIUpdateSequenceRun + 84 (_UIUpdateSequence.mm:136)
21 UIKitCore 0x1a5550628 schedulerStepScheduledMainSection + 172 (_UIUpdateScheduler.m:1171)
22 UIKitCore 0x1a555159c runloopSourceCallback + 92 (_UIUpdateScheduler.m:1334)
23 CoreFoundation 0x1a2d20328 __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28 (CFRunLoop.c:1970)
24 CoreFoundation 0x1a2d202bc __CFRunLoopDoSource0 + 176 (CFRunLoop.c:2014)
25 CoreFoundation 0x1a2d1ddc0 __CFRunLoopDoSources0 + 244 (CFRunLoop.c:2051)
26 CoreFoundation 0x1a2d1cfbc __CFRunLoopRun + 840 (CFRunLoop.c:2969)
27 CoreFoundation 0x1a2d1c830 CFRunLoopRunSpecific + 588 (CFRunLoop.c:3434)
28 GraphicsServices 0x1eecfc1c4 GSEventRunModal + 164 (GSEvent.c:2196)
29 UIKitCore 0x1a5882eb0 -[UIApplication _run] + 816 (UIApplication.m:3844)
30 UIKitCore 0x1a59315b4 UIApplicationMain + 340 (UIApplication.m:5496)
31 NoteKeys 0x10254bc10 main + 68 (AppDelegate.swift:15)
32 dyld 0x1c870aec8 start + 2724 (dyldMain.cpp:1334)

Thanks very much for any help :)
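For reference, a minimal sketch of the AVMIDIPlayer setup path leading up to the play call in the crash above; the resource names and surrounding function are hypothetical, not the poster's SoundGenerator code:

import AVFAudio

func makeMIDIPlayerSketch() -> AVMIDIPlayer? {
    // Hypothetical resource names, for illustration only.
    guard let midiURL = Bundle.main.url(forResource: "song", withExtension: "mid"),
          let bankURL = Bundle.main.url(forResource: "sounds", withExtension: "sf2") else { return nil }
    do {
        let player = try AVMIDIPlayer(contentsOf: midiURL, soundBankURL: bankURL)
        player.prepareToPlay() // pre-roll before the first play() call
        player.play {
            print("MIDI playback finished")
        }
        return player // the caller must keep a strong reference while playback runs
    } catch {
        // The initializer throws; the crash above is an NSException raised later,
        // inside -[AVMIDIPlayer play:], which Swift do/catch cannot intercept.
        print("Failed to create AVMIDIPlayer: \(error)")
        return nil
    }
}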
1
0
302
Feb ’25
watchOS longFormAudio cannot deactivate
My workout watch app supports audio playback during exercise sessions. When users carry an Apple Watch, an iPhone, and AirPods, with the AirPods connected to the iPhone, I want to route audio from the Apple Watch to the AirPods for playback. I've implemented this functionality using the following code:

try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
try await session.activate()

When users are playing music on the iPhone and trigger my code in the watch app, the Apple Watch correctly guides users to select AirPods, pauses the iPhone's music, and plays my audio. However, when playback finishes and I end the session using the code below:

try session.setActive(false, options: [.notifyOthersOnDeactivation])

the iPhone doesn't automatically resume the previously interrupted music playback; it requires manual intervention. Is this expected behavior, or am I missing other important steps in my code?
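For context, a sketch of the activate/deactivate flow being described, assuming a watchOS target and an already-configured player; note that .notifyOthersOnDeactivation only asks the interrupted app to resume, it cannot force the iPhone's Music app to do so:

import AVFAudio

func startLongFormPlayback() async throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: [])
    // On watchOS, activating a .longFormAudio session can present the route
    // picker so the user chooses AirPods (or another Bluetooth output).
    try await session.activate()
    // ... start AVAudioPlayer / AVAudioEngine playback here ...
}

func endLongFormPlayback() throws {
    let session = AVAudioSession.sharedInstance()
    // This only notifies other apps that they may resume; whether the
    // iPhone's Music app actually resumes is up to that app and the system.
    try session.setActive(false, options: [.notifyOthersOnDeactivation])
}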
1
0
301
Nov ’25
Is there an error with SpatialAudioCLI?
Hi everyone, I downloaded the source code EditingSpatialAudioWithAnAudioMix.zip from https://developer.apple.com/documentation/Cinematic/editing-spatial-audio-with-an-audio-mix. When I ran one of the command-line actions, named "process", the program crashed. From the source code, I found that the value of componentType is set to kAudioUnitType_FormatConverter:

// The actual `AudioUnit`.
public var auAudioMix = AVAudioUnitEffect()

init() {
    // Generate a component description for the audio unit.
    let componentDescription = AudioComponentDescription(
        componentType: kAudioUnitType_FormatConverter,
        componentSubType: kAudioUnitSubType_AUAudioMix,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    auAudioMix = AVAudioUnitEffect(audioComponentDescription: componentDescription)
}

But according to the documentation at https://developer.apple.com/documentation/avfaudio/avaudiouniteffect/init(audiocomponentdescription:), it seems that componentType cannot be set to kAudioUnitType_FormatConverter. Has anyone encountered this problem?
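Since AVAudioUnitEffect's initializer is documented for effect-type components, one possible workaround, offered only as an untested sketch, is to instantiate the same component description through the generic AVAudioUnit API, which accepts any component type:

import AVFAudio
import AudioToolbox

// Same description the sample uses, wrapped by the generic AVAudioUnit
// instantiation API instead of AVAudioUnitEffect.
func makeAudioMixUnit() async throws -> AVAudioUnit {
    let componentDescription = AudioComponentDescription(
        componentType: kAudioUnitType_FormatConverter,
        componentSubType: kAudioUnitSubType_AUAudioMix,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    // Returns a plain AVAudioUnit, which can still be attached to an AVAudioEngine graph.
    return try await AVAudioUnit.instantiate(with: componentDescription, options: [])
}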
1
0
154
Nov ’25
Audio always comes from the most recently connected external USB mic
Hello! I have two mics connected to a USB hub, and the USB hub is connected to my iPad. Both mics appear in the audio session's list of available inputs. The problem is that regardless of which mic I select in my app (using setPreferredInput() on the audio session), the audio keeps coming from the mic that was most recently connected to the USB hub. Does anyone know if this is a limitation in iPadOS/iOS?
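A minimal sketch of explicit input selection, in case it helps compare setups; it assumes a record-capable category and that portName is how the desired mic is identified:

import AVFAudio

func selectUSBMic(named name: String) throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [])
    guard let port = session.availableInputs?.first(where: { $0.portName == name }) else {
        print("No input named \(name); available: \(session.availableInputs?.map(\.portName) ?? [])")
        return
    }
    // The preferred input is a hint to the system; it normally takes effect on (re)activation.
    try session.setPreferredInput(port)
    try session.setActive(true)
    print("Route now uses:", session.currentRoute.inputs.map(\.portName))
}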
1
1
205
Jul ’25
Improving Speech Analyzer Transcription for technical terms
I am developing an app with transcription, and I am exploring ways to improve the SpeechAnalyzer/Transcriber output for technical terms. The older SFSpeech recognition APIs could be augmented with contextualStrings. Does something similar exist for SpeechAnalyzer/Transcriber? If so, please point me toward the documentation and any sample code that may exist for this. If there are other options, please let me know.
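For reference, the pre-SpeechAnalyzer pattern the post refers to looks like this (the term list is hypothetical); whether SpeechAnalyzer/SpeechTranscriber exposes an equivalent is exactly the open question here:

import Speech

func transcribeWithDomainTerms(url: URL) {
    let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    let request = SFSpeechURLRecognitionRequest(url: url)
    // Bias recognition toward domain-specific vocabulary (hypothetical terms).
    request.contextualStrings = ["AVAudioSession", "bronchoscopy", "Kubernetes"]
    recognizer?.recognitionTask(with: request) { result, error in
        if let result {
            print(result.bestTranscription.formattedString)
        }
    }
}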
1
1
255
Sep ’25
Apple Music iOS 26 features in Android
Many users like me use Apple Music on Android, where the app is almost as feature-rich as it is on iOS. It would be fantastic if the developers could add the new iOS 26 features to the Android app, along with a minor UI refresh. I know it's challenging to implement Liquid Glass on Android hardware and design, but features like AutoMix, pronunciation, and translation could be added. Kindly consider this request!
1
0
144
Jul ’25
AppleAVBAudio assertion information
Hi, I'm currently developing an AVB hardware device, and I'm stuck because the Apple AVB stack is throwing errors without much information. Is there any way to get more information about these assertions and why they are happening? Furthermore, is there any documentation on the AppleAVBAudio module? It would be very handy. Here are the logs shown in the Console, filtering the log data using "process == "coreaudiod"":

Timestamp Thread Type Activity PID TTL
2025-12-05 15:44:27.087043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.087545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.088546+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.089043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.089545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.090043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.090545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.091043+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.091545+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.092044+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.092544+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.093044+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.093552+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.094050+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
2025-12-05 15:44:27.094543+0100 0x15ae74 Default 0x0 12965 0 coreaudiod: (AppleAVBAudio) Assert: <private> (value 0x0 0), <private> file: <private>, line: 1533
1
0
85
2w
Logic Pro cannot load v3 audio unit with framework compiled with Swift 6
macOS Sequoia 15.4.1 (24E263), Xcode 16.3 (16E140), Logic Pro 11.2.1. I've been developing a complex Audio Unit for macOS that works perfectly well in its own bespoke host app and is now well into its beta-testing stage. It took some effort to get it to work well in Logic Pro, but all was fine and working well until now. The AU part is an empty app extension with a framework containing its code. The framework contains Swift code for the UI and C code for the DSP parts. When the framework is compiled using the Swift 5 compiler, the AU runs in Logic with no problems. (I should also mention that the AU passes the strictest auval tests.) But when the framework is compiled with Swift 6, Logic Pro cannot load it: Logic displays a message saying the audio unit could not be loaded and to contact the developer. My own host app loads the AU perfectly well with the Swift 6 version, so I know there's nothing wrong with the audio unit. I cannot find any differences in any of the built output files except, of course, the actual binary code in the framework. I've worked for hours on this and cannot find a solution other than to build the framework with Swift 5. (I worked hard to get all the async code updated and working with Swift 6, so I feel a little cheated!) What is happening? Is this a bug in Logic? Is this a bug in the Swift 6 compiler/linker? I'm at the hands-in-the-air, tearing-out-hair stage, once again!
1
0
363
Jul ’25
Users experiencing frequent media services reset interruptions
I work on an iOS app that records video and audio. We've been getting reports for a while from users who are experiencing their video recordings being cut off. After investigating, I found that many users are receiving the AVAudioSessionMediaServicesWereResetNotification (.mediaServicesWereResetNotification) notification while recording. It's associated with the AVFoundationErrorDomain[-11819] error, which seems to indicate that the system audio daemon crashed. We have a handler registered to end the recording, show the user a prompt, and restart our AV sessions. However, from our logs this looks to be happening to hundreds of users every day, and it's not an ideal user experience, so I would like to figure out why this is happening and whether it's due to something that we're doing wrong. The debug menu option to trigger the audio session reset is not of much use, because it can't be triggered unless you leave the app and go to system settings, so our app can't be recording video when the debug reset is triggered. So far I haven't found a way to reproduce the issue locally, but I can see from our logs that it's happening to users. I've found some posts online from developers experiencing similar issues, but none of them seem to directly address our issue. The system error doesn't include a userInfo dictionary, and as far as I can tell it's a system daemon crash, so any logs would need to be captured from the OS. Is there any way that I could get more information about what may be causing this error that I may have missed?
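For concreteness, a minimal sketch of the reset-handler pattern the post describes; the rebuild closure is a stand-in for the app's own recovery path:

import AVFAudio

final class MediaServicesResetObserver {
    private var token: NSObjectProtocol?

    init(rebuild: @escaping () -> Void) {
        token = NotificationCenter.default.addObserver(
            forName: AVAudioSession.mediaServicesWereResetNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { _ in
            // After this notification, session configuration, players, engines,
            // and capture sessions are invalid and must be torn down and rebuilt.
            rebuild()
        }
    }

    deinit {
        if let token { NotificationCenter.default.removeObserver(token) }
    }
}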
1
0
91
Apr ’25
Music in iOS 26.2
I'm running the iOS 26.2 public beta and my album artwork is missing from the Music app (I'm not using Apple Music; I source my album artwork with Google). Do I need to wait for a new update?
1
0
155
Nov ’25
Hybrid Wired-to-Wireless Audio Mode Using AirPods Charging Case
Many Apple users own both Bluetooth earphones (AirPods) and traditional wired earphones. While Bluetooth audio provides freedom of movement, some users still prefer wired earphones for comfort, sound profile, or personal preference. However, plugging wired earphones directly into an iPhone can feel restrictive and inconvenient during daily use. This proposal suggests a hybrid audio approach where wired earphones can be connected to a Bluetooth-enabled AirPods charging case (or a similar Apple-designed module), allowing users to enjoy wired earphones without a physical connection to the iPhone.

#Problem Statement
* Wired earphones offer consistent audio quality and zero latency
* Bluetooth earphones provide freedom from cables
* Users must currently choose one or the other
* Plugging wired earphones into an iPhone limits movement and can feel intrusive in daily scenarios (walking, commuting, working)
There is no native Apple solution that allows wired earphones to function wirelessly while maintaining Apple's audio experience standards.

#Proposed Solution
Introduce a Wired-to-Wireless Audio Mode through the AirPods charging case or a dedicated Apple Bluetooth audio bridge. How it works:
* User plugs wired earphones into the AirPods case (or a future AirPods accessory port)
* The case acts as a Bluetooth audio transmitter
* Audio is streamed wirelessly from iPhone to the case
* The case outputs audio to the wired earphones

#User experiences
* No cable connected to the iPhone
* Familiar wired earphone sound
* Freedom of movement similar to Bluetooth earbuds

User Experience (UX Flow)
* Plug wired earphones into the AirPods case
* iPhone automatically detects: "Wired Earphones via AirPods Case"
* Seamless pairing using the existing AirPods framework
* Audio controls, volume, and switching handled through iOS
* No additional apps required

#Key Benefits
* Combines wired sound reliability with wireless convenience
* Reduces physical cable disturbance during use
* Extends usefulness of existing wired earphones
* Minimal learning curve for users
* Fits naturally into Apple's ecosystem and design philosophy

#Privacy & Performance Considerations
* On-device audio processing only
* No cloud involvement
* Low-latency audio using Apple's proprietary Bluetooth codecs
* Power-efficient usage leveraging the AirPods case battery

#Target Users
* Users who prefer wired earphones but want wireless freedom
* Commuters and walkers
* Developers and professionals who multitask
* Users sensitive to Bluetooth earbud fit or comfort

#Ecosystem Fit
* Builds on the existing AirPods pairing and audio stack
* Aligns with Apple's focus on seamless UX
* Could be implemented via new AirPods hardware, a firmware update plus accessory, or a dedicated Apple audio bridge
1
0
246
4d
Why do personal music recommendations contain no more than 12 items?
Hi! I get a personal recommendations MusicItemCollection using this code:

func getRecommendations() async throws -> MusicItemCollection<MusicPersonalRecommendation> {
    let request = MusicPersonalRecommendationsRequest()
    let response = try await request.response()
    let recommendations = response.recommendations
    return recommendations
}

However, all recommendations contain no more than 12 MusicItems, while the Music.app application provides many more for some recommendations; for example, for the "You recently listened" recommendation, Music.app displays 40 items. Each recommendation has an items property that contains a collection of music items, MusicItemCollection<MusicPersonalRecommendation.Item>, and the hasNextBatch property for these collections is always false. I expected that loading additional items would be available for some collections. Please tell me if I'm doing something wrong, or is this a MusicKit bug? Thank you!
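For comparison, the batch-paging pattern one would normally expect MusicItemCollection to support is sketched below; as the post notes, hasNextBatch reports false for these recommendation item collections, so this illustrates the expectation rather than a confirmed workaround:

import MusicKit

func allItems(for recommendation: MusicPersonalRecommendation) async throws -> [MusicPersonalRecommendation.Item] {
    var collected = Array(recommendation.items)
    var current = recommendation.items
    // Keep requesting further batches while the collection says more exist.
    while current.hasNextBatch {
        guard let next = try await current.nextBatch() else { break }
        collected.append(contentsOf: next)
        current = next
    }
    return collected
}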
1
0
462
Feb ’25
Crackling/Popping sound when using AVAudioUnitTimePitch
I have a simple AVAudioEngine graph as follows: AVAudioPlayerNode -> AVAudioUnitEQ -> AVAudioUnitTimePitch -> AVAudioUnitReverb -> main mixer node of the AVAudioEngine. Whenever I have an AVAudioUnitTimePitch or AVAudioUnitVarispeed in the graph, I notice a very distinct crackling/popping sound in my AirPods Pro 2 when starting up the engine and playing the AVAudioPlayerNode, and I am unable to find the reason why this is happening. When I remove the node, the crackling completely goes away. How do I fix this problem, since I need the user to be able to control the pitch and rate of the audio during playback?

import AVKit
import SwiftUI

@Observable
@MainActor
class AudioEngineManager {
    nonisolated private let engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()
    private let reverb = AVAudioUnitReverb()
    private let pitch = AVAudioUnitTimePitch()
    private let eq = AVAudioUnitEQ(numberOfBands: 10)

    private var audioFile: AVAudioFile?
    private var fadePlayPauseTask: Task<Void, Error>?
    private var playPauseCurrentFadeTime: Double = 0

    init() {
        setupAudioEngine()
    }

    private func setupAudioEngine() {
        guard let url = Bundle.main.url(forResource: "Song name goes here", withExtension: "mp3") else {
            print("Audio file not found")
            return
        }

        do {
            audioFile = try AVAudioFile(forReading: url)
        } catch {
            print("Failed to load audio file: \(error)")
            return
        }

        reverb.loadFactoryPreset(.mediumHall)
        reverb.wetDryMix = 50

        pitch.pitch = 0 // Increase pitch by 500 cents (5 semitones)

        engine.attach(playerNode)
        engine.attach(pitch)
        engine.attach(reverb)
        engine.attach(eq)

        // Connect: player -> pitch -> reverb -> output
        engine.connect(playerNode, to: eq, format: audioFile?.processingFormat)
        engine.connect(eq, to: pitch, format: audioFile?.processingFormat)
        engine.connect(pitch, to: reverb, format: audioFile?.processingFormat)
        engine.connect(reverb, to: engine.mainMixerNode, format: audioFile?.processingFormat)
    }

    func prepare() {
        guard let audioFile else { return }
        playerNode.scheduleFile(audioFile, at: nil)
    }

    func play() {
        DispatchQueue.global().async { [weak self] in
            guard let self else { return }
            engine.prepare()
            try? engine.start()

            DispatchQueue.main.async { [weak self] in
                guard let self else { return }
                playerNode.play()

                fadePlayPauseTask?.cancel()
                playPauseCurrentFadeTime = 0
                fadePlayPauseTask = Task { [weak self] in
                    guard let self else { return }
                    while true {
                        let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: true) // Ramp up volume until 1 is reached
                        if volume >= 1 { break }
                        engine.mainMixerNode.outputVolume = volume
                        try await Task.sleep(for: .milliseconds(10))
                        playPauseCurrentFadeTime += 0.01
                    }
                    engine.mainMixerNode.outputVolume = 1
                }
            }
        }
    }

    func pause() {
        fadePlayPauseTask?.cancel()
        playPauseCurrentFadeTime = 0
        fadePlayPauseTask = Task { [weak self] in
            guard let self else { return }
            while true {
                let volume = updateVolume(for: playPauseCurrentFadeTime / 0.1, rising: false) // Ramp down volume until 0 is reached
                if volume <= 0 { break }
                engine.mainMixerNode.outputVolume = volume
                try await Task.sleep(for: .milliseconds(10))
                playPauseCurrentFadeTime += 0.01
            }
            engine.mainMixerNode.outputVolume = 0
            playerNode.pause()

            // Shut down engine once ramp down completes
            DispatchQueue.global().async { [weak self] in
                guard let self else { return }
                engine.pause()
            }
        }
    }

    private func updateVolume(for x: Double, rising: Bool) -> Float {
        if rising {
            // Fade in
            return Float(pow(x, 2) * (3.0 - 2.0 * (x)))
        } else {
            // Fade out
            return Float(1 - (pow(x, 2) * (3.0 - 2.0 * (x))))
        }
    }

    func setPitch(_ value: Float) {
        pitch.pitch = value
    }

    func setReverbMix(_ value: Float) {
        reverb.wetDryMix = value
    }
}

struct ContentView: View {
    @State private var audioManager = AudioEngineManager()
    @State private var pitch: Float = 0
    @State private var reverb: Float = 0

    var body: some View {
        VStack(spacing: 20) {
            Text("🎵 Audio Player with Reverb & Pitch")
                .font(.title2)

            HStack {
                Button("Prepare") { audioManager.prepare() }
                Button("Play") { audioManager.play() }
                    .padding()
                    .background(Color.green)
                    .foregroundColor(.white)
                    .cornerRadius(10)
                Button("Pause") { audioManager.pause() }
                    .padding()
                    .background(Color.red)
                    .foregroundColor(.white)
                    .cornerRadius(10)
            }

            VStack {
                Text("Pitch: \(Int(pitch)) cents")
                Slider(value: $pitch, in: -2400...2400, step: 100) { _ in
                    audioManager.setPitch(pitch)
                }
            }

            VStack {
                Text("Reverb Mix: \(Int(reverb))%")
                Slider(value: $reverb, in: 0...100, step: 1) { _ in
                    audioManager.setReverbMix(reverb)
                }
            }
        }
        .padding()
    }
}
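Two things that are sometimes worth trying with this kind of start-up pop, offered only as assumptions to test rather than a confirmed fix: mute the mixer before the engine starts and ramp it up afterwards, and connect every node with one explicit common format so the time-pitch unit does not have to renegotiate formats at start. A hypothetical method one could add to the AudioEngineManager above:

func startQuietly() throws {
    guard let audioFile else { return }
    engine.mainMixerNode.outputVolume = 0    // start silent to mask any start-up transient
    let format = audioFile.processingFormat  // one explicit format for every connection
    engine.connect(playerNode, to: eq, format: format)
    engine.connect(eq, to: pitch, format: format)
    engine.connect(pitch, to: reverb, format: format)
    engine.connect(reverb, to: engine.mainMixerNode, format: format)
    engine.prepare()
    try engine.start()
    playerNode.play()
    // ...then ramp mainMixerNode.outputVolume up, as the existing fade task already does.
}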
1
0
193
Apr ’25
Crash when an AVAssetReader is released
Thread 5 Crashed:
0 libobjc.A.dylib 0x19af7b038 objc_msgSend + 56
1 CoreFoundation 0x19dfdb618 cow_cleanup + 135
2 CoreFoundation 0x19dfdb6fc -[__NSDictionaryM dealloc] + 147
3 MediaToolbox 0x1b167636c FigRemotePropertyCacheTeardown + 31
4 MediaToolbox 0x1b1c5b648 remoteXPCAsset_Finalize + 107
5 CoreMedia 0x1b1e9166c FigBaseObjectFinalize + 275
6 CoreFoundation 0x19dfcc5ec _CFRelease + 295
7 AVFCore 0x1b1054d64 -[AVFigAssetTrackInspector dealloc] + 151
8 AVFCore 0x1b0f818d8 -[AVAssetTrack dealloc] + 63
9 CoreFoundation 0x19dfdba28 RELEASE_OBJECTS_IN_THE_ARRAY + 115
10 CoreFoundation 0x19dfdb7e0 -[__NSArrayM dealloc] + 147
11 AVFCore 0x1b0f52e04 -[AVURLAsset dealloc] + 167
12 libobjc.A.dylib 0x19af821f8 object_cxxDestructFromClass(objc_object*, objc_class*) + 115
13 libobjc.A.dylib 0x19af7df20 objc_destructInstance_nonnull_realized(objc_object*) + 75
14 libobjc.A.dylib 0x19af7d4a4 _objc_rootDealloc + 71
15 AVFCore 0x1b0fef988 -[AVAssetReaderOutput dealloc] + 415
16 AVFCore 0x1b0ff11ec -[AVAssetReaderTrackOutput dealloc] + 127
17 CoreFoundation 0x19dfe20a4 -[__NSSingleObjectArrayI dealloc] + 63
18 libobjc.A.dylib 0x19af7d3f8 AutoreleasePoolPage::releaseUntil(objc_object**) + 203
1
0
268
2w
In Speech framework is SFTranscriptionSegment timing supposed to be off and speechRecognitionMetadata nil until isFinal?
I'm working in Swift/SwiftUI, running Xcode 16.3 on macOS 15.4, and I've seen this when running in the iOS Simulator and in a macOS app run from Xcode. I've also seen this behaviour with 3 different audio files. Nothing in the documentation says that the speechRecognitionMetadata property on an SFSpeechRecognitionResult will be nil until isFinal, but that's the behaviour I'm seeing. I've stripped my class down to the following:

private var isAuthed = false

// I call this in a .task {} in my SwiftUI View
public func requestSpeechRecognizerPermission() {
    SFSpeechRecognizer.requestAuthorization { authStatus in
        Task {
            self.isAuthed = authStatus == .authorized
        }
    }
}

public func transcribe(from url: URL) {
    guard isAuthed else { return }
    let locale = Locale(identifier: "en-US")
    let recognizer = SFSpeechRecognizer(locale: locale)
    let recognitionRequest = SFSpeechURLRecognitionRequest(url: url)
    // the behaviour occurs whether I set this to true or not, I recently set
    // it to true to see if it made a difference
    recognizer?.supportsOnDeviceRecognition = true
    recognitionRequest.shouldReportPartialResults = true
    recognitionRequest.addsPunctuation = true
    recognizer?.recognitionTask(with: recognitionRequest) { (result, error) in
        guard result != nil else { return }
        if result!.isFinal {
            // speechRecognitionMetadata is not nil
        } else {
            // speechRecognitionMetadata is nil
        }
    }
}

Further, and this isn't documented either, the SFTranscriptionSegment values don't have correct timestamp and duration values until isFinal. The values aren't all zero, but they don't align with the timing in the audio, and they change to accurate values when isFinal is true. The transcription otherwise "works", in that I get transcription text before isFinal, and if I wait for isFinal the segments are correct and speechRecognitionMetadata is filled with values. The context here is that I'm trying to generate a transcription whose spoken sections I can highlight as the audio plays, and I'm thinking I must be trying to use the Speech framework in a way it does not work. I got my concept working if I pre-process the audio (i.e. run it through until isFinal and save the results I need to JSON), but being able to do even a rougher version of it 'on the fly', which requires segments to have the right timestamp/duration before isFinal, is perhaps impossible?
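A small logging sketch, reusing the recognizer and request from the snippet above, that makes it easy to see when segment timing and speechRecognitionMetadata become reliable:

import Speech

func logSegmentTiming(recognizer: SFSpeechRecognizer?, request: SFSpeechURLRecognitionRequest) {
    recognizer?.recognitionTask(with: request) { result, _ in
        guard let result else { return }
        for segment in result.bestTranscription.segments {
            print(result.isFinal ? "FINAL" : "partial",
                  segment.substring, segment.timestamp, segment.duration)
        }
        if result.isFinal {
            // Only populated on the final result in the behaviour described above.
            print("metadata:", result.speechRecognitionMetadata as Any)
        }
    }
}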
1
0
134
Jul ’25