Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.

Posts under Media Technologies topic

Post | Replies | Boosts | Views | Activity

Unexpected AVAudioSession behavior after iOS 18.5 causing audio loss in VoIP calls
After updating to iOS 18.5, we’ve observed that outgoing audio from our app intermittently stops being transmitted during VoIP calls using AVAudioSession configured with .playAndRecord and .voiceChat. The session is set active without errors, and interruptions are handled correctly, yet audio capture suddenly ceases mid-call. This was not observed in earlier iOS versions (≤ 18.4). We’d like to confirm if there have been any recent changes in AVAudioSession, CallKit, or related media handling that could affect audio input behavior during long-running calls.

func configureForVoIPCall() throws {
    try setCategory(
        .playAndRecord,
        mode: .voiceChat,
        options: [.allowBluetooth, .allowBluetoothA2DP, .defaultToSpeaker])
    try setActive(true)
}
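For anyone trying to narrow this down, a minimal diagnostic sketch along these lines (the SessionEventLogger name and wiring are illustrative, not part of the original report) can confirm whether the capture loss coincides with an interruption, a route change, or a media-services reset:

import AVFoundation

// Logs AVAudioSession events that can silently stop capture mid-call.
final class SessionEventLogger {
    private var observers: [NSObjectProtocol] = []

    func start() {
        let session = AVAudioSession.sharedInstance()
        let center = NotificationCenter.default

        observers.append(center.addObserver(forName: AVAudioSession.interruptionNotification,
                                            object: session, queue: .main) { note in
            let type = note.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt
            print("Interruption:", type ?? "unknown")
        })
        observers.append(center.addObserver(forName: AVAudioSession.routeChangeNotification,
                                            object: session, queue: .main) { _ in
            print("Route change, current inputs:", session.currentRoute.inputs)
        })
        observers.append(center.addObserver(forName: AVAudioSession.mediaServicesWereResetNotification,
                                            object: session, queue: .main) { _ in
            print("Media services were reset; the session must be reconfigured")
        })
    }
}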
1
0
133
Aug ’25
How to detect when iOS Camera app starts video recording (with Allow Audio Playback ON)?
Since iOS 18, the system setting “Allow Audio Playback” (enabled by default) allows third-party app audio to continue playing while the user is recording video with the Camera app. This has created a problem for the app I’m developing.

➡️ The problem: My app plays continuous audio in both foreground and background states. If the user starts recording video using the iOS Camera app, the app’s audio — still playing in the background — gets captured in the video — obviously an unintended behavior. Yes, the user could stop the app manually before starting the video recording, but that can’t be guaranteed. As a developer, I need a way to stop the app’s audio before the video recording begins. So far, I haven’t found a reliable way to detect when video recording starts if “Allow Audio Playback” is ON.

➡️ What I’ve tried:
— AVAudioSession.interruptionNotification → doesn’t fire
— devicesChangedEventStream → not triggered
I don’t want to request mic permission (the app doesn’t use the mic). Also, disabling the app’s audio playback in the background isn’t an option, as it is a crucial part of the user experience.

➡️ What I need: A reliable, supported way to detect when the Camera app begins video recording, without requiring mic access, so I can stop audio and avoid unintentional overlap with the user’s recordings. Any official guidance, workarounds, or AVFoundation techniques would be greatly appreciated. Thanks.
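For context, a sketch of the observation approach described above, plus the secondary-audio silencing hint (with “Allow Audio Playback” enabled these may well not fire either; this only illustrates what was attempted):

import AVFoundation

// Observers for the signals that would normally indicate another app is capturing.
func observeRecordingHints() {
    let session = AVAudioSession.sharedInstance()
    let center = NotificationCenter.default

    center.addObserver(forName: AVAudioSession.interruptionNotification,
                       object: session, queue: .main) { _ in
        print("Interruption (reported not to fire in this scenario)")
    }
    center.addObserver(forName: AVAudioSession.silenceSecondaryAudioHintNotification,
                       object: session, queue: .main) { note in
        if let raw = note.userInfo?[AVAudioSessionSilenceSecondaryAudioHintTypeKey] as? UInt,
           let type = AVAudioSession.SilenceSecondaryAudioHintType(rawValue: raw),
           type == .begin {
            // Another app hints that secondary audio should be silenced.
            print("Secondary audio should be silenced")
        }
    }
}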
0
0
177
Aug ’25
SpeechTranscriber extremely slow (14+ seconds) despite proper locale allocation and optimization
Using the official SwiftTranscriptionSampleApp from WWDC 2025, speech transcription takes 14+ seconds from audio input to first result, making it unusable for real-time applications.

Environment
iOS: 26.0 Beta
Xcode: Beta 5
Device: iPhone 16 Pro
Sample App: Official Apple SwiftTranscriptionSampleApp from WWDC 2025

Configuration Tested
Locale: en-US (properly allocated with AssetInventory.allocate(locale:)) and es-ES
Setup: All optimizations applied (preheating, high priority, model retention)

I started testing in my own app to replace the SFSpeech API and add speech detection, but after long fights with the documentation (this part is quite terrible, TBH) I tested the example (https://developer.apple.com/documentation/speech/bringing-advanced-speech-to-text-capabilities-to-your-app) and saw the same results. I added some logs to check the specific timing:

🎙️ [20:30:41.532] ✅ Analyzer started successfully - ready to receive audio!
🎙️ [20:30:41.532] Listening for transcription results...
🎙️ [20:30:56.342] 🚀 FIRST TRANSCRIPTION RESULT after 14.810s: 'Hello' (isFinal: false)

Questions
Is this expected performance for the iOS 26 Beta, given that the old SFSpeech API is far faster?
Are there additional optimization steps for SpeechTranscriber?
Should we expect significant performance improvements in later betas?
1
0
141
Aug ’25
Live Translations on VoIP on iOS 26
Hi team, regarding Call (Live) Translations on VoIP:
Is it possible to invoke live translations within the app (without going into the system Call UI)?
Is it possible to navigate users from the app to the system Call UI via an API (and also invoke the new live translations directly)?
Will Apple support more languages beyond the current ones? (Currently I see 4 supported languages.)
1
0
110
Aug ’25
Incorrect 5.1 / Atmos channel mapping on Apple TV 4K (2022)
I ran 5.1 audio tests in both YouTube and Apple Music, and I noticed that when sound is supposed to play from the rear or front surround speakers, it’s also duplicated in the front left and right channels. I’m absolutely sure the issue is with the Apple TV, because I played the same video directly through my TV’s native system, and the channel separation was correct. Everything used to work perfectly before, so this must be a software issue. I’m currently on tvOS 26 Developer Beta 5, but I’m certain the problem also existed on the stable tvOS 18.5. I’ve already reset and updated my Apple TV, and I also tried switching the audio format to forced Dolby Atmos 5.1. On the forums, I mostly see complaints about Dolby Atmos not working at all — in my case, everything technically works, but not the way it’s supposed to.
1
0
63
Aug ’25
SpeechTranscriber not providing audioTimeRange for most results
I started playing with transcription of audio files on macOS today, using the latest beta of Xcode and the latest beta of Tahoe. Transcription itself works really well, but for some reason the majority of the results contain no audioTimeRange. I got 22 single-word results with time ranges, spread out over a total file length of 53 minutes. Is there something I can do to improve this? To my understanding, I have followed the sample code and instructions very closely, but the SwiftTranscriptionSampleApp and other examples I've seen lead me to believe I should be getting a lot more time ranges than I actually do.
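For reference, the configuration that is expected to yield time ranges mirrors the initializer shown in a later post in this topic (a sketch; the surrounding analyzer setup is omitted and the Speech framework import is assumed):

import Speech

let transcriber = SpeechTranscriber(
    locale: Locale(identifier: "en-US"),
    transcriptionOptions: [],
    reportingOptions: [],                  // final results only
    attributeOptions: [.audioTimeRange]    // request per-result audio time ranges
)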
3
0
84
Aug ’25
Audio driver based on AudioDriverKit sometimes hangs after sleep
Dear Sirs, I’ve written a virtual audio driver based on AudioDriverKit, running as a dext in my macOS app. Sometimes when waking up from a sleep state, the recording side of my driver extension seems to hang and I don’t see any calls to my io_operation callback. A recording app such as a DAW then seems to hang when trying to start a recording. This doesn’t happen after short sleep states or after a complete fresh start of my MacBook. I already opened a case in Feedback Assistant on May 5th (FB17503622), which also includes a sysdiagnose and a ktrace, but I didn't get any feedback so far. Meanwhile some of our customers are getting angry, and I'd like to know if there's anything I could do to fix this problem on my side. We’re not sure whether this worked in previous macOS versions; we think we didn’t observe this before 15.3.1, but at least since 15.3.1 we’ve seen this problem. Best regards, Johannes
1
0
86
Aug ’25
Why is my YCbCr to sRGB camera pipeline brighter than the preview layer? (420YpCbCr8BiPlanarFullRange, AVCaptureVideoPreviewLayer)
Hi all, I'm working on a custom Metal-based video pipeline using AVCaptureVideoDataOutput, and I've run into an unexpected issue related to exposure.

Setup: I'm capturing video frames using kCVPixelFormatType_420YpCbCr8BiPlanarFullRange. In my Metal shader, I:
Convert YCbCr (full range Rec.709) to linear Rec.709 RGB.
Apply Rec.709 → sRGB gamma encoding.
Output to .bgra8Unorm_srgb via MTKView.
Everything renders correctly in terms of colorspace math, but the image appears significantly brighter (~+3 stops EV) compared to AVCaptureVideoPreviewLayer and the native iOS Camera app under the same camera exposure settings.

What I’ve verified:
The color transforms are correct: YCbCr709 to RGB, then linear to sRGB.
I'm not applying any tone mapping or aggressive look LUTs yet.
Camera exposure is locked using device.setExposureModeCustom(duration: ..., iso: ...).
The same EV (e.g., ISO 50, 1/125s, f/5.6) on my iPhone appears visually 3 stops brighter than on my digital cameras (Sony/Canon etc). To match the look of the preview layer or camera app, I have to simulate a ~–3 EV shift in my custom pipeline.

Questions:
Is AVCaptureVideoPreviewLayer applying extra tone mapping, digital gain, or contrast shaping (like OOTF etc)?
Does the camera ISP expose "hotter" (i.e., with more light) internally for the preview layer than what we get in video frame buffers?
Is there a standard way to compensate for this ISP behavior in custom pipelines using AVCaptureVideoDataOutput?
Can this be accounted for using metadata (e.g., exposure bias, gain, gamma curve)?
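As a sanity check on the math (not the poster's shader), here is a CPU-side Swift reference of the per-pixel conversion described: full-range BT.709 YCbCr to nonlinear R'G'B', inverse BT.709 OETF to linear, then sRGB encode. If a shader matches this and still looks ~3 EV brighter than the preview layer, the difference would have to come from system-side tone mapping or ISP gain rather than from the transform itself:

import Foundation

// Inverse of the BT.709 OETF (approximate scene-linear light).
func bt709ToLinear(_ v: Double) -> Double {
    v < 0.081 ? v / 4.5 : pow((v + 0.099) / 1.099, 1.0 / 0.45)
}

// sRGB encoding of a linear value.
func linearToSRGB(_ v: Double) -> Double {
    v <= 0.0031308 ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055
}

// Full-range BT.709 YCbCr (all components in 0...1) -> sRGB-encoded RGB.
func convertPixel(y: Double, cb: Double, cr: Double) -> (r: Double, g: Double, b: Double) {
    let cbc = cb - 0.5, crc = cr - 0.5
    let rp = y + 1.5748 * crc
    let gp = y - 0.1873 * cbc - 0.4681 * crc
    let bp = y + 1.8556 * cbc
    return (linearToSRGB(bt709ToLinear(rp)),
            linearToSRGB(bt709ToLinear(gp)),
            linearToSRGB(bt709ToLinear(bp)))
}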
0
0
94
Aug ’25
Delay in Microphone Input When Talking While Receiving Audio in PTT Framework (Full Duplex Mode)
Context: I am currently developing an app using the Push-to-Talk (PTT) framework. I have reviewed both the PTT framework documentation and the CallKit demo project to better understand how to properly manage audio session activation and AVAudioEngine setup. I am not activating the audio session manually; the audio session configuration is handled in the incomingPushResult or didBeginTransmitting callbacks from the PTChannelManagerDelegate. I am using a single AVAudioEngine instance for both input and playback. The engine is started in the didActivate callback from the PTChannelManagerDelegate. When I receive a push in full duplex mode, I set the active participant to the user who is speaking.

Issue: When I attempt to talk while the other participant is already speaking, my input tap on the input node takes a few seconds to return valid PCM audio data. Initially, it returns an empty PCM audio block.

Details:
The audio session is already active and configured with .playAndRecord.
The input tap is already installed when the engine is started.
When I talk from a neutral state (no one is speaking), the system plays the standard "microphone activation" tone, which covers this initial delay. However, this does not happen when I am already receiving audio.

Assumptions / Current Setup: Because the audio session is active in play and record, I assumed that microphone input would be available immediately, even while receiving audio. However, there seems to be a delay before valid input is delivered to the tap, only occurring when switching from a receive state to simultaneously talking.

Questions:
Is this expected behavior when using the PTT framework in full duplex mode with a shared AVAudioEngine?
Should I be restarting or reconfiguring the engine or audio session when beginning to talk while receiving audio?
Is there a recommended pattern for managing microphone readiness in this scenario to avoid the initial empty PCM buffer?
Would using separate engines for input and output improve responsiveness?

I would like to confirm the correct approach to handling simultaneous talk and receive in full duplex mode using the PTT framework and AVAudioEngine. Specifically, I need guidance on ensuring the microphone is ready to capture audio immediately, without the delay seen in my current implementation.

Relevant Code Snippets

Engine Setup

func setup() {
    let input = audioEngine.inputNode
    do {
        try input.setVoiceProcessingEnabled(true)
    } catch {
        print("Could not enable voice processing \(error)")
        return
    }
    input.isVoiceProcessingAGCEnabled = false

    let output = audioEngine.outputNode
    let mainMixer = audioEngine.mainMixerNode
    audioEngine.connect(pttPlayerNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(beepNode, to: mainMixer, format: outputFormat)
    audioEngine.connect(mainMixer, to: output, format: outputFormat)

    // Initialize converters
    converter = AVAudioConverter(from: inputFormat, to: outputFormat)!
    f32ToInt16Converter = AVAudioConverter(from: outputFormat, to: inputFormat)!

    audioEngine.prepare()
}

Input Tap Installation

func installTap() {
    guard AudioHandler.shared.checkMicrophonePermission() else {
        print("Microphone not granted for recording")
        return
    }
    guard !isInputTapped else {
        print("[AudioEngine] Input is already tapped!")
        return
    }

    let input = audioEngine.inputNode
    let microphoneFormat = input.inputFormat(forBus: 0)
    let microphoneDownsampler = AVAudioConverter(from: microphoneFormat, to: outputFormat)!
    let desiredFormat = outputFormat
    let inputFramesNeeded = AVAudioFrameCount((Double(OpusCodec.DECODED_PACKET_NUM_SAMPLES) * microphoneFormat.sampleRate) / desiredFormat.sampleRate)

    input.installTap(onBus: 0, bufferSize: inputFramesNeeded, format: input.inputFormat(forBus: 0)) { [weak self] buffer, when in
        guard let self = self else { return }
        // Output buffer: 1920 frames at 16kHz
        guard let outputBuffer = AVAudioPCMBuffer(pcmFormat: desiredFormat, frameCapacity: AVAudioFrameCount(OpusCodec.DECODED_PACKET_NUM_SAMPLES)) else { return }
        outputBuffer.frameLength = outputBuffer.frameCapacity

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = .haveData
            return buffer
        }

        var error: NSError?
        let converterResult = microphoneDownsampler.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
        if converterResult != .haveData {
            DebugLogger.shared.print("Downsample error \(converterResult)")
        } else {
            self.handleDownsampledBuffer(outputBuffer)
        }
    }
    isInputTapped = true
}
4
0
283
Aug ’25
Save slow motion video with custom playback speed bar
Background: For iOS, I have built a custom video recording app that records video at the highest possible FPS (supposed to be around 240 FPS, but more like ~200 in practice). When I save this video to the user's camera roll, I notice the dynamic playback speed bar. This bar will speed up or slow down the video. My question: How can I save my video such that this playback speed bar is constantly slow, or plays at real-time speed? For reference, I have included a screenshot of the playback speed bar that I am talking about; you can find it in the Photos app when you record slow-motion video.
0
0
183
Jul ’25
UIViewController + AVPlayerLayer + AVPlayer cannot AirPlay to HomePod on tvOS 18.5, but AVPlayerViewController + AVPlayer is ok
1. Devices
Apple TV HD, tvOS 18.5 (22L572), Model A1625
HomePod (1st generation), OS 18.5 (22L572)

2. Test cases
2.1 UIViewController + AVPlayerLayer + AVPlayer
Result: Cannot AirPlay to HomePod
Sample code for UIViewController + AVPlayer
2.2 AVPlayerViewController + AVPlayer
Result: Can AirPlay to HomePod
Sample code for AVPlayerViewController + AVPlayer
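A minimal version of the failing case 2.1, under the assumption that it looks roughly like this (URL and layout are placeholders, not the poster's sample code):

import UIKit
import AVFoundation

final class PlayerLayerViewController: UIViewController {
    private let player = AVPlayer(url: URL(string: "https://example.com/stream.m3u8")!)
    private let playerLayer = AVPlayerLayer()

    override func viewDidLoad() {
        super.viewDidLoad()
        playerLayer.player = player
        playerLayer.frame = view.bounds
        player.allowsExternalPlayback = true   // AirPlay is expected to be permitted
        view.layer.addSublayer(playerLayer)
        player.play()
    }
}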
0
0
182
Jul ’25
iOS 18 Picture-in-Picture dynamically changes UIScreen.main.bounds on iPhone 11 causing keyboard clipping and layout issues
When creating and activating AVPictureInPictureController on iPhone 11 running iOS 18.x, the main screen viewport unexpectedly changes from the native 375×812 logical points to 414×896 points (the size of a different device such as the iPhone XR). Additionally, a separate UIWindow with a zero CGRect is created. This causes UIKit layout calculations, especially those related to the keyboard and safeAreaInsets, to malfunction, resulting in issues like keyboard clipping and general UI misalignment. The problem appears tied specifically to iOS 18+ behavior and does not manifest consistently on earlier iOS versions or other devices.

Steps to Reproduce:
Run the app on iPhone 11 with iOS 18.x (including the latest minor updates).
Instantiate and start an AVPictureInPictureController during app runtime.
Observe that UIScreen.main.bounds changes from 375×812 (native iPhone 11 size) to 414×896.
Notice a secondary UIWindow appears with frame CGRectZero.
Interact with text inputs that cause the keyboard to appear.
The keyboard is clipped or partially obscured, causing UI usability issues.
Layout elements dependent on UIScreen.main.bounds or safeAreaInsets behave incorrectly.

Expected Behavior:
UIScreen.main.bounds should not dynamically change upon PiP controller creation.
The coordinate space and viewport should remain accurate to the device's native dimensions.
The keyboard and UI layout should adjust properly with respect to the actual on-screen size and safe areas.

Actual Behavior:
UIScreen.main.bounds changes to 414×896 upon PiP controller creation on iPhone 11 with iOS 18+.
A zero-rect secondary UIWindow is created, impacting layout queries.
Keyboard clipping and UI layout bugs occur.
The app viewport and coordinate space are inconsistent with the device hardware.
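A reproduction sketch under assumed wiring (the playerLayer is an AVPlayerLayer already attached to a playing AVPlayer elsewhere in the app):

import UIKit
import AVKit

func startPiP(with playerLayer: AVPlayerLayer) {
    guard AVPictureInPictureController.isPictureInPictureSupported() else { return }

    print("Bounds before PiP:", UIScreen.main.bounds)     // 375x812 expected per the report
    // NOTE: keep a strong reference to the controller in real code; a local is shown only for brevity.
    let pipController = AVPictureInPictureController(playerLayer: playerLayer)
    pipController?.startPictureInPicture()

    DispatchQueue.main.async {
        print("Bounds after PiP:", UIScreen.main.bounds)   // reportedly becomes 414x896
        print("Window frames:", UIApplication.shared.windows.map(\.frame))
    }
}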
0
0
150
Jul ’25
AVSampleBufferDisplayLayer drops frames when playing uncompressed video with bframes > 3
When using AVSampleBufferDisplayLayer to play uncompressed H.264 and H.265 video with more than 7 B-frames, frame drops occur. The more B-frames there are, the more noticeable the frame drops become, for example with 15 B-frames.

Use FFmpeg to transcode a video file with visible timestamps and frame numbers (x264 or x265):
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx264 -x264-params "bframes=15:b-adapt=0" -crf 30 -y x264_bf15.mp4
ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx265 -x265-params "bframes=15:b-adapt=0" -crf 30 -y x265_bf15.mp4

Use the demo player from this repository to reproduce the issue: https://github.com/msfrms/CustomPlayer
Frame drops can be observed, and the following log can be found in the device's console:
mediaserverd <<<< IQ-CA >>>> piqca_gmstats_dump: FIQCA(0x1266f4000) recent frames: enqueued: 184, displayed: 138, dropped: 42, flushed: 0, evicted: 3, >16ms late: 2

PS. I was using an iPhone 11 with iOS 14.6 to reproduce this issue.
May I ask why frame drops occur in this case? Is there any configuration or API usage change that could help fix the frame drop issue? Many thanks!
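For reference, the enqueue pattern typically used with AVSampleBufferDisplayLayer looks like the sketch below (this is not the linked demo's exact code; nextSampleBuffer is a placeholder that supplies decode-ordered CMSampleBuffers):

import AVFoundation

func attach(displayLayer: AVSampleBufferDisplayLayer,
            nextSampleBuffer: @escaping () -> CMSampleBuffer?) {
    let queue = DispatchQueue(label: "video.enqueue")
    displayLayer.requestMediaDataWhenReady(on: queue) {
        // Feed buffers only while the layer reports readiness; the layer
        // reorders B-frames for display by presentation timestamp.
        while displayLayer.isReadyForMoreMediaData {
            guard let sample = nextSampleBuffer() else {
                displayLayer.stopRequestingMediaData()
                return
            }
            displayLayer.enqueue(sample)
        }
    }
}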
2
1
187
Jul ’25
Why does CADisplayLink of an external UIScreen drift in time?
I am using Apple's original Lightning Digital AV Adapter (Lightning-to-HDMI dongle) to connect my iPhone to an external display via an HDMI cable. I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:). The callback is being called regularly, but with increasing delay relative to CADisplayLink.timestamp, so the next time the callback is called, I have less and less time to draw the next frame (see the snippet below).

Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approx. -0.0001 to 0.0166667, and then it slowly decreases towards zero (and for a brief period it goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and continues to decrease again. This cycle repeats indefinitely. Changing the external display's resolution (UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drifting (even the rate of change seems to be the same). When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable; it does not drift and it is very close to 0.0166667, as expected.

Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle, or is the problem somewhere else? Can the drifting be stopped?

func onDisplayLinkUpdate(displayLink: CADisplayLink) {
    // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
    let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
}
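For completeness, the setup side described in prose above would look roughly like this (class name, index handling, and run-loop mode are assumptions):

import UIKit
import QuartzCore

final class ExternalDisplayDriver {
    private var displayLink: CADisplayLink?

    func start(externalDisplayIdx: Int) {
        // Create the display link on the external screen, not the main one.
        let screen = UIScreen.screens[externalDisplayIdx]
        displayLink = screen.displayLink(withTarget: self, selector: #selector(onDisplayLinkUpdate(_:)))
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func onDisplayLinkUpdate(_ displayLink: CADisplayLink) {
        // Per the report, this value drifts toward zero on the external screen.
        let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
        _ = secondsTillDeadline
    }
}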
3
0
190
Jul ’25
AVAudioEngine failing with -10877 on macOS 26 beta, no devices detected via AVFoundation but HAL works
I’m developing a macOS audio monitoring app using AVAudioEngine, and I’ve run into a critical issue on the macOS 26 beta where AVFoundation fails to detect any input devices and AVAudioEngine.start() throws the familiar error -10877. FB#: FB19024508

Strange behavior:
AVAudioEngine.inputNode shows no channels or input format on bus 0.
AVAudioEngine.start() fails with -10877 (AudioUnit connection error).
AVCaptureDevice.DiscoverySession returns zero audio devices.
Microphone permission is granted (authorized), and the app is properly signed and sandboxed with com.apple.security.device.audio-input.

However, CoreAudio HAL does detect all input/output devices: using AudioObjectGetPropertyDataSize and AudioObjectGetPropertyData with kAudioHardwarePropertyDevices, I can enumerate 14+ devices, including AirPods, USB DACs, and BlackHole. This suggests the lower-level audio stack is functional.

I have tried:
Resetting CoreAudio with sudo killall coreaudiod
Rebuilding and re-signing the app
Clearing TCC with tccutil reset Microphone
Running on Apple Silicon and testing Rosetta/native detection via sysctl.proc_translated
Using a fallback mechanism that logs device info from HAL and rotates logs for submission via Feedback Assistant

I have submitted logs and a reproducible test case via Feedback Assistant: FB19024508.
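For comparison, the HAL enumeration mentioned above can be done along these lines (a sketch; it only lists device IDs and is not a workaround for the -10877 failure):

import CoreAudio

func listHALDevices() -> [AudioObjectID] {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDevices,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)

    // Ask the system object how large the device list is.
    var dataSize: UInt32 = 0
    guard AudioObjectGetPropertyDataSize(AudioObjectID(kAudioObjectSystemObject),
                                         &address, 0, nil, &dataSize) == noErr else { return [] }

    // Fetch the device IDs themselves.
    let count = Int(dataSize) / MemoryLayout<AudioObjectID>.size
    var deviceIDs = [AudioObjectID](repeating: 0, count: count)
    guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                     &address, 0, nil, &dataSize, &deviceIDs) == noErr else { return [] }
    return deviceIDs
}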
0
0
222
Jul ’25
SpeechTranscriber/SpeechAnalyzer being relatively slow compared to FoundationModel and TTS
So, I've been wondering how fast an offline STT -> ML Prompt -> TTS roundtrip would be. Interestingly, for many tests, the SpeechTranscriber (STT) takes the bulk of the time, compared to generating a FoundationModel response and creating the audio using TTS. E.g.:

InteractionStatistics:
- listeningStarted: 21:24:23 4480 2423
- timeTillFirstAboveNoiseFloor: 01.794
- timeTillLastNoiseAboveFloor: 02.383
- timeTillFirstSpeechDetected: 02.399
- timeTillTranscriptFinalized: 04.510
- timeTillFirstMLModelResponse: 04.938
- timeTillMLModelResponse: 05.379
- timeTillTTSStarted: 04.962
- timeTillTTSFinished: 11.016
- speechLength: 06.054
- timeToResponse: 02.578
- transcript: This is a test.
- mlModelResponse: Sure! I'm ready to help with your test. What do you need help with?

Here, between my audio input ending and the text-to-speech starting to play (using AVSpeechUtterance), the total response time was 2.5s. Of that time, it took the SpeechAnalyzer 2.1s to get the transcript finalized, while the FoundationModel only took 0.4s to respond (and TTS started playing nearly instantly). I'm already using reportingOptions: [.volatileResults, .fastResults], so it's probably as fast as possible right now? I'm just surprised the STT takes so much longer compared to the other parts (all being CoreML based, aren't they?)
2
0
506
Jul ’25
AVAssetReaderOutput.Provider Missing symbols
Recurring crash on launch of any app that uses the new sourceVideoTrackProvider.next():

dyld[41966]: Symbol not found: _$sSo19AVAssetReaderOutputC12AVFoundationE8ProviderC4nextxSgyYaKFTjTu
Referenced from: <79AA2BE0-A6B4-32F5-A804-E84BBE5D1AEA> /Users/<username>/Library/Developer/Xcode/DerivedData/TrackProviderCrash-bbbhjptcxnmfdcackxtpucnunxyc/Build/Products/Debug-maccatalyst/TrackProviderCrash.app/Contents/MacOS/TrackProviderCrash.debug.dylib
Expected in: <1B847AF9-7973-3B28-95C2-09E73F6DD50B> /usr/lib/swift/libswiftAVFoundation.dylib

Can be reproduced with the current Xcode Beta 4 by running this sample on Mac Catalyst and macOS: https://developer.apple.com/documentation/AVFoundation/converting-projected-video-to-apple-projected-media-profile
The crash goes away if you comment out lines 154-158 and 164-170, which are:
while let sampleBuffer = try await sourceVideoTrackProvider.next() { /*other code*/ }

It can also be reproduced if you add the code below to a Mac Catalyst project:

import AVKit

let asset: AVURLAsset = .init(url: Bundle.main.url(forResource: "SomeVideo.mp4", withExtension: nil)!)
let videoReader = try! AVAssetReader(asset: asset)
let videoTracks = try! await asset.loadTracks(withMediaCharacteristic: .visual)
// Get the side-by-side video track.
let videoTrack = videoTracks.first!
let videoInputTrack = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)
let sourceVideoTrackProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>> = videoReader.outputProvider(for: videoInputTrack)

// Comment out this
while let sb = try! await sourceVideoTrackProvider.next() {
}
1
0
578
Jul ’25
Why Does WebView Audio Get Quiet During RTC Calls? (AVAudioSession Analysis)
I developed an educational app that implements audio-video communication through RTC, while using WebView to display course materials during classes. However, some users are experiencing an issue where the audio playback from WebView is very quiet. I've checked that the AVAudioSessionCategory is set by RTC to AVAudioSessionCategoryPlayAndRecord, and the AVAudioSessionCategoryOption also includes AVAudioSessionCategoryOptionMixWithOthers. What could be causing the WebView audio to be suppressed, and how can this be resolved?
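One way to narrow this down is to dump the session state at the moment the WebView is playing; if the RTC engine has added an option such as .duckOthers or enabled voice processing, other audio (including WKWebView playback) can be attenuated. A small diagnostic sketch (an assumption about the approach, not a confirmed cause):

import AVFoundation

func logAudioSessionState() {
    let session = AVAudioSession.sharedInstance()
    print("category:", session.category.rawValue)
    print("mode:", session.mode.rawValue)
    print("options raw value:", session.categoryOptions.rawValue)
    print("outputVolume:", session.outputVolume)
}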
0
0
534
Jul ’25
iOS 26 HLS Audio Track Display Behavior: EXT-X-MEDIA NAME vs LANGUAGE Attributes
Hello Apple Developer Community, I am seeking clarification on the intended display behavior of HLS audio tracks within the iOS 26 (or current beta) native player, specifically concerning the NAME and LANGUAGE attributes of the EXT-X-MEDIA tag.

In our HLS manifests, we define alternative audio tracks using EXT-X-MEDIA tags, like so:
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-1",DEFAULT=YES,AUTOSELECT=YES,URI="audio_ja.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",LANGUAGE="ja",NAME="AUDIO-2",URI="audio_en.m3u8"

Our observation is that when an audio track is selected and its name is displayed in the native iOS media controls (e.g., Control Center or within a full-screen video player's UI), the value specified in the NAME attribute ("AUDIO-1", "AUDIO-2") does not seem to be used. Instead, the display appears to derive from the LANGUAGE attribute ("ja", "en"), often showing the system's localized string for that language (e.g., "Japanese", "English").

We would like to understand the official or intended behavior regarding this:
Is it the expected behavior for the iOS native player to prioritize the LANGUAGE attribute (or its localized equivalent) over the NAME attribute for displaying the selected audio track's label?
If this is the intended design, what is the recommended best practice for developers who wish to present a custom, human-readable name for audio tracks (beyond the standard language name) in the native iOS UI?
Are there any specific AVPlayer properties or AVMediaSelectionOption considerations that would allow more granular control over this display, or is this entirely managed by the system based on the LANGUAGE attribute?

Any insights or official guidance on this behavior in iOS 26 (and potentially previous versions) would be greatly appreciated. Thank you for your time and assistance.
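As a starting point for investigating, a small inspection sketch (the URL is a placeholder) prints what AVFoundation exposes for each alternative audio rendition, which helps check whether the NAME attribute survives into AVMediaSelectionOption; the system player's label is managed by the system, whereas a custom selection UI can present whatever label the app chooses:

import AVFoundation

func dumpAudioOptions() async throws {
    let asset = AVURLAsset(url: URL(string: "https://example.com/master.m3u8")!)
    guard let group = try await asset.loadMediaSelectionGroup(for: .audible) else { return }
    for option in group.options {
        // displayName is what AVFoundation reports for the rendition's label.
        print("displayName:", option.displayName,
              "| language:", option.extendedLanguageTag ?? "nil")
    }
}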
2
0
347
Jul ’25
[26] audioTimeRange would still be interesting for .volatileResults in SpeechTranscriber
So, experimenting with the new SpeechTranscriber, if I do:

let transcriber = SpeechTranscriber(
    locale: locale,
    transcriptionOptions: [],
    reportingOptions: [.volatileResults],
    attributeOptions: [.audioTimeRange]
)

only the final result has audio time ranges, not the volatile results. Is this a performance consideration? If there is no performance problem, it would be nice to have the option to also get speech time ranges for volatile responses. I'm not presenting the volatile text at all in the UI; I was just trying to keep statistics about the non-speech and speech noise levels, so I can determine when the noise level falls under the noise floor for a while. The goal here was to finalize the recording automatically, when the noise level indicates that the user has finished speaking.
4
0
287
Jul ’25