AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

Post

Replies

Boosts

Views

Activity

Crash when trying to get originatingRecipient
According to the documentation (https://developer.apple.com/documentation/avfoundation/avcontentkeyrequest/originatingrecipient?changes=_3&language=objc), starting with iOS 18.4 I can get the AVContentKeyRecipient from an AVContentKeyRequest. But when I try to get it, I get a crash. What could be the issue? I want to note that I add the asset to the AVContentKeySession using the addContentKeyRecipient method (https://developer.apple.com/documentation/avfoundation/avcontentkeysession/addcontentkeyrecipient(_:)?changes=_3&language=objc).
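For reference, a minimal sketch (Swift) of the flow described, assuming a FairPlay streaming session; the delegate class and names are illustrative, and the property access follows the documentation link above rather than a confirmed working setup.

    import AVFoundation

    final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVContentKeyRequest) {
            if #available(iOS 18.4, *) {
                // The post reports a crash on this access; it is assumed here that the
                // recipient was previously added via addContentKeyRecipient(_:).
                let recipient = keyRequest.originatingRecipient
                print("Originating recipient:", recipient as Any)
            }
            // ... continue with the usual SPC/CKC exchange ...
        }
    }

    let session = AVContentKeySession(keySystem: .fairPlayStreaming)
    let delegate = KeyDelegate() // keep a strong reference; the session does not retain it
    session.setDelegate(delegate, queue: DispatchQueue(label: "cks.delegate"))
    let asset = AVURLAsset(url: URL(string: "https://example.com/stream.m3u8")!)
    session.addContentKeyRecipient(asset)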
0
0
49
16h
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)
App Overview
- App Name: Extn Browser
- Bundle ID: ai.extn.browser
- Purpose: A visionOS web browser that plays 360°/180° VR videos in an immersive sphere environment

Development Environment & SDK Versions
- Xcode: 26.2
- Swift: 6.2
- visionOS Deployment Target: 26.2
- Swift Concurrency: MainActor isolation enabled
The app is released in TestFlight.

Frameworks Used
- SwiftUI - UI framework
- RealityKit - 3D rendering, MeshResource, ModelEntity, VideoMaterial
- AVFoundation - AVPlayer, AVAudioSession
- WebKit - WKWebView for browser functionality
- Network - NWListener for local proxy server

Sphere Video Mechanism
The app creates an immersive 360° video experience using the following approach:

    // 1. Create sphere mesh (10 meter radius for immersive viewing)
    let mesh = MeshResource.generateSphere(radius: 10.0)

    // 2. Create initial transparent material
    var material = UnlitMaterial()
    material.color = .init(tint: .clear)

    // 3. Create entity and invert sphere (negative X scale)
    let sphere = ModelEntity(mesh: mesh, materials: [material])
    sphere.scale = SIMD3<Float>(-1, 1, 1) // Inverts normals for inside-out viewing
    sphere.position = SIMD3<Float>(0, 1.5, 0) // Eye level

    // 4. Create AVPlayer with video URL
    let player = AVPlayer(url: videoURL)

    // 5. Configure audio session for visionOS
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playback, mode: .moviePlayback, options: [.mixWithOthers])
    try audioSession.setActive(true)

    // 6. Create VideoMaterial and apply to sphere
    let videoMaterial = VideoMaterial(avPlayer: player)
    if var modelComponent = sphere.components[ModelComponent.self] {
        modelComponent.materials = [videoMaterial]
        sphere.components.set(modelComponent)
    }

    // 7. Start playback
    player.play()

ImmersiveSpace Configuration

    // browserApp.swift
    ImmersiveSpace(id: appModel.immersiveSpaceID) {
        ImmersiveView()
            .environment(appModel)
    }
    .immersionStyle(selection: .constant(.mixed), in: .mixed)

Entitlements

    <!-- browser.entitlements -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <key>com.apple.security.network.client</key>
    <true/>
    <key>com.apple.security.network.server</key>
    <true/>

Info.plist Network Configuration

    <key>NSAppTransportSecurity</key>
    <dict>
        <key>NSAllowsArbitraryLoads</key>
        <true/>
    </dict>

The Issue
Behavior in Simulator: Video plays correctly on the inverted sphere surface - 360° video is visible and wraps around the user as expected.
Behavior on Physical Vision Pro: The sphere displays a black screen. No video content is visible, though the sphere entity itself is present.

Important: Not a DRM/Licensing Issue
This issue is NOT related to Digital Rights Management (DRM) or FairPlay. I have tested with:
- Unlicensed raw MP4 video files (no DRM protection)
- Self-hosted video content with no copy protection
- Direct MP4 URLs from CDN without any licensing requirements
The same black screen behavior occurs with all unprotected video sources (plain H.264 MP4, no DRM), ruling out DRM as the cause.

Screen Recording: Working in Simulator
The following screen recording demonstrates playing a 360° YouTube video in the immersive sphere on the visionOS Simulator: https://cdn.commenda.kr/screen-001.mov
This confirms that the VideoMaterial and sphere rendering work correctly in the simulator, but the same setup shows a black screen on the physical Vision Pro device.

Observations
- AVPlayer status reports .readyToPlay - the video appears to load successfully
- VideoMaterial is created without errors - no exceptions thrown
- Sphere entity renders - the geometry is visible (black surface)
- Audio session is configured - no errors during audio session setup
- Network requests succeed - the video URL is accessible from the device
- Same result with local/unprotected content - DRM is not a factor

Console Logs (Device)
The logging shows:
- Sphere created and added to scene
- AVPlayer created with correct URL
- VideoMaterial created and applied
- Player status transitions to .readyToPlay
- player.play() called successfully
- Rate shows 1.0 (playing)
Despite all success indicators, the rendered output is black.

Questions for Apple
1. Are there known differences in VideoMaterial behavior between the visionOS Simulator and physical Vision Pro hardware?
2. Does VideoMaterial(avPlayer:) require specific video codec/format requirements that differ on device? (The test video is a standard H.264 MP4.)
3. Is there a required Metal capability or GPU feature for VideoMaterial that may not be available in certain contexts on device?
4. Does the immersion style (.mixed) affect VideoMaterial rendering on hardware?
5. Are there additional entitlements required for video texture rendering in RealityKit on physical hardware?

Attempted Solutions
- Configured AVAudioSession with the .playback category
- Added a delay before player.play() to ensure the material is applied
- Verified sphere scale inversion (-1, 1, 1)
- Tested multiple video URLs (including raw, unlicensed MP4 files)
- Confirmed network connectivity on device
- Ruled out DRM/FairPlay issues by testing unprotected content

Environment Details
- Device: Apple Vision Pro
- visionOS Version: 26.2
- Xcode Version: 26.2
- macOS Version: Darwin 25.2.0
0
0
76
20h
Missing "Dolby Vision Profile" Option in Deliver Page - DaVinci Resolve 20 on iPadOS 26
Dear Support Team,

I am writing to seek technical assistance regarding a persistent issue with Dolby Vision exporting in DaVinci Resolve 20 on my iPad Pro 12.9-inch (2021, M1 chip) running iPadOS 26.0.1.

The Issue: Despite correctly configuring the project for a Dolby Vision workflow and successfully completing the dynamic metadata analysis, the "Dolby Vision Profile" dropdown menu (and related embedding options) is completely missing from the Advanced Settings in the Deliver page.

My Current Configuration & Steps Taken:
- Software Version: DaVinci Resolve Studio 20 (Studio features like Dolby Vision analysis are active and functional).
- Project Settings: Color Science: DaVinci YRGB Color Managed.
- Dolby Vision: Enabled (Version 4.0) with Mastering Display set to 1000 nits.
- Output Color Space: Rec.2100 ST2084.
- Color Page: Dynamic metadata analysis has been performed, and "Trim" controls are functional.
- Export Settings: Format: QuickTime / MP4; Codec: H.265 (HEVC); Encoding Profile: Main 10.

The Problem: Under "Advanced Settings," there is no option to select a Dolby Vision Profile (e.g., Profile 8.4) or to "Embed Dolby Vision Metadata."

Potential Variables:
- System Version: I am currently running iPadOS 26.
- Apple ID: My iPad is currently not logged into an Apple ID. I suspect this might be preventing the app from accessing certain system-level AVFoundation frameworks or Dolby DRM/licensing certificates required for metadata embedding.

Could you please clarify if the "Dolby Vision Profile" option is dependent on a signed-in Apple ID for hardware-level encoding authorization, or if this is a known compatibility issue with the current iPadOS 26 build?

I look forward to your guidance on how to resolve this.

Best regards,
INSOFT_Fred
0
0
31
4d
Inquiry about Low-Latency Frame Interpolation & Super Resolution using VTFrameProcessor
Hello, I have implemented Low-Latency Frame Interpolation using the VTFrameProcessor framework, based on the sample code from https://developer.apple.com/kr/videos/play/wwdc2025/300. It is currently working well for both LIVE and VOD streams. However, I have a few questions regarding the lifecycle management and synchronization of this feature:

1. Common Questions (Applicable to both Frame Interpolation & Super Resolution)

1.1 Dynamic Toggling
- Do you recommend enabling/disabling these features dynamically during playback? Or is it better practice to configure them only during the initial setup/preparation phase?
- If dynamic toggling is supported, are there any recommended patterns for managing the VTFrameProcessor session lifecycle (e.g., startSession / endSession timing)?

1.2 Synchronization Method
- I am currently using CADisplayLink to fetch frames from AVPlayerItemVideoOutput and perform processing (see the sketch below). Is CADisplayLink the recommended approach for real-time frame acquisition with VTFrameProcessor?
- If the feature needs to be toggled on/off during active playback, are there any concerns or alternative approaches you would recommend?

1.3 Supported Resolution/Quality Range
- What are the minimum and maximum video resolutions supported for each feature?
- Are there any aspect ratio restrictions (e.g., does it support 1:1 square videos)?
- Is there a recommended resolution range for optimal performance and quality?

2. Frame Interpolation Specific Questions

2.1 LIVE Stream Support
- Is Low-Latency Frame Interpolation suitable for LIVE streaming scenarios where latency is critical?
- Are there any special considerations for LIVE vs VOD?

3. Super Resolution Specific Questions

3.1 Adaptive Bitrate (ABR) Stream Support
- In ABR (HLS/DASH) streams, the video resolution can change dynamically during playback. Is VTLowLatencySuperResolutionScaler compatible with ABR streams where resolution changes mid-playback?
- If resolution changes occur, should I recreate the VTLowLatencySuperResolutionScalerConfiguration and restart the session, or does the API handle this automatically?

3.2 Small/Square Resolution Issue
- I observed that 144x144 (1:1 square) videos fail with the error: "VTFrameProcessorErrorDomain Code=-19730: processWithSourceFrame within VCPFrameSuperResolutionProcessor failed"
- However, 480x270 (16:9) videos work correctly. minimumDimensions reports 96x96, but 144x144 still fails. Is there an undocumented restriction on aspect ratio or a practical minimum resolution?

3.3 Scale Factor Selection
- supportedScaleFactors returns [2.0, 4.0] for most resolutions. Is there a recommended scale factor for balancing quality and performance? Are there scenarios where 4.0x should be avoided?

The documentation on this specific topic seems limited, so I would appreciate any insights or advice. Thank you.
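For context on question 1.2, a minimal sketch (Swift) of the CADisplayLink-driven frame pull described above. The class and the processFrame hook are illustrative placeholders; the VTFrameProcessor calls themselves are omitted.

    import AVFoundation
    import UIKit

    final class FramePuller {
        private let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
        ])
        private var displayLink: CADisplayLink?

        func attach(to item: AVPlayerItem) {
            item.add(videoOutput)
            let link = CADisplayLink(target: self, selector: #selector(tick))
            link.add(to: .main, forMode: .common)
            displayLink = link
        }

        @objc private func tick(_ link: CADisplayLink) {
            // Ask the output for the frame that should be on screen at the next vsync.
            let hostTime = link.timestamp + link.duration
            let itemTime = videoOutput.itemTime(forHostTime: hostTime)
            guard videoOutput.hasNewPixelBuffer(forItemTime: itemTime),
                  let pixelBuffer = videoOutput.copyPixelBuffer(forItemTime: itemTime,
                                                                itemTimeForDisplay: nil) else { return }
            processFrame(pixelBuffer, at: itemTime)
        }

        private func processFrame(_ buffer: CVPixelBuffer, at time: CMTime) {
            // Placeholder: interpolation / super resolution processing would run here.
        }

        func detach(from item: AVPlayerItem) {
            displayLink?.invalidate()
            displayLink = nil
            item.remove(videoOutput)
        }
    }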
0
0
240
1w
AVCam Sample Code - Undesired "Jump" in Video Recording Image
On an iPhone 16 Pro Max (I haven't tested other devices) there's a noticeable jump in the framing of the preview video when you record in the iOS AVCam sample app. The same jump in camera framing can be observed by switching to the front-facing camera and then back to the rear one. It looks roughly consistent with switching between the 0.5x and 1x cameras (though not quite a match for the same viewable area in the Camera app), and it only happens when the app is initially loaded; once recording is started it retains the 'closer' image no matter how many times recording is stopped and started thereafter. I'm relatively new to Swift and haven't done anything with the camera before, so odd 'buggy' behaviour in the sample code isn't helping me understand it! :-) Is there any way to fix this?
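A hedged sketch (Swift), assuming the jump comes from a virtual device (for example builtInDualWideCamera) switching between its constituent cameras at startup; that cause is an assumption, not confirmed. It shows how to inspect the switch-over zoom factors and pin a constant zoom so the framing does not change.

    import AVFoundation

    func pinZoom(for device: AVCaptureDevice) {
        // Zoom factors at which a virtual device switches constituent cameras.
        print("Switch-over factors:", device.virtualDeviceSwitchOverVideoZoomFactors)
        do {
            try device.lockForConfiguration()
            device.videoZoomFactor = 1.0 // illustrative; choose a constant factor relative to the values above
            device.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
        }
    }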
0
0
177
1w
Are White Balance gains applied before or after ADC?
At which point in the image processing pipeline does iOS apply the white balance gains which can be set via AVCaptureDevice.setWhiteBalanceModeLocked(with:completionHandler:)? Are those gains applied in the analog part of the camera pipeline, before the pixel voltage gets converted via the ADC to digital values? Or does the camera first convert the pixel voltages to digital values and then the gains are applied to the digital values? Is this consistent across devices or can the behavior vary from device to device?
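For context, a minimal sketch (Swift) of how the gains in question are set; the device handling and gain values are illustrative, and this does not answer where in the pipeline they are applied.

    import AVFoundation

    func lockWhiteBalance(on device: AVCaptureDevice) throws {
        try device.lockForConfiguration()
        defer { device.unlockForConfiguration() }

        var gains = AVCaptureDevice.WhiteBalanceGains(redGain: 1.8, greenGain: 1.0, blueGain: 2.1)
        // Clamp to the device's supported range before applying.
        gains.redGain = min(max(gains.redGain, 1.0), device.maxWhiteBalanceGain)
        gains.greenGain = min(max(gains.greenGain, 1.0), device.maxWhiteBalanceGain)
        gains.blueGain = min(max(gains.blueGain, 1.0), device.maxWhiteBalanceGain)

        device.setWhiteBalanceModeLocked(with: gains) { syncTime in
            // Frames with timestamps at or after syncTime reflect the locked gains.
        }
    }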
1
0
293
1w
Swift Student Challenge – Using iOS 26 APIs + Camera Input in Xcode App Playgrounds (Simulator Limitation)
Hi, I'm working on my Swift Student Challenge submission using iOS 26 APIs (FoundationModels) along with AVFoundation + Vision to capture user input and generate feedback. Since Swift Playgrounds doesn't support the FoundationModels framework, I'm using an Xcode App Playground, but I heard that submissions are reviewed in the Simulator, which doesn't support a live camera feed. I'm unsure how to handle this. Looking for guidance on the recommended approach. Thanks!
1
0
194
2w
is the output frame rate of a CMIOExtension rounded or capped?
I made a CMIOExtension (a virtual camera) which generates its own output, for use in our in-house software testing. I wanted to make a video source with 29.97, 30, 59.94 and 60 fps output. To this end, I created a CMIOExtensionDeviceSource which creates a CMIOExtensionDevice with one CMIOExtensionStreamSource with various stream formats contained in [CMIOExtensionStreamFormat], including one with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1000, timescale: 30000) and another with both maxFrameDuration and minFrameDuration = CMTimeMake(value: 1001, timescale: 30000). I've held off on the creation of the 59.94/60 fps source for now until this problem is resolved.

My virtual camera works, it produces a signal, but when I examine its associated AVCaptureDevice in the debugger, I find:

    (lldb) po self.captureDevice?.formats[0].videoSupportedFrameRateRanges[0].maxFrameDuration
    ▿ Optional<CMTime>
      ▿ some : CMTime
        - value : 1000000
        - timescale : 30000000
        ▿ flags : CMTimeFlags
          - rawValue : 1
        - epoch : 0

I get the same value, 1000000/30000000, or exactly 30 fps, for all the formats of my AVCaptureDevice. Is there something I'm doing wrong, or do CMIOExtensionDevices always round the frame rates? I can't force CoreMediaIO to produce frames at exactly my desired frame interval, but I'd like to ensure that the average frame rate is my desired rate. How can I do that? Frame emission is governed by a repeating DispatchSourceTimer with a repeat time specified in nanoseconds and the TimerFlags set to 'strict'.
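For reference, a sketch (Swift) of how the two stream formats described above might be declared; the format-description parameter is assumed to be built elsewhere, and the helper function is illustrative.

    import CoreMediaIO
    import CoreMedia

    func makeStreamFormats(formatDescription: CMVideoFormatDescription) -> [CMIOExtensionStreamFormat] {
        // Exactly 30 fps: frame duration 1000/30000
        let thirtyFPS = CMIOExtensionStreamFormat(
            formatDescription: formatDescription,
            maxFrameDuration: CMTimeMake(value: 1000, timescale: 30000),
            minFrameDuration: CMTimeMake(value: 1000, timescale: 30000),
            validFrameDurations: nil
        )
        // 29.97 fps (NTSC): frame duration 1001/30000
        let ntscFPS = CMIOExtensionStreamFormat(
            formatDescription: formatDescription,
            maxFrameDuration: CMTimeMake(value: 1001, timescale: 30000),
            minFrameDuration: CMTimeMake(value: 1001, timescale: 30000),
            validFrameDurations: nil
        )
        return [thirtyFPS, ntscFPS]
    }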
2
0
581
2w
Dell monitor volume control issue on iMac via USB-C
I have a new Dell 2725QC monitor connected to my iMac (2019, 27-inch) over USB-C through the back port. The problem is that the monitor's volume can currently only be adjusted from its hardware controls, not via software using the Apple keyboard. What should I do in terms of writing code to do this (Swift or Obj-C)? Is there a third-party solution for Intel iMacs and ARM Macs?
2
0
165
2w
-46250 error when calling `makeSecureTokenForExpirationDateOfPersistableContentKey`
Hi there,

We're working on offline playback of DRM tracks. The persistent keys (also known as track licenses) for offline playback are stored locally on the device and are served from cache when a user initiates playback of a downloaded track. Our persistent keys have a limited validity time and need to be refreshed when they expire. To prevent a situation where a persistent key expires while the user is offline, we've decided to eagerly refresh these keys one week before their expiration date. To make that happen we need to be able to obtain the expiration date of the given track license.

We've been attempting to use the makeSecureTokenForExpirationDateOfPersistableContentKey API to facilitate this process. The documentation states that this API returns a secret token representing the persistent key, which we can then exchange with our license server for the expiration date: https://developer.apple.com/documentation/avfoundation/avcontentkeysession/makesecuretokenforexpirationdate(ofpersistablecontentkey:completionhandler:)?language=objc

However, every time we call makeSecureTokenForExpirationDateOfPersistableContentKey, we receive an error with code -46250. We haven't been able to find any public references or documentation for this specific error code, which is preventing us from troubleshooting the issue. We are conducting our tests on a physical device, as the simulator does not support FairPlay playback. We don't use a dual-expiry approach.

- Is our understanding of how to obtain the expiration timestamp correct?
- Are we using the makeSecureTokenForExpirationDateOfPersistableContentKey API as it was intended?
- What does the -46250 error code mean, and what steps should we take to fix our FairPlay implementation to make this work?

Thanks in advance for your assistance.
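For reference, a minimal sketch (Swift) of the call in question, with the method name inferred from the documentation URL above; the surrounding function and the persistableKeyData value (the previously stored offline key blob) are illustrative.

    import AVFoundation

    func requestExpirationToken(session: AVContentKeySession, persistableKeyData: Data) {
        session.makeSecureTokenForExpirationDate(ofPersistableContentKey: persistableKeyData) { token, error in
            if let error = error as NSError? {
                // The -46250 code described above would surface here as error.code.
                print("Token request failed: \(error.domain) \(error.code)")
                return
            }
            guard let token else { return }
            // Exchange this opaque token with the license server to learn the expiration date.
            print("Secure token (\(token.count) bytes) ready for server exchange")
        }
    }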
2
1
324
2w
AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture: behavior is different with iPhone 17
The front-facing camera on iPhone 16 (and every previous model) gives the following values for AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture:

- 90 degrees for portrait
- 180 degrees for landscape left
- 270 degrees for upside-down
- 0 degrees for landscape right

Using these values a transform is calculated:

    var transform: CGAffineTransform {
        let degrees = rotationCoordinator.videoRotationAngleForHorizonLevelCapture
        let radians = degrees * .pi / 180.0
        return CGAffineTransform(rotationAngle: radians)
    }

And then applied to the AVAssetWriterInput:

    videoInput = AVAssetWriterInput(mediaType: .video,
                                    outputSettings: videoSettings,
                                    sourceFormatHint: videoFormatDescription)
    videoInput.transform = transform

This ensures the correct transform is added to the metadata so that the recorded video plays back in the correct orientation. However, with the iPhone 17 Pro and iPhone 17 Pro Max front-facing cameras, AVCaptureDevice.RotationCoordinator.videoRotationAngleForHorizonLevelCapture returns different values:

- 0 degrees for portrait
- 90 degrees for landscape left
- 180 degrees for upside-down
- 270 degrees for landscape right

So this approach breaks down, and the video orientation is incorrect. How is this intended to be handled?
2
0
509
2w
AVAssetWriterInput.PixelBufferReceiver.append hangs indefinitely (suspends and never resumes)
I've been struggling with a very frustrating issue using the new iOS 26 Swift Concurrency APIs for video processing. My pipeline reads frames using AVAssetReader, processes them via CIContext (Lanczos upscale), and then appends the result to an AVAssetWriter using the new PixelBufferReceiver.

The Problem: The execution randomly stops at the await append(...) call. The task suspends and never resumes.

- It is completely unpredictable: it might hang on the very first run, or it might work fine for 4-5 runs and then hang on the next one.
- It is independent of video duration: it happens with 5-second clips just as often as with long videos.
- No feedback from the system: there is no crash, no error thrown, and CPU usage drops to zero. The thread just stays in the suspended state indefinitely.

If I manually cancel the operation and restart the VideoEngine, it usually starts working again for a few more attempts, which makes me suspect some internal resource exhaustion or a deadlock between the GPU context and the writer's input.

The Code: Here is a simplified version of my processing loop:

    private func proccessVideoPipeline(
        readerOutputProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>>,
        pixelBufferReceiver: AVAssetWriterInput.PixelBufferReceiver,
        nominalFrameRate: Float,
        targetSize: CGSize
    ) async throws {
        while !Task.isCancelled, let payload = try await readerOutputProvider.next() {
            let sampleBufferInfo: (imageBuffer: CVPixelBuffer?, presentationTimeStamp: CMTime) = payload.withUnsafeSampleBuffer { sampleBuffer in
                return (sampleBuffer.imageBuffer, sampleBuffer.presentationTimeStamp)
            }

            guard let currentPixelBuffer = sampleBufferInfo.imageBuffer else {
                throw AsyncFrameProcessorError.missingImageBuffer
            }

            guard let pixelBufferPool = pixelBufferReceiver.pixelBufferPool else {
                throw NSError(domain: "PixelBufferPool", code: -1,
                              userInfo: [NSLocalizedDescriptionKey: "No pixel buffer pool available"])
            }

            let newPixelBuffer = try pixelBufferPool.makeMutablePixelBuffer()
            let newCVPixelBuffer = newPixelBuffer.withUnsafeBuffer({ $0 })

            try upscale(currentPixelBuffer,
                        outputPixelBuffer: newCVPixelBuffer,
                        targetSize: targetSize)

            let presentationTime = sampleBufferInfo.presentationTimeStamp
            try await pixelBufferReceiver.append(.init(unsafeBuffer: newCVPixelBuffer), with: presentationTime)
        }
    }

Does anyone know how to fix it?
0
0
97
2w
Recurring FigXPCUtilities / FigCaptureSourceRemote err=-17281 logs when using AVCaptureVideoDataOutput on iOS 26.x
Hi everyone,

I'm seeing recurring internal AVFoundation camera logs on iOS 26.2 and I'm trying to understand whether this is expected behavior or a regression in the capture pipeline. These logs appear shortly after starting an AVCaptureSession, while video frames are being delivered, and also when the camera is stopped or the capture session is torn down.

    <<<< FigXPCUtilities >>>> signalled err=-17281 at <>:302
    <<<< FigCaptureSourceRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSourceRemote.m:569) - (err=-17281)

To rule out issues caused by my own code, I had GPT generate a minimal SwiftUI example from scratch. Even in this clean, minimal setup, the same logs appear on iOS 26.2. The exact same logic did not produce these logs on iOS 18.x.

My primary interest is to perform real-time processing on the video frames delivered by the camera (via AVCaptureVideoDataOutput), for tasks such as analysis, computer vision, or custom frame handling, while simultaneously displaying the live preview.

Thanks in advance for any insight.

Example Code
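For reference, a minimal sketch (Swift) of the kind of AVCaptureVideoDataOutput setup described above; this is illustrative only and is not the poster's attached example code.

    import AVFoundation

    final class CameraPipeline: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        let session = AVCaptureSession()
        private let videoOutput = AVCaptureVideoDataOutput()
        private let queue = DispatchQueue(label: "camera.frames")

        func start() throws {
            session.beginConfiguration()
            guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
                throw NSError(domain: "CameraPipeline", code: -1)
            }
            let input = try AVCaptureDeviceInput(device: device)
            if session.canAddInput(input) { session.addInput(input) }

            videoOutput.setSampleBufferDelegate(self, queue: queue)
            videoOutput.alwaysDiscardsLateVideoFrames = true
            if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }
            session.commitConfiguration()
            session.startRunning()
        }

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Real-time analysis / computer vision / custom frame handling goes here.
        }

        func stop() {
            session.stopRunning() // the -17281 logs reportedly also appear around teardown
        }
    }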
1
0
635
2w
Any way to trigger cameraLensSmudgeDetectionStatus to change?
Looking to implement UI to tell the user to clean their lens in our app. I implemented KVO for cameraLensSmudgeDetectionStatus, but I'm having issues reliably triggering it, both in our app and in the built-in Camera app. I tried to get inventive by putting Tupperware over the lens, but I think the model driving this (or the LiDAR sensor) might be smart enough to detect that there is something close to the lens. Is there any way to trigger this change, similar to how we can trigger thermal state changes in debug? Thanks.
2
0
392
2w
Metal 4: Proper usage of requestResidency() with unique per-frame textures at 120fps
Hello, I have some confusion regarding ResidencySet, specifically about the requestResidency() function: how often should we call it? I have a captureOutput(_:didOutput:from:) method that is triggered at 60 or 120 fps. Inside this method, I am calling the following code every frame:

    computeResidencySet.removeAllAllocations()
    computeResidencySet.addAllocation(TextureA)
    computeResidencySet.addAllocation(TextureB)
    computeResidencySet.addAllocation(TextureC)
    computeResidencySet.commit()
    computeResidencySet.requestResidency() // Should we call it every frame?

Please keep in mind that TextureA, TextureB, and TextureC are unique for each call (new instances are provided on every frame).
1
0
505
2w
Is there any way to control alarm volume independently when using AlarmKit?
Is there any supported or recommended way to achieve user-configurable alarm volume while still using AlarmKit? Hi there! I'm currently building an alarm app on iOS using AlarmKit, and I'm running into a fundamental limitation around volume control. When using AlarmKit, alarm sounds are played at the system ringer volume. My goal is simple in concept: allow users to set an alarm volume inside the app, and have the alarm sound play at that volume when triggered. However, AlarmKit does not provide any API to control or override the alarm volume. I've explored several approaches (AVSystemController, MPVolumeView, ...), but none achieved the desired result. Is there any supported or recommended way to achieve user-configurable alarm volume while still using AlarmKit? Any insights from developers who've shipped alarm apps, or from Apple engineers, would be greatly appreciated. Thanks in advance 🙏
1
0
147
2w
AVAudioSession.outputVolume does not reflect system volume changes made while app is in background
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.

Observed behavior:
- When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1).
- The app is then moved to the background.
- While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5).
- When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1).

From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground. Volume changes made while the app is in the background are not reflected when the app returns to the foreground.

Apple's documentation for AVAudioSession.outputVolume describes it as "The systemwide output volume set by the user." (https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume) However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior seems to differ from this description.

Questions:
1. The documentation states that outputVolume represents the system-wide volume set by the user. In our testing, the value does not reflect volume changes made while the app is in the background and only updates when the app is in the foreground. Is this the expected behavior of AVAudioSession.outputVolume?
2. Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?

Any clarification on the intended behavior or recommended handling would be greatly appreciated.
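For reference, a sketch (Swift) reproducing the foreground check described above; the class and logging are illustrative.

    import AVFoundation
    import UIKit

    final class VolumeWatcher {
        private let session = AVAudioSession.sharedInstance()
        private var volumeObservation: NSKeyValueObservation?
        private var foregroundObserver: NSObjectProtocol?

        func start() {
            // outputVolume is documented as KVO-observable; per the report above,
            // changes made while the app was in the background may not be reflected.
            volumeObservation = session.observe(\.outputVolume, options: [.new]) { _, change in
                print("outputVolume changed to \(change.newValue ?? -1)")
            }
            foregroundObserver = NotificationCenter.default.addObserver(
                forName: UIApplication.didBecomeActiveNotification,
                object: nil, queue: .main
            ) { [weak self] _ in
                guard let self else { return }
                print("outputVolume on foreground: \(self.session.outputVolume)")
            }
        }
    }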
0
0
127
3w
AVSpeechSynthesizer & Bluetooth Issues
Hello, I have a CarPlay navigation app and use AVSpeechSynthesizer to speak directions to the user. Everything works great on my CarPlay simulator as well as when plugged into my GMC truck. However, I found out yesterday that for one of my users with a Ford truck the audio would cut in and out. After much troubleshooting, I was able to replicate this on my own truck when using Bluetooth to connect to CarPlay. My user was also using Bluetooth. Has anyone else experienced this? Is there a fix for the problem?

    import SwiftUI
    import AVFoundation

    class TextToSpeechService: NSObject, ObservableObject, AVSpeechSynthesizerDelegate {
        private var speechSynthesizer = AVSpeechSynthesizer()
        static let shared = TextToSpeechService()

        override init() {
            super.init()
            speechSynthesizer.delegate = self
        }

        func configureAudioSession() {
            speechSynthesizer.delegate = self
            do {
                try AVAudioSession.sharedInstance().setCategory(.playback, mode: .voicePrompt, options: [.mixWithOthers, .allowBluetooth])
            } catch {
                print("Failed to set audio session category: \(error.localizedDescription)")
            }
        }

        func speak(_ text: String) {
            Task(priority: .high) {
                let speechUtterance = AVSpeechUtterance(string: text)
                speechUtterance.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())
                try AVAudioSession.sharedInstance().setActive(true, options: .notifyOthersOnDeactivation)
                speechSynthesizer.speak(speechUtterance)
            }
        }

        func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer, didFinish utterance: AVSpeechUtterance) {
            Task {
                stopSpeech()
                try AVAudioSession.sharedInstance().setActive(false)
            }
        }

        func stopSpeech() {
            speechSynthesizer.stopSpeaking(at: .immediate)
        }
    }
1
1
691
3w