AVFoundation


Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions using AVFoundation.

Posts under AVFoundation tag

200 Posts

Post

Replies

Boosts

Views

Activity

AVPlayer: escaped characters displayed incorrectly in WebVTT subtitles (HLS)
Quotes are displayed incorrectly in AVPlayerViewController subtitles when streaming VOD content over HLS: a single quote ' (escaped as &apos; per the WebVTT specification) is rendered as "apos;", and a double quote " (escaped as &quot;) is rendered as "quot;". The same stream works fine in VLC, which shows the quotes correctly. The subtitle VTT files use Content-Type: text/vtt WEBVTT X-TIMESTAMP-MAP=LOCAL:490014:06:04.000,MPEGTS:158764568760056 Example cue: 490014:05:46.000 --> 490014:05:50.440 align:start line:83% position:14% and the playlist has: #EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",LANGUAGE="da",NAME="Dansk",AUTOSELECT=YES,CHARACTERISTICS="public.accessibility.transcribes-spoken-dialog,public.accessibility.describes-music-and-sound",URI="subs/dan_5/playlist.m3u8 #EXT-X-STREAM-INF:BANDWIDTH=780000,CODECS="mp4a.40.5,avc1.42c01e",RESOLUTION=256x144,AUDIO="audio-aac",SUBTITLES="subs" lære dig endnu bedre at kende." Adding 'wvtt' to the CODECS list in the playlist makes no difference. Is this a known bug? Is there a workaround? I guess AVAssetResourceLoaderDelegate could be used to intercept and rewrite the subtitle files, but that seems like quite a hack and not really what it is intended for.
2
0
193
Dec ’25
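A note on the workaround mentioned in the post above: intercepting the subtitle playlists and segments with AVAssetResourceLoaderDelegate does require routing them through a custom URL scheme, but the entity decoding itself is small. A minimal sketch, assuming you already have a VTT payload as a String; the extension and method names are mine:

extension String {
    /// Decodes the character references WebVTT allows back into literal characters.
    /// "&amp;" is handled last so that doubly escaped text is not decoded twice.
    func decodingVTTEntities() -> String {
        var text = self
        let entities: [(String, String)] = [
            ("&lt;", "<"), ("&gt;", ">"),
            ("&quot;", "\""), ("&apos;", "'"),
            ("&nbsp;", "\u{00A0}"), ("&amp;", "&")
        ]
        for (entity, literal) in entities {
            text = text.replacingOccurrences(of: entity, with: literal)
        }
        return text
    }
}

Given that VLC renders the same stream correctly, filing a report through Feedback Assistant alongside any workaround seems worthwhile.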
How can I show a movable webcam preview above all windows in macOS without activating the app
I'm building a macOS app using SwiftUI, and I want to create a draggable floating webcam preview window. Right now, I have something like this: import SwiftUI import AVFoundation struct WebcamPreviewView: View { let captureSession: AVCaptureSession? var body: some View { ZStack { if let session = captureSession { CameraPreviewLayer(session: session) .clipShape(RoundedRectangle(cornerRadius: 50)) .overlay( RoundedRectangle(cornerRadius: 50) .strokeBorder(Color.white.opacity(0.2), lineWidth: 2) ) } else { VStack(spacing: 8) { Image(systemName: "video.slash.fill") .font(.system(size: 40)) .foregroundColor(.white.opacity(0.6)) Text("No Camera") .font(.caption) .foregroundColor(.white.opacity(0.6)) } } } .shadow(color: .black.opacity(0.3), radius: 10, x: 0, y: 5) } } struct CameraPreviewLayer: NSViewRepresentable { let session: AVCaptureSession func makeNSView(context: Context) -> NSView { let view = NSView() view.wantsLayer = true let previewLayer = AVCaptureVideoPreviewLayer(session: session) previewLayer.videoGravity = .resizeAspectFill previewLayer.frame = view.bounds view.layer = previewLayer return view } func updateNSView(_ nsView: NSView, context: Context) { if let previewLayer = nsView.layer as? AVCaptureVideoPreviewLayer { previewLayer.frame = nsView.bounds } } } This is my SwiftUI-side code for showing the webcam. I am trying to present it as a floating window that appears on top of all other apps' windows; however, even when the webcam preview is clicked, it should not steal focus from other apps, and the other apps should keep functioning as they already do. import Cocoa import SwiftUI class WebcamPreviewWindow: NSPanel { private static let defaultSize = CGSize(width: 200, height: 200) private var initialClickLocation: NSPoint = .zero init() { let screenFrame = NSScreen.main?.visibleFrame ?? .zero let origin = CGPoint( x: screenFrame.maxX - Self.defaultSize.width - 20, y: screenFrame.minY + 20 ) super.init( contentRect: CGRect(origin: origin, size: Self.defaultSize), styleMask: [.borderless], backing: .buffered, defer: false ) isOpaque = false backgroundColor = .clear hasShadow = false level = .screenSaver collectionBehavior = [ .canJoinAllSpaces, .fullScreenAuxiliary, .stationary, .ignoresCycle ] ignoresMouseEvents = false acceptsMouseMovedEvents = true hidesOnDeactivate = false becomesKeyOnlyIfNeeded = false } // MARK: - Focus Prevention override var canBecomeKey: Bool { false } override var canBecomeMain: Bool { false } override var acceptsFirstResponder: Bool { false } override func makeKey() { } override func mouseDown(with event: NSEvent) { initialClickLocation = event.locationInWindow } override func mouseDragged(with event: NSEvent) { let current = event.locationInWindow let dx = current.x - initialClickLocation.x let dy = current.y - initialClickLocation.y let newOrigin = CGPoint( x: frame.origin.x + dx, y: frame.origin.y + dy ) setFrameOrigin(newOrigin) } func show<Content: View>(with view: Content) { let host = NSHostingView(rootView: view) host.autoresizingMask = [.width, .height] host.frame = contentLayoutRect contentView = host orderFrontRegardless() } func hide() { orderOut(nil) contentView = nil } } This is my AppKit-side code that makes the floating window; however, when the webcam preview is clicked, my app becomes the focused app and I have to click somewhere else to lose focus before I can use the rest of the windows.
0
0
358
Nov ’25
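A possible direction for the floating-preview question above, sketched against the WebcamPreviewWindow class in the post: NSPanel supports a nonactivating style, which lets the panel receive clicks and drags without activating its app, so other applications keep keyboard focus. Only the initializer changes:

super.init(
    contentRect: CGRect(origin: origin, size: Self.defaultSize),
    styleMask: [.borderless, .nonactivatingPanel], // clicks no longer bring the owning app forward
    backing: .buffered,
    defer: false
)
isFloatingPanel = true // behave as a utility panel rather than a document window

Combined with the existing canBecomeKey/canBecomeMain overrides, clicking and dragging the preview should no longer pull focus away from the frontmost app; whether the SwiftUI hosting view interferes with hit testing is worth verifying separately.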
watchOS longFormAudio session does not deactivate properly
My workout watch app supports audio playback during exercise sessions. When users carry an Apple Watch, an iPhone, and AirPods, with the AirPods connected to the iPhone, I want to route audio from the Apple Watch to the AirPods for playback. I've implemented this functionality using the following code. try? session.setCategory(.playback, mode: .default, policy: .longFormAudio, options: []) try await session.activate() When users are playing music on the iPhone and trigger my code in the watch app, the Apple Watch correctly guides them to select AirPods, pauses the iPhone's music, and plays my audio. However, when playback finishes and I end the session using the code below: try session.setActive(false, options:[.notifyOthersOnDeactivation]) the iPhone doesn't automatically resume the previously interrupted music playback; it requires manual intervention. Is this expected behavior, or am I missing other important steps in my code?
1
0
303
Nov ’25
AVCaptureSession setting preset has no effect if HDR configured with AVCaptureVideoDataOutput
I want to confirm whether this is a bug or a programming error. It is very easy to reproduce by modifying the AVCam sample code. Steps to reproduce: add an AVCaptureVideoDataOutput to the AVCaptureSession in the AVCam sample code's CaptureService actor (no need to set a delegate): private let videoDataOutput = AVCaptureVideoDataOutput() Then, in the configureSession method, add the following lines: try addOutput(videoDataOutput) if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) { videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String : kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] } Next, modify the setHDRVideoEnabled method: /// Sets whether the app captures HDR video. func setHDRVideoEnabled(_ isEnabled: Bool) { // Bracket the following configuration in a begin/commit configuration pair. captureSession.beginConfiguration() defer { captureSession.commitConfiguration() } do { // If the current device provides a 10-bit HDR format, enable it for use. if isEnabled, let format = currentDevice.activeFormat10BitVariant { try currentDevice.lockForConfiguration() currentDevice.activeFormat = format currentDevice.unlockForConfiguration() isHDRVideoEnabled = true if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange) { videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String : kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange] } } else { captureSession.sessionPreset = .high isHDRVideoEnabled = false if videoDataOutput.availableVideoPixelFormatTypes.contains(kCVPixelFormatType_32BGRA) { print("Setting sdr pixel format \(kCVPixelFormatType_32BGRA)") videoDataOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey as AnyHashable as! String : kCVPixelFormatType_32BGRA] } try currentDevice.lockForConfiguration() currentDevice.activeColorSpace = .sRGB currentDevice.unlockForConfiguration() } } catch { logger.error("Unable to obtain lock on device and can't enable HDR video capture.") } } The problem is that toggling HDR on and off no longer works in video mode. If you enable HDR and then disable it, the device's active format does not change (setting sessionPreset has no effect). This does not happen if the video data output is not added to the session. Is there any workaround available?
1
0
238
Nov ’25
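One workaround worth trying for the HDR toggle question above (a sketch, not a confirmed fix): once activeFormat has been set manually, the session is effectively in an input-priority configuration, so rather than relying on sessionPreset = .high to leave HDR, explicitly select a matching 8-bit format on the device inside the same do/catch. Names follow the AVCam-based snippet in the post:

let currentDims = CMVideoFormatDescriptionGetDimensions(currentDevice.activeFormat.formatDescription)
if let sdrFormat = currentDevice.formats.first(where: { format in
    let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
    let pixelFormat = CMFormatDescriptionGetMediaSubType(format.formatDescription)
    return dims.width == currentDims.width
        && dims.height == currentDims.height
        && pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange // 8-bit, non-HDR
}) {
    try currentDevice.lockForConfiguration()
    currentDevice.activeFormat = sdrFormat
    currentDevice.unlockForConfiguration()
}

If the explicit format switch works while sessionPreset does not, that result would also sharpen a Feedback Assistant report.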
[AVPlayerItemVideoOutput initWithPixelBufferAttributes:] output attributes setting not work
My app wants to convert iPhone 12 HDR video to SDR for editing, following the Apple HDR conversion documentation. My code sets the pixBuffAttributes: [pixBuffAttributes setObject:(id)(kCVImageBufferYCbCrMatrix_ITU_R_709_2) forKey:(id)kCVImageBufferYCbCrMatrixKey]; [pixBuffAttributes setObject:(id)(kCVImageBufferColorPrimaries_ITU_R_709_2) forKey:(id)kCVImageBufferColorPrimariesKey]; [pixBuffAttributes setObject:(id)kCVImageBufferTransferFunction_ITU_R_709_2 forKey:(id)kCVImageBufferTransferFunctionKey]; playerItemOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pixBuffAttributes]; But when I inspect the playerItemOutput's output buffer: CFTypeRef colorAttachments = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL); CFTypeRef colorPrimaries = CVBufferGetAttachment(pixelBuffer, kCVImageBufferColorPrimariesKey, NULL); CFTypeRef colorTransFunc = CVBufferGetAttachment(pixelBuffer, kCVImageBufferTransferFunctionKey, NULL); NSLog(@"colorAttachments = %@", colorAttachments); NSLog(@"colorPrimaries = %@", colorPrimaries); NSLog(@"colorTransFunc = %@", colorTransFunc); the log output is: colorAttachments = ITU_R_2020 colorPrimaries = ITU_R_2020 colorTransFunc = ITU_R_2100_HLG The pixBuffAttributes settings appear to have no effect on the output format. Please help!
1
1
827
Nov ’25
How to dynamically update an existing AVComposition when users add a new custom video clip?
I’m building a macOS video editor that uses AVComposition and AVVideoComposition. Initially, my renderer creates a composition with some default video/audio tracks: @Published var composition: AVComposition? @Published var videoComposition: AVVideoComposition? @Published var playerItem: AVPlayerItem? Then I call a buildComposition() function that inserts all the default video segments. Later in the editing workflow, the user may choose to add their own custom video clip. For this I have a function like: private func handlePickedVideo(_ url: URL) { guard url.startAccessingSecurityScopedResource() else { print("Failed to access security-scoped resource") return } let asset = AVURLAsset(url: url) let videoTracks = asset.tracks(withMediaType: .video) guard let firstVideoTrack = videoTracks.first else { print("No video track found") url.stopAccessingSecurityScopedResource() return } renderer.insertUserVideoTrack(from: asset, track: firstVideoTrack) url.stopAccessingSecurityScopedResource() } What I want to achieve is the same behavior professional video editors provide, after the composition has already been initialized and built, the user should be able to add a new video track and the composition should update live, meaning the preview player should immediately reflect the changes without rebuilding everything from scratch manually. How can I structure my AVComposition / AVMutableComposition and my rendering pipeline so that adding a new clip later updates the existing composition in real time (similar to Final Cut/Adobe Premiere), instead of needing to rebuild everything from zero? You can find a playable version of this entire setup at :- https://github.com/zaidbren/SimpleEditor
0
0
320
Nov ’25
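For the live-update question above, one common pattern (a sketch with my own parameter names, not how Final Cut does it) is to keep the AVMutableComposition as the single source of truth, append the user's clip to it, and hand the player a fresh item built from an immutable copy while preserving the playhead:

func insertUserClip(_ sourceTrack: AVAssetTrack,
                    from asset: AVAsset,
                    into composition: AVMutableComposition,
                    videoComposition: AVVideoComposition?,
                    player: AVPlayer) async throws {
    let clipDuration = try await asset.load(.duration)
    let insertTime = try await composition.load(.duration)

    // Reuse a compatible track if one exists, otherwise add a new video track.
    let destinationTrack = composition.mutableTrack(compatibleWith: sourceTrack)
        ?? composition.addMutableTrack(withMediaType: .video,
                                       preferredTrackID: kCMPersistentTrackID_Invalid)
    try destinationTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: clipDuration),
                                          of: sourceTrack,
                                          at: insertTime)

    // Snapshot the playhead, swap in an item built from an immutable copy of the
    // composition, and seek back so the preview updates without a visible rebuild.
    let playhead = player.currentTime()
    let item = AVPlayerItem(asset: composition.copy() as! AVComposition)
    item.videoComposition = videoComposition // extend the instructions to cover the new range as needed
    player.replaceCurrentItem(with: item)
    player.seek(to: playhead, toleranceBefore: .zero, toleranceAfter: .zero)
}

The usual guidance is to hand the player an immutable snapshot rather than the mutable composition itself, which is why the sketch swaps in a copy instead of editing the playing item in place.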
Adding AVCaptureMovieFileOutput and AVCaptureVideoDataOutput with ProRes422
Adding both AVCaptureMovieFileOutput and AVCaptureVideoDataOutput to an AVCaptureSession is supported, as the documentation states (snippet copied below), but when the AVCaptureDevice is configured with the ProRes422 codec, it fails unless one of the two outputs is removed from the capture session. This is readily reproducible on an iPhone 14 Pro running iOS 26.0. "Prior to iOS 16, you can add an AVCaptureVideoDataOutput and an AVCaptureMovieFileOutput to the same session, but only one may have its connection active. If you attempt to enable both connections, the system chooses the movie file output as the active connection and disables the video data output’s connection. For apps that link against iOS 16 or later, this restriction no longer exists."
0
0
211
Nov ’25
Spatial Audio on iOS 18 doesn't work as intended
I’m facing a problem while trying to achieve spatial audio effects in my iOS 18 app. I have tried several approaches to get good 3D audio, but the effect never felt good enough or it didn’t work at all. Also what mostly troubles me is I noticed that AirPods I have doesn’t recognize my app as one having spatial audio (in audio settings it shows "Spatial Audio Not Playing"). So i guess my app doesn't use spatial audio potential. First approach uses AVAudioEnviromentNode with AVAudioEngine. Chaining position of player as well as changing listener’s doesn’t seem to change anything in how audio plays. Here's simple how i initialize AVAudioEngine import Foundation import AVFoundation class AudioManager: ObservableObject { // important class variables var audioEngine: AVAudioEngine! var environmentNode: AVAudioEnvironmentNode! var playerNode: AVAudioPlayerNode! var audioFile: AVAudioFile? ... //Sound set up func setupAudio() { do { let session = AVAudioSession.sharedInstance() try session.setCategory(.playback, mode: .default, options: []) try session.setActive(true) } catch { print("Failed to configure AVAudioSession: \(error.localizedDescription)") } audioEngine = AVAudioEngine() environmentNode = AVAudioEnvironmentNode() playerNode = AVAudioPlayerNode() audioEngine.attach(environmentNode) audioEngine.attach(playerNode) audioEngine.connect(playerNode, to: environmentNode, format: nil) audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil) environmentNode.listenerPosition = AVAudio3DPoint(x: 0, y: 0, z: 0) environmentNode.listenerAngularOrientation = AVAudio3DAngularOrientation(yaw: 0, pitch: 0, roll: 0) environmentNode.distanceAttenuationParameters.referenceDistance = 1.0 environmentNode.distanceAttenuationParameters.maximumDistance = 100.0 environmentNode.distanceAttenuationParameters.rolloffFactor = 2.0 // example.mp3 is mono sound guard let audioURL = Bundle.main.url(forResource: "example", withExtension: "mp3") else { print("Audio file not found") return } do { audioFile = try AVAudioFile(forReading: audioURL) } catch { print("Failed to load audio file: \(error)") } } ... //Playing sound func playSpatialAudio(pan: Float ) { guard let audioFile = audioFile else { return } // left side playerNode.position = AVAudio3DPoint(x: pan, y: 0, z: 0) playerNode.scheduleFile(audioFile, at: nil, completionHandler: nil) do { try audioEngine.start() playerNode.play() } catch { print("Failed to start audio engine: \(error)") } ... } Second more complex approach using PHASE did better. I’ve made an exemplary app that allows players to move audio player in 3D space. I have added reverb, and sliders changing audio position up to 10 meters each direction from listener but audio seems to only really change left to right (x axis) - again I think it might be trouble with the app not being recognized as spatial. //Crucial class Variables: class PHASEAudioController: ObservableObject{ private var soundSourcePosition: simd_float4x4 = matrix_identity_float4x4 private var audioAsset: PHASESoundAsset! private let phaseEngine: PHASEEngine private let params = PHASEMixerParameters() private var soundSource: PHASESource private var phaseListener: PHASEListener! private var soundEventAsset: PHASESoundEventNodeAsset? 
// Initialization of PHASE init{ do { let session = AVAudioSession.sharedInstance() try session.setCategory(.playback, mode: .default, options: []) try session.setActive(true) } catch { print("Failed to configure AVAudioSession: \(error.localizedDescription)") } // Init PHASE Engine phaseEngine = PHASEEngine(updateMode: .automatic) phaseEngine.defaultReverbPreset = .mediumHall phaseEngine.outputSpatializationMode = .automatic //nothing helps // Set listener position to (0,0,0) in World space let origin: simd_float4x4 = matrix_identity_float4x4 phaseListener = PHASEListener(engine: phaseEngine) phaseListener.transform = origin phaseListener.automaticHeadTrackingFlags = .orientation try! self.phaseEngine.rootObject.addChild(self.phaseListener) do{ try self.phaseEngine.start(); } catch { print("Could not start PHASE engine") } audioAsset = loadAudioAsset() // Create sound Source // Sphere soundSourcePosition.translate(z:3.0) let sphere = MDLMesh.newEllipsoid(withRadii: vector_float3(0.1,0.1,0.1), radialSegments: 14, verticalSegments: 14, geometryType: MDLGeometryType.triangles, inwardNormals: false, hemisphere: false, allocator: nil) let shape = PHASEShape(engine: phaseEngine, mesh: sphere) soundSource = PHASESource(engine: phaseEngine, shapes: [shape]) soundSource.transform = soundSourcePosition print(soundSourcePosition) do { try phaseEngine.rootObject.addChild(soundSource) } catch { print ("Failed to add a child object to the scene.") } let simpleModel = PHASEGeometricSpreadingDistanceModelParameters() simpleModel.rolloffFactor = rolloffFactor soundPipeline.distanceModelParameters = simpleModel let samplerNode = PHASESamplerNodeDefinition( soundAssetIdentifier: audioAsset.identifier, mixerDefinition: soundPipeline, identifier: audioAsset.identifier + "_SamplerNode") samplerNode.playbackMode = .looping do {soundEventAsset = try phaseEngine.assetRegistry.registerSoundEventAsset( rootNode: samplerNode, identifier: audioAsset.identifier + "_SoundEventAsset") } catch { print("Failed to register a sound event asset.") soundEventAsset = nil } } //Playing sound func playSound(){ // Fire new sound event with currently set properties guard let soundEventAsset else { return } params.addSpatialMixerParameters( identifier: soundPipeline.identifier, source: soundSource, listener: phaseListener) let soundEvent = try! PHASESoundEvent(engine: phaseEngine, assetIdentifier: soundEventAsset.identifier, mixerParameters: params) soundEvent.start(completion: nil) } ... } Also worth mentioning might be that I only own personal team account
4
0
1k
Nov ’25
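Two details that often matter with the AVAudioEnvironmentNode approach in the post above (a sketch against its AudioManager, not a full diagnosis): the environment node historically only spatializes inputs connected with a mono format, and the per-player rendering algorithm defaults to a simple panner, so asking for an HRTF variant usually makes the 3D placement much more convincing.

// In setupAudio(), after attaching the nodes and loading the (mono) file:
if let fileFormat = audioFile?.processingFormat {
    let monoFormat = AVAudioFormat(standardFormatWithSampleRate: fileFormat.sampleRate, channels: 1)
    audioEngine.connect(playerNode, to: environmentNode, format: monoFormat)
    audioEngine.connect(environmentNode, to: audioEngine.mainMixerNode, format: nil)
}
playerNode.renderingAlgorithm = .HRTFHQ // AVAudioPlayerNode adopts AVAudio3DMixing

As far as I understand, the "Spatial Audio Not Playing" indicator on AirPods reflects the system spatializer (multichannel or Spatialize Stereo content), not app-side HRTF rendering with AVAudioEnvironmentNode or PHASE, so that indicator alone does not prove the in-app rendering is broken.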
Launch The Main App from LockedCameraCapture
If the app is launched from LockedCameraCapture and if the settings button is tapped, I need to launch the main app. CameraViewController: func settingsButtonTapped() { #if isLockedCameraCaptureExtension //App is launched from Lock Screen //Launch main app here... #else //App is launched from Home Screen self.showSettings(animated: true) #endif } In this document: https://developer.apple.com/documentation/lockedcameracapture/creating-a-camera-experience-for-the-lock-screen Apple asks you to use: func launchApp(with session: LockedCameraCaptureSession, info: String) { Task { do { let activity = NSUserActivityTypeLockedCameraCapture activity.userInfo = [UserInfoKey: info] try await session.openApplication(for: activity) } catch { StatusManager.displayError("Unable to open app - \(error.localizedDescription)") } } } However, the documentation states that this should be placed within the extension code - LockedCameraCapture. If I do that, how can I call that all the way down from the main app's CameraViewController?
3
0
520
Nov ’25
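For the question above, one structure that keeps the documented launchApp(with:info:) call inside the extension while still letting the shared view controller trigger it; the handler property is my own convention, not an Apple API:

import UIKit

final class CameraViewController: UIViewController {
    // Injected by the LockedCameraCapture extension; left nil when the main app runs.
    var openMainAppHandler: (() -> Void)?

    func settingsButtonTapped() {
        if let openMainAppHandler {
            openMainAppHandler()         // Lock Screen launch: ask the extension to open the main app
        } else {
            showSettings(animated: true) // Normal launch: present settings in place
        }
    }

    func showSettings(animated: Bool) {
        // existing implementation
    }
}

The extension, which owns the LockedCameraCaptureSession, assigns openMainAppHandler a closure that calls the launchApp(with:info:) helper from the documentation, so the openApplication call never has to live in the main app target and the #if isLockedCameraCaptureExtension branch becomes unnecessary.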
Error when capturing a high-resolution frame with depth data enabled in ARKit
Problem Description (1) I am using ARKit in an iOS app to provide AR capabilities. Specifically, I'm trying to use the ARSession's captureHighResolutionFrame(using:) method to capture a high-resolution frame along with its corresponding depth data: open func captureHighResolutionFrame(using photoSettings: AVCapturePhotoSettings?) async throws -> ARFrame (2) However, when I attempt to do so, the call fails at runtime with the following error, which I captured from the Xcode debugger: [AVCapturePhotoOutput capturePhotoWithSettings:delegate:] settings.depthDataDeliveryEnabled must be NO if self.isDepthDataDeliveryEnabled is NO Code Snippet Explanation (1) ARConfig and ARSession Initialization The following code configures the ARConfiguration and ARSession. A key part of this setup is setting the videoFormat to the one recommended for high-resolution frame capturing, as suggested by the documentation. func start(imagesDirectory: URL, configuration: Configuration = Configuration()) { // ... basic setup ... let arConfig = ARWorldTrackingConfiguration() arConfig.planeDetection = [.horizontal, .vertical] // Enable various frame semantics for depth and segmentation if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) { arConfig.frameSemantics.insert(.smoothedSceneDepth) } if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) { arConfig.frameSemantics.insert(.sceneDepth) } if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) { arConfig.frameSemantics.insert(.personSegmentationWithDepth) } // Set the recommended video format for high-resolution captures if let videoFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing { arConfig.videoFormat = videoFormat print("Enabled: High-Resolution Frame Capturing by selecting recommended video format.") } arSession.run(arConfig, options: [.resetTracking, .removeExistingAnchors]) // ... } (2) Capturing the High-Resolution Frame The code below is intended to manually trigger the capture of a high-resolution frame. The goal is to obtain both a high-resolution color image and its associated high-resolution depth data. To achieve this, I explicitly set the isDepthDataDeliveryEnabled property of the AVCapturePhotoSettings object to true. func requestImageCapture() async { // ... guard statements ... print("Manual image capture requested.") if #available(iOS 16.0, *) { // Assuming 16.0+ for this API if let defaultSettings = arSession.configuration?.videoFormat.defaultPhotoSettings { // Create a mutable copy from the default settings, as recommended let photoSettings = AVCapturePhotoSettings(from: defaultSettings) // Explicitly enable depth data delivery for this capture request photoSettings.isDepthDataDeliveryEnabled = true do { let highResFrame = try await arSession.captureHighResolutionFrame(using: photoSettings) print("Successfully captured a high-resolution frame.") if let initialDepthData = highResFrame.capturedDepthData { // Process depth data... } else { print("High-resolution frame was captured, but it contains no depth data.") } } catch { // The exception is caught here print("Error capturing high-resolution frame: \(error.localizedDescription)") } } } // ... 
} Issue Confirmation & Question (1) Through debugging, I have confirmed the following behavior: If I call captureHighResolutionFrame without providing the photoSettings parameter, or if photoSettings.isDepthDataDeliveryEnabled is set to false, the method successfully returns a high-resolution ARFrame, but its capturedDepthData is nil. (2) The error message clearly indicates that settings.depthDataDeliveryEnabled can only be true if the underlying AVCapturePhotoOutput instance's own isDepthDataDeliveryEnabled property is also true. (3) However, within the context of ARKit and ARSession, I cannot find any public API that would allow me to explicitly access and configure the underlying AVCapturePhotoOutput instance that ARSession manages. (4) My question is: Is there a way to configure the ARSession's internal AVCapturePhotoOutput to enable its isDepthDataDeliveryEnabled property? Or, is simultaneously capturing a high-resolution frame and its associated depth data simply not a supported use case in the current ARKit framework?
1
0
343
Nov ’25
LockedCameraCaptureExtension and Sharing User Preferences
My main app saves preferences to UserDefaults.standard. One preference the user can toggle is isRawOn: UserDefaults.standard.set(self.isRawOn, forKey: "isRawOn") Now I have a LockedCameraCaptureExtension that needs to know whether that setting is on or off at launch. Also, if it's toggled within the extension, the main app should know about it on its next launch. The main app and the extension run in separate containers, and the preferences are not shared for privacy reasons. Apple mentions using the appContext of CameraCaptureIntent, but I'm not sure how the scenario above is possible through that, unless I am missing something. Apple Reference What I have for CameraCaptureIntent: @available(iOS 18, *) struct LaunchMyAppControlIntent: CameraCaptureIntent { typealias AppContext = MyAppContext static let title: LocalizedStringResource = "LaunchMyAppControlIntent" static let description = IntentDescription("Capture photos with MyApp.") @MainActor func perform() async throws -> some IntentResult { .result() } }
1
0
395
Nov ’25
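A sketch of the appContext route for the preference question above, reusing the MyAppContext and LaunchMyAppControlIntent names from the post; the exact CameraCaptureIntent signatures should be double-checked against the SDK, and the helper function names are mine:

struct MyAppContext: Codable {
    var isRawOn: Bool = false
}

// Call from either the main app or the extension whenever the toggle changes.
@available(iOS 18.0, *)
func persistRawSetting(_ isRawOn: Bool) async {
    do {
        var context = try await LaunchMyAppControlIntent.appContext ?? MyAppContext()
        context.isRawOn = isRawOn
        try await LaunchMyAppControlIntent.updateAppContext(context)
    } catch {
        print("Failed to update capture intent app context: \(error)")
    }
}

// Call at launch on either side to pick up the last value written by the other.
@available(iOS 18.0, *)
func loadRawSetting() async -> Bool {
    ((try? await LaunchMyAppControlIntent.appContext) ?? nil)?.isRawOn ?? false
}

UserDefaults.standard can remain a purely local cache on each side if that is convenient.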
AVSpeechSynthesizer pulls words out of thin air.
Hi, I'm working on a project that uses the AVSpeechSynthesizer and AVSpeechUtterance. I discovered by chance that the AVSpeechSynthesizer automatically completes some words instead of just outputting what it's supposed to. These are abbreviations for days of the week or months, but not all of them. I don't want either of them automatically completed, but only the specified text. The completion transcends languages. I have written a short example program for demonstration purposes. import SwiftUI import AVFoundation import Foundation let synthesizer: AVSpeechSynthesizer = AVSpeechSynthesizer() struct ContentView: View { var body: some View { VStack { Button { utter("mon") } label: { Text("mon") } .buttonStyle(.borderedProminent) Button { utter("tue") } label: { Text("tue") } .buttonStyle(.borderedProminent) Button { utter("thu") } label: { Text("thu") } .buttonStyle(.borderedProminent) Button { utter("feb") } label: { Text("feb") } .buttonStyle(.borderedProminent) Button { utter("feb", lang: "de-DE") } label: { Text("feb DE") } .buttonStyle(.borderedProminent) Button { utter("wed") } label: { Text("wed") } .buttonStyle(.borderedProminent) } .padding() } private func utter(_ text: String, lang: String = "en-US") { let utterance = AVSpeechUtterance(string: text) let voice = AVSpeechSynthesisVoice(language: lang) utterance.voice = voice synthesizer.speak(utterance) } } #Preview { ContentView() } Thank you Christian
1
0
214
Nov ’25
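A possible workaround for the abbreviation expansion described above, not a confirmed fix: give the utterance an explicit pronunciation through the IPA notation attribute so the synthesizer speaks the literal token. The IPA value in the example call is only a placeholder; in practice you would take one from Settings > Accessibility > Spoken Content > Pronunciations or your own phoneme source. This assumes the synthesizer instance from the post:

private func utterLiteral(_ text: String, ipa: String, lang: String = "en-US") {
    let attributed = NSMutableAttributedString(string: text)
    attributed.addAttribute(
        NSAttributedString.Key(AVSpeechSynthesisIPANotationAttribute),
        value: ipa,
        range: NSRange(location: 0, length: attributed.length)
    )
    let utterance = AVSpeechUtterance(attributedString: attributed)
    utterance.voice = AVSpeechSynthesisVoice(language: lang)
    synthesizer.speak(utterance)
}

// Example call; the IPA string is illustrative only.
// utterLiteral("mon", ipa: "mɑn")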
Control system video effects support for CMIO extension
We're distributing a virtual camera with our app that does not profit in the slightest from automatically applied system video effects both to the video going in (physical camera device) or out (virtual camera device). I'm aware of setting NSCameraReactionEffectGesturesEnabledDefault in Info.plist and determining active video effects via AVCaptureDevice API. Those are obviously crutches, because having to tell users to go look for and click around in menu bar apps is the opposite of a great UX. To make our product's video output more deterministic, I'm looking for a way to tell the CMIO subsystem that our virtual camera does not support any of the system video effects. I'm seeing properties like AVCaptureDevice.Format.isPortraitEffectSupported and AVCaptureDevice.Format.isStudioLightSupported whose documentation refers to the format's ability to support these effects. Since we're setting a CMFormatDescription via CMIOExtensionStreamSource.formats I was hoping to find something in the extensions, but wasn't successful so far. Can this be done?
2
0
222
Nov ’25
Retrieving the DRM expiration time for FairPlay offline assets on iOS
I’m implementing FairPlay offline streaming on iOS and ran into a question about DRM expiration handling. As far as I understand, when issuing a FairPlay offline license, there are typically two time windows: 1. The period during which the user can start offline playback (the longer “rental window”). 2. Once playback starts, the duration allowed to complete playback (the shorter “playback window”). I’d like to display this information (the remaining validity or expiration time) in the app’s UI next to each downloaded asset. My question is: 👉 Is there a way to programmatically check or retrieve the expiration time for a FairPlay offline asset on the client side (via AVFoundation or AVContentKeySession)? Any guidance or best practices for surfacing DRM expiration info in the UI would be greatly appreciated.
1
0
529
Oct ’25
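For the expiration question above: as far as I know, AVContentKeySession does not expose the expiry of a persisted FairPlay key to the client, so a common approach is to have the key server return the rental and playback windows together with the license response and store that metadata alongside the download. A rough bookkeeping sketch; the type and field names are mine, and the exact expiry semantics depend on how your server issues licenses:

import Foundation

struct OfflineLicenseInfo: Codable {
    let acquiredAt: Date
    let rentalWindow: TimeInterval     // seconds during which playback may be started
    let playbackWindow: TimeInterval   // seconds allowed once playback has started
    var firstPlayedAt: Date?

    /// The date to surface next to the downloaded asset in the UI.
    var expirationDate: Date {
        guard let firstPlayedAt else {
            return acquiredAt.addingTimeInterval(rentalWindow)
        }
        return firstPlayedAt.addingTimeInterval(playbackWindow)
    }

    var remainingTime: TimeInterval {
        max(0, expirationDate.timeIntervalSinceNow)
    }
}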
AVAssetExportSession ignores frameDuration 60fps and exports at 30fps, but AVPlayer playback is correct
Hey everyone, I'm stuck on a really frustrating AVFoundation problem. I'm building a video editor that uses a custom AVVideoCompositor to add effects, and I need the final output to be 60 FPS. So basically, I create an AVMutableComposition to sequence my video clips. I create an AVMutableVideoComposition and set the frame rate to 60 FPS: videoComposition.frameDuration = CMTime(value: 1, timescale: 60) I assign my CustomVideoCompositor class to the videoComposition. I create an AVPlayerItem with the composition and video composition. The Problem: Playback Works: When I play the AVPlayerItem in an AVPlayer, it's perfect. It plays at a smooth 60 FPS, and my custom compositor's startRequest method is called 60 times per second. Export Fails: When I try to export the exact same composition and video composition using AVAssetExportSession, the final .mp4 file is always 30 FPS (or 29.97). I've logged inside my custom compositor during the export, and it's definitely being called 30 times per second, so it's generating the 30 frames. It seems like AVAssetExportSession is just dropping every other frame when it encodes the video. My source videos are screen recordings which I recorded using ScreenCaptureKit itself with the minimum frame interval to be 60. Here is my export function. I'm using the AVAssetExportPresetHighestQuality preset :- func exportVideo(to outputURL: URL) async throws { guard let composition = composition, let videoComposition = videoComposition else { throw VideoCompositionError.noValidVideos } try? FileManager.default.removeItem(at: outputURL) guard let exportSession = AVAssetExportSession( asset: composition, presetName: AVAssetExportPresetHighestQuality // Is this the problem? ) else { throw VideoCompositionError.trackCreationFailed } exportSession.outputFileType = .mp4 exportSession.videoComposition = videoComposition // This has the 60fps setting try await exportSession.export(to: outputURL, as: .mp4) } I've created a bare bones sample project that shows this exact bug in action. The resulting video is 60fps during playback, but only 30fps during the export. https://github.com/zaidbren/SimpleEditor My Question: Why is AVAssetExportSession ignoring my 60 FPS frameDuration and defaulting to 30 FPS, even though AVPlayer respects it?
1
0
385
Oct ’25
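If AVAssetExportSession keeps resampling the export above to 30 fps, one alternative worth trying (a sketch; audio and most error handling are omitted, and I cannot confirm whether the export session itself can be coaxed into 60 fps) is to drive the export manually with AVAssetReader and AVAssetWriter. The reader's video-composition output runs the custom compositor at the composition's frameDuration, and the writer encodes every frame it is handed:

import AVFoundation

func exportManually(composition: AVComposition,
                    videoComposition: AVVideoComposition,
                    to outputURL: URL) async throws {
    let reader = try AVAssetReader(asset: composition)
    let videoTracks = try await composition.loadTracks(withMediaType: .video)
    let readerOutput = AVAssetReaderVideoCompositionOutput(
        videoTracks: videoTracks,
        videoSettings: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA])
    readerOutput.videoComposition = videoComposition
    reader.add(readerOutput)

    try? FileManager.default.removeItem(at: outputURL)
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)
    let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(videoComposition.renderSize.width),
        AVVideoHeightKey: Int(videoComposition.renderSize.height)
    ])
    writerInput.expectsMediaDataInRealTime = false
    writer.add(writerInput)

    guard reader.startReading() else { throw reader.error ?? NSError(domain: "ManualExport", code: -1) }
    guard writer.startWriting() else { throw writer.error ?? NSError(domain: "ManualExport", code: -2) }
    writer.startSession(atSourceTime: .zero)

    let queue = DispatchQueue(label: "manual.export")
    await withCheckedContinuation { (done: CheckedContinuation<Void, Never>) in
        writerInput.requestMediaDataWhenReady(on: queue) {
            while writerInput.isReadyForMoreMediaData {
                guard let sample = readerOutput.copyNextSampleBuffer(),
                      writerInput.append(sample) else {
                    // Reader ran out of samples (or the writer refused one): finish up.
                    writerInput.markAsFinished()
                    done.resume()
                    return
                }
            }
        }
    }
    await writer.finishWriting()
    if reader.status == .failed { throw reader.error ?? NSError(domain: "ManualExport", code: -3) }
}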
AVFoundation Custom Video Compositor Skipping Frames During AVPlayer Playback Despite 60 FPS Frame Duration
I'm building a Swift video editor with AVFoundation and a custom compositor. Despite setting AVVideoComposition.frameDuration to 60 FPS, I'm seeing significant frame skipping during playback. Console Output Shows Frame Skipping Frame #0 at 0.0 ms (fps: 60.0) Frame #2 at 33.333333333333336 ms (fps: 60.0) Frame #6 at 100.0 ms (fps: 60.0) Frame #10 at 166.66666666666666 ms (fps: 60.0) Frame #32 at 533.3333333333334 ms (fps: 60.0) Frame #62 at 1033.3333333333335 ms (fps: 60.0) Frame #96 at 1600.0 ms (fps: 60.0) Instead of frames every ~16.67ms (60 FPS), I'm getting irregular intervals, sometimes 33ms, 67ms, or hundreds of milliseconds apart. Renderer.swift (Key Parts) @MainActor class Renderer: ObservableObject { @Published var playerItem: AVPlayerItem? private let assetManager: ProjectAssetManager? private let compositorId: String func buildComposition() async { // ... load mouse moves/clicks data ... let composition = AVMutableComposition() let videoTrack = composition.addMutableTrack( withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid ) var currentTime = CMTime.zero var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = [] // Insert video segments for videoURL in videoURLs { let asset = AVAsset(url: videoURL) let tracks = try await asset.loadTracks(withMediaType: .video) let assetVideoTrack = tracks.first let duration = try await asset.load(.duration) try videoTrack.insertTimeRange( CMTimeRange(start: .zero, duration: duration), of: assetVideoTrack, at: currentTime ) let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack) let transform = try await assetVideoTrack.load(.preferredTransform) layerInstruction.setTransform(transform, at: currentTime) layerInstructions.append(layerInstruction) currentTime = CMTimeAdd(currentTime, duration) } let videoComposition = AVMutableVideoComposition() videoComposition.frameDuration = CMTime(value: 1, timescale: 60) // 60 FPS // Set render size from first video if let firstURL = videoURLs.first { let firstAsset = AVAsset(url: firstURL) let firstTrack = try await firstAsset.loadTracks(withMediaType: .video).first let naturalSize = try await firstTrack.load(.naturalSize) let transform = try await firstTrack.load(.preferredTransform) videoComposition.renderSize = CGSize( width: abs(naturalSize.applying(transform).width), height: abs(naturalSize.applying(transform).height) ) } let instruction = CompositorInstruction() instruction.timeRange = CMTimeRange(start: .zero, duration: currentTime) instruction.layerInstructions = layerInstructions instruction.compositorId = compositorId videoComposition.instructions = [instruction] videoComposition.customVideoCompositorClass = CustomVideoCompositor.self let playerItem = AVPlayerItem(asset: composition) playerItem.videoComposition = videoComposition self.playerItem = playerItem } } class CompositorInstruction: NSObject, AVVideoCompositionInstructionProtocol { var timeRange: CMTimeRange = .zero var enablePostProcessing: Bool = false var containsTweening: Bool = false var requiredSourceTrackIDs: [NSValue]? var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid var layerInstructions: [AVVideoCompositionLayerInstruction] = [] var compositorId: String = "" } class CustomVideoCompositor: NSObject, AVVideoCompositing { var sourcePixelBufferAttributes: [String : Any]? 
= [ kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA) ] var requiredPixelBufferAttributesForRenderContext: [String : Any] = [ kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA) ] func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {} func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) { guard let sourceTrackID = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value, let sourcePixelBuffer = asyncVideoCompositionRequest.sourceFrame(byTrackID: sourceTrackID), let outputBuffer = asyncVideoCompositionRequest.renderContext.newPixelBuffer() else { asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoCompositor", code: -1)) return } let videoComposition = asyncVideoCompositionRequest.renderContext.videoComposition let frameDuration = videoComposition.frameDuration let fps = Double(frameDuration.timescale) / Double(frameDuration.value) let compositionTime = asyncVideoCompositionRequest.compositionTime let seconds = CMTimeGetSeconds(compositionTime) let frameInMilliseconds = seconds * 1000 let frameNumber = Int(round(seconds * fps)) print("Frame #\(frameNumber) at \(frameInMilliseconds) ms (fps: \(fps))") asyncVideoCompositionRequest.finish(withComposedVideoFrame: outputBuffer) } func cancelAllPendingVideoCompositionRequests() {} } VideoPlayerViewModel @MainActor class VideoPlayerViewModel: ObservableObject { let player = AVPlayer() private let renderer: Renderer func loadVideo() async { await renderer.buildComposition() if let playerItem = renderer.playerItem { player.replaceCurrentItem(with: playerItem) } } } What I've Tried Frame skipping is consistent—exact same timestamps on every playback Issue persists even with minimal processing (just passing through buffers) Occurs regardless of compositor complexity Please note that I need every frame at exact millisecond intervals for my application. Frame loss or inconsistent frameInMillisecond values are not acceptable.
1
0
310
Oct ’25
Mute behavior of Volume button on AVPlayerViewController iOS 26
On older iOS versions, when the user taps the Mute/Volume button on AVPlayerViewController to unmute, the system restores the device volume to the level it had before the user muted. On iOS 26, when the user taps the on-screen unmute button, the volume starts from 0 instead of being restored (it is still restored if the user unmutes with the physical volume buttons). As I understand it, the volume bar/button on AVPlayerViewController is an MPVolumeView, which I cannot control, so this appears to be system behavior. But I have received complaints that it is a bug, and I could not find documentation describing this change in the Mute button behavior. I need some basis for explaining this situation. Thank you.
0
1
167
Oct ’25