Dive into the world of video on Apple platforms, exploring ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

iOS AVPlayer Subtitles / Captions
As of iOS 18, as far as I can tell, there is still no AVPlayer option that lets users toggle the caption / subtitle track on and off. Does anyone know of a way to do this with AVPlayer or with SwiftUI's VideoPlayer? The following code reproduces the issue; it can be pasted into an app playground. The video and .vtt file are random ones I found on the internet.

import SwiftUI
import AVKit
import UIKit

struct ContentView: View {
    private let video = URL(string: "https://server15700.contentdm.oclc.org/dmwebservices/index.php?q=dmGetStreamingFile/p15700coll2/15.mp4/byte/json")!
    private let captions = URL(string: "https://gist.githubusercontent.com/samdutton/ca37f3adaf4e23679957b8083e061177/raw/e19399fbccbc069a2af4266e5120ae6bad62699a/sample.vtt")!
    @State private var player: AVPlayer?

    var body: some View {
        VStack {
            VideoPlayerView(player: player)
                .frame(maxWidth: .infinity, maxHeight: 200)
        }
        .task {
            // Captions won't work for some reason
            player = try? await loadPlayer(video: video, captions: captions)
        }
    }
}

private struct VideoPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer?

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.player = player
        controller.modalPresentationStyle = .overFullScreen
        return controller
    }

    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
        uiViewController.player = player
    }
}

private func loadPlayer(video: URL, captions: URL?) async throws -> AVPlayer {
    let videoAsset = AVURLAsset(url: video)
    let videoPlusSubtitles = AVMutableComposition()
    try await videoPlusSubtitles.add(videoAsset, withMediaType: .video)
    try await videoPlusSubtitles.add(videoAsset, withMediaType: .audio)
    if let captions {
        let captionAsset = AVURLAsset(url: captions)
        // Must add as .text. .closedCaption and .subtitle don't work?
        try await videoPlusSubtitles.add(captionAsset, withMediaType: .text)
    }
    return await AVPlayer(playerItem: AVPlayerItem(asset: videoPlusSubtitles))
}

private extension AVMutableComposition {
    func add(_ asset: AVAsset, withMediaType mediaType: AVMediaType) async throws {
        let duration = try await asset.load(.duration)
        try await asset.loadTracks(withMediaType: mediaType).first.map { track in
            let newTrack = self.addMutableTrack(withMediaType: mediaType, preferredTrackID: kCMPersistentTrackID_Invalid)
            let range = CMTimeRangeMake(start: .zero, duration: duration)
            try newTrack?.insertTimeRange(range, of: track, at: .zero)
        }
    }
}
Replies: 0 · Boosts: 0 · Views: 93 · Activity: Apr ’25
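A note on the caption question above: the system captions button in AVPlayerViewController only appears when the player item exposes a legible media selection group (typically an HLS stream with subtitle renditions); tracks merged in via AVMutableComposition generally don't show up there. A minimal sketch of the selection-group route, assuming a playerItem whose asset does carry a legible group:

import AVFoundation

// Hedged sketch: toggle subtitles through the legible media selection group, if the asset has one.
// Composition-added .text tracks usually won't appear in this group.
func setCaptions(_ enabled: Bool, for playerItem: AVPlayerItem) async throws {
    guard let group = try await playerItem.asset.loadMediaSelectionGroup(for: .legible) else {
        return // no subtitle/caption renditions exposed by this asset
    }
    if enabled {
        playerItem.select(group.options.first, in: group)
    } else {
        playerItem.select(nil, in: group)
    }
}

If the content has to stay a progressive MP4 with a sidecar WebVTT, the usual workaround is to repackage it as HLS with a subtitles rendition so that such a group exists.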
How to correctly set up AVSampleBufferDisplayLayer
How can I correctly set up AVSampleBufferDisplayLayer for video display when my input picture format is kCVPixelFormatType_32BGRA? Currently the video is visible in the simulator, but not on an iPhone. Am I missing something? Render code:

var pixelBuffer: CVPixelBuffer?
let attrs: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey as String: width,
    kCVPixelBufferHeightKey as String: height,
    kCVPixelBufferBytesPerRowAlignmentKey as String: width * 4,
    kCVPixelBufferIOSurfacePropertiesKey as String: [:]
]
let status = CVPixelBufferCreateWithBytes(
    nil,
    width,
    height,
    kCVPixelFormatType_32BGRA,
    img,
    width * 4,
    nil,
    nil,
    attrs as CFDictionary,
    &pixelBuffer
)
guard status == kCVReturnSuccess, let pb = pixelBuffer else { return }

var formatDesc: CMVideoFormatDescription?
CMVideoFormatDescriptionCreateForImageBuffer(
    allocator: nil,
    imageBuffer: pb,
    formatDescriptionOut: &formatDesc
)
guard let format = formatDesc else { return }

var timingInfo = CMSampleTimingInfo(
    duration: .invalid,
    presentationTimeStamp: currentTime,
    decodeTimeStamp: .invalid
)
var sampleBuffer: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(
    allocator: kCFAllocatorDefault,
    imageBuffer: pb,
    dataReady: true,
    makeDataReadyCallback: nil,
    refcon: nil,
    formatDescription: format,
    sampleTiming: &timingInfo,
    sampleBufferOut: &sampleBuffer
)
if let sb = sampleBuffer {
    if CMSampleBufferGetPresentationTimeStamp(sb) == .invalid {
        print("Invalid video timestamp")
    }
    if displayLayer.status == .failed {
        displayLayer.flush()
    }
    DispatchQueue.main.async { [weak self] in
        guard let self = self else {
            print("Lost reference to self drawing")
            return
        }
        displayLayer.enqueue(sb)
    }
    frameIndex += 1
}
Replies: 0 · Boosts: 0 · Views: 104 · Activity: Apr ’25
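One thing worth checking for the post above: CVPixelBufferCreateWithBytes wraps caller-owned memory, and the resulting buffer is not IOSurface-backed, which AVSampleBufferDisplayLayer tends to require on real hardware (the simulator is more forgiving). A hedged sketch of allocating an IOSurface-backed buffer and copying the BGRA bytes into it, reusing the width, height, and img variables from the question (img is assumed to be a raw pointer to tightly packed BGRA rows):

import Foundation
import CoreVideo

// Hedged sketch: allocate an IOSurface-backed CVPixelBuffer and copy the BGRA frame into it.
var pixelBuffer: CVPixelBuffer?
let attrs: [CFString: Any] = [kCVPixelBufferIOSurfacePropertiesKey: [:]]
guard CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                          attrs as CFDictionary, &pixelBuffer) == kCVReturnSuccess,
      let pb = pixelBuffer else { return }

CVPixelBufferLockBaseAddress(pb, [])
let dst = CVPixelBufferGetBaseAddress(pb)!
let dstStride = CVPixelBufferGetBytesPerRow(pb) // may be larger than width * 4
for row in 0..<height {
    // Copy row by row because the destination row stride may differ from the source stride.
    memcpy(dst + row * dstStride, img + row * width * 4, width * 4)
}
CVPixelBufferUnlockBaseAddress(pb, [])
// Build the format description and CMSampleBuffer from `pb` exactly as in the original code.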
AirDropped Videos from Photos Save to Files Instead of Photos on Receiving Device
My app allows users to capture and save videos to the Photos app using the following Swift code:

PHPhotoLibrary.shared().performChanges {
    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
} completionHandler: { success, error in

Videos are successfully saved to Photos and play correctly. However, users report that when they AirDrop these videos from the Photos app to another device (e.g., iPad to iPhone), the videos are saved in the Files app on the receiving device instead of the Photos app. This issue is more common with higher-resolution videos, such as 2K, recorded in HEVC format at 30 fps. I wasn't able to reproduce the issue locally. I've found a thread on the public Apple forums: https://discussions.apple.com/thread/255276865?sortBy=rank but I wonder whether there are some special flags I should clear or add to my videos (e.g., via PHAssetChangeRequest)? Thank you!
Replies: 0 · Boosts: 0 · Views: 87 · Activity: May ’25
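For the AirDrop question above, there is no documented PHAssetChangeRequest flag that controls how the receiving device files the item; the outcome mostly depends on the receiver recognizing the payload as a camera-compatible video. One thing that could be tried (an assumption, not a documented fix) is saving through PHAssetCreationRequest with explicit resource options, so the asset carries a conventional video filename:

import Photos

// Hedged sketch: save via PHAssetCreationRequest with explicit resource options.
// `fileURL` is the same local video URL used in the post; the filename below is illustrative.
PHPhotoLibrary.shared().performChanges({
    let request = PHAssetCreationRequest.forAsset()
    let options = PHAssetResourceCreationOptions()
    options.originalFilename = "recording.mov"   // hypothetical name; keep the extension correct for the container
    request.addResource(with: .video, fileURL: fileURL, options: options)
}) { success, error in
    print("saved:", success, error ?? "")
}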
Can VideoToolbox properly decode HEVC bitstreams when a single frame is split into multiple slice NALUs?
I am currently developing an HEVC player using VideoToolbox on an iOS device. I have successfully created an HEVC decoder that receives HEVC streams from our custom image capture and encoding device, and it can decode and display images properly. However, when my image capture and encoding device configures the encoder to output HEVC streams with fragmented NALUs (i.e., an I-frame or P-frame is split and stored across multiple slice NALUs), the iOS decoder can be initialized successfully but fails to decode and output images.

Can VideoToolbox properly decode HEVC bitstreams when a single frame is split into multiple slice NALUs?

Key Observations:
1. Single-NALU frames work fine.
2. Multi-NALU frames (sliced I/P-frames) cause decoding failure.
3. The decoder session is created successfully (VTDecompressionSessionCreate returns no error).
Replies: 0 · Boosts: 0 · Views: 121 · Activity: May ’25
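Regarding the multi-slice question above: VideoToolbox can decode such streams, but all slice NALUs belonging to one access unit generally need to be delivered in a single CMSampleBuffer, each NALU carrying the length-prefix layout declared in the format description (4-byte lengths below are an assumption, matching a typical hvcC configuration). A sketch of packaging one frame's slices together:

import CoreMedia

// Hedged sketch: put every slice NALU of one frame into a single CMSampleBuffer,
// each prefixed with a 4-byte big-endian length.
func makeSampleBuffer(slices: [Data], format: CMVideoFormatDescription, pts: CMTime) -> CMSampleBuffer? {
    var payload = Data()
    for nalu in slices {
        var length = UInt32(nalu.count).bigEndian
        withUnsafeBytes(of: &length) { payload.append(contentsOf: $0) }
        payload.append(nalu)
    }

    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault, memoryBlock: nil, blockLength: payload.count,
            blockAllocator: nil, customBlockSource: nil, offsetToData: 0,
            dataLength: payload.count, flags: 0, blockBufferOut: &blockBuffer) == kCMBlockBufferNoErr,
          let bb = blockBuffer else { return nil }
    payload.withUnsafeBytes { raw in
        _ = CMBlockBufferReplaceDataBytes(with: raw.baseAddress!, blockBuffer: bb,
                                          offsetIntoDestination: 0, dataLength: payload.count)
    }

    var timing = CMSampleTimingInfo(duration: .invalid, presentationTimeStamp: pts, decodeTimeStamp: .invalid)
    var size = payload.count
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateReady(allocator: kCFAllocatorDefault, dataBuffer: bb,
                              formatDescription: format, sampleCount: 1,
                              sampleTimingEntryCount: 1, sampleTimingArray: &timing,
                              sampleSizeEntryCount: 1, sampleSizeArray: &size,
                              sampleBufferOut: &sampleBuffer)
    return sampleBuffer
}

The decode call then receives one sample buffer per frame, regardless of how many slices the encoder produced.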
How to enumerate built-in media devices?
Hello, I need to enumerate built-in media devices (cameras, microphones, etc.). For this purpose, I am using the CoreAudio and CoreMediaIO frameworks. According to the table 'Daemon-Safe Frameworks' in Apple’s TN2083, CoreAudio is daemon-safe. However, the documentation does not mention CoreMediaIO. Can CoreMediaIO be used in a daemon? If not, are there any documented alternatives to detect built-in cameras in a daemon (e.g., via device classes in IOKit)? Thank you in advance, Pavel
Replies: 0 · Boosts: 0 · Views: 87 · Activity: May ’25
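I can't say whether CoreMediaIO is daemon-safe (TN2083 indeed doesn't list it), but for comparison, the ordinary user-space way to enumerate built-in cameras and microphones is a discovery session; whether this works from a daemon with no user session and no TCC prompt is exactly the open question. A sketch, purely to show the API surface:

import AVFoundation

// Hedged sketch: enumerate built-in capture devices from user space.
// Daemon-safety of AVFoundation/CoreMediaIO for this purpose is not established here.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInMicrophone],
    mediaType: nil,
    position: .unspecified
)
for device in discovery.devices {
    print(device.localizedName, device.uniqueID, device.modelID)
}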
Option to Set Default Interpolation for Keyframes to “Linear” in Final Cut Pro
In Final Cut Pro, keyframes for transform parameters (such as Position, Scale, and Rotation) are automatically set to “Smooth” interpolation. This often results in undesired easing between keyframes, especially when linear motion is required. Currently, we have to manually adjust each keyframe to "Linear" using the Video Animation Editor, which can be time-consuming when working with many keyframes. Would it be possible to add an option to set the default keyframe interpolation to "Linear"—either globally in Preferences or per parameter in the Inspector? This would greatly streamline the animation workflow for many editors. Thank you for considering this request!
Replies: 0 · Boosts: 0 · Views: 95 · Activity: Jun ’25
VTDecompressionSession consistently drops frames after sync frame
I'm seeking to a specific sync frame in a video file (HEVC, recorded on an iPad). When I feed the buffers from that sync frame onward to VTDecompressionSession, it consistently drops the 2nd, 3rd, and 4th buffers with a kVTVideoDecoderReferenceMissingErr (or no error but no output buffer on the simulator). If I feed all the buffers starting from the penultimate sync frame prior to the desired frame, the buffers come out fine, but always doing that would create massive overhead. I've tried multiple OS versions, devices, etc.; it seems to be a consistent problem. Here's a sample project with the offending video (disregard memory handling etc.): https://github.com/marcuseckert/vtSample I've filed a radar, FB18228296, but would appreciate any feedback on circumventing or at least detecting this behavior prior to decoding.
Replies: 0 · Boosts: 0 · Views: 98 · Activity: Jun ’25
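For the decode issue above, one way to at least detect the behavior before committing to the expensive re-feed is to check the per-frame status in the output handler; kVTVideoDecoderReferenceMissingErr is reported there rather than as the return value of the decode call. A small sketch, assuming an existing session and sample buffer as in the linked project:

import VideoToolbox

// Hedged sketch: run a synchronous decode and inspect the per-frame status in the output handler.
var needsRefeedFromPriorSyncFrame = false
VTDecompressionSessionDecodeFrame(
    session,
    sampleBuffer: sampleBuffer,
    flags: [],                  // synchronous decode, so the handler runs before this call returns
    infoFlagsOut: nil
) { status, _, imageBuffer, _, _ in
    if status == kVTVideoDecoderReferenceMissingErr || imageBuffer == nil {
        // The caller can then rewind to the prior sync sample and decode forward only when needed.
        needsRefeedFromPriorSyncFrame = true
    }
}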
AsyncPublisher for AVPlayerItem not working
Hello, I'm trying to subscribe to AVPlayerItem status updates using Combine and its bridge to Swift Concurrency, .values. This is my sample code.

struct ContentView: View {
    @State var player: AVPlayer?
    @State var loaded = false

    var body: some View {
        VStack {
            if let player {
                Text("loading status: \(loaded)")
                Spacer()
                VideoPlayer(player: player)
                Button("Load") {
                    Task {
                        let item = AVPlayerItem(
                            url: URL(string: "https://sample-videos.com/video321/mp4/360/big_buck_bunny_360p_5mb.mp4")!
                        )
                        player.replaceCurrentItem(with: item)
                        let publisher = player.publisher(for: \.status)
                        for await status in publisher.values {
                            print(status.rawValue)
                            if status == .readyToPlay {
                                loaded = true
                                break
                            }
                        }
                        print("we are out")
                    }
                }
            } else {
                Text("No video selected")
            }
        }
        .task {
            player = AVPlayer()
        }
    }
}

After I click the "Load" button it prints out 0 (the initial status of .unknown) and nothing after that, even when the video is fully loaded. At the same time, this works as expected (loading status is set to true):

struct ContentView: View {
    @State var player: AVPlayer?
    @State var loaded = false
    @State var cancellable: AnyCancellable?

    var body: some View {
        VStack {
            if let player {
                Text("loading status: \(loaded)")
                Spacer()
                VideoPlayer(player: player)
                Button("Load") {
                    Task {
                        let item = AVPlayerItem(
                            url: URL(string: "https://sample-videos.com/video321/mp4/360/big_buck_bunny_360p_5mb.mp4")!
                        )
                        player.replaceCurrentItem(with: item)
                        let stream = AsyncStream { continuation in
                            cancellable = item.publisher(for: \.status)
                                .sink {
                                    if $0 == .readyToPlay {
                                        continuation.yield($0)
                                        continuation.finish()
                                    }
                                }
                        }
                        for await _ in stream {
                            loaded = true
                            cancellable?.cancel()
                            cancellable = nil
                            break
                        }
                    }
                }
            } else {
                Text("No video selected")
            }
        }
        .task {
            player = AVPlayer()
        }
    }
}

Is this a bug or something?
Replies: 0 · Boosts: 0 · Views: 104 · Activity: Jun ’25
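One detail in the first snippet above that may explain the difference: it observes player.publisher(for: \.status), i.e. the AVPlayer's own status, while the working snippet observes the item's status. AVPlayer.status and AVPlayerItem.status are different properties, and it is the item that transitions to .readyToPlay once media is loaded. A minimal adjustment to the first snippet, with the same surrounding code assumed:

// Hedged sketch: observe the AVPlayerItem's status instead of the AVPlayer's.
let item = AVPlayerItem(
    url: URL(string: "https://sample-videos.com/video321/mp4/360/big_buck_bunny_360p_5mb.mp4")!
)
player.replaceCurrentItem(with: item)
for await status in item.publisher(for: \.status).values {
    print(status.rawValue)
    if status == .readyToPlay {
        loaded = true
        break
    }
}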
Is it possible to query the TV's current refresh rate?
I'm working on a media app that would like to be able to tell whether the TV connected to the Apple TV is running at 59.94 Hz or 60.00 Hz, so it can optimize a video stream. It looks like the best I can currently do is check whether the user has Match Content Rate enabled, and based on that, when calling displayManager.preferredDisplayCriteria to change video modes, guess which rate their TV might be in. That's not ideal, because not all TVs support both of these rates, and my request for 59.94 might end up as 60 and vice versa. I dug around and can't find any available method in UIScreen to get this info. The odd thing is, the data is right there in currentMode when I look in the debugger, but it seems to be in a private or undocumented class. Is there any way to get at it?
Replies: 0 · Boosts: 0 · Views: 201 · Activity: Jun ’25
FCPX Workflow Extension: Drag & Drop FCPXML with Titles to Timeline Not Working Despite Following File Promise Guidelines
I'm developing a Final Cut Pro X workflow extension that transcribes audio and creates a text output. I need to allow users to drag this text directly from my extension into FCPX's timeline as titles.

Current Implementation:
- Using NSFilePromiseProvider as per Apple's guidelines for drag and drop
- Generating valid FCPXML (v1.10) with proper structure:
  - Complete resources section with format and asset references
  - Event and project hierarchy
  - Asset clip with connected title elements
  - Proper timing and duration calculations
- Supporting multiple pasteboard types:
  - com.apple.finalcutpro.xml.v1-10
  - com.apple.finalcutpro.xml.v1-9
  - com.apple.finalcutpro.xml

What's Working:
- Drag operation initiates correctly
- File promise provider is set up properly
- FCPXML generation is successful (verified content)
- All required pasteboard types are registered
- Proper logging confirms data is being requested and provided

Current Pasteboard Types Offered:
- com.apple.NSFilePromiseItemMetaData
- com.apple.pasteboard.promised-file-name
- com.apple.pasteboard.promised-suggested-file-name
- com.apple.pasteboard.promised-file-content-type
- Apple files promise pasteboard type
- com.apple.pasteboard.NSFilePromiseID
- com.apple.pasteboard.promised-file-url
- com.apple.finalcutpro.xml.v1-10
- com.apple.finalcutpro.xml.v1-9
- com.apple.finalcutpro.xml

What additional requirements or considerations are needed to make FCPX accept the dragged FCPXML content? Are there specific requirements for workflow extensions regarding drag and drop operations with titles that aren't documented? Any insights, especially from those who have implemented similar functionality in FCPX workflow extensions, would be greatly appreciated.

Technical Details:
- macOS Version: 15.5 (24F74)
- FCPX Version: 11.1.1
- Extension built with SwiftUI and AppKit integration
- Using NSFilePromiseProvider and NSPasteboardItemDataProvider
- Full pasteboard type support for FCPXML versions
Replies: 0 · Boosts: 0 · Views: 160 · Activity: Jul ’25
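Not an authoritative answer to the post above, but one variation worth trying alongside the file promise is to also provide the FCPXML bytes directly on the dragging pasteboard item under Final Cut Pro's custom types (the same identifiers listed in the post), so the timeline can consume the XML without resolving the promise. A sketch, with fcpxml standing in for the extension's own generated string:

import AppKit

// Hedged sketch: offer FCPXML data directly on the drag item in addition to the file promise.
let fcpTypes = [
    NSPasteboard.PasteboardType("com.apple.finalcutpro.xml.v1-10"),
    NSPasteboard.PasteboardType("com.apple.finalcutpro.xml.v1-9"),
    NSPasteboard.PasteboardType("com.apple.finalcutpro.xml")
]
let item = NSPasteboardItem()
for type in fcpTypes {
    _ = item.setData(Data(fcpxml.utf8), forType: type)
}
let draggingItem = NSDraggingItem(pasteboardWriter: item)
// Begin the session from the view that initiates the drag, e.g.:
// view.beginDraggingSession(with: [draggingItem], event: mouseDownEvent, source: dragSource)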
Final Cut Pro Workflow Extension Drag and Drop to Timeline Not Working Despite Proper FCPXML Clip Structure
Our Final Cut Pro workflow extension, built with the ProExtensionHost framework, uses an NSPasteboardItemDataProvider system with multi-version FCPXML support (1.9, 1.10, 1.13) and proper relative path UIDs for Motion templates. We've implemented a clip wrapper approach with placeholder assets and elements containing effects to enable direct timeline drag functionality. However, drag and drop from our workflow extension directly to the timeline is still not working despite proper element structure in our FCPXML. Our implementation creates valid clip elements with effects applied, but the Final Cut Pro timeline doesn't accept them during drag operations.

Steps to Reproduce:
1. Create a Final Cut Pro workflow extension using the ProExtensionHost framework with an NSPasteboardItemDataProvider implementation
2. Generate FCPXML with proper element structure:

Expected Result: The clip should be accepted by the timeline and the effect applied from the workflow extension
Actual Result: The timeline rejects the drag operation from the ProExtensionHost-based workflow extension

Question: Are there additional requirements or ProExtensionHost API calls needed, beyond standard NSPasteboardItemDataProvider, for Final Cut Pro workflow extension timeline drag functionality?
Replies: 0 · Boosts: 0 · Views: 144 · Activity: Jul ’25
iOS 18 Picture-in-Picture dynamically changes UIScreen.main.bounds on iPhone 11 causing keyboard clipping and layout issues
When creating and activating AVPictureInPictureController on an iPhone 11 running iOS 18.x, the main screen viewport unexpectedly changes from the native 375×812 logical points to 414×896 points (the size of a different device such as the iPhone XR/11 Plus). Additionally, a separate UIWindow with a zero CGRect is created. This causes UIKit layout calculations, especially those related to the keyboard and safeAreaInsets, to malfunction, resulting in issues like keyboard clipping and general UI misalignment. The problem appears tied specifically to iOS 18+ behavior and does not manifest consistently on earlier iOS versions or other devices.

Steps to Reproduce:
1. Run the app on an iPhone 11 with iOS 18.x (including the latest minor updates).
2. Instantiate and start an AVPictureInPictureController during app runtime.
3. Observe that UIScreen.main.bounds changes from 375×812 (native iPhone 11 size) to 414×896.
4. Notice a secondary UIWindow appears with frame CGRectZero.
5. Interact with text inputs that cause the keyboard to appear.
6. The keyboard is clipped or partially obscured, causing UI usability issues.
7. Layout elements dependent on UIScreen.main.bounds or safeAreaInsets behave incorrectly.

Expected Behavior:
- UIScreen.main.bounds should not change dynamically upon PiP controller creation.
- The coordinate space and viewport should remain accurate to the device's native dimensions.
- The keyboard and UI layout should adjust properly with respect to the actual on-screen size and safe areas.

Actual Behavior:
- UIScreen.main.bounds changes to 414×896 upon PiP controller creation on iPhone 11 with iOS 18+.
- A zero-rect secondary UIWindow is created, impacting layout queries.
- Keyboard clipping and UI layout bugs occur.
- The app viewport and coordinate space are inconsistent with the device hardware.
Replies: 0 · Boosts: 0 · Views: 196 · Activity: Jul ’25
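Not a fix for the underlying iOS 18 behavior described above, but a common mitigation is to stop reading UIScreen.main.bounds (deprecated in recent SDKs) and take geometry from the view's own window and scene, which keeps reporting the hosting display even when PiP spawns extra windows. A hedged sketch:

import UIKit

// Hedged sketch: scene/window-relative geometry instead of UIScreen.main.
extension UIViewController {
    var sceneBounds: CGRect {
        view.window?.windowScene?.screen.bounds ?? view.bounds
    }
    var windowSafeAreaInsets: UIEdgeInsets {
        view.window?.safeAreaInsets ?? view.safeAreaInsets
    }
}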
UIViewController + AVPlayerLayer + AVPlayer cannot AirPlay to HomePod on tvOS 18.5, but AVPlayerViewController + AVPlayer is ok
1. Devices
- Apple TV HD, tvOS 18.5 (22L572), Model A1625
- HomePod (1st generation), OS 18.5 (22L572)

2. Test cases
2.1 UIViewController + AVPlayerLayer + AVPlayer
Result: Cannot AirPlay to HomePod
Sample code for UIViewController + AVPlayer

2.2 AVPlayerViewController + AVPlayer
Result: Can AirPlay to HomePod
Sample code for AVPlayerViewController + AVPlayer
Replies: 0 · Boosts: 0 · Views: 225 · Activity: Aug ’25
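One difference between the two cases above, offered as an assumption rather than a confirmed cause: AVPlayerViewController configures audio routing and external playback for you, while a bare AVPlayerLayer setup leaves that to the app. It may be worth setting a long-form route-sharing policy before playback and confirming external playback is allowed:

import AVFoundation

// Hedged sketch: explicit audio session / external playback configuration for the
// UIViewController + AVPlayerLayer case. Whether this restores HomePod AirPlay on tvOS 18.5 is untested here.
do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .moviePlayback,
                                                    policy: .longFormVideo, options: [])
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print("Audio session setup failed:", error)
}
player.allowsExternalPlayback = true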
Save slow motion video with custom playback speed bar
Background: For iOS, I have built a custom video recording app that records video at the highest possible FPS (supposed to be around 240 FPS, but more like ~200 in practice). When I save this video to the user's camera roll, I notice a dynamic playback speed bar. This bar speeds up or slows down the video. My question: How can I save my video such that this playback speed bar is constantly slow, or plays at real-time speed? For reference, I have included the playback speed bar that I am talking about in the screenshot; you can find it in the Photos app when you record slow-motion video.
Replies: 0 · Boosts: 0 · Views: 197 · Activity: Aug ’25
Sudden brightness/contrast shifts in videos across multiple apps on iPhone
Hi, I’ve been experiencing a strange issue with video playback on my iPhone. While watching videos, the image will suddenly shift: it becomes more greyish, sometimes briefly goes black, and then returns to normal bright quality. This can happen multiple times during a single video. It is not limited to the Photos app. I’ve seen it happen:
- In the Photos app when playing videos I recorded myself
- In Snapchat when watching videos sent by others
- Occasionally in other social media apps as well

Additional details:
- HDR Video is turned off
- Apple ProRes is turned off
- Tried both 4K 60fps and 4K 30fps
- Camera format set to “Most Compatible”
- Low Power Mode is off
- Issue happens whether the phone is cool or warm
- Doesn’t seem related to the video file itself; the same file exported to another device looks fine all the way through
- These exact same videos play back completely normally on my iPad, with no brightness or contrast shifts at all
- I’m currently on the iOS 17 public beta, but this issue was happening before I installed the beta as well, so it’s not beta-specific

It almost feels like the display or system is switching between different brightness/contrast profiles mid-playback, regardless of the app. Has anyone else experienced this, and is there a way to disable this behavior so the brightness and color stay consistent during video playback? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 233 · Activity: Aug ’25
Issue with Airplay for DRM videos
When I try to send a DRM-protected video via AirPlay to an Apple TV, the license request is made twice instead of once, as it normally is on iOS. We only allow one request per session for security reasons, so the second request fails and the video won't play. We've tested DRM-protected videos without token usage limits and it works, but this creates a security hole in our system. Why is the license requested twice in func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest)? Is there a way to prevent this?
Replies: 0 · Boosts: 0 · Views: 215 · Activity: Aug ’25
Threading guarantees with AVCaptureVideoDataOutput
I'm writing some camera functionality that uses AVCaptureVideoDataOutput. I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput. My question is: if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread? I have a more practical example below, showing how I am accessing something owned by the foreground thread from the background thread, but I wonder when/how it's safe to clean up that resource. I have a setup similar to the following:

// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("avf_camera_queue", nullptr);
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...

// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...;

MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];
// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];

AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
[captureSession addOutput:captureVideoDataOutput];

[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need a manual barrier?

And then in MySampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
Replies: 0 · Boosts: 0 · Views: 332 · Activity: Sep ’25
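On the question above: stopRunning is not documented to wait for delegate callbacks already enqueued on the sample-buffer queue, so a conservative pattern is to drain that queue explicitly before tearing down anything the callbacks reference. A sketch of the barrier idea, written in Swift for brevity (the post's code is Objective-C++, but the queue semantics are the same):

import AVFoundation

// Hedged sketch: drain the delegate queue after stopping the session before releasing shared state.
// `captureSession` and `frameMetaData` correspond to the objects in the post.
let sampleBufferQueue = DispatchQueue(label: "avf_camera_queue")
// ... configure AVCaptureVideoDataOutput with setSampleBufferDelegate(_:queue:) on sampleBufferQueue ...

captureSession.stopRunning()
sampleBufferQueue.sync {
    // Every captureOutput(_:didOutput:from:) call submitted before this point has now finished,
    // so nothing on the background queue still references frameMetaData.
}
// Under that assumption, it should now be safe to destroy frameMetaData.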