Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic


Threading guarantees with AVCaptureVideoDataOutput
I'm writing some camera functionality that uses AVCaptureVideoDataOutput. I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by creating my own dispatch_queue and configuring the AVCaptureVideoDataOutput with it. My question: if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread? For example, does [AVCaptureSession stopRunning] imply a blocking call until all pending frame callbacks are done?

I have a more practical example below, showing how I access something owned by the foreground thread from the background thread, but I wonder when/how it's safe to clean up that resource. I have a setup similar to the following:

    // Foreground thread logic
    dispatch_queue_t queue = dispatch_queue_create("qt_avf_camera_queue", nullptr);

    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    setupInputDevice(captureSession); // Connects the AVCaptureDevice...

    // Store some arbitrary data to be attached to the frame, stored on the foreground thread
    FrameMetaData frameMetaData = ...;

    MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];

    // Capture frameMetaData by reference in lambda
    [sampleBufferDelegate setFrameMetaDataGetter:
        [&frameMetaData]() { return &frameMetaData; }];

    AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
    [captureSession addOutput:captureVideoDataOutput];

    [captureSession startRunning];
    [captureSession stopRunning];
    // Is it now safe to destroy frameMetaData, or do we need a manual barrier?

And then in MySampleBufferDelegate:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
               fromConnection:(AVCaptureConnection *)connection {
        // Invokes the callback set above
        FrameMetaData *frameMetaData = frameMetaDataGetter();
        emitSampleBuffer(sampleBuffer, frameMetaData);
    }
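A minimal sketch of one defensive answer to the "manual barrier" question (in Swift; the queue label and function names are placeholders, not from the post): stopRunning prevents new buffers from being emitted, but callbacks already dispatched to the delegate queue may still be running, so a synchronous no-op on that queue acts as the drain barrier before shared state is destroyed.

    import AVFoundation

    let sampleBufferQueue = DispatchQueue(label: "avf_camera_queue")
    // The AVCaptureVideoDataOutput is assumed to use sampleBufferQueue as its delegate queue.

    func tearDown(session: AVCaptureSession) {
        session.stopRunning()        // no new frame callbacks will be scheduled after this returns
        sampleBufferQueue.sync { }   // barrier: waits for any in-flight delegate callback to finish
        // Now it is safe to free state the delegate was reading (the frameMetaData in the question).
    }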
Replies: 1 · Boosts: 0 · Views: 345 · Activity: 2d
The behavior of AVPlayerItem.didPlayToEndTimeNotification is not as expected in iOS 26.
Hello,

Environment: macOS 15.6.1 / Xcode 26 beta 7 / iOS 26 beta 9

In a simple AVFoundation video-playback sample, I'm seeing different behavior between iOS 18 and iOS 26 regarding AVPlayerItem.didPlayToEndTimeNotification. I've attached a minimal sample below. Please replace videoURL with a valid short video URL.

Repro steps:
1. Tap "Play" to start playback and let the video finish. The AVPlayerItem.didPlayToEndTimeNotification registered with NotificationCenter should fire, and you should see "Play finished." in the console.
2. Without relaunching, tap "Play" again. This is where the issue arises.

Observed behavior:
- On iOS 18 and earlier: The video does not play again (it does not restart from the beginning), but AVPlayerItem.didPlayToEndTimeNotification is posted and "Play finished." appears in the console. The same happens every time you press "Play".
- On iOS 26: Pressing "Play" does not post AVPlayerItem.didPlayToEndTimeNotification. The code path that prints "Play finished." is never called (the callback enclosing that line is not invoked again).

Building the same program with Xcode 16.4 and running it on an iOS 26 beta device shows the same phenomenon, which suggests there has been a behavioral change for AVPlayerItem.didPlayToEndTimeNotification on iOS 26. I couldn't find any mention of this in the release notes or API Reference.

Because the semantics around AVPlayerItem.didPlayToEndTimeNotification appear to differ, we're forced to adjust our logic. If there is a way to achieve the iOS 18-style behavior on iOS 26, I would appreciate guidance. Alternatively, if this change is intentional, could you share the reasoning? Is iOS 26 the correct behavior from Apple's perspective, with the iOS 18 (and earlier) behavior considered incorrect? Any official clarification would be extremely helpful.

    import UIKit
    import AVFoundation

    final class ViewController: UIViewController {

        private let videoURL = URL(string: "https://......mp4")!

        private var player: AVPlayer?
        private var playerItem: AVPlayerItem?
        private var playerLayer: AVPlayerLayer?
        private var observeForComplete: NSObjectProtocol?

        // UI
        private let playerContainerView = UIView()
        private let playButton = UIButton(type: .system)
        private let stopButton = UIButton(type: .system)
        private let replayButton = UIButton(type: .system)

        deinit {
            if let observeForComplete {
                NotificationCenter.default.removeObserver(observeForComplete)
            }
        }

        override func viewDidLoad() {
            super.viewDidLoad()
            view.backgroundColor = .systemBackground
            setupUI()
            setupPlayer()
        }

        override func viewDidLayoutSubviews() {
            super.viewDidLayoutSubviews()
            playerLayer?.frame = playerContainerView.bounds
        }

        // MARK: - Setup

        private func setupUI() {
            playerContainerView.translatesAutoresizingMaskIntoConstraints = false
            playerContainerView.backgroundColor = .black
            view.addSubview(playerContainerView)

            // Buttons
            playButton.setTitle("Play", for: .normal)
            stopButton.setTitle("Pause", for: .normal)
            replayButton.setTitle("RePlay", for: .normal)

            [playButton, stopButton, replayButton].forEach {
                $0.titleLabel?.font = .systemFont(ofSize: 16, weight: .semibold)
                $0.translatesAutoresizingMaskIntoConstraints = false
                $0.contentEdgeInsets = UIEdgeInsets(top: 10, left: 16, bottom: 10, right: 16)
            }

            let stack = UIStackView(arrangedSubviews: [playButton, stopButton, replayButton])
            stack.axis = .horizontal
            stack.spacing = 16
            stack.alignment = .center
            stack.distribution = .equalCentering
            stack.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(stack)

            NSLayoutConstraint.activate([
                playerContainerView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 20),
                playerContainerView.leadingAnchor.constraint(equalTo: view.leadingAnchor),
                playerContainerView.trailingAnchor.constraint(equalTo: view.trailingAnchor),
                playerContainerView.heightAnchor.constraint(equalToConstant: 200),
                stack.topAnchor.constraint(equalTo: playerContainerView.bottomAnchor, constant: 20),
                stack.centerXAnchor.constraint(equalTo: view.centerXAnchor)
            ])

            // Action
            playButton.addTarget(self, action: #selector(didTapPlay), for: .touchUpInside)
            stopButton.addTarget(self, action: #selector(didTapStop), for: .touchUpInside)
            replayButton.addTarget(self, action: #selector(didTapReplayFromStart), for: .touchUpInside)
        }

        private func setupPlayer() {
            // AVURLAsset -> AVPlayerItem -> AVPlayer
            let asset = AVURLAsset(url: videoURL)
            let item = AVPlayerItem(asset: asset)
            self.playerItem = item

            let player = AVPlayer(playerItem: item)
            player.automaticallyWaitsToMinimizeStalling = true
            self.player = player

            let layer = AVPlayerLayer(player: player)
            layer.videoGravity = .resizeAspect
            playerContainerView.layer.addSublayer(layer)
            layer.frame = playerContainerView.bounds
            self.playerLayer = layer

            // Notification
            if let observeForComplete {
                NotificationCenter.default.removeObserver(observeForComplete)
            }
            if let playerItem {
                observeForComplete = NotificationCenter.default.addObserver(
                    forName: AVPlayerItem.didPlayToEndTimeNotification,
                    object: playerItem,
                    queue: .main
                ) { [weak self] _ in
                    guard self != nil else { return }
                    Task { @MainActor in
                        print("Play finished.")
                    }
                }
            }
        }

        // MARK: - Actions

        @objc private func didTapPlay() {
            player?.play()
        }

        @objc private func didTapStop() {
            player?.pause()
        }

        // RePlay
        @objc private func didTapReplayFromStart() {
            player?.seek(to: .zero, toleranceBefore: .zero, toleranceAfter: .zero) { [weak self] _ in
                self?.player?.play()
            }
        }
    }

I would greatly appreciate an official response from Apple engineering on whether this is an intentional change, a regression, or an API contract clarification, and what the recommended approach is going forward. Thank you.
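A possible workaround sketch, not a confirmed fix: a drop-in variant of didTapPlay from the sample above that detects when the item has already reached its end and runs the completion path itself, rather than relying on the notification being re-posted. The premise that iOS 26 simply does not re-post the notification in this case is an assumption based on the report above.

    // Sketch only: names follow the sample above.
    @objc private func didTapPlay() {
        guard let player, let item = player.currentItem else { return }
        // duration can be indefinite before the item is ready; treat that as "not at end".
        let atEnd = item.duration.isNumeric && player.currentTime() >= item.duration
        if atEnd {
            // iOS 18 re-posted didPlayToEndTime here; run the completion handling manually instead.
            print("Play finished.")
        } else {
            player.play()
        }
    }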
Replies: 1 · Boosts: 1 · Views: 321 · Activity: 2d
What changes were made to the VideoToolbox HEVC encoder in iOS 26?
Because I want to control the grid size and number of HEIC images myself, I decided to perform HEVC encoding manually and then generate the HEIC image. Previously I used VTCompressionSession to accomplish this task, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18 — in other words, it was able to generate correct HEVC encoding, and its CMFormatDescription should also have been correct, since I relied on it to generate the decoderConfig; otherwise, the final image would have had decoding issues.

However, it can no longer generate a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator; it only fails on real hardware. The abnormal result is that the image becomes completely black, although the image dimensions are still correct. After troubleshooting, I suspect that the encoding behavior of VTCompressionSession has been modified on iOS 26, which causes the final hvc1 encoding I pass in to be incorrect.

I created the VTCompressionSession using the following configuration:

    var newSession: VTCompressionSession!
    var status = VTCompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        width: Int32(frameSize.width),
        height: Int32(frameSize.height),
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &newSession
    )
    try check(status, VideoToolboxErrorDomain)

    let properties: [CFString: Any] = [
        kVTCompressionPropertyKey_AllowFrameReordering: false,
        kVTCompressionPropertyKey_AllowTemporalCompression: false,
        kVTCompressionPropertyKey_RealTime: false,
        kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
        kVTCompressionPropertyKey_ProfileLevel: profileLevel,
        kVTCompressionPropertyKey_Quality: quality.rawValue,
    ]
    status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
    try check(status, VideoToolboxErrorDomain) {
        VTCompressionSessionInvalidate(newSession)
    }

Then I use the following code to encode each grid of the image:

    let status = VTCompressionSessionEncodeFrame(
        session,
        imageBuffer: buffer,
        presentationTimeStamp: presentationTimeStamp,
        duration: frameDuration,
        frameProperties: nil,
        infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
            try check(status, VideoToolboxErrorDomain)
            if let sampleBuffer {
                let encodedImage = try self.encodedImage(from: sampleBuffer)
                // handle encodedImage
            }
        }
    try check(status, VideoToolboxErrorDomain)

If I try to display this abnormal image in the app, my console outputs the following errors, so it can be inferred that the issue probably occurs during decoding:

    createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
    callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
    createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
    callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
    createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
    callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray

It needs to be emphasized again that this code used to work fine in the past, and the issue only occurs on an iOS 26 physical device. I noticed that iOS 26 has introduced many new properties, but I'm not sure whether some of these new properties must be set on the new system, and there's no information about this in the official documentation.
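A diagnostic sketch, not taken from the post: dump the HEVC parameter sets (VPS/SPS/PPS) from the CMFormatDescription of a sample buffer returned by the encode callback above, so iOS 18 and iOS 26 outputs can be compared byte for byte. If they differ, the change is in the encoder's output rather than in the HEIC packaging.

    import CoreMedia

    func dumpHEVCParameterSets(from sampleBuffer: CMSampleBuffer) {
        guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer) else { return }

        var parameterSetCount = 0
        var nalUnitHeaderLength: Int32 = 0
        // First call only asks how many parameter sets the description carries.
        var status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
            desc,
            parameterSetIndex: 0,
            parameterSetPointerOut: nil,
            parameterSetSizeOut: nil,
            parameterSetCountOut: &parameterSetCount,
            nalUnitHeaderLengthOut: &nalUnitHeaderLength)
        guard status == 0 else { return }

        for index in 0..<parameterSetCount {
            var pointer: UnsafePointer<UInt8>?
            var size = 0
            status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
                desc,
                parameterSetIndex: index,
                parameterSetPointerOut: &pointer,
                parameterSetSizeOut: &size,
                parameterSetCountOut: nil,
                nalUnitHeaderLengthOut: nil)
            guard status == 0, let pointer else { continue }
            let hex = Data(bytes: pointer, count: size).map { String(format: "%02x", $0) }.joined()
            print("HEVC parameter set \(index) (\(size) bytes): \(hex)")
        }
    }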
Replies: 3 · Boosts: 0 · Views: 307 · Activity: 1w
Threading guarantees with AVCaptureVideoDataOutput
I'm writing some camera functionality that uses AVCaptureVideoDataOutput. I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by creating my own dispatch_queue and configuring the AVCaptureVideoDataOutput with it. My question: if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread?

I have a more practical example below, showing how I access something owned by the foreground thread from the background thread, but I wonder when/how it's safe to clean up that resource. I have a setup similar to the following:

    // Foreground thread logic
    dispatch_queue_t queue = dispatch_queue_create("avf_camera_queue", nullptr);

    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    setupInputDevice(captureSession); // Connects the AVCaptureDevice...

    // Store some arbitrary data to be attached to the frame, stored on the foreground thread
    FrameMetaData frameMetaData = ...;

    MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];

    // Capture frameMetaData by reference in lambda
    [sampleBufferDelegate setFrameMetaDataGetter:
        [&frameMetaData]() { return &frameMetaData; }];

    AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
    [captureSession addOutput:captureVideoDataOutput];

    [captureSession startRunning];
    [captureSession stopRunning];
    // Is it now safe to destroy frameMetaData, or do we need a manual barrier?

And then in MySampleBufferDelegate:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
        didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
               fromConnection:(AVCaptureConnection *)connection {
        // Invokes the callback set above
        FrameMetaData *frameMetaData = frameMetaDataGetter();
        emitSampleBuffer(sampleBuffer, frameMetaData);
    }
Replies: 0 · Boosts: 0 · Views: 301 · Activity: 1w
High shutter speed with low frame rate and auto exposure
Hi there,

I want to set the iPhone camera to "S mode", or Shutter Priority in camera terminology: a semi-automatic exposure mode with the shutter speed fixed or set manually. However, setting the shutter speed manually disables auto exposure. Is there a way to keep auto exposure on while restricting the shutter speed?

Also, I would like to keep a low frame rate, e.g. 30 fps. Would I be able to set the shutter speed independently of the frame rate?

Here's the code for setting up the camera

Best,
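A sketch of one way to approximate shutter priority, since AVCaptureDevice has no built-in shutter-priority mode; the 1/500 s shutter, the 30 fps lock, and the offset-to-ISO mapping are placeholder choices, not a confirmed recipe: fix the frame duration and exposure duration, then steer ISO yourself by observing exposureTargetOffset.

    import AVFoundation

    final class ShutterPriorityController {
        private let device: AVCaptureDevice
        private var offsetObservation: NSKeyValueObservation?
        private let shutter = CMTime(value: 1, timescale: 500)   // 1/500 s, example value

        init(device: AVCaptureDevice) throws {
            self.device = device
            try device.lockForConfiguration()
            // Keep a fixed 30 fps frame rate, independent of the shutter duration.
            device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30)
            device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 30)
            // Fix the shutter; start from the current ISO.
            device.setExposureModeCustom(duration: shutter, iso: device.iso, completionHandler: nil)
            device.unlockForConfiguration()

            // Re-balance exposure by steering ISO toward a zero exposure-target offset.
            offsetObservation = device.observe(\.exposureTargetOffset, options: [.new]) { [weak self] device, _ in
                self?.updateISO(for: device.exposureTargetOffset)
            }
        }

        private func updateISO(for offsetEV: Float) {
            let format = device.activeFormat
            // Assumption: a positive offset means over-exposure, so lower ISO; flip the sign if
            // observation shows the opposite convention on your device.
            let target = min(max(device.iso * powf(2, -offsetEV), format.minISO), format.maxISO)
            guard (try? device.lockForConfiguration()) != nil else { return }
            device.setExposureModeCustom(duration: shutter, iso: target, completionHandler: nil)
            device.unlockForConfiguration()
        }
    }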
Replies: 0 · Boosts: 0 · Views: 460 · Activity: 1w
Looking for solutions to build a video chat app (Omegle/Chatroulette style) on Vision Pro
Hi everyone,

We are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners. The core idea is:
– a matching system where one user connects to another through a virtual persona;
– real-time video and audio transmission;
– time limits for sessions with the ability to extend them;
– users can skip a match and move on to the next one.

We have explored WebRTC and Twilio, but unfortunately they don't fit our use case.

Question: What alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario? Has anyone encountered a similar challenge and can recommend which technologies or tools to use?

Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 225 · Activity: 2w
AVPlayerViewController `customInfoViewControllers` crash/workaround on tvOS 26
One thing I've noticed on tvOS 26 is that if you try to set the AVPlayerViewController customInfoViewControllers property while the Content Tabs are on screen, your app will crash.

    *** Terminating app due to uncaught exception 'UIViewControllerHierarchyInconsistency', reason: 'trying to add child view controller that is already presented: <AVInfoPanelViewController: 0x1030cdc00>'
    *** First throw call stack:
    (0x18a7167bc 0x189a77510 0x18a7166a8 0x1ab425658 0x1b2ee9d54 0x1b2efcd60 0x1b2eaf3f0 0x1080f744c 0x107e021a8 0x107e01b3c 0x18de41c14 0x18de41ba8 0x18de48d28 0x18ad9e358 0x101fac5f0 0x101fc6228 0x101fe7278 0x101fbc6fc 0x101fbc63c 0x18a67a2e0 0x18a679418 0x18a673b34 0x1937e4d5c 0x1abb36588 0x1abb3ae80 0x1aae9dec4 0x108610174 0x1086100e4 0x108615140 0x189abd4d0)

I've logged a feedback (FB19554461), but it's getting awfully late in the dev cycle, so I've been trying to think of a workaround. The problem is that customInfoViewControllers is pretty declarative in nature: there are no properties or delegate methods I am aware of that let me know when the panels are displaying or not.

One trick I came up with was seeing if my custom info view controller's view was "visible" or not. I put that in quotes because it turns out it can be visible even when I think it's not; when the transport bar is scrolled to the top my custom VC still has its top pixels showing, so it gets a viewDidAppear call. So I tried to check whether my view controller's view is completely visible, i.e. based on the result of the CGRect contains method. And that works! But the problem is it only accounts for my own custom info view controllers, and not the standard one that Apple provides. I can't think of a way at all to know whether that is showing. Any ideas?
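A sketch that formalizes the visibility check described above; it only covers the custom panels, not the system info panel, and the retry interval is an arbitrary placeholder.

    import AVKit

    extension AVPlayerViewController {
        /// True when any current custom info panel is fully on screen, which (empirically)
        /// is when mutating customInfoViewControllers crashes on tvOS 26.
        var isShowingCustomInfoPanel: Bool {
            customInfoViewControllers.contains { panel in
                guard let panelView = panel.viewIfLoaded, panelView.window != nil else { return false }
                let frameInPlayer = panelView.convert(panelView.bounds, to: view)
                return view.bounds.contains(frameInPlayer)
            }
        }

        /// Defers the update while a custom panel is on screen.
        func safelySetCustomInfoViewControllers(_ controllers: [UIViewController]) {
            guard !isShowingCustomInfoPanel else {
                DispatchQueue.main.asyncAfter(deadline: .now() + 0.5) { [weak self] in
                    self?.safelySetCustomInfoViewControllers(controllers)
                }
                return
            }
            customInfoViewControllers = controllers
        }
    }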
Replies: 1 · Boosts: 0 · Views: 85 · Activity: 2w
Save slow motion video with custom playback speed bar
Background: For iOS, I have built a custom video recording app that records video at the highest possible FPS (supposedly around 240 FPS, but more like ~200 in practice). When I save this video to the user's camera roll, I notice the dynamic playback-speed bar. This bar will speed up or slow down the video.

My question: How can I save my video so that this playback-speed bar is constantly slow, or plays at real-time speed? For reference, I have included the playback-speed bar I am talking about in the screenshot; you can find it in the Photos app when you record slow-motion video.
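One possible direction, not a confirmed answer: rather than relying on the Photos slow-motion ramp, bake a uniform speed into the file before saving by retiming a composition and exporting it; sourceURL, outputURL, and the slowdown factor are placeholders. The exported file can then be added to Photos as an ordinary video.

    import AVFoundation

    func exportRetimed(sourceURL: URL, outputURL: URL, slowdown: Double = 4) async throws {
        let asset = AVURLAsset(url: sourceURL)
        let duration = try await asset.load(.duration)

        let composition = AVMutableComposition()
        let fullRange = CMTimeRange(start: .zero, duration: duration)
        try composition.insertTimeRange(fullRange, of: asset, at: .zero)

        // Stretch the whole timeline so playback is uniformly slow, with no speed ramp.
        composition.scaleTimeRange(fullRange,
                                   toDuration: CMTimeMultiplyByFloat64(duration, multiplier: slowdown))

        guard let export = AVAssetExportSession(asset: composition,
                                                presetName: AVAssetExportPresetHighestQuality) else {
            throw NSError(domain: AVFoundationErrorDomain, code: -1, userInfo: nil)
        }
        export.outputURL = outputURL
        export.outputFileType = .mov

        await withCheckedContinuation { continuation in
            export.exportAsynchronously { continuation.resume() }
        }
        if let error = export.error { throw error }
    }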
Replies: 0 · Boosts: 0 · Views: 183 · Activity: 3w
Bluetooth headphones
Has anyone experienced this bug? When playing a video, the Bluetooth headphones lose audio. My workaround is to select one of the other audio outputs, then go back and select the affected headphones again.
Replies: 1 · Boosts: 0 · Views: 83 · Activity: 3w
Issue with AirPlay for DRM videos
When I try to send a DRM-protected video via AirPlay to an Apple TV, the license request is made twice instead of once, as it normally is on iOS. We only allow one request per session for security reasons, so the second request fails and the video won't play. We've tested DRM-protected videos without token usage limits and it works, but this creates a security hole in our system.

Why does it request the license twice in:

    func contentKeySession(_ session: AVContentKeySession, didProvide keyRequest: AVContentKeyRequest)

Is there a way to prevent this?
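A possible mitigation sketch, not a confirmed fix: it assumes a CKC obtained for a given key identifier can be reused for a repeated request within the same session, which will not hold if the license server binds each response to a nonce. certificateData and requestCKCFromLicenseServer are placeholders, and access to the cache should be serialized in production code.

    import AVFoundation

    final class KeyDelegate: NSObject, AVContentKeySessionDelegate {
        private var ckcCache: [String: Data] = [:]   // keyed by the skd:// identifier

        func contentKeySession(_ session: AVContentKeySession,
                               didProvide keyRequest: AVContentKeyRequest) {
            guard let identifier = keyRequest.identifier as? String else { return }

            if let cached = ckcCache[identifier] {
                // Repeated request for the same key (e.g. during AirPlay): reuse the cached CKC.
                keyRequest.processContentKeyResponse(
                    AVContentKeyResponse(fairPlayStreamingKeyResponseData: cached))
                return
            }

            keyRequest.makeStreamingContentKeyRequestData(
                forApp: certificateData,                       // placeholder: FPS application certificate
                contentIdentifier: Data(identifier.utf8),
                options: nil
            ) { [weak self] spcData, error in
                guard let self, let spcData, error == nil else { return }
                guard let ckc = self.requestCKCFromLicenseServer(spc: spcData) else { return }
                self.ckcCache[identifier] = ckc
                keyRequest.processContentKeyResponse(
                    AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckc))
            }
        }

        // Placeholders for brevity.
        private var certificateData: Data { Data() }
        private func requestCKCFromLicenseServer(spc: Data) -> Data? { nil }
    }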
Replies: 0 · Boosts: 0 · Views: 184 · Activity: 3w
Sudden brightness/contrast shifts in videos across multiple apps on iPhone
Hi, I've been experiencing a strange issue with video playback on my iPhone. While watching videos, the image will suddenly shift: it becomes more greyish, then sometimes briefly goes black, and then returns to normal bright quality. This can happen multiple times during a single video.

This is not limited to the Photos app. I've seen it happen:
- In the Photos app when playing videos I recorded myself
- In Snapchat when watching videos sent by others
- Occasionally in other social media apps as well

Additional details:
- HDR Video is turned off
- Apple ProRes is turned off
- Tried both 4K 60fps and 4K 30fps
- Camera format set to "Most Compatible"
- Low Power Mode is off
- Issue happens whether the phone is cool or warm
- Doesn't seem related to the video file itself; the same file exported to another device looks fine all the way through
- These exact same videos play back completely normally on my iPad with no brightness or contrast shifts at all
- I'm currently on the iOS 17 public beta, but this issue was happening before I installed the beta as well, so it's not beta-specific

It almost feels like the display or system is switching between different brightness/contrast profiles mid-playback, regardless of the app. Has anyone else experienced this, and is there a way to disable this behavior so the brightness and color stay consistent during video playback? Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 204 · Activity: Aug ’25
UIViewController + AVPlayerLayer + AVPlayer cannot AirPlay to HomePod on tvOS 18.5, but AVPlayerViewController + AVPlayer is ok
1. Devices:
   - Apple TV HD, tvOS 18.5 (22L572), Model A1625
   - HomePod (1st generation), OS 18.5 (22L572)

2. Test cases:
   2.1 UIViewController + AVPlayerLayer + AVPlayer. Result: cannot AirPlay to HomePod. (Sample code for UIViewController + AVPlayer)
   2.2 AVPlayerViewController + AVPlayer. Result: can AirPlay to HomePod. (Sample code for AVPlayerViewController + AVPlayer)
Replies: 0 · Boosts: 0 · Views: 182 · Activity: Aug ’25
iOS 18 Picture-in-Picture dynamically changes UIScreen.main.bounds on iPhone 11 causing keyboard clipping and layout issues
When creating and activating an AVPictureInPictureController on iPhone 11 running iOS 18.x, the main screen viewport unexpectedly changes from the native 375×812 logical points to 414×896 points (the size of a different device such as iPhone XR/11 Plus). Additionally, a separate UIWindow with a zero CGRect is created. This causes UIKit layout calculations, especially those related to the keyboard and safeAreaInsets, to malfunction and results in issues like keyboard clipping and general UI misalignment. The problem appears tied specifically to iOS 18+ behavior and does not manifest on earlier iOS versions or other devices consistently.

Steps to Reproduce:
1. Run the app on iPhone 11 with iOS 18.x (including the latest minor updates).
2. Instantiate and start an AVPictureInPictureController during app runtime.
3. Observe that UIScreen.main.bounds changes from 375×812 (native iPhone 11 size) to 414×896.
4. Notice a secondary UIWindow appears with frame CGRectZero.
5. Interact with text inputs that cause the keyboard to appear.
6. The keyboard is clipped or partially obscured, causing UI usability issues. Layout elements dependent on UIScreen.main.bounds or safeAreaInsets behave incorrectly.

Expected Behavior:
- UIScreen.main.bounds should not dynamically change upon PiP controller creation.
- The coordinate space and viewport should remain accurate to the device's native dimensions.
- The keyboard and UI layout should adjust properly with respect to the actual on-screen size and safe areas.

Actual Behavior:
- UIScreen.main.bounds changes to 414×896 upon PiP controller creation on iPhone 11 with iOS 18+.
- A zero-rect secondary UIWindow is created, impacting layout queries.
- Keyboard clipping and UI layout bugs occur.
- The app viewport and coordinate space are inconsistent with the device hardware.
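A defensive sketch that works around the symptom rather than the PiP behavior itself: read screen bounds from the view's own window scene instead of UIScreen.main, and compute keyboard overlap in the view's coordinate space. This is an assumption about where the layout code can be redirected, not a fix for the bounds change.

    import UIKit

    extension UIView {
        /// Scene-backed screen bounds; falls back to UIScreen.main only when no scene is available.
        var sceneScreenBounds: CGRect {
            window?.windowScene?.screen.bounds ?? UIScreen.main.bounds
        }

        /// Height of the keyboard overlapping this view, from a keyboard notification's userInfo.
        func keyboardOverlap(from notification: Notification) -> CGFloat {
            guard let frameValue = notification.userInfo?[UIResponder.keyboardFrameEndUserInfoKey] as? NSValue
            else { return 0 }
            // Keyboard frames arrive in screen coordinates; convert via the window's base space.
            let keyboardFrame = convert(frameValue.cgRectValue, from: nil)
            return max(0, bounds.maxY - keyboardFrame.minY)
        }
    }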
Replies: 0 · Boosts: 0 · Views: 150 · Activity: Jul ’25
AVSampleBufferDisplayLayer drops frames when playing uncompressed video with bframes > 3
When using AVSampleBufferDisplayLayer to play uncompressed H.264 and H.265 video with more than 7 B-frames, frame drops occur. The more B-frames there are, the more noticeable the frame drops become, for example with 15 B-frames.

Use FFmpeg to transcode a video file with visible timestamps and frame numbers (x264 or x265):

    ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx264 -x264-params "bframes=15:b-adapt=0" -crf 30 -y x264_bf15.mp4
    ffmpeg -i test.mp4 -vf "drawtext=fontsize=45:text=%{pts} %{n}:y=400" -c:v libx265 -x265-params "bframes=15:b-adapt=0" -crf 30 -y x265_bf15.mp4

Use the demo player from this repository to reproduce the issue: https://github.com/msfrms/CustomPlayer

Frame drops can be observed, and the following log can be found in the device's console:

    mediaserverd <<<< IQ-CA >>>> piqca_gmstats_dump: FIQCA(0x1266f4000) recent frames: enqueued: 184, displayed: 138, dropped: 42, flushed: 0, evicted: 3, >16ms late: 2

P.S. I was using an iPhone 11 with iOS 14.6 to reproduce this issue.

May I ask why frame drops occur in this case? Is there any configuration or API usage change that could help fix the frame drop issue? Many thanks!
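A sketch of the enqueue pattern that tends to tolerate deep B-frame reordering better, offered as a thing to try rather than a confirmed fix: let the layer pull samples with requestMediaDataWhenReady(on:using:) and enqueue only while isReadyForMoreMediaData, instead of pushing one frame per display tick. nextSample is a placeholder for the demuxer.

    import AVFoundation

    func startFeeding(_ layer: AVSampleBufferDisplayLayer,
                      queue: DispatchQueue,
                      nextSample: @escaping () -> CMSampleBuffer?) {
        layer.requestMediaDataWhenReady(on: queue) {
            // Keep the layer's internal queue topped up so it can reorder B-frames for display.
            while layer.isReadyForMoreMediaData {
                guard let sample = nextSample() else {
                    layer.stopRequestingMediaData()
                    return
                }
                layer.enqueue(sample)   // samples must be enqueued in decode order
            }
        }
    }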
Replies: 2 · Boosts: 1 · Views: 186 · Activity: Jul ’25
AVAssetReaderOutput.Provider Missing symbols
Recurring crash on install of any app that uses the new sourceVideoTrackProvider.next():

    dyld[41966]: Symbol not found: _$sSo19AVAssetReaderOutputC12AVFoundationE8ProviderC4nextxSgyYaKFTjTu
      Referenced from: <79AA2BE0-A6B4-32F5-A804-E84BBE5D1AEA> /Users/<username>/Library/Developer/Xcode/DerivedData/TrackProviderCrash-bbbhjptcxnmfdcackxtpucnunxyc/Build/Products/Debug-maccatalyst/TrackProviderCrash.app/Contents/MacOS/TrackProviderCrash.debug.dylib
      Expected in: <1B847AF9-7973-3B28-95C2-09E73F6DD50B> /usr/lib/swift/libswiftAVFoundation.dylib

It can be reproduced with the current Xcode Beta 4 by running this sample on Mac Catalyst and macOS: https://developer.apple.com/documentation/AVFoundation/converting-projected-video-to-apple-projected-media-profile

The crash goes away if you comment out lines 154-158 and 164-170, which are:

    while let sampleBuffer = try await sourceVideoTrackProvider.next() { /* other code */ }

It can also be reproduced if you add the code below to a Mac Catalyst project:

    import AVKit

    let asset: AVURLAsset = .init(url: Bundle.main.url(forResource: "SomeVideo.mp4", withExtension: nil)!)
    let videoReader = try! AVAssetReader(asset: asset)
    let videoTracks = try! await asset.loadTracks(withMediaCharacteristic: .visual)

    // Get the side-by-side video track.
    let videoTrack = videoTracks.first!
    let videoInputTrack = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)

    let sourceVideoTrackProvider: AVAssetReaderOutput.Provider<CMReadySampleBuffer<CMSampleBuffer.DynamicContent>> =
        videoReader.outputProvider(for: videoInputTrack)

    // Comment out this
    while let sb = try! await sourceVideoTrackProvider.next() {
    }
Replies: 1 · Boosts: 0 · Views: 578 · Activity: Jul ’25
PHPickerViewController Not Offering public.hevc UTI for a Known HEVC Video
I'm working on an app where a user needs to select a video from their Photos library, and I need to get the original, unmodified HEVC (H.265) data stream to preserve its encoding.

The Problem
I have confirmed that my source videos are HEVC. I can record a new video with my iPhone 15 Pro Max camera set to "High Efficiency," export the "Unmodified Original" from Photos on my Mac, and verify that the codec is MPEG-H Part2/HEVC (H.265). However, when I select that exact same video in my app using PHPickerViewController, the itemProvider does not list public.hevc as an available type identifier. This forces me to fall back to a generic movie type, which results in the system providing me with a transcoded H.264 version of the video.

Here is the debug output from my app after selecting a known HEVC video:

    ⚠️ 'public.hevc' not found. Falling back to generic movie type (likely H.264).

What I've Tried
My code explicitly checks for the public.hevc identifier in the registeredTypeIdentifiers array. Since it's not found, my HEVC-specific logic is never triggered. Here is a minimal version of my PHPickerViewControllerDelegate implementation:

    import UniformTypeIdentifiers

    // ... inside the Coordinator class ...

    func picker(_ picker: PHPickerViewController, didFinishPicking results: [PHPickerResult]) {
        picker.dismiss(animated: true)

        guard let result = results.first else { return }
        let itemProvider = result.itemProvider

        let hevcIdentifier = "public.hevc"
        let identifiers = itemProvider.registeredTypeIdentifiers
        print("Available formats from itemProvider: \(identifiers)")

        if identifiers.contains(hevcIdentifier) {
            print("✅ HEVC format found, requesting raw data...")
            itemProvider.loadDataRepresentation(forTypeIdentifier: hevcIdentifier) { (data, error) in
                // ... process H.265 data ...
            }
        } else {
            print("⚠️ 'public.hevc' not found. Falling back to generic movie type (likely H.264).")
            itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in
                // ... process H.264 fallback ...
            }
        }
    }

My Environment
- Device: iPhone 15 Pro Max
- iOS Version: iOS 18.5
- Xcode Version: 16.2

My Questions
1. Are there specific conditions (e.g., the video being HDR/Dolby Vision, Cinematic, or stored in iCloud) under which PHPickerViewController's itemProvider would intentionally not offer the public.hevc type identifier, even for an HEVC video?
2. What is the definitive, recommended API sequence to guarantee that I receive the original, unmodified data stream for a video asset, ensuring that no transcoding to H.264 occurs during the process?

Any insight into why public.hevc might be missing from the registeredTypeIdentifiers for a known HEVC asset would be greatly appreciated. Thank you.
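One setting worth ruling out; it does not explain why public.hevc is absent, and whether it avoids the transcode for these particular assets is an assumption to verify: ask the picker for the current representation, so the system is not required to deliver a "compatible" (typically H.264) version of the movie.

    import PhotosUI
    import UIKit

    func makePicker() -> PHPickerViewController {
        var config = PHPickerConfiguration(photoLibrary: .shared())
        config.filter = .videos
        // Deliver the asset as stored, rather than a transcoded "compatible" representation.
        config.preferredAssetRepresentationMode = .current
        return PHPickerViewController(configuration: config)
    }

    // In the delegate, loading the generic movie type should then keep the original encoding
    // (HEVC stays HEVC); verify by inspecting the delivered file's video track with AVAsset.
    // itemProvider.loadFileRepresentation(forTypeIdentifier: UTType.movie.identifier) { url, error in ... }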
Replies: 3 · Boosts: 0 · Views: 101 · Activity: Jul ’25
Play videos in WebM format on iOS devices
I would like to play videos in WebM format on my iPhone. I understand that it is basically impossible to play WebM videos natively on an iPhone, but is there any way to display them? I would like to know if there is an official Swift SDK or development kit released by Apple, or, if there are any third-party products, please let me know.
Replies: 1 · Boosts: 0 · Views: 248 · Activity: Jul ’25
Final Cut Pro Workflow Extension Drag and Drop to Timeline Not Working Despite Proper FCPXML Clip Structure
Our Final Cut Pro workflow extension, built with the ProExtensionHost framework, uses an advanced NSPasteboardItemDataProvider system with multi-version FCPXML support (1.9, 1.10, 1.13) and proper relative path UIDs for Motion templates. We've implemented a clip wrapper approach with placeholder assets and elements containing effects to enable direct timeline drag functionality.

However, drag and drop from our Final Cut Pro workflow extension directly to the timeline is still not working despite proper element structure in our FCPXML. Our implementation creates valid clip elements with effects applied, but the Final Cut Pro timeline doesn't accept them during drag operations from our ProExtensionHost-based workflow extension.

Steps to Reproduce:
1. Create a Final Cut Pro workflow extension using the ProExtensionHost framework with an NSPasteboardItemDataProvider implementation.
2. Generate FCPXML with proper element structure:

Expected Result: Clip should be accepted by the timeline and the effect applied from the workflow extension.
Actual Result: Timeline rejects the drag operation from the ProExtensionHost-based workflow extension.

Question: Are there additional requirements or ProExtensionHost API calls needed beyond standard NSPasteboardItemDataProvider for Final Cut Pro workflow extension timeline drag functionality?
Replies: 0 · Boosts: 0 · Views: 116 · Activity: Jul ’25
HDR video & screen brightness
When I play an HDR video in the iPhone Photos app, I can see the HDR effect clearly. But if the HDR video is played continuously for more than 30-40 minutes, the HDR effect disappears and the brightness is compressed to the SDR range. This issue appears on any iPhone; depending on the phone, it may take 20-30 minutes, 30-40 minutes, or even just a few minutes, as on the iPhone 12 mini.

Similarly, if I use AVPlayer to play and preview an HDR video for more than 30-40 minutes, the HDR effect disappears and the screen brightness dims. Also, currentEDRHeadroom gradually decreases to 1.

Note: test with an HDR video longer than 1 hour; if the video is short, loop it.

My question is how to avoid losing the HDR effect after 30-40 minutes when I use CAMetalLayer to render any HDR video.
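A monitoring sketch rather than a fix: it cannot stop the system from lowering EDR headroom (which may be thermal or policy driven), but it makes the EDR opt-in explicit and lets the renderer adapt its tone mapping instead of clipping as headroom falls toward 1.0.

    import UIKit
    import Metal
    import QuartzCore

    // Opt the layer into extended-dynamic-range rendering.
    func configureHDRLayer(_ layer: CAMetalLayer) {
        layer.wantsExtendedDynamicRangeContent = true
        layer.pixelFormat = .rgba16Float
        layer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)
    }

    // Query once per frame (e.g. from the render loop) and feed the value into tone mapping.
    // 1.0 means SDR only; larger values mean EDR headroom is currently available.
    func currentHeadroom(for view: UIView) -> CGFloat {
        let screen = view.window?.windowScene?.screen ?? UIScreen.main
        return screen.currentEDRHeadroom
    }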
Replies: 1 · Boosts: 0 · Views: 100 · Activity: Jul ’25
After playing an HDR video on iPhone for a while, the HDR effect disappears and the screen brightness decreases
When I use AVPlayer to obtain the video frame CVPixelBufferRef of an HDR video, and use AVSampleBufferDisplayLayer to display it on the screen, after a period of time the HDR video content and the screen gradually darken, losing the HDR effect.

Steps to reproduce:
1. Create an AVPlayer to loop an HDR video, specifying the video frame format as kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange.
2. Create a timer to get the video frame CVPixelBufferRef at 30 frames per second.
3. Use AVSampleBufferDisplayLayer to display the CVPixelBufferRef on the screen.
4. Don't operate the phone; wait for a period of time (such as 40 minutes). The HDR effect disappears and the screen darkens.

Note:
- You need to use an iPhone device running iOS 18.5 or below.
- You need to ensure that the HDR video is played in a loop, that is, that the screen continues to display HDR content. Depending on the device, you need to wait 20-40 minutes.
- In the iPhone Photos app, the same problem occurs after playing HDR video in a loop for a long time.

Expected Results: When rendering HDR content for a long time, the HDR effect is always preserved, and the HDR content and screen do not darken.

Current Results: After about 20-40 minutes, the HDR effect disappears and the screen darkens.
Replies: 4 · Boosts: 0 · Views: 548 · Activity: Jul ’25