Dive into the world of video on Apple platforms, exploring ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic


Looking for solutions to build a video chat app (Omegle/Chatroulette style) on Vision Pro
Hi everyone, We are working on a prototype app for Apple Vision Pro that is similar in functionality to Omegle or Chatroulette, but exclusively for Vision Pro owners. The core idea is:
– a matching system where one user connects to another through a virtual persona;
– real-time video and audio transmission;
– time limits for sessions with the ability to extend them;
– users can skip a match and move on to the next one.
We have explored WebRTC and Twilio, but unfortunately, they don’t fit our use case. Question: What alternative services or SDKs are available for implementing real-time video/audio communication on Vision Pro that would work with this scenario? Has anyone encountered a similar challenge and can recommend which technologies or tools to use? Thanks in advance!
Replies: 2 · Boosts: 0 · Views: 254 · Latest activity: Aug ’25
Threading guarantees with AVCaptureVideoDataOutput
I'm writing some camera functionality that uses AVCaptureVideoDataOutput. I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput. My question is then, if I configure my AVCaptureSession differently, or even stop it altogether, is this guaranteed to flush all pending jobs on my background thread? For example, does [AVCaptureSession stopRunning] imply a blocking call until all pending frame-callbacks are done? I have a more practical example below, showing how I am accessing something from the foreground thread from the background thread, but I wonder when/how it's safe to clean up that resource. I have a setup similar to the following:

// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("qt_avf_camera_queue", nullptr);

AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...

// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...;

MySampleBufferDelegate *sampleBufferDelegate = [MySampleBufferDelegate alloc];

// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];

AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate queue:queue];
[captureSession addOutput:captureVideoDataOutput];

[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need manual barrier?

And then in MySampleBufferDelegate:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
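Not from the original post, but a minimal sketch of one way to get an explicit synchronization point: it isn't clearly documented whether stopRunning waits for delegate callbacks already enqueued on a custom queue, so draining that queue with a synchronous no-op before tearing down shared state gives a manual barrier. Names here are illustrative assumptions.

import AVFoundation

let delegateQueue = DispatchQueue(label: "qt_avf_camera_queue")
let session = AVCaptureSession()
// ... configure inputs and an AVCaptureVideoDataOutput whose delegate
//     callbacks are delivered on delegateQueue ...

session.stopRunning()

// Drain the delegate queue: any captureOutput(_:didOutput:from:) call that was
// already enqueued has finished once this empty block returns.
delegateQueue.sync { }

// After this point it should be safe to release resources the callbacks touched.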
Replies: 2 · Boosts: 0 · Views: 404 · Latest activity: Sep ’25
What changes were made to the VideoToolbox HEVC encoder in iOS 26?
Because I want to control the grid size and number of HEIC images myself, I decided to perform HEVC encoding manually and then generate the HEIC image. Previously, I used VTCompressionSession to accomplish this task, and the results were satisfactory. It worked perfectly on iOS 16 through iOS 18 — in other words, it was able to generate correct HEVC encoding, and its CMFormatDescription should also have been correct, since I relied on it to generate the decoderConfig; otherwise, the final image would have decoding issues. However, it can no longer generate a valid HEIC image on a physical device running iOS 26. Interestingly, it still works fine on the iOS 26 simulator — it only fails on real hardware. The abnormal result is that the image becomes completely black, although the image dimensions are still correct. After my troubleshooting, I suspect that the encoding behavior of VTCompressionSession has been modified on iOS 26, which causes the final hvc1 encoding I pass in to be incorrect.

I created a VTCompressionSession using the following configuration.

var newSession: VTCompressionSession!
var status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: Int32(frameSize.width),
    height: Int32(frameSize.height),
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: nil,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &newSession
)
try check(status, VideoToolboxErrorDomain)

let properties: [CFString: Any] = [
    kVTCompressionPropertyKey_AllowFrameReordering: false,
    kVTCompressionPropertyKey_AllowTemporalCompression: false,
    kVTCompressionPropertyKey_RealTime: false,
    kVTCompressionPropertyKey_MaximizePowerEfficiency: false,
    kVTCompressionPropertyKey_ProfileLevel: profileLevel,
    kVTCompressionPropertyKey_Quality: quality.rawValue,
]
status = VTSessionSetProperties(newSession, propertyDictionary: properties as CFDictionary)
try check(status, VideoToolboxErrorDomain) {
    VTCompressionSessionInvalidate(newSession)
}

Then use the following code to encode each Grid of the image.

let status = VTCompressionSessionEncodeFrame(
    session,
    imageBuffer: buffer,
    presentationTimeStamp: presentationTimeStamp,
    duration: frameDuration,
    frameProperties: nil,
    infoFlagsOut: nil) { [weak self] status, _, sampleBuffer in
        try check(status, VideoToolboxErrorDomain)
        if let sampleBuffer {
            let encodedImage = try self.encodedImage(from: sampleBuffer)
            // handle encodedImage
        }
    }
try check(status, VideoToolboxErrorDomain)

If I try to display this abnormal image in the App, my console outputs the following error, so it can be inferred that the issue probably occurred during decoding.

createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray
createImageBlock:3029: *** ERROR: CGImageBlockCreate {0, 0, 2316, 6176} - data is NULL
callDecodeImage:2411: *** ERROR: decodeImageImp failed - NULL _blockArray

It needs to be emphasized again that this code used to work fine in the past, and the issue only occurs on an iOS 26 physical device. I noticed that iOS 26 has introduced many new properties, but I’m not sure whether some of these new properties must be set in the new system, and there’s no information about this in the official documentation.
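Not part of the original post, but a small diagnostic sketch that may help narrow this down: dumping the HEVC parameter sets (VPS/SPS/PPS) that the encoder placed in the output CMFormatDescription, so the iOS 18 and iOS 26 results can be compared byte for byte. The helper name is made up for illustration.

import Foundation
import CoreMedia

// Hypothetical diagnostic: print the parameter sets from an encoded sample buffer.
func dumpHEVCParameterSets(from sampleBuffer: CMSampleBuffer) {
    guard let desc = CMSampleBufferGetFormatDescription(sampleBuffer) else { return }

    var count = 0
    var nalUnitHeaderLength: Int32 = 0
    // First call only asks how many parameter sets there are.
    var status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
        desc,
        parameterSetIndex: 0,
        parameterSetPointerOut: nil,
        parameterSetSizeOut: nil,
        parameterSetCountOut: &count,
        nalUnitHeaderLengthOut: &nalUnitHeaderLength)
    guard status == 0 else { return }

    for index in 0..<count {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        status = CMVideoFormatDescriptionGetHEVCParameterSetAtIndex(
            desc,
            parameterSetIndex: index,
            parameterSetPointerOut: &pointer,
            parameterSetSizeOut: &size,
            parameterSetCountOut: nil,
            nalUnitHeaderLengthOut: nil)
        if status == 0, let pointer {
            let hex = Data(bytes: pointer, count: size).map { String(format: "%02x", $0) }.joined()
            print("parameter set \(index): \(size) bytes, NAL header length \(nalUnitHeaderLength): \(hex)")
        }
    }
}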
Replies: 3 · Boosts: 0 · Views: 401 · Latest activity: Sep ’25
Background GPU access in iOS 26 for iPhones
We build mobile apps for creators to edit their videos. After editing the video, the creator has to export it so that it can be uploaded to YouTube. The export is a time-consuming and GPU-intensive process. The creator can exit the app for various reasons, like receiving a call or putting the app in the background. This causes the export to fail :( Keeping this limitation in mind, Apple announced that the iOS 26 launch would start to support background GPU access. Here is the official documentation: https://developer.apple.com/documentation/BundleResources/Entitlements/com.apple.developer.background-tasks.continued-processing.gpu When we tried using this feature, we were not able to get it to work on iOS 26. We stumbled upon this ticket (https://developer.apple.com/forums/thread/797538?answerId=854825022#854825022) in the Apple Developer forum, in which possibly an Apple engineer claims it is supported ONLY for iPadOS 26. This is a very big bummer for us. 96% of our users are on iPhone (compared to iPad), and if we refer to the official documentation above, it claims that this feature should work on iOS 26. This feature is extremely important for having the best user experience and reducing user frustration, and it will be useful for other video editing apps. Looking forward to a resolution.
Replies: 1 · Boosts: 0 · Views: 213 · Latest activity: Oct ’25
"No signal" message when connecting LG tv via HDM
Hi everyone, I am currently on macOS Tahoe (26.1), and for some weird reason my Mac is not connecting via HDMI. To be accurate: it is connecting, and the LG TV shows up in the Displays settings, but no image shows up on it, and I have no idea why. This used to work, as I've tried this cable before with the same exact TV. The cable is a basic Amazon Basics HDMI one. Allow me to advance this question a little: I'm happy to try more advanced suggestions such as Terminal commands, whereas basic questions like "have you connected it right?" are just a waste of time.
Replies: 4 · Boosts: 0 · Views: 656 · Latest activity: Oct ’25
Capturing multiple screens no longer works with macOS Sequoia
Capturing more than one display is no longer working with macOS Sequoia. We have a product that allows users to capture up to 2 displays/screens. Our application is using gstreamer, which in turn is based on AVFoundation. I found a quick way to replicate the issue by just running 2 captures from separate terminals. Assuming display 1 has device index 0 and display 2 has device index 1, here are the steps:

Install gstreamer with: brew install gstreamer

Then open 2 terminal windows and launch the following processes:

Terminal 1 (device-index 0):
gst-launch-1.0 avfvideosrc -e device-index=0 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink

Terminal 2 (device-index 1):
gst-launch-1.0 avfvideosrc -e device-index=1 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink

The first process that is launched will show the screen; the second process launched will not. Testing this on macOS Ventura and Sonoma works as expected, showing both screens. I submitted the same issue on Feedback Assistant: FB15900976
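Not from the original post, but as a cross-check on Sequoia it may be worth testing a ScreenCaptureKit-based capture path alongside the AVFoundation one, since that is the framework Apple now steers display capture toward. A minimal sketch, assuming screen-recording permission has already been granted and leaving out error handling and frame processing:

import ScreenCaptureKit
import CoreMedia

final class DisplayCapturer: NSObject, SCStreamOutput {
    private var stream: SCStream?

    func start(displayIndex: Int) async throws {
        // Enumerate shareable content and pick the requested display.
        let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
        let display = content.displays[displayIndex]

        let filter = SCContentFilter(display: display, excludingWindows: [])
        let config = SCStreamConfiguration()
        config.width = 640
        config.height = 360

        let stream = SCStream(filter: filter, configuration: config, delegate: nil)
        try stream.addStreamOutput(self, type: .screen, sampleHandlerQueue: DispatchQueue(label: "capture.output"))
        try await stream.startCapture()
        self.stream = stream
    }

    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
        // Each captured display frame arrives here as a CMSampleBuffer.
    }
}

Running one capturer per display from the same process, or from two processes, would show whether the one-display limitation is specific to the avfvideosrc path.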
Replies: 2 · Boosts: 0 · Views: 355 · Latest activity: Apr ’25
Best way to cache an infinite scroll view of videos
Hi, I'm working on an app with an infinite scrollable video feed, similar to TikTok or Instagram Reels. I initially thought it would be a good idea to cache videos in the file system, but after reading this post it seems like caching videos on the file system is not recommended: https://forums.developer.apple.com/forums/thread/649810#:~:text=If%20the%20videos%20can%20be%20reasonably%20cached%20in%20RAM%20then%20we%20would%20recommend%20that.%20Regularly%20caching%20video%20to%20disk%20contributes%20to%20NAND%20wear The reason I am hesitant to cache videos in memory is because this will add up pretty quickly and increase memory pressure for my app. After seeing the amount of documents and data storage that Instagram uses, it's obvious they are caching videos on the file system. So I was wondering: what is the updated best practice for caching in these kinds of apps?
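Not from the original post; a minimal sketch of the in-memory direction the quoted thread recommends, using an illustrative AssetCache type. NSCache evicts automatically under memory pressure, so the footprint stays bounded, and cached AVURLAssets can back fresh AVPlayerItems as the user scrolls.

import AVFoundation

final class AssetCache {
    // Keyed by URL; NSCache drops entries automatically when memory is tight.
    private let cache = NSCache<NSURL, AVURLAsset>()

    init(countLimit: Int = 10) {
        cache.countLimit = countLimit
    }

    func asset(for url: URL) -> AVURLAsset {
        if let cached = cache.object(forKey: url as NSURL) {
            return cached
        }
        let asset = AVURLAsset(url: url)
        cache.setObject(asset, forKey: url as NSURL)
        return asset
    }

    func playerItem(for url: URL) -> AVPlayerItem {
        // A fresh AVPlayerItem per playback, backed by the shared (possibly already-buffered) asset.
        AVPlayerItem(asset: asset(for: url))
    }

    func prefetch(_ urls: [URL]) {
        // Creating the assets ahead of time lets upcoming cells start from warm state.
        urls.forEach { _ = asset(for: $0) }
    }
}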
Replies: 2 · Boosts: 0 · Views: 518 · Latest activity: Dec ’24
AVPlayer fail
When I played a local video (I downloaded it to the sandbox), KVO on the AVPlayerItem status reports AVPlayerItemStatusFailed, and the error is: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (24), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x3004137e0 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}. Why?
Replies: 2 · Boosts: 0 · Views: 366 · Latest activity: Dec ’24
iPhone Video Upload Error - reason unknown
Hi everyone, I'm developing a customization tool in which our customer can upload an mp3 or mp4 file that will be scannable through our AR application. On desktop and Android, this is working perfectly. For some reason, however, on iPhone we're unable to load most of the video files. I've checked the clips, and they are .mov/h.264 files, which are supported by iPhone. We're currently not sure how we can fix this issue to allow customers who own an iPhone to upload clips to our website. Any tips in the right direction are more than welcome. Thanks in advance!
Replies: 1 · Boosts: 0 · Views: 520 · Latest activity: Dec ’24
How to implement Picture-in-Picture (PiP) in Flutter for iOS using LiveKit without a video URL?
I am building a video conferencing app using LiveKit in Flutter and want to implement Picture-in-Picture (PiP) mode on iOS. My goal is to display a view showing the speaker's initials or avatar during PiP mode. I successfully implemented this functionality on Android but am struggling to achieve it on iOS. I am using a MethodChannel to communicate with the native iOS code. Here's the Flutter-side code:

import 'package:flutter/foundation.dart';
import 'package:flutter/services.dart';

class PipController {
  static const _channel = MethodChannel('pip_channel');

  static Future<void> startPiP() async {
    try {
      await _channel.invokeMethod('enterPiP');
    } catch (e) {
      if (kDebugMode) {
        print("Error starting PiP: $e");
      }
    }
  }

  static Future<void> stopPiP() async {
    try {
      await _channel.invokeMethod('exitPiP');
    } catch (e) {
      if (kDebugMode) {
        print("Error stopping PiP: $e");
      }
    }
  }
}

On the iOS side, I am using AVPictureInPictureController. Since it requires an AVPlayerLayer, I had to include a dummy video URL to initialize the AVPlayer. However, this results in the dummy video’s audio playing in the background, but no view is displayed in PiP mode. Here’s my iOS code:

import Flutter
import UIKit
import AVKit

@main
@objc class AppDelegate: FlutterAppDelegate {
  var pipController: AVPictureInPictureController?
  var playerLayer: AVPlayerLayer?

  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -> Bool {
    let controller: FlutterViewController = window?.rootViewController as! FlutterViewController
    let pipChannel = FlutterMethodChannel(name: "pip_channel", binaryMessenger: controller.binaryMessenger)
    pipChannel.setMethodCallHandler { [weak self] (call: FlutterMethodCall, result: @escaping FlutterResult) in
      if call.method == "enterPiP" {
        self?.startPictureInPicture(result: result)
      } else if call.method == "exitPiP" {
        self?.stopPictureInPicture(result: result)
      } else {
        result(FlutterMethodNotImplemented)
      }
    }
    GeneratedPluginRegistrant.register(with: self)
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }

  private func startPictureInPicture(result: @escaping FlutterResult) {
    guard AVPictureInPictureController.isPictureInPictureSupported() else {
      result(FlutterError(code: "UNSUPPORTED", message: "PiP is not supported on this device.", details: nil))
      return
    }

    // Set up the AVPlayer
    let player = AVPlayer(url: URL(string: "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4")!)
    let playerLayer = AVPlayerLayer(player: player)
    self.playerLayer = playerLayer

    // Create a dummy view
    let dummyView = UIView(frame: CGRect(x: 0, y: 0, width: 1, height: 1))
    dummyView.isHidden = true
    window?.rootViewController?.view.addSubview(dummyView)
    dummyView.layer.addSublayer(playerLayer)
    playerLayer.frame = dummyView.bounds

    // Initialize PiP Controller
    pipController = AVPictureInPictureController(playerLayer: playerLayer)
    pipController?.delegate = self

    // Start playback and PiP
    player.play()
    pipController?.startPictureInPicture()
    print("Picture-in-Picture started")
    result(nil)
  }

  private func stopPictureInPicture(result: @escaping FlutterResult) {
    guard let pipController = pipController, pipController.isPictureInPictureActive else {
      result(FlutterError(code: "NOT_ACTIVE", message: "PiP is not currently active.", details: nil))
      return
    }
    pipController.stopPictureInPicture()
    playerLayer = nil
    self.pipController = nil
    result(nil)
  }
}

extension AppDelegate: AVPictureInPictureControllerDelegate {
  func pictureInPictureControllerDidStartPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
    print("PiP started")
  }

  func pictureInPictureControllerDidStopPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) {
    print("PiP stopped")
  }
}

Questions:
How can I implement PiP mode on iOS without using a video URL (or AVPlayerLayer)?
Is there a way to display a custom UIView (like a speaker’s initials or an avatar) in PiP mode instead of requiring a video?
Why does PiP not display any view, even though the dummy video URL is playing in the background?
I am new to iOS development and would greatly appreciate any guidance or alternative approaches to achieve this functionality. Thank you!
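Not from the original post, but one possible direction for PiP without an AVPlayerLayer: since iOS 15, AVKit can host a custom view hierarchy in the PiP window through AVPictureInPictureVideoCallViewController and the content-source initializer. A hedged sketch follows; names like AvatarPiPCoordinator and avatarView are placeholders, and a real video-call context plus the appropriate background modes are still required.

import AVKit
import UIKit

final class AvatarPiPCoordinator: NSObject, AVPictureInPictureControllerDelegate {
    private var pipController: AVPictureInPictureController?

    // sourceView: the on-screen view PiP animates from; avatarView: what to show inside PiP.
    func setUp(sourceView: UIView, avatarView: UIView) {
        guard AVPictureInPictureController.isPictureInPictureSupported() else { return }

        // A view controller whose view AVKit displays inside the PiP window.
        let callViewController = AVPictureInPictureVideoCallViewController()
        callViewController.preferredContentSize = CGSize(width: 120, height: 90)
        avatarView.frame = callViewController.view.bounds
        avatarView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        callViewController.view.addSubview(avatarView)

        let contentSource = AVPictureInPictureController.ContentSource(
            activeVideoCallSourceView: sourceView,
            contentViewController: callViewController)

        let controller = AVPictureInPictureController(contentSource: contentSource)
        controller.delegate = self
        pipController = controller
    }

    func start() { pipController?.startPictureInPicture() }
    func stop() { pipController?.stopPictureInPicture() }
}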
Replies: 1 · Boosts: 0 · Views: 751 · Latest activity: Dec ’24
Does tvOS Support the Multiview Feature?
I've seen the Multiview feature on tvOS that displays a small grid icon when available. However, I only see this functionality in VisionOS using the AVMultiviewManager. Does a different name refer to this feature on tvOS? Relevant Links: https://www.reddit.com/r/appletv/comments/12opy5f/handson_with_the_new_multiview_split_screen/ https://www.pocket-lint.com/how-to-use-multiview-apple-tv/#:~:text=You'll%20see%20a%20grid,running%20at%20the%20same%20time.
Replies: 1 · Boosts: 0 · Views: 556 · Latest activity: Dec ’24
Debug MediaExtension plugin in system executable?
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation. My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e. the process hosting my plugin instance that is managing the MESampleBuffer protocol calls, for example). Is there a method to configure Xcode for interactive attaching to this background process for interactive debugging? Right now I have to use Console + print, which is not fun or productive. Does Apple have a working example of a MediaExtension anywhere? This is an exciting API that is very under-documented. I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused. Any assistance is highly appreciated!
Replies: 0 · Boosts: 0 · Views: 511 · Latest activity: Dec ’24
Why AVAssetWriter adds PTS drift when writing fMP4
Hi, I'm working on a project that requires video frame PTS to be consistent between the original video and a transcoded one. It's working fairly well with regular mp4; however, if I set preferredOutputSegmentInterval to generate fMP4 output, even though I specified the initialSegmentStartTime as 0, it always adds a one-frame PTS offset to all frames. For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and run

ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8

to display the PTS of the output, it doesn't start from 0, but has a one-frame PTS offset. I also tried opening it with MP4Box; it also shows that the first frame's dts and cts do not start from 0. However, if I use AVAssetReader to read the same output video and get the PTS of the 1st frame, it returns 0, so I can't use it to calculate the PTS difference between the 2 videos either. Can I get some help to understand why there is a difference between AVAssetWriter/Reader fMP4 PTS and tools like ffprobe?
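Not from the original post, but a minimal sketch of the AVAssetReader-side check described above (reading the first video sample's presentation timestamp), useful for comparing against what ffprobe reports; the path and error handling are illustrative:

import AVFoundation

func firstVideoFramePTS(of url: URL) async throws -> CMTime? {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return nil }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    guard reader.startReading() else { return nil }

    // The first sample buffer carries the earliest presentation timestamp.
    guard let sampleBuffer = output.copyNextSampleBuffer() else { return nil }
    return CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
}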
Replies: 0 · Boosts: 0 · Views: 585 · Latest activity: Dec ’24
Custom Share Destination stopped working in FCP X 11
We integrate with FCP X using a custom share destination and the Apple Script interface. This has been working fine until the recent version 11 update of FCP X. With this update we are no longer receiving the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response. There is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what or how to troubleshoot. This works fine in 10.8.1 and below.
Replies: 0 · Boosts: 0 · Views: 370 · Latest activity: Dec ’24
Capacitor app on iOS inline video stops
Hello, I’m experiencing an issue with video playback in my Javascript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and web browsers (including Safari) but stops unexpectedly after a few iterations on iOS native App. <video src={videoPath} autoplay muted loop playsinline class="h-auto w-full max-w-full object-cover"></video> Has anyone encountered a similar issue or have insights into what might be causing this behavior on iOS? Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power saving policy? Thank you in advance for your help!
Replies: 0 · Boosts: 0 · Views: 489 · Latest activity: Jan ’25
AVAssetExportSession in iOS18- Thread 11: "*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once."
I’m experiencing a crash at runtime when trying to extract audio from a video. This issue occurs on both iOS 18 and earlier versions. The crash is caused by the following error:

*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once.'
*** First throw call stack:
(0x1875475ec 0x184ae1244 0x1994c49c0 0x217193358 0x217199899 0x192e208b9 0x217192fd9 0x30204c88d 0x3019e5155 0x301e5fb41 0x301af7add 0x301aff97d 0x301af888d 0x301aff27d 0x301ab5fa5 0x301ab6101 0x192e5ee39)
libc++abi: terminating due to uncaught exception of type NSException

My previous code worked fine, but it's crashing with Swift 6. Does anyone know a solution for this?

Previous code:

func extractAudioFromVideo(from videoURL: URL,
                           exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil,
                           completion: @escaping (Swift.Result<URL, Error>) -> Void) {
    let asset = AVAsset(url: videoURL)

    // Create an AVAssetExportSession to export the audio track
    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
        completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
        return
    }

    // Set the output file type and path
    guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
    let outputURL = VideoUtils.getTempAudioExportUrl(filename)
    VideoUtils.deleteFileIfExists(outputURL.path)

    exportSession.outputFileType = .m4a
    exportSession.outputURL = outputURL

    let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
    if let exportHandler = exportHandler {
        exportHandler(exportSession, audioExportProgressPublisher)
    }

    // Periodically check the progress of the export session
    let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        audioExportProgressPublisher.send(exportSession.progress)
    }

    // Export the audio track asynchronously
    exportSession.exportAsynchronously {
        switch exportSession.status {
        case .completed:
            completion(.success(outputURL))
        case .failed:
            completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
        case .cancelled:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
        default:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
        }
        // Invalidate the timer when the export session completes or is cancelled
        timer.invalidate()
    }
}

New code:

func extractAudioFromVideo(from videoURL: URL,
                           exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil,
                           completion: @escaping (Swift.Result<URL, Error>) -> Void) async {
    let asset = AVAsset(url: videoURL)

    // Create an AVAssetExportSession to export the audio track
    guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else {
        completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"])))
        return
    }

    // Set the output file type and path
    guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return }
    let outputURL = VideoUtils.getTempAudioExportUrl(filename)
    VideoUtils.deleteFileIfExists(outputURL.path)

    let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0)
    if let exportHandler {
        exportHandler(exportSession, audioExportProgressPublisher)
    }

    if #available(iOS 18.0, *) {
        do {
            try await exportSession.export(to: outputURL, as: .m4a)
            let states = exportSession.states(updateInterval: 0.1)
            for await state in states {
                switch state {
                case .pending, .waiting:
                    break
                case .exporting(progress: let progress):
                    print("Exporting: \(progress.fractionCompleted)")
                    if progress.isFinished {
                        completion(.success(outputURL))
                    } else if progress.isCancelled {
                        completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
                    } else {
                        audioExportProgressPublisher.send(Float(progress.fractionCompleted))
                    }
                }
            }
        } catch let error {
            print(error.localizedDescription)
        }
    } else {
        // Periodically check the progress of the export session
        let publishTimer = Timer.publish(every: 0.1, on: .main, in: .common)
            .autoconnect()
            .sink { [weak exportSession] _ in
                guard let exportSession else { return }
                audioExportProgressPublisher.send(exportSession.progress)
            }

        exportSession.outputFileType = .m4a
        exportSession.outputURL = outputURL
        await exportSession.export()

        switch exportSession.status {
        case .completed:
            completion(.success(outputURL))
        case .failed:
            completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"])))
        case .cancelled:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"])))
        default:
            completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"])))
        }
        // Invalidate the timer when the export session completes or is cancelled
        publishTimer.cancel()
    }
}
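Not from the original post, and only a hedged suggestion: on iOS 18 the states(updateInterval:) stream can be observed in a separate task started before awaiting export(to:as:), so progress is reported while the export runs and export is only invoked once per session. A rough sketch, assuming the same outputURL as above:

import AVFoundation

@available(iOS 18.0, *)
func exportAudio(with exportSession: AVAssetExportSession, to outputURL: URL) async throws {
    // Observe progress concurrently with the export itself.
    let monitor = Task {
        for await state in exportSession.states(updateInterval: 0.1) {
            if case .exporting(let progress) = state {
                print("Exporting: \(progress.fractionCompleted)")
            }
        }
    }
    defer { monitor.cancel() }

    // export(to:as:) performs the whole export; call it exactly once per session.
    try await exportSession.export(to: outputURL, as: .m4a)
}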
Replies: 1 · Boosts: 0 · Views: 689 · Latest activity: Jan ’25
Non-sendable type AVMediaSelectionGroup
Hi all, we are trying to migrate a project to Swift 6. The project uses AVPlayer on the MainActor, and audio and subtitle selection does not work.

Task { @MainActor in
    let group = try await item.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible)

gives the error: Non-sendable type 'AVMediaSelectionGroup?' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary

And a second example:

if #available(iOS 15.0, *) {
    player?.currentItem?.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible, completionHandler: { group, error in
        if error != nil {
            return
        }
        if let groupWrp = group {
            DispatchQueue.main.async {
                self.setupAudio(groupWrp, audio: audioLang)
            }
        }
    })
}

gives the error: Sending 'groupWrp' risks causing data races
Replies: 1 · Boosts: 0 · Views: 495 · Latest activity: Feb ’25
4k 120fps Showing Black Screen on iPhone 16
Hey - I am developing an app that uses the camera for recording video. I put in the ability to choose a framerate and resolution, and all combinations work perfectly fine, except for 4K 120fps on the new iPhone 16 Pro. This just shows black on the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120fps to work? I have my camera setup code attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240fps and 4K up to 60fps)? Thanks so much for the help.

class CameraManager: NSObject {
    enum Errors: Error {
        case noCaptureDevice
        case couldNotAddInput
        case unsupportedConfiguration
    }

    enum Resolution {
        case hd1080p
        case uhd4K

        var preset: AVCaptureSession.Preset {
            switch self {
            case .hd1080p:
                return .hd1920x1080
            case .uhd4K:
                return .hd4K3840x2160
            }
        }

        var dimensions: CMVideoDimensions {
            switch self {
            case .hd1080p:
                return CMVideoDimensions(width: 1920, height: 1080)
            case .uhd4K:
                return CMVideoDimensions(width: 3840, height: 2160)
            }
        }
    }

    enum CameraType {
        case wide
        case ultraWide

        var captureDeviceType: AVCaptureDevice.DeviceType {
            switch self {
            case .wide:
                return .builtInWideAngleCamera
            case .ultraWide:
                return .builtInUltraWideCamera
            }
        }
    }

    enum FrameRate: Int {
        case fps60 = 60
        case fps120 = 120
        case fps240 = 240
    }

    let orientationManager = OrientationManager()
    let captureSession: AVCaptureSession
    let previewLayer: AVCaptureVideoPreviewLayer
    let movieFileOutput = AVCaptureMovieFileOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()
    private var videoCaptureDevice: AVCaptureDevice?

    override init() {
        self.captureSession = AVCaptureSession()
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        super.init()
        self.previewLayer.videoGravity = .resizeAspect
    }

    func configureSession(resolution: Resolution,
                          frameRate: FrameRate,
                          stabilizationEnabled: Bool,
                          cameraType: CameraType,
                          sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
        assert(Thread.isMainThread)

        captureSession.beginConfiguration()
        defer {
            captureSession.commitConfiguration()
        }

        captureSession.sessionPreset = resolution.preset

        if captureSession.canAddOutput(movieFileOutput) {
            captureSession.addOutput(movieFileOutput)
        } else {
            throw Errors.couldNotAddInput
        }

        videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            // Set the video orientation if needed
            if let connection = videoDataOutput.connection(with: .video) {
                //connection.videoOrientation = .portrait
            }
        } else {
            throw Errors.couldNotAddInput
        }

        guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
            throw Errors.noCaptureDevice
        }

        let useDimensions = resolution.dimensions
        guard let format = videoCaptureDevice.formats.first(where: { format in
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
            let frameRates = format.videoSupportedFrameRateRanges
            return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
        }) else {
            throw Errors.unsupportedConfiguration
        }

        self.videoCaptureDevice = videoCaptureDevice

        do {
            let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            } else {
                throw Errors.couldNotAddInput
            }

            try videoCaptureDevice.lockForConfiguration()
            videoCaptureDevice.activeFormat = format
            videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
            videoCaptureDevice.exposureMode = .locked
            videoCaptureDevice.unlockForConfiguration()
        } catch {
            throw error
        }

        configureStabilization(enabled: stabilizationEnabled)
    }
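Not from the original post, but one thing that may be worth double-checking: when a device's activeFormat is set directly, the session preset is usually switched to .inputPriority so the preset does not override the chosen format, and high-frame-rate 4K formats are typically only reachable through activeFormat rather than a fixed preset. A minimal sketch of that variant, assuming the same format search as above:

import AVFoundation

func apply(format: AVCaptureDevice.Format,
           frameRate: Int,
           to device: AVCaptureDevice,
           in session: AVCaptureSession) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    // Let the device's activeFormat drive the session instead of a fixed preset.
    session.sessionPreset = .inputPriority

    try device.lockForConfiguration()
    device.activeFormat = format
    let duration = CMTime(value: 1, timescale: CMTimeScale(frameRate))
    device.activeVideoMinFrameDuration = duration
    device.activeVideoMaxFrameDuration = duration
    device.unlockForConfiguration()
}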
Replies: 0 · Boosts: 0 · Views: 450 · Latest activity: Jan ’25
Transparent overlay changes color in HDR video
Overlay changes color in HDR video. When I'm trying to add an overlay to an image with AVMutableVideoComposition, and the video is in HDR, the overlay colors change and white becomes grey. (The original post attaches screenshots: the original HDR video, the result with the wrong overlay color, the result when reducing to SDR with the right overlay color, the distorted colors, and the way it should look in SDR.) I'm creating the overlay with a CGContext.

class CustomHdrCompositor: NSObject, AVVideoCompositing {
    private let coreImageContext = CIContext(options: [CIContextOption.cacheIntermediates: false])
    let combinedFilter = CIFilter(name: "CISourceOverCompositing")!

    var sourcePixelBufferAttributes: [String: Any]? = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var supportsWideColorSourceFrames = true
    var supportsHDRSourceFrames = true

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        return
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = request.renderContext.newPixelBuffer() else {
            print("No valid pixel buffer found. Returning.")
            request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage)
            return
        }

        guard let requiredTrackIDs = request.videoCompositionInstruction.requiredSourceTrackIDs, !requiredTrackIDs.isEmpty else {
            print("No valid track IDs found in composition instruction.")
            return
        }

        let sourceCount = requiredTrackIDs.count
        if sourceCount > 1 {
            request.finish(with: CustomCompositorError.notSupportingMoreThanOneSources)
            return
        }

        if sourceCount == 1 {
            let sourceID = requiredTrackIDs[0]
            let sourceBuffer = request.sourceFrame(byTrackID: sourceID.value(of: Int32.self)!)!
            let sourceCIImage = CIImage(cvPixelBuffer: sourceBuffer)
            var textImage = TextLayerPlayer.instance.getTextLayerAtTimesStamp(ts: request.compositionTime.seconds)
            combinedFilter.setValue(textImage, forKey: "inputImage")
            if let outputImage = combinedFilter.outputImage {
                let renderDestination = CIRenderDestination(pixelBuffer: outputPixelBuffer)
                do {
                    try coreImageContext.startTask(toRender: outputImage, to: renderDestination)
                } catch {
                }
            }
        }
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }
}

func regularCompositionHdr(asset: AVAsset) -> AVVideoComposition {
    self.isHdr = checkHdr(asset: asset)
    let avComposition = AVMutableComposition()
    let composition = AVMutableVideoComposition()
    composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_2020
    composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_2100_HLG
    composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_2020
    composition.renderSize = assetSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.customVideoCompositorClass = CustomHdrCompositor.self
    composition.perFrameHDRDisplayMetadataPolicy = .propagate
    return composition
}

I'm using this function to transfer the transparent CGImage to a CIImage that supports HDR:

func convertToHDRCIImage(from cgImage: CGImage, maxBrightness: CGFloat = 3.0) -> CIImage? {
    // Create a CIImage from the input CGImage
    let baseImage = CIImage(cgImage: cgImage)

    // Create HDR color adjustment filter
    let colorAdjust = CIFilter(name: "CIColorMatrix")!
    colorAdjust.setValue(baseImage, forKey: kCIInputImageKey)

    // Calculate HDR multipliers based on maxBrightness
    // This will maintain color ratios while increasing brightness
    colorAdjust.setValue(CIVector(x: maxBrightness, y: 0, z: 0, w: 0), forKey: "inputRVector")
    colorAdjust.setValue(CIVector(x: 0, y: maxBrightness, z: 0, w: 0), forKey: "inputGVector")
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: maxBrightness, w: 0), forKey: "inputBVector")
    // Maintain alpha channel
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector")

    guard let adjustedImage = colorAdjust.outputImage else {
        return nil
    }

    // Apply color space transformation using CIImage's colorSpace property
    let transformedImage = adjustedImage.matchedFromWorkingSpace(to: hdrWorkingSpace)!

    // Create context with HDR color space
    let context = CIContext(options: [
        .workingColorSpace: hdrColorSpace,
        .outputColorSpace: hdrColorSpace
    ])

    // Get the image bounds
    let bounds = transformedImage.extent

    // Create a new pixel buffer with HDR format
    var pixelBuffer: CVPixelBuffer?
    let pixelBufferAttributes = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_64RGBAHalf,
        kCVPixelBufferMetalCompatibilityKey: true
    ] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(bounds.width),
                        Int(bounds.height),
                        kCVPixelFormatType_64RGBAHalf,
                        pixelBufferAttributes,
                        &pixelBuffer)
    guard let destinationBuffer = pixelBuffer else {
        return nil
    }

    context.render(transformedImage,
                   to: destinationBuffer,
                   bounds: bounds,
                   colorSpace: hdrColorSpace)

    // Create final CIImage from the HDR pixel buffer
    let finalImage = CIImage(cvPixelBuffer: destinationBuffer, options: [.colorSpace: hdrColorSpace])
    return finalImage
}

When reducing the HDR to SDR it keeps the right color of the overlay, but then it reduces the HDR effect, which I want to keep.
Replies: 0 · Boosts: 0 · Views: 378 · Latest activity: Jan ’25