Dive into the world of video on Apple platforms and explore ways to integrate video functionality into your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

How to save 4K60 ProRes Log video internally on iPhone?
Hello Apple Engineers, Specific Issue: I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format using the iPhone's internal storage. Here's what I have tried so far: I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and ProRes codec. The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log. However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device." This error seems to imply that 4K60 ProRes recording is restricted to external storage devices. But I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which has native support for ProRes encoding. Here are my questions: Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example: iPhone 15 Pro Max)? There are some 3rd apps (i.e. Blackmagic 👍🏻) that can save 4K60 ProRes Log video on iPhone internally. If internal saving is supported, what additional configuration is needed for the AVCaptureSession or other technique to bypass this limitation? If anyone has successfully saved 4K60 ProRes Log video on iPhone internal storage, your guidance would be highly appreciated. Thank you for your help!
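For reference, a minimal sketch of how Apple Log plus ProRes capture is commonly configured (the format query, color-space setting, and codec request below are assumptions based on this post, and it does not claim to lift the external-storage requirement for 4K60 ProRes):

```swift
import AVFoundation

// Sketch: pick a 4K/60 format that advertises Apple Log, switch the device to
// that color space, and ask AVCaptureMovieFileOutput for a ProRes codec.
// This does not claim to bypass the external-storage requirement for 4K60 ProRes.
@available(iOS 17.0, *)
func configureProResLogCapture(session: AVCaptureSession,
                               device: AVCaptureDevice,
                               movieOutput: AVCaptureMovieFileOutput) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    session.sessionPreset = .inputPriority
    // Keep the session from overriding our manual color-space choice.
    session.automaticallyConfiguresCaptureDeviceForWideColor = false

    guard let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        let supports60fps = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        return dims.width == 3840 && dims.height == 2160
            && supports60fps
            && format.supportedColorSpaces.contains(.appleLog)
    }) else {
        throw NSError(domain: "ProResLogSketch", code: -1)
    }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeColorSpace = .appleLog
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
    device.unlockForConfiguration()

    // Request ProRes for the movie output if this device offers it.
    if let connection = movieOutput.connection(with: .video),
       movieOutput.availableVideoCodecTypes.contains(.proRes422) {
        movieOutput.setOutputSettings([AVVideoCodecKey: AVVideoCodecType.proRes422],
                                      for: connection)
    }
}
```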
0 replies · 0 boosts · 628 views · Nov ’24
AVPlayer fails to play a local video
When I play a local video (downloaded to the app sandbox), KVO reports the AVPlayerItem status as AVPlayerItemStatusFailed with this error: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (24), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x3004137e0 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}. Why?
2 replies · 0 boosts · 333 views · Dec ’24
iPhone Video Upload Error - reason unknown
Hi everyone, I'm developing a customization tool in which our customers can upload an MP3 or MP4 file that will be scannable through our AR application. On desktop and Android this works perfectly. For some reason, however, on iPhone we're unable to load most of the video files. I've checked the clips, and they are .mov/H.264 files, which are supported on iPhone. We're currently not sure how to fix this issue so that customers who own an iPhone can upload clips to our website. Any tips in the right direction are more than welcome. Thanks in advance!
1 reply · 0 boosts · 489 views · Dec ’24
How to implement Picture-in-Picture (PiP) in Flutter for iOS using LiveKit without a video URL?
I am building a video conferencing app using LiveKit in Flutter and want to implement Picture-in-Picture (PiP) mode on iOS. My goal is to display a view showing the speaker's initials or avatar during PiP mode. I successfully implemented this functionality on Android but am struggling to achieve it on iOS. I am using a MethodChannel to communicate with the native iOS code. Here's the Flutter-side code: import 'package:flutter/foundation.dart'; import 'package:flutter/services.dart'; class PipController { static const _channel = MethodChannel('pip_channel'); static Future<void> startPiP() async { try { await _channel.invokeMethod('enterPiP'); } catch (e) { if (kDebugMode) { print("Error starting PiP: $e"); } } } static Future<void> stopPiP() async { try { await _channel.invokeMethod('exitPiP'); } catch (e) { if (kDebugMode) { print("Error stopping PiP: $e"); } } } } On the iOS side, I am using AVPictureInPictureController. Since it requires an AVPlayerLayer, I had to include a dummy video URL to initialize the AVPlayer. However, this results in the dummy video’s audio playing in the background, but no view is displayed in PiP mode. Here’s my iOS code: import Flutter import UIKit import AVKit @main @objc class AppDelegate: FlutterAppDelegate { var pipController: AVPictureInPictureController? var playerLayer: AVPlayerLayer? override func application( _ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]? ) -> Bool { let controller: FlutterViewController = window?.rootViewController as! FlutterViewController let pipChannel = FlutterMethodChannel(name: "pip_channel", binaryMessenger: controller.binaryMessenger) pipChannel.setMethodCallHandler { [weak self] (call: FlutterMethodCall, result: @escaping FlutterResult) in if call.method == "enterPiP" { self?.startPictureInPicture(result: result) } else if call.method == "exitPiP" { self?.stopPictureInPicture(result: result) } else { result(FlutterMethodNotImplemented) } } GeneratedPluginRegistrant.register(with: self) return super.application(application, didFinishLaunchingWithOptions: launchOptions) } private func startPictureInPicture(result: @escaping FlutterResult) { guard AVPictureInPictureController.isPictureInPictureSupported() else { result(FlutterError(code: "UNSUPPORTED", message: "PiP is not supported on this device.", details: nil)) return } // Set up the AVPlayer let player = AVPlayer(url: URL(string: "http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4")!) 
let playerLayer = AVPlayerLayer(player: player) self.playerLayer = playerLayer // Create a dummy view let dummyView = UIView(frame: CGRect(x: 0, y: 0, width: 1, height: 1)) dummyView.isHidden = true window?.rootViewController?.view.addSubview(dummyView) dummyView.layer.addSublayer(playerLayer) playerLayer.frame = dummyView.bounds // Initialize PiP Controller pipController = AVPictureInPictureController(playerLayer: playerLayer) pipController?.delegate = self // Start playback and PiP player.play() pipController?.startPictureInPicture() print("Picture-in-Picture started") result(nil) } private func stopPictureInPicture(result: @escaping FlutterResult) { guard let pipController = pipController, pipController.isPictureInPictureActive else { result(FlutterError(code: "NOT_ACTIVE", message: "PiP is not currently active.", details: nil)) return } pipController.stopPictureInPicture() playerLayer = nil self.pipController = nil result(nil) } } extension AppDelegate: AVPictureInPictureControllerDelegate { func pictureInPictureControllerDidStartPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) { print("PiP started") } func pictureInPictureControllerDidStopPictureInPicture(_ pictureInPictureController: AVPictureInPictureController) { print("PiP stopped") } } Questions: How can I implement PiP mode on iOS without using a video URL (or AVPlayerLayer)? Is there a way to display a custom UIView (like a speaker’s initials or an avatar) in PiP mode instead of requiring a video? Why does PiP not display any view, even though the dummy video URL is playing in the background? I am new to iOS development and would greatly appreciate any guidance or alternative approaches to achieve this functionality. Thank you!
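For reference, iOS 15 added PiP content sources that don't require an AVPlayerLayer; here is a hedged sketch using AVSampleBufferDisplayLayer, where you would enqueue your own frames (for example, a rendered avatar). Frame enqueueing and layer layout are omitted, and the delegate implementations are placeholders:

```swift
import AVKit
import AVFoundation

// Sketch: Picture in Picture driven by an AVSampleBufferDisplayLayer rather
// than an AVPlayerLayer. You enqueue your own CMSampleBuffers (for example,
// rendered avatar frames) onto `displayLayer`; that part is omitted here.
@available(iOS 15.0, *)
final class SampleBufferPiP: NSObject, AVPictureInPictureSampleBufferPlaybackDelegate {

    let displayLayer = AVSampleBufferDisplayLayer()
    private var pipController: AVPictureInPictureController?

    func start() {
        guard AVPictureInPictureController.isPictureInPictureSupported() else { return }
        let source = AVPictureInPictureController.ContentSource(
            sampleBufferDisplayLayer: displayLayer,
            playbackDelegate: self
        )
        pipController = AVPictureInPictureController(contentSource: source)
        pipController?.startPictureInPicture()
    }

    // MARK: AVPictureInPictureSampleBufferPlaybackDelegate (placeholder implementations)
    func pictureInPictureController(_ controller: AVPictureInPictureController, setPlaying playing: Bool) {}
    func pictureInPictureControllerTimeRangeForPlayback(_ controller: AVPictureInPictureController) -> CMTimeRange {
        // An open-ended range is typical for live content.
        CMTimeRange(start: .negativeInfinity, duration: .positiveInfinity)
    }
    func pictureInPictureControllerIsPlaybackPaused(_ controller: AVPictureInPictureController) -> Bool { false }
    func pictureInPictureController(_ controller: AVPictureInPictureController,
                                    didTransitionToRenderSize newRenderSize: CMVideoDimensions) {}
    func pictureInPictureController(_ controller: AVPictureInPictureController,
                                    skipByInterval skipInterval: CMTime) async {}
}
```

The display layer still has to be attached to an on-screen view and fed frames before PiP will show anything.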
1 reply · 0 boosts · 701 views · Dec ’24
Does tvOS Support the Multiview Feature?
I've seen the Multiview feature on tvOS that displays a small grid icon when available. However, I only see this functionality in visionOS using AVMultiviewManager. Is this feature available under a different name on tvOS? Relevant links: https://www.reddit.com/r/appletv/comments/12opy5f/handson_with_the_new_multiview_split_screen/ https://www.pocket-lint.com/how-to-use-multiview-apple-tv/#:~:text=You'll%20see%20a%20grid,running%20at%20the%20same%20time.
1 reply · 0 boosts · 544 views · Dec ’24
Debug MediaExtension plugin in a system executable?
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation. My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e., the process hosting my plugin instance that manages the MESampleBuffer protocol calls, for example). Is there a way to configure Xcode to attach to this background process for interactive debugging? Right now I have to use Console + print, which is not fun or productive. Does Apple have a working example of a MediaExtension anywhere? This is an exciting API that is very under-documented. I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused. Any assistance is highly appreciated!
0 replies · 0 boosts · 494 views · Dec ’24
Why does AVAssetWriter add PTS drift when writing fMP4?
Hi, I'm working on a project that requires video frame PTS to be consistent between the original video and a transcoded one. This works fairly well for regular MP4 output; however, if I set preferredOutputSegmentInterval to generate fMP4 output, even though I specified initialSegmentStartTime as 0, it always adds a one-frame PTS offset to all frames. For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and run ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8 to display the PTS of the output, it doesn't start from 0 but has a one-frame PTS offset. I also tried opening it with MP4Box, which likewise shows that the first frame's DTS and CTS do not start from 0. However, if I use AVAssetReader to read the same output video and get the PTS of the first frame, it returns 0, so I can't use that to calculate the PTS difference between the two videos either. Can I get some help understanding why there is a difference between AVAssetWriter/Reader's fMP4 PTS and tools like ffprobe?
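For comparison purposes, a small sketch (file URL and frame count are placeholders) of dumping the first few presentation timestamps of the written file with AVAssetReader; note the reader returns samples in decode order:

```swift
import AVFoundation

// Sketch: print the first few video presentation timestamps of a written file
// so they can be compared with ffprobe/MP4Box output. Note that the reader
// returns samples in decode order, not presentation order.
func printFirstVideoTimestamps(of url: URL, count: Int = 5) async throws {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }

    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    guard reader.startReading() else { return }

    var index = 0
    while index < count, let sample = output.copyNextSampleBuffer() {
        let pts = CMSampleBufferGetPresentationTimeStamp(sample)
        print("sample \(index): pts = \(CMTimeGetSeconds(pts))")
        index += 1
    }
    reader.cancelReading()
}
```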
0 replies · 0 boosts · 567 views · Dec ’24
Custom Share Destination stopped working in FCP X 11
We integrate with FCP X using a custom share destination and the AppleScript interface. This had been working fine until the recent version 11 update of FCP X. With this update, we are no longer receiving the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response; there is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what, or how to troubleshoot it. This works fine in 10.8.1 and below.
0 replies · 0 boosts · 355 views · Dec ’24
Inline video stops playing in Capacitor app on iOS
Hello, I'm experiencing an issue with video playback in my JavaScript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and in web browsers (including Safari), but stops unexpectedly after a few iterations in the native iOS app. <video src={videoPath} autoplay muted loop playsinline class="h-auto w-full max-w-full object-cover"></video> Has anyone encountered a similar issue or have insights into what might be causing this behavior on iOS? Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power-saving policy? Thank you in advance for your help!
0 replies · 0 boosts · 444 views · Jan ’25
AVAssetExportSession in iOS 18 - Thread 11: "*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once."
I’m experiencing a crash at runtime when trying to extract audio from a video. This issue occurs on both iOS 18 and earlier versions. The crash is caused by the following error: *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '*** -[AVAssetExportSession exportAsynchronouslyWithCompletionHandler:] Cannot call exportAsynchronouslyWithCompletionHandler: more than once.' *** First throw call stack: (0x1875475ec 0x184ae1244 0x1994c49c0 0x217193358 0x217199899 0x192e208b9 0x217192fd9 0x30204c88d 0x3019e5155 0x301e5fb41 0x301af7add 0x301aff97d 0x301af888d 0x301aff27d 0x301ab5fa5 0x301ab6101 0x192e5ee39) libc++abi: terminating due to uncaught exception of type NSException My previous code worked fine, but it's crashing with Swift 6. Does anyone know a solution for this? ## **Previous code:** func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? = nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) { let asset = AVAsset(url: videoURL) // Create an AVAssetExportSession to export the audio track guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else { completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"]))) return } // Set the output file type and path guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return } let outputURL = VideoUtils.getTempAudioExportUrl(filename) VideoUtils.deleteFileIfExists(outputURL.path) exportSession.outputFileType = .m4a exportSession.outputURL = outputURL let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0) if let exportHandler = exportHandler { exportHandler(exportSession, audioExportProgressPublisher) } // Periodically check the progress of the export session let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in audioExportProgressPublisher.send(exportSession.progress) } // Export the audio track asynchronously exportSession.exportAsynchronously { switch exportSession.status { case .completed: completion(.success(outputURL)) case .failed: completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"]))) case .cancelled: completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"]))) default: completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"]))) } // Invalidate the timer when the export session completes or is cancelled timer.invalidate() } } ## New Code: func extractAudioFromVideo(from videoURL: URL, exportHandler: ((AVAssetExportSession, CurrentValueSubject<Float, Never>?) -> Void)? 
= nil, completion: @escaping (Swift.Result<URL, Error>) -> Void) async { let asset = AVAsset(url: videoURL) // Create an AVAssetExportSession to export the audio track guard let exportSession = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetAppleM4A) else { completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Failed to create AVAssetExportSession"]))) return } // Set the output file type and path guard let filename = videoURL.lastPathComponent.components(separatedBy: ["."]).first else { return } let outputURL = VideoUtils.getTempAudioExportUrl(filename) VideoUtils.deleteFileIfExists(outputURL.path) let audioExportProgressPublisher = CurrentValueSubject<Float, Never>(0.0) if let exportHandler { exportHandler(exportSession, audioExportProgressPublisher) } if #available(iOS 18.0, *) { do { try await exportSession.export(to: outputURL, as: .m4a) let states = exportSession.states(updateInterval: 0.1) for await state in states { switch state { case .pending, .waiting: break case .exporting(progress: let progress): print("Exporting: \(progress.fractionCompleted)") if progress.isFinished { completion(.success(outputURL)) }else if progress.isCancelled { completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"]))) }else { audioExportProgressPublisher.send(Float(progress.fractionCompleted)) } } } }catch let error { print(error.localizedDescription) } }else { // Periodically check the progress of the export session let publishTimer = Timer.publish(every: 0.1, on: .main, in: .common) .autoconnect() .sink { [weak exportSession] _ in guard let exportSession else { return } audioExportProgressPublisher.send(exportSession.progress) } exportSession.outputFileType = .m4a exportSession.outputURL = outputURL await exportSession.export() switch exportSession.status { case .completed: completion(.success(outputURL)) case .failed: completion(.failure(exportSession.error ?? NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown error occurred while exporting audio"]))) case .cancelled: completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Export session was cancelled"]))) default: completion(.failure(NSError(domain: "com.example.app", code: -1, userInfo: [NSLocalizedDescriptionKey: "Unknown export session status"]))) } // Invalidate the timer when the export session completes or is cancelled publishTimer.cancel() } }
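For reference, a hedged sketch of one way the iOS 18 API is often structured, with the states loop running in a separate task so that export(to:as:) is awaited exactly once (names are placeholders, not a confirmed fix for the crash above):

```swift
import AVFoundation

// Sketch: await export(to:as:) exactly once and watch states(updateInterval:)
// from a separate task, so the session is never asked to export twice.
@available(iOS 18.0, *)
func exportAudio(from asset: AVAsset, to outputURL: URL) async throws {
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetAppleM4A) else {
        throw NSError(domain: "ExportSketch", code: -1)
    }

    // Progress monitoring runs concurrently with the single export call.
    let monitor = Task {
        for await state in session.states(updateInterval: 0.1) {
            if case .exporting(let progress) = state {
                print("progress: \(progress.fractionCompleted)")
            }
        }
    }
    defer { monitor.cancel() }

    try await session.export(to: outputURL, as: .m4a)
}
```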
1 reply · 0 boosts · 654 views · Jan ’25
Non-sendable type AVMediaSelectionGroup
Hi all, we are trying to migrate our project to Swift 6. The project uses AVPlayer on the MainActor, and selecting audio and subtitles no longer works. Task { @MainActor in let group = try await item.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible) } gives the error: Non-sendable type 'AVMediaSelectionGroup?' returned by implicitly asynchronous call to nonisolated function cannot cross actor boundary. A second example, `if #available(iOS 15.0, *) { player?.currentItem?.asset.loadMediaSelectionGroup(for: AVMediaCharacteristic.audible, completionHandler: { group, error in if error != nil { return } if let groupWrp = group { DispatchQueue.main.async { self.setupAudio(groupWrp, audio: audioLang) } } }) }`, gives the error: Sending 'groupWrp' risks causing data races.
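For reference, a hedged sketch of one common workaround: load and consume the non-Sendable group in a nonisolated async function and return only Sendable values (the function below and its return type are illustrative assumptions):

```swift
import AVFoundation

// Sketch: load and use the non-Sendable AVMediaSelectionGroup inside one
// nonisolated async function and hand back only Sendable data ([String]).
func audibleOptionNames(for asset: AVAsset) async throws -> [String] {
    guard let group = try await asset.loadMediaSelectionGroup(for: .audible) else {
        return []
    }
    // The group never crosses an actor boundary; only the names do.
    return group.options.map { $0.displayName }
}
```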
1 reply · 0 boosts · 478 views · Feb ’25
4K 120fps Showing Black Screen on iPhone 16
Hey - I am developing an app that uses the camera for recording video. I put the ability to choose a framerate and resolution and all combinations work perfectly fine, except for 4k 120fps for the new iPhone 16 pro. This just shows black on the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4k 120fps to work? I have my camera setup code attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240fps and 4k up to 60fps)? Thanks so much for the help. class CameraManager: NSObject { enum Errors: Error { case noCaptureDevice case couldNotAddInput case unsupportedConfiguration } enum Resolution { case hd1080p case uhd4K var preset: AVCaptureSession.Preset { switch self { case .hd1080p: return .hd1920x1080 case .uhd4K: return .hd4K3840x2160 } } var dimensions: CMVideoDimensions { switch self { case .hd1080p: return CMVideoDimensions(width: 1920, height: 1080) case .uhd4K: return CMVideoDimensions(width: 3840, height: 2160) } } } enum CameraType { case wide case ultraWide var captureDeviceType: AVCaptureDevice.DeviceType { switch self { case .wide: return .builtInWideAngleCamera case .ultraWide: return .builtInUltraWideCamera } } } enum FrameRate: Int { case fps60 = 60 case fps120 = 120 case fps240 = 240 } let orientationManager = OrientationManager() let captureSession: AVCaptureSession let previewLayer: AVCaptureVideoPreviewLayer let movieFileOutput = AVCaptureMovieFileOutput() let videoDataOutput = AVCaptureVideoDataOutput() private var videoCaptureDevice: AVCaptureDevice? override init() { self.captureSession = AVCaptureSession() self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession) super.init() self.previewLayer.videoGravity = .resizeAspect } func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) 
throws { assert(Thread.isMainThread) captureSession.beginConfiguration() defer { captureSession.commitConfiguration() } captureSession.sessionPreset = resolution.preset if captureSession.canAddOutput(movieFileOutput) { captureSession.addOutput(movieFileOutput) } else { throw Errors.couldNotAddInput } videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue")) if captureSession.canAddOutput(videoDataOutput) { captureSession.addOutput(videoDataOutput) // Set the video orientation if needed if let connection = videoDataOutput.connection(with: .video) { //connection.videoOrientation = .portrait } } else { throw Errors.couldNotAddInput } guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else { throw Errors.noCaptureDevice } let useDimensions = resolution.dimensions guard let format = videoCaptureDevice.formats.first(where: { format in let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription) let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height let frameRates = format.videoSupportedFrameRateRanges return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) }) }) else { throw Errors.unsupportedConfiguration } self.videoCaptureDevice = videoCaptureDevice do { let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice) if captureSession.canAddInput(videoInput) { captureSession.addInput(videoInput) } else { throw Errors.couldNotAddInput } try videoCaptureDevice.lockForConfiguration() videoCaptureDevice.activeFormat = format videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue)) videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue)) videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000) videoCaptureDevice.exposureMode = .locked videoCaptureDevice.unlockForConfiguration() } catch { throw error } configureStabilization(enabled: stabilizationEnabled) }`
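One hedged guess, not a confirmed fix: a 4K session preset and a manually selected 120 fps activeFormat can conflict; when choosing the format yourself, .inputPriority usually drives the session, e.g.:

```swift
import AVFoundation

// Sketch: when the 4K/120fps format is chosen manually, let the device format
// drive the session via .inputPriority instead of a resolution preset.
func applyManualFormat(_ format: AVCaptureDevice.Format,
                       to device: AVCaptureDevice,
                       in session: AVCaptureSession) throws {
    session.beginConfiguration()
    defer { session.commitConfiguration() }

    session.sessionPreset = .inputPriority

    try device.lockForConfiguration()
    device.activeFormat = format
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 120)
    device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 120)
    // Continuous auto exposure avoids a locked, possibly far-too-dark exposure;
    // the 1/960 s max exposure in the post is another thing worth ruling out.
    device.exposureMode = .continuousAutoExposure
    device.unlockForConfiguration()
}
```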
0 replies · 0 boosts · 431 views · Jan ’25
Transparent overlay changes color in HDR video
Overlay changes color in HDR video When I’m using trying to add an overlay to an image with AVMutableVideoComposition, When the video is in HDR the overlay colors are changing and white becomes grey screen shot from original HDR video result from the code with the wrong overlay colorthe result when reducing to SDR (the right overlay color) the distorted colorsthe way it should look(sdr) Im creating the overlay with a CGContext class CustomHdrCompositor: NSObject, AVVideoCompositing { private let coreImageContext = CIContext(options: [CIContextOption.cacheIntermediates: false]) let combinedFilter = CIFilter(name: "CISourceOverCompositing")! var sourcePixelBufferAttributes: [String: Any]? = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]] var requiredPixelBufferAttributesForRenderContext: [String: Any] = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]] var supportsWideColorSourceFrames = true var supportsHDRSourceFrames = true func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) { return } func startRequest(_ request: AVAsynchronousVideoCompositionRequest) { guard let outputPixelBuffer = request.renderContext.newPixelBuffer() else { print("No valid pixel buffer found. Returning.") request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage) return } guard let requiredTrackIDs = request.videoCompositionInstruction.requiredSourceTrackIDs, !requiredTrackIDs.isEmpty else { print("No valid track IDs found in composition instruction.") return } let sourceCount = requiredTrackIDs.count if sourceCount > 1 { request.finish(with: CustomCompositorError.notSupportingMoreThanOneSources) return } if sourceCount == 1 { let sourceID = requiredTrackIDs[0] let sourceBuffer = request.sourceFrame(byTrackID: sourceID.value(of: Int32.self)!)! let sourceCIImage = CIImage(cvPixelBuffer: sourceBuffer) var textImage = TextLayerPlayer.instance.getTextLayerAtTimesStamp(ts:request.compositionTime.seconds) combinedFilter.setValue(textImage, forKey: "inputImage") if let outputImage = combinedFilter.outputImage { let renderDestination = CIRenderDestination(pixelBuffer: outputPixelBuffer) do { try coreImageContext.startTask(toRender: outputImage, to: renderDestination) } catch { } } } request.finish(withComposedVideoFrame: outputPixelBuffer) } } func regularCompositionHdr(asset: AVAsset) -> AVVideoComposition { self.isHdr = checkHdr(asset: asset) let avComposition = AVMutableComposition() let composition = AVMutableVideoComposition() composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_2020 composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_2100_HLG composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_2020 composition.renderSize = assetSize composition.frameDuration = CMTime(value: 1, timescale: 30) composition.customVideoCompositorClass = CustomHdrCompositor.self composition.perFrameHDRDisplayMetadataPolicy = .propagate return composition } I’m using this function to transfer the transparent CGImage to CIImage that supports HDR func convertToHDRCIImage(from cgImage: CGImage, maxBrightness: CGFloat = 3.0) -> CIImage? { // Create a CIImage from the input CGImage let baseImage = CIImage(cgImage: cgImage) // Create HDR color adjustment filter let colorAdjust = CIFilter(name: "CIColorMatrix")! 
colorAdjust.setValue(baseImage, forKey: kCIInputImageKey) // Calculate HDR multipliers based on maxBrightness // This will maintain color ratios while increasing brightness colorAdjust.setValue(CIVector(x: maxBrightness, y: 0, z: 0, w: 0), forKey: "inputRVector") colorAdjust.setValue(CIVector(x: 0, y: maxBrightness, z: 0, w: 0), forKey: "inputGVector") colorAdjust.setValue(CIVector(x: 0, y: 0, z: maxBrightness, w: 0), forKey: "inputBVector") // Maintain alpha channel colorAdjust.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector") guard let adjustedImage = colorAdjust.outputImage else { return nil } // Apply color space transformation using CIImage's colorSpace property let transformedImage = adjustedImage.matchedFromWorkingSpace(to: hdrWorkingSpace)! // Create context with HDR color space let context = CIContext(options: [ .workingColorSpace: hdrColorSpace, .outputColorSpace: hdrColorSpace ]) // Get the image bounds let bounds = transformedImage.extent // Create a new pixel buffer with HDR format var pixelBuffer: CVPixelBuffer? let pixelBufferAttributes = [ kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_64RGBAHalf, kCVPixelBufferMetalCompatibilityKey: true ] as CFDictionary CVPixelBufferCreate(kCFAllocatorDefault, Int(bounds.width), Int(bounds.height), kCVPixelFormatType_64RGBAHalf, pixelBufferAttributes, &pixelBuffer) guard let destinationBuffer = pixelBuffer else { return nil } context.render(transformedImage, to: destinationBuffer, bounds: bounds, colorSpace: hdrColorSpace) // Create final CIImage from the HDR pixel buffer let finalImage = CIImage(cvPixelBuffer: destinationBuffer, options: [.colorSpace: hdrColorSpace]) return finalImage } When reducing the HDR to SDR it keeps the right color of the overlay with, but than it reduces the HDR effect which I want to keep
0 replies · 0 boosts · 333 views · Jan ’25
Audio stops during AVCaptureSession video recording when the audio session is initialized by a Push To Talk call
We have a Push To Talk application that allows the user to record video and audio. When the user is recording a video using AVCaptureSession and receives a Push To Talk call, the audio in the video being captured stops from the moment the call is received, while the video capture continues. After the PTT call is completed, we have tried restarting the audio session; no errors are printed, but the audio still does not resume in the video capture. We have also tried adding a new input to the AVCaptureSession, but we receive an error that causes the video capture to stop. The error is: [OS-PLT] [CameraManager] Movie file finished with error: Error Domain=AVFoundationErrorDomain Code=-11818 "Recording Stopped" UserInfo={AVErrorRecordingSuccessfullyFinishedKey=true, NSLocalizedDescription=Recording Stopped, NSLocalizedRecoverySuggestion=Stop any other actions using the recording device and try again., AVErrorRecordingFailureDomainKey=1, NSUnderlyingError=0x3026bff60 {Error Domain=NSOSStatusErrorDomain Code=-16414 "(null)"}}, success We have also raised a Feedback Assistant ticket: https://feedbackassistant.apple.com/feedback/16050598
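For reference, a sketch of the kind of audio-session reactivation typically attempted after the PTT call ends (category, mode, and options are assumptions; the capture session's audio input/connection may also need to be re-established):

```swift
import AVFoundation

// Sketch: reactivate an audio session suitable for movie capture after the
// PTT call has released the audio hardware. Category, mode, and options are
// assumptions, not a confirmed fix.
func reactivateCaptureAudioSession() throws {
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord,
                                 mode: .videoRecording,
                                 options: [.mixWithOthers, .allowBluetooth])
    try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
}
```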
1 reply · 0 boosts · 552 views · Feb ’25
Use Metal to convert HDR pixel buffer to SDR pixel buffer
I have seen demos that convert HDR video to SDR pixel buffers using AVAssetReader, AVVideoComposition, AVComposition, and other AVFoundation APIs. But in some cases I want to render the HDR pixel buffers directly and record video. AVCaptureSession *session = [[AVCaptureSession alloc] init]; session.sessionPreset = AVCaptureSessionPresetHigh; AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo]; if ([videoDevice isVideoHDRSupported]) { NSError *error = nil; if ([videoDevice lockForConfiguration:&error]) { videoDevice.automaticallyAdjustsVideoHDREnabled = NO; videoDevice.videoHDREnabled = YES; // enable HDR [videoDevice unlockForConfiguration]; } else { NSLog(@"Error: %@", error.localizedDescription); } } Real-time processing of HDR data requires processing the video frame data (for example, applying filters) while ensuring the processing chain supports 10-bit color depth and HDR metadata, and I also need the image buffers for object tracking, etc. How can I solve this problem?
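For reference, a hedged Core Image sketch (not Metal-specific) of rendering a 10-bit HDR pixel buffer into an 8-bit BGRA buffer; the tone mapping here is only an sRGB render and is an assumption, not a full HDR-to-SDR conversion:

```swift
import CoreImage
import CoreVideo

// Sketch: render a (possibly 10-bit HDR) pixel buffer into an 8-bit BGRA
// buffer via Core Image. Rendering into sRGB simply clips the HDR range;
// proper tone mapping and metadata handling are not covered here.
let ciContext = CIContext()

func makeSDRCopy(of hdrBuffer: CVPixelBuffer) -> CVPixelBuffer? {
    let image = CIImage(cvPixelBuffer: hdrBuffer)

    var sdrBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(hdrBuffer),
                        CVPixelBufferGetHeight(hdrBuffer),
                        kCVPixelFormatType_32BGRA,
                        nil,
                        &sdrBuffer)
    guard let output = sdrBuffer,
          let sRGB = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }

    ciContext.render(image, to: output, bounds: image.extent, colorSpace: sRGB)
    return output
}
```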
1 reply · 0 boosts · 430 views · Jan ’25
AVURLAsset with AVURLAssetHTTPCookiesKey - Cookies not persisting on retry requests
I'm experiencing an unexpected behavior with AVURLAsset and cookies. When setting cookies through AVURLAssetHTTPCookiesKey option, they seem to be sent only on the initial request but not on retry attempts. Here's my current implementation: let cookieProperties: [HTTPCookiePropertyKey: Any] = [ .name: "sessionCookie", .value: "testValue", .domain: url.host ?? "", .path: "/", .secure: true ] if let cookie = HTTPCookie(properties: cookieProperties) { let asset = AVURLAsset(url: url, options: [ AVURLAssetHTTPCookiesKey: [cookie], ]) } According to the documentation, AVURLAssetHTTPCookiesKey should apply the cookies to all requests made by this asset. However, when the initial request fails and AVPlayer retries, the cookies are not included in subsequent requests. Only when I store the cookie with HTTPCookieStorage.shared.setCookie, then it persists. Questions: Is this the expected behavior? If not, what could be causing the cookies to not persist for retry attempts? Is using HTTPCookieStorage.shared the recommended approach instead? Environment: iOS 16+ Using AVPlayer with AVURLAsset Streaming HLS content Any insights would be greatly appreciated.
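For reference, a minimal sketch of the HTTPCookieStorage route mentioned above as the approach that does persist across retries (names and values are placeholders):

```swift
import AVFoundation
import Foundation

// Sketch: register the cookie with the shared storage so retries and later
// HLS requests still carry it, in addition to passing it to the asset.
func makeAsset(url: URL, cookie: HTTPCookie) -> AVURLAsset {
    HTTPCookieStorage.shared.setCookie(cookie)
    return AVURLAsset(url: url, options: [
        AVURLAssetHTTPCookiesKey: [cookie]  // still applied to the initial requests
    ])
}
```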
0 replies · 0 boosts · 318 views · Feb ’25
Missing Depth Frames When Recording with AVCaptureVideoDataOutputSampleBufferDelegate/AVCaptureDataOutputSynchronizerDelegate and AVAssetWriter
I’ve tried both AVCaptureVideoDataOutputSampleBufferDelegate (captureOutput) and AVCaptureDataOutputSynchronizerDelegate (dataOutputSynchronizer), but the number of depth frames and saved timestamps is significantly lower than the number of frames in the .mp4 file written by AVAssetWriter. In my code, I save: Timestamps for each frame to a metadata file Depth frames to a binary file Video to an .mp4 file If I record a 4-second video at 30fps, the .mp4 file correctly plays for 4 seconds, but the number of stored timestamps and depth frames is much lower—around 70 frames instead of the expected 120. Does anyone know why this mismatch happens? func dataOutputSynchronizer(_ synchronizer: AVCaptureDataOutputSynchronizer, didOutput synchronizedDataCollection: AVCaptureSynchronizedDataCollection) { // Read all outputs guard let syncedDepthData: AVCaptureSynchronizedDepthData = synchronizedDataCollection.synchronizedData(for: depthDataOutput) as? AVCaptureSynchronizedDepthData, let syncedVideoData: AVCaptureSynchronizedSampleBufferData = synchronizedDataCollection.synchronizedData(for: videoDataOutput) as? AVCaptureSynchronizedSampleBufferData else { // only work on synced pairs return } if syncedDepthData.depthDataWasDropped || syncedVideoData.sampleBufferWasDropped { return } let depthData = syncedDepthData.depthData let depthPixelBuffer = depthData.depthDataMap let sampleBuffer = syncedVideoData.sampleBuffer guard let videoPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer), let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else { return } addToPreviewStream?(CIImage(cvPixelBuffer: videoPixelBuffer)) if !canWrite() { return } // Extract the presentation timestamp (PTS) from the sample buffer let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer) //sessionAtSourceTime is the first buffer we will write to the file if self.sessionAtSourceTime == nil { //Make sure we don't start recording until the buffer reaches the correct time (buffer is always behind, this will fix the difference in time) guard sampleBuffer.presentationTimeStamp >= self.recordFromTime! else { return } self.sessionAtSourceTime = sampleBuffer.presentationTimeStamp self.videoWriter!.startSession(atSourceTime: sampleBuffer.presentationTimeStamp) } if self.videoWriterInput!.isReadyForMoreMediaData { self.videoWriterInput!.append(sampleBuffer) self.videoTimestamps.append( Timestamp( frame: videoTimestamps.count, value: timestamp.value, timescale: timestamp.timescale ) ) let ddm = depthData.depthDataMap depthCapture.addDepthData(pixelBuffer: ddm, timestamp: timestamp) } }
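For context, depth delivery often runs at a lower rate than video, which by itself explains fewer depth frames; a hedged sketch of inspecting and pinning the depth frame rate (values are placeholders):

```swift
import AVFoundation

// Sketch: inspect what the active depth format actually supports and pin its
// minimum frame duration. Many devices deliver depth below the video rate,
// which by itself produces fewer depth frames than video frames.
func configureDepthRate(on device: AVCaptureDevice, targetFPS: Double = 30) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if let depthFormat = device.activeDepthDataFormat,
       let range = depthFormat.videoSupportedFrameRateRanges.first {
        print("depth supports \(range.minFrameRate)-\(range.maxFrameRate) fps")
        let fps = min(targetFPS, range.maxFrameRate)
        device.activeDepthDataMinFrameDuration = CMTime(value: 1, timescale: Int32(fps))
    }
}
```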
3 replies · 0 boosts · 425 views · Feb ’25