VideoToolbox


Work directly with hardware-accelerated video encoding and decoding capabilities using VideoToolbox.

Posts under VideoToolbox tag

33 Posts

VideoToolbox AV1 decoding on Apple platforms
As some have clocked, kCMVideoCodecType_AV1 was added in a recent SDK release. Does anyone know if and when AV1 decode support, even if software-only, is going to be available on Apple platforms? At the moment one must decode using dav1d (which is pretty performant, to be fair), but can we expect at least software AV1 support on existing hardware any time soon?
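For anyone wanting to probe this at runtime, here is a minimal sketch that checks whether the current device reports hardware AV1 decode; it says nothing about a possible future software decoder, and the device note in the comment is an assumption based on which chips ship an AV1 media engine:

import VideoToolbox

// Returns true only on hardware with an AV1 decoder (e.g. A17 Pro / M3-class chips);
// older devices return false even though kCMVideoCodecType_AV1 is declared in the SDK.
let hasHardwareAV1 = VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1)
print("Hardware AV1 decode supported: \(hasHardwareAV1)")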
4 replies · 0 boosts · 3.2k views · Oct ’23
Does iOS support H.265 GDR (gradual decoding refresh)?
Hello, I'm trying to play an H.265 video stream that contains no I-frames but uses GDR. No video is shown when using VTDecompressionSessionDecodeFrame; other H.265 videos work fine. In the device log I can see the following errors: "AppleAVD: VADecodeFrame(): isRandomAccessSkipPicture fail", followed by "An unknown error occurred (-17694)". Is it possible to play such a video stream? Do I need to configure the CMVideoFormatDescription in some way to enable it? Thanks for your help.
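For reference, a sketch of the usual way a CMVideoFormatDescription is built for an H.265 elementary stream, from its VPS/SPS/PPS NAL units (the arrays below are hypothetical and have no start codes). This is ordinary HEVC setup rather than anything GDR-specific; if the description is already built this way, the -17694 error may simply mean the AppleAVD hardware decoder refuses streams whose only random-access points are GDR recovery points:

import CoreMedia

func makeHEVCFormatDescription(vps: [UInt8], sps: [UInt8], pps: [UInt8]) -> CMVideoFormatDescription? {
    var formatDescription: CMVideoFormatDescription?
    vps.withUnsafeBufferPointer { vpsBuf in
        sps.withUnsafeBufferPointer { spsBuf in
            pps.withUnsafeBufferPointer { ppsBuf in
                let pointers: [UnsafePointer<UInt8>] = [vpsBuf.baseAddress!, spsBuf.baseAddress!, ppsBuf.baseAddress!]
                let sizes = [vps.count, sps.count, pps.count]
                // 4-byte length prefixes (hvcC-style sample data)
                _ = CMVideoFormatDescriptionCreateFromHEVCParameterSets(
                    allocator: kCFAllocatorDefault,
                    parameterSetCount: pointers.count,
                    parameterSetPointers: pointers,
                    parameterSetSizes: sizes,
                    nalUnitHeaderLength: 4,
                    extensions: nil,
                    formatDescriptionOut: &formatDescription)
            }
        }
    }
    return formatDescription
}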
2 replies · 0 boosts · 953 views · Oct ’23
iOS 17 beta 5: video encoding with VideoToolbox hangs after calling VTCompressionSessionInvalidate when kVTInvalidSessionErr appears
When encoding video with VTCompressionSessionEncodeFrame on iOS 17 beta 5, kVTInvalidSessionErr can be returned after the app switches from the background back to the foreground. Calling VTCompressionSessionInvalidate to reset the session in response to that error causes the app to get stuck. Has anyone else encountered this, and how should it be solved?
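One common recovery pattern (a sketch, not Apple-documented guidance for this beta): when an encode call reports kVTInvalidSessionErr, stop submitting frames, drain and invalidate the old session on the same serial queue that performs the encodes, then create a fresh session before resuming. Invalidating from another queue while encodes are still in flight is a frequent source of exactly this kind of hang:

import VideoToolbox

func handleEncodeStatus(_ status: OSStatus, session: inout VTCompressionSession?) {
    guard status == kVTInvalidSessionErr, let old = session else { return }
    // Run this on the encoder's own serial queue so no encode is in flight.
    _ = VTCompressionSessionCompleteFrames(old, untilPresentationTimeStamp: .invalid)
    VTCompressionSessionInvalidate(old)
    session = nil   // rebuild with VTCompressionSessionCreate before the next frame
}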
0 replies · 0 boosts · 599 views · Aug ’23
How to prevent camera from adjusting brightness in manual mode?
When I have locked white balance and custom exposure, and I introduce a new object into the view against a black background, both objects become brighter. How can I turn this behaviour off, or compensate for the change in a performant way? This is how I configure the session; note that I'm setting a video format which supports at least 180 fps, which is required for my needs.

private func configureSession() {
    self.sessionQueue.async { [self] in
        //MARK: Init session
        guard let session = try? validSession() else { fatalError("Session is unexpectedly nil") }
        session.beginConfiguration()
        guard let device = AVCaptureDevice.default(AVCaptureDevice.DeviceType.builtInWideAngleCamera, for: AVMediaType.video, position: .back) else {
            fatalError("Video Device is unexpectedly nil")
        }
        guard let videoDeviceInput: AVCaptureDeviceInput = try? AVCaptureDeviceInput(device: device) else {
            fatalError("videoDeviceInput is unexpectedly nil")
        }
        guard session.canAddInput(videoDeviceInput) else { fatalError("videoDeviceInput could not be added") }
        session.addInput(videoDeviceInput)
        self.videoDeviceInput = videoDeviceInput
        self.videoDevice = device

        //MARK: Connect session IO
        let dataOutput = AVCaptureVideoDataOutput()
        dataOutput.setSampleBufferDelegate(self, queue: sampleBufferQueue)
        session.automaticallyConfiguresCaptureDeviceForWideColor = false
        guard session.canAddOutput(dataOutput) else { fatalError("Could not add video data output") }
        session.addOutput(dataOutput)
        dataOutput.alwaysDiscardsLateVideoFrames = true
        dataOutput.videoSettings = [ String(kCVPixelBufferPixelFormatTypeKey): pixelFormat.rawValue ]
        if let captureConnection = dataOutput.connection(with: .video) {
            captureConnection.preferredVideoStabilizationMode = .off
            captureConnection.isEnabled = true
        } else {
            fatalError("No Capture Connection for the session")
        }

        //MARK: Configure AVCaptureDevice
        do { try device.lockForConfiguration() } catch { fatalError(error.localizedDescription) }
        if let format = format(fps: fps, minWidth: minWidth, format: pixelFormat) { // 180FPS, YUV layout
            device.activeFormat = format
            device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps))
            device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(fps))
        } else {
            fatalError("Compatible format not found")
        }
        device.activeColorSpace = .sRGB
        device.isGlobalToneMappingEnabled = false
        device.automaticallyAdjustsVideoHDREnabled = false
        device.automaticallyAdjustsFaceDrivenAutoExposureEnabled = false
        device.isFaceDrivenAutoExposureEnabled = false
        device.setFocusModeLocked(lensPosition: 0.4)
        device.isSubjectAreaChangeMonitoringEnabled = false
        device.exposureMode = AVCaptureDevice.ExposureMode.custom
        let exp = CMTime(value: Int64(40), timescale: 100_000)
        let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO)
        device.setExposureModeCustom(duration: exp, iso: isoValue) { t in }
        device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0)) { (timestamp: CMTime) -> Void in }
        device.unlockForConfiguration()
        session.commitConfiguration()
        onAVSessionReady()
    }
}

This post (https://stackoverflow.com/questions/34511431/ios-avfoundation-different-photo-brightness-with-the-same-manual-exposure-set) suggests that the effect can be mitigated by setting the camera exposure to .locked right after calling device.setExposureModeCustom(). This works properly only when used with the async API, and even then it does not eliminate the effect.
Async approach:

private func onAVSessionReady() {
    guard let device = device() else { fatalError("Device is unexpectedly nil") }
    guard let sesh = try? validSession() else { fatalError("Device is unexpectedly nil") }
    MCamSession.shared.activeFormat = device.activeFormat
    MCamSession.shared.currentDevice = device
    self.observer = SPSDeviceKVO(device: device, session: sesh)
    self.start()
    Task { await lockCamera(device) }
}

private func lockCamera(_ device: AVCaptureDevice) async {
    do { try device.lockForConfiguration() } catch { fatalError(error.localizedDescription) }
    _ = await device.setFocusModeLocked(lensPosition: 0.4)
    let exp = CMTime(value: Int64(40), timescale: 100_000)
    let isoValue = min(max(40, device.activeFormat.minISO), device.activeFormat.maxISO)
    _ = await device.setExposureModeCustom(duration: exp, iso: isoValue)
    _ = await device.setWhiteBalanceModeLocked(with: AVCaptureDevice.WhiteBalanceGains(redGain: 1.0, greenGain: 1.0, blueGain: 1.0))
    device.exposureMode = AVCaptureDevice.ExposureMode.locked
    device.unlockForConfiguration()
}

private func configureSession() {
    // same session init as before
    ...
    onAVSessionReady()
}
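A small sketch of the workaround described in the linked answer, reusing the device, exp and isoValue from the code above: lock the exposure only inside the completion handler of setExposureModeCustom, after the custom values have actually been applied. The handler fires asynchronously, so the configuration lock has to be re-acquired there; whether this fully suppresses the brightness adaptation is exactly what the post is questioning:

device.setExposureModeCustom(duration: exp, iso: isoValue) { _ in
    // Runs once the custom exposure has taken effect; re-lock before mutating.
    if (try? device.lockForConfiguration()) != nil {
        if device.isExposureModeSupported(.locked) {
            device.exposureMode = .locked
        }
        device.unlockForConfiguration()
    }
}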
1 reply · 0 boosts · 768 views · Nov ’23
iOS 17 VideoToolbox H.264 encoding: setting the average bitrate has no effect?
I use VTCompressionSession and set the average bit rate with kVTCompressionPropertyKey_AverageBitRate and kVTCompressionPropertyKey_DataRateLimits. The code looks like this:

VTCompressionSessionRef vtSession = session;
if (vtSession == NULL) {
    vtSession = _encoderSession;
}
if (vtSession == nil) {
    return;
}
int tmp = bitrate;
int bytesTmp = tmp * 0.15;
int durationTmp = 1;
CFNumberRef bitrateRef = CFNumberCreate(NULL, kCFNumberSInt32Type, &tmp);
CFNumberRef bytes = CFNumberCreate(NULL, kCFNumberSInt32Type, &bytesTmp);
CFNumberRef duration = CFNumberCreate(NULL, kCFNumberSInt32Type, &durationTmp);
if ([self isSupportPropertyWithSession:vtSession key:kVTCompressionPropertyKey_AverageBitRate]) {
    [self setSessionPropertyWithSession:vtSession key:kVTCompressionPropertyKey_AverageBitRate value:bitrateRef];
} else {
    NSLog(@"Video Encoder: set average bitRate error");
}
NSLog(@"Video Encoder: set bitrate bytes = %d, _bitrate = %d", bytesTmp, bitrate);
CFMutableArrayRef limit = CFArrayCreateMutable(NULL, 2, &kCFTypeArrayCallBacks);
CFArrayAppendValue(limit, bytes);
CFArrayAppendValue(limit, duration);
if ([self isSupportPropertyWithSession:vtSession key:kVTCompressionPropertyKey_DataRateLimits]) {
    OSStatus ret = VTSessionSetProperty(vtSession, kVTCompressionPropertyKey_DataRateLimits, limit);
    if (ret != noErr) {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:ret userInfo:nil];
        NSLog(@"Video Encoder: set DataRateLimits failed with %s", error.description.UTF8String);
    }
} else {
    NSLog(@"Video Encoder: set data rate limits error");
}
CFRelease(bitrateRef);
CFRelease(limit);
CFRelease(bytes);
CFRelease(duration);
}

This works fine on iOS 16 and below. But on iOS 17 the bitrate of the generated video file is much lower than the value I set. For example, I set the bitrate to 600k, but on iOS 17 the encoded video bitrate is lower than 150k. What went wrong?
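For comparison, a short Swift sketch of the same two property calls, assuming session is a valid VTCompressionSession and bitrate is in bits per second. Note the units: kVTCompressionPropertyKey_DataRateLimits takes alternating (bytes, seconds) pairs, so a cap of bitrate * 0.15 bytes over a 1-second window is roughly 1.2x the average bitrate in bits:

import Foundation
import VideoToolbox

func applyBitrate(_ bitrate: Int, to session: VTCompressionSession) {
    var status = VTSessionSetProperty(session,
                                      key: kVTCompressionPropertyKey_AverageBitRate,
                                      value: NSNumber(value: bitrate))
    if status != noErr { print("set AverageBitRate failed: \(status)") }

    // Byte cap per 1-second window, ~1.2x the average bitrate.
    let limits = [NSNumber(value: Int(Double(bitrate) * 0.15)), NSNumber(value: 1)] as CFArray
    status = VTSessionSetProperty(session,
                                  key: kVTCompressionPropertyKey_DataRateLimits,
                                  value: limits)
    if status != noErr { print("set DataRateLimits failed: \(status)") }
}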
3 replies · 2 boosts · 1.1k views · Sep ’23
iOS 17 & iOS 16.7 VideoToolbox crash
0  VideoToolbox        ___vtDecompressionSessionRemote_DecodeFrameCommon_block_invoke_2()
1  libdispatch.dylib   __dispatch_client_callout()
2  libdispatch.dylib   __dispatch_client_callout()
3  libdispatch.dylib   __dispatch_lane_barrier_sync_invoke_and_complete()
4  VideoToolbox        _vtDecompressionSessionRemote_DecodeFrameCommon()

After the release of iOS 17 and iOS 16.7, thousands of crashes with this VideoToolbox stack are being reported each day. Has anyone encountered similar problems?
6 replies · 2 boosts · 1.4k views · Oct ’23
AV1 Hardware Decoding
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support: VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1); // YES. Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:

{
  mediaType:'vide'
  mediaSubType:'av01'
  mediaSpecific: {
    codecType: 'av01'
    dimensions: 394 x 852
  }
  extensions: {{
    CVFieldCount = 1;
    CVImageBufferChromaLocationBottomField = Left;
    CVImageBufferChromaLocationTopField = Left;
    CVPixelAspectRatio = {
      HorizontalSpacing = 1;
      VerticalSpacing = 1;
    };
    FullRangeVideo = 0;
  }}
}

but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume). So it has something to do with the extensions dictionary? I can't find anywhere which set of extensions is necessary for it to work 😿. VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1. As of today I am using Xcode 15.0 with the iOS 17.0.0 SDK.
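Error -8971 is indeed codecExtensionNotFoundErr, which usually points at a missing codec configuration atom in the format description. For AV1 that atom is 'av1C' (the AV1CodecConfigurationRecord from the container or sequence header). A sketch under that assumption, where av1CRecord is hypothetical data you would have to extract from the stream yourself, since VideoToolbox provides no AV1 counterpart to the H.264/HEVC parameter-set helpers:

import Foundation
import CoreMedia

func makeAV1FormatDescription(av1CRecord: Data, width: Int32, height: Int32) -> CMVideoFormatDescription? {
    // Attach the 'av1C' box the same way 'avcC'/'hvcC' are attached for AVC/HEVC.
    let atoms: [String: Any] = ["av1C": av1CRecord]
    let extensions: [String: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms as String: atoms
    ]
    var formatDescription: CMVideoFormatDescription?
    let status = CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                codecType: kCMVideoCodecType_AV1,
                                                width: width,
                                                height: height,
                                                extensions: extensions as CFDictionary,
                                                formatDescriptionOut: &formatDescription)
    return status == noErr ? formatDescription : nil
}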
7 replies · 3 boosts · 4.1k views · May ’24
Linker Error when using VTCompressionSessionEncodeMultiImageFrame in Swift
I'm working on an MV-HEVC transcoder based on the VTEncoderForTranscoding sample code. In Swift, the following code snippet generates a linker error on macOS 14.0 and 14.1:

let err = VTCompressionSessionEncodeMultiImageFrame(compressionSession,
                                                    taggedBuffers: taggedBuffers,
                                                    presentationTimeStamp: pts,
                                                    duration: .invalid,
                                                    frameProperties: nil,
                                                    infoFlagsOut: nil) { (status: OSStatus, infoFlags: VTEncodeInfoFlags, sbuf: CMSampleBuffer?) -> Void in
    outputHandler(status, infoFlags, sbuf, thisFrameNumber)
}

Error:

ld: Undefined symbols:
VideoToolbox.VTCompressionSessionEncodeMultiImageFrame(_: __C.VTCompressionSessionRef, taggedBuffers: [CoreMedia.CMTaggedBuffer], presentationTimeStamp: __C.CMTime, duration: __C.CMTime, frameProperties: __C.CFDictionaryRef?, infoFlagsOut: Swift.UnsafeMutablePointer<__C.VTEncodeInfoFlags>?, outputHandler: (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?) -> ()) -> Swift.Int32, referenced from:
(3) suspend resume partial function for VTEncoderForTranscoding_Swift.(compressFrames in _FE7277D5F28D8DABDFC10EA0164D825D)(from: VTEncoderForTranscoding_Swift.VideoSource, options: VTEncoderForTranscoding_Swift.Options, expectedFrameRate: Swift.Float, outputHandler: @Sendable (Swift.Int32, __C.VTEncodeInfoFlags, __C.CMSampleBufferRef?, Swift.Int) -> ()) async throws -> () in VTEncoderForTranscoding.o

Using VTCompressionSessionEncodeMultiImageFrameWithOutputHandler in ObjC doesn't trigger a linker error. Does anybody know how to get it to work in Swift?
0 replies · 0 boosts · 554 views · Nov ’23
Unable to manually parse the HEVC video with alpha format
I wish to parse the bitstream of HEVC video with alpha (the format presented in WWDC 2019 session 506: https://developer.apple.com/videos/play/wwdc2019/506). Taking the 'puppets_with_alpha_hevc.mov' file from 'Using HEVC Video with Alpha' as an example, I first extract the HEVC bitstream and then parse its fields. When I get to the VPS and reach the vps_extension, I find that the bitstream in 'puppets_with_alpha_hevc.mov' does not conform to the HEVC standard document, preventing further parsing. Besides the 'HEVC Video with Alpha Interoperability Profile.pdf', are there any more detailed documents describing the HEVC video with alpha format? Also, is there anyone who can encode or decode HEVC with alpha videos on systems other than macOS?
0 replies · 0 boosts · 544 views · Nov ’23
Request for the VideoToolbox encoder to support external control of frame dropping
We are currently working on a real-time, low-latency solution for video conferencing scenarios and have encountered some issues with the current implementation of the encoder. We need a feature enhancement for the VideoToolbox encoder. In our use case we need to control the encoding quality, which requires setting a maximum encoding QP. However, kVTCompressionPropertyKey_MaxAllowedFrameQP only takes effect in the kVTVideoEncoderSpecification_EnableLowLatencyRateControl mode. In this mode, when the maximum QP is limited and the bitrate is insufficient, the encoder drops frames. Our desired behaviour is for the encoder not to drop frames on its own when the maximum QP is limited; instead, when the bitrate is insufficient, the encoder should encode the frame at the maximum QP and allow the frame size to be larger. This would provide a more seamless experience for users in video conferencing situations, where maintaining consistent video quality is crucial. It is worth noting that Android has already implemented this capability in Android 12, which demonstrates the value and feasibility of the enhancement. We kindly request that you consider adding support for external control of frame dropping in the VideoToolbox encoder to accommodate our needs. This enhancement would greatly benefit our project and others that require real-time, low-latency video encoding solutions. A sketch of the setup in question follows below.
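For context, a sketch of the configuration the request refers to, assuming H.264 at 1280x720 and a hypothetical QP cap of 38: MaxAllowedFrameQP is only honoured when the session is created with the low-latency rate-control specification, and in that mode the encoder may drop frames whenever the bitrate budget cannot be met at the cap, which is the behaviour we would like to be able to disable:

import Foundation
import VideoToolbox

var session: VTCompressionSession?
let encoderSpec = [kVTVideoEncoderSpecification_EnableLowLatencyRateControl as String: true] as CFDictionary

VTCompressionSessionCreate(allocator: kCFAllocatorDefault,
                           width: 1280, height: 720,
                           codecType: kCMVideoCodecType_H264,
                           encoderSpecification: encoderSpec,
                           imageBufferAttributes: nil,
                           compressedDataAllocator: nil,
                           outputCallback: nil,
                           refcon: nil,
                           compressionSessionOut: &session)

if let session {
    // Only effective together with the low-latency specification above.
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_MaxAllowedFrameQP,
                         value: NSNumber(value: 38))
}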
0 replies · 0 boosts · 462 views · Nov ’23
iOS 17: encoder set to an average bit rate of 5 Mbps; after about an hour of live broadcasting the bit rate suddenly drops to 1 Mbps and cannot be restored
On iOS 17, the encoder is set to an average bit rate of 5 Mbps. For the first 25 minutes the encoding bit rate is normal. From 25 to 30 minutes the bit rate drops to about 4 Mbps but recovers. From 30 to 70 minutes the bit rate is normal again. After roughly 70 minutes the bit rate suddenly drops to 1 Mbps and cannot be restored. (The original post includes a chart in which the yellow line is the frame rate and the green line is the bit rate.) The code is shown below:

- (void)_setBitrate:(NSUInteger)bitrate forSession:(VTCompressionSessionRef)session {
    NSParameterAssert(session && bitrate);
    OSStatus status = VTSessionSetProperty(session, kVTCompressionPropertyKey_AverageBitRate, (__bridge CFTypeRef)@(bitrate));
    if (status != noErr) NSLog(@"set AverageBitRate error");
    NSArray *limit = @[@(bitrate * 1.5/8), @(1)];
    status = VTSessionSetProperty(session, kVTCompressionPropertyKey_DataRateLimits, (__bridge CFArrayRef)limit);
    if (status != noErr) NSLog(@"set DataRateLimits error");
}

The problem only occurs on iOS 17. Does anyone know what the reason is?
2 replies · 0 boosts · 584 views · Dec ’23
Using VideoToolbox for H.264/H.265 encoding
After encoding with VideoToolbox I send the result over the network via NDI. A Mac receiving the NDI source displays the picture correctly, but on Windows only the NDI source name appears and no picture is shown. I would like to know whether video encoded with VideoToolbox cannot be decoded correctly on Windows, and how to solve this problem.
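One thing worth checking (an assumption, not a confirmed diagnosis for this setup): VTCompressionSession emits AVCC-style NAL units with 4-byte length prefixes, while many non-Apple receivers expect an Annex B stream with start codes, which can produce exactly this "source visible, no picture" symptom. A minimal conversion sketch for one encoded sample buffer:

import Foundation
import CoreMedia

// Convert the AVCC (length-prefixed) payload of an encoded CMSampleBuffer to Annex B.
func annexBData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<CChar>?
    guard CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                      totalLengthOut: &totalLength, dataPointerOut: &dataPointer) == noErr,
          let base = dataPointer else { return nil }

    let startCode: [UInt8] = [0, 0, 0, 1]
    var output = Data()
    var offset = 0
    while offset + 4 <= totalLength {
        // Read the 4-byte big-endian NAL unit length prefix and replace it with a start code.
        var nalLength: UInt32 = 0
        memcpy(&nalLength, base + offset, 4)
        nalLength = UInt32(bigEndian: nalLength)
        offset += 4
        guard offset + Int(nalLength) <= totalLength else { break }
        output.append(contentsOf: startCode)
        output.append(UnsafeRawPointer(base + offset).assumingMemoryBound(to: UInt8.self), count: Int(nalLength))
        offset += Int(nalLength)
    }
    // Parameter sets (SPS/PPS, plus VPS for HEVC) live in the format description and must
    // also be sent with start codes ahead of keyframes (not shown here).
    return output
}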
0 replies · 0 boosts · 441 views · Dec ’23
Stereo video HLS
I am trying to set up HLS with MV-HEVC. I have an MV-HEVC MP4 converted with AVAssetWriter that plays as a "spatial video" in Photos in the simulator. I've used ffmpeg to fragment the video for HLS (sample m3u8 file below). The HLS version of the mp4 plays on a VideoMaterial with an AVPlayer in the simulator, but it is hard to determine whether the streamed video is stereo. Is there any guidance on confirming that the streamed mp4 video is properly being read as stereo? Additionally, I see that REQ-VIDEO-LAYOUT is required for multivariant HLS. However, if there is ONLY stereo video in the playlist, is it needed? Are there any other configurations needed to make the device read the stream as stereo?

Sample m3u8 playlist:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:13
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:12.512500,
sample_video0.ts
#EXTINF:8.341667,
sample_video1.ts
#EXTINF:12.512500,
sample_video2.ts
#EXTINF:8.341667,
sample_video3.ts
#EXTINF:8.341667,
sample_video4.ts
#EXTINF:12.433222,
sample_video5.ts
#EXT-X-ENDLIST
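For reference, a sketch of what the stereo signalling could look like in the multivariant playlist (all values are placeholders): REQ-VIDEO-LAYOUT lives on EXT-X-STREAM-INF in the multivariant playlist, not in a media playlist like the one above, so a bare media playlist on its own gives the player no stereo hint:

#EXTM3U
#EXT-X-VERSION:12
# Placeholder BANDWIDTH/RESOLUTION values; REQ-VIDEO-LAYOUT="CH-STEREO" marks the variant as stereo.
#EXT-X-STREAM-INF:BANDWIDTH=20000000,RESOLUTION=1920x1080,REQ-VIDEO-LAYOUT="CH-STEREO"
stereo_media_playlist.m3u8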
5 replies · 1 boost · 1.4k views · Dec ’23
Seeking Advice: Best Apple Laptop for Coding?
Hey Developers, I'm on the hunt for a new Apple laptop geared towards coding, and I'd love to tap into your collective wisdom. If you have recommendations or personal experiences with a specific model that excels in the coding realm, please share your insights. Looking for optimal performance and a seamless coding experience. Your input is gold – thanks a bunch!
2 replies · 0 boosts · 1.4k views · Feb ’24
How to live stream a UDP broadcast with ffmpeg
First of all, I tried MobileVLCKit, but there is too much delay. Then I wrote a UDPManager class; my code is below. I would be very happy if anyone has information and can point me in the right direction.

Broadcast command:

ffmpeg -f avfoundation -video_size 1280x720 -framerate 30 -i "0" -c:v libx264 -preset medium -tune zerolatency -f mpegts "udp://127.0.0.1:6000?pkt_size=1316"

Live view command (almost zero delay):

ffplay -fflags nobuffer -flags low_delay -probesize 32 -analyzeduration 1 -strict experimental -framedrop -f mpegts -vf setpts=0 udp://127.0.0.1:6000

or

mpv udp://127.0.0.1:6000 --no-cache --untimed --no-demuxer-thread --vd-lavc-threads=1

UDPManager:

import Foundation
import AVFoundation
import CoreMedia
import VideoDecoder
import SwiftUI
import Network
import Combine
import CocoaAsyncSocket
import VideoToolbox

class UDPManager: NSObject, ObservableObject, GCDAsyncUdpSocketDelegate {
    private let host: String
    private let port: UInt16
    private var socket: GCDAsyncUdpSocket?
    @Published var videoOutput: CMSampleBuffer?

    init(host: String, port: UInt16) {
        self.host = host
        self.port = port
    }

    func connectUDP() {
        do {
            socket = GCDAsyncUdpSocket(delegate: self, delegateQueue: .global())
            //try socket?.connect(toHost: host, onPort: port)
            try socket?.bind(toPort: port)
            try socket?.enableBroadcast(true)
            try socket?.enableReusePort(true)
            try socket?.beginReceiving()
        } catch {
            print("UDP socket creation error: \(error)")
        }
    }

    func closeUDP() {
        socket?.close()
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didConnectToAddress address: Data) {
        print("UDP connected.")
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didNotConnect error: Error?) {
        print("UDP socket connection error: \(error?.localizedDescription ?? "unknown error")")
    }

    func udpSocket(_ sock: GCDAsyncUdpSocket, didReceive data: Data, fromAddress address: Data, withFilterContext filterContext: Any?) {
        if !data.isEmpty {
            DispatchQueue.main.async {
                self.videoOutput = self.createSampleBuffer(from: data)
            }
        }
    }

    func createSampleBuffer(from data: Data) -> CMSampleBuffer? {
        var blockBuffer: CMBlockBuffer?
        var status = CMBlockBufferCreateWithMemoryBlock(
            allocator: kCFAllocatorDefault,
            memoryBlock: UnsafeMutableRawPointer(mutating: (data as NSData).bytes),
            blockLength: data.count,
            blockAllocator: kCFAllocatorNull,
            customBlockSource: nil,
            offsetToData: 0,
            dataLength: data.count,
            flags: 0,
            blockBufferOut: &blockBuffer)
        if status != noErr {
            return nil
        }

        var sampleBuffer: CMSampleBuffer?
        let sampleSizeArray = [data.count]
        status = CMSampleBufferCreateReady(
            allocator: kCFAllocatorDefault,
            dataBuffer: blockBuffer,
            formatDescription: nil,
            sampleCount: 1,
            sampleTimingEntryCount: 0,
            sampleTimingArray: nil,
            sampleSizeEntryCount: 1,
            sampleSizeArray: sampleSizeArray,
            sampleBufferOut: &sampleBuffer)
        if status != noErr {
            return nil
        }
        return sampleBuffer
    }
}

I didn't know how to convert the data object to video, so I searched, found the createSampleBuffer implementation shown above, and wanted to try it. Then I tried to feed the CMSampleBuffer to a player, but it just shows a white screen and doesn't work:

struct SampleBufferPlayerView: UIViewRepresentable {
    typealias UIViewType = UIView
    var sampleBuffer: CMSampleBuffer

    func makeUIView(context: Context) -> UIView {
        let view = UIView(frame: .zero)
        let displayLayer = AVSampleBufferDisplayLayer()
        displayLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(displayLayer)
        context.coordinator.displayLayer = displayLayer
        return view
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        context.coordinator.sampleBuffer = sampleBuffer
        context.coordinator.updateSampleBuffer()
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }

    class Coordinator {
        var displayLayer: AVSampleBufferDisplayLayer?
        var sampleBuffer: CMSampleBuffer?

        func updateSampleBuffer() {
            guard let displayLayer = displayLayer, let sampleBuffer = sampleBuffer else { return }
            if displayLayer.isReadyForMoreMediaData {
                displayLayer.enqueue(sampleBuffer)
            } else {
                displayLayer.requestMediaDataWhenReady(on: .main) {
                    if displayLayer.isReadyForMoreMediaData {
                        displayLayer.enqueue(sampleBuffer)
                        print("isReadyForMoreMediaData")
                    }
                }
            }
        }
    }
}

And I tried to use it, but I couldn't figure it out. Can anyone help me?

struct ContentView: View {
    // udp://@127.0.0.1:6000
    @ObservedObject var udpManager = UDPManager(host: "127.0.0.1", port: 6000)

    var body: some View {
        VStack {
            if let buffer = udpManager.videoOutput {
                SampleBufferDisplayLayerView(sampleBuffer: buffer)
                    .frame(width: 300, height: 200)
            }
        }
        .onAppear(perform: {
            udpManager.connectUDP()
        })
    }
}
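A note on the white screen, as an assumption rather than a confirmed diagnosis: the sample buffers above wrap raw MPEG-TS packets and carry no CMVideoFormatDescription and no timing, so AVSampleBufferDisplayLayer has nothing it can decode; the TS stream would first need to be demuxed into H.264 access units. A sketch of what an enqueueable buffer looks like once you have a demuxed, AVCC-framed access unit (avccData, hypothetical) and a format description built from the stream's SPS/PPS via CMVideoFormatDescriptionCreateFromH264ParameterSets:

import Foundation
import CoreMedia

func makeDisplayableSampleBuffer(avccData: Data,
                                 formatDescription: CMVideoFormatDescription) -> CMSampleBuffer? {
    // Copy the access unit into a block buffer owned by Core Media.
    var blockBuffer: CMBlockBuffer?
    guard CMBlockBufferCreateWithMemoryBlock(allocator: kCFAllocatorDefault,
                                             memoryBlock: nil,
                                             blockLength: avccData.count,
                                             blockAllocator: nil,
                                             customBlockSource: nil,
                                             offsetToData: 0,
                                             dataLength: avccData.count,
                                             flags: kCMBlockBufferAssureMemoryNowFlag,
                                             blockBufferOut: &blockBuffer) == noErr,
          let block = blockBuffer else { return nil }
    _ = avccData.withUnsafeBytes { bytes in
        CMBlockBufferReplaceDataBytes(with: bytes.baseAddress!, blockBuffer: block,
                                      offsetIntoDestination: 0, dataLength: avccData.count)
    }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: .invalid,
                                    decodeTimeStamp: .invalid)
    var sampleSize = avccData.count
    var sampleBuffer: CMSampleBuffer?
    guard CMSampleBufferCreateReady(allocator: kCFAllocatorDefault,
                                    dataBuffer: block,
                                    formatDescription: formatDescription,   // from SPS/PPS
                                    sampleCount: 1,
                                    sampleTimingEntryCount: 1,
                                    sampleTimingArray: &timing,
                                    sampleSizeEntryCount: 1,
                                    sampleSizeArray: &sampleSize,
                                    sampleBufferOut: &sampleBuffer) == noErr,
          let sb = sampleBuffer else { return nil }

    // With no real timestamps yet, ask the display layer to show the frame immediately.
    if let attachments = CMSampleBufferGetSampleAttachmentsArray(sb, createIfNecessary: true),
       CFArrayGetCount(attachments) > 0 {
        let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
        CFDictionarySetValue(dict,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }
    return sb
}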
4 replies · 2 boosts · 854 views · 6d ago