Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

AVAssetWriter: appending audio/video streams concurrently in a real-time recording setup
I see in most of the older sample code from Apple that, when using AVAssetWriter to append audio, video, and metadata samples in a real-time camera recording setup, calls to .append(sampleBuffer) are either synchronised using an NSLock or all the samples are sent to the asset writer on the same dispatch queue, thereby preventing concurrent writes. However, I can't find any documentation saying that calls to assetWriterInput.append(sampleBuffer) for different media types, such as audio and video, must not be made concurrently. Is it valid, for instance, to execute these calls in parallel: `videoSamplesAssetWriterInput.append(videoSampleBuffer)` from DispatchQueue 1 and `audioSamplesAssetWriterInput.append(audioSampleBuffer)` from DispatchQueue 2?
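For reference, a minimal sketch of the conservative pattern those older samples imply: funnel every append through one serial queue so the writer never sees concurrent calls. The class, property, and queue-label names here are illustrative, not from the post, and the inputs are assumed to be configured elsewhere.

import AVFoundation

final class RecordingWriter {
    private let writer: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput
    // Single serial queue that owns all interaction with the asset writer.
    private let writerQueue = DispatchQueue(label: "recording.writer.queue")

    init(outputURL: URL, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        self.videoInput = videoInput
        self.audioInput = audioInput
        videoInput.expectsMediaDataInRealTime = true
        audioInput.expectsMediaDataInRealTime = true
        writer.add(videoInput)
        writer.add(audioInput)
    }

    // Called from the video capture callback (any queue).
    func appendVideo(_ sampleBuffer: CMSampleBuffer) {
        writerQueue.async { [self] in
            if videoInput.isReadyForMoreMediaData {
                videoInput.append(sampleBuffer)
            }
        }
    }

    // Called from the audio capture callback (any queue).
    func appendAudio(_ sampleBuffer: CMSampleBuffer) {
        writerQueue.async { [self] in
            if audioInput.isReadyForMoreMediaData {
                audioInput.append(sampleBuffer)
            }
        }
    }
}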
1
0
612
Jan ’25
Exporting Higher Resolution Spatial Videos in Final Cut
How do we export a spatial video at a higher resolution than 2200 x 2200? For example, I want to export a video that I edited in a spatial timeline as 4096 x 4096 resolution with spatial metadata to output as an MV-HEVC file. I looked under both Final Cut and Compressor export settings, but couldn't find a way to do this. Thanks for your help!
1
0
460
Jan ’25
Help diagnosing crash with only system code in its stack trace
Hello, I work on a video streaming app, and I have been working on this crash that we are seeing quite frequently (it is our #2 crasher at the moment). The stack trace indicates that an AVPictureInPictureController is being deallocated on a background thread. This leads to dangling AutoLayout constraints getting cleaned up, further resulting in an exception being thrown from the framework about the layout engine being accessed from a background thread. We have internal analytics which indicate that the crash occurs after the user comes back to the app after putting it in the background, and switches playback from Picture in Picture mode back to the app's regular playback UI. What has me puzzled here is that, as I'm sure you know, we have no control over the PIP UI. It is entirely system-provided, and there is none of our app's code in the stack trace. The whole process is even initiated by a KVO on an AVPlayerController property, and our app doesn't use that class directly anywhere. So how did we manage to cause this process to happen on a background thread? Add to this the fact that the bug only appeared once we switched to compiling with Xcode 16, and is overwhelmingly present only on devices running iOS 18. These factors lead me to believe that this is probably an OS issue. But before I go to file a feedback, I thought I would post here in case anyone has any ideas. I have attached an instance of the crash log to this post. 2025-01-08_17-32-45.7003_-0800-ea8d5c3323e0f1fc059cf83f6ec86377bdae1788.crash
1
0
411
Jan ’25
How to start AVPictureInPicture when video is paused
I have an AVPlayer with an AVPictureInPictureController. Playing video in the app works, and Picture in Picture works, except in one situation: if I pause the video in the application and then switch to the background, PiP does not activate. What am I doing wrong?

import UIKit
import AVKit
import AVFoundation

class ViewControllerSec: UIViewController, AVPictureInPictureControllerDelegate {
    var pipPlayer: AVPlayer!
    var avCanvas: UIView!
    var pipCanvas: AVPlayerLayer?
    var pipController: AVPictureInPictureController!
    var mainViewControler: UIViewController!
    var playerItem: AVPlayerItem!
    var videoAvasset: AVAsset!

    public func link(to parentViewController: UIViewController) {
        mainViewControler = parentViewController
        setup()
    }

    @objc func appWillResignActiveNotification(application: UIApplication) {
        guard let pipController = pipController else {
            print("PiP not supported")
            return
        }
        print("PIP isSuspend: \(pipController.isPictureInPictureSuspended)")
        print("PIP isPossible: \(pipController.isPictureInPicturePossible)")
        if playerItem.status == .readyToPlay {
            if pipPlayer.rate == 0 {
                pipPlayer.play()
            }
            pipController.startPictureInPicture() // ---> Error in log: Failed to start picture in picture.
        } else {
            print("Player not ready for PiP.")
        }
    }

    private func setupAudio() {
        do {
            let session = AVAudioSession.sharedInstance()
            try session.setCategory(.playback, mode: .moviePlayback)
            try session.setActive(true)
        } catch {
            print("Audio session setup failed: \(error.localizedDescription)")
        }
    }

    @objc func playerItemDidFailToPlayToEnd(_ notification: Notification) {
        if let error = notification.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error {
            print("Failed to play to end: \(error.localizedDescription)")
        }
    }

    func setup() {
        setupAudio()
        guard let videoURL = URL(string: "https://demo.unified-streaming.com/k8s/features/stable/video/tears-of-steel/tears-of-steel.mp4/.m3u8") else { return }
        videoAvasset = AVAsset(url: videoURL)
        playerItem = AVPlayerItem(asset: videoAvasset)
        addPlayerObservers()
        pipPlayer = AVPlayer(playerItem: playerItem)
        avCanvas = UIView(frame: view.bounds)
        pipCanvas = AVPlayerLayer(player: pipPlayer)
        guard let pipCanvas else { return }
        pipCanvas.frame = avCanvas.bounds
        //pipCanvas.videoGravity = .resizeAspectFill
        mainViewControler.view.addSubview(avCanvas)
        avCanvas.layer.addSublayer(pipCanvas)

        if AVPictureInPictureController.isPictureInPictureSupported() {
            pipController = AVPictureInPictureController(playerLayer: pipCanvas)
            pipController?.delegate = self
            pipController?.canStartPictureInPictureAutomaticallyFromInline = true
        }

        let playButton = UIButton(frame: CGRect(x: 20, y: 50, width: 100, height: 50))
        playButton.setTitle("Play", for: .normal)
        playButton.backgroundColor = .blue
        playButton.addTarget(self, action: #selector(playTapped), for: .touchUpInside)
        mainViewControler.view.addSubview(playButton)

        let pauseButton = UIButton(frame: CGRect(x: 140, y: 50, width: 100, height: 50))
        pauseButton.setTitle("Pause", for: .normal)
        pauseButton.backgroundColor = .red
        pauseButton.addTarget(self, action: #selector(pauseTapped), for: .touchUpInside)
        mainViewControler.view.addSubview(pauseButton)

        let pipButton = UIButton(frame: CGRect(x: 260, y: 50, width: 150, height: 50))
        pipButton.setTitle("Start PiP", for: .normal)
        pipButton.backgroundColor = .green
        pipButton.addTarget(self, action: #selector(startPictureInPicture), for: .touchUpInside)
        mainViewControler.view.addSubview(pipButton)

        print("Error:\(String(describing: pipPlayer.error?.localizedDescription))")

        NotificationCenter.default.addObserver(forName: UIApplication.didEnterBackgroundNotification, object: nil, queue: nil) { [weak self] _ in
            guard let self = self else { return }
            if self.pipPlayer.rate == 0 {
                self.pipPlayer.play()
                pipController?.startPictureInPicture()
            }
        }
    }

    func addPlayerObservers() {
        playerItem?.addObserver(self, forKeyPath: "status", options: [.old, .new], context: nil)
        NotificationCenter.default.addObserver(self, selector: #selector(playerDidFinishPlaying(_:)), name: .AVPlayerItemDidPlayToEndTime, object: playerItem)
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
        if keyPath == "status" {
            if let statusNumber = change?[.newKey] as? NSNumber {
                let status = AVPlayer.Status(rawValue: statusNumber.intValue)!
                switch status {
                case .readyToPlay:
                    print("Player is ready to play")
                case .failed:
                    print("Player failed: \(String(describing: playerItem?.error))")
                case .unknown:
                    print("Player status is unknown")
                @unknown default:
                    fatalError()
                }
            }
        }
    }

    @objc func playerDidFinishPlaying(_ notification: Notification) {
        print("Video finished playing.")
    }

    deinit {
        playerItem?.removeObserver(self, forKeyPath: "status")
        NotificationCenter.default.removeObserver(self)
    }

    @objc func playTapped() { pipPlayer.play() }
    @objc func pauseTapped() { pipPlayer.pause() }

    @objc func startPictureInPicture() {
        if let pipController = pipController, !pipController.isPictureInPictureActive {
            pipController.startPictureInPicture()
        }
    }

    @objc func stopPictureInPicture() {
        if let pipController = pipController, pipController.isPictureInPictureActive {
            pipController.stopPictureInPicture()
        }
    }

    func pictureInPictureController(_ pictureInPictureController: AVPictureInPictureController, failedToStartPictureInPictureWithError error: Error) {
        print("Failed to start PiP: \(error.localizedDescription)")
        if let underlyingError = (error as NSError).userInfo[NSUnderlyingErrorKey] {
            print("Underlying error: \(underlyingError)")
        }
    }
}
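Not an answer from the thread, just a hedged sketch of one direction: isPictureInPicturePossible is key-value observable, so instead of calling startPictureInPicture() directly from the background-transition notification, one could wait for that property to become true. The PiPStarter type name is illustrative, and whether this resolves the paused-player case is an assumption.

import AVKit

final class PiPStarter {
    private var possibleObservation: NSKeyValueObservation?

    // Calls startPictureInPicture() as soon as the controller reports it is possible.
    func startWhenPossible(_ controller: AVPictureInPictureController) {
        possibleObservation = controller.observe(\.isPictureInPicturePossible,
                                                 options: [.initial, .new]) { controller, _ in
            guard controller.isPictureInPicturePossible,
                  !controller.isPictureInPictureActive else { return }
            controller.startPictureInPicture()
        }
    }
}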
1
0
517
Jan ’25
AVPlayerItem step(byCount:) callback or notification
Hello there, I need to move through a video loaded in an AVPlayer one frame at a time, back or forth. For that I tried to use AVPlayerItem's step(byCount:) method, and it works just fine. However, I need to know when the stepping has happened, and as far as I can observe it is not immediate: if I check currentTime() just after calling the method it's unchanged, and if I check slightly later (depending on the video itself) it shows the correct "jumped" time. To achieve my goal I tried subclassing AVPlayerItem and implementing my own async method using NotificationCenter and timeJumpedNotification, assuming it would be delivered when the time actually jumps, but that's not the case. Here is my stripped-down and simplified version of the custom player item:

import AVFoundation

final class PlayerItem: AVPlayerItem {
    private var jumpCompletion: ((CMTime) -> ())?

    override init(asset: AVAsset, automaticallyLoadedAssetKeys: [String]?) {
        super.init(asset: asset, automaticallyLoadedAssetKeys: automaticallyLoadedAssetKeys)
        NotificationCenter.default.addObserver(self, selector: #selector(timeDidChange(_:)), name: AVPlayerItem.timeJumpedNotification, object: self)
    }

    deinit {
        NotificationCenter.default.removeObserver(self, name: AVPlayerItem.timeJumpedNotification, object: self)
        jumpCompletion = nil
    }

    @discardableResult
    func step(by count: Int) async -> CMTime {
        await withCheckedContinuation { continuation in
            step(by: count) { time in
                continuation.resume(returning: time)
            }
        }
    }

    func step(by count: Int, completion: @escaping ((CMTime) -> ())) {
        guard jumpCompletion == nil else {
            completion(currentTime())
            return
        }
        jumpCompletion = completion
        step(byCount: count)
    }

    @objc private func timeDidChange(_ notification: Notification) {
        switch notification.name {
        case AVPlayerItem.timeJumpedNotification where notification.object as? AVPlayerItem == self:
            jumpCompletion?(currentTime())
            jumpCompletion = nil
        default:
            return
        }
    }
}

In short, the notification never gets posted, so the above does not work. I guess the key is that the docs for timeJumpedNotification say: "A notification the system posts when a player item's time changes discontinuously." So step(byCount:) is not considered a discontinuous operation and doesn't trigger it. I'd be really grateful if somebody could help, as I don't want to use seek(to:toleranceBefore:toleranceAfter:), mainly because it's not accurate in terms of the exact next/previous frame: the video might have VFR, and that sometimes causes repeated frames or skips a frame. Thanks a lot.
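Since timeJumpedNotification apparently isn't posted for step(byCount:), a hedged workaround is simply to poll currentTime() after stepping and resume once it changes or a timeout elapses. This is only a sketch; the stepAndWait name, timeout, and polling interval are arbitrary assumptions, not an official API.

import AVFoundation

extension AVPlayerItem {
    // Steps by `count` frames and waits until currentTime() actually changes,
    // or until the timeout elapses. Purely a polling workaround.
    func stepAndWait(by count: Int, timeout: TimeInterval = 0.5) async -> CMTime {
        let before = currentTime()
        step(byCount: count)
        let deadline = Date().addingTimeInterval(timeout)
        while currentTime() == before && Date() < deadline {
            try? await Task.sleep(nanoseconds: 2_000_000) // ~2 ms between checks
        }
        return currentTime()
    }
}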
2
0
597
Jan ’25
Encoder Performance Unable to Achieve 4k 120fps on iPhone 16 Pro
Hello, I am trying to get the new iPhone 16 Pro to achieve 4K 120fps encoding when we are getting the video feed from the default wide-angle camera on the back. We are using the Apple API to capture the individual frames from the camera as they are processed, and we receive them in this callback:

// this is the main callback function to handle video frames captured
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

We then take these frames as they come in and encode them using VideoToolbox. After they are encoded, they are added to a ring buffer so we can access them later. The problem is that when we encode these frames on an iPhone 16 Pro, we only reach 80-90fps instead of 120fps. We have removed as much processing as we can: we read some small attributes of the frame when it comes in, encode the frame, and then add it to our ring buffer. I have attached a sample project that is stripped down as much as possible to the basic task of encoding 4K 120fps footage. Inside the sample app there is an FPS and PPS display: FPS represents how many frames per second are coming in from the camera, and PPS represents how many frames per second we are processing (encoding). Link to sample project: https://github.com/jake-fishtech/EncoderPerformance Thank you for any help or suggestions.
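A hedged sketch of the VTCompressionSession properties that typically matter for sustained high-frame-rate throughput: real-time mode, an expected frame-rate hint, speed-over-quality prioritization, and disabled frame reordering. The session is assumed to be created elsewhere, the values are illustrative, and none of this is taken from the linked project.

import VideoToolbox

// Illustrative configuration for a 4K 120 fps HEVC compression session.
func configureForHighFrameRate(_ session: VTCompressionSession) {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ExpectedFrameRate, value: 120 as CFNumber)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_AllowFrameReordering, value: kCFBooleanFalse)
    // Available on recent OS releases; trades some quality for encode speed.
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_PrioritizeEncodingSpeedOverQuality, value: kCFBooleanTrue)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_MaxKeyFrameInterval, value: 120 as CFNumber)
}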
2
0
545
Jan ’25
H.265 Decoding with VideoToolbox
I am creating an app that decodes H.265 elementary streams on iOS. I use VideoToolbox to decode from H.265 to NV12. The decoded data is enqueued in an AVSampleBufferDisplayLayer as CMSampleBuffers. However, nothing is displayed in the VideoPlayerView; it remains black. The decoding in VideoToolbox is successful. I confirmed this by saving the NV12 data in the CMSampleBuffer to a file and displaying it with a tool. Why is nothing displayed in the VideoPlayerView? I can provide other source code as well.

//
// ContentView.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//

import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Text("H.265 Player (temp.h265)")
                .font(.headline)
            VideoPlayerView()
                .frame(width: 360, height: 640) // Adjust or make it responsive for iOS
        }
        .padding()
    }
}

#Preview {
    ContentView()
}

//
// VideoPlayerView.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//

import SwiftUI
import AVFoundation

struct VideoPlayerView: UIViewRepresentable {
    // Return an H265Player as the coordinator, and start playback there.
    func makeCoordinator() -> H265Player {
        H265Player()
    }

    func makeUIView(context: Context) -> UIView {
        let uiView = UIView(frame: .zero) // Base layer for attaching sublayers
        uiView.backgroundColor = .black // Screen background color (for iOS)

        // Create the display layer and add it to uiView.layer
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.bounds
        displayLayer.backgroundColor = UIColor.clear.cgColor
        uiView.layer.addSublayer(displayLayer)

        // Start playback
        context.coordinator.startPlayback()
        return uiView
    }

    func updateUIView(_ uiView: UIView, context: Context) {
        // Reset the frame of the AVSampleBufferDisplayLayer when the view's size changes.
        let displayLayer = context.coordinator.displayLayer
        displayLayer.frame = uiView.layer.bounds
        // Optionally update the layer's background color, etc.
        uiView.backgroundColor = .black
        displayLayer.backgroundColor = UIColor.clear.cgColor
        // Flush transactions if necessary
        CATransaction.flush()
    }
}

//
// H265Player.swift
// H265Decoder
//
// Created by Kohshin Tokunaga on 2025/02/15.
//

import Foundation
import AVFoundation
import CoreMedia

class H265Player: NSObject, VideoDecoderDelegate {
    let displayLayer = AVSampleBufferDisplayLayer()
    private var decoder: H265Decoder?

    override init() {
        super.init()
        // Initial configuration for the display layer
        displayLayer.videoGravity = .resizeAspect
        // Initialize the decoder (delegate = self)
        decoder = H265Decoder(delegate: self)
        // For simple playback, set isBaseline to true
        decoder?.isBaseline = true
    }

    func startPlayback() {
        // Load the file "cars_320x240.h265"
        guard let url = Bundle.main.url(forResource: "temp2", withExtension: "h265") else {
            print("File not found")
            return
        }
        do {
            let data = try Data(contentsOf: url)
            // Set FPS and video size as needed
            let packet = VideoPacket(data: data, type: .h265, fps: 30, videoSize: CGSize(width: 1080, height: 1920))
            // Decode as a single packet
            decoder?.decodeOnePacket(packet)
        } catch {
            print("Failed to load file: \(error)")
        }
    }

    // MARK: - VideoDecoderDelegate
    func decodeOutput(video: CMSampleBuffer) {
        // When decoding is complete, send the output to AVSampleBufferDisplayLayer
        displayLayer.enqueue(video)
    }

    func decodeOutput(error: DecodeError) {
        print("Decoding error: \(error)")
    }
}
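One hedged thing to check (an assumption, not something from the thread): if the decoded CMSampleBuffers carry no usable presentation timestamps, AVSampleBufferDisplayLayer may never consider them due for display. Marking each buffer to display immediately, and inspecting the layer's status and error, is a common diagnostic step. The enqueueForImmediateDisplay helper name is illustrative.

import AVFoundation
import CoreMedia

// Marks a decoded sample buffer so AVSampleBufferDisplayLayer shows it right away,
// which sidesteps timing problems while debugging a black layer.
func enqueueForImmediateDisplay(_ sampleBuffer: CMSampleBuffer, on layer: AVSampleBufferDisplayLayer) {
    if let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true),
       CFArrayGetCount(attachments) > 0 {
        let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
        CFDictionarySetValue(dict,
                             Unmanaged.passUnretained(kCMSampleAttachmentKey_DisplayImmediately).toOpaque(),
                             Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    }
    if layer.status == .failed {
        print("Display layer failed: \(String(describing: layer.error))")
        layer.flush() // Recover before enqueueing more buffers.
    }
    layer.enqueue(sampleBuffer)
}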
2
0
522
May ’25
SDAVAssetExportSession or AVAssetWriter fail to process iPhone 16 video with spatial audio tracks
I have been using SDAVAssetExportSession to compress videos in an app I am building, and everything went very smoothly until I got my new iPhone 16. On that device, the Spatial Audio camera setting is turned on by default, and SDAVAssetExportSession starts to fail. I know it has something to do with the audio settings. The current configuration is something like this:

exportSession.audioSettings = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44100,
    AVEncoderBitRateKey: 128000
]

This is also passed to the underlying AVAssetReader or AVAssetWriter objects. I am not experienced in this area and really had a hard time trying to figure it out. Does anyone know how to set up AVAssetReader or AVAssetWriter to process video with spatial audio tracks? Thanks in advance.
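Not a confirmed fix, just a hedged sketch of the reader-side configuration often suggested for multichannel sources: have AVAssetReader decode the audio to stereo linear PCM with an explicit channel layout, so the writer's stereo AAC settings have matching input. The keys are real AVFoundation/CoreAudio constants and the stereoPCMReaderSettings name is illustrative; whether this resolves the spatial-audio case on iPhone 16 is an assumption.

import AVFoundation

// Reader output settings: decompress the (possibly spatial / multichannel) source audio
// to stereo linear PCM before re-encoding to stereo AAC on the writer side.
func stereoPCMReaderSettings() -> [String: Any] {
    var stereoLayout = AudioChannelLayout()
    stereoLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo
    let layoutData = Data(bytes: &stereoLayout, count: MemoryLayout<AudioChannelLayout>.size)
    return [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2,
        AVChannelLayoutKey: layoutData,
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
}

// Usage sketch:
// let output = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: stereoPCMReaderSettings())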
1
0
264
Mar ’25
iPhone 15 Pro Has USB-C, but AVCaptureDevice Doesn't Support External Devices?
I'm using an iPhone 15 Pro, which has switched from Lightning to USB Type-C. My iOS version is 18.3. According to Apple's documentation, AVCaptureDevice.DeviceType should support external device types. 🔗 Apple's Official Documentation: https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype-swift.struct/external The documentation clearly states that iPadOS 17.0+ and iOS 17.0+ support external devices. However, in my actual tests: On iPhone, discoverySession does not detect any external devices. On iPad, discoverySession can detect external devices without any issues. My Question: Does iPhone USB-C actually support external devices (e.g., UVC cameras)? If not, why does Apple's documentation claim that iOS 17 supports external devices instead of specifying iPadOS 17 only?
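For reference, a minimal discovery sketch along the lines of what the post describes; the externalVideoDevices helper name and the surrounding session handling are assumptions, not taken from the post.

import AVFoundation

// Lists external cameras (e.g. UVC devices) visible to AVFoundation.
// On iPadOS 17+ this is documented to work; the behavior on iPhone is what the post questions.
func externalVideoDevices() -> [AVCaptureDevice] {
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: [.external],
        mediaType: .video,
        position: .unspecified
    )
    return discovery.devices
}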
1
0
371
Mar ’25
Configuring CaptureVideoDelegate to avoid gamma/transfer function
I'm working on an application that uses the iPhone camera for scientific purposes, and as a result I would like to receive video in as unprocessed a format as possible. In particular, I'm interested in getting pixel buffers that contain pretty much the Bayer data as the sensor sees it, with the minimum color processing possible. Currently we configure the AVCaptureDevice to fix the focus and exposure, use a low ISO with no gain, and set the white-balance gains to 1. AVCaptureVideoDataOutput is using 32BGRA. What I'd like to do is remove any additional color and brightness processing, such that the data is effectively processed with a linear transfer function (i.e. a gamma of 1). I thought this might come down to the AVCaptureDevice activeColorSpace; we currently use P3_D65 for this. But there only seem to be a few choices (e.g. sRGB, HLG_BT2020), all of which I think affect the gamma. So:
Is it possible to control or specify the gamma / transfer function when using CaptureVideoDelegate?
If not, does one of the color space settings have a defined gamma function that I can effectively reverse from the pixel data without losing too much information?
Or is there a better way to capture video-ish speed images (15-30fps) from the camera sensor that skips processing like this?
Many thanks for any suggestions.
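A hedged starting point (an assumption about approach, not an answer from the thread): 32BGRA output is already gamma-encoded RGB produced by the ISP, so one exploratory step is to enumerate the pixel formats the data output actually offers and pick a YCbCr variant whose range and bit depth you prefer, then handle the transfer function yourself. The helper names below are illustrative.

import AVFoundation

// Renders an OSType four-character code (e.g. '420v', '420f', 'x420', 'BGRA') as text.
func fourCCString(_ code: OSType) -> String {
    let bytes: [UInt8] = [
        UInt8((code >> 24) & 0xFF),
        UInt8((code >> 16) & 0xFF),
        UInt8((code >> 8) & 0xFF),
        UInt8(code & 0xFF)
    ]
    return String(bytes: bytes, encoding: .ascii) ?? "\(code)"
}

// Prints the raw pixel formats the current AVCaptureVideoDataOutput can deliver,
// so you can choose something other than 32BGRA.
func logAvailablePixelFormats(of output: AVCaptureVideoDataOutput) {
    for format in output.availableVideoPixelFormatTypes {
        print("Supported pixel format: \(fourCCString(format)) (\(format))")
    }
}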
3
0
75
Mar ’25
Case-ID: 12759603: Memory Leak in UIViewControllerRepresentable and VideoPlayer
Dear Developers and DTS team,

I am writing to seek your expert guidance on a persistent memory leak issue I've discovered while implementing video playback in a SwiftUI application.

Environment Details: iOS 17+, Swift (SwiftUI, AVKit), Xcode 16.2
Target Devices: iPhone 15 Pro (iOS 18.3.2), iPhone 16 Plus (iOS 18.3.2)

Detailed Issue Description: I am experiencing consistent memory leaks when using UIViewControllerRepresentable with AVPlayerViewController (FullscreenVideoPlayer) and the native VideoPlayer, when video playback is terminated.

Code Context: I have implemented the following approaches: added static func dismantleUIViewController(_:coordinator:), included deinit in the Coordinator, and exercised both UIViewControllerRepresentable and the native VideoPlayer.

/// A custom AVPlayer integrated with AVPlayerViewController for fullscreen video playback.
///
/// - Parameters:
///   - videoURL: The URL of the video to be played.
struct FullscreenVideoPlayer: UIViewControllerRepresentable {
    // @Binding something for controlling fullscreen
    let videoURL: URL?

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        let controller = AVPlayerViewController()
        controller.delegate = context.coordinator
        print("AVPlayerViewController created: \(String(describing: controller))")
        return controller
    }

    /// Updates the `AVPlayerViewController` with the provided video URL and playback state.
    ///
    /// - Parameters:
    ///   - uiViewController: The `AVPlayerViewController` instance to update.
    ///   - context: The SwiftUI context for updates.
    func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
        guard let videoURL else {
            print("Invalid videoURL")
            return
        }
        // Initialize AVPlayer if it's not already set
        if uiViewController.player == nil || uiViewController.player?.currentItem == nil {
            uiViewController.player = AVPlayer(url: videoURL)
            print("AVPlayer updated: \(String(describing: uiViewController.player))")
        }
        // Handle playback state
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(parent: self)
    }

    static func dismantleUIViewController(_ uiViewController: AVPlayerViewController, coordinator: Coordinator) {
        uiViewController.player?.pause()
        uiViewController.player?.replaceCurrentItem(with: nil)
        uiViewController.player = nil
        print("dismantleUIViewController called for \(String(describing: uiViewController))")
    }
}

extension FullscreenVideoPlayer {
    class Coordinator: NSObject, AVPlayerViewControllerDelegate {
        var parent: FullscreenVideoPlayer

        init(parent: FullscreenVideoPlayer) {
            self.parent = parent
        }

        deinit {
            print("Coordinator deinitialized")
        }
    }
}

struct ContentView: View {
    private let videoURL: URL? = URL(string: "https://interactive-examples.mdn.mozilla.net/media/cc0-videos/flower.mp4")

    var body: some View {
        NavigationStack {
            Text("My Userful View")
            List {
                Section("VideoPlayer") {
                    NavigationLink("FullscreenVideoPlayer") {
                        FullscreenVideoPlayer(videoURL: videoURL)
                            .frame(height: 500)
                    }
                    NavigationLink("Native VideoPlayer") {
                        VideoPlayer(player: .init(url: videoURL!))
                            .frame(height: 500)
                    }
                }
            }
        }
    }
}

Reproducibility Steps:
Run the application on the target devices.
Scenario A - FullscreenVideoPlayer: tap FullscreenVideoPlayer, play the video to completion, repeat the process 5 times.
Scenario B - VideoPlayer: navigate back to the main screen, tap Native VideoPlayer, play the video to completion, repeat the process 5 times.

Observed Memory Leak Characteristics:
Per iteration (Debug Memory Graph): 4 instances of NSMutableDictionary (Storage) leaked, 4 instances of __NSDictionaryM leaked, 4 × 112-byte malloc blocks leaked.
Cumulative effects: the debug console prints "dismantleUIViewController called for <AVPlayerViewController: 0x{String}>" and "Coordinator deinitialized" when navigating back to the main screen, and after multiple iterations the leaked instances double.

Specific Questions:
What underlying mechanisms are causing these memory leaks in UIViewControllerRepresentable and VideoPlayer?
What are the recommended strategies to comprehensively prevent and resolve these memory management issues?
1
0
121
Mar ’25
Converting a CVPixelBuffer (2vuy) to a Metal Texture
I am working on a project for macOS where I take an AVCaptureSession's CVPixelBuffer and need to convert it into an MTLTexture for rendering. On macOS the pixel format is '2vuy', and there does not seem to be a clear conversion for that format when creating a Metal texture. I have been able to convert it to a texture, but the color space seems to be off, as it renders distorted colors with a double image. I believe '2vuy' is a single-plane format and I have tried to account for that, but I am unaware of what is off. I have attached the CVPixelBuffer, the distorted MTLTexture, and a laundry list of errors. On iOS my conversions are fine; it is only the macOS '2vuy' pixel format that seems to have issues. My code for the conversion is also attached. If there are any suggestions or guidance on how to properly convert a '2vuy' CVPixelBuffer to an MTLTexture, I would greatly appreciate it. Many thanks. Conversion_Logs.txt ConversionCode.swift
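A hedged sketch of one mapping that is often suggested for '2vuy' (kCVPixelFormatType_422YpCbCr8, UYVY ordering): create the texture through a CVMetalTextureCache using MTLPixelFormat.bgrg422 at the full pixel width, then do the YCbCr-to-RGB conversion in a shader. The makeTexture name is illustrative, the cache is assumed to have been created with CVMetalTextureCacheCreate, and the format mapping is an assumption to verify rather than a confirmed fix; getting it backwards (gbgr422 vs bgrg422) produces exactly the kind of color distortion described.

import CoreVideo
import Metal

// Wraps a '2vuy' (UYVY) CVPixelBuffer as an MTLTexture via a CVMetalTextureCache.
// The YCbCr -> RGB conversion still has to happen in a shader afterwards.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        cache,
        pixelBuffer,
        nil,
        .bgrg422,   // assumed mapping for '2vuy' / UYVY; try .gbgr422 if the colors come out swapped
        width,
        height,
        0,          // plane index; '2vuy' is a single-plane packed format
        &cvTexture)

    guard status == kCVReturnSuccess, let cvTexture else {
        print("CVMetalTextureCacheCreateTextureFromImage failed: \(status)")
        return nil
    }
    return CVMetalTextureGetTexture(cvTexture)
}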
3
0
139
Apr ’25
AVCaptureSession video and audio out of sync
I'm using an AVCaptureSession to send video and audio samples to an AVAssetWriter. When I play back the resulting video, sometimes there is a significant lag between the audio and the video, so they're just not in sync. But sometimes they are, with the same code. If I look at the very first presentation time stamps of the buffers being sent to the delegate, via func captureOutput(_: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection), I see something like this: Adding audio samples for pts time 227711.0855328798, Adding video samples for pts time 227710.778785374. That is, the audio and video clocks don't line up: the first audio sample I receive has a pts of 11.08 something, while the first video sample is earlier in time, at 10.778 something. The times are the presentation time stamps of the buffers, and the outputPresentationTimeStamp is the exact same number. It feels like the "video" and "audio" clocks are just mismatched. This doesn't always happen: sometimes they're synced, sometimes they're not. Any ideas? The device I'm recording is a webcam, on iPadOS, connected via the USB-C port.
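For what it's worth, a hedged sketch of the session-start pattern that often matters here (assumed context: both outputs feed one AVAssetWriter on which startWriting() has already been called): start the writer session at the timestamp of the first buffer actually appended, whichever media type arrives first, so an early video PTS and a later audio PTS share one zero point instead of being interpreted as an offset. The SyncedWriter name is illustrative.

import AVFoundation

final class SyncedWriter {
    private let writer: AVAssetWriter
    private var sessionStarted = false

    init(writer: AVAssetWriter) {
        self.writer = writer
    }

    // Call from the capture delegate for every buffer, audio or video.
    // The session clock starts at the PTS of the very first buffer we keep.
    func append(_ sampleBuffer: CMSampleBuffer, to input: AVAssetWriterInput) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if !sessionStarted {
            writer.startSession(atSourceTime: pts)
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }
}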
3
0
80
Apr ’25
CVPixelBufferCreate EXC_BAD_ACCESS
I am doing something similar to this post. Within an AVCaptureDataOutputSynchronizerDelegate method, I create a pixel buffer using CVPixelBufferCreate with the following attributes: kCVPixelBufferIOSurfacePropertiesKey as String: true, kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey as String: true. When I copy the data from the vImagePixelBuffer "rotatedImageBuffer", I get the following error: Thread 10: EXC_BAD_ACCESS (code=1, address=0x14caa8000). I get the same error with memcpy and data.copyBytes (not running them at the same time, obviously). If I use CVPixelBufferCreateWithBytes, I do not get this error. However, CVPixelBufferCreateWithBytes does not let you include attributes (see linked post above). I am using vImage because I need the original CVPixelBuffer from the camera output and a rotated version with a different color scheme.

// Copy to pixel buffer
let attributes: NSDictionary = [
    true : kCVPixelBufferIOSurfacePropertiesKey,
    true : kCVPixelBufferIOSurfaceOpenGLESTextureCompatibilityKey,
]
var colorBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, attributes, &colorBuffer)
//let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(rotatedImageBuffer.width), Int(rotatedImageBuffer.height), kCVPixelFormatType_32BGRA, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes, nil, nil, attributes as CFDictionary, &colorBuffer) // does NOT produce error, but also does not have attributes
guard status == kCVReturnSuccess, let colorBuffer = colorBuffer else {
    print("Failed to create buffer")
    return
}
let lockFlags = CVPixelBufferLockFlags(rawValue: 0)
guard kCVReturnSuccess == CVPixelBufferLockBaseAddress(colorBuffer, lockFlags) else {
    print("Failed to lock base address")
    return
}
let colorBufferMemory = CVPixelBufferGetBaseAddress(colorBuffer)!
let data = Data(bytes: rotatedImageBuffer.data, count: rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height))
data.copyBytes(to: colorBufferMemory.assumingMemoryBound(to: UInt8.self), count: data.count) // Fails here
//memcpy(colorBufferMemory, rotatedImageBuffer.data, rotatedImageBuffer.rowBytes * Int(rotatedImageBuffer.height)) // Also produces the same error
CVPixelBufferUnlockBaseAddress(colorBuffer, lockFlags)
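A hedged guess at the cause, with an illustrative sketch (not from the thread): CVPixelBufferCreate is free to pad each row, so the destination's bytes-per-row usually differs from the vImage buffer's rowBytes; copying rowBytes × height in one shot then writes past the end of the destination. Copying row by row respects both strides. Note also that the attributes literal above has keys and values reversed; the usual form puts the kCVPixelBuffer... constants on the key side. The copyRowByRow name is illustrative.

import Foundation
import CoreVideo
import Accelerate

// Copies a vImage_Buffer into a freshly created BGRA CVPixelBuffer row by row,
// honoring the destination's (possibly padded) bytes-per-row.
func copyRowByRow(from source: vImage_Buffer, to destination: CVPixelBuffer) {
    CVPixelBufferLockBaseAddress(destination, [])
    defer { CVPixelBufferUnlockBaseAddress(destination, []) }

    guard let dstBase = CVPixelBufferGetBaseAddress(destination),
          let srcBase = source.data else { return }

    let dstBytesPerRow = CVPixelBufferGetBytesPerRow(destination)
    let srcBytesPerRow = source.rowBytes
    let bytesPerRowToCopy = min(dstBytesPerRow, srcBytesPerRow)
    let height = min(Int(source.height), CVPixelBufferGetHeight(destination))

    for row in 0..<height {
        memcpy(dstBase.advanced(by: row * dstBytesPerRow),
               srcBase.advanced(by: row * srcBytesPerRow),
               bytesPerRowToCopy)
    }
}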
1
0
92
Apr ’25
Save an MPEG-TS (H.264 or HEVC) video stream using AVAssetWriter
I'm capturing a video stream from a GoPro camera (I demux UDP MPEG-TS packets) and create CMSampleBuffers from them; this works fine when I display them using AVSampleBufferDisplayLayer. However, when I dump them to disk using AVAssetWriter and then play the result back with AVPlayer, AVPlayer has problems with scrubbing; it also cannot render previous frames and has to go back to key frames. Also, thumbnails generated with AVAssetImageGenerator are mostly distorted and green, even though I set requestedTimeToleranceAfter longer than the key-frame interval. When I re-encode the saved video with AVAssetExportSession and play that back, I can scrub just fine. Is it because re-transcoding adds additional metadata that enables generating frames when rewinding and scrubbing? If so, is there a way to achieve this with AVAssetWriter without much of a time penalty? I need the dump/save operation to be very fast. I also considered the following: instead of demuxing the video and creating CMSampleBuffers, maybe I could dump the stream directly to disk and somehow add moov atoms with timing information. Would this approach work? If so, where can I find information on how to do it? Thank you!
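A hedged hypothesis with an illustrative snippet (an assumption, not taken from the thread): when CMSampleBuffers are built by hand from a demuxed stream, every buffer is treated as a sync sample unless the non-keyframes are explicitly marked, and a file whose sample table claims every frame is a keyframe scrubs and thumbnails badly, which matches the symptoms. Marking dependent frames before appending is cheap; the markSyncInfo name and the keyframe detection are assumed.

import CoreMedia

// Marks a hand-built CMSampleBuffer as a non-keyframe so AVAssetWriter records
// a correct sync-sample table. `isKeyFrame` would come from the demuxer
// (e.g. an IDR/IRAP NAL unit check) - that detection is assumed here.
func markSyncInfo(on sampleBuffer: CMSampleBuffer, isKeyFrame: Bool) {
    guard !isKeyFrame,
          let attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, createIfNecessary: true),
          CFArrayGetCount(attachments) > 0 else { return }

    let dict = unsafeBitCast(CFArrayGetValueAtIndex(attachments, 0), to: CFMutableDictionary.self)
    CFDictionarySetValue(dict,
                         Unmanaged.passUnretained(kCMSampleAttachmentKey_NotSync).toOpaque(),
                         Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
    // Hinting that the frame depends on others can also help some readers.
    CFDictionarySetValue(dict,
                         Unmanaged.passUnretained(kCMSampleAttachmentKey_DependsOnOthers).toOpaque(),
                         Unmanaged.passUnretained(kCFBooleanTrue).toOpaque())
}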
3
0
100
Apr ’25
videoCaptureQueue makes the app crash when using iOS 18.4.1
Hi all, I have a problem when using iOS 18.4.1. I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1. I tried following the sample code below. However, when I run the app for around 30 seconds to 1 minute, the application crashes. When I use another iPad running iOS 17, the same problem does not occur. https://developer.apple.com/documentation/createml/creating-an-action-classifier-model https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview%29,
6
0
105
May ’25