Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Post | Replies | Boosts | Views | Activity

Crash with AVMutableComposition insertTimeRange Method
A small number of crashes are being reported on Firebase. When attempting to use the insertTimeRange:ofAsset:atTime:error: method of AVMutableComposition, a crash occurs with the error message -[__NSArrayM insertObject:atIndex:]: object cannot be nil. Most of the reports are from iOS 17.0 and above. Here's my code:

```objc
- (AVMutableComposition *)createtrimAsset:(AVAsset *)asset andStartTime:(CGFloat)startTime endTime:(CGFloat)endTime {
    NSError *error = nil;
    CGFloat timescale = 1000000;
    AVMutableComposition *mutableComposition = [AVMutableComposition composition];
    CMTime sStartTime = CMTimeMakeWithSeconds(CMTimeGetSeconds(asset.duration) * startTime, timescale);
    CMTime eEndTime = CMTimeMakeWithSeconds(CMTimeGetSeconds(asset.duration) * endTime, timescale);
    [mutableComposition insertTimeRange:CMTimeRangeMake(sStartTime, CMTimeSubtract(eEndTime, sStartTime))
                                ofAsset:asset
                                 atTime:kCMTimeZero
                                  error:&error];
    return mutableComposition;
}
```

I attempted to reproduce this crash by deliberately setting the timeRange or asset to unusual values, such as asset = nil, asset.duration = 0, or asset.duration = NAN, but all attempts failed. What could be causing this exception? Any advice would be of great help to me.
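For reference, a minimal Swift sketch (not the poster's code; the validation thresholds are assumptions) of the kind of defensive checks that can rule out a bad asset or an inverted range before calling insertTimeRange, and that surfaces the returned error instead of discarding it:

```swift
import AVFoundation

// Hypothetical helper mirroring the Objective-C method above.
func makeTrimmedComposition(from asset: AVAsset,
                            startFraction: Double,
                            endFraction: Double) -> AVMutableComposition? {
    let duration = asset.duration
    // An asset with no tracks or an invalid/zero duration is a plausible way to end up
    // inserting nil objects internally, so bail out early instead of crashing.
    guard duration.isValid, !duration.isIndefinite, duration > .zero,
          !asset.tracks.isEmpty else { return nil }

    let start = CMTime(seconds: duration.seconds * startFraction, preferredTimescale: 1_000_000)
    let end = CMTime(seconds: duration.seconds * endFraction, preferredTimescale: 1_000_000)
    guard end > start else { return nil }

    let composition = AVMutableComposition()
    do {
        try composition.insertTimeRange(CMTimeRange(start: start, end: end), of: asset, at: .zero)
    } catch {
        print("insertTimeRange failed: \(error)")  // log the error rather than ignoring it
        return nil
    }
    return composition
}
```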
0
0
26
1d
When adding a VideoPlayerComponent to an Entity placed in ImmersiveSpace and attempting to play an 8K video, the application crashes.
OS: visionOS 1.0, Xcode: 15.2. In the application under development, the crash occurs with the following steps:

- Open an ImmersiveSpace
- Add a VideoPlayerComponent to an Entity
- Play an 8K video

The app crashes: the Apple logo appears and the system returns to the Home view. The problem does not occur if the application is created by extracting only the 8K video playback portion. Error log:

```
apply fence tx failed (client=0x6fbf0fcc) [0xfffffecc (ipc/mig) server died]
Failed to commit transaction (client=0x58510d43) [0x10000003 (ipc/send) invalid destination port]
nw_read_request_report [C1] Receive failed with error "No message available on STREAM"
nw_protocol_socket_reset_linger [C1:2] setsockopt SO_LINGER failed [22: Invalid argument]
Failed to set override status for bind point component member.
Message from debugger: Terminated due to signal 9
```

I can't share the entire application, but is anyone else experiencing the same problem? Is this a memory issue?
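For context, a minimal Swift sketch of the setup the steps describe (function name and URL handling are placeholders, not the poster's code); "Terminated due to signal 9" is the kind of kill the system issues when a process exceeds its limits, which would be consistent with a memory issue during 8K decode:

```swift
import RealityKit
import AVFoundation

// Attach a VideoPlayerComponent to an entity placed in an immersive scene and start playback.
func makeVideoEntity(url: URL) -> Entity {
    let player = AVPlayer(url: url)      // the 8K asset in the reported case
    let entity = Entity()
    let component = VideoPlayerComponent(avPlayer: player)
    entity.components.set(component)
    player.play()
    return entity
}
```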
0
0
64
1d
When VideoToolbox compresses JPEG and H.264, the output format (YUV420) is different from the input format.
On M-series machines, VideoToolbox hardware compression takes a YUV 4:2:2 input (kCVPixelFormatType_422YpCbCr8BiPlanarVideoRange), but the compressed JPEG output remains YUV 4:2:0. On Intel-series GPUs, compression requires a YUV 4:2:0 input (kCVPixelFormatType_420YpCbCr8Planar), and the compressed JPEG output is YUV 4:2:2. In both cases the output format after compression is not consistent with the input format. Does VideoToolbox hardware compression support outputting YUV 4:2:2 or YUV 4:4:4 JPEG images and H.264 streams?
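As a sketch only (not an answer to whether 4:2:2/4:4:4 output is supported), one way to see what the hardware encoder exposes on a given machine is to create a compression session and dump its supported properties:

```swift
import VideoToolbox

// Create a hardware H.264 session and print the property keys it supports,
// to check whether any output chroma-format control is exposed at all.
func dumpSupportedProperties(width: Int32, height: Int32) {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(
        allocator: nil,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_H264,
        encoderSpecification: nil,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session)
    guard status == noErr, let session else { return }

    var supported: CFDictionary?
    VTSessionCopySupportedPropertyDictionary(session, supportedPropertyDictionaryOut: &supported)
    print(supported ?? "no supported-property dictionary")
}
```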
0
1
127
3d
WWDC Lab feedback
I am writing to follow up on my lab session at WWDC24. I had a 1:1 lab with Mr. Kavin; we had a good 30-minute session, and Kavin asked me to post the follow-up questions here as feedback. My question is the following: we have screen sharing in our application and are trying to use CFMessagePort to pass a CVPixelBufferRef from the broadcast extension to the application.

Questions:

- How can we copy the planes of an IOSurface-backed CVPixelBufferRef onto another one without using memcpy? Is there a zero-copy method?
- How can we get notified when the data of an IOSurface-backed CVPixelBufferRef is changed by another process?
- How can we send an IOSurface-backed CVPixelBufferRef from the broadcast extension to the application?
- How can we pass an unowned IOSurfaceRef from the broadcast extension to the application?
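On the third and fourth questions, the usual zero-copy route (a sketch under assumptions, not a confirmed recipe) is to wrap the buffer's IOSurface in a mach port, send the port over the IPC channel, and rebuild a CVPixelBuffer around the same surface on the other side:

```swift
import CoreVideo
import IOSurface

// Sender side (broadcast extension): no pixel data is copied, only a port is transferred.
func machPort(for pixelBuffer: CVPixelBuffer) -> mach_port_t? {
    guard let surface = CVPixelBufferGetIOSurface(pixelBuffer)?.takeUnretainedValue() else {
        return nil  // the buffer is not IOSurface-backed
    }
    return IOSurfaceCreateMachPort(surface)
}

// Receiver side (host application): look the surface up and wrap it without copying.
func pixelBuffer(from port: mach_port_t) -> CVPixelBuffer? {
    guard let surface = IOSurfaceLookupFromMachPort(port) else { return nil }
    var buffer: CVPixelBuffer?
    CVPixelBufferCreateWithIOSurface(kCFAllocatorDefault, surface, nil, &buffer)
    return buffer
}
```

How the port itself travels (CFMessagePort, XPC, etc.) and how change notifications are delivered are separate questions; this only covers the wrapping.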
0
1
42
4d
ProRes 4444 blocky compression artifacts
I’m creating an Objective-C command-line utility to encode RAW image sequences to ProRes 4444, but I’m encountering blocky compression artifacts in the ProRes 4444 video output. To test the integrity of the image data before encoding to ProRes, I added a snippet in my encoding function that saves a 16-bit PNG before encoding to ProRes, and the PNG looks perfect; I can see all detail in every part of the image's dynamic range. Here’s a comparison between the 16-bit PNG (on the right) and the ProRes 4444 output (on the left). As a further test, I re-encoded the test PNG to ProRes 4444 using DaVinci Resolve, and the ProRes 4444 output video from Resolve doesn’t have any blocky compression artifacts; it looks identical.

In short, this is what the utility does:

- Unpacks the 12-bit raw data into 16-bit values.
- After unpacking, debayers the raw data to convert it into a standard color image format (BGR) using OpenCV.
- Scales the debayered pixel values from their original 12-bit depth to fit into a 16-bit range. Up to this point everything is fine and confirmed by saving 16-bit PNGs.
- Encodes the images to ProRes 4444 using the AVFoundation framework. The pixel buffers are created and managed using an attributes dictionary with kCVPixelFormatType_64RGBALE.

I need help figuring this out; I’m a real novice when it comes to AVFoundation/encoding to ProRes. See the relevant parts of my encodeToProRes function:

```objc
void encodeToProRes(const std::string &outputPath, const std::vector<std::string> &rawPaths, const std::string &proResFlavor) {
    NSError *error = nil;
    NSURL *url = [NSURL fileURLWithPath:[NSString stringWithUTF8String:outputPath.c_str()]];
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeQuickTimeMovie error:&error];
    if (error) {
        std::cerr << "Error creating AVAssetWriter: " << error.localizedDescription.UTF8String << std::endl;
        return;
    }

    // Load the first image to get the dimensions
    std::cout << "Debayering the first image to get dimensions..." << std::endl;
    Mat firstImage;
    int width = 5320;
    int height = 3900;
    if (!debayer_image(rawPaths[0], firstImage, width, height)) {
        std::cerr << "Error debayering the first image" << std::endl;
        return;
    }
    width = firstImage.cols;
    height = firstImage.rows;

    // Save the first frame as a 16-bit PNG for validation
    std::string pngFilePath = outputPath + "_frame1.png";
    if (!imwrite(pngFilePath, firstImage)) {
        std::cerr << "Error: Failed to save the first frame as a PNG image" << std::endl;
    } else {
        std::cout << "First frame saved as PNG: " << pngFilePath << std::endl;
    }

    NSString *codecKey = nil;
    if (proResFlavor == "4444") {
        codecKey = AVVideoCodecTypeAppleProRes4444;
    } else if (proResFlavor == "422HQ") {
        codecKey = AVVideoCodecTypeAppleProRes422HQ;
    } else if (proResFlavor == "422") {
        codecKey = AVVideoCodecTypeAppleProRes422;
    } else if (proResFlavor == "LT") {
        codecKey = AVVideoCodecTypeAppleProRes422LT;
    } else {
        std::cerr << "Error: Invalid ProRes flavor specified: " << proResFlavor << std::endl;
        return;
    }

    NSDictionary *outputSettings = @{
        AVVideoCodecKey: codecKey,
        AVVideoWidthKey: @(width),
        AVVideoHeightKey: @(height)
    };
    AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
    videoInput.expectsMediaDataInRealTime = YES;

    NSDictionary *pixelBufferAttributes = @{
        (id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_64RGBALE),
        (id)kCVPixelBufferWidthKey: @(width),
        (id)kCVPixelBufferHeightKey: @(height)
    };
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoInput sourcePixelBufferAttributes:pixelBufferAttributes];
    ...
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    CMTime frameDuration = CMTimeMake(1, 24); // Frame rate of 24 fps
    int numFrames = static_cast<int>(rawPaths.size());
    ...
    // Encoding thread
    std::thread encoderThread([&]() {
        int frameIndex = 0;
        std::vector<CVPixelBufferRef> pixelBufferBuffer;
        while (frameIndex < numFrames) {
            std::unique_lock<std::mutex> lock(queueMutex);
            queueCondVar.wait(lock, [&]() { return !frameQueue.empty() || debayeringFinished; });
            if (!frameQueue.empty()) {
                auto [index, debayeredImage] = frameQueue.front();
                frameQueue.pop();
                lock.unlock();
                if (index == frameIndex) {
                    cv::Mat rgbaImage;
                    cv::cvtColor(debayeredImage, rgbaImage, cv::COLOR_BGR2RGBA);
                    CVPixelBufferRef pixelBuffer = NULL;
                    CVReturn result = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pixelBuffer);
                    if (result != kCVReturnSuccess) {
                        std::cerr << "Error: Could not create pixel buffer" << std::endl;
                        dispatch_group_leave(dispatchGroup);
                        return;
                    }
                    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
                    void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
                    for (int row = 0; row < height; ++row) {
                        memcpy(static_cast<uint8_t*>(pxdata) + row * CVPixelBufferGetBytesPerRow(pixelBuffer),
                               rgbaImage.ptr(row), width * 8);
                    }
                    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
                    pixelBufferBuffer.push_back(pixelBuffer);
                    ...
```

Thanks very much!
1
0
118
6d
AirDrop Requesting to Open Video via 3rd Party App Instead of Photo Library
I film and edit content using my iPhone and Premiere Pro. I've been doing this since 2017, so I'd like to say I know my way around an iPhone and the Premiere Pro software well enough to get a finished product to my client. Before I send content to clients, I AirDrop it to my phone to make sure the quality is up to par. On this most recent project, I've had issues AirDropping it to myself. Each time I do so, my iPhone prompts which third-party app I would like to open the video in, rather than automatically opening or saving the video into my photo library. The specs are listed below:

- Filmed on iPhone 15 Plus
- iOS version 17.5.1 (at the moment this is the most up-to-date software update)
- Filmed in 4K at 60 fps
- Ample storage space on my phone; the video file size is 220 MB
- Premiere Pro export settings: Video Settings - H.264, Field Order: Progressive, Bit Rate Encoding: CBR

I will say that I purchased a transitions-and-burns bundle and used it for the first time on this project. All materials used are in .mp4 format and the blend mode was set to overlay. Nothing out of pocket. I figured it wouldn't be a problem since my client would just be downloading it via Dropbox, but there was an issue there as well. My client received an error message saying, "Sorry, this type of video cannot be saved to this device." The roundabout workaround was to take the saved video, plug it into a separate Premiere Pro project, and export it with the preset Match Source - Adaptive High Bitrate. I was able to AirDrop it to myself, it saved to my photo album, and my client was able to download it without receiving that error message. If there is an explanation as to why I am having this issue and how I can avoid it, I would really appreciate it, as I have never had this problem in the past.
0
0
117
1w
Reducing storage of similar PNGs by compressing them into a video and retrieving them losslessly--possibility or dumb idea?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another. I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist. Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage. Any suggestions on how to reduce storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
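One detail worth ruling out on the retrieval side (a sketch under assumptions, not a diagnosis): AVAssetImageGenerator's default time tolerance lets it return a nearby frame rather than the exact one, so artifacts in a comparison can come from reading the wrong frame as well as from the codec itself. Even with exact frames, HEVC with alpha is lossy, so this alone may not make the round trip acceptable:

```swift
import AVFoundation

// Read back exact frames from the intermediate .mov at the requested times.
func extractFrames(from movieURL: URL, times: [CMTime]) async throws -> [CGImage] {
    let asset = AVURLAsset(url: movieURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero   // demand the exact frame, not a neighbor
    generator.requestedTimeToleranceAfter = .zero
    generator.appliesPreferredTrackTransform = true

    var images: [CGImage] = []
    for time in times {
        let (image, _) = try await generator.image(at: time)   // async API, iOS 16+/macOS 13+
        images.append(image)
    }
    return images
}
```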
18
0
291
1w
How to setup SharePlay reliably?
Hi, I’m developing an app that uses SharePlay. Specifically, I’m using ShareLink in my SwiftUI-based app so that when two devices come close, SharePlay starts via AirDrop, just like how NameDrop works (the animation is super cool, btw). However, I’ve noticed that SharePlay doesn’t start reliably, and I have the following questions:

- Do both devices need to be signed in with different Apple IDs? I wish it worked with the same Apple ID.
- When both devices are running my app, the sharing does not seem to start; maybe both of them are trying to be the host app?
- When I try to demo this NameDrop-like transaction via Zoom, it usually doesn’t work; maybe because the cable is connected to the Lightning port?
- Does a Mac app (in my case, Zoom or even QuickTime) capturing the device's screen make a successful SharePlay transaction less likely?

Thanks!
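For what it's worth, a minimal sketch of the GroupActivities pieces behind this flow (activity name, identifier, and metadata are placeholders, and this is not necessarily how the poster's app is structured): both devices define the activity and listen for incoming sessions, so whichever side receives the share joins instead of competing to host:

```swift
import GroupActivities

struct WatchTogether: GroupActivity {
    static let activityIdentifier = "com.example.watch-together"   // placeholder
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Watch Together"
        meta.type = .generic
        return meta
    }
}

// Initiating side: ask the system whether activation is appropriate, then activate.
func startSharing() async {
    let activity = WatchTogether()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try? await activity.activate()
    default:
        break
    }
}

// Both sides: listen for sessions so the receiving device can join.
func observeSessions() async {
    for await session in WatchTogether.sessions() {
        session.join()
    }
}
```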
0
0
96
1w
Add 30 frames per second in assetWriter
Hello, I have converted a UIImage to a CVPixelBuffer. I am creating a video-writing app. In some cases, the same CVPixelBuffer should last in the video for 2 seconds or more. However, I need to add 30 CVPixelBuffers per second because the video, to work on social media, must be 30 frames per second. The problem is that whenever I try to add frames to long videos, like 50-minute videos, it gives an error. The error is something like "Operation cannot be completed". Could you give me an example of a loop that adds 30 CVPixelBuffers per second to a video currently being written? Example:

```swift
while true {
    if videoInput.isReadyForMoreMediaData { break }
    if videoInput.isReadyForMoreMediaData, let buffer = videoProvider.getNextFrame() {
        adaptor.append(buffer, withPresentationTime: CMTime(value: 1, timescale: 30))
    }
}
```

I await your response.
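As a sketch (the helper names are assumptions, not the original project's code), the usual pattern is to drive appends from requestMediaDataWhenReady and give every frame a strictly increasing presentation time; reusing CMTime(value: 1, timescale: 30) for every frame, as in the snippet above, is a likely cause of the failure:

```swift
import AVFoundation

func appendFrames(writer: AVAssetWriter,
                  input: AVAssetWriterInput,
                  adaptor: AVAssetWriterInputPixelBufferAdaptor,
                  totalFrames: Int,
                  frameProvider: @escaping (Int) -> CVPixelBuffer?) {
    var frameIndex = 0
    let queue = DispatchQueue(label: "video.append")   // placeholder label

    input.requestMediaDataWhenReady(on: queue) {
        // Append while the input can accept data; when it can't, return and wait to be called again.
        while input.isReadyForMoreMediaData && frameIndex < totalFrames {
            guard let buffer = frameProvider(frameIndex) else { break }
            let time = CMTime(value: CMTimeValue(frameIndex), timescale: 30)  // frame N lands at N/30 s
            if !adaptor.append(buffer, withPresentationTime: time) {
                print("append failed: \(writer.error?.localizedDescription ?? "unknown")")
                break
            }
            frameIndex += 1
        }
        if frameIndex >= totalFrames {
            input.markAsFinished()
            writer.finishWriting { }
        }
    }
}
```

For a frame that stays on screen for 2 seconds, it is not strictly necessary to append 60 copies: a written frame persists until the next one's presentation time, so appending it once and appending the following frame 2 seconds later also produces valid output, though some destinations may prefer a constant 30 fps stream.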
0
0
151
1w
Taking a screenshot that includes AVCaptureVideoPreviewLayer
I have an app that displays overlays on top of an AVCaptureVideoPreviewLayer (basically AR without ARKit), and my users have repeatedly requested a button that will allow them to capture a screenshot of both the video and the overlays and surrounding UI with a single tap. However, I cannot find a way to actually take such a screenshot. I have tried the usual methods of rendering views to images, such as calling drawViewHierarchyInRect on my top level view or calling renderInContext on the same view's layer. These all work perfectly to capture the overlays and the surrounding UI elements, but there is nothing but black where the video preview's contents should be. snapshotViewAfterScreenUpdates: does capture exactly what I want, but snapshot views cannot be written to an image. From what I understand that's an intentional security decision by Apple. I've considered using ReplayKit to take a very short screen recording and then using an AVAssetImageGenerator to grab a frame from that video, but I don't think that's how those APIs were intended to be used and it's an additional permission to request from the user. I would really rather not do this if there is any alternative (and I'm not even sure it would work). Is there any reasonable method to render a view hierarchy to an image in such a way as to capture the contents of any video preview layers found within that hierarchy?
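One approach worth sketching (assumed names throughout, and only a sketch): keep the most recent frame from an AVCaptureVideoDataOutput attached to the same session, then composite it with the overlay hierarchy rendered by UIGraphicsImageRenderer; the preview layer itself is never rendered, so it cannot come out black:

```swift
import AVFoundation
import CoreImage
import UIKit

// Holds the latest camera frame delivered by an AVCaptureVideoDataOutput.
final class FrameKeeper: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let context = CIContext()
    var latestImage: UIImage?

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            latestImage = UIImage(cgImage: cgImage)
        }
    }
}

// `overlayView` is assumed to contain the overlays/UI but not an opaque preview layer;
// if the preview layer is inside this hierarchy, it will draw as black over the frame.
func composedScreenshot(frame: UIImage?, overlayView: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: overlayView.bounds)
    return renderer.image { _ in
        frame?.draw(in: overlayView.bounds)   // camera frame first (aspect handling omitted)
        overlayView.drawHierarchy(in: overlayView.bounds, afterScreenUpdates: false)
    }
}
```

Orientation and the mapping between the captured frame and the preview layer's videoGravity are left out here and would need handling in a real implementation.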
0
0
161
2w
On Sonoma 14.5, after upgrading CMIO CameraExtension, daemon is not running
I made a CameraExtension and installed it via OSSystemExtensionRequest, and I got the success callback. I then uninstalled the old version of my CameraExtension and installed the new version. The "systemextensionsctl list" command shows "[activated enabled]" for my new version, but no daemon process for my CameraExtension is running. I need to reboot the OS to start the daemon process. This issue is new in macOS Sonoma 14.5; I did not see it on 14.4.x.
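For reference, a minimal sketch of the activation/upgrade flow described above (the bundle identifier is a placeholder); on 14.4.x the same flow reportedly left the extension's daemon running without a reboot:

```swift
import SystemExtensions

final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate() {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: "com.example.app.camera-extension",   // placeholder
            queue: .main)
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    // Upgrading goes through the same activation path; this decides whether the old build is replaced.
    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
        return .replace
    }

    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {}

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        print("activation finished: \(result)")   // the success callback mentioned in the post
    }

    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {
        print("activation failed: \(error)")
    }
}
```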
0
0
257
3w
How can I cache only the initial few seconds (chunks) of an HLS stream on the disk?
Hi, I am working on an app that is very similar to TikTok in terms of video experience. There is an infinite scroll feed of videos, and I am using HLS URLs as the video source. My requirement is to cache the initial few seconds of each video on the disk while the video is playing. The next time a user views the video, it should play the initial few seconds from the cache, with the subsequent chunks coming from the network. Additionally, when there is no network connection, the video should still play the initial few seconds from the cache. I was able to achieve this with MP4 using AVAssetResourceLoaderDelegate, but the same approach is not possible with HLS. What are some other ways through which I can implement this feature? Thanks.
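One avenue worth sketching (an assumption-heavy sketch, not a confirmed recipe): an AVAssetDownloadTask keeps whatever it has already fetched if it is cancelled partway, and a partially downloaded HLS asset plays from disk and streams the remainder from the network, which maps fairly closely to the requirement:

```swift
import AVFoundation

final class HLSPrefetcher: NSObject, AVAssetDownloadDelegate {
    private var session: AVAssetDownloadURLSession!
    private var task: AVAssetDownloadTask?
    private let secondsToCache: Double = 5   // assumption: "initial few seconds"

    func prefetch(url: URL) {
        let config = URLSessionConfiguration.background(withIdentifier: "hls.prefetch")  // placeholder identifier
        session = AVAssetDownloadURLSession(configuration: config,
                                            assetDownloadDelegate: self,
                                            delegateQueue: .main)
        task = session.makeAssetDownloadTask(asset: AVURLAsset(url: url),
                                             assetTitle: "prefetch",
                                             assetArtworkData: nil,
                                             options: nil)
        task?.resume()
    }

    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didLoad timeRange: CMTimeRange,
                    totalTimeRangesLoaded loadedTimeRanges: [NSValue],
                    timeRangeExpectedToLoad: CMTimeRange) {
        let loadedSeconds = loadedTimeRanges
            .map { $0.timeRangeValue.duration.seconds }
            .reduce(0, +)
        if loadedSeconds >= secondsToCache {
            assetDownloadTask.cancel()   // keep only the first few seconds on disk
        }
    }

    func urlSession(_ session: URLSession,
                    assetDownloadTask: AVAssetDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Persist this location and build the playback AVURLAsset from it next time.
    }
}
```

Whether fully offline playback of only the cached head behaves acceptably would need testing; the player may stall at the cache boundary rather than stopping cleanly.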
2
0
229
3w
iOS to Android H264 encoding issue.
I'm trying to cast the screen from an iOS device to an Android device. I'm leveraging ReplayKit on iOS to capture the screen and VideoToolbox for compressing the captured video data into H.264 format using CMSampleBuffers. Both iOS and Android are configured for H.264 compression and decompression. While screen casting works flawlessly within the same platform (iOS to iOS or Android to Android), I'm encountering an error ("not in avi mode") on the Android receiver when casting from iOS. My research suggests that the underlying container formats for H.264 might differ between iOS and Android. Data transmission over the TCP socket seems to be functioning correctly. My question is: is there a way to ensure a common container format for H.264 compression and decompression across iOS and Android platforms?

Here's a breakdown of the iOS sender details:

- Device: iPhone 13 mini running iOS 17
- Development environment: Xcode 15 with a minimum deployment target of iOS 16
- Screen capture: ReplayKit for capturing the screen and obtaining CMSampleBuffers
- Video compression: VideoToolbox for H.264 compression
- Compression properties:
  - kVTCompressionPropertyKey_ConstantBitRate: 6144000 (bitrate)
  - kVTCompressionPropertyKey_ProfileLevel: kVTProfileLevel_H264_Main_AutoLevel (profile and level)
  - kVTCompressionPropertyKey_MaxKeyFrameInterval: 60 (maximum keyframe interval)
  - kVTCompressionPropertyKey_RealTime: true (real-time encoding)
  - kVTCompressionPropertyKey_Quality: 1 (lowest quality)
- NAL unit handling: custom header is added to NAL units

Android receiver details:

- Device: RedMi 7A running Android 10
- Video decoding: MediaCodec API for receiving and decoding the H.264 stream
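A common culprit for this symptom (stated as an assumption, not a confirmed diagnosis of this setup): VideoToolbox emits AVCC-style output, meaning 4-byte length-prefixed NAL units with SPS/PPS stored in the CMFormatDescription, while MediaCodec generally expects an Annex B elementary stream with start codes and in-band SPS/PPS. A sketch of converting one compressed sample buffer before sending it over the socket:

```swift
import CoreMedia
import Foundation

func annexBData(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
          let dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else { return nil }

    let startCode: [UInt8] = [0x00, 0x00, 0x00, 0x01]
    var output = Data()

    // 1. Emit SPS/PPS from the format description (at minimum ahead of keyframes).
    var parameterSetCount = 0
    CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
        formatDescription, parameterSetIndex: 0,
        parameterSetPointerOut: nil, parameterSetSizeOut: nil,
        parameterSetCountOut: &parameterSetCount, nalUnitHeaderLengthOut: nil)
    for index in 0..<parameterSetCount {
        var pointer: UnsafePointer<UInt8>?
        var size = 0
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(
            formatDescription, parameterSetIndex: index,
            parameterSetPointerOut: &pointer, parameterSetSizeOut: &size,
            parameterSetCountOut: nil, nalUnitHeaderLengthOut: nil)
        if let pointer {
            output.append(contentsOf: startCode)
            output.append(pointer, count: size)
        }
    }

    // 2. Rewrite each 4-byte big-endian length prefix as an Annex B start code.
    var totalLength = 0
    var dataPointer: UnsafeMutablePointer<CChar>?
    CMBlockBufferGetDataPointer(dataBuffer, atOffset: 0, lengthAtOffsetOut: nil,
                                totalLengthOut: &totalLength, dataPointerOut: &dataPointer)
    guard let base = dataPointer else { return nil }

    var offset = 0
    while offset + 4 <= totalLength {
        var nalLength: UInt32 = 0
        memcpy(&nalLength, base + offset, 4)
        nalLength = CFSwapInt32BigToHost(nalLength)
        output.append(contentsOf: startCode)
        output.append(UnsafeRawPointer(base + offset + 4).assumingMemoryBound(to: UInt8.self),
                      count: Int(nalLength))
        offset += 4 + Int(nalLength)
    }
    return output
}
```

If the "custom header" on the sender already performs an equivalent conversion, the mismatch is more likely elsewhere, for example in how the Android MediaCodec instance is configured.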
0
0
264
May ’24