Integrate video and other forms of moving visual media into your apps.

Posts under the Video tag

91 Posts

Why AVAssetWriter adds PTS drift when writing fMP4
Hi, I'm working on a project that requires video frame PTS to be consistent between the original video and a transcoded one. It works fairly well with a regular MP4, but if I set preferredOutputSegmentInterval to generate fMP4 output, even though I specify initialSegmentStartTime as 0, it always adds a one-frame PTS offset to every frame. For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and run ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8 to display the PTS of the output, it doesn't start from 0 but carries a one-frame offset. I also tried opening it with MP4Box, which likewise shows that the first frame's DTS and CTS do not start from 0. However, if I use AVAssetReader to read the same output video and get the PTS of the first frame, it returns 0, so I can't use it to calculate the PTS difference between the two videos either. Can I get some help understanding why AVAssetWriter/AVAssetReader report different PTS values for fMP4 than tools like ffprobe?
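For reference, here is a minimal sketch (not the poster's code) of the segmented-MP4 writer setup and the AVAssetReader check described above; `segmentDelegate` and `transcodedURL` are illustrative placeholders, and the snippet assumes an async throwing context:

import AVFoundation
import UniformTypeIdentifiers

// Writer side: segmented (fMP4/HLS) output with the segment timeline pinned to 0.
// (Input setup, startWriting(), and startSession(atSourceTime:) are omitted here.)
let writer = AVAssetWriter(contentType: .mpeg4Movie)
writer.outputFileTypeProfile = .mpeg4AppleHLS
writer.preferredOutputSegmentInterval = CMTime(seconds: 6, preferredTimescale: 1)
writer.initialSegmentStartTime = .zero        // segments are expected to start at PTS 0
writer.delegate = segmentDelegate             // receives the fMP4 initialization/media segments

// Reader side: inspecting the first frame's PTS, which the post says comes back as 0.
let asset = AVURLAsset(url: transcodedURL)
let track = try await asset.loadTracks(withMediaType: .video)[0]
let reader = try AVAssetReader(asset: asset)
let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
reader.add(output)
reader.startReading()
if let first = output.copyNextSampleBuffer() {
    print(CMSampleBufferGetPresentationTimeStamp(first))  // 0, unlike ffprobe's report
}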
0 replies · 0 boosts · 82 views · 1d
Decode HLS livestream with VideoToolbox
Hi, I'm trying to decode an HLS livestream with VideoToolbox. The CMSampleBuffer is successfully created (OSStatus == noErr). When I enqueue the CMSampleBuffer to an AVSampleBufferDisplayLayer, the view isn't displaying anything and the status of the AVSampleBufferDisplayLayer is 1 (rendering). When I use a VTDecompressionSession to convert the CMSampleBuffer to a CVPixelBuffer, the VTDecompressionOutputCallback returns -8969 (bad data error). What do I need to fix in my code? Do I incorrectly parse the data from the segment for the CMSampleBuffer?

let segmentData = try await downloadSegment(from: segment.url)
let (sps, pps, idr) = try parseH264FromTSSegment(tsData: segmentData)
if self.formatDescription == nil {
    self.formatDescription = try CMFormatDescription(h264ParameterSets: [sps, pps])
}
if let sampleBuffer = try createSampleBuffer(from: idr, segment: segment) {
    try self.decodeSampleBuffer(sampleBuffer)
}

func parseH264FromTSSegment(tsData: Data) throws -> (sps: Data, pps: Data, idr: Data) {
    let tsSize = 188
    var pesData = Data()
    for i in stride(from: 0, to: tsData.count, by: tsSize) {
        let tsPacket = tsData.subdata(in: i..<min(i + tsSize, tsData.count))
        guard let payload = extractPayloadFromTSPacket(tsPacket) else { continue }
        pesData.append(payload)
    }
    let nalUnits = parseNalUnits(from: pesData)
    var sps: Data?
    var pps: Data?
    var idr: Data?
    for nalUnit in nalUnits {
        guard let firstByte = nalUnit.first else { continue }
        let nalType = firstByte & 0x1F
        switch nalType {
        case 7: // SPS
            sps = nalUnit
        case 8: // PPS
            pps = nalUnit
        case 5: // IDR
            idr = nalUnit
        default:
            break
        }
        if sps != nil, pps != nil, idr != nil { break }
    }
    guard let validSPS = sps, let validPPS = pps, let validIDR = idr else {
        throw NSError()
    }
    return (validSPS, validPPS, validIDR)
}

func extractPayloadFromTSPacket(_ tsPacket: Data) -> Data? {
    let syncByte: UInt8 = 0x47
    guard tsPacket.count == 188, tsPacket[0] == syncByte else { return nil }
    let payloadStart = (tsPacket[1] & 0x40) != 0
    let adaptationFieldControl = (tsPacket[3] & 0x30) >> 4
    var payloadOffset = 4
    if adaptationFieldControl == 2 || adaptationFieldControl == 3 {
        let adaptationFieldLength = Int(tsPacket[4])
        payloadOffset += 1 + adaptationFieldLength
    }
    guard adaptationFieldControl == 1 || adaptationFieldControl == 3 else { return nil }
    let payload = tsPacket.subdata(in: payloadOffset..<tsPacket.count)
    return payloadStart ? payload : nil
}

func parseNalUnits(from h264Data: Data) -> [Data] {
    let startCode = Data([0x00, 0x00, 0x00, 0x01])
    var nalUnits: [Data] = []
    var searchRange = h264Data.startIndex..<h264Data.endIndex
    while let range = h264Data.range(of: startCode, options: [], in: searchRange) {
        let nextStart = h264Data.range(of: startCode, options: [], in: range.upperBound..<h264Data.endIndex)?.lowerBound ?? h264Data.endIndex
        let nalUnit = h264Data.subdata(in: range.upperBound..<nextStart)
        nalUnits.append(nalUnit)
        searchRange = nextStart..<h264Data.endIndex
    }
    return nalUnits
}

private func createSampleBuffer(from data: Data, segment: HLSSegment) throws -> CMSampleBuffer? {
    var blockBuffer: CMBlockBuffer?
    let alignedData = UnsafeMutableRawPointer.allocate(byteCount: data.count, alignment: MemoryLayout<UInt8>.alignment)
    data.copyBytes(to: alignedData.assumingMemoryBound(to: UInt8.self), count: data.count)
    let blockStatus = CMBlockBufferCreateWithMemoryBlock(
        allocator: kCFAllocatorDefault,
        memoryBlock: alignedData,
        blockLength: data.count,
        blockAllocator: nil,
        customBlockSource: nil,
        offsetToData: 0,
        dataLength: data.count,
        flags: 0,
        blockBufferOut: &blockBuffer
    )
    guard blockStatus == kCMBlockBufferNoErr, let validBlockBuffer = blockBuffer else {
        alignedData.deallocate()
        throw NSError()
    }
    var sampleBuffer: CMSampleBuffer?
    var timing = [calculateTiming(for: segment)]
    var sampleSizes = [data.count]
    let sampleStatus = CMSampleBufferCreate(
        allocator: kCFAllocatorDefault,
        dataBuffer: validBlockBuffer,
        dataReady: true,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: formatDescription,
        sampleCount: 1,
        sampleTimingEntryCount: 1,
        sampleTimingArray: &timing,
        sampleSizeEntryCount: sampleSizes.count,
        sampleSizeArray: &sampleSizes,
        sampleBufferOut: &sampleBuffer
    )
    guard sampleStatus == noErr else {
        alignedData.deallocate()
        throw NSError()
    }
    return sampleBuffer
}

private func decodeSampleBuffer(_ sampleBuffer: CMSampleBuffer) throws {
    guard let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) else {
        throw NSError()
    }
    if decompressionSession == nil {
        try setupDecompressionSession(formatDescription: formatDescription)
    }
    guard let session = decompressionSession else { throw NSError() }
    let flags: VTDecodeFrameFlags = [._EnableAsynchronousDecompression, ._EnableTemporalProcessing]
    var flagOut = VTDecodeInfoFlags()
    let status = VTDecompressionSessionDecodeFrame(
        session,
        sampleBuffer: sampleBuffer,
        flags: flags,
        frameRefcon: nil,
        infoFlagsOut: nil)
    if status != noErr { throw NSError() }
}

private func setupDecompressionSession(formatDescription: CMFormatDescription) throws {
    self.formatDescription = formatDescription
    if let session = decompressionSession {
        VTDecompressionSessionInvalidate(session)
        self.decompressionSession = nil
    }
    var decompressionSession: VTDecompressionSession?
    var callback = VTDecompressionOutputCallbackRecord(
        decompressionOutputCallback: decompressionOutputCallback,
        decompressionOutputRefCon: Unmanaged.passUnretained(self).toOpaque())
    let status = VTDecompressionSessionCreate(
        allocator: kCFAllocatorDefault,
        formatDescription: formatDescription,
        decoderSpecification: nil,
        imageBufferAttributes: nil,
        outputCallback: &callback,
        decompressionSessionOut: &decompressionSession
    )
    if status != noErr { throw NSError() }
    self.decompressionSession = decompressionSession
}

let decompressionOutputCallback: VTDecompressionOutputCallback = { (
    decompressionOutputRefCon, sourceFrameRefCon, status, infoFlags,
    imageBuffer, presentationTimeStamp, presentationDuration
) in
    guard status == noErr else {
        print("Callback: \(status)")
        return
    }
    if let imageBuffer = imageBuffer {
    }
}
1 reply · 0 boosts · 139 views · 1w
iPhone Video Upload Error - reason unknown
Hi everyone, I'm developing a customization tool in which our customers can upload an MP3 or MP4 file that will be scannable through our AR application. On desktop and Android this works perfectly. For some reason, however, on iPhone we're unable to load most of the video files. I've checked the clips, and they are .mov/H.264 files, which are supported by iPhone. We're currently not sure how to fix this issue so that customers who own an iPhone can upload clips to our website. Any tips in the right direction are more than welcome. Thanks in advance!
1 reply · 0 boosts · 151 views · 2w
Video Volume Issue on iPad
We are using Angular and HTML5 to develop our application, in which we play videos hosted on S3. When played in a desktop browser the videos are adequately audible, but when played on an iPad their volume is too low to hear. I have tried video.volume = 1, but it does not work on iPad because this property is read-only on iOS devices. I have also tried the JavaScript AudioContext approach; it worked on my local machine, but when the code is deployed to some hosted environments it just does not work. Did anyone face the same issue? Any help regarding it will be appreciated.
0 replies · 0 boosts · 161 views · 2w
M3 chip reverse video playback performance
We have developed a simple video player Swift application for macOS, which uses the AVFoundation framework. A special feature of this app is the ability to play video backward at speeds like -0.25x, -0.5x, and -1.0x. The MP4 video file is played directly from the local file system; the video codec is H.264 and the audio is AAC. The video files are huge, around 10 GB and 3 hours long. Playing video in the reverse direction works well on a MacBook Air with an M1 or M2 chip. When we run the same app with the same video on a MacBook Air with an M3 chip, reverse playback is much worse: playback can stutter badly, especially in the latter part of the video. The same behavior also happens in Apple's QuickTime Player when playing in the reverse direction at -1x speed. What's even stranger is that at one point in time the playback is totally smooth, but after a while it stutters again. For example, this morning reverse playback worked 100% smoothly; then I rebooted the Mac and tried again, and the result was stuttering. After the Mac stayed idle for several hours I tried reverse playing the video again: smooth performance! My conclusion: M3 playback works fine if the stars in the sky are aligned correctly. :-) So it's not only our app; QuickTime Player shows exactly the same behavior, and only with the M3 chip. The same symptom appears on another similar M3 Mac, so it can't be a single faulty unit. Meanwhile, the open-source video player IINA can reverse play the video well on the same Mac. All Macs have otherwise identical configurations: 16 GB RAM and macOS 15.1.1. Have you experienced the same problem? Any chance to solve it? I really hope the M4 chip Macs behave better here.
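For context, reverse playback in AVFoundation comes down to setting a negative rate on the player, roughly as in this sketch (`fileURL` is a placeholder for the local MP4 path):

import AVFoundation

let item = AVPlayerItem(url: fileURL)
let player = AVPlayer(playerItem: item)

// Negative rates play backward; -1.0 is full speed in reverse.
// Rates between 0 and -1 additionally depend on item.canPlaySlowReverse.
if item.canPlayReverse {
    player.rate = -1.0
}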
3 replies · 0 boosts · 245 views · 2w
Control Center Video Effects bug on iOS 18.0 and later
My app is not a VoIP application. I use devices that support Character Centering (Center Stage), such as iPad 10 or iPad13,18, running iOS 18.0, 18.1, or 18.1.1. When entering live classes, the "Character Centering" button does not appear in Control Center, as shown in the following picture. However, if Voice over IP is selected under Background Modes in the project and the app is run again, the issue no longer reproduces, even after the app is uninstalled and reinstalled. Could you please help me investigate the reason? Thank you!
0 replies · 0 boosts · 166 views · 3w
This video scrubbing is ******* worthless
I don't know what you felt was wrong with video scrubbing that you needed to **** it up this badly. I can't scrub within the video, only within several seconds of the pause, or worse, it restarts the whole damn video. It used to be an easy and enjoyable process to harvest photos from a video, and you've turned it into the most frustrating part of operating my phone. Release a patch that optionally enables the old scrubbing behavior.
0 replies · 0 boosts · 196 views · Nov ’24
Decode video frames in lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size in our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:

let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let input = request.sourceImage // <- this still has the video's original size
    // ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be much better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
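For illustration, the in-pipeline downscale mentioned above could look roughly like this (a sketch, assuming a Lanczos scale before the filter chain; the 1280×720 target is arbitrary):

import AVFoundation
import CoreImage

let targetSize = CGSize(width: 1280, height: 720)
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let source = request.sourceImage                       // still full size, e.g. 4K
    let scale = min(targetSize.width / source.extent.width,
                    targetSize.height / source.extent.height)
    let scaled = source.applyingFilter("CILanczosScaleTransform", parameters: [
        kCIInputScaleKey: scale,
        kCIInputAspectRatioKey: 1.0
    ])
    // ...apply the actual filter chain to `scaled` here...
    request.finish(with: scaled, context: nil)
})
composition.renderSize = targetSize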
0 replies · 1 boost · 221 views · Nov ’24
Rendering YCbCr input using Metal
I would like to take YCbCr CVPixelBuffers from AVCaptureVideoDataOutput, apply some processing in RGB space, render to an MTKView, and pass to AVAssetWriter for recording. Right now, I'm doing this all manually – deswing the incoming data if necessary, choose the right matrix to convert to RGB, apply processing, etc. I also have to convert back to YCbCr before feeding the frames to AVAssetWriter because encoding performs much better if I do. Is there any efficient, built-in way to achieve the same? I can't use AVCaptureVideoPreviewLayer, since I need to do some further processing before display. I can't use AVCaptureVideoDataOutput's videoSettings to get automatic BGRA conversion because that would lose bit depth for 10 bit video formats (and isn't available on all formats anyway). I see these Accelerate functions, but they seemingly don't use the GPU, nor do they support all the formats and bit depths I'd need. I found reference to some undocumented MTLPixelFormats that seem to do exactly what I want, but I don't want to rely on something like this unless it's explicitly endorsed. This would also incur an RGB/YCbCr conversion on every texture read and write, right? Is there anything I'm missing here?
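For reference, the manual path described above typically wraps the two planes of a biplanar YCbCr buffer as Metal textures through a CVMetalTextureCache before doing the matrix conversion in a shader. A rough sketch (the cache is assumed to have been created with CVMetalTextureCacheCreate, and the pixel formats shown fit 8-bit 420v/420f):

import CoreVideo
import Metal

func lumaChromaTextures(from pixelBuffer: CVPixelBuffer,
                        cache: CVMetalTextureCache) -> (luma: MTLTexture, chroma: MTLTexture)? {
    func plane(_ index: Int, _ format: MTLPixelFormat) -> MTLTexture? {
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, index)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, index)
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer,
                                                  nil, format, width, height, index, &cvTexture)
        return cvTexture.flatMap(CVMetalTextureGetTexture)
    }
    // 10-bit biplanar formats would use .r16Unorm / .rg16Unorm instead.
    guard let luma = plane(0, .r8Unorm), let chroma = plane(1, .rg8Unorm) else { return nil }
    return (luma, chroma)
}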
0 replies · 0 boosts · 275 views · Nov ’24
WKWebView breaks with isElementFullscreenEnabled preference set to true on fullscreen pause
I have a WKWebView that loads videos, and we would like to be able to make our videos fullscreen, so we use the fullscreen preference from the documentation. However, when it is set to true, fullscreening a video and then pausing it makes the entire video player disappear. You can exit fullscreen and attempt to fullscreen the video player once again, but upon doing this the entire app view will disappear and you'll see your desktop background (or whatever is currently behind your app). This behavior seems consistent across multiple websites within the app. I have set up a sample project you can test here. The main error that is logged to the console is below; I have not been able to find a solution to it, so maybe I am simply missing something. I am on Sequoia 15.2 for Mac.

Attempting to update all DD element frames, but the bounds or contentsRect are invalid. Bounds: X: 0.00 Y: 0.00, W: 0.00 H: 0.00, contentsRect: X: 0.00 Y: 0.00, W: 1.00 H: 1.00 , skipping
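For reference, a minimal sketch of the fullscreen preference being described, assuming it refers to WKPreferences.isElementFullscreenEnabled; the URL is a placeholder:

import WebKit

let configuration = WKWebViewConfiguration()
configuration.preferences.isElementFullscreenEnabled = true   // allow element (video) fullscreen

let webView = WKWebView(frame: .zero, configuration: configuration)
webView.load(URLRequest(url: URL(string: "https://example.com/video")!))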
2 replies · 0 boosts · 369 views · Nov ’24
FaceTime camera returns promise before resolving it
The strange_behaviour.mp4 video attached shows how, when running a list of statements to open (startCamera) and close (closeCamera) the camera on a MacBook Pro 2019 using the FaceTime camera, these return their value almost immediately, when in reality the camera is still opening and closing. We believe there might be a queue of statements to run against the camera, and the awaits finish when the statements are inserted into the queue instead of when they are actually resolved. We also attached the expected_behaviour.mp4 video, where we replicate the same flow but use an external camera instead of the FaceTime camera. In this video the promises take a bit longer to return, but they do so only once the camera has actually been opened and closed the requested number of times. The project used in the videos is attached as project.tar.xz. We would like to know if the behaviour in strange_behaviour.mp4 is reproducible on your side and if there is a way to access the camera so it behaves like an external camera (expected_behaviour.mp4). Attachments: https://drive.google.com/drive/folders/1cOeFb5GMbh4mPOeZiZyyevk3N778Kn1M?usp=sharing
0 replies · 0 boosts · 196 views · Nov ’24
Issue with WebM Video Transparency Rendering on iOS
I'm experiencing a persistent issue with transparent WebM videos rendered via WKWebView in an iOS Capacitor app. The videos play normally; however, they display a black background frame, which does not occur in the web version of the app. I've tried playing with the CSS settings, enabling experimental WKWebView features, and adjusting meta tags for inline video playback and hardware acceleration. Here is my code:

<Carousel
  showThumbs={false}
  showStatus={false}
  showIndicators={true}
  showArrows={false}
  infiniteLoop={true}
  autoPlay={true}
  interval={5000} // Change slide every 5 seconds
  onChange={(index) => {
    if (playerRefs.current[index]) {
      playerRefs.current[index]?.seekTo(0);
      playerRefs.current[index]?.getInternalPlayer()?.play();
    }
  }}
>
  {videos.map((video, index) => (
    <div key={index} className="video-slide">
      <ReactPlayer
        ref={(player) => (playerRefs.current[index] = player)}
        url={video.src}
        playing={isLoaded[index]} // Play only when video is loaded
        loop
        muted
        width="100%"
        onReady={() => handleVideoReady(index)} // Set loaded state when video is ready
        style={{ backgroundColor: 'transparent' }}
        config={{
          file: {
            attributes: {
              playsInline: true,
            },
          },
        }}
      />
      <p className="description">{video.description}</p>
    </div>
  ))}
</Carousel>

I'm working with React and Capacitor. The videos work perfectly when I test the web app; the problem occurs only in my iOS app.
2 replies · 0 boosts · 218 views · Nov ’24
Playback problem on AVPlayer with MPEGTS streams
We are experiencing an issue with our HLS MPEG-TS streams on Apple devices: the AVPlayer in our iOS app and in Safari jumps back to the start when the player automatically changes quality. This occurs even though the stream still indicates that it is live and there is no change in the seek bar. After testing our streams with the Apple HLS validator, the only problem that occurred was a "Measured peak bitrate compared to multivariant playlist declared value exceeds error tolerance" error. On Chrome and in our Android app this playback bug does not happen. Has anyone else experienced similar issues with AVPlayer?
1 reply · 0 boosts · 244 views · Nov ’24
H.264 License
I’m using AVFoundation in my iPhone application to encode a video in MP4 format with H.264, which can then be shared or exported. Do I need to pay license fees to MPEG LA for using the H.264 format? Or are these fees already covered by Apple? I’ve read articles suggesting that Apple covers these fees when encoding is done through its native APIs (or via its dedicated encoding hardware), but I haven’t found any explicit confirmation of this point in the various documentation or contracts... Did I miss something?
1 reply · 0 boosts · 315 views · Oct ’24
AVPlayer HLS Restrict Stream Resolution
Hello, we have an HLS streaming app and we use AVPlayer for HLS playback. We want to implement a dynamic resolution feature based on the user's selection. For example, if the user selects 1080p, they should get only 1080p. We have tried the "preferredMaximumResolution" and "preferredPeakBitRate" parameters, but AVPlayer does not enforce the selection: setting preferredMaximumResolution = CGSize(width: 1920, height: 1080) does not make the player play only the 1080p profile; the player still drops the resolution to 720p, and we do not want a 720p stream if the user selected 1080p. Is there any method to force it, even if the stream stalls? Thank you in advance.
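For reference, the constraint approach described above looks roughly like this; note that both properties are documented as upper limits/hints rather than a way to pin a single variant (`streamURL` is a placeholder):

import AVFoundation

let item = AVPlayerItem(url: streamURL)
item.preferredMaximumResolution = CGSize(width: 1920, height: 1080)  // a cap, not a floor
item.preferredPeakBitRate = 6_000_000                                // bits per second
let player = AVPlayer(playerItem: item)
player.play()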
0 replies · 0 boosts · 221 views · Oct ’24
Core Image Tiling and ROI
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (>4096) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects - backwards, upside-down etc. The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:

return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];

The kernel is very simple, sampling from the X coordinate equal to the src width minus the current coordinate:

float4 backwards(sampler image, destination dest) {
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}

When this runs on an image that is wider than 4096, tiling happens, with the result being that destRect is not the entire image, and therefore the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this will lead to serious performance issues when src gets too large. All of this makes sense to me. What I'd like to know is whether there is a way to handle this filter's requirement to sample from the entire source while still limiting the ROI to maintain performance. I think the answer is probably no within our current structure and performance limits, but I wanted to see if there's anything we're missing. I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that mirror either half of the source image or one quadrant of the source image. In those cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source being mirrored. We have not attempted that yet. Any thoughts/input appreciated, thanks!
3 replies · 0 boosts · 374 views · Oct ’24