Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

Video Hardware Acceleration on Mac
I observe significant performance differences when encoding a video to MP4 (H.264). The code I use is standard (AVAssetWriter, AVAssetWriterInput, ...). Here is what I notice when I run the same code on different platforms:
On an iPhone, the video is encoded in 3 seconds (iPhone 13, 14, 15, 16, Pro...).
On a Mac with an M2 Pro, the video is encoded in 50 seconds.
On a Mac with an Intel processor (2.3 GHz 18-core Intel Xeon W), the video is encoded in 2 minutes.
The encoding on an iPhone is very fast thanks to hardware acceleration. However, I don't understand why I don't get similar performance on a Mac with an M2 Pro, which also has a dedicated hardware-acceleration component (an H.264 media engine). Is hardware acceleration disabled on a Mac?
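On macOS, whether a hardware encoder is actually in use can be probed with VideoToolbox. Below is a minimal diagnostic sketch, not taken from the post: it creates a throwaway compression session that requires a hardware H.264 encoder (the Require/Using keys shown are macOS-specific) and reads back whether one is in use. If session creation fails or the property reads false, the AVAssetWriter encode is likely falling back to a software path.

import AVFoundation
import VideoToolbox

func probeHardwareH264(width: Int32 = 1920, height: Int32 = 1080) {
    var session: VTCompressionSession?
    let spec: [CFString: Any] = [
        // macOS-only key: fail session creation if no hardware encoder is available.
        kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true
    ]
    let status = VTCompressionSessionCreate(allocator: nil,
                                            width: width,
                                            height: height,
                                            codecType: kCMVideoCodecType_H264,
                                            encoderSpecification: spec as CFDictionary,
                                            imageBufferAttributes: nil,
                                            compressedDataAllocator: nil,
                                            outputCallback: nil,
                                            refcon: nil,
                                            compressionSessionOut: &session)
    guard status == noErr, let session = session else {
        print("No hardware H.264 encoder available, status:", status)
        return
    }
    var value: CFTypeRef?
    VTSessionCopyProperty(session,
                          key: kVTCompressionPropertyKey_UsingHardwareAcceleratedVideoEncoder,
                          allocator: nil,
                          valueOut: &value)
    print("Using hardware H.264 encoder:", (value as? Bool) == true)
    VTCompressionSessionInvalidate(session)
}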
0
0
445
Oct ’24
Overlapping Video Frames in RPBroadcastSampleHandler with ReplayKit
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping. I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
    switch (sampleBufferType) {
        case RPSampleBufferTypeVideo: {
            CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            int ret = 0;
            uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
            size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
            uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
            size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
            if (oYSize <= 672) {
                return;
            }
            uint8_t tempValue = oYData[672];
            uint8_t *tYData = malloc(oYSize);
            memcpy(tYData, oYData, oYSize);
            if (tYData[672] != oYData[672]) {
                NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
            }
            free(tYData);
            CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
            break;
        }
        default: {
            break;
        }
    }
}

Output:
$$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
$$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
$$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
$$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
$$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
$$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
$$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
$$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
$$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
$$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
$$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
$$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
$$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
$$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
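Not from the post, but as a point of comparison: a common pattern in broadcast extensions is to copy the plane bytes into app-owned memory while the buffer is locked, so that nothing afterwards depends on the pixel buffer's backing IOSurface, which ReplayKit recycles for subsequent frames. A minimal Swift sketch of that copy follows (the post's code is Objective-C; the function name is illustrative):

import CoreMedia
import CoreVideo
import Foundation

// Copies the luma (Y) plane of an NV12 frame into a Data value while the buffer is locked.
func copyYPlane(from sampleBuffer: CMSampleBuffer) -> Data? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return nil }
    let size = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
             * CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    // Data(bytes:count:) copies immediately, so the result stays valid after the surface is reused.
    return Data(bytes: base, count: size)
}

Whether this avoids the mid-callback changes the post observes is uncertain; it only removes any dependence on the surface after the callback returns.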
0
0
501
Oct ’24
AirPlay fails when app is also installed on the receiver
We have a universal iOS/tvOS app that also supports iOS App on Mac. In our AVPlayer-based video player we support AirPlay with AVRouteDetector and AVRoutePickerView. We play HLS streams. When we try to AirPlay from an iOS device to an Apple TV or a Mac that has our app installed, it doesn't work. The receiver is marked as active in the route picker UI but the video doesn't show up on the receiver and playback stops. When our app isn't installed on the receiver device, everything works as expected. Has anyone encountered the same issue? Any solutions available for this?
0
0
570
Nov ’24
AVKit Video Player not working
Hey. I am trying to create a presentation view with a bunch of media (images/videos). Right now I am using a ZStack to render each media item and change its opacity based on the index selected via a ScrollView. The issue is that sometimes videos don't load in the main slide: the slide is created because the video exists, and the player shows its controls, but it doesn't play anything.

Present View ZStack:

ZStack {
    ForEach(presentation.slides.indices, id: \.self) { index in
        if let media = mediaCacheManager.mediaCache[index] {
            if let player = media as? AVPlayer {
                PlayerView(player: player)
                    .aspectRatio(16/10, contentMode: .fit)
                    .frame(width: UIScreen.main.bounds.width * 0.8)
                    .background(Color.gray.opacity(0.2))
                    .clipShape(RoundedRectangle(cornerRadius: 40))
                    .overlay(
                        RoundedRectangle(cornerRadius: 40)
                            .stroke(Color.gray.opacity(0.5), lineWidth: 1)
                    )
                    .onDisappear {
                        player.pause()
                    }
                    .opacity(appModel.currentSlide == index ? 1 : 0)
            } else if let image = media as? Image {
                image
                    .resizable()
                    .scaledToFit()
                    .frame(width: UIScreen.main.bounds.width * 0.8)
                    .background(Color.gray.opacity(0.2))
                    .clipShape(RoundedRectangle(cornerRadius: 40))
                    .overlay(
                        RoundedRectangle(cornerRadius: 40)
                            .stroke(Color.gray.opacity(0.5), lineWidth: 1)
                    )
                    .padding(.vertical, 10)
                    .opacity(appModel.currentSlide == index ? 1 : 0)
            }
        }
    }
}

The PlayerView:

public class PlayerUIView: UIView {
    let playerVC = AVPlayerViewController()
    let gravity: AVLayerVideoGravity
    let manageAudio: Bool

    override init(frame: CGRect) {
        self.gravity = .resizeAspectFill
        self.manageAudio = true
        super.init(frame: frame)
    }

    deinit {
        if manageAudio {
            try? AVAudioSession.sharedInstance().setActive(false)
        }
    }

    init(player: AVPlayer?, gravity: AVLayerVideoGravity, manageAudio: Bool = true) {
        self.gravity = gravity
        self.manageAudio = manageAudio
        super.init(frame: .zero)
        guard let player = player else { return }
        self.playerSetup(player: player)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    public override func layoutSubviews() {
        super.layoutSubviews()
        playerVC.view.frame = bounds
        playerVC.view.backgroundColor = .clear
        playerVC.allowsVideoFrameAnalysis = false
    }

    private func playerSetup(player: AVPlayer) {
        playerVC.updatesNowPlayingInfoCenter = true
        playerVC.player = player
        playerVC.showsPlaybackControls = true
        playerVC.view.backgroundColor = .clear
        playerVC.exitsFullScreenWhenPlaybackEnds = true
        playerVC.videoGravity = gravity
        self.addSubview(playerVC.view)
    }
}
0
0
503
Nov ’24
Firewire video OUTPUT to hardware 2024
Looking to output dv video to my JVC SR-VS30 video deck. I used to be able to do this, but with most firewire stuff being deprecated, I'm not sure how to go about this. I found this old developer sample code that seems to do exactly what I'd like. Surely this could be rolled or updated for current macOS? https://developer.apple.com/library/archive/samplecode/SimpleVideoOut/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000809-Intro-DontLinkElementID_2
0
0
340
Nov ’24
[AVPlayer][5G] The buffer duration (preferredForwardBufferDuration) configuration property of AVPlayerItem does not work on a 5G network
I tried configuring the preferredForwardBufferDuration on devices using 4G and Wi-Fi, and in these cases, AVPlayer works correctly according to the configured buffer duration. However, when the device is connected to a 5G network, the configuration value no longer works. For example, if I set preferredForwardBufferDuration to 30 seconds, AVPlayer preloads with a buffer of over 100 seconds. I’m not sure how to resolve this, as it’s causing issues with my system.
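For reference, a minimal sketch of the configuration described above, plus an observer that measures how much the player actually buffers (the stream URL is a placeholder); this is how the 30-second setting and the >100-second observed buffer can be compared across network types:

import AVFoundation

let item = AVPlayerItem(url: URL(string: "https://example.com/stream.m3u8")!) // placeholder URL
item.preferredForwardBufferDuration = 30 // seconds; 0 lets the system decide
let player = AVPlayer(playerItem: item)

// loadedTimeRanges shows how far ahead AVPlayer has actually buffered.
let observation = item.observe(\.loadedTimeRanges) { item, _ in
    let buffered = item.loadedTimeRanges
        .map { $0.timeRangeValue.duration.seconds }
        .reduce(0, +)
    print("buffered ahead (s):", buffered)
}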
0
0
614
Nov ’24
Decode video frames in lower resolution before processing
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export). For older devices, we want to limit the resolution at which the video frames are processed, for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size in our composition. We thought setting the renderSize property of the composition to the desired size would do that. However, this only changes the size of the output frames, not the size of the source frames that come into the composition's handler block. For example:

let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let input = request.sourceImage // <- this still has the video's original size
    // ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example

So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be much better if AVFoundation could decode the video frames at the desired size before passing them into the composition handler. Is there a way to tell AVFoundation to load smaller video frames?
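One possible workaround, sketched below under the assumption that decoding itself cannot be limited this way: downscale the source frame at the top of the handler so everything after that point runs at the reduced size (asset is the same asset as above). The decoder still produces full-size frames, so this only reduces the cost of the filter chain, not of decoding.

import AVFoundation
import CoreImage

let targetSize = CGSize(width: 1280, height: 720) // assumed processing limit
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let source = request.sourceImage
    let scale = min(targetSize.width / source.extent.width,
                    targetSize.height / source.extent.height,
                    1)
    // Lanczos keeps quality reasonable while shrinking the working image.
    let scaled = source.applyingFilter("CILanczosScaleTransform", parameters: [
        kCIInputScaleKey: scale,
        kCIInputAspectRatioKey: 1.0
    ])
    // ... apply the actual filter chain to `scaled` instead of `source` ...
    request.finish(with: scaled, context: nil)
})
composition.renderSize = targetSize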
0
1
577
Nov ’24
Why does recommendedVideoSettings sometimes have no recommended settings?
I am talking about AVCaptureVideoDataOutput.recommendedVideoSettings. I found that it sometimes returns nil. Here are my test results:
HEVC .mov with activeColorSpace sRGB: 60 FPS -> ok, 120 FPS -> ok
HEVC .mov with activeColorSpace displayP3_HLG: 60 FPS -> nil, 120 FPS -> nil
H.264 .mov: 30 FPS -> ok, 60 FPS -> nil, 120 FPS -> nil
So if no recommended settings are returned, and this behavior isn't documented, how is a developer supposed to use the API?
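A hedged fallback sketch (not an explanation of why nil is returned): when the recommendation is nil, supply an explicit output-settings dictionary to AVAssetWriterInput instead. The width, height, and bitrate below are assumptions for illustration.

import AVFoundation

let output = AVCaptureVideoDataOutput()
let fallback: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 3840,              // assumed
    AVVideoHeightKey: 2160,             // assumed
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 50_000_000  // assumed target bitrate
    ]
]
let settings = output.recommendedVideoSettings(forVideoCodecType: .hevc,
                                               assetWriterOutputFileType: .mov) ?? fallback
let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: settings)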
0
0
381
Nov ’24
iOS 18.0 and later: Control Center Video Effects bug
My app is not a VoIP application. I am using devices that support character centering, such as the iPad 10 or iPad13,18, running iOS 18.0, 18.1, or 18.1.1. When entering live classes, the "Character Centering" button does not appear in Control Center, as shown in the following picture. However, if Voice over IP is selected under Background Modes in the project and the app is run again, the issue no longer reproduces, even after the app is uninstalled and reinstalled. Could you please help me investigate the reason? Thank you!
0
0
432
Nov ’24
How to save 4K60 ProRes Log video to iPhone internal storage?
Hello Apple Engineers, Specific issue: I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format to the iPhone's internal storage. Here's what I have tried so far: I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and the ProRes codec. The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log. However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device." This error seems to imply that 4K60 ProRes recording is restricted to external storage devices, but I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which has native support for ProRes encoding. Here are my questions:
1. Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example, the iPhone 15 Pro Max)? Some third-party apps (e.g., Blackmagic) can save 4K60 ProRes Log video to iPhone internal storage.
2. If internal saving is supported, what additional configuration of the AVCaptureSession, or what other technique, is needed to get past this limitation?
If anyone has successfully saved 4K60 ProRes Log video to iPhone internal storage, your guidance would be highly appreciated. Thank you for your help!
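A diagnostic sketch, not a workaround for the external-storage restriction: it enumerates the back wide camera's 4K formats and reports whether they advertise 60 fps and Apple Log support, which is where the relevant capability flags live (the Apple Log color space requires iOS 17 or later). ProRes availability itself is reported per output once the session is configured.

import AVFoundation

func logProResLogCapability() {
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else { return }
    for format in device.formats {
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        guard dims.width == 3840, dims.height == 2160 else { continue }
        let supports60 = format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
        let supportsAppleLog = format.supportedColorSpaces.contains(.appleLog) // iOS 17+
        print("4K format -> 60 fps:", supports60, "Apple Log:", supportsAppleLog)
    }
    // After the movie file output has been added to a configured session:
    // movieFileOutput.availableVideoCodecTypes.contains(.proRes422)
}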
0
0
628
Nov ’24
Debug MediaExtension plugin in system executable?
I am developing a macOS 15 MediaExtension plugin to enable additional codecs and container formats in AVFoundation. My plugin is sort of working, but I'd like to debug the XPC process that AVFoundation 'hoists' for me from the calling app (i.e., the process hosting my plugin instance that is managing the MESampleBuffer protocol calls, for example). Is there a way to configure Xcode to attach interactively to this background process for debugging? Right now I have to use Console plus print statements, which is not fun or productive. Does Apple have a working example of a MediaExtension anywhere? This is an exciting API that is very under-documented. I'm willing to spend a Code Review 'credit' for this, but my issues are not quite focused. Any assistance is highly appreciated!
0
0
494
Dec ’24
Why does AVAssetWriter add PTS drift when writing fMP4?
Hi, I'm working on a project that requires video frame PTS values to be consistent between the original video and a transcoded one. This works fairly well for regular MP4. However, if I set preferredOutputSegmentInterval to generate fMP4 output, then even though I specify initialSegmentStartTime as 0, it always adds a one-frame PTS offset to all frames. For example, if I use the code sample provided by Apple (https://developer.apple.com/videos/play/wwdc2020/10011/?time=406) and run
ffprobe -select_streams v:0 -show_entries packet=pts_time -of csv ~/Downloads/fmp4/prog_index.m3u8
to display the PTS of the output, it doesn't start from 0 but has a one-frame PTS offset. Opening the output with MP4Box also shows that the first frame's DTS and CTS do not start from 0. However, if I use AVAssetReader to read the same output video and get the PTS of the first frame, it returns 0, so I can't use it to calculate the PTS difference between the two videos either. Can someone help me understand why the fMP4 PTS values reported by AVAssetWriter/AVAssetReader differ from what tools like ffprobe show?
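For completeness, a small sketch of the AVAssetReader-based check mentioned above, so the timestamp AVFoundation reports for the first sample can be compared directly against the ffprobe/MP4Box output for the same segment:

import AVFoundation

func firstVideoPTS(of url: URL) throws -> CMTime? {
    let asset = AVURLAsset(url: url)
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }
    let reader = try AVAssetReader(asset: asset)
    // nil outputSettings passes samples through without decoding, preserving their timing.
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil)
    reader.add(output)
    reader.startReading()
    guard let sample = output.copyNextSampleBuffer() else { return nil }
    return CMSampleBufferGetPresentationTimeStamp(sample)
}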
0
0
566
Dec ’24
Custom Share Destination stopped working in FCP X 11
We integrate with FCP X using a custom share destination and the AppleScript interface. This had been working fine until the recent version 11 update of FCP X. With this update, we are no longer receiving the open event when the export has completed. We get the Apple event to create the asset, and the file is exported to the location we set in the response; there is just no open event after that. I suspect something is wrong with our scripting support, but I have no idea what, or how to troubleshoot it. This works fine in 10.8.1 and below.
0
0
355
Dec ’24
Capacitor app on iOS inline video stops
Hello, I'm experiencing an issue with video playback in my JavaScript (SvelteKit) application using Capacitor. The video plays and loops correctly on Android and in web browsers (including Safari), but it stops unexpectedly after a few iterations in the native iOS app.
<video src={videoPath} autoplay muted loop playsinline class="h-auto w-full max-w-full object-cover"></video>
Has anyone encountered a similar issue, or does anyone have insights into what might be causing this behavior on iOS? Any suggestions or workarounds would be greatly appreciated. Maybe it has something to do with the iOS power-saving policy? Thank you in advance for your help!
0
0
444
Jan ’25
4k 120fps Showing Black Screen on iPhone 16
Hey - I am developing an app that uses the camera for recording video. I added the ability to choose a frame rate and resolution, and all combinations work perfectly fine except 4K 120 fps on the new iPhone 16 Pro. This just shows black on the preview. I tried to record even though the preview was black, but the recording is also just a black screen. Is there anything special that needs to be done in the camera setup for 4K 120 fps to work? My camera setup code is attached. Is it possible this is a bug in Apple's code, since this works with every other combination (1080p up to 240 fps and 4K up to 60 fps)? Thanks so much for the help.

class CameraManager: NSObject {
    enum Errors: Error {
        case noCaptureDevice
        case couldNotAddInput
        case unsupportedConfiguration
    }

    enum Resolution {
        case hd1080p
        case uhd4K

        var preset: AVCaptureSession.Preset {
            switch self {
            case .hd1080p: return .hd1920x1080
            case .uhd4K: return .hd4K3840x2160
            }
        }

        var dimensions: CMVideoDimensions {
            switch self {
            case .hd1080p: return CMVideoDimensions(width: 1920, height: 1080)
            case .uhd4K: return CMVideoDimensions(width: 3840, height: 2160)
            }
        }
    }

    enum CameraType {
        case wide
        case ultraWide

        var captureDeviceType: AVCaptureDevice.DeviceType {
            switch self {
            case .wide: return .builtInWideAngleCamera
            case .ultraWide: return .builtInUltraWideCamera
            }
        }
    }

    enum FrameRate: Int {
        case fps60 = 60
        case fps120 = 120
        case fps240 = 240
    }

    let orientationManager = OrientationManager()
    let captureSession: AVCaptureSession
    let previewLayer: AVCaptureVideoPreviewLayer
    let movieFileOutput = AVCaptureMovieFileOutput()
    let videoDataOutput = AVCaptureVideoDataOutput()
    private var videoCaptureDevice: AVCaptureDevice?

    override init() {
        self.captureSession = AVCaptureSession()
        self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
        super.init()
        self.previewLayer.videoGravity = .resizeAspect
    }

    func configureSession(resolution: Resolution, frameRate: FrameRate, stabilizationEnabled: Bool, cameraType: CameraType, sampleBufferDelegate: AVCaptureVideoDataOutputSampleBufferDelegate?) throws {
        assert(Thread.isMainThread)

        captureSession.beginConfiguration()
        defer { captureSession.commitConfiguration() }

        captureSession.sessionPreset = resolution.preset

        if captureSession.canAddOutput(movieFileOutput) {
            captureSession.addOutput(movieFileOutput)
        } else {
            throw Errors.couldNotAddInput
        }

        videoDataOutput.setSampleBufferDelegate(sampleBufferDelegate, queue: DispatchQueue(label: "VideoDataOutputQueue"))
        if captureSession.canAddOutput(videoDataOutput) {
            captureSession.addOutput(videoDataOutput)
            // Set the video orientation if needed
            if let connection = videoDataOutput.connection(with: .video) {
                //connection.videoOrientation = .portrait
            }
        } else {
            throw Errors.couldNotAddInput
        }

        guard let videoCaptureDevice = AVCaptureDevice.default(cameraType.captureDeviceType, for: .video, position: .back) else {
            throw Errors.noCaptureDevice
        }

        let useDimensions = resolution.dimensions
        guard let format = videoCaptureDevice.formats.first(where: { format in
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let isRes = dimensions.width == useDimensions.width && dimensions.height == useDimensions.height
            let frameRates = format.videoSupportedFrameRateRanges
            return isRes && frameRates.contains(where: { $0.maxFrameRate >= Float64(frameRate.rawValue) })
        }) else {
            throw Errors.unsupportedConfiguration
        }

        self.videoCaptureDevice = videoCaptureDevice

        do {
            let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
            if captureSession.canAddInput(videoInput) {
                captureSession.addInput(videoInput)
            } else {
                throw Errors.couldNotAddInput
            }

            try videoCaptureDevice.lockForConfiguration()
            videoCaptureDevice.activeFormat = format
            videoCaptureDevice.activeVideoMinFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: CMTimeScale(frameRate.rawValue))
            videoCaptureDevice.activeMaxExposureDuration = CMTime(seconds: 1.0 / 960, preferredTimescale: 1000000)
            videoCaptureDevice.exposureMode = .locked
            videoCaptureDevice.unlockForConfiguration()
        } catch {
            throw error
        }

        configureStabilization(enabled: stabilizationEnabled)
    }
0
0
431
Jan ’25
Transparent overlay changes color in HDR video
When I try to add an image overlay to a video with AVMutableVideoComposition, the overlay colors change when the video is HDR, and white becomes grey. (The original post includes screenshots: the original HDR video, the result with the distorted overlay colors, and the SDR reduction where the overlay color is correct.) I'm creating the overlay with a CGContext.

class CustomHdrCompositor: NSObject, AVVideoCompositing {
    private let coreImageContext = CIContext(options: [CIContextOption.cacheIntermediates: false])
    let combinedFilter = CIFilter(name: "CISourceOverCompositing")!
    var sourcePixelBufferAttributes: [String: Any]? = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var requiredPixelBufferAttributesForRenderContext: [String: Any] = [String(kCVPixelBufferPixelFormatTypeKey): [kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange]]
    var supportsWideColorSourceFrames = true
    var supportsHDRSourceFrames = true

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        return
    }

    func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
        guard let outputPixelBuffer = request.renderContext.newPixelBuffer() else {
            print("No valid pixel buffer found. Returning.")
            request.finish(with: CustomCompositorError.ciFilterFailedToProduceOutputImage)
            return
        }
        guard let requiredTrackIDs = request.videoCompositionInstruction.requiredSourceTrackIDs, !requiredTrackIDs.isEmpty else {
            print("No valid track IDs found in composition instruction.")
            return
        }
        let sourceCount = requiredTrackIDs.count
        if sourceCount > 1 {
            request.finish(with: CustomCompositorError.notSupportingMoreThanOneSources)
            return
        }
        if sourceCount == 1 {
            let sourceID = requiredTrackIDs[0]
            let sourceBuffer = request.sourceFrame(byTrackID: sourceID.value(of: Int32.self)!)!
            let sourceCIImage = CIImage(cvPixelBuffer: sourceBuffer)
            var textImage = TextLayerPlayer.instance.getTextLayerAtTimesStamp(ts: request.compositionTime.seconds)
            combinedFilter.setValue(textImage, forKey: "inputImage")
            if let outputImage = combinedFilter.outputImage {
                let renderDestination = CIRenderDestination(pixelBuffer: outputPixelBuffer)
                do {
                    try coreImageContext.startTask(toRender: outputImage, to: renderDestination)
                } catch {
                }
            }
        }
        request.finish(withComposedVideoFrame: outputPixelBuffer)
    }
}

func regularCompositionHdr(asset: AVAsset) -> AVVideoComposition {
    self.isHdr = checkHdr(asset: asset)
    let avComposition = AVMutableComposition()
    let composition = AVMutableVideoComposition()
    composition.colorPrimaries = AVVideoColorPrimaries_ITU_R_2020
    composition.colorTransferFunction = AVVideoTransferFunction_ITU_R_2100_HLG
    composition.colorYCbCrMatrix = AVVideoYCbCrMatrix_ITU_R_2020
    composition.renderSize = assetSize
    composition.frameDuration = CMTime(value: 1, timescale: 30)
    composition.customVideoCompositorClass = CustomHdrCompositor.self
    composition.perFrameHDRDisplayMetadataPolicy = .propagate
    return composition
}

I'm using this function to convert the transparent CGImage into a CIImage that supports HDR:

func convertToHDRCIImage(from cgImage: CGImage, maxBrightness: CGFloat = 3.0) -> CIImage? {
    // Create a CIImage from the input CGImage
    let baseImage = CIImage(cgImage: cgImage)

    // Create HDR color adjustment filter
    let colorAdjust = CIFilter(name: "CIColorMatrix")!
    colorAdjust.setValue(baseImage, forKey: kCIInputImageKey)

    // Calculate HDR multipliers based on maxBrightness
    // This will maintain color ratios while increasing brightness
    colorAdjust.setValue(CIVector(x: maxBrightness, y: 0, z: 0, w: 0), forKey: "inputRVector")
    colorAdjust.setValue(CIVector(x: 0, y: maxBrightness, z: 0, w: 0), forKey: "inputGVector")
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: maxBrightness, w: 0), forKey: "inputBVector")
    // Maintain alpha channel
    colorAdjust.setValue(CIVector(x: 0, y: 0, z: 0, w: 1), forKey: "inputAVector")
    guard let adjustedImage = colorAdjust.outputImage else { return nil }

    // Apply color space transformation using CIImage's colorSpace property
    let transformedImage = adjustedImage.matchedFromWorkingSpace(to: hdrWorkingSpace)!

    // Create context with HDR color space
    let context = CIContext(options: [
        .workingColorSpace: hdrColorSpace,
        .outputColorSpace: hdrColorSpace
    ])

    // Get the image bounds
    let bounds = transformedImage.extent

    // Create a new pixel buffer with HDR format
    var pixelBuffer: CVPixelBuffer?
    let pixelBufferAttributes = [
        kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_64RGBAHalf,
        kCVPixelBufferMetalCompatibilityKey: true
    ] as CFDictionary
    CVPixelBufferCreate(kCFAllocatorDefault, Int(bounds.width), Int(bounds.height), kCVPixelFormatType_64RGBAHalf, pixelBufferAttributes, &pixelBuffer)
    guard let destinationBuffer = pixelBuffer else { return nil }
    context.render(transformedImage, to: destinationBuffer, bounds: bounds, colorSpace: hdrColorSpace)

    // Create final CIImage from the HDR pixel buffer
    let finalImage = CIImage(cvPixelBuffer: destinationBuffer, options: [.colorSpace: hdrColorSpace])
    return finalImage
}

When reducing the HDR to SDR, the overlay keeps the right color, but then the HDR effect, which I want to keep, is lost.
0
0
333
Jan ’25
Playing fMP4 Raw Chunks in AVPlayer on iOS
Hello Apple Community, We are working on a real-time streaming feature where we receive chunks of raw MP4 data through a custom protocol and store them in a buffer (array). Our goal is to use these data chunks to play a continuous video stream in AVPlayer.
What We've Tried: Custom URL Scheme with AVAssetResourceLoaderDelegate: We implemented a custom URL scheme (customscheme://) to serve the buffered data using AVAssetResourceLoaderDelegate. The method shouldWaitForLoadingOfRequestedResource is called only during the initial allocation. It doesn't get triggered when new chunks are appended to the buffer. Despite appending new data to the buffer, AVPlayer doesn't request further chunks from the delegate.
What We Need: We are looking for a solution where: The player continuously fetches data from the buffer as new chunks are added. The playback remains smooth and uninterrupted, even with real-time data being appended. Ideally, this solution works with AVPlayer while adhering to HLS-like behavior without implementing an HLS server.
Questions: Is AVAssetResourceLoaderDelegate the right approach for this use case? If so, how can we ensure shouldWaitForLoadingOfRequestedResource is called whenever new data is available in the buffer? Are there alternative APIs or recommended patterns for playing real-time MP4 data chunks in AVPlayer? Would implementing a custom FFmpeg-based player be necessary, or can this be achieved using AVPlayer and its APIs?
We appreciate any guidance, suggestions, or examples that can help us achieve this. Thank you!
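A partial answer to the first question, offered as a sketch rather than a definitive solution: the resource loader is only re-entered for byte ranges the player still needs, so the usual pattern is to keep unfinished loading requests and complete them later as chunks arrive, instead of expecting a new delegate callback per chunk. A minimal illustration (names such as pendingRequests and appendChunk are made up for this sketch; a real implementation also needs contentLength, thread safety, and handling of open-ended requests):

import AVFoundation

final class ChunkLoader: NSObject, AVAssetResourceLoaderDelegate {
    private var buffer = Data()
    private var pendingRequests: [AVAssetResourceLoadingRequest] = []

    func appendChunk(_ chunk: Data) {
        buffer.append(chunk)
        pendingRequests.removeAll { serve($0) } // retry stalled requests with the new data
    }

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        loadingRequest.contentInformationRequest?.contentType = "video/mp4"
        loadingRequest.contentInformationRequest?.isByteRangeAccessSupported = true
        if !serve(loadingRequest) { pendingRequests.append(loadingRequest) }
        return true // keep the request open; finish it when enough data has arrived
    }

    private func serve(_ request: AVAssetResourceLoadingRequest) -> Bool {
        guard let dataRequest = request.dataRequest else { return false }
        let offset = Int(dataRequest.currentOffset)
        guard offset < buffer.count else { return false }
        let length = min(dataRequest.requestedLength, buffer.count - offset)
        dataRequest.respond(with: buffer.subdata(in: offset..<offset + length))
        let finished = length >= dataRequest.requestedLength
        if finished { request.finishLoading() }
        return finished
    }
}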
0
1
509
Jan ’25