Dive into the world of video on Apple platforms, exploring ways to integrate video functionality within your iOS, iPadOS, macOS, tvOS, visionOS, or watchOS app.

Video Documentation

Posts under Video subtopic

Post

Replies

Boosts

Views

Activity

Crash in iOS 18 regarding [AVPlayerController _observeValueForKeyPath:oldValue:newValue:]
There are significant crash reports coming from iOS 18 users regarding the AVKit framework, starting from this line: [AVPlayerController _observeValueForKeyPath:oldValue:newValue:], which seems to come from the iOS internal SDK. There are two kinds of crashes we found:

UI modification on a background thread. From the stack trace it seems that when AVPictureInPictureController is being deallocated and its view is being removed from its superview, the code is somehow executed on a background thread, because the line _AssertAutoLayoutOnAllowedThreadsOnly is highlighted just before the crash. But I’ve checked our code around AVPictureInPictureController, and in the locations where we deallocate the object it is always called on the main thread, namely inside viewDidLoad and deinit of a UIViewController class. From the log, the crash seems to happen when a user tries to open another piece of content while the PiP player is active, so the current PiP instance is replaced with a new one. My suspicion is that the observation logic inside AVPlayerController could be the hint to this issue; probably something is broken there, since this issue happens across our app versions and only for iOS 18 users. Unfortunately, I have not been able to reproduce this issue yet; one of my colleagues reproduced it once but hasn’t been able to do it again since. The reports keep rising each day, up to 1.3k events in the last 30 days now.

Over-released object. This one has fewer reports than the first, but I decided to include it since it might contain relevant information: the beginning of the stack trace is similar. The crash timing also seems similar to the first one, where we deallocate the existing AVPictureInPictureController and later replace it with a new one; it is also found only on iOS 18 and also refers to [AVPlayerController _observeValueForKeyPath:oldValue:newValue:]. I have been unable to reproduce this issue so far as well.

Both issues happen on both iPhone and iPad. We’d appreciate any advice on what we can do to avoid this in the future, and ideally a hint on why it could happen. I have reported this issue with bug number FB15620734. I also attached one sample crash report for each of the crashes here: non ui thread access.crash, over release.crash
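A minimal sketch of the kind of defensive teardown being considered while waiting for a fix. The names pipController and replacePictureInPictureController(with:) are placeholders, not from the original post, and this is an assumed mitigation (keep the release of the outgoing controller on the main queue), not a confirmed workaround for the iOS 18 crash:

func replacePictureInPictureController(with playerLayer: AVPlayerLayer) {
    assert(Thread.isMainThread, "PiP setup and teardown should stay on the main thread")

    // Hold the old controller until the next main-queue pass so its KVO
    // deregistration and view removal cannot happen from a background context.
    let oldController = pipController
    pipController = nil
    DispatchQueue.main.async {
        _ = oldController // released here, on the main queue
    }

    pipController = AVPictureInPictureController(playerLayer: playerLayer)
    pipController?.delegate = self
}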
9
13
2.2k
Jun ’25
Reference White Calculation for HDR Video Rendering in Metal
Our multimedia application Boinx FotoMagico displays media files of various kinds with a Metal rendering engine. At the moment we still use the .bgra8Unorm pixel format and sRGB color space and only render in SDR, which is increasingly a problem, as much of today's video content is HDR (e.g. videos shot on an iPhone). For that reason we would like to switch to EDR rendering with the .rgba16Float pixel format and extendedLinearDisplayP3 color space. We have already worked out how to do this for HDR image files, but we still have a technical problem when rendering HDR video files.

We are using AVFoundation to get the video frames as CVPixelBuffers and convert them to MTLTextures using a CVMetalTextureCache. The MTLTextures are then further processed in various compute shaders before being rendered to screen. However, the pixel values in the texture are not what we expected; video frames appear too bright/overexposed.

In the WWDC21 session "Explore HDR rendering with EDR" Ken Greenebaum mentioned: “AVFoundation does not presently decode HDR formats, such as HDR10, to EDR. Consequently, these need to be adapted for use with EDR rendering. This conversion is straightforward and involves two steps. First, converting to linear light by applying the inverse transfer function. And second, dividing by the medium's reference white.” https://developer.apple.com/videos/play/wwdc2021/10161?time=1498

However, the session does not explain how to get or calculate the correct value for "reference white", and we could not find any relevant info on the web. This is why we need DTS assistance. We need the code that calculates the correct value for reference white for any kind of video, whether it is SDR or HDR, and regardless of codec and encoding. I assume that Ken Greenebaum is the best Apple engineer to ask in this case, because he recorded most of the EDR-related WWDC sessions in recent years.

We have written a small test app that renders a short sample video (HLG encoding). The window contains two views. The upper view uses an AVPlayerLayer and renders the video natively, just like QuickTime Player; the video content looks correct here. (The window background is SDR white, so that bright EDR pixels can be clearly identified, e.g. the clouds just above the mountains in the upper left corner of the sample video. You may need to lower display brightness a bit if these clouds do not appear brighter than the white window background.) The bottom view uses a CAMetalLayer and low-level Metal rendering. The CVPixelBuffers we receive from AVFoundation still need to be scaled down so that SDR reference white lands at pixel value 1.0. Entering a value of 9.0 to 10.0 for reference white in the text field makes it look about right on my Studio Display, but that is just experimental for this sample video file. We need code to calculate the correct value for reference white for any kind of video file!

We have a couple of questions:
SDR videos should probably use 1.0 as reference white, as their encoded pixel values can already be used as is. Is this assumption correct?
Will different HDR video encodings (HLG, PQ, etc.) lead to different values for reference white?
Is the value for reference white constant throughout a video, or can it vary over time, either scene by scene or even frame by frame? If it can vary, does the CVPixelBuffer of the current video frame contain all the necessary metadata to calculate the correct value?
Does NSScreen.maximumExtendedDynamicRangeColorComponentValue also influence the reference white value?

The attached sample project is structured so that the only piece of code that needs to be modified is the ViewController.sdrReferenceWhiteValue() function. Please read the comments and the #warning in this function; this is where the code for calculating the reference white value should be inserted. Here is the download link for the sample project: https://www.dropbox.com/scl/fi/4w5gmftav5xhbixu9u6pb/HDRMetalTest.zip?rlkey=n8cm02soux3rx03vplgo6h1lm&dl=0
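A sketch of one possible starting point, assuming the transfer-function attachment of each CVPixelBuffer is authoritative and assuming the ITU-R BT.2408 convention of a 203-nit reference white. The constants below are assumptions, not values confirmed by Apple for this pipeline; notably, 1000/203 ≈ 4.9 for HLG does not match the ~9-10 the poster found empirically, which suggests further factors (e.g. the HLG OOTF or the extended linear Display P3 target) are involved:

import CoreVideo

// Returns a divisor to bring linearized pixel values into EDR space where
// SDR reference white == 1.0. Based on BT.2408 (reference white = 203 nits);
// purely an assumption, not verified against AVFoundation internals.
func sdrReferenceWhiteValue(for pixelBuffer: CVPixelBuffer) -> Float {
    let transfer = CVBufferCopyAttachment(pixelBuffer,
                                          kCVImageBufferTransferFunctionKey,
                                          nil) as? String

    if transfer == (kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ as String) {
        return 10_000.0 / 203.0   // PQ is absolute; nominal peak is 10,000 nits
    } else if transfer == (kCVImageBufferTransferFunction_ITU_R_2100_HLG as String) {
        return 1_000.0 / 203.0    // HLG nominal peak of 1,000 nits
    } else {
        return 1.0                // SDR: encoded values already put reference white at 1.0
    }
}

Since the attachment is read per frame, a per-frame or per-scene variation would be picked up automatically, but whether content actually carries such variation is exactly the open question in the post.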
9
2
629
Feb ’25
AV1 Hardware Decoding
Recently I've been trying to play some AV1-encoded streams on my iPhone 15 Pro Max. First, I check for hardware support:

VTIsHardwareDecodeSupported(kCMVideoCodecType_AV1); // YES

Then I need to create a CMFormatDescription in order to pass it into a VTDecompressionSession. I've tried the following:

{
    mediaType:'vide'
    mediaSubType:'av01'
    mediaSpecific: {
        codecType: 'av01'
        dimensions: 394 x 852
    }
    extensions: {{
        CVFieldCount = 1;
        CVImageBufferChromaLocationBottomField = Left;
        CVImageBufferChromaLocationTopField = Left;
        CVPixelAspectRatio = {
            HorizontalSpacing = 1;
            VerticalSpacing = 1;
        };
        FullRangeVideo = 0;
    }}
}

but VTDecompressionSessionCreate gives me error -8971 (codecExtensionNotFoundErr, I assume). So it has something to do with the extensions dictionary? I can't find anywhere which set of extensions is necessary for it to work 😿. VideoToolbox has convenient functions for creating descriptions of AVC and HEVC streams (CMVideoFormatDescriptionCreateFromH264ParameterSets and CMVideoFormatDescriptionCreateFromHEVCParameterSets), but not for AV1. As of today I am using Xcode 15.0 with the iOS 17.0.0 SDK.
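A sketch of one direction worth trying, by analogy with how HEVC carries its "hvcC" configuration: wrap the AV1CodecConfigurationRecord (the "av1C" box payload from the container, or one built from the sequence header OBU) in the SampleDescriptionExtensionAtoms extension. That VideoToolbox looks for an "av1C" atom here is an assumption drawn from the HEVC pattern, not something confirmed by documentation:

import CoreMedia

// av1C: the raw AV1CodecConfigurationRecord bytes for this stream.
func makeAV1FormatDescription(av1C: Data, width: Int32, height: Int32) throws -> CMFormatDescription {
    let atoms: [CFString: Any] = ["av1C" as CFString: av1C]
    let extensions: [CFString: Any] = [
        kCMFormatDescriptionExtension_SampleDescriptionExtensionAtoms: atoms
    ]
    var formatDescription: CMFormatDescription?
    let status = CMVideoFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                                codecType: kCMVideoCodecType_AV1,
                                                width: width,
                                                height: height,
                                                extensions: extensions as CFDictionary,
                                                formatDescriptionOut: &formatDescription)
    guard status == noErr, let formatDescription else {
        throw NSError(domain: NSOSStatusErrorDomain, code: Int(status), userInfo: nil)
    }
    return formatDescription
}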
8
3
8.9k
Nov ’24
AVPlayerViewController not displaying playback controls in iOS 18
Hi everyone, I’ve encountered an issue with the showsPlaybackControls property in AVPlayerViewController after updating to iOS 18. Even though it’s set to true, the native playback controls (play, pause, etc.) are no longer appearing as they used to in previous iOS versions. This behavior was consistent and worked perfectly prior to iOS 18. Additionally, I’m seeing the same problem when using the VideoPlayer in SwiftUI. The native controls that should appear by default seem to have vanished after the update. Has anyone else experienced this? Is there any workaround or additional configuration required to restore the native controls? Any help or insights would be appreciated. Thanks!

struct CustomPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {
        playerController.player = player
        playerController.showsPlaybackControls = true
        player.play()
    }

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        return AVPlayerViewController()
    }
}
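One thing that may be worth trying, offered only as an assumption and not a confirmed iOS 18 fix: configure the controller once in makeUIViewController and keep updateUIViewController minimal, since SwiftUI can call the update method repeatedly (and calling play() there on every update could interfere with the controls):

struct CustomPlayerView: UIViewControllerRepresentable {
    let player: AVPlayer

    func makeUIViewController(context: Context) -> AVPlayerViewController {
        // Configure once, at creation time, instead of on every SwiftUI update.
        let controller = AVPlayerViewController()
        controller.player = player
        controller.showsPlaybackControls = true
        player.play()
        return controller
    }

    func updateUIViewController(_ controller: AVPlayerViewController, context: Context) {
        // Only reassign if the player instance actually changed.
        if controller.player !== player {
            controller.player = player
        }
    }
}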
8
1
1.6k
Oct ’24
Writing video using AVAssetWriter, AVAssetReader, and AVSpeechSynthesizer
Hello! First, some version and software details. Software: iOS 18.1. Hardware: iPhone 14 Pro Max and later. Xcode: 16.0.

Summary: AVAssetReader is not concatenating a video at the beginning of the output video. The output video should contain a scene of me introducing the content, followed by a blue screen with AVSpeechSynthesizer reading out a text that I pasted above the "Generate Video" button.

Details: Now, let's talk about the app. Basically, I’m developing an app that generates a video with the following features: my app will create an output video that is split into an opening scene followed by a fully blue screen. The opening scene will be taken from a video I choose from my gallery; I will read the opening video using AVAssetReader as usual. After the opening scene, I will use the content of a text read by AVSpeechSynthesizer.write(): the synthesized audio will start playing while the blue screen is displayed. All of this is already defined in the attached project, and each project file has a comment at the beginning introducing its content.

How to test: Write something in the field above the "Generate Video" button, for example "Hello, world!" Then press the "Library" button and select a video from the gallery, about 30 seconds long. That’s it. Press the "Generate Video" button. The result I’ve experienced is a crash or a failure to generate the video.

Practical example of what I want to achieve: suppose I record a 30-second video where I say, "I’m going to tell you the story of Snow White." Then I paste the "Snow White" story into the field above the "Generate Video" button. The output video should contain me saying, "I’m going to tell you the story of Snow White." After that, AVSpeechSynthesizer will read the story I pasted, while displaying a blue screen.

I look forward to a solution. Thank you very much!

convertToCMSampleBuffer.swift convertToPixelBuffer.swift createInputs.swift createVideo.swift test.swift saveVideo.swift TestApp.swift editingVideo.swift sampleReaderProvider.swift misc.swift sampleProvider.swift
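For context, a minimal sketch of the synthesis step only: collecting the PCM buffers that AVSpeechSynthesizer.write() produces for the pasted text. Converting them to CMSampleBuffers and appending them after the opening scene is presumably what the attached convertToCMSampleBuffer.swift does; the class and method names here are assumptions, not the poster's actual code:

import AVFoundation

final class SpeechRenderer {
    // Keep the synthesizer alive for the duration of the asynchronous write.
    private let synthesizer = AVSpeechSynthesizer()

    func renderPCMBuffers(for text: String,
                          completion: @escaping ([AVAudioPCMBuffer]) -> Void) {
        let utterance = AVSpeechUtterance(string: text)
        var buffers: [AVAudioPCMBuffer] = []

        synthesizer.write(utterance) { buffer in
            guard let pcm = buffer as? AVAudioPCMBuffer else { return }
            if pcm.frameLength == 0 {
                // A zero-length buffer appears to mark the end of synthesis.
                completion(buffers)
            } else {
                buffers.append(pcm)
            }
        }
    }
}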
8
0
1.1k
Nov ’24
videoCaptureQueue makes the app crash when using iOS 18.4.1
Hi all, I have a problem when using iOS 18.4.1. I have an iPhone 16 Pro and an iPad Air, both updated to iOS 18.4.1. I tried the following sample code; however, when I run the app for around 30 seconds to 1 minute, the application crashes. When I use another iPad with iOS 17, it does not have the same problem. https://developer.apple.com/documentation/createml/creating-an-action-classifier-model https://developer.apple.com/documentation/createml/detecting_human_actions_in_a_live_video_feed#overview%29,
6
0
105
May ’25
AVAssetExportSession is not working on iPhone 16 Pro Max
My app is live on the App Store. Users are running it on iPhone 16 Pro Max and they are getting "Operation Stopped" while combining videos and audio, specifically only on iPhone 16 Pro Max; on every other device it works fine. And when I add AVAssetExportPresetPassthrough it is able to combine videos and audio, but it does not respect the encoding and the result has no audio.

NSArray *compatiblePresets = [AVAssetExportSession exportPresetsCompatibleWithAsset:composition];
if ([compatiblePresets containsObject:AVAssetExportPresetHighestQuality]) {
    presetName = AVAssetExportPresetHighestQuality;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1920x1080]) {
    presetName = AVAssetExportPreset1920x1080;
} else if ([compatiblePresets containsObject:AVAssetExportPreset1280x720]) {
    presetName = AVAssetExportPreset1280x720;
} else {
    presetName = AVAssetExportPresetPassthrough;
}
} else {    // enclosing condition not shown in the original snippet
    presetName = AVAssetExportPreset1280x720;
}
5
2
823
Oct ’24
AVAssetReader and AVAssetWriter: ideal settings
Hello, To create a test project, I want to understand how the video and audio settings would look for a destination video whose content comes from a source video. I obtained the output from the source video's audio and video tracks as follows:

// Audio
let audioSettings = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2
] as [String: Any]
var audioOutput = AVAssetReaderTrackOutput(track: audioTrack!, outputSettings: audioSettings)

// Video
let videoSettings = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey: videoTrack!.naturalSize.width,
    kCVPixelBufferHeightKey: videoTrack!.naturalSize.height
] as [String: Any]
var videoOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: videoSettings)

With this, I'm obtaining the CMSampleBuffer using AVAssetReader.copyNextSampleBuffer. How can I add it to the destination video? Should I use a while loop, considering I already have the AVAssetWriter set up? Something like this:

while let buffer = videoOutput.copyNextSampleBuffer() {
    if let imgBuffer = CMSampleBufferGetImageBuffer(buffer) {
        let frame = imgBuffer as CVPixelBuffer
        let time = CMSampleBufferGetPresentationTimeStamp(buffer)
        adaptor.append(frame, withPresentationTime: time)
    }
}

Lastly, regarding the destination video: how should the AVAssetWriterInput for audio and the pixel buffer of the destination video be set up? Provide an example, something like:

let audioSettings = […] as [String: Any]

Looking forward to your response.
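A sketch of what typical writer-side settings could look like for this scenario: H.264 video appended through an AVAssetWriterInputPixelBufferAdaptor (the adaptor used in the loop above) and AAC audio appended as sample buffers. The specific codec, sample rate, and bit rate are illustrative assumptions, not the only correct answer:

import AVFoundation

// Illustrative destination settings; bit rates and formats are assumptions.
let width = Int(videoTrack!.naturalSize.width)
let height = Int(videoTrack!.naturalSize.height)

let writerVideoSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: width,
    AVVideoHeightKey: height
]
let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: writerVideoSettings)
videoInput.expectsMediaDataInRealTime = false

// The adaptor provides the pixel-buffer path used in the while loop above.
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: videoInput,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: width,
        kCVPixelBufferHeightKey as String: height
    ])

let writerAudioSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 2,
    AVEncoderBitRateKey: 128_000
]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: writerAudioSettings)
audioInput.expectsMediaDataInRealTime = false

// Audio sample buffers read from audioOutput can then be appended directly:
// audioInput.append(sampleBuffer)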
5
0
586
Dec ’24
AVAssetWriter & AVTimedMetadataGroup in AVMultiCamPiP
I'm trying to add metadata every second during video capture in the Swift sample app "AVMultiCamPiP": a simple string that changes every second, with a write function triggered by a Timer. I can't get it to work; no matter how I arrange it, it always ends up with the error "Cannot create a new metadata adaptor with an asset writer input that has already started writing".

This is the setup section:

// Add a metadata input
let assetWriterMetaDataInput = AVAssetWriterInput(mediaType: .metadata,
                                                  outputSettings: nil,
                                                  sourceFormatHint: AVTimedMetadataGroup().copyFormatDescription())
assetWriterMetaDataInput.expectsMediaDataInRealTime = true
assetWriter.add(assetWriterMetaDataInput)
self.assetWriterMetaDataInput = assetWriterMetaDataInput

This is the timed metadata creation, which gets triggered every second:

let newNoteMetadataItem = AVMutableMetadataItem()
newNoteMetadataItem.value = "Some string" as (NSCopying & NSObjectProtocol)?
let metadataItemGroup = AVTimedMetadataGroup(items: [newNoteMetadataItem],
                                             timeRange: CMTimeRangeMake(start: CMClockGetTime(CMClockGetHostTimeClock()),
                                                                        duration: CMTime.invalid))
movieRecorder?.recordMetaData(meta: metadataItemGroup)

This function is supposed to add the metadata to the track:

func recordMetaData(meta: AVTimedMetadataGroup) {
    guard isRecording,
          let assetWriter = assetWriter,
          assetWriter.status == .writing,
          let input = assetWriterMetaDataInput,
          input.isReadyForMoreMediaData else {
        return
    }

    let metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
    metadataAdaptor.append(meta)
}

I have an older code example in Objective-C which works OK, but it uses "AVCaptureMetadataInput appendTimedMetadataGroup" and writes to an identifier called "quickTimeMetadataLocationNote". I'd like to do something similar in the above Swift code. All suggestions are appreciated!
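The error message points at the adaptor being created inside recordMetaData(meta:), i.e. after writing has started. A sketch of setup that creates the adaptor once up front and gives the input a concrete format hint built from a metadata identifier; the identifier "mdta/com.example.videonote" is a placeholder, and the whole structure follows the common timed-metadata writing pattern rather than the AVMultiCamPiP sample's actual code:

import AVFoundation
import CoreMedia

// One-time setup, before assetWriter.startWriting():
let identifier = "mdta/com.example.videonote"   // placeholder identifier
let spec: [CFString: Any] = [
    kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier: identifier,
    kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType: kCMMetadataBaseDataType_UTF8
]
var formatDescription: CMFormatDescription?
CMMetadataFormatDescriptionCreateWithMetadataSpecifications(allocator: kCFAllocatorDefault,
                                                            metadataType: kCMMetadataFormatType_Boxed,
                                                            metadataSpecifications: [spec] as CFArray,
                                                            formatDescriptionOut: &formatDescription)

let metadataInput = AVAssetWriterInput(mediaType: .metadata,
                                       outputSettings: nil,
                                       sourceFormatHint: formatDescription)
metadataInput.expectsMediaDataInRealTime = true
assetWriter.add(metadataInput)

// Create the adaptor once, here, not inside recordMetaData(meta:).
let metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: metadataInput)

// Then, every second:
let item = AVMutableMetadataItem()
item.identifier = AVMetadataIdentifier(rawValue: identifier)
item.dataType = kCMMetadataBaseDataType_UTF8 as String
item.value = "Some string" as NSString
let group = AVTimedMetadataGroup(items: [item],
                                 timeRange: CMTimeRange(start: CMClockGetTime(CMClockGetHostTimeClock()),
                                                        duration: .invalid))
metadataAdaptor.append(group)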
5
0
529
Dec ’24
CoreMediaErrorDomain -12035 error when playing a Fairplay-protected HLS stream on iOS 18+ through the Apple lightning AV Adapter
Our iOS/AppleTV video content playback app uses AVPlayer to play HLS video streams and supports both custom and system playback UIs. The FairPlay content key is retrieved using AVContentKeySession. AirPlay is supported too. When the iPhone is connected to a TV through the Lightning Apple Digital AV Adapter (A1438), the app is mirrored as expected.

Problem: when using an iPhone or iPad on iOS 18.1.1, FairPlay-protected HLS streams are not played and a CoreMediaErrorDomain -12035 error is received by the AVPlayerItem. Also, once the issue has occurred, the mirroring freezes (the TV indefinitely displays the app playback screen), although the app works fine on the iOS device. The content key retrieval works as expected (I can see that 2 content key requests are made by the system, by the way, probably one for the local playback and one for the adapter, as when AirPlaying) and the error is thrown after providing the AVContentKeyResponse. Unfortunately, and as far as I know, there is no documentation on CoreMediaErrorDomain errors, so I don't know what -12035 means.

The issue does not occur: on an iPhone on iOS 17.7 (even with FairPlay-protected HLS streams); when playing DRM-free video content (whatever the iOS version); when using the USB-C AV Adapter (whatever the iOS version).

Also worth noting: the issue does not occur with other video playback apps such as Apple TV or Netflix, although I don't have any details on the kind of streams these apps play or the way the FairPlay content key is retrieved (if any), so I don't know whether that is relevant.
5
2
1.3k
Feb ’25
Alternative for crashing API MPMediaItemArtwork
When setting the now playing info for playing media in MPNowPlayingInfoCenter we can set artwork. But it seems the Apple API for creating the artwork is crashing on iOS 18 (FB15145734). On iOS 17 this gave the warning that the completion handler was not run on the main thread. I've tried to seek help here: https://stackoverflow.com/questions/78989543/swift-data-race-with-appkit-mpmediaitemartwork-function/78990231?noredirect=1#comment139277425_78990231 but it seems that it's not possible to override the completion handler, and therefore it's up to Apple to fix this issue.

.task {
    await MainActor.run {
        let nowPlayingInfoCenter = MPNowPlayingInfoCenter.default()
        var nowPlayingInfo = [String: Any]()

        let image = NSImage(named: "image")!

        // warning: data race detected: @MainActor function at MPMediaItemArtwork/ContentView.swift:22 was not called on the main thread
        nowPlayingInfo[MPMediaItemPropertyArtwork] = MPMediaItemArtwork(boundsSize: image.size, requestHandler: { _ in
            // Not on main thread here!
            return image
        })

        nowPlayingInfoCenter.nowPlayingInfo = nowPlayingInfo
    }
}

I'm wondering if there is an alternative method to set the now playing artwork?
4
0
863
Feb ’25
M3 chip reverse video playback performance
We have developed a simple video player Swift application for macOS, which uses the AVFoundation framework. A special feature of this app is the ability to play the video backward with speeds like -0.25x, -0.5x, and -1.0x. The MP4 video file is played directly from the local file system, the video codec is H.264, and the audio is AAC. Video files are huge, around 10 GB and 3 hours in length.

Playing video in the reverse direction works well on a MacBook Air with an M1 or M2 chip. When we run the same app with the same video on a MacBook Air with an M3 chip, the reverse playback is much worse. Playback might stutter badly, especially in the latter part of the video. The same behavior also happens in Apple's QuickTime Player when playing in the reverse direction at -1x speed. What's even stranger is that at one point in time the video playback is totally smooth, but after a while the playback is stuttering again. For example, this morning reverse playback worked 100% smoothly; then I rebooted the Mac and tried again, and the result was stuttering. After this the Mac stayed idle for several hours and I tried to reverse-play the video again: smooth performance! My conclusion: M3 playback works fine if the stars in the sky are aligned correctly. :-)

So it's not only our app; QuickTime Player shows exactly the same behavior, and only with the M3 chip. The same symptom appears on another similar M3 Mac, so it can't be a single faulty unit. At the same time, the open-source video player iina can reverse-play the video well on the same Mac. All Macs have otherwise identical configurations: 16 GB RAM and macOS 15.1.1. Have you experienced the same problem? Any chance to solve this problem? I really hope that the M4-chip Mac behaves better here.
4
0
770
Dec ’24
AVSampleBufferDisplayLayerContentLayer memory leaks.
I noticed that AVSampleBufferDisplayLayerContentLayer is not released when the AVSampleBufferDisplayLayer is removed and released. It is possible to reproduce the issue with this simple code:

import AVFoundation
import UIKit

class ViewController: UIViewController {
    var displayBufferLayer: AVSampleBufferDisplayLayer?

    override func viewDidLoad() {
        super.viewDidLoad()

        let displayBufferLayer = AVSampleBufferDisplayLayer()
        displayBufferLayer.videoGravity = .resizeAspectFill
        displayBufferLayer.frame = view.bounds
        view.layer.insertSublayer(displayBufferLayer, at: 0)
        self.displayBufferLayer = displayBufferLayer

        DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
            self.displayBufferLayer?.flush()
            self.displayBufferLayer?.removeFromSuperlayer()
            self.displayBufferLayer = nil
        }
    }
}

In my real project I have multiple AVSampleBufferDisplayLayers created and removed in different view controllers. This is problematic because the number of leaked AVSampleBufferDisplayLayerContentLayer instances keeps increasing. I wonder whether I should use a pool of AVSampleBufferDisplayLayers and reuse them; however, I'm slightly afraid that this could also lead to strange bugs.

Edit: It doesn't cause leaks on an iOS 18 device, but it leaks on an iPad Pro with iOS 17.5.1.
4
1
496
Mar ’25
AVContentKeySession key renewal on Airplay
Our streaming app uses FairPlay-protected video streams, which previously worked fine when using AVAssetResourceLoaderDelegate to provide CKCs. Recently we migrated to AVContentKeySession, and while everything works as expected during regular playback, we encountered an issue with AirPlay. Our CKC has a 120-second expiry, so we renew it by calling renewExpiringResponseData. This triggers the didProvideRenewingContentKeyRequest delegate callback, and we respond with an updated CKC. However, when streaming via AirPlay, both video and audio freeze exactly after 120 seconds. To validate the issue, I tested with AVAssetResourceLoaderDelegate and found that I can reproduce the same freeze if I do not renew the key. This suggests that AirPlay is not accepting the renewed CKC when using AVContentKeySession.

Additional details: This issue occurs across different iOS versions and various AirPlay devices. The same content plays without issues when played directly on the device. The renewal process is successful, and segments continue to load, but playback remains frozen. I tried renewing the CKC a bit early (at 100 s). I also tried setting player.usesExternalPlaybackWhileExternalScreenIsActive = true, but the issue persists. We don't use a persistent key.

Is there anything else that needs to be considered for proper key renewal when AirPlaying? Any help on how to fix this, or confirmation that this is a known issue, would be greatly appreciated.
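For reference, a compressed sketch of the renewal flow described above; the names session, currentKeyRequest, and makeContentKeyResponse(for:) are assumed stand-ins, not the app's real code:

// Driven by a timer, renew a bit before the 120 s CKC expiry.
func scheduleRenewal() {
    Timer.scheduledTimer(withTimeInterval: 100, repeats: true) { [weak self] _ in
        guard let self, let request = self.currentKeyRequest else { return }
        self.session.renewExpiringResponseData(for: request)
    }
}

func contentKeySession(_ session: AVContentKeySession,
                       didProvideRenewingContentKeyRequest keyRequest: AVContentKeyRequest) {
    // Fetch a fresh CKC from the key server and hand it back.
    makeContentKeyResponse(for: keyRequest) { response in
        keyRequest.processContentKeyResponse(response)
    }
}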
4
2
536
Mar ’25
WideFOV - APMP - Stereo
Does anyone have a template of an Apple Projected Media Profile format description, or a file of a stereo wideFOV video? Use case: I have 2 compatible cameras that I stereo-sync, and I want to move the projection information from the compatible video to the spatial video that combines them. Every version I can come up with crashes the AVP, and when viewing it as Spatial in Tahoe I just get a black screen.
4
0
141
Jun ’25
After playing an HDR video on iPhone for a while, the HDR effect disappears and the screen brightness decreases
When I use AVPlayer to obtain the video frames (CVPixelBufferRef) of an HDR video and use AVSampleBufferDisplayLayer to display them on the screen, after a period of time the HDR video content and the screen gradually darken, losing the HDR effect.

Steps to reproduce: Create an AVPlayer to loop an HDR video, specifying the video frame format as kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange. Create a timer to get the video frame CVPixelBufferRef at 30 frames per second. Use AVSampleBufferDisplayLayer to display the CVPixelBufferRef on the screen. Don't operate the phone and wait for a period of time (such as 40 minutes); the HDR effect disappears and the screen darkens.

Note: You need to use an iPhone device with iOS 18.5 or below. You need to ensure that the HDR video is played in a loop, that is, that the screen continues to display HDR content; then wait for a period of time, which depending on the device is 20-40 minutes. In the iPhone Photos app, the same problem occurs after playing an HDR video in a loop for a long time.

Expected results: When rendering HDR content for a long time, the HDR effect is always preserved, and the HDR content and screen do not darken.

Current results: After about 20-40 minutes, the HDR effect disappears and the screen darkens.
4
0
557
Jul ’25
Sending '$0' risks causing data races
I had no luck compiling the sample code provided by Apple with Xcode 16.0 beta 5: the ScreenCaptureKit demo (https://developer.apple.com/documentation/screencapturekit/capturing_screen_content_in_macos). The part that is failing is:

streamOutput.capturedFrameHandler = { continuation.yield($0) }

And the error message is:

Sending '$0' risks causing data races
Task-isolated '$0' is passed as a 'sending' parameter; Uses in callee may race with later task-isolated uses

Please enlighten me as to why this is an issue and how to avoid it. Thanks in advance!
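This diagnostic comes from Swift 6 strict concurrency: the value captured as $0 (the sample's frame type) is not Sendable, while the continuation's yield takes its element as a 'sending' parameter. One workaround that may apply, assuming the sample's CapturedFrame stores only values that are not mutated after creation, is to declare the conformance yourself; this is an assumption about the sample code, not an Apple-provided fix:

// Assumption: CapturedFrame is the sample's value type and its stored
// properties are immutable after creation. If that holds, telling the
// compiler it is safe to send across isolation silences the diagnostic.
extension CapturedFrame: @unchecked Sendable {}

// The handler itself can stay essentially as it was:
streamOutput.capturedFrameHandler = { frame in
    continuation.yield(frame)
}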
3
2
2.6k
Oct ’24
Selecting an appropriate AVCaptureDeviceFormat
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior, such that video can be recorded at a range of different resolutions. There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step. However, what I've observed is that there can be a range of available formats for a device at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I think I don't have a way to make use of that default logic? And it is undocumented? Does this mean that the only way to select a format is to implement a comparison function that considers all different values of all different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator? If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property. Are there any guarantees about what types for formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so I could filter the formats down only to those with this setting without risking filtering out all of the supported formats. I suspect I may be missing something obvious. Any help would be greatly appreciated!
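A sketch of the filter-and-compare approach described above: restrict to a known pixel format ('420v') and pick the smallest format whose dimensions still cover the target. Whether a '420v' format exists at every resolution on every device is exactly the open question in the post, so the sketch falls back to all formats if the filter comes up empty; the comparator is one possible choice, not a statement of what AVCaptureSession's own default-selection logic does:

import AVFoundation

func selectFormat(for device: AVCaptureDevice,
                  targetWidth: Int32,
                  targetHeight: Int32) throws {
    let preferredSubType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange // '420v'

    let candidates = device.formats.filter { format in
        let desc = format.formatDescription
        let dims = CMVideoFormatDescriptionGetDimensions(desc)
        return CMFormatDescriptionGetMediaSubType(desc) == preferredSubType
            && dims.width >= targetWidth
            && dims.height >= targetHeight
    }

    // Fall back to all formats if no '420v' format covers the target size.
    let pool = candidates.isEmpty ? device.formats : candidates
    guard let best = pool.min(by: { lhs, rhs in
        let l = CMVideoFormatDescriptionGetDimensions(lhs.formatDescription)
        let r = CMVideoFormatDescriptionGetDimensions(rhs.formatDescription)
        return l.width * l.height < r.width * r.height
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = best
    device.unlockForConfiguration()
}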
3
0
723
Dec ’24
AVPlayer unpredictable range requests on iOS when streaming *.mov file
Hi all, I'm trying to diagnose and resolve an issue with stuttering video playback using the standard AVPlayer. The video in question is a 4K, 39-second file in *.mov format, being played on an iOS device. It's served via a local HTTP server that proxies requests to a backend to fetch and process the content. The project uses end-to-end encrypted storage, which necessitates the proxy for handling data processing. While playback in offline scenarios is smooth, we are encountering issues with smooth playback during streaming. The same video streams smoothly on other platforms using the same connection, so network limitations are not a factor. On iOS, playback is consistently choppy, with pauses every 1-3 seconds. The video does not appear to buffer adequately for smooth playback. One particularly curious aspect is the seemingly random pattern of Content-Range requests made by the AVPlayer when streaming the video. Below is an example of the range requests:
3
2
418
Apr ’25