I'm using an AVCaptureSession to send video and audio samples to an AVAssetWriter. When I play back the resulting video, there is sometimes a significant lag between the audio and the video, so they're just not in sync. But sometimes they are, with the same code.
If I look at the very first presentation time stamps of the buffers being sent to the delegate, via
func captureOutput(_: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
I see something like this:
Adding audio samples for pts time 227711.0855328798,
Adding video samples for pts time 227710.778785374
That is, the audio and video clocks don't line up: the first audio sample I receive is at roughly 11.08, while the first video sample is earlier in time, at roughly 10.77. These times are the presentation time stamps of the buffers, and outputPresentationTimeStamp is exactly the same number.
It feels like the video and audio clocks are simply mismatched.
This doesn't always happen: sometimes they're synced. Sometimes they're not.
Any ideas? The device I'm recording from is a webcam connected to the USB-C port, on iPadOS.
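For comparison, here is a minimal sketch of one common pattern, not the code from the question: start the writer session at the presentation time stamp of the very first buffer that arrives, whichever track it belongs to, so both inputs are appended against the same timeline. The writer and its inputs are assumed to be configured elsewhere, with startWriting() already called, and both data outputs are assumed to deliver on the same serial queue.

import AVFoundation

// A minimal sketch, not the poster's code: anchor the writer session at the PTS of
// the first buffer seen (audio or video) so both tracks share one timeline.
// `writer`, `videoInput`, and `audioInput` are assumed to be configured elsewhere,
// with writer.startWriting() already called, and both outputs are assumed to use
// the same serial delegate queue.
final class RecordingWriter: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    private let writer: AVAssetWriter
    private let videoInput: AVAssetWriterInput
    private let audioInput: AVAssetWriterInput
    private var sessionStarted = false

    init(writer: AVAssetWriter, videoInput: AVAssetWriterInput, audioInput: AVAssetWriterInput) {
        self.writer = writer
        self.videoInput = videoInput
        self.audioInput = audioInput
        super.init()
    }

    // Both data-output delegate protocols declare this same method.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if !sessionStarted {
            // Anchoring at the first PTS seen avoids an implicit offset between tracks.
            writer.startSession(atSourceTime: pts)
            sessionStarted = true
        }
        if output is AVCaptureVideoDataOutput {
            if videoInput.isReadyForMoreMediaData { videoInput.append(sampleBuffer) }
        } else if audioInput.isReadyForMoreMediaData {
            audioInput.append(sampleBuffer)
        }
    }
}

If both tracks are already handled this way, comparing the incoming timestamps against the capture session's synchronizationClock would be the next thing to look at.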
Short summary
When setting exposureMode to .locked or .custom, the brightness of a video stream still changes depending on the composition and contrast of the visible scene. These changes seem to come from contrast enhancements or dynamic range optimizations and totally break any analysis of the image that requires assessing absolute luminance. While exposure lock does indeed seem to lock the physical exposure parameters of the camera (shutter speed and ISO), I cannot find any way to control these "soft" modifiers.
Details
Background
I am the developer of the app "phyphox", an educational app that makes the phone's sensors accessible to students as measurement tools in science experiments. Currently I am working on implementing photometric measurements through the camera and one very important aspect of it is luminance measurements.
This is particularly relevant since the phone's light sensor has no publicly accessible API, and the camera could, to some extent, make experiments available to Apple users that are otherwise only possible on Android devices.
Implementation
The app uses AVFoundation and explicitly picks individual cameras since camera groups do not support custom exposure settings. This means that it handles camera switching during zoom by itself and even implements its own auto exposure routines to optimize for the use in experiments. Therefore it always stays in custom exposure mode. The app uses YUV420 color space and the individual frames are analyzed in Metal using compute shaders.
However, the effects discussed here still occur if I remove all the code that controls the camera and replace it with a simple sequence (sketched below): set the exposure mode to custom, set custom exposure values, set a fixed white balance, and then set the exposure mode to locked, as suggested on Stack Overflow. This helps neither on an iPhone 14 Pro nor on an iPhone 8, despite a report on the developer forums that it would resolve the issue for older devices.
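For reference, a minimal sketch of that locking sequence (not phyphox's actual code); the AVCaptureDevice.current… constants simply freeze whatever exposure and white balance values are active at the time:

import AVFoundation

// Sketch of the sequence described above, not phyphox's actual code.
func lockExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    device.exposureMode = .custom
    // Freeze the currently active shutter speed and ISO.
    device.setExposureModeCustom(duration: AVCaptureDevice.currentExposureDuration,
                                 iso: AVCaptureDevice.currentISO,
                                 completionHandler: nil)
    // Freeze the currently active white balance gains.
    device.setWhiteBalanceModeLocked(with: AVCaptureDevice.currentWhiteBalanceGains,
                                     completionHandler: nil)
    device.exposureMode = .locked
    device.unlockForConfiguration()
}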
The app is open source, so the code can be seen in our current development branch (without the changes for the tests here, though) on github.
The videos below use the implementation with the suggestion from stackoverflow, but they can be reproduced in the same way with "professional" camera apps that promise manual control over the camera (like the Blackmagic cam to quote a reputable company) as well as the stock camera app after pressing and holding on the preview to enable AE/AF lock.
Demonstration
These examples were captured on an iPhone 14 Pro. The central part of the image (highlighted by the app using Metal shaders after capture) should not change with fixed exposure settings, but significant changes are noticeable if something changes at the edge of the frame, for example when I move a black piece of cardboard in from above:
https://share.icloud.com/photos/0b1f_3IB6yAQG-qSH27pm6oDQ
The graph above the camera preview shows the average luminance (gamma corrected and weighted based on sRGB) across the highlighted central area. As mentioned before, it should not change because of something happening at the edge of the frame (at worst it should get slightly darker because of the cardboard's shadow).
In my opinion, the iPhone changes its mind about the ideal contrast as soon as it sees a different exposure histogram because of the dark image area from the cardboard, but that's just a guess.
For completeness here is the same effect in the stock camera app with AE/AF lock enabled:
https://share.icloud.com/photos/0cd7QM8ucBZKwPwE9mybnEowg
Here you can also see that the iPhone "ramps" the changes. The brightness of the gray area does not change immediately but transitions smoothly, so this is clearly deliberate postprocessing.
So...
Any suggestion on how to prevent this behavior would be highly appreciated.
Capturing more than one display is no longer working with macOS Sequoia.
We have a product that allows users to capture up to 2 displays/screens. Our application is using gstreamer which in turn is based on AVFoundation.
I found a quick way to replicate the issue by just running 2 captures from separate terminals. Assuming display 1 has device index 0, and display 2 has device index 1, here are the steps:
install gstreamer with
brew install gstreamer
Then open 2 terminal windows and launch the following processes:
terminal 1 (device-index:0):
gst-launch-1.0 avfvideosrc -e device-index=0 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
terminal 2 (device-index:1):
gst-launch-1.0 avfvideosrc -e device-index=1 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
The first process launched shows its screen; the second one does not.
Testing this on macOS Ventura and Sonoma works as expected, showing both screens.
I submitted the same issue on Feedback Assistant: FB15900976
Hi all,
I'm trying to diagnose and resolve an issue with stuttering video playback using the standard AVPlayer. The video in question is a 4K, 39-second file in *.mov format, being played on an iOS device. It's served via a local HTTP server that proxies requests to a backend to fetch and process the content. The project uses end-to-end encrypted storage, which necessitates the proxy for handling data processing. While playback in offline scenarios is smooth, we are encountering issues with smooth playback during streaming. The same video streams smoothly on other platforms using the same connection, so network limitations are not a factor.
On iOS, playback is consistently choppy, with pauses every 1-3 seconds. The video does not appear to buffer adequately for smooth playback.
One particularly curious aspect is the seemingly random pattern of Content-Range requests made by the AVPlayer when streaming the video. Below is an example of the range requests:
I am trying to achieve an animated gradient effect that changes values over time based on the current seconds. I am using AVPlayer and AVMutableVideoComposition along with a custom compositor class and instruction to generate the effect. I didn't want to load any video file, but rather generate a custom video with my own set of instructions. I used Metal compute shaders to generate the effect and want the video to be 20 seconds long.
However, when I run the code, I get a frozen player with the gradient applied, and when I try to play the video, I get this warning in the console: Visual isTranslatable: NO; reason: observation failure: noObservations
Here is the screenshot:
My entire code:
import AVFoundation
import Metal
class GradientVideoCompositorTest: NSObject, AVVideoCompositing {
var sourcePixelBufferAttributes: [String: Any]? = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
var requiredPixelBufferAttributesForRenderContext: [String: Any] = [
kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
]
private var renderContext: AVVideoCompositionRenderContext?
private var metalDevice: MTLDevice!
private var metalCommandQueue: MTLCommandQueue!
private var metalLibrary: MTLLibrary!
private var metalPipeline: MTLComputePipelineState!
override init() {
super.init()
setupMetal()
}
func setupMetal() {
guard let device = MTLCreateSystemDefaultDevice(),
let queue = device.makeCommandQueue(),
let library = device.makeDefaultLibrary(),
let function = library.makeFunction(name: "gradientShader") else {
fatalError("Metal setup failed")
}
self.metalDevice = device
self.metalCommandQueue = queue
self.metalLibrary = library
self.metalPipeline = try? device.makeComputePipelineState(function: function)
}
func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
renderContext = newRenderContext
}
func startRequest(_ request: AVAsynchronousVideoCompositionRequest) {
guard let outputPixelBuffer = renderContext?.newPixelBuffer(),
let metalTexture = createMetalTexture(from: outputPixelBuffer) else {
request.finish(with: NSError(domain: "com.example.gradient", code: -1, userInfo: nil))
return
}
let time = Float(request.compositionTime.seconds)
renderGradient(to: metalTexture, time: time)
request.finish(withComposedVideoFrame: outputPixelBuffer)
}
private func createMetalTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
var texture: MTLTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
pixelFormat: .bgra8Unorm,
width: width,
height: height,
mipmapped: false
)
textureDescriptor.usage = [.shaderWrite, .shaderRead]
CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
if let textureCache = createTextureCache(), let cvTexture = createCVMetalTexture(from: pixelBuffer, cache: textureCache) {
texture = CVMetalTextureGetTexture(cvTexture)
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
return texture
}
private func renderGradient(to texture: MTLTexture, time: Float) {
guard let commandBuffer = metalCommandQueue.makeCommandBuffer(),
let commandEncoder = commandBuffer.makeComputeCommandEncoder() else { return }
commandEncoder.setComputePipelineState(metalPipeline)
commandEncoder.setTexture(texture, index: 0)
var mutableTime = time
commandEncoder.setBytes(&mutableTime, length: MemoryLayout<Float>.size, index: 0)
let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
let threadGroups = MTLSize(
width: (texture.width + 15) / 16,
height: (texture.height + 15) / 16,
depth: 1
)
commandEncoder.dispatchThreadgroups(threadGroups, threadsPerThreadgroup: threadsPerGroup)
commandEncoder.endEncoding()
commandBuffer.commit()
// Wait for the compute pass to finish so the pixel buffer is fully written before
// it is handed back to the composition request.
commandBuffer.waitUntilCompleted()
}
private func createTextureCache() -> CVMetalTextureCache? {
var cache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, metalDevice, nil, &cache)
return cache
}
private func createCVMetalTexture(from pixelBuffer: CVPixelBuffer, cache: CVMetalTextureCache) -> CVMetalTexture? {
var cvTexture: CVMetalTexture?
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
CVMetalTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
cache,
pixelBuffer,
nil,
.bgra8Unorm,
width,
height,
0,
&cvTexture
)
return cvTexture
}
}
class GradientCompositionInstructionTest: NSObject, AVVideoCompositionInstructionProtocol {
var timeRange: CMTimeRange
var enablePostProcessing: Bool = true
var containsTweening: Bool = true
var requiredSourceTrackIDs: [NSValue]? = nil
var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
init(timeRange: CMTimeRange) {
self.timeRange = timeRange
}
}
func createGradientVideoComposition(duration: CMTime, size: CGSize) -> AVMutableVideoComposition {
let composition = AVMutableComposition()
let instruction = GradientCompositionInstructionTest(timeRange: CMTimeRange(start: .zero, duration: duration))
let videoComposition = AVMutableVideoComposition()
videoComposition.customVideoCompositorClass = GradientVideoCompositorTest.self
videoComposition.renderSize = size
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // 30 FPS
videoComposition.instructions = [instruction]
return videoComposition
}
#include <metal_stdlib>
using namespace metal;
kernel void gradientShader(texture2d<float, access::write> output [[texture(0)]],
constant float &time [[buffer(0)]],
uint2 id [[thread_position_in_grid]]) {
float2 uv = float2(id) / float2(output.get_width(), output.get_height());
// Animated colors based on time
float3 color1 = float3(sin(time) * 0.8 + 0.1, 0.6, 1.0);
float3 color2 = float3(0.12, 0.99, cos(time) * 0.9 + 0.3);
// Linear interpolation for gradient
float3 gradientColor = mix(color1, color2, uv.y);
output.write(float4(gradientColor, 1.0), id);
}
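One hedged observation on the frozen player: a custom compositor is only asked for frames within the duration of the player item's asset, and the AVMutableComposition created in createGradientVideoComposition above is never populated or even used, so an item built from it would have zero duration. The "Visual isTranslatable" message, as far as I can tell, comes from the system's Live Text analysis of the player view and is probably unrelated. A sketch of the wiring, under the assumption that some 20-second asset has to back the item (placeholderAsset below is hypothetical, not something from the post):

import AVFoundation
import CoreGraphics

// Sketch only: back the player item with an asset whose duration covers the 20 seconds,
// then attach the custom video composition. `placeholderAsset` is a hypothetical
// stand-in for whatever 20-second asset or populated composition the app provides.
let duration = CMTime(seconds: 20, preferredTimescale: 600)
let videoComposition = createGradientVideoComposition(duration: duration,
                                                      size: CGSize(width: 1280, height: 720))
let placeholderAsset: AVAsset = AVURLAsset(url: URL(fileURLWithPath: "/path/to/20s-blank.mov"))
let item = AVPlayerItem(asset: placeholderAsset)
item.videoComposition = videoComposition
let player = AVPlayer(playerItem: item)
player.play()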
I am working on a project for macOS where I take an AVCaptureSession's CVPixelBuffer and need to convert it into an MTLTexture for rendering. On macOS the pixel format is 2vuy, and there does not seem to be an obvious matching format when creating a Metal texture. I have been able to convert it to a texture, but the colors come out distorted, with a double image.
I believe 2vuy is a single-plane format and I have tried to account for that, but I am unaware of what is off.
I have attached the CVPixelBuffer and the distorted MTLTexture, along with a laundry list of errors.
On iOS my conversions are fine, it is only the macOS 2vuy pixel format that seems to have issues.
My code for the conversion is also attached.
If there are any suggestions or guidance on how to properly convert a 2vuy CVPixelBuffer to a MTLTexture I would greatly appreciate it.
Many Thanks
Conversion_Logs.txt
ConversionCode.swift
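In case it points in a useful direction, a hedged sketch rather than a verified answer: '2vuy' is kCVPixelFormatType_422YpCbCr8, a packed single-plane 4:2:2 YCbCr layout, so wrapping it as .bgra8Unorm halves the effective width and scrambles the channels, which would explain a double image. If the Metal texture cache accepts it on macOS (worth verifying), the matching pixel format should be .bgrg422, and the shader then still has to do the YCbCr-to-RGB conversion itself.

import AVFoundation
import CoreVideo
import Metal

// Hedged sketch: wrap a '2vuy' buffer as a packed 4:2:2 Metal texture instead of BGRA.
// Whether the texture cache accepts .bgrg422 for this buffer should be verified.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault,
        cache,
        pixelBuffer,
        nil,
        .bgrg422,                                // packed 4:2:2, same byte order as '2vuy'
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0,                                       // single plane
        &cvTexture
    )
    guard status == kCVReturnSuccess, let cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}

Alternatively, setting the AVCaptureVideoDataOutput's videoSettings to request kCVPixelFormatType_32BGRA would let AVFoundation do the conversion before the buffer is delivered.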
When I play an m3u8 video using AVPlayer, it can play smoothly at 2x speed. However, when I set it to 3x speed, the playback is not smooth and there is no sound.
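A hedged guess rather than a confirmed fix: some audio time-pitch algorithms only support playback rates up to 2x, which would match the audio dropping out above that, so it may be worth setting the player item's audioTimePitchAlgorithm explicitly. The URL below is a placeholder.

import AVFoundation

// Sketch: pick a time-pitch algorithm that accepts rates above 2x.
// .varispeed shifts pitch with rate; .spectral keeps pitch but costs more CPU.
let item = AVPlayerItem(url: URL(string: "https://example.com/stream.m3u8")!) // placeholder URL
item.audioTimePitchAlgorithm = .varispeed
let player = AVPlayer(playerItem: item)
player.play()
player.rate = 3.0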
I was advised to post here by a Code-Level Support representative. Below is a copy of my initial issue report, and my minimal reproduction test project can be found at the following GitHub repository URL...
https://github.com/PierceLBrooks/vtUudSeiNalCmake
DESCRIPTION OF PROBLEM
When encoding H264 video codec data using the VTCompressionSession API available through the VideoToolbox framework on macOS, the resultant bitstream will invariably include Unregistered User Data SEI NAL units that carry the UUID "47564adc-5c4c-433f-94ef-c5113cd143a8".
The proprietary decoders we are working with currently struggle with filtering out these NAL units.
Can you explain what purpose this serves, what the meaning of the byte-wise unit payloads are, and which configuration settings the VideoToolbox encoder instance specifically depends upon for triggering the insertion of them?
STEPS TO REPRODUCE
1. Invoke the instantiation of a new VideoToolbox H264 encoder object by calling VTCompressionSessionCreate with appropriate configuration flags.
2. Push frames through the encoder, receiving their encoded byte buffer counterparts through an asynchronous callback.
3. Write that encoded data to some buffer which will contain the totality of the encoder's output.
4. Inspect the NAL units of the initial portion of this output bitstream buffer.
5. Observe the presence of at least one Unregistered User Data SEI NAL unit carrying the "47564adc-5c4c-433f-94ef-c5113cd143a8" UUID near the beginning of the output segment.
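For context, a minimal sketch of steps 1 and 2 (session creation plus per-frame encoding with an output handler); the dimensions and the single property set here are illustrative, not the reporter's actual configuration:

import VideoToolbox
import CoreMedia
import CoreVideo

// Sketch of steps 1-2: create an H.264 compression session and push frames through it,
// receiving encoded sample buffers in a per-frame output handler.
func makeEncoder() -> VTCompressionSession? {
    var session: VTCompressionSession?
    let status = VTCompressionSessionCreate(allocator: kCFAllocatorDefault,
                                            width: 1280,
                                            height: 720,
                                            codecType: kCMVideoCodecType_H264,
                                            encoderSpecification: nil,
                                            imageBufferAttributes: nil,
                                            compressedDataAllocator: nil,
                                            outputCallback: nil,  // per-frame handler used instead
                                            refcon: nil,
                                            compressionSessionOut: &session)
    guard status == noErr, let session else { return nil }
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
    return session
}

func encode(_ pixelBuffer: CVPixelBuffer, at pts: CMTime, with session: VTCompressionSession) {
    VTCompressionSessionEncodeFrame(session,
                                    imageBuffer: pixelBuffer,
                                    presentationTimeStamp: pts,
                                    duration: .invalid,
                                    frameProperties: nil,
                                    infoFlagsOut: nil) { _, _, sampleBuffer in
        // Each sample buffer carries AVCC (length-prefixed) NAL units; converting to
        // Annex B and scanning NAL types (steps 3-4) is where the unregistered
        // user-data SEI units described above show up.
        _ = sampleBuffer
    }
}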
Dear Developers and DTS team,
I am writing to seek your expert guidance on a persistent memory leak issue I've discovered while implementing video playback in a SwiftUI application.
Environment Details:
iOS 17+, Swift (SwiftUI, AVKit), Xcode 16.2
Target Devices:
iPhone 15 Pro (iOS 18.3.2)
iPhone 16 Plus (iOS 18.3.2)
Detailed Issue Description:
I am experiencing consistent memory leaks when using UIViewControllerRepresentable with AVPlayerViewController for FullscreenVideoPlayer and native VideoPlayer during video playback termination.
Code Context:
I have implemented the following approaches:
Added static func dismantleUIViewController(_:coordinator:)
Included deinit in Coordinator
Utilized both UIViewControllerRepresentable and native VideoPlayer
/// A custom AVPlayer integrated with AVPlayerViewController for fullscreen video playback.
///
/// - Parameters:
/// - videoURL: The URL of the video to be played.
struct FullscreenVideoPlayer: UIViewControllerRepresentable {
// @Binding something for controlling fullscreen
let videoURL: URL?
func makeUIViewController(context: Context) -> AVPlayerViewController {
let controller = AVPlayerViewController()
controller.delegate = context.coordinator
print("AVPlayerViewController created: \(String(describing: controller))")
return controller
}
/// Updates the `AVPlayerViewController` with the provided video URL and playback state.
///
/// - Parameters:
/// - uiViewController: The `AVPlayerViewController` instance to update.
/// - context: The SwiftUI context for updates.
func updateUIViewController(_ uiViewController: AVPlayerViewController, context: Context) {
guard let videoURL else {
print("Invalid videoURL")
return
}
// Initialize AVPlayer if it's not already set
if uiViewController.player == nil || uiViewController.player?.currentItem == nil {
uiViewController.player = AVPlayer(url: videoURL)
print("AVPlayer updated: \(String(describing: uiViewController.player))")
}
// Handle playback state
}
func makeCoordinator() -> Coordinator {
Coordinator(parent: self)
}
static func dismantleUIViewController(_ uiViewController: AVPlayerViewController, coordinator: Coordinator) {
uiViewController.player?.pause()
uiViewController.player?.replaceCurrentItem(with: nil)
uiViewController.player = nil
print("dismantleUIViewController called for \(String(describing: uiViewController))")
}
}
extension FullscreenVideoPlayer {
class Coordinator: NSObject, AVPlayerViewControllerDelegate {
var parent: FullscreenVideoPlayer
init(parent: FullscreenVideoPlayer) {
self.parent = parent
}
deinit {
print("Coordinator deinitialized")
}
}
}
struct ContentView: View {
private let videoURL: URL? = URL(string: "https://interactive-examples.mdn.mozilla.net/media/cc0-videos/flower.mp4")
var body: some View {
NavigationStack {
Text("My Userful View")
List {
Section("VideoPlayer") {
NavigationLink("FullscreenVideoPlayer") {
FullscreenVideoPlayer(videoURL: videoURL)
.frame(height: 500)
}
NavigationLink("Native VideoPlayer") {
VideoPlayer(player: .init(url: videoURL!))
.frame(height: 500)
}
}
}
}
}
}
Reproducibility Steps:
Run application on target devices
Scenario A - FullscreenVideoPlayer:
Tap FullscreenVideoPlayer
Play video to completion
Repeat process 5 times
Scenario B - VideoPlayer:
Navigate back to main screen
Tap Video Player
Play video to completion
Repeat process 5 times
Observed Memory Leak Characteristics:
Per Iteration (Debug Memory Graph):
4 instances of NSMutableDictionary (Storage) leaked
4 instances of __NSDictionaryM leaked
4 × 112-byte malloc blocks leaked
Cumulative Effects:
Debug console prints "dismantleUIViewController called for <AVPlayerViewController: 0x{String}>" and "Coordinator deinitialized" when navigating back to the main screen
After multiple iterations, leak instances double
Specific Questions:
What underlying mechanisms are causing these memory leaks in UIViewControllerRepresentable and VideoPlayer?
What are the recommended strategies to comprehensively prevent and resolve these memory management issues?
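Not an official answer, but one pattern worth trying while investigating: own the AVPlayer as SwiftUI state and detach its item when the view disappears, so nothing keeps observing the item after the navigation pop. A minimal sketch, not a confirmed fix:

import SwiftUI
import AVKit

// Sketch (assumption, not a confirmed fix): keep the player as view state and
// drop its item on disappear so no observers outlive the navigation pop.
struct ManagedVideoPlayer: View {
    @State private var player = AVPlayer()
    let url: URL

    var body: some View {
        VideoPlayer(player: player)
            .frame(height: 500)
            .onAppear {
                player.replaceCurrentItem(with: AVPlayerItem(url: url))
                player.play()
            }
            .onDisappear {
                player.pause()
                player.replaceCurrentItem(with: nil)
            }
    }
}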
I noticed that AVSampleBufferDisplayLayerContentLayer is not released when the AVSampleBufferDisplayLayer is removed and released.
It is possible to reproduce the issue with the simple code:
import AVFoundation
import UIKit
class ViewController: UIViewController {
var displayBufferLayer: AVSampleBufferDisplayLayer?
override func viewDidLoad() {
super.viewDidLoad()
let displayBufferLayer = AVSampleBufferDisplayLayer()
displayBufferLayer.videoGravity = .resizeAspectFill
displayBufferLayer.frame = view.bounds
view.layer.insertSublayer(displayBufferLayer, at: 0)
self.displayBufferLayer = displayBufferLayer
DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
self.displayBufferLayer?.flush()
self.displayBufferLayer?.removeFromSuperlayer()
self.displayBufferLayer = nil
}
}
}
In my real project I have multiple AVSampleBufferDisplayLayers created and removed in different view controllers, which is problematic because the number of leaked AVSampleBufferDisplayLayerContentLayer instances keeps increasing.
I wonder whether I should use a pool of AVSampleBufferDisplayLayers and reuse them; however, I'm slightly afraid that this could also lead to strange bugs.
Edit: It doesn't leak on an iOS 18 device, but it does leak on an iPad Pro running iOS 17.5.1.
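For what it's worth, the reuse idea could be as small as the sketch below, assuming flushAndRemoveImage() is enough to reset a layer between uses; whether it actually sidesteps the iOS 17.5 leak is untested:

import AVFoundation

// Sketch of the reuse idea mentioned above (untested against the iOS 17.5 leak):
// hand layers out from a small pool instead of creating and destroying them.
final class DisplayLayerPool {
    private var idle: [AVSampleBufferDisplayLayer] = []

    func acquire() -> AVSampleBufferDisplayLayer {
        return idle.popLast() ?? AVSampleBufferDisplayLayer()
    }

    func recycle(_ layer: AVSampleBufferDisplayLayer) {
        layer.flushAndRemoveImage()   // drop queued and displayed frames
        layer.removeFromSuperlayer()
        idle.append(layer)
    }
}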
I have an app that has a WKWebView for watching YouTube videos. When the videos are windowed the audio seems fine, positionally as well. All perfectly.
When I fullscreen the video and it goes into the native visionOS video player the audio messes up.
It will suddenly sound like it is right in your ears, or maybe just in one channel, or the position will be wrong. It might be fine for a moment, but the second I touch the controls or move the window, the sound jumps across the room, away from the window, or switches to stereo.
Sometimes, after closing the window entirely, you can still hear the video playing. Even if you open the window back up, go to another screen, and open another video, you then hear two videos playing at the same time with no way to stop the first one in the background, which requires force-restarting the app.
It is all sorts of glitchy. I haven't the slightest clue what is happening here. I am strongly feeling this is a visionOS bug.
I tried using AVAudioSession to change some of the sound settings, and that makes zero difference in behavior.
Multiple testers have also reported this behavior and it has been seen on both visionOS 2.3 and 2.4 betas.
Thanks for the help! This is driving me mad! It is extremely consistent behavior!
I'm working on an application that uses the iPhone camera for scientific purposes, and as a result I would like to receive video in as unprocessed a format as possible.
In particular, I'm interested in getting pixel buffers that contain pretty much the Bayer data as the sensor sees it, with the minimum color processing possible.
Currently we configure the AVCaptureDevice to fix the focus and exposure, use a low ISO with no gain and set the white balance gains to 1. AVCaptureVideoDataOutput is using 32BGRA.
What I'd like to do is remove any additional color and brightness processing such that the data is effectively processed with a linear transfer function (i.e. gamma function is 1).
I thought this might be down to the AVCaptureDevice activeColorSpace (we currently use P3_D65 for this), but there only seem to be a few choices (e.g. sRGB, HLG_BT2020), all of which I think affect the gamma.
So:
is it possible to control or specify the gamma / transfer function when using CaptureVideoDelegate?
if not, does one of the color space settings have a defined gamma function, so that I can effectively reverse it from the pixel data without losing too much information? (a sketch of this inversion follows below)
or is there a better way to capture video-ish speed images (15-30fps) from the camera sensor that skips processing like this?
Many thanks for any suggestions.
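On the second question: if the BGRA frames really are sRGB-encoded, which is itself an assumption, the nominal transfer function can be inverted per channel after normalizing to 0...1; any scene-dependent tone mapping applied by the ISP would not be undone by this.

import Foundation

// Inverse of the nominal sRGB transfer function, applied per channel to values
// normalized to 0...1. Assumes the output really is sRGB-encoded; the ISP's
// scene-dependent tone mapping is not recoverable this way.
func srgbToLinear(_ c: Double) -> Double {
    c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
}

Applied to each of B, G, and R after dividing by 255, this gives values that are at least proportional to scene luminance under those assumptions.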
I'm using an iPhone 15 Pro, which has switched from Lightning to USB Type-C. My iOS version is 18.3. According to Apple's documentation, AVCaptureDevice.DeviceType should support external device types.
🔗 Apple's Official Documentation:
https://developer.apple.com/documentation/avfoundation/avcapturedevice/devicetype-swift.struct/external
The documentation clearly states that iPadOS 17.0+ and iOS 17.0+ support external devices. However, in my actual tests:
On iPhone, discoverySession does not detect any external devices.
On iPad, discoverySession can detect external devices without any issues.
My Question:
Does iPhone USB-C actually support external devices (e.g., UVC cameras)?
If not, why does Apple's documentation claim that iOS 17 supports external devices instead of specifying iPadOS 17 only?
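For reference, a minimal version of the discovery call being described, assuming the .video media type; on iPad this reportedly returns the UVC camera, while on iPhone it comes back empty:

import AVFoundation

// Discovery of external (e.g. UVC) cameras, available from iOS 17 / iPadOS 17.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.external],
    mediaType: .video,
    position: .unspecified
)
print(discovery.devices)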
Are serialized parameters already available inside -pluginInstanceAddedToDocument via FxParameterRetrievalAPI or are they being read later?
Our streaming app uses FairPlay-protected video streams, which previously worked fine when using AVAssetResourceLoaderDelegate to provide CKCs.
Recently, we migrated to AVContentKeySession, and while everything works as expected during regular playback, we encountered an issue with AirPlay.
Our CKC has a 120-second expiry, so we renew it by calling renewExpiringResponseData.
This triggers the didProvideRenewingContentKeyRequest delegate callback, and we respond with the updated CKC.
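Roughly, the renewal path described above looks like the sketch below; fetchCKC(for:completion:) is a placeholder for the actual licensing-server call, not a real API.

import AVFoundation

// Sketch of the renewal path described above. `fetchCKC(for:completion:)` is a
// placeholder for the app's licensing-server round trip, not a real API.
final class KeyManager: NSObject, AVContentKeySessionDelegate {
    let keySession = AVContentKeySession(keySystem: .fairPlayStreaming)
    private var activeRequest: AVContentKeyRequest?

    func renewBeforeExpiry() {
        guard let request = activeRequest else { return }
        keySession.renewExpiringResponseData(for: request)
    }

    func contentKeySession(_ session: AVContentKeySession,
                           didProvide keyRequest: AVContentKeyRequest) {
        activeRequest = keyRequest
        // ... initial CKC delivery ...
    }

    func contentKeySession(_ session: AVContentKeySession,
                           didProvideRenewingContentKeyRequest keyRequest: AVContentKeyRequest) {
        activeRequest = keyRequest
        fetchCKC(for: keyRequest) { ckc in
            keyRequest.processContentKeyResponse(
                AVContentKeyResponse(fairPlayStreamingKeyResponseData: ckc))
        }
    }

    private func fetchCKC(for request: AVContentKeyRequest,
                          completion: @escaping (Data) -> Void) {
        // Placeholder: SPC generation and license-server round trip omitted.
    }
}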
However, when streaming via AirPlay, both video and audio freeze exactly after 120 seconds.
To validate the issue, I tested with AVAssetResourceLoaderDelegate and found that I can reproduce the same freeze if I do not renew the key. This suggests that AirPlay is not accepting the renewed CKC when using AVContentKeySession.
Additional Details:
This issue occurs across different iOS versions and various AirPlay devices.
The same content plays without issues when played directly on the device.
The renewal process is successful, and segments continue to load, but playback remains frozen.
Tried renewing the CKC a bit early (at 100 s).
I also tried setting player.usesExternalPlaybackWhileExternalScreenIsActive = true, but the issue persists.
We don't use persistentKey.
Is there anything else that needs to be considered for proper key renewal when AirPlaying?
Any help on how to fix this or confirmation if this is a known issue would be greatly appreciated.
Hello,
Is there a way to handle a 403 error returned by the server, e.g. an expired token?
I cannot find any information about this, and everything I tried didn't work (addObserver, NotificationCenter with .AVPlayerItemNewErrorLogEntry, .AVPlayerItemPlaybackStalled, ...).
Thank you very much.
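One avenue that sometimes surfaces the underlying HTTP status is the player item's error log, with no guarantee that every 403 from the server shows up there; a sketch:

import AVFoundation

// Sketch: watch the item's error log and read the HTTP status of the latest event.
// There is no guarantee that every server-side 403 surfaces here.
func observeErrors(for item: AVPlayerItem) -> NSObjectProtocol {
    return NotificationCenter.default.addObserver(
        forName: .AVPlayerItemNewErrorLogEntry,
        object: item,
        queue: .main
    ) { _ in
        if let event = item.errorLog()?.events.last {
            print("HLS error:", event.errorStatusCode, event.errorDomain, event.errorComment ?? "")
        }
    }
}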
No video plays and the same error keeps appearing. I have tried everything, including resetting. Please provide an update as soon as possible.
Error: The operation couldn't be completed. (CoreMediaErrorDomain error -42709.)
When we use AVPlayer to play DRM-encrypted streams under iOS 17.6.1, they will not play normally, and there is a high probability of a system restart. This is the relevant core error log:
error 14:47:53.323369+0800 audiomxd [AirPlayError] carManager_copyProperty_block_invoke:499: got error -12784/0xFFFFCE10 kCMBaseObjectError_PropertyNotFound
error 14:47:53.323414+0800 audiomxd [SPEndpointManagerFactory] SidePlay Endpoint Manager creation failed with -72390/0xFFFEE53A
error 14:47:53.364949+0800 audiomxd [APBrowserCarSessionHelper] [0xF6AA] [Bonjour/WiFi] Unrecognized ConnectivityHelper event 101
error 14:47:53.375313+0800 audiomxd AddInstanceForFactory: No factory registered for id <CFUUID 0xa5c5118c0> F8BB1C28-BAE8-11D6-9C31-00039315CD46
We're experiencing significant issues with AVPlayer when attempting to play partially downloaded HLS content in offline mode. Our app downloads HLS video content for offline viewing, but users encounter the following problems:
Excessive Loading Delay: When offline, AVPlayer attempts to load resources for up to 60 seconds before playing the locally available segments
Asset Loss: Sometimes AVPlayer completely loses the asset reference and fails to play the video on subsequent attempts
Inconsistent Behavior: The same partially downloaded asset might play immediately in one session but take 30+ seconds in another
Network Activity Despite Offline Settings: Despite configuring options to prevent network usage, AVPlayer still appears to be attempting network connections
These issues severely impact our offline user experience, especially for users with intermittent connectivity.
Technical Details
Implementation Context
Our app downloads HLS videos for offline viewing using AVAssetDownloadTask. We store the downloaded content locally and maintain a dictionary mapping of file identifiers to local paths. When attempting to play these videos offline, we experience the described issues.
Current Implementation
Here's our current implementation for playing the videos:
- (void)presentNativeAvplayerForVideo:(Video *)video navContext:(NavContext *)context {
NSString *localPath = video.localHlsPath;
if (localPath) {
NSURL *videoURL = [NSURL URLWithString:localPath];
NSDictionary *options = @{
AVURLAssetPreferPreciseDurationAndTimingKey: @YES,
AVURLAssetAllowsCellularAccessKey: @NO,
AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
AVURLAssetAllowsConstrainedNetworkAccessKey: @NO
};
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoURL options:options];
AVPlayerViewController *playerViewController = [[AVPlayerViewController alloc] init];
NSArray *keys = @[@"duration", @"tracks"];
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset];
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
playerViewController.player = player;
[player play];
});
}];
playerViewController.modalPresentationStyle = UIModalPresentationFullScreen;
[context presentViewController:playerViewController animated:YES completion:nil];
}
}
Attempted Solutions
We've tried several approaches to mitigate these issues:
Modified Asset Options:
NSDictionary *options = @{
AVURLAssetPreferPreciseDurationAndTimingKey: @NO, // Changed to NO
AVURLAssetAllowsCellularAccessKey: @NO,
AVURLAssetAllowsExpensiveNetworkAccessKey: @NO,
AVURLAssetAllowsConstrainedNetworkAccessKey: @NO,
AVAssetReferenceRestrictionsKey: @(AVAssetReferenceRestrictionForbidRemoteReferenceToLocal)
};
Skipped Asynchronous Key Loading:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:asset automaticallyLoadedAssetKeys:nil];
Modified Player Settings:
player.automaticallyWaitsToMinimizeStalling = NO;
[playerItem setPreferredForwardBufferDuration:2.0];
Added Network Resource Restrictions:
playerItem.canUseNetworkResourcesForLiveStreamingWhilePaused = NO;
Used File URLs Instead of HTTP URLs where possible
Despite these attempts, the issues persist.
Expected vs. Actual Behavior
Expected Behavior:
AVPlayer should immediately begin playback of locally available HLS segments
When offline, it should not attempt to load from network for more than a few seconds
Once an asset is successfully played, it should be reliably available for future playback
Actual Behavior:
AVPlayer waits 10-60 seconds before playing locally available segments
Network activity is observed despite all network-restricting options
Sometimes the player fails completely to play a previously available asset
Behavior is inconsistent between playback attempts with the same asset
Questions:
What is the recommended approach for playing partially downloaded HLS content offline with minimal delay?
Is there a way to force AVPlayer to immediately use available local segments without attempting to load from the network?
Are there any known issues with AVPlayer losing references to locally stored HLS assets?
What diagnostic steps would you recommend to track down the specific cause of these delays?
Does AVFoundation have specific timeouts for offline HLS playback that could be configured?
Any guidance would be greatly appreciated as this issue is significantly impacting our user experience.
Device Information
iOS Versions Tested: 14.5 - 18.1
Device Models: iPhone 12, iPhone 13, iPhone 14, iPhone 15
Xcode Version: 15.3-16.2.1
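One additional diagnostic worth adding, related to questions 2 and 3 above (shown in Swift for brevity, and assuming localHlsPath points at the .movpkg location saved when the AVAssetDownloadTask finished): check whether AVFoundation itself considers the download playable offline before building the player item.

import AVFoundation

// Diagnostic sketch: inspect the offline cache state of a downloaded HLS asset
// before handing it to AVPlayer. `location` is assumed to be the saved .movpkg URL.
func makeOfflineItem(from location: URL) -> AVPlayerItem {
    let asset = AVURLAsset(url: location)
    if let cache = asset.assetCache {
        print("playable offline:", cache.isPlayableOffline)
    } else {
        print("no asset cache - AVPlayer will treat this as a streaming asset")
    }
    return AVPlayerItem(asset: asset)
}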