Hi everyone,
We're encountering an unexpected issue with our iPhone-only camera app:
👉 TimeMark - Photo Proof
https://apps.apple.com/us/app/timemark-photo-proof/id6446071834
Problem Description:
Our app uses a full-screen camera view via AVCaptureSession. In some cases reported by users, the camera fails immediately upon app launch, and we receive this interruption reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
According to the Apple documentation (https://developer.apple.com/documentation/avfoundation/avcapturesession/interruptionreason/videodevicenotavailablewithmultipleforegroundapps?language=objc), this interruption typically occurs when the app is running in a multi-app layout such as Slide Over, Split View, or Picture in Picture, all of which are iPad-only features.
However, this issue is being reported on iPhones, and our app does not support iPad at all.
Also noted in the documentation:
"Given your present AVCaptureSession configuration, the session may only be run if your app occupies the full screen."
Additional Context:
The issue occurs immediately on app launch, before the user can interact with the camera.
We don’t enable multitaskingCameraAccessEnabled.
We are 100% sure this is happening on iPhone, not iPad.
It’s hard to reproduce; users report it happening sporadically.
Locally, we tried playing Picture-in-Picture videos (e.g., Safari/YouTube) before launching our app, but we could not reproduce the issue.
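For reference, this is roughly how we observe the interruption (a minimal sketch; the notification handling in our app is more involved):
import AVFoundation

// Sketch: log the interruption reason when the capture session is interrupted.
NotificationCenter.default.addObserver(
    forName: .AVCaptureSessionWasInterrupted,
    object: session, // our AVCaptureSession
    queue: .main
) { note in
    if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
       let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
        // This is where we see .videoDeviceNotAvailableWithMultipleForegroundApps on iPhone.
        print("Capture session interrupted: \(reason)")
    }
}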
Questions:
Why is this interruption reason occurring on iPhone, which doesn’t officially support Slide Over or Split View?
Could this be caused by some system-level multitasking or resource contention (e.g., Picture in Picture from FaceTime or Safari)?
Would enabling multitaskingCameraAccessEnabled help prevent this issue on iPhone, even though it's designed for iPad?
Enabling multitaskingCameraAccessEnabled seems to require enabling UIBackgroundModes → voip.
Would adding this background mode cause any App Store review risk or rejection if our app doesn't actually use VoIP functionality?
Any help, insight, or suggestions would be greatly appreciated. Thanks in advance!
When I use AVPlayer to obtain the video frames (CVPixelBufferRef) of an HDR video and display them on screen with AVSampleBufferDisplayLayer, after a period of time the HDR video content and the screen gradually darken, losing the HDR effect.
Steps to reproduce:
Create an AVPlayer to loop an HDR video, specify the video frame format as kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
Create a timer to get the video frame CVPixelBufferRef at 30 frames per second
Use AVSampleBufferDisplayLayer to display CVPixelBufferRef on the screen
Leave the phone untouched and wait for a period of time (such as 40 minutes); the HDR effect disappears and the screen darkens
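In code, the pipeline from the steps above looks roughly like this (a sketch; hdrVideoURL is a placeholder and the CMSampleBuffer wrapping is omitted):
import AVFoundation

let item = AVPlayerItem(url: hdrVideoURL) // any HDR clip, played in a loop
let output = AVPlayerItemVideoOutput(pixelBufferAttributes: [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange
])
item.add(output)
let player = AVPlayer(playerItem: item)
let displayLayer = AVSampleBufferDisplayLayer()

// 30 fps timer: pull the current frame and hand it to the display layer.
Timer.scheduledTimer(withTimeInterval: 1.0 / 30.0, repeats: true) { _ in
    let time = item.currentTime()
    guard output.hasNewPixelBuffer(forItemTime: time),
          let pixelBuffer = output.copyPixelBuffer(forItemTime: time,
                                                   itemTimeForDisplay: nil) else { return }
    // Wrap pixelBuffer in a CMSampleBuffer with a video format description,
    // then displayLayer.enqueue(sampleBuffer); wrapping omitted for brevity.
    _ = pixelBuffer
}
player.play()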
Note:
You need an iPhone running iOS 18.5 or earlier.
The HDR video must play in a loop so the screen continuously displays HDR content; depending on the device, you may need to wait 20-40 minutes.
The same problem occurs in the iPhone Photos app after playing an HDR video in a loop for a long time.
Expected Results:
HDR content rendered for a long time should always keep its HDR effect; neither the content nor the screen should darken.
Current Results:
After about 20-40 minutes, the HDR effect disappears and the screen darkens.
Hello,
I'm Soonwon.
We’re currently developing a UVC camera device and trying to stream MJPEG video via AVFoundation on macOS. However, we’re running into a problem with custom resolutions.
When we try to use AVFoundation on macOS to capture MJPEG video at 1000x6000, the stream is not accepted or simply doesn’t work. Lower resolutions work fine.
(Interestingly, using the same device on iPadOS, we can capture the 1000x6000 MJPEG stream successfully by using AVCaptureSessionPresetInputPriority.)
Is there any way to receive custom-resolution MJPEG streams (like 1000x6000) from a UVC device using AVFoundation on macOS?
Are there specific session presets, entitlements, or known limitations that affect MJPEG handling at custom resolutions on macOS?
Does macOS handle MJPEG differently from iPadOS in AVFoundation?
Any insight or guidance would be greatly appreciated. Thank you!
NSError *error = nil;
if ([selectedDevice lockForConfiguration:&error]) {
    [session beginConfiguration];
    // A fixed preset like AVCaptureSessionPresetHigh can override a manually
    // chosen activeFormat once the input is added; AVCaptureSessionPresetInputPriority
    // tells the session to honor the device's activeFormat instead (this is
    // what worked for us on iPadOS).
    session.sessionPreset = AVCaptureSessionPresetInputPriority;

    BOOL foundFormat = NO;
    for (AVCaptureDeviceFormat *format in selectedDevice.formats) {
        CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription);
        if (dims.width == 1000 && dims.height == 6000) {
            selectedDevice.activeFormat = format;
            foundFormat = YES;
            break;
        }
    }
    if (!foundFormat) {
        NSLog(@"No 1000x6000 format found on device");
        [session commitConfiguration];
        [selectedDevice unlockForConfiguration];
        return false;
    }

    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:selectedDevice error:&error];
    if (error || ![session canAddInput:input]) {
        NSLog(@"Failed to add video input: %@", error.localizedDescription);
        [session commitConfiguration];
        [selectedDevice unlockForConfiguration];
        return false;
    }
    [session addInput:input];

    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.alwaysDiscardsLateVideoFrames = YES;
    output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) };
    [output setSampleBufferDelegate:delegate queue:queue];
    if ([session canAddOutput:output]) {
        [session addOutput:output];
    }

    [session commitConfiguration];
    [selectedDevice unlockForConfiguration];
} else {
    NSLog(@"Failed to lock device for configuration: %@", error.localizedDescription);
}
// [session startRunning];
PLATFORM AND VERSION: iOS 18.5
I wanted to bring to your attention a critical issue some of our production users are experiencing with the CoinOut app. Specifically, users are encountering a problem when attempting to capture photos of receipts using the app's customized camera feature. The camera, which utilizes AVCaptureVideoPreviewLayer and AVCaptureDevice, occasionally fails to load the preview, resulting in a black screen instead of the expected camera view.
This camera blackout issue is significantly impacting the user experience as it prevents them from snapping photos of their receipts, which is a core functionality of the CoinOut app.
Any help/suggestion to this issue would be greatly appreciated.
STEPS TO REPRODUCE
Open the app and tap the camera icon.
The camera view is displayed to capture a photo.
For a few production users, the camera shows a black screen instead.
import UIKit
import AVFoundation
import AudioToolbox // for AudioServicesDisposeSystemSoundID

class ViewController: UIViewController {
@IBOutlet private weak var captureButton: UIButton!
private var fillLayer: CAShapeLayer!
private var previewLayer : AVCaptureVideoPreviewLayer!
private var output: AVCapturePhotoOutput!
private var device: AVCaptureDevice!
private var session : AVCaptureSession!
private var highResolutionEnabled: Bool = false
private let sessionQueue = DispatchQueue(label: "session queue")
override func viewDidLoad() {
super.viewDidLoad()
setupCamera()
customiseUI()
}
@IBAction func startCamera(sender: UIButton) {
didTapTakePhoto()
}
private func setupCamera() {
let session = AVCaptureSession()
session.sessionPreset = AVCaptureSession.Preset.high
previewLayer = AVCaptureVideoPreviewLayer(session: session)
output = AVCapturePhotoOutput()
device = AVCaptureDevice.default(.builtInWideAngleCamera, for: AVMediaType.video, position: .back)
if let device = self.device{
do{
let input = try AVCaptureDeviceInput(device: device)
if session.canAddInput(input){ session.addInput(input)}
else { print("\(#fileID):\(#function):\(#line) : Session Input addition failed") }
if session.canAddOutput(output){
output.isHighResolutionCaptureEnabled = self.highResolutionEnabled
session.addOutput(output)
} else { print("\(#fileID):\(#function):\(#line) : Session output addition failed") }
previewLayer.videoGravity = .resizeAspectFill
previewLayer.session = session
sessionQueue.async { session.startRunning() }
self.session = session
try device.lockForConfiguration()
if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
device.whiteBalanceMode = .continuousAutoWhiteBalance
} else if device.isWhiteBalanceModeSupported(.autoWhiteBalance) {
device.whiteBalanceMode = .autoWhiteBalance
} else { print("\(#fileID):\(#function):\(#line) : white balance mode not supported") }
if device.isFocusModeSupported(.continuousAutoFocus) { device.focusMode = .continuousAutoFocus}
else if device.isFocusModeSupported(.autoFocus) { device.focusMode = .autoFocus }
device.unlockForConfiguration()
} catch { print("\(#fileID):\(#function):\(#line) : \(error.localizedDescription)") }
} else { print("\(#fileID):\(#function):\(#line) : Device found as nil") }
}
private func customiseUI() {
let path = UIBezierPath(roundedRect: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height), cornerRadius: 0)
let rectangleWidth = view.frame.width - (view.frame.width * 0.16)
let x = (view.frame.width - rectangleWidth) / 2
let rectangleHeight = view.frame.height - (view.frame.height * 0.16)
let y = (view.frame.height - rectangleHeight) / 2
let roundRect = UIBezierPath(roundedRect: CGRect(x: x, y: y, width: rectangleWidth, height: rectangleHeight), byRoundingCorners:.allCorners, cornerRadii: CGSize(width: 0, height: 0))
roundRect.move(to: CGPoint(x: self.view.center.x , y: self.view.center.y))
path.append(roundRect)
path.usesEvenOddFillRule = true
fillLayer = CAShapeLayer()
fillLayer.path = path.cgPath
fillLayer.fillRule = .evenOdd
fillLayer.opacity = 0.4
previewLayer.addSublayer(fillLayer)
previewLayer.frame = view.bounds
view.layer.addSublayer(previewLayer)
view.bringSubviewToFront(captureButton)
}
private func didTapTakePhoto() {
let settings = self.getSettings(camera: self.device)
if device.isAdjustingFocus {
do {
try device.lockForConfiguration()
device.focusMode = .continuousAutoFocus
device.unlockForConfiguration()
device.addObserver(self, forKeyPath: "adjustingFocus", options: [.new], context: nil)
} catch { print(error) }
} else { output.capturePhoto(with: settings, delegate: self) }
}
func getSettings(camera: AVCaptureDevice) -> AVCapturePhotoSettings {
var settings = AVCapturePhotoSettings()
if let rawFormat = output.availableRawPhotoPixelFormatTypes.first {
settings = AVCapturePhotoSettings(rawPixelFormatType: OSType(rawFormat))
}
settings.isHighResolutionPhotoEnabled = self.highResolutionEnabled
let previewPixelType = settings.availablePreviewPhotoPixelFormatTypes.first!
let previewFormat = [kCVPixelBufferPixelFormatTypeKey as String: previewPixelType] as [String : Any]
settings.previewPhotoFormat = previewFormat
return settings
}
}
extension ViewController: AVCapturePhotoCaptureDelegate {
func photoOutput(_ output: AVCapturePhotoOutput, willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) {
AudioServicesDisposeSystemSoundID(1108) // 1108 is the shutter sound ID; this attempts to mute it
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
guard let data = photo.fileDataRepresentation() else { return }
let image = UIImage(data: data)!
showImage(cropped: image)
}
func showImage(cropped: UIImage) {
let vc = self.storyboard?.instantiateViewController(withIdentifier: "ImagePreviewViewController") as? ImagePreviewViewController
vc?.captured = cropped
self.present(vc!, animated: true)
}
}
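One check we plan to add before setupCamera() (a sketch; we have not confirmed that permissions are the cause of the black screen):
// Rule out a denied/undetermined camera permission before configuring the session.
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    setupCamera()
case .notDetermined:
    AVCaptureDevice.requestAccess(for: .video) { granted in
        DispatchQueue.main.async { if granted { self.setupCamera() } }
    }
default:
    print("Camera access denied or restricted; the preview layer will stay black")
}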
I'm creating an app that uses AVCaptureSession to pass camera input to an AVCaptureMetadataOutput configured with [metaout setMetadataObjectTypes:@[AVMetadataObjectTypeFace]] to scan faces.
After updating to iPadOS 26 beta 2 and iOS 26 beta 2, an issue has occurred where the delegate method of AVCaptureMetadataOutputObjectsDelegate is not called on some devices. The following devices are experiencing this issue:
iPad (9th Gen)
iPad Air (4th Gen)
iPhone 15
This issue does not occur on any other devices I have.
I tried running the AVFoundation sample code from the Apple Developer site on the devices above; the same problem still occurs. https://developer.apple.com/documentation/avfoundation/capture_setup/avcambarcode_detecting_barcodes_and_faces
Are any additional settings required after OS 26 beta and iOS 26 beta? Or is there some problem on the OS side?
Issue:
In iOS 26 (tested on Developer Beta), AVCaptureMetadataOutputObjectsDelegate no longer receives callbacks when using .face detection.
metadataOutput.metadataObjectTypes = [.face]
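For reference, the relevant setup reduces to this (a sketch; session is the running AVCaptureSession and self conforms to AVCaptureMetadataOutputObjectsDelegate):
let metadataOutput = AVCaptureMetadataOutput()
if session.canAddOutput(metadataOutput) {
    session.addOutput(metadataOutput)
    metadataOutput.setMetadataObjectsDelegate(self, queue: .main)
    // .face only appears in availableMetadataObjectTypes after the output
    // has been added to the session.
    if metadataOutput.availableMetadataObjectTypes.contains(.face) {
        metadataOutput.metadataObjectTypes = [.face]
    }
}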
Hi, I'm developing an application for macOS and iOS that has to run the DetectHumanBodyPose3DRequest model in real time to retrieve the 3D skeleton from the camera.
I'm experiencing a memory leak every time the model is used (when I comment out that line, the memory stays constant). After a minute it uses about 1 GB of RAM running with Mac Catalyst.
I attached a minimal project that demonstrates this problem.
Code
Camera View
import SwiftUI
import Combine
import Vision
struct CameraView: View {
@StateObject private var viewModel = CameraViewModel()
var body: some View {
HStack {
ZStack {
GeometryReader { geometry in
if let image = viewModel.currentFrame {
Image(decorative: image, scale: 1)
.resizable()
.scaledToFill()
.frame(width: geometry.size.width,
height: geometry.size.height)
.clipped()
} else {
ProgressView()
}
}
}
}
}
}
class CameraViewModel: ObservableObject {
@Published var currentFrame: CGImage?
@Published var frameRate: Double = 0
@Published var currentVisionBodyPose: HumanBodyPose3DObservation? // Store current body pose
@Published var currentImageSize: CGSize? // Store current image size
private var cameraManager: CameraManager?
private var humanBodyPose = HumanBodyPose3DDetector()
private var lastClassificationTime = Date()
private var frameCount = 0
private var lastFrameTime = Date()
private let classificationThrottleInterval: TimeInterval = 1.0
private var lastPoseSendTime: Date = .distantPast
init() {
cameraManager = CameraManager()
startPreview()
startClassification()
}
private func startPreview() {
Task {
guard let previewStream = cameraManager?.previewStream else { return }
for await frame in previewStream {
let size = CGSize(width: frame.width, height: frame.height)
Task { @MainActor in
self.currentFrame = frame
self.currentImageSize = size
self.updateFrameRate()
}
}
}
}
private func startClassification() {
Task {
guard let classificationStream = cameraManager?.classificationStream else { return }
for await pixelBuffer in classificationStream {
self.classifyFrame(pixelBuffer: pixelBuffer)
}
}
}
private func classifyFrame(pixelBuffer: CVPixelBuffer) {
humanBodyPose.runHumanBodyPose3DRequestOnImage(pixelBuffer: pixelBuffer) { [weak self] observation in
guard let self = self else { return }
DispatchQueue.main.async {
if let observation = observation {
self.currentVisionBodyPose = observation
print(observation)
} else {
self.currentVisionBodyPose = nil
}
}
}
}
private func updateFrameRate() {
frameCount += 1
let now = Date()
let elapsed = now.timeIntervalSince(lastFrameTime)
if elapsed >= 1.0 {
frameRate = Double(frameCount) / elapsed
frameCount = 0
lastFrameTime = now
}
}
}
HumanBodyPose3DDetector
import Foundation
import Vision
class HumanBodyPose3DDetector: NSObject, ObservableObject {
@Published var humanObservation: HumanBodyPose3DObservation? = nil
private let queue = DispatchQueue(label: "humanbodypose.queue")
private let request = DetectHumanBodyPose3DRequest()
private struct SendablePixelBuffer: @unchecked Sendable {
let buffer: CVPixelBuffer
}
public func runHumanBodyPose3DRequestOnImage(pixelBuffer: CVPixelBuffer, completion: @escaping (HumanBodyPose3DObservation?) -> Void) {
let sendableBuffer = SendablePixelBuffer(buffer: pixelBuffer)
queue.async { [weak self] in
// Note: this Task escapes the serial queue, so requests are not actually serialized here.
Task { [weak self, sendableBuffer] in
do {
guard let self = self else { return }
let result = try await self.request.perform(on: sendableBuffer.buffer)
//process result
DispatchQueue.main.async {
if result.isEmpty {
completion(nil)
} else {
completion(result[0])
}
}
} catch {
DispatchQueue.main.async {
completion(nil)
}
}
}
}
}
}
Hi everyone,
I’m trying to use AVAssetResourceLoaderDelegate to handle a live radio stream (e.g. Icecast/HTTP stream). My goal is to have access to the last 30 seconds of audio data during playback, so I can analyze it for specific audio patterns in near-real-time.
I’ve implemented a custom resource loader that works fine for podcasts and static files, where the file size and content length are known. However, for infinite live streams, my current implementation stops receiving new loading requests after the first one is served. As a result, the playback either stalls or fails to continue.
Has anyone successfully used AVAssetResourceLoaderDelegate with a continuous radio stream? Or can you suggest a better approach for buffering and analyzing live audio?
Any tips, examples, or advice would be appreciated. Thanks!
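For context, my delegate currently reports content information like this (a sketch with assumed values; I suspect this is the part that breaks for infinite streams):
import AVFoundation

final class LiveStreamLoader: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource request: AVAssetResourceLoadingRequest) -> Bool {
        if let info = request.contentInformationRequest {
            info.contentType = "public.mp3"          // assumed stream type
            info.isByteRangeAccessSupported = false  // a live stream cannot be range-requested
            // There is no finite contentLength for an infinite stream; how to
            // report this correctly is exactly what I'm unsure about.
        }
        // Then feed network data incrementally:
        // request.dataRequest?.respond(with: chunk); request.finishLoading()
        return true
    }
}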
Hi everyone,
I've encountered a rare and strange crash in my app that I can't consistently reproduce. The crash seems to occur deep within Apple's internal frameworks, and I can't pinpoint which line of my own code is causing it. Here's the crash stack trace:
#44 AXSpeech
SIGSEGV
SEGV_ACCERR
0 CoreFoundation ___CFCheckCFInfoPACSignature + 4
1 CoreFoundation _CFRunLoopSourceSignal + 28
2 Foundation _performQueueDequeue + 492
3 Foundation ___NSThreadPerformPerform + 88
4 CoreFoundation ___CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 28
5 CoreFoundation ___CFRunLoopDoSource0 + 176
6 CoreFoundation ___CFRunLoopDoSources0 + 340
7 CoreFoundation ___CFRunLoopRun + 828
8 CoreFoundation _CFRunLoopRunSpecific + 608
9 Foundation -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
10 TextToSpeech _TTSCFAttributedStringCreateStringByBracketingAttributeWithString + 776
11 Foundation ___NSThread__start__ + 732
12 libsystem_pthread.dylib __pthread_start + 136
Sometimes, instead of line 10 referencing _TTSCFAttributedStringCreateStringByBracketingAttributeWithString, it shows:
10 TextToSpeech LogWarning(char const*, ...) + 7288
Has anyone experienced a similar issue or know what might be triggering this crash? Any guidance on how to investigate or resolve this would be greatly appreciated. Thank you!
We use several UIKit and AVFoundation APIs in our project, including:
setAlternateIconName(_:completionHandler:)
getAllTasks(completionHandler:)
loadMediaSelectionGroup(for:completionHandler:)
Moreover, we use the Swift Concurrency versions for these APIs:
@MainActor
func setAlternateIconName(_ alternateIconName: String?) async throws
var allTasks: [URLSessionTask] { get async }
func loadMediaSelectionGroup(for mediaCharacteristic: AVMediaCharacteristic) async throws -> AVMediaSelectionGroup?
Everything worked well with these APIs in Xcode 16.2 and earlier, but starting from Xcode 16.3 (and in 16.4), they cause crashes. We've rewritten the APIs to use completion blocks instead of async/await, and this approach works.
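For example, our completion-block bridge for the icon API looks roughly like this (a sketch of the workaround, not the original implementation):
import UIKit

@MainActor
func setAlternateIcon(_ name: String?) async throws {
    try await withCheckedThrowingContinuation { (continuation: CheckedContinuation<Void, Error>) in
        // Call the completion-handler API and bridge it ourselves instead of
        // using the built-in async overload that crashes for us.
        UIApplication.shared.setAlternateIconName(name) { error in
            if let error {
                continuation.resume(throwing: error)
            } else {
                continuation.resume()
            }
        }
    }
}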
Stack traces:
setAlternateIconName(_:completionHandler:)
var allTasks: [URLSessionTask] { get async }
loadMediaSelectionGroup(for:completionHandler:)
Also, I attached some screenshots from Xcode 16.4.
Xcode 26 beta 2 takes more than 20 seconds to start an AVCaptureSession when AVCaptureVideoDataOutput and AVCaptureAudioDataOutput are added. The problem occurs only during debugging and is clearly visible with Apple's Cinematic Capture sample code. I am using an iPhone 14 Pro running iOS 26 beta 2 for reference.
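A minimal configuration of the kind that triggers it (a sketch, not the exact sample code):
import AVFoundation

let session = AVCaptureSession()
session.beginConfiguration()
if let camera = AVCaptureDevice.default(for: .video),
   let input = try? AVCaptureDeviceInput(device: camera),
   session.canAddInput(input) {
    session.addInput(input)
}
let videoOutput = AVCaptureVideoDataOutput()
let audioOutput = AVCaptureAudioDataOutput()
if session.canAddOutput(videoOutput) { session.addOutput(videoOutput) }
if session.canAddOutput(audioOutput) { session.addOutput(audioOutput) }
session.commitConfiguration()
session.startRunning() // takes 20+ seconds under the Xcode 26 beta 2 debugger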
I have this build error with Xcode 26 beta 2:
var asset:AVURLAsset?
func loadAsset() async {
let assetURL = URL.documentsDirectory
.appendingPathComponent("sample.mov")
asset = AVURLAsset(url: assetURL, options: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
/*Error: Type of expression is ambiguous without a type annotation */
if let result = try? await asset?.load(.tracks, .isPlayable, .isComposable) {
}
}
Is there an issue with try? in the new Swift compiler?
Error: Type of expression is ambiguous without a type annotation
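A possible workaround (a sketch, untested against beta 2): unwrapping the optional asset before the call gives the compiler a concrete type to infer from.
func loadAsset() async {
    let assetURL = URL.documentsDirectory.appendingPathComponent("sample.mov")
    let asset = AVURLAsset(url: assetURL,
                           options: [AVURLAssetPreferPreciseDurationAndTimingKey: true])
    self.asset = asset
    // Calling load on the non-optional local avoids the optional-chained call.
    if let result = try? await asset.load(.tracks, .isPlayable, .isComposable) {
        _ = result // (tracks, isPlayable, isComposable) tuple
    }
}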
We are facing a strange issue where a small portion of our large userbase cannot start the capture session in our app, as it gets interrupted with the following reason:
AVCaptureSessionInterruptionReasonVideoDeviceNotAvailableWithMultipleForegroundApps
Our users are all on iPhones; no one is using an iPad. Just to be sure, we have set
session.isMultitaskingCameraAccessEnabled = true
but it does not seem to make any difference.
Another weird scenario we are seeing with an even smaller number of users is that the following call:
AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)
returns nil. A quick look at our error reports shows this happening on iPhone XR, 13, and 14 models. They should all support this device type.
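As a diagnostic, we are considering dumping what the system actually exposes when the default call returns nil (a sketch):
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInDualCamera, .builtInTripleCamera],
    mediaType: .video,
    position: .back
)
// Log which back cameras are visible to the app at that moment.
print("Back cameras visible to the app: \(discovery.devices.map(\.localizedName))")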
Any help investigating these issues would be greatly appreciated!
Hello, I'm trying to subscribe to AVPlayerItem status updates using Combine and its bridge to Swift Concurrency, .values.
This is my sample code.
struct ContentView: View {
@State var player: AVPlayer?
@State var loaded = false
var body: some View {
VStack {
if let player {
Text("loading status: \(loaded)")
Spacer()
VideoPlayer(player: player)
Button("Load") {
Task {
let item = AVPlayerItem(
url: URL(string: "https://sample-videos.com/video321/mp4/360/big_buck_bunny_360p_5mb.mp4")!
)
player.replaceCurrentItem(with: item)
let publisher = player.publisher(for: \.status)
for await status in publisher.values {
print(status.rawValue)
if status == .readyToPlay {
loaded = true
break
}
}
print("we are out")
}
}
}
else {
Text("No video selected")
}
}
.task {
player = AVPlayer()
}
}
}
After I click on the "load" button it prints out 0 (as the initial status of .unknown) and nothing after – even when the video is fully loaded.
At the same time this works as expected (loading status is set to true):
struct ContentView: View {
@State var player: AVPlayer?
@State var loaded = false
@State var cancellable: AnyCancellable?
var body: some View {
VStack {
if let player {
Text("loading status: \(loaded)")
Spacer()
VideoPlayer(player: player)
Button("Load") {
Task {
let item = AVPlayerItem(
url: URL(string: "https://sample-videos.com/video321/mp4/360/big_buck_bunny_360p_5mb.mp4")!
)
player.replaceCurrentItem(with: item)
let stream = AsyncStream { continuation in
cancellable = item.publisher(for: \.status)
.sink {
if $0 == .readyToPlay {
continuation.yield($0)
continuation.finish()
}
}
}
for await _ in stream {
loaded = true
cancellable?.cancel()
cancellable = nil
break
}
}
}
}
else {
Text("No video selected")
}
}
.task {
player = AVPlayer()
}
}
}
Is this a bug or something?
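One difference I notice between my two samples, in case it matters: the first observes the player's status while the second observes the item's status, and those are distinct properties. Observing the item in the first approach would look like this (a sketch):
// AVPlayerItem.status, not AVPlayer.status, reports readiness of the loaded media.
let publisher = item.publisher(for: \.status)
for await status in publisher.values where status == .readyToPlay {
    loaded = true
    break
}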
I'm seeking to a specific sync frame in a video file (HEVC, recorded on iPad). When I feed the buffers from that sync frame onward to VTDecompressionSession, it consistently drops the 2nd, 3rd, and 4th buffers with a kVTVideoDecoderReferenceMissingErr (or no error but no output buffer on the simulator). If I feed all the buffers from the penultimate sync frame prior to the desired frame, the buffers come out fine, but always doing that would create massive overhead. I've tried multiple OS versions, devices, etc.; it seems to be a consistent problem.
Here's a sample project with the offending video (disregard memory handling etc):
https://github.com/marcuseckert/vtSample
I've filed a radar FB18228296 but would appreciate any feedback on circumventing or at least detecting this behavior prior to decoding.
We are seeing logs where, on iOS devices, some keyframe requests are made,
but in the Safari browser we don't see any requests like this. Could you please explain what this is?
/d8ceb9244ff889b42b82eb807327531-c27dbcb10e0bbf3cde6c-1/d8ceb9244ff88e9b42b82eb807327531-c27dbcb10e0bbf3cde6c-1/keyframes/hls/.
TL;DR: How do I solve a possible race between the EXT-X-SESSION-KEY request and the encrypted media segment requests?
I'm having trouble using a custom AVAssetResourceLoaderDelegate with a video manifest containing a VideoProtectionKey (VPK). My master manifest contains the rendition manifest URL and the VPK URL. When not using the custom resource loader delegate, everything works fine.
My custom resource loader delegate first appends a prefix to the scheme of the master manifest URL before creating the asset. While handling the master manifest, it puts back the original scheme, makes the request, and then appends the same prefix to the scheme of the rendition manifest URL in the response content, so that the rendition manifest request also goes through the custom resource loader delegate. The same goes for the VPK request. The AES-128 key is stored in memory within the custom resource loader delegate object. So far so good.
The VPK is requested before the segment requests. But the problem comes when the media segment requests happen. The media segment request URLs from the rendition manifest go through the custom resource loader as well, and those segments are encrypted. I can see a segment request finish first, and then the related VPK request kicks in a few seconds later. The previous VPK value is cached in memory, so it is not the network causing the delay but some mechanism I'm not aware of.
So could anyone tell me the proper way to handle this situation? The native loader handles it well, so I just want to know how. Thanks in advance!
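For reference, the scheme-prefix trick from my description reduces to something like this (a sketch; masterManifestURL and the prefix are illustrative):
let prefix = "custom-"

// Route a URL into the custom resource loader by prefixing its scheme.
func interceptedURL(_ url: URL) -> URL {
    var comps = URLComponents(url: url, resolvingAgainstBaseURL: false)!
    comps.scheme = prefix + (comps.scheme ?? "https")
    return comps.url!
}

// Restore the original scheme before making the real network request.
func originalURL(_ url: URL) -> URL {
    var comps = URLComponents(url: url, resolvingAgainstBaseURL: false)!
    if let scheme = comps.scheme, scheme.hasPrefix(prefix) {
        comps.scheme = String(scheme.dropFirst(prefix.count))
    }
    return comps.url!
}

let asset = AVURLAsset(url: interceptedURL(masterManifestURL))
// asset.resourceLoader.setDelegate(loader, queue: loaderQueue)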
I developed a DriverKit extension based on overriding-the-default-usb-video-class-extension, but that link didn't give any implementation details. I asked DTS, who gave two tips:
1. Do you also have a CMIO extension to load in place of the default one from overriding-the-default-usb-video-class-extension?
2. Your DriverKit extension's Info.plist is also missing the CameraAssistantBundleID.
I want to know why a DriverKit extension needs a CMIO extension, and what the data and control flow is.