Should an AVPlayer work in a Camera Extension?

My goal is to add a moving background to a virtual camera, implemented as a Camera Extension, on macOS 13 and later. The moving background is available to the extension as an H.264 file in its bundle.

I thought I could create an AVAsset from the movie's URL, make an AVPlayerItem from the asset, attach the item to an AVQueuePlayer, then attach an AVPlayerLooper to the queue player. I make an AVPlayerItemVideoOutput, add it to each of the looper's items, and set a delegate on the video output.
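Roughly, the setup looks something like this (a sketch only: the class name and "background.mp4" are placeholders, the delegate wiring is omitted, and one AVPlayerItemVideoOutput is created per looping item since an item output can only belong to a single item):

```swift
import AVFoundation

final class LoopingBackgroundSource {
    private let queuePlayer = AVQueuePlayer()
    private var looper: AVPlayerLooper?
    private var outputs: [AVPlayerItem: AVPlayerItemVideoOutput] = [:]

    // The output belonging to whichever looped item is currently playing.
    var currentOutput: AVPlayerItemVideoOutput? {
        queuePlayer.currentItem.flatMap { outputs[$0] }
    }

    func start() {
        guard let url = Bundle.main.url(forResource: "background", withExtension: "mp4") else { return }
        let templateItem = AVPlayerItem(asset: AVAsset(url: url))
        let looper = AVPlayerLooper(player: queuePlayer, templateItem: templateItem)
        self.looper = looper

        // Ask for BGRA pixel buffers so they are easy to composite later.
        let attributes: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        // loopingPlayerItems can be populated asynchronously; observing it via
        // KVO is more robust than reading it immediately like this.
        for item in looper.loopingPlayerItems {
            let output = AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)
            item.add(output)
            outputs[item] = output
        }
        queuePlayer.play()
    }
}
```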

This works in a normal app, which I use as a convenient environment to debug my extension code. In my camera video rendering loop, I check self.videoOutput.hasNewPixelBuffer; it returns true at regular intervals, so I can fetch video frames with the video output's copyPixelBuffer and composite them with the camera frames.
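The per-frame fetch is essentially this (the helper name is illustrative; the compositing itself happens elsewhere):

```swift
import AVFoundation
import QuartzCore

// Pull the latest frame from the video output, if one is ready.
func nextBackgroundFrame(from output: AVPlayerItemVideoOutput) -> CVPixelBuffer? {
    let itemTime = output.itemTime(forHostTime: CACurrentMediaTime())
    guard output.hasNewPixelBuffer(forItemTime: itemTime) else { return nil }
    return output.copyPixelBuffer(forItemTime: itemTime, itemTimeForDisplay: nil)
}
```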

However, it doesn't work in an extension: hasNewPixelBuffer is never true, and the looping player's status becomes 'failed', with an error that simply says "the operation could not be completed". I've tried simplifying things by removing the AVPlayerLooper and using an AVPlayer instead of an AVQueuePlayer, so the movie would only play once through, but I still never get any frames in the extension.

Could this be a sandbox thing, because an AVPlayer usually renders to a user interface, and camera extensions don't have UIs?

My fallback solution is to use an AVAssetImageGenerator, which I drive by firing off a Task for each frame; each time I render one, I ask for another frame to keep the pipeline full. Unfortunately the Tasks don't finish in the order they are started, so I have to build frame-reordering logic into the frame buffer (something a player would handle for me). I'm also not sure whether AVAssetImageGenerator takes advantage of any hardware acceleration, and it seems inefficient because each Task produces a single frame and cannot maintain any state from previous frames.
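The fallback looks roughly like this, using the async image(at:) API that AVAssetImageGenerator gained in macOS 13 (helper names are illustrative, and the frame-reordering buffer is omitted):

```swift
import AVFoundation

func makeBackgroundGenerator(for asset: AVAsset) -> AVAssetImageGenerator {
    let generator = AVAssetImageGenerator(asset: asset)
    // Ask for exact frame times rather than the nearest keyframe.
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    return generator
}

// One Task per frame. The requested time is returned alongside the image so
// the caller can put out-of-order completions back into presentation order.
func requestFrame(from generator: AVAssetImageGenerator,
                  at time: CMTime) -> Task<(CMTime, CGImage)?, Never> {
    Task {
        guard let result = try? await generator.image(at: time) else { return nil }
        return (time, result.image)
    }
}
```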

Perhaps there's a much simpler way to do this and I'm just missing it? Anyone?

I believe you would need to run the AVPlayer instance in the application that hosts the Camera Extension and then send the video frames to the sandboxed camera extension using a custom property. You can do the compositing inside the camera extension, but my recommendation would be to do all the video processing inside the application and use the extension only to vend CVPixelBuffers to the camera clients.
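The vending side of the extension stays thin; roughly something like this (a sketch only: the `vend` helper, the stream, and the timestamp source are all illustrative):

```swift
import CoreMediaIO
import CoreMedia

// Wrap a pixel buffer (received from the app, or composited locally) in a
// sample buffer and hand it to the extension's stream.
func vend(_ pixelBuffer: CVPixelBuffer, on stream: CMIOExtensionStream, at timestamp: CMTime) {
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescriptionOut: &formatDescription)
    guard let formatDescription else { return }

    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: timestamp,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                       imageBuffer: pixelBuffer,
                                       dataReady: true,
                                       makeDataReadyCallback: nil,
                                       refcon: nil,
                                       formatDescription: formatDescription,
                                       sampleTiming: &timing,
                                       sampleBufferOut: &sampleBuffer)
    guard let sampleBuffer else { return }

    stream.send(sampleBuffer,
                discontinuity: [],
                hostTimeInNanoseconds: UInt64(timestamp.seconds * 1_000_000_000))
}
```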

You're right (an AVPlayer would have to run in the application). Thank you for reminding me of this post.

After much experimentation, I ended up using two AVAssetReaders. Unlike the AVAssetImageGenerator, an AVAssetReader can maintain state. Unlike AVPlayer, an AVAssetReader doesn't need to run in an application context.

I use two so I can ping-pong between them to enable seamless transitions from the last frame of the movie back to the first.
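For anyone else who lands here, the shape of it is roughly this (names and structure are illustrative, not my exact code; tracks(withMediaType:) is deprecated in favor of the async loadTracks, but still works):

```swift
import AVFoundation
import CoreMedia

final class LoopingAssetReader {
    private let asset: AVAsset
    private var activeReader: AVAssetReader?
    private var activeOutput: AVAssetReaderTrackOutput?
    private var standbyReader: AVAssetReader?
    private var standbyOutput: AVAssetReaderTrackOutput?

    init(asset: AVAsset) {
        self.asset = asset
    }

    // Build a fresh reader positioned at the start of the movie.
    private func makeReader() -> (AVAssetReader, AVAssetReaderTrackOutput)? {
        guard let track = asset.tracks(withMediaType: .video).first,
              let reader = try? AVAssetReader(asset: asset) else { return nil }
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ])
        reader.add(output)
        guard reader.startReading() else { return nil }
        return (reader, output)
    }

    // Returns the next decoded frame, swapping to the standby reader when the
    // active one runs out, so the last frame is followed by the first again.
    func nextFrame() -> CVPixelBuffer? {
        if activeReader == nil, let made = makeReader() {
            activeReader = made.0
            activeOutput = made.1
        }
        if standbyReader == nil, let made = makeReader() {
            standbyReader = made.0
            standbyOutput = made.1
        }
        if let sample = activeOutput?.copyNextSampleBuffer(),
           let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            return pixelBuffer
        }
        // Active reader exhausted: promote the standby and try once more.
        activeReader?.cancelReading()
        activeReader = standbyReader
        activeOutput = standbyOutput
        standbyReader = nil
        standbyOutput = nil
        if let sample = activeOutput?.copyNextSampleBuffer(),
           let pixelBuffer = CMSampleBufferGetImageBuffer(sample) {
            return pixelBuffer
        }
        return nil
    }
}
```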

I want to do the compositing inside the extension because the composited video is the output of the virtual camera, which should work without running the hosting app.
