I'm writing some camera functionality that uses AVCaptureVideoDataOutput.
I've set it up so that it calls my AVCaptureVideoDataOutputSampleBufferDelegate on a background thread, by making my own dispatch_queue and configuring the AVCaptureVideoDataOutput.
My question, then, is: if I configure my AVCaptureSession differently, or even stop it altogether, is that guaranteed to flush all pending jobs on my background thread?
Below is a more concrete example, showing how the background thread accesses something owned by the foreground thread; I'm wondering when and how it's safe to clean up that resource.
I have a setup similar to the following:
// Foreground thread logic
dispatch_queue_t queue = dispatch_queue_create("avf_camera_queue", nullptr);
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
setupInputDevice(captureSession); // Connects the AVCaptureDevice...
// Store some arbitrary data to be attached to the frame, stored on the foreground thread
FrameMetaData frameMetaData = ...;
MySampleBufferDelegate *sampleBufferDelegate = [[MySampleBufferDelegate alloc] init];
// Capture frameMetaData by reference in lambda
[sampleBufferDelegate setFrameMetaDataGetter: [&frameMetaData]() { return &frameMetaData; }];
AVCaptureVideoDataOutput *captureVideoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
[captureVideoDataOutput setSampleBufferDelegate:sampleBufferDelegate
queue:queue];
[captureSession addOutput:captureVideoDataOutput];
[captureSession startRunning];
[captureSession stopRunning];
// Is it now safe to destroy frameMetaData, or do we need manual barrier?
And then in MySampleBufferDelegate:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
    // Invokes the callback set above
    FrameMetaData *frameMetaData = frameMetaDataGetter();
    emitSampleBuffer(sampleBuffer, frameMetaData);
}
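To make the "manual barrier" idea concrete, this is the kind of thing I mean (sketched in Swift for brevity; in the Objective-C++ code above it would be a dispatch_sync on the same queue):

import AVFoundation

// Sketch: after stopping the session, synchronously drain the delegate queue so any
// in-flight captureOutput:didOutputSampleBuffer:fromConnection: calls finish before
// the shared resource is freed.
func stopAndDrain(_ session: AVCaptureSession, delegateQueue: DispatchQueue) {
    session.stopRunning()
    delegateQueue.sync {
        // Once this empty block runs, every callback enqueued before it has completed.
    }
    // Presumably it is safe to destroy frameMetaData after this returns,
    // unless stopRunning alone already guarantees that.
}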
Hi,
I have an app that displays tens of short (<1 MB) mp4 videos, stored on a remote server, in a vertical UICollectionView that has horizontally scrollable sections.
I'm caching all mp4 files on disk after downloading, and I also have an in-memory cache that holds a limited number (around 30) of players. The players I'm using are simple views that wrap an AVPlayerLayer and its AVPlayerItem, along with a few additional UI components.
Scrolling performance was good before iOS 26, but with the release of iOS 26 I noticed significant stuttering during scrolling while creating players with a fileUrl. It happens even if I use the same video file cached on disk for every cell for testing.
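To illustrate, the player setup per cell is roughly like this (a simplified sketch, not my exact code; names are illustrative):

import AVFoundation
import UIKit

// Simplified sketch of the player view described above.
final class PlayerView: UIView {
    override class var layerClass: AnyClass { AVPlayerLayer.self }
    private var playerLayer: AVPlayerLayer { layer as! AVPlayerLayer }

    // Creating the player/item here from the cached fileUrl is where the stutter shows up.
    func configure(with fileUrl: URL) {
        let asset = AVURLAsset(url: fileUrl)
        let item = AVPlayerItem(asset: asset)
        let player = AVPlayer(playerItem: item)
        player.isMuted = true
        playerLayer.player = player
    }
}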
I also started getting these kinds of log messages after the players are deinitialized:
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1107
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
<<<< PlayerRemoteXPC >>>> signalled err=-12785 at <>:1095
There's also another log message that I see occasionally, but I don't know what triggers it.
<< FigXPC >> signalled err=-16152 at <>:1683
Has anyone else experienced this kind of problem with the latest release?
Also, I'm wondering what the best way to resolve the issue is. I could increase the size of the in-memory cache to something large like 100, but I'm not sure that's an acceptable solution because:
1- There will be 100 player instances in memory at all times.
2- There will still be stuttering during the initial loading of the videos from the web.
Any help is appreciated!
Good day.
I created a video via iOS AVAssetWriter with the following settings:
let videoWriterInput = AVAssetWriterInput(
    mediaType: .video,
    outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: 1080, AVVideoHeightKey: 1920,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: 2_000_000,
            AVVideoMaxKeyFrameIntervalKey: 30
        ]
    ]
)
let audioWriterInput = AVAssetWriterInput(
    mediaType: .audio,
    outputSettings: [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: 44100,
        AVEncoderBitRateKey: 128000
    ]
)
When it is split into fMP4 HLS format using ffmpeg, the video cannot be played on iOS, failing with the following error:
CoreMediaErrorDomain error -12848
However, the video plays normally on Android, in browser HLS players, and in VLC Media Player.
Please assist. Thank you.
Hi, I downloaded and ran https://developer.apple.com/documentation/realitykit/rendering-stereoscopic-video-with-realitykit
and noticed that memory usage grows linearly.
I replaced the sample video with a different 8K side-by-side video, and the app crashed almost immediately due to a memory leak.
It looks like the culprit is the makeMutablePixelBuffer() function: the allocated pixel buffers are not recycled after being used.
The screenshot is from a physical device.
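For context, what I expected instead is buffers drawn from a reusable pool, something like this sketch (illustrative only; this is not the sample's actual code):

import CoreVideo

// Illustrative sketch: allocate pixel buffers from a CVPixelBufferPool so they are
// recycled, instead of creating a fresh buffer for every frame.
final class PixelBufferAllocator {
    private var pool: CVPixelBufferPool?

    init?(width: Int, height: Int) {
        let attrs: [CFString: Any] = [
            kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey: width,
            kCVPixelBufferHeightKey: height,
            kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary
        ]
        CVPixelBufferPoolCreate(kCFAllocatorDefault, nil, attrs as CFDictionary, &pool)
        if pool == nil { return nil }
    }

    // Buffers returned here go back to the pool once all references are released.
    func makeBuffer() -> CVPixelBuffer? {
        guard let pool else { return nil }
        var buffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &buffer)
        return buffer
    }
}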
Our app plays TS files on an iPhone.
The app fragments the TS files, creates an M3U8 playlist, converts them to HLS (HTTP Live Streaming), and then uses AVPlayer to play the video content.
On a device running iOS 26, after starting playback and seeking, restarting playback causes the video and audio to be out of sync (by about 2-3 seconds depending on the situation).
This also occurs on iPadOS/macOS 26.
This issue was not observed prior to iOS 18.
We are trying to fix this issue on the app side, but we have the following questions:
The behavior of AVPlayer differs between iOS 26 and previous versions. Has there been an intentional change that could explain this, or is it a bug?
We tried pausing before seeking, but it didn't seem to have any effect. Are there any APIs or workarounds that can improve this?
We would also appreciate any other helpful documents or URLs you could point us to.
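For reference, the pause-then-seek we tried looks roughly like this (a simplified sketch, not our exact code):

import AVFoundation

// Simplified sketch of the pause-then-seek sequence mentioned above.
func restartPlayback(of player: AVPlayer, at seconds: Double) {
    player.pause()
    let target = CMTime(seconds: seconds, preferredTimescale: 600)
    player.seek(to: target) { _ in
        player.play()
    }
}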
Currently, I am using a Broadcast Upload Extension to capture the screen and send the sample buffer data to the app over IPC (a local Unix domain socket, coordinated through an App Group). However, when the app goes into the background, for example to view videos in the Photos album or play other audio/video, the data transmission stops and the app can no longer receive the screen-recording data. How can I solve this? I suspect the system has suspended the extension's screen recording.
We have been using passthrough in screen capture with a broadcast upload extension, which was working in visionOS 2.2, but now with visionOS 26 it doesn't update. It fails with "Invalid Broadcast session started" a few seconds after starting the broadcast session.
Is there a bug filed for this, or is it a known issue?
I'm at my wit's end with a problem I'm facing while developing an app. The app is designed to send video captured on an iPhone to a browser application for real-time display.
While it works on many older iPhone models, whenever I test it on an iPhone 17 Pro or 17 Pro Max, the video displayed in the browser becomes a solid green screen, or a colorful, garbled image that's mostly green.
I've been digging into this, and my main suspicion is an encoding failure. It seems the resolution of the ultra-wide and telephoto cameras was significantly increased on the 17 Pro and Pro Max (from 12MP to 48MP), and I think this might be overwhelming the encoder.
I'm really hoping someone here has encountered a similar issue or has any suggestions. I'm open to any information or ideas you might have. Please help!
Environment Information:
WebRTC Library: GoogleWebRTC Version 1.1 (via CocoaPods)
Signaling Server: AWS Kinesis Video Streams
Problem Occurs on:
Model: iPhone18,1, OS: 26.0
Model: iPhone18,1, OS: 26.1
Works Fine on:
Many models, iPhone17,5 and earlier
Model: iPhone18,1, OS: 26.0
Model: iPhone18,3, OS: 26.0
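In case it's relevant, the workaround I'm experimenting with is pinning the capture device to a lower-resolution format before frames are handed to WebRTC. This is only a sketch, based on my unconfirmed assumption that the encoder is choking on the larger native format:

import AVFoundation

// Sketch: pick a format no wider than 1920 px so the frames handed to the encoder
// stay at a resolution that older devices also produced.
func pinCaptureFormat(of device: AVCaptureDevice, maxWidth: Int32 = 1920) throws {
    guard let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width <= maxWidth
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = format
    device.unlockForConfiguration()
}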
SBS view packing adds half a frame to the opposite eye.
Meaning, if you look all the way to the right, you can see an extra half frame with the left eye, and vice versa.
OU doesn't work at all: the preview doesn't show a thumbnail and the video doesn't play.
Any hints on how to fix this? I submitted a bug report but haven't heard anything.
Is there sample code on how to use VTFrameProcessorOpticalFlow?
How can I use the iPhone 16 Pro to capture HDR Dolby Vision and use VideoToolbox to encode with Dolby Vision metadata in the bitstream?
I know the iOS system can achieve this, but I would like to see source code showing how to do it.
Context: "Explore low-latency video encoding with VideoToolbox" https://developer.apple.com/videos/play/wwdc2021/10158/
The session above says the HEVC VideoToolbox encoder can support SVC temporal layering of streams in low-latency mode with all P-frames.
My question is: does HEVC VideoToolbox support temporal layering of streams with B-frames? Thanks.
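For context, this is roughly how the low-latency, all-P-frame temporal layering from that session is enabled today (a sketch based on the talk; values are illustrative):

import VideoToolbox

// Sketch: create a low-latency HEVC compression session and ask for two temporal
// layers (base layer at half the frame rate, all P-frames).
var session: VTCompressionSession?
let encoderSpec: [CFString: Any] = [
    kVTVideoEncoderSpecification_EnableLowLatencyRateControl: true
]
let status = VTCompressionSessionCreate(
    allocator: kCFAllocatorDefault,
    width: 1920,
    height: 1080,
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: encoderSpec as CFDictionary,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session)
if status == noErr, let session {
    VTSessionSetProperty(session,
                         key: kVTCompressionPropertyKey_BaseLayerFrameRateFraction,
                         value: NSNumber(value: 0.5))
}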
Does the VideoToolbox HEVC encoder on the iPhone 16 Pro encode 10-bit HDR VUI info into the bitstream?
I want to encode a 4K 10-bit HDR bitstream with VUI info using the HEVC VideoToolbox encoder, but I don't know how to get the VUI info into the bitstream. Could you give me some sample code?
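To make the question concrete, what I have in mind is setting the 10-bit profile and color properties on the compression session, roughly like this (a sketch; I'm not sure whether this alone is what writes the VUI color info into the bitstream):

import VideoToolbox

// Sketch: request HEVC Main10 and tag the stream as BT.2020 / PQ.
// Assumes `session` is an existing VTCompressionSession configured for HEVC.
func applyHDR10Properties(to session: VTCompressionSession) {
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ProfileLevel,
                         value: kVTProfileLevel_HEVC_Main10_AutoLevel)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_ColorPrimaries,
                         value: kCVImageBufferColorPrimaries_ITU_R_2020)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_TransferFunction,
                         value: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ)
    VTSessionSetProperty(session, key: kVTCompressionPropertyKey_YCbCrMatrix,
                         value: kCVImageBufferYCbCrMatrix_ITU_R_2020)
}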
I’m implementing FairPlay offline streaming on iOS and ran into a question about DRM expiration handling.
As far as I understand, when issuing a FairPlay offline license, there are typically two time windows:
1. The period during which the user can start offline playback (the longer “rental window”).
2. Once playback starts, the duration allowed to complete playback (the shorter “playback window”).
I’d like to display this information (the remaining validity or expiration time) in the app’s UI next to each downloaded asset.
My question is:
👉 Is there a way to programmatically check or retrieve the expiration time for a FairPlay offline asset on the client side (via AVFoundation or AVContentKeySession)?
Any guidance or best practices for surfacing DRM expiration info in the UI would be greatly appreciated.
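In case there is no direct API for this, the fallback I'm considering is recording the two windows myself when the license is issued and computing the remaining time for the UI; a rough sketch with hypothetical names:

import Foundation

// Hypothetical bookkeeping for the two FairPlay windows described above.
struct OfflineLicenseInfo: Codable {
    let issuedAt: Date               // when the persistable key was created
    let rentalWindow: TimeInterval   // how long playback may be started
    let playbackWindow: TimeInterval // how long playback may run once started
    var firstPlayedAt: Date?         // set when the user first starts playback

    var expirationDate: Date {
        if let firstPlayedAt {
            return min(issuedAt.addingTimeInterval(rentalWindow),
                       firstPlayedAt.addingTimeInterval(playbackWindow))
        }
        return issuedAt.addingTimeInterval(rentalWindow)
    }

    var remainingTime: TimeInterval {
        max(0, expirationDate.timeIntervalSinceNow)
    }
}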
It seems that VideoToolbox in iOS 26 is having trouble decoding H.264 streams. I have filed a bug report: https://feedbackassistant.apple.com/feedback/20741484
Are there limits on the supported dimensions for VTLowLatencyFrameInterpolationConfiguration? Querying VTLowLatencyFrameInterpolationConfiguration.maximumDimensions and VTLowLatencyFrameInterpolationConfiguration.minimumDimensions returns nil. When I try the WWDC sample project EnhancingYourAppWithMachineLearningBasedVideoEffects with a 4K video, the statement try frameProcessor.startSession(configuration: configuration) executes, but try await frameProcessor.process(parameters: parameters) throws Error Domain=VTFrameProcessorErrorDomain Code=-19730 "Processor is not initialized" UserInfo={NSLocalizedDescription=Processor is not initialized}.
Also, why is VTLowLatencyFrameInterpolationConfiguration able to run while the app is backgrounded, but VTFrameRateConversionParameters can't (due to GPU usage)?
The delegate method is not executed: when AVPlayerViewController is in full screen and I tap the back button, func playerViewController(_ playerViewController: AVPlayerViewController,
willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) is not called.
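The setup is roughly like this (a simplified sketch; names are illustrative):

import AVKit
import UIKit

// Simplified sketch of the setup in question.
final class PlayerHostViewController: UIViewController, AVPlayerViewControllerDelegate {
    func presentPlayer(with url: URL) {
        let playerViewController = AVPlayerViewController()
        playerViewController.player = AVPlayer(url: url)
        playerViewController.delegate = self // delegate is set, so the callback below should fire
        present(playerViewController, animated: true)
    }

    func playerViewController(_ playerViewController: AVPlayerViewController,
                              willEndFullScreenPresentationWithAnimationCoordinator coordinator: UIViewControllerTransitionCoordinator) {
        // Expected when the user taps the back/close button in full screen,
        // but it is not being called in my case.
        print("willEndFullScreenPresentation")
    }
}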
I'm building a Swift video editor with AVFoundation and a custom compositor. Despite setting AVVideoComposition.frameDuration to 60 FPS, I'm seeing significant frame skipping during playback.
Console Output Shows Frame Skipping
Frame #0 at 0.0 ms (fps: 60.0)
Frame #2 at 33.333333333333336 ms (fps: 60.0)
Frame #6 at 100.0 ms (fps: 60.0)
Frame #10 at 166.66666666666666 ms (fps: 60.0)
Frame #32 at 533.3333333333334 ms (fps: 60.0)
Frame #62 at 1033.3333333333335 ms (fps: 60.0)
Frame #96 at 1600.0 ms (fps: 60.0)
Instead of frames every ~16.67ms (60 FPS), I'm getting irregular intervals, sometimes 33ms, 67ms, or hundreds of milliseconds apart.
Renderer.swift (Key Parts)
@MainActor
class Renderer: ObservableObject {
    @Published var playerItem: AVPlayerItem?
    private let assetManager: ProjectAssetManager?
    private let compositorId: String

    func buildComposition() async throws {
        // ... load mouse moves/clicks data ...
        let composition = AVMutableComposition()
        guard let videoTrack = composition.addMutableTrack(
            withMediaType: .video,
            preferredTrackID: kCMPersistentTrackID_Invalid
        ) else { return }

        var currentTime = CMTime.zero
        var layerInstructions: [AVMutableVideoCompositionLayerInstruction] = []

        // Insert video segments
        for videoURL in videoURLs {
            let asset = AVAsset(url: videoURL)
            let tracks = try await asset.loadTracks(withMediaType: .video)
            guard let assetVideoTrack = tracks.first else { continue }
            let duration = try await asset.load(.duration)

            try videoTrack.insertTimeRange(
                CMTimeRange(start: .zero, duration: duration),
                of: assetVideoTrack,
                at: currentTime
            )

            let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
            let transform = try await assetVideoTrack.load(.preferredTransform)
            layerInstruction.setTransform(transform, at: currentTime)
            layerInstructions.append(layerInstruction)

            currentTime = CMTimeAdd(currentTime, duration)
        }

        let videoComposition = AVMutableVideoComposition()
        videoComposition.frameDuration = CMTime(value: 1, timescale: 60) // 60 FPS

        // Set render size from first video
        if let firstURL = videoURLs.first {
            let firstAsset = AVAsset(url: firstURL)
            if let firstTrack = try await firstAsset.loadTracks(withMediaType: .video).first {
                let naturalSize = try await firstTrack.load(.naturalSize)
                let transform = try await firstTrack.load(.preferredTransform)
                videoComposition.renderSize = CGSize(
                    width: abs(naturalSize.applying(transform).width),
                    height: abs(naturalSize.applying(transform).height)
                )
            }
        }

        let instruction = CompositorInstruction()
        instruction.timeRange = CMTimeRange(start: .zero, duration: currentTime)
        instruction.layerInstructions = layerInstructions
        instruction.compositorId = compositorId
        videoComposition.instructions = [instruction]
        videoComposition.customVideoCompositorClass = CustomVideoCompositor.self

        let playerItem = AVPlayerItem(asset: composition)
        playerItem.videoComposition = videoComposition
        self.playerItem = playerItem
    }
}
class CompositorInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    var timeRange: CMTimeRange = .zero
    var enablePostProcessing: Bool = false
    var containsTweening: Bool = false
    var requiredSourceTrackIDs: [NSValue]?
    var passthroughTrackID: CMPersistentTrackID = kCMPersistentTrackID_Invalid
    var layerInstructions: [AVVideoCompositionLayerInstruction] = []
    var compositorId: String = ""
}
class CustomVideoCompositor: NSObject, AVVideoCompositing {
    var sourcePixelBufferAttributes: [String : Any]? = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]
    var requiredPixelBufferAttributesForRenderContext: [String : Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)
    ]

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {}

    func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) {
        guard let sourceTrackID = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value,
              let sourcePixelBuffer = asyncVideoCompositionRequest.sourceFrame(byTrackID: sourceTrackID),
              let outputBuffer = asyncVideoCompositionRequest.renderContext.newPixelBuffer() else {
            asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoCompositor", code: -1))
            return
        }

        let videoComposition = asyncVideoCompositionRequest.renderContext.videoComposition
        let frameDuration = videoComposition.frameDuration
        let fps = Double(frameDuration.timescale) / Double(frameDuration.value)

        let compositionTime = asyncVideoCompositionRequest.compositionTime
        let seconds = CMTimeGetSeconds(compositionTime)
        let frameInMilliseconds = seconds * 1000
        let frameNumber = Int(round(seconds * fps))

        print("Frame #\(frameNumber) at \(frameInMilliseconds) ms (fps: \(fps))")

        asyncVideoCompositionRequest.finish(withComposedVideoFrame: outputBuffer)
    }

    func cancelAllPendingVideoCompositionRequests() {}
}
VideoPlayerViewModel
@MainActor
class VideoPlayerViewModel: ObservableObject {
    let player = AVPlayer()
    private let renderer: Renderer

    func loadVideo() async {
        do {
            try await renderer.buildComposition()
        } catch {
            print("Failed to build composition: \(error)")
            return
        }
        if let playerItem = renderer.playerItem {
            player.replaceCurrentItem(with: playerItem)
        }
    }
}
What I've Tried
Frame skipping is consistent—exact same timestamps on every playback
Issue persists even with minimal processing (just passing through buffers)
Occurs regardless of compositor complexity
Please note that I need every frame at exact millisecond intervals for my application. Frame loss or inconsistent frameInMillisecond values are not acceptable.
Hey everyone,
I'm stuck on a really frustrating AVFoundation problem. I'm building a video editor that uses a custom AVVideoCompositor to add effects, and I need the final output to be 60 FPS.
So basically, I create an AVMutableComposition to sequence my video clips. I create an AVMutableVideoComposition and set the frame rate to 60 FPS: videoComposition.frameDuration = CMTime(value: 1, timescale: 60)
I assign my CustomVideoCompositor class to the videoComposition.
I create an AVPlayerItem with the composition and video composition.
The Problem:
Playback Works: When I play the AVPlayerItem in an AVPlayer, it's perfect. It plays at a smooth 60 FPS, and my custom compositor's startRequest method is called 60 times per second.
Export Fails: When I try to export the exact same composition and video composition using AVAssetExportSession, the final .mp4 file is always 30 FPS (or 29.97).
I've logged inside my custom compositor during the export, and it's definitely being called 30 times per second, so it's generating the 30 frames. It seems like AVAssetExportSession is just dropping every other frame when it encodes the video.
My source videos are screen recordings that I captured with ScreenCaptureKit, with the minimum frame interval configured for 60 FPS.
Here is my export function. I'm using the AVAssetExportPresetHighestQuality preset:
func exportVideo(to outputURL: URL) async throws {
    guard let composition = composition,
          let videoComposition = videoComposition else {
        throw VideoCompositionError.noValidVideos
    }

    try? FileManager.default.removeItem(at: outputURL)

    guard let exportSession = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality // Is this the problem?
    ) else {
        throw VideoCompositionError.trackCreationFailed
    }

    exportSession.outputFileType = .mp4
    exportSession.videoComposition = videoComposition // This has the 60fps setting

    try await exportSession.export(to: outputURL, as: .mp4)
}
I've created a bare bones sample project that shows this exact bug in action. The resulting video is 60fps during playback, but only 30fps during the export. https://github.com/zaidbren/SimpleEditor
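For reference, this is roughly how I check the frame rate of the exported file (a small sketch):

import AVFoundation

// Sketch: read the nominal frame rate of the exported file's video track.
func reportFrameRate(of url: URL) async throws {
    let asset = AVURLAsset(url: url)
    guard let track = try await asset.loadTracks(withMediaType: .video).first else { return }
    let nominalFrameRate = try await track.load(.nominalFrameRate)
    print("Exported nominal frame rate: \(nominalFrameRate) fps") // ~30 for the exported file
}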
My Question:
Why is AVAssetExportSession ignoring my 60 FPS frameDuration and defaulting to 30 FPS, even though AVPlayer respects it?