Our iOS/tvOS video playback app uses AVPlayer to play HLS video streams and supports both custom and system playback UIs. The FairPlay content key is retrieved using AVContentKeySession. AirPlay is supported too.
When the iPhone is connected to a TV through the Lightning Digital AV Adapter (A1438), the app is mirrored as expected.
Problem: on an iPhone or iPad running iOS 18.1.1, FairPlay-protected HLS streams do not play, and a CoreMediaErrorDomain -12035 error is received by the AVPlayerItem. In addition, once the issue has occurred, the mirroring freezes (the TV indefinitely displays the app's playback screen), although the app continues to work fine on the iOS device.
The content key retrieval works as expected (I can see that two content key requests are made by the system, presumably one for local playback and one for the adapter, as with AirPlay), and the error is thrown after the AVContentKeyResponse is provided.
Unfortunately, as far as I know, there is no documentation on CoreMediaErrorDomain errors, so I don't know what -12035 means.
The issue does not occur:
on an iPhone on iOS 17.7 (even with FairPlay-protected HLS streams)
when playing DRM-free video content (whatever the iOS version)
when using the USB-C AV Adapter (whatever the iOS version)
Also worth noting: the issue does not occur with other video playback apps such as Apple TV or Netflix, although I don't have any details on the kind of streams these apps play or how the FairPlay content key is retrieved (if at all), so I don't know whether that is relevant.
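For reference, this is roughly how we surface the failure on the AVPlayerItem (a minimal sketch, not our production code; the helper name is mine and the observation token has to be retained by the caller):

import AVFoundation

/// Observe the item's status and log the underlying error plus the last error-log entry.
func observeFailures(of item: AVPlayerItem) -> NSKeyValueObservation {
    item.observe(\.status, options: [.new]) { item, _ in
        guard item.status == .failed else { return }
        // On iOS 18.1.1 with the Lightning Digital AV Adapter this reports
        // CoreMediaErrorDomain code -12035 after the AVContentKeyResponse is provided.
        print("Playback failed:", item.error ?? "unknown error")
        if let event = item.errorLog()?.events.last {
            print("Error log:", event.errorDomain, event.errorStatusCode, event.errorComment ?? "")
        }
    }
}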
Hi everyone,
I'm developing a customization tool in which our customers can upload an mp3 or mp4 file that will be scannable through our AR application. On desktop and Android this works perfectly. For some reason, however, on iPhone we're unable to load most of the video files. I've checked the clips, and they are .mov/H.264 files, which are supported by iPhone.
We're currently not sure how we can fix this issue to allow customers that own an iPhone to upload clips to our website.
Any tips in the right direction are more than welcome.
Thanks in advance!
When I play a local video (downloaded into the app's sandbox), KVO on the AVPlayerItem status reports AVPlayerItemStatusFailed, and the error is: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (24), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x3004137e0 {Error Domain=NSPOSIXErrorDomain Code=24 "Too many open files"}}
Why?
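Since POSIX error 24 is EMFILE ("too many open files"), one thing worth checking is whether readers, players, or file handles are created repeatedly without being released, and what limit the process is running against. A small diagnostic sketch (the helper name is mine):

import Darwin

// Log the process's open-file-descriptor limits; hitting the soft limit produces EMFILE (24).
func logFileDescriptorLimit() {
    var limit = rlimit()
    if getrlimit(RLIMIT_NOFILE, &limit) == 0 {
        print("Open file descriptor limit: soft \(limit.rlim_cur), hard \(limit.rlim_max)")
    }
}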
Hello Apple Engineers,
Specific Issue:
I am working on a video recording feature in my SwiftUI app, and I am trying to record 4K60 video in ProRes Log format using the iPhone's internal storage. Here's what I have tried so far:
I am using AVCaptureSession with AVCaptureMovieFileOutput and configuring the session to support 4K resolution and ProRes codec.
The sessionPreset is set to .inputPriority, and the video device is configured with settings such as disabling HDR to prepare for Log.
However, when attempting to record 4K60 ProRes video, I get the error: "Capturing 4k60 with ProRes codec on this device is supported only on external storage device."
This error seems to imply that 4K60 ProRes recording is restricted to external storage devices. But I am trying to achieve this internally on devices such as the iPhone 15 Pro Max, which has native support for ProRes encoding.
Here are my questions:
Is it technically possible to record 4K60 ProRes Log video internally on supported iPhones (for example: iPhone 15 Pro Max)?
There are some third-party apps (e.g., Blackmagic 👍🏻) that can save 4K60 ProRes Log video internally on iPhone. If internal saving is supported, what additional configuration of the AVCaptureSession, or other technique, is needed to get past this limitation?
If anyone has successfully saved 4K60 ProRes Log video on iPhone internal storage, your guidance would be highly appreciated.
Thank you for your help!
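For context, this is the direction of configuration I've been attempting, shown as a rough sketch (it assumes the device and AVCaptureMovieFileOutput are already attached to the session, and it still produces the external-storage error for 4K60 on my devices):

import AVFoundation

func configureProResLog(device: AVCaptureDevice, movieOutput: AVCaptureMovieFileOutput) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Pick a 4K format that advertises Apple Log and at least 60 fps.
    if let format = device.formats.first(where: { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width == 3840 && dims.height == 2160
            && format.supportedColorSpaces.contains(.appleLog)
            && format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 60 }
    }) {
        device.activeFormat = format
        device.activeColorSpace = .appleLog
        device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 60)
        device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 60)
    }

    // Ask the movie output for ProRes on the video connection, if it is available.
    if let connection = movieOutput.connection(with: .video),
       movieOutput.availableVideoCodecTypes.contains(.proRes422) {
        movieOutput.setOutputSettings([AVVideoCodecKey: AVVideoCodecType.proRes422], for: connection)
    }
}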
We have developed a simple video player Swift application for macOS, which uses the AVFoundation framework. A special feature of this app is the ability to play the video backward at speeds like -0.25x, -0.5x, and -1.0x. The MP4 video file is played directly from the local file system; the video codec is H.264 and the audio is AAC. The video files are huge, around 10 GB, with a length of 3 hours.
Playing video in the reverse direction works well on a MacBook Air with an M1 or M2 chip. When we run the same app with the same video on a MacBook Air with an M3 chip, reverse playback is much worse. Playback can stutter badly, especially in the latter part of the video. The same behavior also happens in Apple's QuickTime Player when playing in the reverse direction at -1x speed. What's even stranger is that at one point in time the playback is totally smooth, but after a while it stutters again. For example, this morning reverse playback worked 100% smoothly; then I rebooted the Mac and tried again, and the result was stuttering. After the Mac had stayed idle for several hours, I tried reverse playback again: smooth performance! My conclusion: M3 playback works fine if the stars in the sky are aligned correctly. :-)
So it's not only our app; QuickTime Player shows exactly the same behavior, and only with the M3 chip. The same symptom appears on another similar M3 Mac, so it can't be a single faulty unit. At the same time, the open-source video player IINA can play the video in reverse just fine on the same Mac.
All Macs have otherwise identical configuration: 16 GB RAM and macOS 15.1.1.
Have you experienced the same problem? Any chance to solve this problem?
I really hope that the M4 chip Mac is behaving better here.
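For completeness, reverse playback in our app is driven simply by setting a negative rate on the AVPlayer, roughly like this (a minimal sketch; the helper name is mine):

import AVFoundation

func playBackward(_ player: AVPlayer, speed: Float) {
    guard let item = player.currentItem else { return }
    // -1.0 requires canPlayReverse; slower negative rates require canPlaySlowReverse.
    let supported = speed == -1.0 ? item.canPlayReverse : item.canPlaySlowReverse
    if supported {
        player.rate = speed   // e.g. -0.25, -0.5, -1.0
    }
}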
I want to use AVAssetWriter to output video files with a constant frame rate (CFR) or variable bitrate (VBR).
As far as I could search, I couldn't find out how.
How do I implement the settings?
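For reference, this is the kind of configuration I'm experimenting with (a sketch; the keys are the standard AVVideoCompressionPropertiesKey entries, and the concrete numbers are just placeholders). As far as I understand, the bitrate-related keys drive VBR encoding, while a constant frame rate mostly comes from appending samples with uniformly spaced presentation timestamps:

import AVFoundation

// Encoder hints: average bitrate (VBR target), expected frame rate, keyframe interval.
let compressionProperties: [String: Any] = [
    AVVideoAverageBitRateKey: 6_000_000,       // target average bitrate in bps (example value)
    AVVideoExpectedSourceFrameRateKey: 30,     // hint for the encoder
    AVVideoMaxKeyFrameIntervalKey: 30          // one keyframe per second at 30 fps
]

let outputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoCompressionPropertiesKey: compressionProperties
]

let writerInput = AVAssetWriterInput(mediaType: .video, outputSettings: outputSettings)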
I use navigator.mediaDevices.getUserMedia to wake up the camera. When I use the constraint {video: {facingMode: {exact: 'user'}}}, an "invalid constraint" error occurs.
I tried running the latest CaptureSample, selected local/canonical HDR, added it to the screen output, then stopped the capture. It is supposed to save a file, but no file is added at all.
It works for SDR, just not for HDR. How do I get HDR working?
Hi, I'm working on an app with an infinitely scrollable video feed, similar to TikTok or Instagram Reels. I initially thought it would be a good idea to cache videos in the file system, but after reading this post it seems that caching videos on the file system is not recommended: https://forums.developer.apple.com/forums/thread/649810#:~:text=If%20the%20videos%20can%20be%20reasonably%20cached%20in%20RAM%20then%20we%20would%20recommend%20that.%20Regularly%20caching%20video%20to%20disk%20contributes%20to%20NAND%20wear
The reason I am hesitant to cache videos in memory is that this adds up quickly and increases memory pressure for my app.
After seeing the amount of Documents & Data storage that Instagram uses, it's obvious they are caching videos on the file system. So I was wondering: what is the current best practice for caching in these kinds of apps?
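The in-memory direction I've been considering looks roughly like this (a sketch; the class name and byte budget are mine), using NSCache so the system can evict entries under memory pressure instead of the data piling up on disk:

import Foundation

final class VideoMemoryCache {
    private let cache = NSCache<NSURL, NSData>()

    init(byteLimit: Int = 200 * 1024 * 1024) {   // ~200 MB budget, tune as needed
        cache.totalCostLimit = byteLimit
    }

    func store(_ data: Data, for url: URL) {
        cache.setObject(data as NSData, forKey: url as NSURL, cost: data.count)
    }

    func data(for url: URL) -> Data? {
        cache.object(forKey: url as NSURL) as Data?
    }
}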
I'm trying to add metadata every second during video capture in the Swift sample app AVMultiCamPiP: a simple string that changes every second, with a write triggered by a Timer. I can't get it to work; no matter how I arrange it, it always ends with the error "Cannot create a new metadata adaptor with an asset writer input that has already started writing".
This is the setup section:
// Add a metadata input
let assetWriterMetaDataInput = AVAssetWriterInput(mediaType: .metadata, outputSettings: nil, sourceFormatHint: AVTimedMetadataGroup().copyFormatDescription())
assetWriterMetaDataInput.expectsMediaDataInRealTime = true
assetWriter.add(assetWriterMetaDataInput)
self.assetWriterMetaDataInput = assetWriterMetaDataInput
This is the timed metadata creation which gets triggered every second:
let newNoteMetadataItem = AVMutableMetadataItem()
newNoteMetadataItem.value = "Some string" as (NSCopying & NSObjectProtocol)?
let metadataItemGroup = AVTimedMetadataGroup.init(items: [newNoteMetadataItem], timeRange: CMTimeRangeMake( start: CMClockGetTime( CMClockGetHostTimeClock() ), duration: CMTime.invalid ))
movieRecorder?.recordMetaData(meta: metadataItemGroup)
This function is supposed to add the metadata to the track:
func recordMetaData(meta: AVTimedMetadataGroup) {
    guard isRecording,
          let assetWriter = assetWriter,
          assetWriter.status == .writing,
          let input = assetWriterMetaDataInput,
          input.isReadyForMoreMediaData else {
        return
    }
    let metadataAdaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
    metadataAdaptor.append(meta)
}
I have an older code example in Objective-C which works OK, but it uses AVCaptureMetadataInput appendTimedMetadataGroup and writes to an identifier called quickTimeMetadataLocationNote. I'd like to do something similar in the Swift code above...
All suggestions are appreciated!
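For reference, the setup I've been experimenting with looks roughly like this (a sketch; the identifier "mdta/com.example.note" is a placeholder). As far as I understand, the metadata format description and the AVAssetWriterInputMetadataAdaptor need to be created during setup, before assetWriter.startWriting() is called, and then reused for every append:

import AVFoundation
import CoreMedia

// Build a metadata format description, create the input and the adaptor before writing starts.
func makeMetadataAdaptor(for assetWriter: AVAssetWriter) -> AVAssetWriterInputMetadataAdaptor? {
    let spec: [String: Any] = [
        kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as String: "mdta/com.example.note",
        kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as String: kCMMetadataBaseDataType_UTF8 as String
    ]
    var formatDescription: CMFormatDescription?
    let status = CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
        allocator: kCFAllocatorDefault,
        metadataType: kCMMetadataFormatType_Boxed,
        metadataSpecifications: [spec] as CFArray,
        formatDescriptionOut: &formatDescription)
    guard status == 0, formatDescription != nil else { return nil }

    let input = AVAssetWriterInput(mediaType: .metadata, outputSettings: nil,
                                   sourceFormatHint: formatDescription)
    input.expectsMediaDataInRealTime = true
    guard assetWriter.canAdd(input) else { return nil }
    assetWriter.add(input)
    return AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)
}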
My app is not a VoIP application. I use devices that support character centering (Center Stage), such as iPad 10 or iPad 13.18, running iOS 18.0, 18.1, or 18.1.1. When entering live classes, the "Character Centering" button does not appear in Control Center, as shown in the following picture. However, if Voice over IP is selected under Background Modes in the project and the app is run again, the issue no longer reproduces, even after uninstalling and reinstalling. Could you please help me investigate the reason? Thank you!
My app currently captures video using an AVCaptureSession set with the AVCaptureSessionPreset1920x1080 preset. However, I'd like to update this behavior, such that video can be recorded at a range of different resolutions.
There isn't a preset aligning to each desired resolution, so I thought I'd instead directly set the AVCaptureDeviceFormat. For any desired resolution, I would find the format that is closest without going under the desired resolution, and then crop it down as a post-processing step.
However, what I've observed is that there can be a range of available formats for a device at each resolution, with various differing settings. Presumably there is logic within AVCaptureSession that selects a reasonable default based on all these different settings, but since I am applying the format directly, I think I don't have a way to make use of that default logic? And it is undocumented?
Does this mean that the only way to select a format is to implement a comparison function that considers all different values of all different properties on AVCaptureDeviceFormat, and then sort the formats according to this comparator?
If so, what if some new property is added to AVCaptureDeviceFormat in the future? The sort would not take this new property into account, and the function might select a format with some new undesired property.
Are there any guarantees about what types of formats will be supported on a device? For example, can I take for granted that a '420v' format will exist at each resolution? If so, I could filter the formats down to only those with this setting without risking filtering out all of the supported formats.
I suspect I may be missing something obvious. Any help would be greatly appreciated!
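To make the question concrete, the selection I'm currently considering looks like this (a sketch; it filters to '420v' formats at or above the requested size and takes the smallest one):

import AVFoundation

func selectFormat(for device: AVCaptureDevice, minWidth: Int32, minHeight: Int32) throws {
    // Keep only 420v formats that are at least the requested size.
    let candidates = device.formats.filter { format in
        CMFormatDescriptionGetMediaSubType(format.formatDescription) == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
    }.filter { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.width >= minWidth && dims.height >= minHeight
    }

    // Take the smallest remaining format so cropping overhead stays low.
    guard let best = candidates.min(by: { a, b in
        let da = CMVideoFormatDescriptionGetDimensions(a.formatDescription)
        let db = CMVideoFormatDescriptionGetDimensions(b.formatDescription)
        return da.width * da.height < db.width * db.height
    }) else { return }

    try device.lockForConfiguration()
    device.activeFormat = best
    device.unlockForConfiguration()
}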
Since iOS/iPadOS/tvOS 18 we have run into a new problem with streaming of FairPlay-encrypted video. On the affected streams the audio plays perfectly, but the video freezes for periods of a few seconds: it will freeze for 5 s or so, then be OK for a few seconds, then freeze again.
It is entirely reproducible when all of the following are true:
the video streams were produced by a particular encoder (or with particular settings, we're not sure)
the video must be encrypted
device is running some variety of iOS 18 (or iPadOS or tvOS)
the device is an affected device
Known affected devices are
AppleTV 4K 2nd Gen
iPad Pro 11" 1st and 2nd gen
Devices known not to show the problem are
all other AppleTV models
iPhone 13 Pro and 16 Pro
If we stream the same content unencrypted, it plays perfectly, as it does if you play the encrypted stream on, say, tvOS 17.
When the freezing occurs, we can see repeating blocks of lines like the following in the console logs:
default 18:08:46.578582+0000 videocodecd AppleAVD: AppleAVDDecodeFrameResponse(): Frame# 5771 DecodeFrame failed with error 0x0000013c
default 18:08:46.578756+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): failed - error: 316
default 18:08:46.579018+0000 videocodecd AppleAVD: AppleAVDDecodeFrameInternal(): avdDec - Frame# 5771, DecodeFrame failed with error: 0x13c
default 18:08:46.579169+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 5771 with err -12909 - internalStatus: 315
also more relevant looking lines:
default 18:17:39.122019+0000 kernel AppleAVD: avdOutbox0ISR(): FRM DONE (cid: 2.0, fno: 10970, codecT: 1) FAILED!!
default 18:17:39.122155+0000 videocodecd AppleAVD: AppleAVDDisplayCallback(): Asking fig to drop frame # 10970 with err -12909 - internalStatus: 315
default 18:17:39.122221+0000 kernel AppleAVD: ## client[ 2.0] @ frm 10970, errStatus: 0x10
default 18:17:39.122338+0000 kernel AppleAVD: decodeFailIdentify(): VP error bit 4 has EP3B0 error
default 18:17:39.122401+0000 kernel AppleAVD: processHWResponse(): clientID 2.0 frameNumber 10970 error 315, offsetIndex 10, isHwErr 1
So it would seem to me that one of the following must be happening:
when these particular HLS files are encrypted, the data is corrupted in some way that played back fine on iOS 17 and earlier but no longer does on 18+, or
there is a regression in iOS 18 that means this particular format of video data is corrupted on decryption.
If anyone has seen similar behaviour, or has any ideas how to identify which of the two scenarios it is, please say.
Unfortunately we don't have control of the servers so can't make changes there unless we can identify they are definitely the cause of the problem.
Thanks, Simon.
Capturing more than one display is no longer working with macOS Sequoia.
We have a product that allows users to capture up to 2 displays/screens. Our application is using gstreamer which in turn is based on AVFoundation.
I found a quick way to replicate the issue by just running 2 captures from separate terminals. Assuming display 1 has device index 0, and display 2 has device index 1, here are the steps:
install gstreamer with
brew install gstreamer
Then open 2 terminal windows and launch the following processes:
terminal 1 (device-index:0):
gst-launch-1.0 avfvideosrc -e device-index=0 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
terminal 2 (device-index:1):
gst-launch-1.0 avfvideosrc -e device-index=1 capture-screen=true ! queue ! videoscale ! video/x-raw,width=640,height=360 ! videoconvert ! osxvideosink
The first process that is launched will show the screen, the second process launched will not.
Testing this on macOS Ventura and Sonoma works as expected, showing both screens.
I submitted the same issue on Feedback Assistant: FB15900976
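For what it's worth, the same behavior can be reproduced without gstreamer by building two capture sessions directly on AVFoundation (a sketch; as far as I know AVCaptureScreenInput is the screen-capture input that avfvideosrc uses under the hood, and Apple now points to ScreenCaptureKit instead):

import AVFoundation
import CoreGraphics

// One capture session per display, mirroring what the two gst-launch pipelines do.
func makeScreenSession(displayID: CGDirectDisplayID,
                       delegate: AVCaptureVideoDataOutputSampleBufferDelegate) -> AVCaptureSession? {
    let screenInput: AVCaptureScreenInput? = AVCaptureScreenInput(displayID: displayID)
    guard let input = screenInput else { return nil }

    let session = AVCaptureSession()
    guard session.canAddInput(input) else { return nil }
    session.addInput(input)

    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "screen.capture.\(displayID)"))
    guard session.canAddOutput(output) else { return nil }
    session.addOutput(output)

    session.startRunning()
    return session
}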
I am talking about AVCaptureVideoDataOutput.recommendedVideoSettings.
I found that it sometimes returns nil. Here are my test results:
hevc .mov with activeColorSpace sRGB
60FPS -> ok
120FPS -> ok
hevc .mov with activeColorSpace displayP3_HLG
60FPS -> nil
120FPS -> nil
h264 .mov
30FPS -> ok
60FPS -> nil
120FPS -> nil
So if the API doesn't provide a recommended setting in these cases, and there is no documentation about it, how is a developer supposed to use it?
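For reference, this is the kind of guard I end up writing (a sketch; the fallback settings are my own choice, which is exactly the part I'd like official guidance on):

import AVFoundation

func writerSettings(for output: AVCaptureVideoDataOutput, dimensions: CMVideoDimensions) -> [String: Any] {
    // Prefer the recommended settings when the API provides them.
    if let recommended = output.recommendedVideoSettingsForAssetWriter(writingTo: .mov) {
        return recommended
    }
    // Fallback when the API returns nil (e.g. displayP3_HLG or high frame rates in my tests).
    return [
        AVVideoCodecKey: AVVideoCodecType.hevc,
        AVVideoWidthKey: Int(dimensions.width),
        AVVideoHeightKey: Int(dimensions.height)
    ]
}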
Hello,
To create a test project, I want to understand what the video and audio settings should look like for a destination video whose content comes from a source video.
I obtain the output from the source video's audio and video tracks as follows:
let audioSettings = [
    AVFormatIDKey: kAudioFormatLinearPCM,
    AVSampleRateKey: 44100,
    AVNumberOfChannelsKey: 2
] as [String: Any]
var audioOutput = AVAssetReaderTrackOutput(track: audioTrack!,
                                           outputSettings: audioSettings)

// Video
let videoSettings = [
    kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
    kCVPixelBufferWidthKey: videoTrack!.naturalSize.width,
    kCVPixelBufferHeightKey: videoTrack!.naturalSize.height
] as [String: Any]
var videoOutput = AVAssetReaderTrackOutput(track: videoTrack!, outputSettings: videoSettings)
With this, I'm obtaining the CMSampleBuffer using AVAssetReader.copyNextSampleBuffer.
How can I add it to the destination video? Should I use a while loop, considering I already have the AVAssetWriter set up?
Something like this:
while let sampleBuffer = videoOutput.copyNextSampleBuffer() {
    if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let frame = imgBuffer as CVPixelBuffer
        let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        adaptor.append(frame, withPresentationTime: time)
    }
}
Lastly, regarding the destination video: how should the AVAssetWriterInput for audio and the pixel buffer adaptor for video be set up? Please provide an example, something like:
let audioSettings = […] as [String: Any]
Looking forward to your response.
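To make the question concrete, here's the shape of the writer setup I have in mind (a sketch; the concrete codecs, dimensions, and names are placeholders I chose, not a confirmed recommendation):

import AVFoundation

let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 2
]
let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)

let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080
]
let videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)

// The adaptor receives the CVPixelBuffers extracted from the reader's sample buffers.
let adaptor = AVAssetWriterInputPixelBufferAdaptor(
    assetWriterInput: videoInput,
    sourcePixelBufferAttributes: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])

From what I've read, appending is usually driven by AVAssetWriterInput.requestMediaDataWhenReady(on:using:) rather than a bare while loop, so samples are only pulled when the input can accept them; is that the recommended pattern here?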
We are processing videos with Core Image filters in our apps, using an AVMutableVideoComposition (for playback/preview and export).
For older devices, we want to limit the resolution at which the video frames are processed for performance and memory reasons. Ideally, we would tell AVFoundation to give us video frames with a defined maximum size into our composition. We thought setting the renderSize property of the composition to the desired size would do that.
However, this only changes the size of output frames, not the size of the source frames that come into the composition's handler block. For example:
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    let input = request.sourceImage // <- this still has the video's original size
    // ...
})
composition.renderSize = CGSize(width: 1280, height: 720) // for example
So if the user selects a 4K video, our filter chain gets 4K input frames. Sure, we can scale them down inside our pipeline, but this costs resources and especially a lot of memory. It would be way better if AVFoundation could decode the video frames in the desired size already before passing it into the composition handler.
Is there a way to tell AVFoundation to load smaller video frames?
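For context, the workaround we currently use is to downscale inside the handler, roughly like this (a sketch built on the same handler and the same `asset` as above); it works, but the full-resolution frame has already been decoded by the time it reaches us, which is exactly the cost we'd like to avoid:

import AVFoundation
import CoreImage

let targetSize = CGSize(width: 1280, height: 720)
let composition = AVMutableVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    // Downscale the full-size source frame before running the filter chain.
    let source = request.sourceImage
    let scale = min(targetSize.width / source.extent.width,
                    targetSize.height / source.extent.height)
    let scaled = source.transformed(by: CGAffineTransform(scaleX: scale, y: scale))
    // ... apply the actual filter chain to `scaled` here ...
    request.finish(with: scaled, context: nil)
})
composition.renderSize = targetSize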
Hello,
I'm a deaf-blind programmer.
I'm experiencing memory issues in my app. Essentially, I'm writing a video.
In this output video, I get content from two sources.
The first source is an already recorded video of 18 seconds (just for testing). It will be shown at the beginning of the output video.
The second source is an array with photos and another array with audio buffers from AVSpeechSynthesizer.write(). The photos will be added along with the audio buffers to the output video, right after adding the 18-second video.
So, in the end, the output video should be:
18-second video + array of photos as video images and, for audio, the buffers from AVSpeechSynthesizer.write().
However, my app crashes as soon as I start the first process.
I'm using AVAssetWriter to write the video and AVAssetReader to read the video.
Below, I'll show the code where I get the CMSampleBuffer.
I'd like an example of how to add the 18-second video to the beginning of the output video.
It doesn't need to be a big piece of code.
Here it is:
// Variables
var audioReaderBuffers = [CMSampleBuffer]()
var videoReaderBuffers = [(frame: CVPixelBuffer, time: CMTime)]()

// Get the CMSampleBuffers of a video asset
if let videoURL = videoURL {
    let videoAsset = AVAsset(url: videoURL)
    Task {
        let videoAssetTrack = try await videoAsset.loadTracks(withMediaType: .video).first!
        let audioTrack = try await videoAsset.loadTracks(withMediaType: .audio).first!
        let reader = try AVAssetReader(asset: videoAsset)

        let videoSettings = [
            kCVPixelBufferPixelFormatTypeKey: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey: videoAssetTrack.naturalSize.width,
            kCVPixelBufferHeightKey: videoAssetTrack.naturalSize.height
        ] as [String: Any]
        let readerVideoOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoSettings)

        let audioSettings = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 2
        ] as [String: Any]
        let readerAudioOutput = AVAssetReaderTrackOutput(track: audioTrack,
                                                         outputSettings: audioSettings)

        reader.add(readerVideoOutput)
        reader.add(readerAudioOutput)
        reader.startReading()

        // Video CMSampleBuffers
        while let sampleBuffer = readerVideoOutput.copyNextSampleBuffer() {
            autoreleasepool {
                if let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
                    let pixBuf = imgBuffer as CVPixelBuffer
                    let pTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                    videoReaderBuffers.append((frame: pixBuf, time: pTime))
                }
            }
        }
        // (The audio samples would be read from readerAudioOutput into audioReaderBuffers in the same way.)
    }
}
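To make the request concrete, here is roughly what I picture for writing the 18-second segment first (a sketch; writer, videoInput, and adaptor stand for the AVAssetWriter, AVAssetWriterInput, and AVAssetWriterInputPixelBufferAdaptor I already have set up):

import AVFoundation

func writeIntroSegment(writer: AVAssetWriter,
                       videoInput: AVAssetWriterInput,
                       adaptor: AVAssetWriterInputPixelBufferAdaptor,
                       videoReaderBuffers: [(frame: CVPixelBuffer, time: CMTime)],
                       completion: @escaping () -> Void) {
    writer.startWriting()
    writer.startSession(atSourceTime: videoReaderBuffers.first?.time ?? .zero)

    var index = 0
    let queue = DispatchQueue(label: "video.writing")
    videoInput.requestMediaDataWhenReady(on: queue) {
        // Append buffers only while the input can accept them.
        while videoInput.isReadyForMoreMediaData && index < videoReaderBuffers.count {
            let sample = videoReaderBuffers[index]
            adaptor.append(sample.frame, withPresentationTime: sample.time)
            index += 1
        }
        if index >= videoReaderBuffers.count {
            videoInput.markAsFinished()
            // Continue with the photos + speech buffers, offsetting their timestamps
            // by the 18-second segment's duration.
            completion()
        }
    }
}

I also wonder whether holding every CVPixelBuffer of the 18-second clip in videoReaderBuffers is itself the source of the memory pressure, and whether appending each buffer as it is read, instead of accumulating them in an array, would be gentler.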
I tried configuring preferredForwardBufferDuration on devices using 4G and Wi-Fi, and in these cases AVPlayer buffers correctly according to the configured duration. However, when the device is connected to a 5G network, the configured value no longer has any effect.
For example, if I set preferredForwardBufferDuration to 30 seconds, AVPlayer preloads with a buffer of over 100 seconds. I’m not sure how to resolve this, as it’s causing issues with my system.
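For reference, this is how the limit is applied in my player (a minimal sketch; streamURL is a placeholder), together with the observation I use to see how much is actually buffered:

import AVFoundation

let item = AVPlayerItem(url: streamURL)          // streamURL is a placeholder
item.preferredForwardBufferDuration = 30
let player = AVPlayer(playerItem: item)
player.automaticallyWaitsToMinimizeStalling = true

// Watch the loaded time ranges; on 4G/Wi-Fi the total stays near 30 s, on 5G it exceeds 100 s.
let observation = item.observe(\.loadedTimeRanges) { item, _ in
    let buffered = item.loadedTimeRanges
        .map { $0.timeRangeValue.duration.seconds }
        .reduce(0, +)
    print("Buffered: \(buffered) s")
}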
I'm looking to output DV video to my JVC SR-VS30 video deck. I used to be able to do this, but with most FireWire support being deprecated, I'm not sure how to go about it now. I found this old developer sample code that seems to do exactly what I'd like. Surely this could be reworked or updated for current macOS?
https://developer.apple.com/library/archive/samplecode/SimpleVideoOut/Introduction/Intro.html#//apple_ref/doc/uid/DTS10000809-Intro-DontLinkElementID_2