I am configuring an AVCaptureDevice's format correctly to capture 1080p 240 FPS video with an AVCaptureMovieFileOutput (which seems to be essentially an AVCaptureVideoDataOutput with recommendedVideoSettingsForAssetWriter(writingTo: .mov)). However, setting its activeColorSpace to .P3_D65 has no effect: the resulting video is in a YUV color space, and the AVCaptureDevice's color space remains .sRGB. So I'm not sure whether the video is actually wide color or is merely being encoded as wide color, if that is even possible.
Several errors in AVFoundation as it stands:
- manually choosing a capture format does not set the session to input priority as suggested here and elsewhere; you must set the preset to `inputPriority` or you may get 1080p but at 30 FPS, not 240 FPS
- a session does not `automaticallyConfiguresCaptureDeviceForWideColor` when possible; if this attribute is true, the device's `activeColorSpace` does not change even when set by hand, as described above
To better understand what is happening here, the relationships between the following should be elucidated in the AVFoundation documentation:
- an `AVCaptureDevice`'s `activeColorSpace`
- a session's `automaticallyConfiguresCaptureDeviceForWideColor` attribute
- an `AVCaptureVideoDataOutput` and its `kCVPixelBufferPixelFormatTypeKey`, such as `kCVPixelFormatType_420YpCbCr8BiPlanarFullRange`
- an `AVCaptureMovieFileOutput`, whose documentation states: "If the capture session's `automaticallyConfiguresCaptureDeviceForWideColor` property is true, the session selects sRGB as the video colorspace in your movie [??? sRGB is not wide color]. You can override this behavior by adding an `AVCapturePhotoOutput` to your session and configuring its photo format or photo preset [session preset?] for a photo output."
- is `AVVideoAllowWideColorKey` relevant to `AVCaptureVideoDataOutput`? When is it relevant?
- when is "Setting Color Properties for a Specific Resolution" relevant?
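To make the pixel-format side of that list concrete, here is a minimal sketch (names and setup are mine, not from a working wide-color configuration) of explicitly requesting the '420f' pixel format on an `AVCaptureVideoDataOutput`; whether and how this interacts with the device's `activeColorSpace` is exactly what the documentation leaves unclear:

```swift
import AVFoundation

// Sketch: ask the data output for full-range biplanar YUV ('420f').
let dataOutput = AVCaptureVideoDataOutput()
dataOutput.videoSettings = [
    kCVPixelBufferPixelFormatTypeKey as String:
        kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
]

// The output reports which pixel formats it actually supports.
for type in dataOutput.availableVideoPixelFormatTypes {
    print(String(format: "0x%08x", type))
}
```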
From the above, the automaticallyConfiguresCaptureDeviceForWideColor attribute appears to be strangely unreliable: it does not set the device's activeColorSpace to .P3_D65 as one would expect. Additionally, AVCaptureMovieFileOutput's relationship with it seems very strange and possibly incorrect: sRGB is not wide color, contrary to what AVCaptureMovieFileOutput's documentation suggests.
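One way to narrow down whether the session is silently overriding the value set by hand would be to watch the property itself. A sketch, assuming `device` is the configured `AVCaptureDevice` and that `activeColorSpace` is KVO-compliant (most `AVCaptureDevice` properties are):

```swift
import AVFoundation

// Sketch: observe activeColorSpace to see if/when the session resets
// the value set during lockForConfiguration.
let observation = device.observe(\.activeColorSpace,
                                 options: [.old, .new]) { _, change in
    print("activeColorSpace changed:",
          String(describing: change.oldValue), "->",
          String(describing: change.newValue))
}
// Keep `observation` alive for as long as you want to watch the device.
```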
Some example code (outputs from iPhone XS):
session.beginConfiguration()
session.sessionPreset = .inputPriority
// device: builtInWideAngleCamera for video
// format: a 1080p format supporting 240 FPS and P3_D65
try device.lockForConfiguration()
device.activeFormat = format
device.activeColorSpace = .P3_D65
device.unlockForConfiguration()
guard let videoInput = try? AVCaptureDeviceInput(device: device) else { return }
session.addInput(videoInput)
session.addOutput(output) // AVCaptureMovieFileOutput or AVCaptureVideoDataOutput where AVAssetWriterInput configured with recommendedVideoSettingsForAssetWriter(writingTo: .mov)
session.commitConfiguration()
session.startRunning()
print(device.activeColorSpace == .P3_D65) // false
print(device.activeFormat)
// <AVCaptureDeviceFormat: 0x282660350 'vide'/'420f' 1920x1080, { 1-240 fps}, fov:69.654, binned, supports vis, max zoom:67.50 (upscales @1.00), AF System:1, ISO:24.0-960.0, SS:0.000045-1.000000, supports wide color>
print(device.activeFormat.formatDescription)
// <CMVideoFormatDescription 0x282b7d5f0
// [0x1f43e9860]> {
// mediaType:'vide'
// mediaSubType:'420f'
// mediaSpecific: {
// codecType: '420f' dimensions: 1920 x 1080
// }
// extensions: {(null)}
// }
// also tried adding this to the above, per the AVCaptureMovieFileOutput documentation, with the same result
let photoOutput = AVCapturePhotoOutput()
let settings = AVCapturePhotoSettings(format: [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange])
photoOutput.setPreparedPhotoSettingsArray([settings], completionHandler: nil)
session.addOutput(photoOutput)
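For completeness, the `device` and `format` placeholders in the code above might be selected like this (a sketch; the selection criteria mirror the format printed above, but the code is mine, not taken from a verified configuration):

```swift
import AVFoundation
import CoreMedia

// Sketch: pick the back wide-angle camera, then the first 1080p format
// whose frame-rate ranges reach 240 FPS and that supports P3_D65.
guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                           for: .video,
                                           position: .back) else {
    fatalError("no wide-angle camera")
}
let format = device.formats.first { format in
    let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
    return dims.width == 1920 && dims.height == 1080
        && format.videoSupportedFrameRateRanges.contains { $0.maxFrameRate >= 240 }
        && format.supportedColorSpaces.contains(.P3_D65)
}
```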
It could be as simple as the device always recording in .P3_D65 with this format while the activeColorSpace property reports the wrong value.
Thanks for reading! I hope to talk to an engineer at WWDC21 about AVFoundation and "how to do iOS". Aside from the specific above issues, some general developer issues with the camera on iOS:
- AVFoundation is very encapsulated/black box and lacks examples
- lack of camera documentation per device since the iPhone 8 (you literally have to check with a `DiscoverySession` and then print formats)
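That "check with a DiscoverySession and then print formats" workaround looks roughly like this (a sketch; the device-type list is illustrative):

```swift
import AVFoundation

// Sketch: enumerate every camera and dump its formats, since per-device
// capability tables are not published.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInWideAngleCamera, .builtInTelephotoCamera,
                  .builtInUltraWideCamera],
    mediaType: .video,
    position: .unspecified)

for device in discovery.devices {
    print(device.localizedName)
    for format in device.formats {
        print("  \(format)") // resolution, fps ranges, wide color support, etc.
    }
}
```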
These issues can also be extended to CoreMotion, AVKit, even SwiftUI (a big thank you to the folks at hackingwithswift! I digress but it took us like a month to figure out how to do a navigation stack), etc. Not having reliable documentation and examples and not knowing the hardware constraints of an iPhone, particularly sensors, makes development difficult.