iOS 11 - Having trouble getting AVCaptureDepthDataOutput to work

I am trying to use AVCaptureDepthDataOutput but am running into some errors.


I first choose a device that supports depth (an iPhone 7 Plus):


AVCaptureDevice.default(.builtInDualCamera, for: AVMediaType.depthData, position: .back)


Then I set compatible formats:


Set format <AVCaptureDeviceFormat: 0x1c4008fc0 'vide'/'420v'  640x 480, { 3- 30 fps}, HRSI:4032x3024, fov:58.975, max zoom:189.00 (upscales @6.30), AF System:2, ISO:22.0-1408.0, SS:0.000005-0.333333, supports depth>

Set depth format <AVCaptureDeviceFormat: 0x1c0006e00 'dpth'/'fdis'  320x 240, { 3- 24 fps}, HRSI: 768x 576, fov:58.975>


Finally, I try adding the AVCaptureDepthDataOutput:


        depthOutput.setDelegate(self, callbackQueue: self.sessionQueue)
        if session.canAddOutput(depthOutput) {
            print("Adding depth output: \(depthOutput)")
            session.addOutput(depthOutput)
        }


Unfortunately, I immediately get these errors in the console:


FigDerivedFormatDescriptionGetDerivedStorage signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (!desc)
CMVideoFormatDescriptionGetDimensions signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (NULL desc)
FigDerivedFormatDescriptionGetDerivedStorage signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (!desc)
CMVideoFormatDescriptionGetDimensions signalled err=-12710 (kFigFormatDescriptionError_InvalidParameter) (NULL desc)


The session then proceeds to fail:


Capture session runtime error: AVError(_nsError: Error Domain=AVFoundationErrorDomain Code=-11819 "Cannot Complete Action" UserInfo={NSLocalizedDescription=Cannot Complete Action, NSLocalizedRecoverySuggestion=Try again later.})


Has anyone gotten AVCaptureDepthDataOutput to work?

FWIW, I'm also experiencing the same issue. There might be a missing required configuration option.

Please attend / watch Session 507: Capturing Depth in iPhone Photography. Hopefully it will answer all your questions.

It works for me and another dev. Just make sure you use a discovery session to find the device (the preferred approach since iOS 10), and select the capture format first, then select the depth format from the supported depth formats of the current capture format.
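To illustrate the ordering described above, here is a minimal sketch. The discovery session, format predicates, and the choice of "first matching format" are illustrative placeholders, not a definitive configuration; real code should pick formats by resolution/frame rate and handle errors properly.

```swift
import AVFoundation

// Find the dual camera via a discovery session (preferred since iOS 10).
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera],
    mediaType: .video,
    position: .back)

guard let device = discovery.devices.first else {
    fatalError("No dual camera available")
}

do {
    try device.lockForConfiguration()

    // 1. First select a video format that supports depth delivery.
    if let videoFormat = device.formats.first(where: {
        !$0.supportedDepthDataFormats.isEmpty
    }) {
        device.activeFormat = videoFormat

        // 2. Then select the depth format from the *current* capture
        //    format's own supportedDepthDataFormats list.
        if let depthFormat = videoFormat.supportedDepthDataFormats.first {
            device.activeDepthDataFormat = depthFormat
        }
    }

    device.unlockForConfiguration()
} catch {
    print("Could not configure device: \(error)")
}
```

Setting `activeDepthDataFormat` to a format that is not in the active video format's `supportedDepthDataFormats` is exactly the kind of mismatch that produces the -12710 format-description errors shown above.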

(by "works" I mean that it provides depth maps; whether they are actually usable for advanced tasks is another matter)

Also, the calibration info doesn't provide extrinsics, so it's a depth map without world positioning.

The AVDepthData provided with the tele buffer *does* have extrinsics. When running the dual camera, the tele is the world origin. Please view "Capturing Depth in iPhone Photography" if you have not yet.

Watching it right now; I waited for days for it to transcode. The dual photo is pretty epic for me!

Accepted Answer

OK, so I think the video clarified my doubts:

- depth accuracy is never absolute with current devices, so it's not possible to use it for apps that need absolute scale

- dual photo is feasible, but dual video is still restricted

- extrinsics for dual camera are stereo extrinsics and not world pose extrinsics

- intrinsics can be very inaccurate because the lenses move along all 3 axes


😟

Thank you, I have since gotten it to work!


However, I have not been able to get `hdep` (DepthFloat16) values for AVDepthData. Is this a current limitation of iOS or of the devices?

AVDepthData can produce a derivative version of itself by calling:


-depthDataByConvertingToDepthDataType:
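In Swift that method surfaces as `converting(toDepthDataType:)`. A minimal sketch, assuming `depthData` arrives from the depth output's delegate callback; the choice of DepthFloat32 as the target type is just for illustration:

```swift
import AVFoundation

// Return a DepthFloat32 ('fdep') version of the given AVDepthData,
// converting only when the buffer is not already in that format.
func depthFloat32(from depthData: AVDepthData) -> AVDepthData {
    if depthData.depthDataType == kCVPixelFormatType_DepthFloat32 {
        return depthData
    }
    return depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
}
```

The same call can target the other depth/disparity pixel formats (`hdis`, `fdis`, `hdep`, `fdep`), so even if the device natively delivers disparity, you can derive the depth representation you need.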
