Getting depth data from custom camera

I've followed the Capturing Photos with Depth sample and gone through similar issues on the forum; however, I'm still not able to get any depth data from my custom camera. Here's my latest version of the code. Do you have any suggestions?

When I tap the camera button, I get:

libc++abi.dylib: terminating with uncaught exception of type NSException

I've reviewed the solutions for that too. They're mostly related to the segue, but I double-checked that part of the code and the storyboard, and it seems fine. (I didn't have any issues before adding depth to the code!)


import UIKit
import AVFoundation

class CameraViewController: UIViewController {
  @IBOutlet weak var cameraButton: UIButton!

  var captureSession = AVCaptureSession()
  var captureDevice: AVCaptureDevice?
  var photoOutput: AVCapturePhotoOutput?
  var cameraPreviewLayer: AVCaptureVideoPreviewLayer?

  var image: UIImage?

  var depthDataMap: CVPixelBuffer?
  var depthData: AVDepthData?

  override func viewDidLoad() {
    super.viewDidLoad()

    setupDevice()
    setupIO()
    setupPreviewLayer()
    startRunningCaptureSession()
  }

  func setupDevice() {
    self.captureDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back)
  }

  func setupIO() {
    guard let captureInputDevice = try? AVCaptureDeviceInput(device: self.captureDevice!),
          self.captureSession.canAddInput(captureInputDevice)
    else { fatalError("Can't add video input.") }
    self.captureSession.beginConfiguration()
    self.captureSession.addInput(captureInputDevice)

    self.photoOutput = AVCapturePhotoOutput()
    self.photoOutput!.isDepthDataDeliveryEnabled = photoOutput!.isDepthDataDeliverySupported
    guard self.captureSession.canAddOutput(photoOutput!)
    else { fatalError("Can't add photo output.") }
    self.captureSession.addOutput(photoOutput!)
    self.captureSession.sessionPreset = .photo
    self.captureSession.commitConfiguration()
  }

  func setupPreviewLayer() {
    self.cameraPreviewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
    self.cameraPreviewLayer?.videoGravity = .resizeAspectFill
    self.cameraPreviewLayer?.connection?.videoOrientation = .portrait
    self.cameraPreviewLayer?.frame = self.view.frame
    self.view.layer.insertSublayer(self.cameraPreviewLayer!, at: 0)
  }

  func startRunningCaptureSession() {
    self.captureSession.startRunning()
  }

  @IBAction func cameraButtonDidTap(_ sender: Any) {
    let setting = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    setting.isDepthDataDeliveryEnabled = self.photoOutput!.isDepthDataDeliverySupported
    self.photoOutput?.capturePhoto(with: setting, delegate: self)
  }

  override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    if segue.identifier == "showPhoto" {
      let nav = segue.destination as! UINavigationController
      let previewVC = nav.topViewController as! PhotoViewController

      previewVC.image = self.image
      previewVC.depthData = self.depthData
      previewVC.depthDataMap = self.depthDataMap
    }
  }
}

extension CameraViewController: AVCapturePhotoCaptureDelegate {
  func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let imageData = photo.fileDataRepresentation() {
      image = UIImage(data: imageData)
      let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
      let auxiliaryData = CGImageSourceCopyAuxiliaryDataInfoAtIndex(imageSource!, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any]

      self.depthData = try? AVDepthData(fromDictionaryRepresentation: auxiliaryData!)
      self.depthDataMap = self.depthData?.depthDataMap

      self.performSegue(withIdentifier: "showPhoto", sender: self)
    }
  }
}
Answered by Media Engineer in 333313022

You're doing things out of order. depthDataDelivery won't be supported unless the photo output is added to a session and the input to the session is properly configured to deliver depth.


1. Set your session preset first:

self.captureSession.sessionPreset = .photo

2. After adding your dual camera input, add your photo output:

guard self.captureSession.canAddOutput(photoOutput!)

3. Now set depth delivery enabled:

self.photoOutput!.isDepthDataDeliveryEnabled = photoOutput!.isDepthDataDeliverySupported


Thank you for your help.

Now I'm trying to get the depth data, but it only returns 2D photos without any depth. How can I take advantage of the iPhone X Portrait mode feature in a custom camera?
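For anyone debugging the same symptom: one way to check whether depth actually arrived is to convert the depth map to a viewable image inside the capture delegate. This is a sketch assuming the `depthData` property from the code above is populated after a capture:

```swift
// Convert the captured depth data to 32-bit disparity and wrap it in a
// CIImage so it can be displayed or inspected (e.g. in a UIImageView).
if let depthData = self.depthData {
  let disparity = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)
  let ciImage = CIImage(cvPixelBuffer: disparity.depthDataMap)
  let depthImage = UIImage(ciImage: ciImage)
  // If depthImage has content, depth delivery is working. If the capture
  // succeeds but depthData is nil, check that isDepthDataDeliveryEnabled
  // is set on both the photo output and the AVCapturePhotoSettings.
}
```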
