Get a depth map with a photo to create effects like the system camera’s Portrait mode (on compatible devices).
On iOS devices with a back-facing dual camera or a front-facing TrueDepth camera, the capture system can record depth information. A depth map is like an image; however, instead of each pixel providing a color, it indicates distance from the camera to that part of the image (either in absolute terms, or relative to other pixels in the depth map).
You can use a depth map together with a photo to create image-processing effects that treat foreground and background elements of a photo differently, like the Portrait mode in the iOS Camera app. By saving color and depth data separately, you can even apply and change these effects long after a photo has been captured.
You can add depth capture to many of the other photography workflows covered in Capturing Still and Live Photos by adding the following steps.
Prepare for Depth Photo Capture
To capture depth maps, you’ll need to first select a builtInDualCamera or builtInTrueDepthCamera capture device as your session’s video input. Even if an iOS device has a dual camera or TrueDepth camera, selecting the default back- or front-facing camera does not enable depth capture.
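As a minimal sketch, device selection might look like the following, preferring the back dual camera and falling back to the front TrueDepth camera (the discovery order and position are choices for this example, not requirements):

```swift
import AVFoundation

// Discover a depth-capable camera; the default wide-angle camera won't do.
let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera, .builtInTrueDepthCamera],
    mediaType: .video,
    position: .unspecified)

let session = AVCaptureSession()
session.beginConfiguration()
if let device = discovery.devices.first,
   let input = try? AVCaptureDeviceInput(device: device),
   session.canAddInput(input) {
    // Use the depth-capable camera as the session's video input.
    session.addInput(input)
}
session.commitConfiguration()
```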
Capturing depth also requires an internal reconfiguration of the capture pipeline, which briefly delays capture and interrupts any in-progress captures. Before shooting your first depth photo, make sure you configure the pipeline appropriately by setting the isDepthDataDeliveryEnabled property of your AVCapturePhotoOutput object.
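For example, enabling depth delivery during session setup might look like this sketch (it assumes a session whose video input is already a depth-capable camera; depth delivery is only supported after the output is attached to such a session):

```swift
import AVFoundation

let session = AVCaptureSession()  // assumed: depth-capable input already added

let photoOutput = AVCapturePhotoOutput()
session.beginConfiguration()
if session.canAddOutput(photoOutput) {
    session.addOutput(photoOutput)
}
// Enabling depth delivery reconfigures the capture pipeline,
// so do it once during setup, before the first depth photo.
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
session.commitConfiguration()
```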
Once your photo output is ready for depth capture, you can request that any individual photo capture a depth map along with the color image. Create an AVCapturePhotoSettings object, choosing the format for the color image. Then, enable depth data delivery on the settings (along with any other options you’d like for that photo) and call the capturePhoto(with:delegate:) method.
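A per-photo request might look like the following sketch; `photoOutput` and `delegate` stand in for your configured AVCapturePhotoOutput (with depth delivery enabled) and your AVCapturePhotoCaptureDelegate object:

```swift
import AVFoundation

// Request a depth map alongside the color image for this one capture.
// HEVC is chosen here only as an example color format.
let settings = AVCapturePhotoSettings(
    format: [AVVideoCodecKey: AVVideoCodecType.hevc])
settings.isDepthDataDeliveryEnabled = true

photoOutput.capturePhoto(with: settings, delegate: delegate)
```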
If you plan to use the captured depth data immediately (for example, to display a preview of a depth-based image-processing effect), you can find it in the depthData property of the AVCapturePhoto object delivered to your capture delegate.
Otherwise, the capture output embeds depth data and depth-related metadata when you use the fileDataRepresentation() method to produce file data for saving the photo. If you add the resulting file to the Photos library, other apps (including the system Photos app) automatically recognize the depth data within and can apply depth-based image-processing effects. (If you need to disable this option, use the embedsDepthDataInPhoto property.)
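Both paths come together in the capture delegate. The sketch below shows one plausible delegate implementation: inspecting the depth map immediately, then saving the file representation (with depth embedded) to the Photos library:

```swift
import AVFoundation
import Photos

class PhotoCaptureDelegate: NSObject, AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Immediate use: the depth map accompanies the photo object.
        if let depthData = photo.depthData {
            print("Captured depth map of type \(depthData.depthDataType)")
        }
        // Saving: fileDataRepresentation() embeds depth data and metadata.
        guard let fileData = photo.fileDataRepresentation() else { return }
        PHPhotoLibrary.shared().performChanges({
            let request = PHAssetCreationRequest.forAsset()
            request.addResource(with: .photo, data: fileData, options: nil)
        }, completionHandler: nil)
    }
}
```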
About Disparity, Depth, and Accuracy
When you enable depth capture with the back-facing dual camera on compatible devices (see iOS Device Compatibility Reference), the system captures imagery using both cameras. Because the two parallel cameras are a small distance apart on the back of the device, similar features found in both images show a parallax shift: objects that are closer to the camera shift by a greater distance between the two images. The capture system uses this difference, or disparity, to infer the relative distances from the camera to objects in the image, as shown below.
Each point in a depth map captured by a dual camera device measures disparity, in units of 1/meters, and offers relative accuracy (AVDepthData.Accuracy.relative). That is, an individual point isn’t a good estimate of real-world distance, but the variation between points is consistent enough to use for depth-based image processing effects.
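Because disparity is expressed in 1/meters, depth and disparity are reciprocals of one another; a small illustration (the sample value is hypothetical):

```swift
// depth (meters) = 1 / disparity (1/meters)
let disparity: Float = 0.5        // hypothetical sample from a disparity map
let depthInMeters = 1.0 / disparity  // nominally 2 meters, if accuracy were absolute
```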
The TrueDepth camera projects an infrared light pattern in front of the camera and images that pattern with an infrared camera. By observing how the pattern is distorted by objects in the scene, the capture system can calculate the distance from the camera to each point in the image.
The TrueDepth camera produces disparity maps by default so that the resulting depth data is similar to that produced by a dual camera device. However, unlike a dual camera device, the TrueDepth camera can directly measure depth (in meters) with absolute accuracy (AVDepthData.Accuracy.absolute). To capture depth instead of disparity, set the activeDepthDataFormat of the capture device before starting your capture session:
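A sketch of that configuration follows; it assumes `device` is the TrueDepth capture device already selected as the session’s video input, and picks a 16-bit depth format as an example:

```swift
import AVFoundation

// Find a depth (not disparity) format supported by the active video format.
let depthFormats = device.activeFormat.supportedDepthDataFormats
let depthFormat = depthFormats.first { format in
    CMFormatDescriptionGetMediaSubType(format.formatDescription)
        == kCVPixelFormatType_DepthFloat16
}

do {
    // The device must be locked before changing its configuration.
    try device.lockForConfiguration()
    device.activeDepthDataFormat = depthFormat
    device.unlockForConfiguration()
} catch {
    print("Could not lock device for configuration: \(error)")
}
```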