A container for per-pixel distance or disparity information captured by compatible camera devices.
- iOS 11.0+
- macOS 10.13+
- tvOS 11.0+
"Depth Data" is a generic term for a map of per-pixel data containing depth-related information. A depth data object wraps a disparity or depth map and provides conversion methods, focus information, and camera calibration data to aid in using the map for rendering or computer vision tasks.
A depth map describes, at each pixel, the distance to an object, in meters.
A disparity map describes normalized shift values for use in comparing two images. The value for each pixel in the map is in units of 1/meters: (pixelShift / (pixelFocalLength * baselineInMeters)).
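Because disparity is expressed in units of 1/meters, depth and disparity are reciprocals of one another. A minimal sketch of that relationship in pure Swift (the helper names are illustrative, not AVFoundation API; `AVDepthData` performs whole-map conversions for you via `converting(toDepthDataType:)`):

```swift
import Foundation

// Disparity (1/meters) and depth (meters) are reciprocals.
// These free functions are illustrative only; in practice you convert an
// entire AVDepthData map with converting(toDepthDataType:).
func depthInMeters(fromDisparity disparity: Float) -> Float {
    precondition(disparity > 0, "disparity must be positive")
    return 1.0 / disparity
}

func disparity(fromDepthInMeters depth: Float) -> Float {
    precondition(depth > 0, "depth must be positive")
    return 1.0 / depth
}

// An object 2 meters away has a disparity of 0.5 (1/meters).
let depth = depthInMeters(fromDisparity: 0.5)   // 2.0
```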
The capture pipeline generates disparity or depth maps from camera images containing non-rectilinear data. Camera lenses have small imperfections that cause small distortions in their resultant images compared to an ideal pinhole camera model, so AVDepthData maps contain non-rectilinear (non-distortion-corrected) data as well. Their values are warped to match the lens distortion characteristics present in the YUV image pixel buffers captured at the same time.
Because a depth data map is non-rectilinear, you can use an AVDepthData map as a proxy for depth when rendering effects to its accompanying image, but not to correlate points in 3D space. To use depth data for computer vision tasks, use the data in the cameraCalibrationData property to rectify the depth data.
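Once the depth data is rectified, the camera intrinsics are enough to recover a 3D position for any pixel. A minimal pinhole-model sketch in pure Swift (the `Intrinsics` struct and its numeric values are illustrative; real values come from AVCameraCalibrationData's intrinsicMatrix):

```swift
import Foundation

// Unproject a pixel with a rectified depth value into camera space using a
// pinhole model. fx and fy are focal lengths in pixels; (cx, cy) is the
// principal point. In AVFoundation these come from
// AVCameraCalibrationData.intrinsicMatrix; the values below are made up.
struct Intrinsics {
    let fx: Float, fy: Float, cx: Float, cy: Float
}

func unproject(x: Float, y: Float, depthMeters z: Float,
               intrinsics k: Intrinsics) -> (x: Float, y: Float, z: Float) {
    // Invert the pinhole projection: u = fx * X/Z + cx, v = fy * Y/Z + cy.
    let X = (x - k.cx) / k.fx * z
    let Y = (y - k.cy) / k.fy * z
    return (X, Y, z)
}

// Hypothetical intrinsics for a 1280x720 buffer.
let k = Intrinsics(fx: 1000, fy: 1000, cx: 640, cy: 360)
let point = unproject(x: 740, y: 460, depthMeters: 2.0, intrinsics: k)
// point is 0.2 m right of and 0.2 m below the optical axis, 2 m away.
```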
There are two ways to capture depth data:
- Capture photos with depth using AVCapturePhotoOutput; the resulting AVCapturePhoto objects carry depth data in their depthData property.
- Stream depth data alongside video using AVCaptureDepthDataOutput.
You can also create AVDepthData objects using information obtained from image files with the Image I/O framework.
When editing images containing depth information, use the methods listed in Transforming and Processing to generate derivative AVDepthData objects reflecting the edits that have been performed.