The depth map, if any, captured along with the video frame.
- iOS 11.0+
Face-based AR (see ARFaceTrackingConfiguration) uses the front-facing, depth-sensing camera on compatible devices. When running such a configuration, frames vended by the session contain a depth map captured by the depth camera in addition to the color pixel buffer (see capturedImage) captured by the color camera. This property's value is always nil when running other AR configurations.
The depth-sensing camera provides data at a different frame rate than the color camera, so this property's value can also be nil if no depth data was captured at the same time as the current color image.
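Because the value can be nil both under non-face configurations and between depth-camera frames, callers should unwrap it on every frame. A minimal sketch using an ARSessionDelegate (the `DepthReceiver` class name is illustrative, not from the original text):

```swift
import ARKit

class DepthReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // capturedDepthData is nil under non-face configurations, and can be
        // nil even during face tracking because the depth camera runs at a
        // lower frame rate than the color camera.
        guard let depthData = frame.capturedDepthData else { return }

        // The depth map itself is a CVPixelBuffer inside the AVDepthData.
        let depthMap: CVPixelBuffer = depthData.depthDataMap
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        print("Depth map: \(width) x \(height)")
    }
}
```

Guarding with `guard let` rather than force-unwrapping is essential here, since a nil value is an expected, routine condition rather than an error.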