An auxiliary image used to separate foreground from background with high resolution.


class AVPortraitEffectsMatte : NSObject


Before iOS 11, the iPhone camera software used depth maps to render a shallow depth of field (the bokeh effect) into still images taken in Portrait mode, then discarded the maps. Because the effect was baked into the photo, you couldn't access the maps separately as metadata for photos taken by devices running iOS 10 or earlier.

Starting in iOS 11, apps accessing the photo library can use images containing embedded auxiliary depth maps to render creative depth effects, such as forced perspective or projecting an image from 2D into 3D space. These depth maps are low-resolution compared to the full-resolution RGB image, so the depth effects you can render are limited by the resolution and accuracy of the maps. Fine detail, such as hair, is challenging to preserve faithfully at the resolution of these depth maps.

Starting in iOS 12, the portrait effects matte helps you preserve this fine-grained level of detail.

(Figure: A zoomed-in photo showing the fine detail in a portrait effects matte image.)

Camera type      | RGB image resolution | Depth map resolution | Portrait effects matte resolution
-----------------|----------------------|----------------------|----------------------------------
Rear dual camera | 4032 x 3024          | 768 x 576            | 2016 x 1512
TrueDepth camera | 3088 x 2320          | 640 x 480            | 1544 x 1160

Using the auxiliary matte image, you can improve the quality of rendered portrait effects, such as Natural Light, Studio Light, Contour Light, Stage Light, and Stage Light Mono.

Unlike the depth map, the portrait effects matte isn’t intended to faithfully preserve all gradations of depth in the scene. It’s a depth-guided, people-focused segmentation mask generated from a proprietary Apple neural network trained to detect people. It separates an individual in the foreground from whatever is in the background, with greater detail and clarity than with the depth map alone. It achieves this clarity in part because the matte image has higher resolution than the depth map.

For information about capturing the portrait effects matte, see Configuring Camera Capture to Collect a Portrait Effects Matte. To learn how to extract a portrait effects matte from photos previously captured in portrait mode on a device running iOS 12, see Extracting Portrait Effects Matte Image Data from a Photo.
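As a minimal sketch of the extraction path (the function name and `photoURL` parameter are placeholders, not part of this API), you can read the matte's auxiliary data dictionary with ImageIO and wrap it in this class:

```swift
import AVFoundation
import ImageIO

// Hypothetical helper: read a portrait effects matte embedded in an image file.
func portraitEffectsMatte(from photoURL: URL) -> AVPortraitEffectsMatte? {
    guard let source = CGImageSourceCreateWithURL(photoURL as CFURL, nil),
          // Ask for the portrait effects matte auxiliary data at image index 0.
          let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
              source, 0, kCGImageAuxiliaryDataTypePortraitEffectsMatte
          ) as? [AnyHashable: Any]
    else { return nil }
    // Wrap the dictionary; the initializer throws if the data is malformed.
    return try? AVPortraitEffectsMatte(fromDictionaryRepresentation: info)
    // If the photo has a non-default EXIF orientation, follow up with
    // applyingExifOrientation(_:) to align the matte with the displayed image.
}
```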


Creating a Portrait Effects Matte

Configuring Camera Capture to Collect a Portrait Effects Matte

Prepare your app to capture a portrait effects matte when taking photos.
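In outline, enabling matte delivery on an `AVCapturePhotoOutput` looks like the following sketch (session configuration, delegate implementation, and error handling are omitted); note that portrait effects matte delivery requires depth data delivery to be enabled first:

```swift
import AVFoundation

let photoOutput = AVCapturePhotoOutput()
// ... add photoOutput to a configured AVCaptureSession first ...

// Matte delivery is only supported when depth data delivery is enabled.
if photoOutput.isPortraitEffectsMatteDeliverySupported {
    photoOutput.isDepthDataDeliveryEnabled = true
    photoOutput.isPortraitEffectsMatteDeliveryEnabled = true
}

// Request the matte on a per-photo basis as well.
let settings = AVCapturePhotoSettings()
settings.isPortraitEffectsMatteDeliveryEnabled =
    photoOutput.isPortraitEffectsMatteDeliveryEnabled
// photoOutput.capturePhoto(with: settings, delegate: self)
// In the delegate callback, read the matte from AVCapturePhoto.portraitEffectsMatte.
```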

init(fromDictionaryRepresentation: [AnyHashable : Any])

Initializes a portrait effects matte instance from auxiliary image information in an image file.

func applyingExifOrientation(CGImagePropertyOrientation) -> Self

Returns a derivative portrait effects matte after applying the specified EXIF orientation.

func replacingPortraitEffectsMatte(with: CVPixelBuffer) -> Self

Returns a portrait effects matte by wrapping the replacement pixel buffer.
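This method is useful after processing the matte yourself; a minimal sketch, assuming `matte` is an existing matte and `filteredBuffer` is a pixel buffer you produced (for example, after smoothing) in the matte's pixel format and dimensions:

```swift
import AVFoundation

// Hypothetical helper: swap in a processed pixel buffer while keeping the
// matte's other attributes intact.
func updatedMatte(from matte: AVPortraitEffectsMatte,
                  filteredBuffer: CVPixelBuffer) -> AVPortraitEffectsMatte {
    return matte.replacingPortraitEffectsMatte(with: filteredBuffer)
}
```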

Examining a Portrait Effects Matte

Extracting Portrait Effects Matte Image Data from a Photo

Check for portrait effects matte metadata in existing images.

var mattingImage: CVPixelBuffer

The portrait effects matte's internal image, formatted as a pixel buffer.

var pixelFormatType: OSType

The pixel format type of this portrait effects matte's internal image.

func dictionaryRepresentation(forAuxiliaryDataType: AutoreleasingUnsafeMutablePointer<NSString?>?) -> [AnyHashable : Any]?

A dictionary of primitive map information used for writing an image file with a portrait effects matte.
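As a sketch of the write path (the helper name and the JPEG output choice are assumptions, not part of this API), you can pair this dictionary with ImageIO's `CGImageDestinationAddAuxiliaryDataInfo` to embed the matte in an output file:

```swift
import AVFoundation
import ImageIO
import MobileCoreServices

// Hypothetical helper: write an image file with an embedded matte.
func write(matte: AVPortraitEffectsMatte, image: CGImage, to url: URL) {
    var auxType: NSString?
    // Produces the dictionary and reports the auxiliary data type to use.
    guard let info = matte.dictionaryRepresentation(forAuxiliaryDataType: &auxType),
          let auxDataType = auxType,
          let dest = CGImageDestinationCreateWithURL(url as CFURL, kUTTypeJPEG, 1, nil)
    else { return }
    CGImageDestinationAddImage(dest, image, nil)
    CGImageDestinationAddAuxiliaryDataInfo(dest, auxDataType as CFString,
                                           info as CFDictionary)
    CGImageDestinationFinalize(dest)
}
```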


Inherits From

Conforms To

See Also

Depth Data Capture

Capturing Photos with Depth

Get a depth map with a photo to create effects like the system camera’s Portrait mode (on compatible devices).

AVCamFilter: Applying Filters to a Capture Stream

Render a capture stream with rose-colored filtering and depth effects.

Streaming Depth Data from the TrueDepth Camera

Visualize depth data in 2D and 3D from the TrueDepth camera.

Enhancing Live Video by Leveraging TrueDepth Camera Data

Apply your own background to a live capture feed streamed from the front-facing TrueDepth camera.

class AVCaptureDepthDataOutput

A capture output that records scene depth information on compatible camera devices.

class AVDepthData

A container for per-pixel distance or disparity information captured by compatible camera devices.

class AVSemanticSegmentationMatte

An object that wraps a matting image for a particular semantic segmentation.