Information about the pose, topology, and expression of a face detected in a face-tracking AR session.
When you run a face-tracking AR session (see ARFaceTrackingConfiguration), the session automatically adds to its list of anchors an ARFaceAnchor object when it detects the user’s face with the front-facing camera. Each face anchor provides information about the face’s current position and orientation, topology, and facial expression.
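The session setup described above can be sketched as follows. This is a minimal illustration, assuming a view controller that owns a hypothetical `sceneView` outlet of type ARSCNView; the delegate callback and configuration types are the real ARKit APIs.

```swift
import ARKit

class FaceTrackingViewController: UIViewController, ARSessionDelegate {
    // `sceneView` is a hypothetical outlet; assume it is configured in a storyboard.
    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Face tracking requires a device with a TrueDepth front-facing camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        sceneView.session.delegate = self
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    // The session automatically adds an ARFaceAnchor when it detects the user's face.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let faceAnchor = anchor as? ARFaceAnchor else { continue }
            // The fourth column of the transform holds the face's world position.
            print("Detected face at \(faceAnchor.transform.columns.3)")
        }
    }
}
```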

Tracking Face Position and Orientation

The inherited transform property describes the face’s current position and orientation in world coordinates; that is, in a coordinate space relative to that specified by the worldAlignment property of the session configuration. Use this transform matrix to position virtual content you want to “attach” to the face in your AR scene.

This transform matrix defines a face coordinate system for positioning other elements relative to the face. Units of the face coordinate space are meters, with the origin centered behind the face as indicated in the figure below.

Figure 1: Origin of the face coordinate system

The coordinate system is right-handed—the positive x direction points to the viewer’s right (that is, the face’s own left), the positive y direction points up (relative to the face itself, not to the world), and the positive z direction points outward from the face (toward the viewer).
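For example, you can position virtual content in face-local coordinates and use the anchor's transform to place it in the world. The sketch below, using SceneKit, puts a small sphere 10 cm in front of the face (positive z points outward toward the viewer); the function name and sphere dimensions are illustrative, not part of the API.

```swift
import ARKit
import SceneKit

// Sketch: attach a 5 mm sphere 10 cm in front of the detected face.
func attachMarker(to faceAnchor: ARFaceAnchor, in scene: SCNScene) {
    let sphere = SCNNode(geometry: SCNSphere(radius: 0.005))
    // Face coordinate space is in meters; +z points outward from the face.
    sphere.simdPosition = SIMD3<Float>(0, 0, 0.1)

    let faceNode = SCNNode()
    // The anchor's transform maps face coordinates into world coordinates.
    faceNode.simdTransform = faceAnchor.transform
    faceNode.addChildNode(sphere)
    scene.rootNode.addChildNode(faceNode)
}
```

In a SceneKit-based app you would typically update `faceNode.simdTransform` in the renderer's `didUpdate` delegate callback so the content follows the face as it moves.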

Using Face Topology

The geometry property provides an ARFaceGeometry object representing the detailed topology of the face. This geometry conforms (fits) a generic face model to match the dimensions, shape, and current expression of the detected face.

You can use this model as the basis for overlaying content that follows the shape of the user’s face—for example, to apply virtual makeup or tattoos. You can also use this model to create occlusion geometry—a 3D model that doesn't render any visible content (allowing the camera image to show through), but that obstructs the camera's view of other virtual content in the scene.
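The occlusion-geometry technique described above can be sketched with ARSCNFaceGeometry. This assumes a SceneKit-based app; the function name is illustrative, while the material and rendering-order settings are a common pattern for invisible occluders.

```swift
import ARKit
import SceneKit

// Sketch: build occlusion geometry from the face mesh. The mesh draws no
// visible color (the camera image shows through), but still writes depth,
// so it hides virtual content positioned behind the user's face.
func makeOcclusionNode(for device: MTLDevice) -> SCNNode? {
    guard let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
    faceGeometry.firstMaterial?.colorBufferWriteMask = []  // render nothing visible...
    let node = SCNNode(geometry: faceGeometry)
    node.renderingOrder = -1                               // ...but occlude later content
    return node
}

// Keep the mesh in sync with the detected face from an ARSCNViewDelegate:
// func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
//     guard let faceAnchor = anchor as? ARFaceAnchor,
//           let faceGeometry = node.geometry as? ARSCNFaceGeometry else { return }
//     faceGeometry.update(from: faceAnchor.geometry)
// }
```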

Tracking Facial Expressions

The blendShapes property provides a high-level model of the current facial expression, described as a series of named coefficients that represent the movement of specific facial features relative to their neutral configurations. You can use blend shape coefficients to animate 2D or 3D content, such as a character or avatar, in ways that follow the user's facial expressions.
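A minimal sketch of reading blend shape coefficients: each value is an NSNumber in the range 0.0 (neutral) to 1.0 (maximum movement). The `AvatarHead` type here is a hypothetical stand-in for your own character model; the `eyeBlinkLeft` and `eyeBlinkRight` keys are real ARFaceAnchor.BlendShapeLocation identifiers.

```swift
import ARKit

// Hypothetical avatar type standing in for your own character model.
struct AvatarHead {
    var leftEyeClosedAmount: Float = 0
    var rightEyeClosedAmount: Float = 0
}

// Sketch: drive the avatar's eyelids from the detected expression.
func updateAvatar(_ avatar: inout AvatarHead, from faceAnchor: ARFaceAnchor) {
    let blendShapes = faceAnchor.blendShapes
    // Each coefficient runs from 0.0 (eye open) to 1.0 (eye fully closed).
    if let blinkLeft = blendShapes[.eyeBlinkLeft]?.floatValue {
        avatar.leftEyeClosedAmount = blinkLeft
    }
    if let blinkRight = blendShapes[.eyeBlinkRight]?.floatValue {
        avatar.rightEyeClosedAmount = blinkRight
    }
}
```

You would typically call a function like this from the session's `didUpdate` anchor callback, once per frame in which the face anchor changes.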


Using Face Geometry

var geometry: ARFaceGeometry

A coarse triangle mesh representing the topology of the detected face.

class ARFaceGeometry

A 3D mesh describing face topology used in face-tracking AR sessions.

class ARSCNFaceGeometry

A SceneKit representation of face topology for use with face information provided by an AR session.

Using Blend Shapes

var blendShapes: [ARFaceAnchor.BlendShapeLocation : NSNumber]

A dictionary of named coefficients representing the detected facial expression in terms of the movement of specific facial features.

struct ARFaceAnchor.BlendShapeLocation

Identifiers for specific facial features, for use with coefficients describing the relative movements of those features.


Inherits From

ARAnchor

Conforms To

ARTrackable

See Also

Face-Based AR Experiences

Creating Face-Based AR Experiences

Place and animate 3D content that follows the user’s face and matches facial expressions, using the TrueDepth camera on iPhone X.

class ARFaceTrackingConfiguration

A configuration that tracks the movement and expressions of the user’s face using the TrueDepth camera.