Choosing Which Camera Feed to Augment

Augment the user's environment through either the front or back camera.


iOS devices come equipped with two cameras, and for each ARKit session you need to choose which camera's feed to augment. Since ARKit 3, ARKit can provide data from both cameras simultaneously, but you still choose one camera feed to show to the user at a time.

Augmented Reality with the Back Camera

The most common kinds of AR experience display a view from the device's back camera, augmented by other visual content, giving the user a new way to see and interact with the world around them.

ARWorldTrackingConfiguration provides this kind of experience: ARKit tracks the real-world environment the user inhabits and matches it with a coordinate space in which you place virtual content. World tracking also offers features that make AR experiences more immersive, like recognizing objects and images in the user's environment and responding to real-world lighting conditions.
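A minimal sketch of a back-camera experience follows, assuming a view controller that hosts an ARSCNView (the class and property names here are illustrative, not part of any template):

```swift
import ARKit
import SceneKit
import UIKit

class WorldTrackingViewController: UIViewController {
    // An ARSCNView renders the back-camera feed plus your SceneKit content.
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Configure world tracking; enabling plane detection asks ARKit
        // to report horizontal surfaces it recognizes in the environment.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the session while the view is offscreen to save power.
        sceneView.session.pause()
    }
}
```

Pausing in viewWillDisappear and running the configuration in viewWillAppear keeps the camera active only while the AR view is visible.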

Augmented Reality with the Front Camera

For iOS devices that have a TrueDepth camera, ARFaceTrackingConfiguration enables you to augment the front-camera feed while providing you with real-time tracking of the pose and expression of the user's face. With that information you might, for example, overlay realistic virtual masks. Or, you might omit the camera view and use facial-expression data to animate virtual characters, as Animoji does in Messages.
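The expression data arrives as ARFaceAnchor updates through the session delegate. A hedged sketch, assuming you drive the session directly rather than through an AR view (the FaceTracker class name is illustrative):

```swift
import ARKit

class FaceTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking is only supported on devices with a TrueDepth camera.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // Blend shapes describe facial features as coefficients
            // from 0 to 1 — for example, how far open the jaw is —
            // which you can map onto a virtual character's rig.
            if let jawOpen = faceAnchor.blendShapes[.jawOpen] as? Float {
                print("jawOpen: \(jawOpen)")
            }
        }
    }
}
```

Because face tracking requires specific hardware, checking ARFaceTrackingConfiguration.isSupported before running the session is essential; on unsupported devices you would fall back to another configuration or hide the feature.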

See Also


Verifying Device Support and User Permission

Check whether your app can use ARKit and respect user privacy at runtime.

Managing Session Lifecycle and Tracking Quality

Keep the user informed about the current session state and recover from interruptions.


ARSession

The main object you use to control an AR experience.


ARConfiguration

An object that defines the particular ARKit features enabled in your session at a given time.


ARAnchor

A position and orientation of something of interest in the physical environment.