More to Explore with ARKit 5

ARKit 5 brings Location Anchors to London and more cities across the United States, allowing you to create AR experiences for specific places, like the London Eye, Times Square, and even your own neighborhood. ARKit 5 also features improvements to Motion Tracking and support for Face Tracking in the Ultra Wide camera on iPad Pro (5th generation). And with a new App Clip Code anchor, you can pin virtual content from your App Clip or ARKit app to a printed or digital App Clip Code.
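
As a minimal sketch of how an app might opt into App Clip Code tracking, assuming a supported device; the class name and the print statement are illustrative only:

import UIKit
import ARKit

final class AppClipCodeViewController: UIViewController, ARSessionDelegate {
    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        session.delegate = self

        // App Clip Code tracking is only available on supported devices.
        guard ARWorldTrackingConfiguration.supportsAppClipCodeTracking else { return }

        let configuration = ARWorldTrackingConfiguration()
        configuration.appClipCodeTrackingEnabled = true
        session.run(configuration)
    }

    // ARKit adds an ARAppClipCodeAnchor when it recognizes an App Clip Code.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let codeAnchor as ARAppClipCodeAnchor in anchors
        where codeAnchor.urlDecodingState == .decoded {
            // The decoded URL identifies which virtual content to pin at the anchor.
            print("App Clip Code URL:", codeAnchor.url?.absoluteString ?? "unknown")
        }
    }
}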

Expanded Face Tracking support

Support for Face Tracking extends to the front-facing camera on any device with the A12 Bionic chip or later, including iPhone SE, so even more users can delight in AR experiences. Face Tracking is also supported by the Ultra Wide camera on iPad Pro (5th generation). Track up to three faces at once using the TrueDepth camera to power front-facing camera experiences, such as Memoji and Snapchat.
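
A minimal sketch of building such a configuration, capping the face count at what the current device supports:

import ARKit

func makeFaceTrackingConfiguration() -> ARFaceTrackingConfiguration? {
    // Face Tracking requires a device with the A12 Bionic chip or later.
    guard ARFaceTrackingConfiguration.isSupported else { return nil }

    let configuration = ARFaceTrackingConfiguration()
    // Track up to three faces at once, limited by what the device supports.
    configuration.maximumNumberOfTrackedFaces =
        min(3, ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces)
    return configuration
}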

Location Anchors

Place AR experiences at specific locations, such as cities and famous landmarks. Location Anchors let you anchor your AR creations at a specific latitude, longitude, and altitude. Users can move around a virtual object to see it from different perspectives, exactly as real objects are seen through a camera lens.

Requires iPhone XS, iPhone XS Max, iPhone XR, or later. Available in select cities.
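
A minimal sketch of starting a geo-tracking session and anchoring content at a coordinate; the latitude, longitude, and altitude below are placeholder values:

import ARKit
import CoreLocation

func startGeoTracking(in session: ARSession) {
    // Geo tracking needs both a supported device and a supported location.
    guard ARGeoTrackingConfiguration.isSupported else { return }

    ARGeoTrackingConfiguration.checkAvailability { available, error in
        guard available, error == nil else { return }

        session.run(ARGeoTrackingConfiguration())

        // Placeholder coordinates; anchor content at a latitude, longitude, and altitude.
        let coordinate = CLLocationCoordinate2D(latitude: 51.5033, longitude: -0.1196)
        let anchor = ARGeoAnchor(coordinate: coordinate, altitude: 16)
        session.add(anchor: anchor)
    }
}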

Discover more ARKit features

Depth API

The advanced scene understanding capabilities built into the LiDAR Scanner allow this API to use per-pixel depth information about the surrounding environment. When combined with the 3D mesh data generated by Scene Geometry, this depth information makes virtual object occlusion even more realistic and enables instant placement of virtual objects that blend seamlessly with their physical surroundings. This can drive new capabilities within your apps, like taking more precise measurements and applying effects to a user’s environment.
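
A minimal sketch of opting into scene depth and reading it each frame, assuming a LiDAR-equipped device:

import ARKit

final class DepthReader: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene depth requires a device with the LiDAR Scanner.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }

        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(configuration)
    }

    // Each frame then carries a per-pixel depth map and confidence map of the environment.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = depth.depthMap           // distances in meters
        let confidence: CVPixelBuffer? = depth.confidenceMap   // per-pixel confidence
        _ = (depthMap, confidence)
    }
}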

Instant AR

The LiDAR Scanner enables incredibly quick plane detection, allowing for the instant placement of AR objects in the real world without scanning. Instant AR placement is automatically enabled on iPhone 12 Pro, iPhone 12 Pro Max, and iPad Pro for all apps built with ARKit, without any code changes.
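
No code changes are needed to benefit, but a typical placement flow raycasts against an estimated plane, which LiDAR devices return almost immediately. A minimal sketch, assuming an ARSCNView and a tap point supplied by your app:

import ARKit
import SceneKit

// Called from a tap gesture handler; sceneView is an ARSCNView already running a session.
func placeObject(at screenPoint: CGPoint, in sceneView: ARSCNView) {
    guard let query = sceneView.raycastQuery(from: screenPoint,
                                             allowing: .estimatedPlane,
                                             alignment: .horizontal),
          let result = sceneView.session.raycast(query).first else { return }

    // On LiDAR devices the estimated plane is available right away,
    // so the object can be anchored without a scanning step.
    let anchor = ARAnchor(name: "placedObject", transform: result.worldTransform)
    sceneView.session.add(anchor: anchor)
}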

Motion Capture

Capture the motion of a person in real time with a single camera. By understanding body position and movement as a series of joints and bones, you can use motion and poses as an input to the AR experience — placing people at the center of AR. Height estimation improves on iPhone 12, iPhone 12 Pro, and iPad Pro in all apps built with ARKit, without any code changes.
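
A minimal sketch of running body tracking and reading a joint transform from the resulting skeleton:

import ARKit

final class BodyTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Motion Capture requires a device with the A12 Bionic chip or later.
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARBodyTrackingConfiguration())
    }

    // ARKit reports the tracked person as a body anchor with a skeleton of joints and bones.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            if let headTransform = bodyAnchor.skeleton.modelTransform(for: .head) {
                // Joint transforms are relative to the body anchor's root transform.
                _ = headTransform
            }
        }
    }
}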

Simultaneous front and back camera

You can simultaneously use face and world tracking on the front and back cameras, opening up new possibilities. For example, users can interact with AR content in the back camera view using just their face.
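
A minimal sketch of enabling user face tracking alongside world tracking, assuming a device that supports both:

import ARKit

func makeCombinedTrackingConfiguration() -> ARWorldTrackingConfiguration? {
    // Simultaneous face and world tracking requires a TrueDepth camera and a supported chip.
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else { return nil }

    let configuration = ARWorldTrackingConfiguration()
    // The session then delivers ARFaceAnchor updates from the front camera
    // while the back camera drives the world-tracking view.
    configuration.userFaceTrackingEnabled = true
    return configuration
}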

Scene Geometry

Create a topological map of your space with labels identifying floors, walls, ceilings, windows, doors, and seats. This deep understanding of the real world unlocks object occlusion and real-world physics for virtual objects, and also gives you more information to power your AR workflows.
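
A minimal sketch of turning on classified scene reconstruction and receiving the resulting mesh anchors:

import ARKit

final class SceneMeshController: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene reconstruction requires a device with the LiDAR Scanner.
        guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification)
        else { return }

        let configuration = ARWorldTrackingConfiguration()
        configuration.sceneReconstruction = .meshWithClassification
        session.delegate = self
        session.run(configuration)
    }

    // ARKit delivers the reconstructed space as ARMeshAnchor instances,
    // each carrying geometry plus per-face classifications (floor, wall, seat, and so on).
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let meshAnchor as ARMeshAnchor in anchors {
            print("Mesh anchor added with \(meshAnchor.geometry.faces.count) faces")
        }
    }
}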

People Occlusion

AR content realistically passes behind and in front of people in the real world, making AR experiences more immersive while also enabling green screen-style effects in almost any environment. Depth estimation improves on iPhone 12, iPhone 12 Pro, and iPad Pro in all apps built with ARKit, without any code changes.
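
A minimal sketch of enabling People Occlusion on a world-tracking session:

import ARKit

func enablePeopleOcclusion(on session: ARSession) {
    // People Occlusion requires a device with the A12 Bionic chip or later.
    guard ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)
    else { return }

    let configuration = ARWorldTrackingConfiguration()
    // Virtual content is then composited behind people detected in the camera feed.
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
    session.run(configuration)
}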

Additional improvements

Detect up to 100 images at a time and get an automatic estimate of the physical size of the object in the image. 3D object detection is more robust, as objects are better recognized in complex environments. And now, machine learning is used to detect planes in the environment even faster.
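
A minimal sketch of configuring image detection with automatic scale estimation; the "AR Resources" group name is a placeholder for your own set of reference images:

import ARKit

func makeImageDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // Load reference images from an asset-catalog group (placeholder name).
    if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                              bundle: nil) {
        configuration.detectionImages = referenceImages
        // Let ARKit estimate each detected image's physical size automatically.
        configuration.automaticImageScaleEstimationEnabled = true
    }
    return configuration
}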

The Depth API is specific to devices equipped with the LiDAR Scanner (iPad Pro 11-inch (2nd generation), iPad Pro 12.9-inch (4th generation), iPhone 12 Pro, iPhone 12 Pro Max).