A set of images for ARKit to attempt to detect in the user's environment.
- iOS 11.3+
Use this property to choose known 2D images for ARKit to find in the user's environment and present as ARImageAnchor objects for use in your AR experience. Use an Xcode asset catalog to provide reference images for ARKit to detect, or define them programmatically with the ARReferenceImage class.
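As a minimal sketch of the asset-catalog approach, a configuration might load its reference images like this (the group name "AR Resources" is an assumption; substitute the name defined in your own asset catalog):

```swift
import ARKit

// Load a reference-image group from the app's asset catalog and attach
// it to a world-tracking configuration. The group name "AR Resources"
// is a placeholder for whatever your catalog actually defines.
func makeDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) {
        configuration.detectionImages = referenceImages
    }
    return configuration
}
```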
ARKit has two ways to recognize images from this set during a session:
With image detection, ARKit reports when it first detects an image in view of the camera, and provides infrequent updates to the image anchor's transform thereafter. Image detection doesn't continuously track real-world movement of the image or track when the image disappears from view. Image detection works best for cases where AR content responds to static images in the scene; for example, identifying art in a museum or adding animated elements to a movie poster.
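One common way to respond to a detection is through the session delegate. The sketch below (a hypothetical handler class, not part of the ARKit API) reads the ARImageAnchor that ARKit adds when it first recognizes a detection image:

```swift
import ARKit

// Respond to image detection via ARSessionDelegate. ARKit calls
// session(_:didAdd:) with an ARImageAnchor when it first recognizes
// one of the configuration's detection images; the anchor's transform
// gives the image's position and orientation in world space.
class DetectionHandler: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let imageAnchor as ARImageAnchor in anchors {
            let name = imageAnchor.referenceImage.name ?? "unnamed"
            print("Detected image \(name) at \(imageAnchor.transform)")
        }
    }
}
```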
In iOS 12.0 or later, set the maximumNumberOfTrackedImages property to enable image tracking. Image tracking provides continuous updates for detected images that move relative to the world, and can accurately track when images disappear from view (or reappear afterward). Image tracking is well suited to cases where AR content responds to images on moving objects; for example, adding interactive characters to a tabletop card or board game.
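A brief sketch of opting in to image tracking, again assuming an asset catalog group named "AR Resources":

```swift
import ARKit

// Enable image tracking (iOS 12+): a nonzero maximumNumberOfTrackedImages
// tells ARKit to continuously track up to that many detected images as
// they move, rather than only reporting infrequent detection updates.
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = ARReferenceImage.referenceImages(
    inGroupNamed: "AR Resources", bundle: nil) ?? []
configuration.maximumNumberOfTrackedImages = 4
```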
Image detection accuracy and performance are reduced with larger numbers of detection images. For best results, use no more than around 25 images in this set.
You can support a larger total number of detection images by changing which set of images is active for detection over time. For example, an app that identifies paintings in an art museum might limit the set of detection images based on which area of the museum the user is currently in (after using Core Location to locate the user within the museum).
To detect a different set of images without otherwise affecting the session, call the session's run(_:options:) method with a configuration containing a different detectionImages set and no options.
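The museum scenario above can be sketched as follows; the helper function and the per-area catalog group names are assumptions for illustration:

```swift
import ARKit

// Swap the active detection set mid-session. Running a new configuration
// with no reset options preserves the session's existing state (anchors,
// world map), so only the set of detectable images changes.
func switchDetectionImages(to groupName: String, in session: ARSession) {
    guard let images = ARReferenceImage.referenceImages(
        inGroupNamed: groupName, bundle: nil) else { return }
    let configuration = ARWorldTrackingConfiguration()
    configuration.detectionImages = images
    session.run(configuration)  // no options: session state is preserved
}
```

An app could call this with a group name such as "Gallery A" or "Gallery B" whenever Core Location places the user in a different area of the museum.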