Create anchors that track objects you recognize in the camera feed, using a custom optical-recognition algorithm.
- iOS 11.3+
- Xcode 10.0+
This sample app parses the camera feed, using the Vision framework with a Core ML model that recognizes regular desktop items. The app displays a label onscreen that indicates when it recognizes an item. You then tap the screen to place a textual annotation in the physical environment that’s labeled with the name of the recognized object. Because the Core ML model used by this app doesn’t tell you where the object lies within an image, label placement relative to the object depends on where you tap.
Implement the Vision/Core ML Image Classifier
The sample code's `classifyCurrentImage()` and `processClassifications(for:error:)` methods manage a Core ML image-classifier model, loaded from an `.mlmodel` file bundled with the app using the Swift API that Core ML generates for the model.
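The setup described above can be sketched roughly as follows. This is a minimal illustration, not the sample's exact code: `YourClassifier` is a placeholder for whatever generated class Core ML produces for the bundled model file, and `processClassifications(for:error:)` is the result-handling method referenced above.

```swift
import Vision
import CoreML

// Sketch: wrap the Core ML model in a Vision request.
// `YourClassifier` stands in for the Swift class Core ML generates
// for the .mlmodel file bundled with the app.
lazy var classificationRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: YourClassifier().model)
        return VNCoreMLRequest(model: model) { [weak self] request, error in
            // Hand results off to the method that displays the best match.
            self?.processClassifications(for: request, error: error)
        }
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()
```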
Run the AR Session and Process Camera Images
The `ViewController` class manages the AR session and displays AR overlay content in a SpriteKit view. ARKit captures video frames from the camera and provides them to the view controller in the `session(_:didUpdate:)` delegate method, which then calls the `classifyCurrentImage()` method to run the Vision image classifier.
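A sketch of that delegate callback, under the assumption that `currentBuffer` is the property (discussed in the next section) used to gate in-flight Vision work:

```swift
import ARKit

// Sketch: ARSessionDelegate callback that receives each captured frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Skip this frame if Vision is still busy with a previous one,
    // or if tracking isn't yet reliable.
    guard currentBuffer == nil, case .normal = frame.camera.trackingState else {
        return
    }
    // Claim the frame's pixel buffer and kick off classification.
    currentBuffer = frame.capturedImage
    classifyCurrentImage()
}
```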
Serialize Image Processing for Real-Time Performance
The `classifyCurrentImage()` method uses the view controller's `currentBuffer` property to track whether Vision is currently processing an image before starting another Vision task. In addition, the sample app enables the `usesCPUOnly` setting for its Vision request, freeing the GPU for use in rendering.
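The serialization pattern can be sketched as follows: `currentBuffer` stays non-nil while a request is in flight and is cleared when it completes, and the request runs on a serial queue. The queue label is an illustrative assumption.

```swift
import Vision

// Assumed serial queue so Vision requests never overlap.
private let visionQueue = DispatchQueue(label: "com.example.serialVisionQueue")

// Sketch: classify the most recently captured frame.
private func classifyCurrentImage() {
    guard let buffer = currentBuffer else { return }
    let requestHandler = VNImageRequestHandler(cvPixelBuffer: buffer)
    visionQueue.async {
        // Release the buffer when done so the next frame can be processed.
        defer { self.currentBuffer = nil }
        do {
            // Run on the CPU, keeping the GPU free for AR rendering.
            self.classificationRequest.usesCPUOnly = true
            try requestHandler.perform([self.classificationRequest])
        } catch {
            print("Vision request failed: \(error)")
        }
    }
}
```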
Visualize Results in AR
The `processClassifications(for:error:)` method stores the best-match result label produced by the image classifier and displays it in the corner of the screen. The user can then tap in the AR scene to place that label at a real-world position. Placing a label requires two main steps.
First, a tap gesture recognizer fires the `placeLabelAtLocation(sender:)` action. This method uses the ARKit `hitTest(_:types:)` method to estimate the 3D real-world position corresponding to the tap, and adds an anchor to the AR session at that position.
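This first step might look roughly like the sketch below. The `anchorLabels` dictionary and `identifierString` property are assumptions standing in for however the app associates the classifier's latest label with the new anchor.

```swift
import ARKit

// Sketch of step one: hit-test the tap and anchor the label there.
@IBAction func placeLabelAtLocation(sender: UITapGestureRecognizer) {
    let hitLocation = sender.location(in: sceneView)
    // Estimate a real-world position from feature points or an estimated plane.
    if let result = sceneView.hitTest(hitLocation,
                                      types: [.featurePoint, .estimatedHorizontalPlane]).first {
        let anchor = ARAnchor(transform: result.worldTransform)
        // Assumed bookkeeping: remember which label text this anchor gets.
        anchorLabels[anchor.identifier] = identifierString
        sceneView.session.add(anchor: anchor)
    }
}
```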
Next, after ARKit automatically creates a SpriteKit node for the newly added anchor, the `view(_:didAdd:for:)` delegate method provides content for that node. In this case, the sample's `TemplateLabelNode` class creates a styled text label using the string provided by the image classifier.
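The second step can be sketched as an `ARSKViewDelegate` implementation. The `anchorLabels` lookup mirrors the assumed bookkeeping from step one, and `TemplateLabelNode` here stands in for the sample's styled-text node.

```swift
import ARKit
import SpriteKit

// Sketch of step two: attach label content to the node ARKit created.
func view(_ view: ARSKView, didAdd node: SKNode, for anchor: ARAnchor) {
    // Look up the classifier label saved when the anchor was added (assumed).
    guard let labelText = anchorLabels[anchor.identifier] else { return }
    // Render the label as a child of the anchor's automatically created node.
    let label = TemplateLabelNode(text: labelText)
    node.addChild(label)
}
```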