A model you train to detect objects within an image.
- macOS 10.15+ Beta
- Xcode 11.0+ Beta
- Create ML
Use an object detector to train a machine learning model that you can include in your app to identify specific types of objects within images.
When you create an object detection model, you train it with images and annotations for each image. Each annotation contains a label and a region for an object within the image. For example, you can train an object detector with images of tables and annotations for specific objects such as bananas, croissants, and cups of coffee.
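Create ML reads annotations from a JSON file that sits alongside the images. Each entry pairs an image filename with labeled regions, where the coordinates give each region's center point, width, and height in pixels. A minimal sketch of that format (the filenames, labels, and pixel values below are illustrative):

```json
[
  {
    "image": "breakfast_table_01.jpg",
    "annotations": [
      {
        "label": "banana",
        "coordinates": { "x": 230, "y": 115, "width": 120, "height": 80 }
      },
      {
        "label": "cup_of_coffee",
        "coordinates": { "x": 410, "y": 190, "width": 95, "height": 95 }
      }
    ]
  }
]
```

One image can carry any number of annotations, so a single photo of a table can contribute several labeled objects to training.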
After training completes, you evaluate the trained model by showing it a testing set of annotated images that the model hasn't seen before. The metrics that come from this evaluation, such as averagePrecision and meanAveragePrecision, tell you whether the model performs well enough. If the model makes too many mistakes, you can add more or better training data, or change the training parameters, and train again.
When your model performs well enough, you save it as a Core ML model file with the .mlmodel extension. You can then import the model file into an app that uses Core ML to detect objects in images.
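The train, evaluate, and save steps above can be sketched in Swift with the CreateML framework (available on macOS only). The directory paths, author name, and description below are placeholder assumptions; each data directory is assumed to hold images plus a JSON annotations file:

```swift
import CreateML
import Foundation

// Hypothetical locations for the training and testing data.
let trainingDir = URL(fileURLWithPath: "/path/to/training")
let testingDir = URL(fileURLWithPath: "/path/to/testing")

// Train an object detector from images with JSON annotations.
// The initializer trains synchronously and throws on failure.
let detector = try MLObjectDetector(
    trainingData: .directoryWithImagesAndJsonAnnotation(at: trainingDir))

// Evaluate on annotated images the model hasn't seen before;
// the resulting metrics include average precision per label.
let evaluation = detector.evaluation(
    on: .directoryWithImagesAndJsonAnnotation(at: testingDir))
print(evaluation)

// Save the trained model as a Core ML model file.
try detector.write(
    to: URL(fileURLWithPath: "/path/to/ObjectDetector.mlmodel"),
    metadata: MLModelMetadata(author: "Your Name",
                              shortDescription: "Detects breakfast items"))
```

If the evaluation metrics are too low, you can revise the training data or parameters and rerun the same workflow before saving.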