An image analysis request that uses a Core ML model to process images.


@interface VNCoreMLRequest : VNImageBasedRequest


The results array of a Core ML-based image analysis request contains a different observation type, depending on the kind of MLModel object you create the request with:

- A classifier model produces VNClassificationObservation objects.
- A model whose output is an image produces VNPixelBufferObservation objects.
- A model that predicts generic feature values produces VNCoreMLFeatureValueObservation objects.
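As a sketch, code that consumes such a request's results might branch on the observation class (the `request` variable here is an assumption for illustration):

```objc
// Inspect the request's results; the concrete class depends on the model.
for (VNObservation *observation in request.results) {
    if ([observation isKindOfClass:[VNClassificationObservation class]]) {
        // Classifier output: a label identifier and a confidence value.
    } else if ([observation isKindOfClass:[VNPixelBufferObservation class]]) {
        // Image-to-image output: a CVPixelBuffer.
    } else if ([observation isKindOfClass:[VNCoreMLFeatureValueObservation class]]) {
        // Generic output: an MLFeatureValue (for example, a multiarray).
    }
}
```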


Initializing with a Core ML Model

- initWithModel:

Creates an image analysis request based on a Core ML model.

- initWithModel:completionHandler:

Creates an image analysis request based on a Core ML model, with an optional completion handler.
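A minimal sketch of creating and performing such a request; the `mlModel` and `cgImage` variables are assumptions for illustration, and the result handling assumes a classifier model:

```objc
#import <Vision/Vision.h>

// mlModel is an MLModel loaded elsewhere (assumed); wrap it for Vision.
NSError *error = nil;
VNCoreMLModel *visionModel = [VNCoreMLModel modelForMLModel:mlModel
                                                      error:&error];

// Create the request with a completion handler that reads the results.
VNCoreMLRequest *request =
    [[VNCoreMLRequest alloc] initWithModel:visionModel
                         completionHandler:^(VNRequest *req, NSError *err) {
        for (VNClassificationObservation *obs in req.results) {
            NSLog(@"%@: %.2f", obs.identifier, obs.confidence);
        }
    }];

// Perform the request on a CGImage (cgImage assumed).
VNImageRequestHandler *handler =
    [[VNImageRequestHandler alloc] initWithCGImage:cgImage options:@{}];
[handler performRequests:@[request] error:&error];
```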


model

The Core ML model on which the request is based, wrapped in a VNCoreMLModel.


VNCoreMLModel

A container for a Core ML model used with Vision requests.

Configuring Image Options


imageCropAndScaleOption

An optional setting that tells the Vision algorithm how to crop and scale an input image.


VNImageCropAndScaleOption

An enumeration of the ways Vision can crop and scale an input image.
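For example (a sketch; `request` is an existing VNCoreMLRequest), the crop-and-scale behavior can be set before the request is performed:

```objc
// Center-crop the input to a square region before scaling it to the
// model's expected input dimensions. Other options include
// VNImageCropAndScaleOptionScaleFit and VNImageCropAndScaleOptionScaleFill.
request.imageCropAndScaleOption = VNImageCropAndScaleOptionCenterCrop;
```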

Specifying a Revision


VNCoreMLRequestRevision1

A constant for specifying revision 1 of a Core ML request.
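A request can be pinned to a specific algorithm revision (a sketch; `request` is an existing VNCoreMLRequest):

```objc
// Pin the request to revision 1 so its behavior stays stable
// across operating system updates.
request.revision = VNCoreMLRequestRevision1;
```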


Inherits From

VNImageBasedRequest

See Also

Machine-Learning Image Analysis

Classifying Images with Vision and Core ML

Preprocess photos using the Vision framework and classify them with a Core ML model.

Training a Create ML Model to Classify Flowers

Train a flower classifier using Create ML in Swift Playgrounds, and apply the resulting model to real-time image classification using Vision.


VNClassificationObservation

Classification information produced by an image analysis request.


VNPixelBufferObservation

An output image produced by a Core ML image analysis request.


VNCoreMLFeatureValueObservation

A collection of key-value information produced by a Core ML image analysis request.

Beta Software

This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.

Learn more about using Apple's beta software