An output image produced by a Core ML image analysis request.


@interface VNPixelBufferObservation : VNObservation


This type of observation results from performing a VNCoreMLRequest image analysis with a Core ML model whose role is image-to-image processing. For example, this observation would result from a model that analyzes the style of one image and then transfers that style to a different image.

Vision infers that an MLModel object is an image-to-image model if its modelDescription object includes an image-typed feature description in its outputDescriptionsByName dictionary.
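The inference rule above can be sketched with Core ML's model-description API. This is a hypothetical helper, not part of Vision; it simply applies the stated rule by scanning the output feature descriptions for an image type:

```swift
import CoreML

// Hypothetical helper: returns true if Vision would treat `model`
// as image-to-image, per the rule described above — at least one
// output feature description in outputDescriptionsByName is
// image-typed.
func isImageToImageModel(_ model: MLModel) -> Bool {
    model.modelDescription.outputDescriptionsByName.values.contains { description in
        description.type == .image
    }
}
```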


Parsing Observation Content


pixelBuffer

The image that results from a request with image output.


featureName

The name used in the model description of the Core ML model that produced this observation.
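A minimal sketch of obtaining these values: run a VNCoreMLRequest against an image-to-image model and cast the results to VNPixelBufferObservation. The function name and the `styleTransferModel` parameter are placeholders for your own code:

```swift
import Vision
import CoreML

// Sketch: perform an image-to-image Core ML request with Vision
// and read the resulting VNPixelBufferObservation.
func stylize(_ cgImage: CGImage, with styleTransferModel: MLModel) throws -> CVPixelBuffer? {
    let vnModel = try VNCoreMLModel(for: styleTransferModel)
    let request = VNCoreMLRequest(model: vnModel)
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // An image-to-image model yields VNPixelBufferObservation results.
    guard let observation = request.results?.first as? VNPixelBufferObservation else {
        return nil
    }
    // featureName matches an output name in the model description.
    print(observation.featureName ?? "unnamed output")
    return observation.pixelBuffer
}
```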



Inherits From

VNObservation

See Also

Machine-Learning Image Analysis

Classifying Images with Vision and Core ML

Preprocess photos using the Vision framework and classify them with a Core ML model.

Training a Create ML Model to Classify Flowers

Train a flower classifier using Create ML in Swift Playgrounds, and apply the resulting model to real-time image classification using Vision.


An image analysis request that uses a Core ML model to process images.


Classification information produced by an image analysis request.


A collection of key-value information produced by a Core ML image analysis request.

Beta Software

This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.
