An image analysis request that finds facial features (such as the eyes and mouth) in an image.
- iOS 11.0+
- macOS 10.13+
- Mac Catalyst 13.0+
- tvOS 11.0+
By default, a face landmarks request first locates all faces in the input image, then analyzes each to detect facial features.
If you've already located all the faces in an image, or want to detect landmarks in only a subset of those faces, set the
inputFaceObservations property to an array of
VNFaceObservation objects representing the faces you want to analyze. (You can either use face observations output by a
VNDetectFaceRectanglesRequest or manually create
VNFaceObservation instances with the bounding boxes of the faces you want to analyze.)
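The workflow above can be sketched in a few lines. This is a minimal example, not a complete implementation; the function name, `cgImage`, and `knownFaceBoxes` are placeholders for your own code, and the bounding boxes are assumed to be in Vision's normalized coordinate space.

```swift
import Vision

// Detect landmarks only for faces you've already located, by supplying
// manually created observations instead of letting the request find faces.
// `knownFaceBoxes` is a hypothetical input: normalized bounding boxes
// (origin at lower left, values in 0...1) from earlier processing.
func detectLandmarks(in cgImage: CGImage,
                     knownFaceBoxes: [CGRect]) throws -> [VNFaceObservation] {
    let request = VNDetectFaceLandmarksRequest()

    // Restrict analysis to the faces you already know about.
    request.inputFaceObservations = knownFaceBoxes.map {
        VNFaceObservation(boundingBox: $0)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    // Each result carries a `landmarks` property with regions such as
    // the eyes and mouth, when detection succeeds.
    return request.results as? [VNFaceObservation] ?? []
}
```

If you omit inputFaceObservations, the request falls back to its default behavior and locates all faces itself before analyzing them.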
Configuring a Face Landmarks Request
Locating Face Landmarks
A value that describes how a face landmarks request orders or enumerates the resulting facial features.
Versioning Face Landmark Detection
A Boolean value that indicates whether the Vision framework supports a given constellation type for a given request revision.
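As a brief illustration of the constellation setting described above, the sketch below opts into a specific point layout. The property name matches the entry above, but treat the exact enumeration case name as an assumption to verify against the current SDK headers, and check support for your target revision before relying on it.

```swift
import Vision

// Sketch: request the denser landmark layout. The `.constellation76Points`
// case name is an assumption; confirm it (and revision support) in the SDK.
let request = VNDetectFaceLandmarksRequest()
request.constellation = .constellation76Points
```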
Face or facial-feature information detected by an image analysis request.
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.