Detecting Faces in an Image
Core Image can analyze and find human faces in an image. It performs face detection, not recognition. Face detection is the identification of rectangles that contain human face features, whereas face recognition is the identification of specific human faces (John, Mary, and so on). After Core Image detects a face, it can provide information about face features, such as eye and mouth positions. It can also track the position of an identified face in a video.
Knowing where the faces are in an image lets you perform other operations, such as cropping or adjusting the image quality of the face (tone balance, red-eye correction, and so on). You can also perform other interesting operations on the faces; for example:
“Anonymous Faces Filter Recipe” shows how to apply a pixellate filter only to the faces in an image.
“White Vignette for Faces Filter Recipe (iOS only)” shows how to place a vignette around a face.
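For instance, once you have a face's bounds you can crop the image down to just that face. The following is a minimal sketch using the built-in CICrop filter; it assumes myImage is a CIImage and detector is a CIDetector configured as in Listing 2-1.

```objc
// Sketch: crop an image to the first detected face. Assumes myImage (CIImage)
// and detector (CIDetector) are already set up as in Listing 2-1.
NSArray *faces = [detector featuresInImage:myImage options:nil];
if (faces.count > 0) {
    CIFaceFeature *face = faces[0];
    CIFilter *crop = [CIFilter filterWithName:@"CICrop"];
    [crop setValue:myImage forKey:kCIInputImageKey];
    // CICrop takes its rectangle as a CIVector (x, y, width, height).
    [crop setValue:[CIVector vectorWithCGRect:face.bounds]
            forKey:@"inputRectangle"];
    CIImage *faceImage = [crop valueForKey:kCIOutputImageKey];
    // faceImage now contains only the face region.
}
```

Because Core Image evaluates filters lazily, the crop costs nothing until you render faceImage.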
Use the CIDetector class to find faces in an image, as shown in Listing 2-1.
Listing 2-1 Creating a face detector
CIContext *context = [CIContext contextWithOptions:nil]; // 1
NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
forKey:CIDetectorAccuracy]; // 2
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:context
                                          options:opts]; // 3
opts = [NSDictionary dictionaryWithObject:
            [[myImage properties] valueForKey:(__bridge NSString *)kCGImagePropertyOrientation]
                                      forKey:CIDetectorImageOrientation]; // 4
NSArray *features = [detector featuresInImage:myImage
options:opts]; // 5
Here’s what the code does:
Creates a context; in this example, a context for iOS. You can use any of the context-creation functions described in “Processing Images.” (You also have the option of supplying nil instead of a context when you create the detector.)
Creates an options dictionary to specify accuracy for the detector. You can specify low or high accuracy. Low accuracy (CIDetectorAccuracyLow) is fast; high accuracy, shown in this example, is thorough but slower.
Creates a detector for faces. The only type of detector you can create is one for human faces.
Sets up an options dictionary for finding faces. It’s important to let Core Image know the image orientation so the detector knows where it can find upright faces. Most of the time you’ll read the image orientation from the image itself, and then provide that value to the options dictionary.
Uses the detector to find features in an image. The image you provide must be a CIImage object. Core Image returns an array of CIFeature objects, each of which represents a face in the image.
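If the image has no orientation metadata, you can supply the orientation value directly. The following sketch assumes an upright image (EXIF orientation value 1):

```objc
// Sketch: specify the orientation explicitly when metadata is unavailable.
// CIDetectorImageOrientation takes a number from 1 to 8, following the
// TIFF/EXIF orientation convention; 1 means the image is upright.
NSDictionary *orientationOpts = @{ CIDetectorImageOrientation : @1 };
NSArray *features = [detector featuresInImage:myImage
                                      options:orientationOpts];
```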
After you get an array of faces, you’ll probably want to find out their characteristics, such as where the eyes and mouth are located. The next section describes how.
Getting Face and Face Feature Bounds
Face features include:
left and right eye positions
mouth position
tracking ID and tracking frame count, which Core Image uses to follow a face in a video segment (available in iOS v6.0 and later and in OS X v10.8 and later)
After you get an array of face features from a CIDetector object, you can loop through the array to examine the bounds of each face and each feature in the faces, as shown in Listing 2-2.
Listing 2-2 Examining face feature bounds
for (CIFaceFeature *f in features) {
    printf("Left eye %g %g\n", f.leftEyePosition.x, f.leftEyePosition.y);
    printf("Right eye %g %g\n", f.rightEyePosition.x, f.rightEyePosition.y);
    printf("Mouth %g %g\n", f.mouthPosition.x, f.mouthPosition.y);
}
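The tracking properties mentioned above are available on each face feature when tracking is enabled by passing CIDetectorTracking in the options for featuresInImage:options:. A minimal sketch of reading them inside a loop like the one in Listing 2-2:

```objc
// Sketch: read tracking information for a face feature f (available in
// iOS v6.0 and later and in OS X v10.8 and later). Assumes the options
// passed to featuresInImage:options: included CIDetectorTracking : @YES.
if (f.hasTrackingID) {
    printf("Tracking ID %d\n", (int)f.trackingID);
}
if (f.hasTrackingFrameCount) {
    printf("Tracking frame count %d\n", (int)f.trackingFrameCount);
}
```

Checking the has* properties first matters because the numeric values are meaningless when the corresponding feature was not detected or tracking is not enabled.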
© 2004, 2013 Apple Inc. All Rights Reserved. (Last updated: 2013-01-28)