Processing Images

Core Image has three classes that support image processing on iOS and OS X:

  CIFilter is a mutable object that represents an effect. A filter object has at least one input parameter and produces an output image.

  CIImage is an immutable object that represents an image. You can synthesize image data or provide it from a file or from the output of another CIFilter object.

  CIContext is an object through which Core Image draws the results produced by a filter. A Core Image context can be based on the CPU or the GPU.

The remainder of this chapter provides all the details you need to use Core Image filters and the CIFilter, CIImage, and CIContext classes on iOS and OS X.

Overview

Processing an image is straightforward, as shown in Listing 1-1. This example uses Core Image methods specific to iOS; the corresponding OS X methods are covered later in this chapter. Each numbered step in the listing is described in more detail following the listing.

Listing 1-1  The basics of applying a filter to an image on iOS

CIContext *context = [CIContext contextWithOptions:nil];               // 1
CIImage *image = [CIImage imageWithContentsOfURL:myURL];               // 2
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];           // 3
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:@0.8f forKey:kCIInputIntensityKey];
CIImage *result = [filter valueForKey:kCIOutputImageKey];              // 4
CGRect extent = [result extent];
CGImageRef cgImage = [context createCGImage:result fromRect:extent];   // 5

Here’s what the code does:

  1. Create a CIContext object. The contextWithOptions: method is only available on iOS. For details on other methods for iOS and OS X see Table 1-2.

  2. Create a CIImage object. You can create a CIImage from a variety of sources, such as a URL. See “Creating a CIImage Object” for more options.

  3. Create the filter and set values for its input parameters. There are more compact ways to set values than shown here. See “Creating a CIFilter Object and Setting Values.”

  4. Get the output image. The output image is a recipe for how to produce the image. The image is not yet rendered. See “Getting the Output Image.”

  5. Render the CIImage to a Core Graphics image that is ready for display or saving to a file.
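
On iOS you might then wrap the Quartz 2D image in a UIImage for display and release it, as in this minimal sketch (the image view is an assumption, not part of Listing 1-1):

UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
myImageView.image = uiImage;    // myImageView is a hypothetical UIImageView
CGImageRelease(cgImage);        // createCGImage:fromRect: returns a reference you own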

The Built-in Filters

Core Image comes with dozens of built-in filters ready to support image processing in your app. Core Image Filter Reference lists these filters, describes their characteristics and their iOS and OS X availability, and shows a sample image produced by each filter. Because the list of built-in filters can change, Core Image provides methods that let you query the system for the available filters (see “Querying the System for Filters”).
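
For example, a one-line query returns the names of all built-in filters available on the current platform (a minimal sketch; “Querying the System for Filters” describes the query methods in detail):

NSArray *filterNames = [CIFilter filterNamesInCategory:kCICategoryBuiltIn];
NSLog(@"Built-in filters: %@", filterNames);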

A filter category specifies the type of effect—blur, distortion, generator, and so forth—or its intended use—still images, video, nonsquare pixels, and so on. A filter can be a member of more than one category. A filter also has a display name, which is the name to show to users, and a filter name, which is the name you must use to access the filter programmatically.

Most filters have one or more input parameters that let you control how processing is done. Each input parameter has an attribute class that specifies its data type, such as NSNumber. An input parameter can optionally have other attributes, such as its default value, the allowable minimum and maximum values, the display name for the parameter, and any other attributes that are described in CIFilter Class Reference.

For example, the CIColorMonochrome filter has three input parameters—the image to process, a monochrome color, and the color intensity. You supply the image and have the option to set a color and its intensity. Most filters, including the CIColorMonochrome filter, have default values for each nonimage input parameter. Core Image uses the default values to process your image if you choose not to supply your own values for the input parameters.
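
For example, the following sketch supplies only the input image to the CIColorMonochrome filter and lets Core Image use the default color and intensity (myCIImage is assumed to be an existing CIImage object):

CIFilter *monochrome = [CIFilter filterWithName:@"CIColorMonochrome"];
[monochrome setDefaults];   // needed on OS X; on iOS defaults are applied automatically
[monochrome setValue:myCIImage forKey:kCIInputImageKey];
// kCIInputColorKey and kCIInputIntensityKey are left at their default values.
CIImage *monochromeImage = [monochrome valueForKey:kCIOutputImageKey];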

Filter attributes are stored as key-value pairs. The key is a constant that identifies the attribute and the value is the setting associated with the key. Core Image attribute values are typically one of the data types listed in Table 1-1.

Table 1-1  Attribute value data types

Data type               Object                                    Description
Strings                 NSString                                  Used for such things as display names
Floating-point values   NSNumber                                  Scalar values such as intensity levels and radii
Vectors                 CIVector                                  Specify positions, areas, and color values; can have 2, 3, or 4 elements, each of which is a floating-point number
Colors                  CIColor                                   Contain color values and a color space in which to interpret the values
Images                  CIImage                                   Lightweight objects that specify image “recipes”
Transforms              NSData (iOS), NSAffineTransform (OS X)    An affine transformation to apply to an image

Core Image uses key-value coding, which means you can get and set values for the attributes of a filter by using the methods provided by the NSKeyValueCoding protocol. (For more information, see Key-Value Coding Programming Guide.)
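
For example, you can read an input value back with valueForKey: just as you set it with setValue:forKey: (a sketch that continues from Listing 1-1):

NSNumber *intensity = [filter valueForKey:kCIInputIntensityKey];   // 0.8, as set in Listing 1-1
[filter setValue:@0.5f forKey:kCIInputIntensityKey];               // change the value later if needed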

Creating a Core Image Context

To render the image, you need to create a Core Image context and then use that context to draw the output image. A Core Image context represents a drawing destination. The destination determines whether Core Image uses the GPU or the CPU for rendering. Table 1-2 lists the various methods you can use for specific platforms and renderers.

Table 1-2  Methods that create a Core Image context

Context                                                  Renderer      Supported Platform
contextWithOptions:                                      CPU or GPU    iOS
contextWithCGContext:options:                            CPU or GPU    OS X
CIContext method of NSGraphicsContext                    CPU or GPU    OS X
contextWithEAGLContext:                                  GPU           iOS
contextWithEAGLContext:options:                          GPU           iOS
contextWithCGLContext:pixelFormat:options:               GPU           OS X
contextWithCGLContext:pixelFormat:colorSpace:options:    GPU           OS X

Creating a Core Image Context on iOS When You Don’t Need Real-Time Performance

If your app doesn’t require real-time display, you can create a CIContext object as follows:

CIContext *context = [CIContext contextWithOptions:nil];

This method can use either the CPU or GPU for rendering. To specify which to use, set up an options dictionary and add the key kCIContextUseSoftwareRenderer with the appropriate Boolean value for your app. CPU rendering is slower than GPU rendering. But in the case of GPU rendering, the resulting image is not displayed until after it is copied back to CPU memory and converted to another image type such as a UIImage object.
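
For example, this sketch forces CPU rendering; pass @NO, or omit the key entirely, to allow GPU rendering:

NSDictionary *options = @{ kCIContextUseSoftwareRenderer : @YES };   // YES forces the CPU renderer
CIContext *cpuContext = [CIContext contextWithOptions:options];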

Creating a Core Image Context on iOS When You Need Real-Time Performance

If your app supports real-time image processing you should create a CIContext object from an EAGL context rather than using contextWithOptions: and specifying the GPU. The advantage is that the rendered image stays on the GPU and never gets copied back to CPU memory. First you need to create an EAGL context:

EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];

Then use the method contextWithEAGLContext: as shown in Listing 1-2 to create a CIContext object.

You should turn off color management by supplying null for the working color space, because color management slows down performance. Use color management in situations that require color fidelity; in a real-time app, color fidelity is often not a concern. (See “Does Your App Need Color Management?”)

Listing 1-2  Creating a CIContext on iOS for real-time performance

NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];

Creating a Core Image Context from a CGContext on OS X

You can create a Core Image context from a Quartz 2D graphics context using code similar to that shown in Listing 1-3, which is an excerpt from the drawRect: method in a Cocoa app. You get the current NSGraphicsContext, convert that to a Quartz 2D graphics context (CGContextRef), and then provide the Quartz 2D graphics context as an argument to the contextWithCGContext:options: method of the CIContext class. For information on Quartz 2D graphics contexts, see Quartz 2D Programming Guide.

Listing 1-3  Creating a Core Image context from a Quartz 2D graphics context

context = [CIContext contextWithCGContext:
                    [[NSGraphicsContext currentContext] graphicsPort]
                    options: nil];

Creating a Core Image Context from an OpenGL Context on OS X

The code in Listing 1-4 shows how to set up a Core Image context from the current OpenGL graphics context. It’s important that the pixel format for the context includes the NSOpenGLPFANoRecovery constant as an attribute. Otherwise Core Image may not be able to create another context that shares textures with this one. You must also make sure that you pass a pixel format whose data type is CGLPixelFormatObj, as shown in Listing 1-4. For more information on pixel formats and OpenGL, see OpenGL Programming Guide for Mac.

Listing 1-4  Creating a Core Image context from an OpenGL graphics context

const NSOpenGLPixelFormatAttribute attr[] = {
        NSOpenGLPFAAccelerated,
        NSOpenGLPFANoRecovery,
        NSOpenGLPFAColorSize, 32,
        0
    };
NSOpenGLPixelFormat *pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:attr];
CIContext *myCIContext = [CIContext contextWithCGLContext: CGLGetCurrentContext()
                                pixelFormat: [pf CGLPixelFormatObj]
                                options: nil];

Creating a Core Image Context from an NSGraphicsContext on OS X

The CIContext method of the NSGraphicsContext class returns a CIContext object that you can use to render into the NSGraphicsContext object. The CIContext object is created on demand and remains in existence for the lifetime of its owning NSGraphicsContext object. You create the Core Image context using a line of code similar to the following:

[[NSGraphicsContext currentContext] CIContext]

For more information on this method, see NSGraphicsContext Class Reference.

Creating a CIImage Object

Core Image filters process Core Image images (CIImage objects). Table 1-3 lists the methods that create a CIImage object. The method you use depends on the source of the image. Keep in mind that a CIImage object is really an image recipe; Core Image doesn’t actually produce any pixels until it’s called on to render results to a destination.

Table 1-3  Methods used to create a CIImage object from existing image sources

Image source                         Methods                                                     Platform
URL                                  imageWithContentsOfURL:                                     iOS, OS X
                                     imageWithContentsOfURL:options:
Quartz 2D image (CGImageRef)         imageWithCGImage:                                           iOS, OS X
                                     imageWithCGImage:options:
Bitmap data                          imageWithBitmapData:bytesPerRow:size:format:colorSpace:    iOS, OS X
                                     imageWithImageProvider:size:format:colorSpace:options:
Encoded data (an image in memory)    imageWithData:                                              iOS, OS X
                                     imageWithData:options:
CIImageProvider                      imageWithImageProvider:size:format:colorSpace:options:     iOS, OS X
OpenGL texture                       imageWithTexture:size:flipped:colorSpace:                   iOS, OS X
                                     imageWithTexture:size:flipped:options:
Core Video pixel buffer              imageWithCVPixelBuffer:                                     iOS
                                     imageWithCVPixelBuffer:options:
Core Video image buffer              imageWithCVImageBuffer:                                     OS X
                                     imageWithCVImageBuffer:options:
IOSurface                            imageWithIOSurface:                                         OS X
                                     imageWithIOSurface:options:
Quartz 2D layer (CGLayerRef)         imageWithCGLayer:                                           OS X
                                     imageWithCGLayer:options:
NSCIImageRep                         initWithBitmapImageRep:                                     OS X
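
For example, a minimal sketch that creates a CIImage object from an existing Quartz 2D image (myCGImage is an assumption; on iOS you can obtain one from a UIImage object’s CGImage property):

CIImage *image = [CIImage imageWithCGImage:myCGImage];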

Creating a CIFilter Object and Setting Values

The filterWithName: method creates a filter whose type is specified by the name argument. The name argument is a string whose value must match exactly the filter name of a built-in filter (see “The Built-in Filters”). You can obtain a list of filter names by following the instructions in “Querying the System for Filters” or you can look up a filter name in Core Image Filter Reference.

On iOS, the input values for a filter are set to default values when you call the filterWithName: method.

On OS X, the input values for a filter are undefined when you first create it, which is why you either need to call the setDefaults method to set the default values or supply values for all input parameters at the time you create the filter by calling the method filterWithName:keysAndValues:. If you call setDefaults, you can call setValue:forKey: later to change the input parameter values.

If you don’t know the input parameters for a filter, you can get an array of them using the method inputKeys. (Or, you can look up the input parameters for most of the built-in filters in Core Image Filter Reference.) Filters, except for generator filters, require an input image. Some require two or more images or textures. Set a value for each input parameter whose default value you want to change by calling the method setValue:forKey:.
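
For example, the following sketch lists the input parameters of the hue adjustment filter discussed next and looks up the default, minimum, and maximum values of its angle parameter:

CIFilter *filter = [CIFilter filterWithName:@"CIHueAdjust"];
NSArray *keys = [filter inputKeys];                                       // inputImage, inputAngle
NSDictionary *angleInfo = [[filter attributes] objectForKey:kCIInputAngleKey];
NSLog(@"Input parameters: %@", keys);
NSLog(@"Angle default: %@ min: %@ max: %@",
      [angleInfo objectForKey:kCIAttributeDefault],
      [angleInfo objectForKey:kCIAttributeSliderMin],
      [angleInfo objectForKey:kCIAttributeSliderMax]);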

Let’s look at an example of setting up a filter to adjust the hue of an image. The filter’s name is CIHueAdjust. As long as you enter the name correctly, you’ll be able to create the filter with this line of code:

hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];

Defaults are set for you on iOS but not on OS X. When you create a filter on OS X, it’s advisable to immediately set the input values. In this case, set the defaults:

[hueAdjust setDefaults];

This filter has two input parameters: the input image and the input angle. The input angle for the hue adjustment filter refers to the location of the hue in the HSV and HLS color spaces. This is an angular measurement that can vary from 0.0 to 2 pi. A value of 0 indicates the color red; the color green corresponds to 2/3 pi radians, and the color blue is 4/3 pi radians.

Next you’ll want to specify an input image and an angle. The image can be one created from any of the methods listed in “Creating a CIImage Object.” Figure 1-1 shows the unprocessed image.

Figure 1-1  The unprocessed image

The floating-point value in this code specifies a rose-colored hue:

[hueAdjust setValue: myCIImage forKey: kCIInputImageKey];
[hueAdjust setValue: @2.094f forKey: kCIInputAngleKey];

The following code shows a more compact way to create a filter and set values for input parameters:

hueAdjust = [CIFilter filterWithName:@"CIHueAdjust" keysAndValues:
                         kCIInputImageKey, myCIImage,
                         kCIInputAngleKey, @2.094f,
                         nil];

You can supply as many input parameters as you’d like, but you must end the list with nil.

Getting the Output Image

You get the output image by retrieving the value for the outputImage key:

CIImage *result = [hueAdjust valueForKey: kCIOutputImageKey];

Core Image does not perform any image processing until you call a method that actually renders the image (see “Rendering the Resulting Output Image”). When you request the output image, Core Image assembles the calculations that it needs to produce an output image and stores those calculations (that is, image “recipe”) in a CIImage object. The actual image is only rendered (and hence, the calculations performed) if there is an explicit call to one of the image-drawing methods. See “Rendering the Resulting Output Image.”

Deferring processing until rendering time makes Core Image fast and efficient. At rendering time, Core Image can see if more than one filter needs to be applied to an image. If so, it automatically concatenates multiple “recipes” into one operation, which means each pixel is processed only once rather than many times. Figure 1-2 illustrates a multiple-operations workflow that Core Image can make more efficient. The final image is a scaled-down version of the original. For the case of a large image, applying color adjustment before scaling down the image requires more processing power than scaling down the image and then applying color adjustment. By waiting until render time to apply filters, Core Image can determine that it is more efficient to perform these operations in reverse order.
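
For example, a sketch of the workflow in Figure 1-2 chains a color adjustment with a downscale; because nothing is rendered yet, Core Image is free to reorder or combine the operations at draw time (largeImage and the parameter values are assumptions for illustration):

CIFilter *color = [CIFilter filterWithName:@"CIColorControls"];   // on OS X, call setDefaults first
[color setValue:largeImage forKey:kCIInputImageKey];
[color setValue:@1.2f forKey:kCIInputSaturationKey];

CIFilter *scale = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scale setValue:[color valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
[scale setValue:@0.25f forKey:kCIInputScaleKey];

CIImage *smallAdjustedImage = [scale valueForKey:kCIOutputImageKey];   // still only a recipe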

Figure 1-2  A work flow that Core Image optimizes

Rendering the Resulting Output Image

Rendering the resulting output image triggers the processor-intensive operations—either GPU or CPU, depending on the context you set up. The following methods are available for rendering:

drawImage:inRect:fromRect:
Renders a region of an image to a rectangle in the context destination. On iOS, this method renders only to a CIContext object that is created with contextWithEAGLContext:. On OS X, the dimensions of the destination rectangle are in points if the CIContext object is created with a CGContextRef, and in pixels if the CIContext object is created with a CGLContext object.

createCGImage:fromRect:
createCGImage:fromRect:format:colorSpace:
Create a Quartz 2D image from a region of a CIImage object. (iOS and OS X)

render:toBitmap:rowBytes:bounds:format:colorSpace:
Renders a CIImage object into a bitmap. (iOS and OS X)

createCGLayerWithSize:info:
Creates a CGLayer object and renders the CIImage object into the layer. (OS X only)

render:toCVPixelBuffer:
render:toCVPixelBuffer:bounds:colorSpace:
Renders the CIImage object into a Core Video pixel buffer. (iOS only)

render:toIOSurface:bounds:colorSpace:
Renders the CIImage object into an IOSurface object. (OS X only)

To render the image discussed in “Creating a CIFilter Object and Setting Values,” you can use this line of code on OS X to draw the result onscreen:

[myContext drawImage:result inRect:destinationRect fromRect:contextRect];

The original image from this example (shown in Figure 1-1) now appears in its processed form, as shown in Figure 1-3.

Figure 1-3  The image after the hue adjustment filter has been applied

Maintaining Thread Safety

CIContext and CIImage objects are immutable, which means each can be shared safely among threads. Multiple threads can use the same GPU or CPU CIContext object to render CIImage objects. However, this is not the case for CIFilter objects, which are mutable. A CIFilter object cannot be shared safely among threads. If your app is multithreaded, each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly.
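
For example, in a sketch using Grand Central Dispatch, a shared context can be captured by several concurrent blocks, but each block creates its own filter (imageCount and the imageForIndex: helper are hypothetical and used only for illustration):

CIContext *sharedContext = [CIContext contextWithOptions:nil];   // contexts are safe to share
dispatch_apply(imageCount, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
    // Each block creates its own CIFilter object; filters must not be shared across threads.
    CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
    [sepia setValue:[self imageForIndex:i] forKey:kCIInputImageKey];   // imageForIndex: is hypothetical
    [sepia setValue:@0.8f forKey:kCIInputIntensityKey];
    CIImage *output = [sepia valueForKey:kCIOutputImageKey];
    CGImageRef cgImage = [sharedContext createCGImage:output fromRect:[output extent]];
    // ... use cgImage, then release it.
    CGImageRelease(cgImage);
});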

Chaining Filters

You can create amazing effects by chaining filters—that is, using the output image from one filter as input to another filter. Let’s see how to apply two more filters to the image shown in Figure 1-3—gloom (CIGloom) and bump distortion (CIBumpDistortion).

The gloom filter does just that; it makes an image gloomy by dulling its highlights. Notice that the code in Listing 1-5 is very similar to that shown in “Creating a CIFilter Object and Setting Values.” It creates a filter and sets default values for the gloom filter. This time, the input image is the output image from the hue adjustment filter. It’s that easy to chain filters together!

Listing 1-5  Creating, setting up, and applying a gloom filter

CIFilter *gloom = [CIFilter filterWithName:@"CIGloom"];
[gloom setDefaults];                                        // 1
[gloom setValue: result forKey: kCIInputImageKey];
[gloom setValue: @25.0f forKey: kCIInputRadiusKey];         // 2
[gloom setValue: @0.75f forKey: kCIInputIntensityKey];      // 3
result = [gloom valueForKey: kCIOutputImageKey];            // 4

Here’s what the code does:

  1. Sets default values. You must set defaults on OS X. On iOS you do not need to set default values because they are set automatically.

  2. Sets the input radius to 25. The input radius specifies the extent of the effect, and can vary from 0 to 100 with a default value of 10. Recall that you can find the minimum, maximum, and default values for a filter programmatically by retrieving the attribute dictionary for the filter.

  3. Sets the input intensity to 0.75. The input intensity is a scalar value that specifies a linear blend between the filter output and the original image. The minimum is 0.0, the maximum is 1.0, and the default value is 1.0.

  4. Requests the output image, but does not draw the image.

The code requests the output image but does not draw the image. Figure 1-4 shows what the image would look like if you drew it at this point after processing it with both the hue adjustment and gloom filters.

Figure 1-4  The image after applying the hue adjustment and gloom filters

The bump distortion filter (CIBumpDistortion) creates a bulge in an image that originates at a specified point. Listing 1-6 shows how to create, set up, and apply this filter to the output image from the previous filter, the gloom filter. The bump distortion takes three parameters: a location that specifies the center of the effect, the radius of the effect, and the input scale.

Listing 1-6  Creating, setting up, and applying the bump distortion filter

CIFilter *bumpDistortion = [CIFilter filterWithName:@"CIBumpDistortion"];    // 1
[bumpDistortion setDefaults];                                                // 2
[bumpDistortion setValue: result forKey: kCIInputImageKey];
[bumpDistortion setValue: [CIVector vectorWithX:200 Y:150]
                    forKey: kCIInputCenterKey];                              // 3
[bumpDistortion setValue: @100.0f forKey: kCIInputRadiusKey];                // 4
[bumpDistortion setValue: @3.0f forKey: kCIInputScaleKey];                   // 5
result = [bumpDistortion valueForKey: kCIOutputImageKey];

Here’s what the code does:

  1. Creates the filter by providing its name.

  2. On OS X, sets the default values (not necessary on iOS).

  3. Sets the center of the effect to the center of the image.

  4. Sets the radius of the bump to 100 pixels.

  5. Sets the input scale to 3. The input scale specifies the direction and the amount of the effect. The default value is –0.5. The range is –10.0 through 10.0. A value of 0 specifies no effect. A negative value creates an outward bump; a positive value creates an inward bump.

Figure 1-5 shows the final rendered image.

Figure 1-5  The image after applying the hue adjustment along with the gloom and bump distortion filters

Using Transition Effects

Transitions are typically used between images in a slide show or to switch from one scene to another in video. These effects are rendered over time and require that you set up a timer. The purpose of this section is to show how to set up the timer. You’ll learn how to do this by setting up and applying the copy machine transition filter (CICopyMachineTransition) to two still images. The copy machine transition creates a light bar similar to what you see in a copy machine or image scanner. The light bar sweeps from left to right across the initial image to reveal the target image. Figure 1-6 shows what this filter looks like before, partway through, and after the transition from an image of ski boots to an image of a skier. (To learn more about the specific input parameters of the CICopyMachineTransition filter, see Core Image Filter Reference.)

Figure 1-6  A copy machine transition from ski boots to a skier

Transition filters require the following tasks:

  1. Create Core Image images (CIImage objects) to use for the transition.

  2. Set up and schedule a timer.

  3. Create a CIContext object.

  4. Create a CIFilter object for the filter to apply to the image.

  5. On OS X, set the default values for the filter.

  6. Set the filter parameters.

  7. Set the source and the target images to process.

  8. Calculate the time.

  9. Apply the filter.

  10. Draw the result.

  11. Repeat steps 8–10 until the transition is complete.

You’ll notice that many of these tasks are the same as those required to process an image using a filter other than a transition filter. The difference, however, is the timer used to repeatedly draw the effect at various intervals throughout the transition.

The awakeFromNib method, shown in Listing 1-7, gets two images (boots.jpg and skier.jpg) and sets them as the source and target images. Using the NSTimer class, a timer is set to repeat every 1/30 second. Note the variables thumbnailWidth and thumbnailHeight. These are used to constrain the rendered images to the view set up in Interface Builder.

Listing 1-7  Getting images and setting up a timer

- (void)awakeFromNib
{
    NSTimer    *timer;
    NSURL      *url;
 
    thumbnailWidth  = 340.0;
    thumbnailHeight = 240.0;
 
    url   = [NSURL fileURLWithPath: [[NSBundle mainBundle]
                    pathForResource: @"boots" ofType: @"jpg"]];
    [self setSourceImage: [CIImage imageWithContentsOfURL: url]];
 
    url   = [NSURL fileURLWithPath: [[NSBundle mainBundle]
                    pathForResource: @"skier" ofType: @"jpg"]];
    [self setTargetImage: [CIImage imageWithContentsOfURL: url]];
 
    timer = [NSTimer scheduledTimerWithTimeInterval: 1.0/30.0
                                             target: self
                                           selector: @selector(timerFired:)
                                           userInfo: nil
                                            repeats: YES];
 
    base = [NSDate timeIntervalSinceReferenceDate];
    [[NSRunLoop currentRunLoop] addTimer: timer
                                 forMode: NSDefaultRunLoopMode];
    [[NSRunLoop currentRunLoop] addTimer: timer
                                 forMode: NSEventTrackingRunLoopMode];
}

You set up a transition filter just as you’d set up any other filter. Listing 1-8 uses the filterWithName: method to create the filter. It then calls setDefaults to initialize all input parameters. The code sets the extent to correspond with the thumbnail width and height declared in the awakeFromNib method, shown in Listing 1-7.

The routine uses the thumbnail variables to specify the center of the effect. For this example, the center of the effect is the center of the image, but it doesn’t have to be.

Listing 1-8  Setting up the transition filter

- (void)setupTransition
{
    CGFloat w = thumbnailWidth;
    CGFloat h = thumbnailHeight;
 
    CIVector *extent = [CIVector vectorWithX: 0  Y: 0  Z: w  W: h];
 
    transition  = [CIFilter filterWithName: @"CICopyMachineTransition"];
    // Set defaults on OS X; not necessary on iOS.
    [transition setDefaults];
    [transition setValue: extent forKey: kCIInputExtentKey];
}

The drawRect: method for the copy machine transition effect is shown in Listing 1-9. This method sets up a rectangle that’s the same size as the view and then sets up a floating-point value for the rendering time. If the CIContext object hasn’t already been created, the method creates one. If the transition is not yet set up, the method calls the setupTransition method (see Listing 1-8). Finally, the method calls the drawImage:inRect:fromRect: method, passing the image that should be shown for the rendering time. The imageForTransition: method, shown in Listing 1-10, applies the filter and returns the appropriate image for the rendering time.

Listing 1-9  The drawRect: method for the copy machine transition effect

- (void)drawRect: (NSRect)rectangle
{
    CGRect  cg = CGRectMake(NSMinX(rectangle), NSMinY(rectangle),
                            NSWidth(rectangle), NSHeight(rectangle));
 
    CGFloat t = 0.4 * ([NSDate timeIntervalSinceReferenceDate] - base);
    if (context == nil) {
        context = [CIContext contextWithCGContext:
                        [[NSGraphicsContext currentContext] graphicsPort]
                                          options: nil];
    }
    if (transition == nil) {
        [self setupTransition];
    }
    [context drawImage: [self imageForTransition: t + 0.1]
                inRect: cg
              fromRect: cg];
}

The imageForTransition: method figures out, based on the rendering time, which is the source image and which is the target image. It’s set up to allow a transition to repeatedly loop back and forth. If your app applies a transition that doesn’t loop, it would not need the if-else construction shown in Listing 1-10.

The routine sets the inputTime value based on the rendering time passed to the imageForTransition: method. It applies the transition, passing the output image from the transition to the crop filter (CICrop). Cropping ensures the output image fits in the view rectangle. The routine returns the cropped transition image to the drawRect: method, which then draws the image.

Listing 1-10  Applying the transition filter

- (CIImage *)imageForTransition: (float)t
{
    // Remove the if-else construct if you don't want the transition to loop
    if (fmodf(t, 2.0) < 1.0f) {
        [transition setValue: sourceImage  forKey: kCIInputImageKey];
        [transition setValue: targetImage  forKey: kCIInputTargetImageKey];
    } else {
        [transition setValue: targetImage  forKey: kCIInputImageKey];
        [transition setValue: sourceImage  forKey: kCIInputTargetImageKey];
    }
 
    [transition setValue: @( 0.5 * (1 - cos(fmodf(t, 1.0f) * M_PI)) )
                  forKey: kCIInputTimeKey];
 
    CIFilter  *crop = [CIFilter filterWithName: @"CICrop"
                                 keysAndValues:
                   kCIInputImageKey, [transition valueForKey: kCIOutputImageKey],
                   @"inputRectangle", [CIVector vectorWithX: 0  Y: 0
                                       Z: thumbnailWidth  W: thumbnailHeight],
                   nil];
    return [crop valueForKey: kCIOutputImageKey];
}

Each time the timer that you set up fires, the display must be updated. Listing 1-11 shows a timerFired: routine that does just that.

Listing 1-11  Using the timer to update the display

- (void)timerFired: (id)sender
{
    [self setNeedsDisplay: YES];
}

Finally, Listing 1-12 shows the housekeeping that needs to be performed if your app switches the source and target images, as the example in Listing 1-10 does.

Listing 1-12  Setting source and target images

- (void)setSourceImage: (CIImage *)source
{
    sourceImage = source;
}
 
- (void)setTargetImage: (CIImage *)target
{
    targetImage = target;
}

Applying a Filter to Video

Core Image and Core Video can work together to achieve a variety of effects. For example, you can use a color correction filter on a video shot under water to correct for the fact that water absorbs red light faster than green and blue light. There are many more ways you can use these technologies together.

Follow these steps to apply a Core Image filter to a video displayed using Core Video on OS X:

  1. When you subclass NSView to create a view for the video, declare a CIFilter object in the interface, similar to what’s shown in this code:

    @interface MyVideoView : NSView
    {
        NSRecursiveLock     *lock;
        QTMovie             *qtMovie;
        QTVisualContextRef  qtVisualContext;
        CVDisplayLinkRef    displayLink;
        CVImageBufferRef    currentFrame;
        CIFilter            *effectFilter;
        id                  delegate;
    }
  2. When you initialize the view with a frame, you create a CIFilter object for the filter and set the default values using code similar to the following:

    effectFilter = [CIFilter filterWithName:@"CILineScreen"];
    [effectFilter setDefaults];

    This example uses the Core Image filter CILineScreen, but you’d use whatever is appropriate for your app.

  3. Set the filter input parameters, except for the input image. (See the sketch following these steps.)

  4. Each time you render a frame, you need to set the input image and draw the output image. Your renderCurrentFrame routine would look similar to the following. To avoid interpolation, this example uses integral coordinates when it draws the output.

    - (void)renderCurrentFrame
    {
        NSRect frame = [self frame];
     
        if (currentFrame) {
            CIImage *inputImage = [CIImage imageWithCVImageBuffer:currentFrame];
            CGRect imageRect = [inputImage extent];
            CGFloat x = (frame.size.width - imageRect.size.width) * 0.5;
            CGFloat y = (frame.size.height - imageRect.size.height) * 0.5;
            [effectFilter setValue:inputImage forKey:kCIInputImageKey];
            [[[NSGraphicsContext currentContext] CIContext]
                drawImage:[effectFilter valueForKey:kCIOutputImageKey]
                  atPoint:CGPointMake(floor(x), floor(y))
                 fromRect:imageRect];
        }
    }
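
For step 3, setting the CILineScreen input parameters might look something like the following sketch. The specific values are assumptions chosen for illustration; the input image itself is set per frame in step 4.

[effectFilter setValue:[CIVector vectorWithX:320 Y:240] forKey:kCIInputCenterKey];  // center of the line pattern
[effectFilter setValue:@0.785f forKey:kCIInputAngleKey];                            // angle in radians (about 45 degrees)
[effectFilter setValue:@6.0f forKey:kCIInputWidthKey];                              // width of a line
[effectFilter setValue:@0.7f forKey:kCIInputSharpnessKey];                          // sharpness of the line edges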