An Image Unit and Its Parts

This tutorial provides detailed information on how to write the various parts of an image unit so that they work together as an image unit. It’s important that you have an idea not only of what the parts are, but how they fit together. This chapter provides such an overview. It describes the parts of an image unit, discusses what each one does, and provides guidelines for writing some of the components in an image unit.

Before you start this chapter, you should be familiar with the concepts described in Core Image Programming Guide, have already used some of the built-in image processing filters provided by Core Image, and understand the classes defined by the Core Image API (see Core Image Reference Collection).

The Major Parts of an Image Unit

An image processing filter, when packaged as an executable image unit, has three major parts:

- One or more kernel routines, written in the Core Image kernel language, that perform per-pixel processing
- Objective-C filter files that package each kernel routine as a filter, performing setup work and supplying input parameters
- A description property list that describes the filters contained in the image unit

Division of Labor

Image processing tasks are divided into low-level (kernel) and high-level (Objective-C) tasks. When you first start writing image units, you might find it challenging to divide the tasks appropriately. If you strive to keep kernel routines lean, you will likely succeed in dividing the tasks appropriately.

A kernel routine operates on individual pixels and uses the GPU (assuming one is available). For best performance, a kernel routine should be as focused as possible on pixel processing. Any set up work or calculations that can be done outside the kernel routine should be done outside the kernel routine, in the Objective-C filter files. As you’ll see, because Core Image expects certain tasks to be performed outside the kernel routine, the Xcode image unit plug-in template provides methods set up for just this purpose. In Writing the Objective-C Portion you see the specifics, but for now, a general understanding is all you’ll need.
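For example, suppose a kernel needs a color that depends on a distance value. Rather than scaling the color inside the kernel for every pixel, the Objective-C portion can compute the scaled color once and pass the result in. The following is an illustrative sketch (the kernel name and parameters are hypothetical, not from a shipping filter):

```
/* Lean kernel: the scaled color is computed once in the
   Objective-C portion and passed in as a __color parameter,
   so the per-pixel work is a single subtraction. */
kernel vec4 subtractColor(sampler src, __color scaledColor)
{
    return sample(src, samplerCoord(src)) - scaledColor;
}
```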

These are the tasks typically performed in the Objective-C filter files:

- Checking and setting input parameters
- Packaging input parameters as the objects that Core Image expects (see Table 1-1)
- Performing any setup work or calculations that can be done outside the kernel routine
- Supplying region-of-interest methods when they are needed
- Applying the kernel routine to produce the output image

Kernel Routine Rules

A kernel routine is like a worker on an assembly line—it specializes in a focused task. Each time the routine is invoked, it produces a vec4 data type from the materials (input parameters) given to it. The routine must perform as little work as possible to be efficient. Assembly line work goes fastest when workers use preassembled subcomponents. It’s also true of kernel routines. Anything that can be calculated ahead of time and passed to the routine should be. As you become more experienced at writing kernel routines, you’ll devise clever ways to pare down the code in the routine. The examples in Writing Kernels should give you some ideas. Core Image also helps in this regard by restricting what sorts of operations can be done in a kernel routine.
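As a concrete sketch, here is about the simplest possible kernel routine: each invocation reads one source pixel and returns one vec4 result, scaled by a multiplier that is computed ahead of time and passed in. The routine and parameter names are illustrative:

```
/* Produces one vec4 pixel per invocation. The multiplier is
   precomputed in the Objective-C portion and passed in as a
   float parameter, keeping the kernel lean. */
kernel vec4 multiplyEffect(sampler src, float multiplier)
{
    return sample(src, samplerCoord(src)) * multiplier;
}
```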

Keep the following in mind as you design and write kernel routines:

- A kernel routine returns a vec4 data type (one pixel) each time it is invoked.
- Input parameters are restricted to the types listed in Table 1-1.
- Anything that can be calculated ahead of time should be computed in the Objective-C portion and passed to the routine as an input parameter.

Table 1-1 lists the valid input parameters for a kernel routine and the associated objects that must be passed to the kernel routine from the Objective-C portion of an image unit. Core Image extracts the appropriate base data type from the higher-level Objective-C object that you supply. If you don’t use an object, the filter may unexpectedly quit. For example, if, in the Objective-C portion of the image unit, you pass a floating-point value directly instead of packaging it as an NSNumber object, your filter will not work. In fact, when you use the Image Unit Validator tool on such an image unit, the image unit fails with a cryptic message. (See Validating an Image Unit.)

Table 1-1  Kernel routine input parameters and their associated objects

Kernel routine input parameter    Object
sampler                           CISampler
__table sampler                   CISampler
__color                           CIColor
float                             NSNumber
vec2, vec3, or vec4               CIVector
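To illustrate how these objects are supplied in practice, here is a hedged sketch of an outputImage method for a filter whose kernel takes a sampler and a float. The kernel and instance-variable names (multiplyKernel, inputImage, inputScale) are hypothetical; following the standard CIFilter convention, inputScale is already an NSNumber:

```
- (CIImage *)outputImage
{
    // Wrap the input image in a CISampler to satisfy the
    // kernel's sampler parameter.
    CISampler *src = [CISampler samplerWithImage:inputImage];

    // Pass the scalar as an NSNumber object; passing a raw
    // float here would cause the filter to fail.
    return [self apply:multiplyKernel, src, inputScale,
            kCIApplyOptionDefinition, [src definition], nil];
}
```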

Region-of-Interest Methods

The region of interest (ROI) is the area of the sampler source data that your kernel uses for its per-pixel processing. A kernel routine always returns a vec4 data type—that is, one pixel. However, the routine can operate on any number of pixels to produce that vec4 data type. If the mapping between the source and the destination is not one-to-one, then you must define a region-of-interest method in the Objective-C filter file.

You do not need an ROI method when a kernel routine:

- Processes a source pixel at the same location as the destination pixel (a one-to-one mapping between source and destination)

You must supply an ROI method when a kernel routine:

- Accesses pixels at locations other than, or in addition to, the destination location—for example, a blur routine that samples a neighborhood of source pixels to produce one output pixel

You supply an ROI method for each kernel routine in an image unit that needs one. (An image unit can contain one or more kernel routines.) Each ROI method that you supply must use a method signature of the following form:

- (CGRect) regionOf:(int)samplerIndex
            destRect:(CGRect)r
            userInfo:(id)obj;

You can replace regionOf with an appropriate name. For example, an ROI method for a blur filter could be named blurROI:destRect:userInfo:.

Core Image invokes your ROI method when appropriate, passing to it the sampler index (which you’ll learn more about later), the rectangle for the region being rendered, and any data needed by your routine. The method must define the ROI for each sampler data source used by the kernel routine. Check the samplerIndex value passed to the method: if an ROI calculation is needed for that sampler, perform the calculation and pass back the appropriate rectangle; if an ROI calculation is not needed for that particular sampler, pass back the destRect rectangle that was passed to the method.

For example, if your kernel routine accesses pixels within a radius r around the current target, you need to outset (expand) the destination rectangle in the ROI method by the radius r. You’ll see how all this works in more detail later.
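Continuing the blur example, such an ROI method might look like the following sketch. The method name follows the earlier blurROI:destRect:userInfo: example; passing the radius through the userInfo parameter is an assumption made for illustration:

```
// Expands the rectangle by the blur radius so the kernel can
// sample neighboring pixels near the edges of the render region.
- (CGRect)blurROI:(int)samplerIndex
         destRect:(CGRect)destination
         userInfo:(NSNumber *)radius
{
    float r = [radius floatValue];
    if (samplerIndex == 0)      // the blurred source needs outsetting
        return CGRectInset(destination, -r, -r);
    return destination;         // other samplers: one-to-one mapping
}
```

Note that CGRectInset with negative inset values grows the rectangle, which is what the blur’s sampler requires.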

Next Steps

Now that you have a general idea of what the major parts of an image unit are and what each does, you are ready to move on to writing kernel routines. Writing Kernels starts with a few simple kernel routines and progresses to more complex ones. Not only will you see how to write kernel routines, but you’ll see how you can test simple kernel routines without the need to provide Objective-C code.