A representation of an image to be processed or produced by Core Image filters.
- iOS 5.0+
- macOS 10.4+
- tvOS 9.0+
- Core Image
Use CIImage objects in conjunction with other Core Image classes, such as CIFilter, CIContext, CIVector, and CIColor, to take advantage of the built-in Core Image filters when processing images. You can create CIImage objects with data supplied from a variety of sources, including Quartz 2D images, Core Video image buffers (CVImageBuffer), URL-based objects, and NSData objects.
Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render the image until it is told to do so. This “lazy evaluation” approach allows Core Image to operate as efficiently as possible.
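The recipe model can be seen in a short sketch. In the hedged example below, the file path is a placeholder; the point is that no pixels are processed until a CIContext is asked to render:

```swift
import CoreImage

// Placeholder path; any readable image file would do.
let url = URL(fileURLWithPath: "/path/to/input.jpg")

// Creating the CIImage reads no pixels yet; it only records the source.
guard let image = CIImage(contentsOf: url) else { fatalError("missing image") }

// Chaining a filter extends the recipe; still nothing is rendered.
let sepia = CIFilter(name: "CISepiaTone")!
sepia.setValue(image, forKey: kCIInputImageKey)
sepia.setValue(0.8, forKey: kCIInputIntensityKey)
let recipe = sepia.outputImage!

// Rendering happens only here, when the context produces output.
let context = CIContext()
let rendered = context.createCGImage(recipe, from: recipe.extent)
```

Because each intermediate CIImage is just a recipe, long filter chains cost nothing until the final render, and Core Image can fuse the chain into a single pass.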
CIImage objects are immutable, which means each can be shared safely among threads. Multiple threads can use the same GPU or CPU
CIContext object to render
CIImage objects. However, this is not the case for
CIFilter objects, which are mutable. A
CIFilter object cannot be shared safely among threads. If your app is multithreaded, each thread must create its own
CIFilter objects. Otherwise, your app could behave unexpectedly.
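A minimal sketch of this threading rule, assuming a solid-color source image for illustration: the CIContext is shared across workers, while each worker constructs its own CIFilter.

```swift
import CoreImage
import Dispatch

// A single CIContext can be shared; CIImage values are immutable too.
let sharedContext = CIContext()
let source = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
    .cropped(to: CGRect(x: 0, y: 0, width: 64, height: 64))

DispatchQueue.concurrentPerform(iterations: 4) { i in
    // Each worker creates its own CIFilter; filters are mutable
    // and must not be shared across threads.
    let blur = CIFilter(name: "CIGaussianBlur")!
    blur.setValue(source, forKey: kCIInputImageKey)
    blur.setValue(Double(i + 1), forKey: kCIInputRadiusKey)
    if let output = blur.outputImage {
        _ = sharedContext.createCGImage(output, from: source.extent)
    }
}
```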
Core Image also provides autoadjustment methods. These methods analyze an image for common deficiencies and return a set of filters to correct those deficiencies. The filters are preset with values for improving image quality by altering values for skin tones, saturation, contrast, and shadows and for removing red-eye or other artifacts caused by flash. (See Getting Autoadjustment Filters.)
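As a sketch of the autoadjustment workflow, the loop below (using a solid-color stand-in for a real photograph) requests the preset filters and folds them into a single output recipe:

```swift
import CoreImage

// Stand-in image; in practice this would be a photograph.
let image = CIImage(color: CIColor(red: 0.8, green: 0.6, blue: 0.5))
    .cropped(to: CGRect(x: 0, y: 0, width: 128, height: 128))

// Ask Core Image for preset filters that correct common deficiencies,
// then chain them in order onto the image.
var adjusted = image
for filter in image.autoAdjustmentFilters() {
    filter.setValue(adjusted, forKey: kCIInputImageKey)
    if let output = filter.outputImage {
        adjusted = output
    }
}
// `adjusted` is still a recipe; render it with a CIContext when needed.
```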
For a discussion of all the methods you can use to create
CIImage objects on iOS and macOS, see Core Image Programming Guide.