Mac Developer Library

Core Image Reference Collection CIContext Class Reference

CIContext

The CIContext class provides an evaluation context for rendering a CIImage object through Quartz 2D or OpenGL. You use CIContext objects in conjunction with other Core Image classes, such as CIFilter, CIImage, and CIColor, to take advantage of the built-in Core Image filters when processing images.

CIContext and CIImage objects are immutable, which means each can be shared safely among threads. Multiple threads can use the same GPU or CPU CIContext object to render CIImage objects. However, this is not the case for CIFilter objects, which are mutable. A CIFilter object cannot be shared safely among threads. If your app is multithreaded, each thread must create its own CIFilter objects. Otherwise, your app could behave unexpectedly.
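For example, a single shared context can serve multiple worker threads as long as each thread builds its own filters. The following sketch (using the Swift syntax shown throughout this reference, with a hypothetical sepia-tone workload) illustrates the pattern:

```swift
import CoreImage

// One immutable context, safe to share among threads.
let sharedContext = CIContext(options: nil)

// Each thread must create its own CIFilter; filters are mutable.
func sepiaImage(input: CIImage) -> CIImage? {
    let filter = CIFilter(name: "CISepiaTone")      // per-thread instance
    filter?.setValue(input, forKey: kCIInputImageKey)
    filter?.setValue(0.8, forKey: kCIInputIntensityKey)
    return filter?.outputImage
}
```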

  • Creates a Core Image context from a Quartz context, using the specified options.

    Declaration

    Swift

    init(CGContext cgctx: CGContext, options options: [String : AnyObject]?)

    Objective-C

    + (CIContext *)contextWithCGContext:(CGContextRef)ctx options:(NSDictionary<NSString *,id> *)dict

    Parameters

    ctx

    A Quartz graphics context (CGContextRef object) either obtained from the system or created using a Quartz function such as CGBitmapContextCreate. See Quartz 2D Programming Guide for information on creating Quartz graphics contexts.

    dict

    A dictionary that contains color space information. You can pass any of the keys defined in “Context Options” along with the appropriate value.

    Discussion

    After calling this method, Core Image draws content to the specified Quartz graphics context.

    When you create a CIContext object using a Quartz graphics context, any transformations that are already set on the Quartz graphics context affect drawing to that context.

    Availability

    Available in OS X v10.4 and later.
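As an illustrative sketch of this initializer, the code below creates a bitmap-backed Quartz context with CGBitmapContextCreate and wraps it in a Core Image context; the 640×480 size and RGBA layout are arbitrary choices:

```swift
import CoreImage
import CoreGraphics

let colorSpace = CGColorSpaceCreateDeviceRGB()
let cgContext = CGBitmapContextCreate(nil, 640, 480,  // width, height
    8,                                                // bits per component
    0,                                                // bytes per row (0 = automatic)
    colorSpace,
    CGImageAlphaInfo.PremultipliedLast.rawValue)!

// Core Image now draws into the bitmap behind cgContext.
let ciContext = CIContext(CGContext: cgContext, options: nil)
```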

  • Creates a Core Image context from a CGL context, using the specified options and pixel format object.

    Deprecation Statement

    Instead use contextWithCGLContext:pixelFormat:colorSpace:options:.

    Declaration

    Objective-C

    + (CIContext *)contextWithCGLContext:(CGLContextObj)ctx pixelFormat:(CGLPixelFormatObj)pf options:(NSDictionary<NSString *,id> *)options

    Parameters

    ctx

    A CGL context (CGLContextObj object) obtained by calling the CGL function CGLCreateContext.

    pf

    A CGL pixel format object (CGLPixelFormatObj object) created by calling the CGL function CGLChoosePixelFormat. This argument must be the same pixel format object used to create the CGL context. The pixel format object must be valid for the lifetime of the Core Image context. Don’t release the pixel format object until after you release the Core Image context.

    options

    A dictionary that contains color space information. You can provide the keys kCIContextOutputColorSpace or kCIContextWorkingColorSpace along with a CGColorSpaceRef object for each color space.

    Discussion

    After calling this method, Core Image draws content into the surface (drawable object) attached to the CGL context. A CGL context is an OS X OpenGL context. For more information, see OpenGL Programming Guide for Mac.

    When you create a CIContext object using a CGL context, all OpenGL states set for the CGL context affect rendering to that context. That means that coordinate and viewport transformations set on the CGL context, as well as the vertex color, affect drawing to that context.

    For best results, follow these guidelines when you use Core Image to render into an OpenGL context:

    • Ensure that a single unit in the coordinate space of the OpenGL context represents a single pixel in the output device.

    • The Core Image coordinate space has the origin in the bottom-left corner of the screen. You should configure the OpenGL context in the same way.

    • The OpenGL context blending state is respected by Core Image. If the image you want to render contains translucent pixels, it’s best to enable blending using a blend function with the parameters GL_ONE, GL_ONE_MINUS_SRC_ALPHA, as shown in the following code example.

    Some typical initialization code for a view with width W and height H is:

    glViewport (0, 0, W, H);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (0, W, 0, H, -1, 1);
    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();
    glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnable (GL_BLEND);

    Availability

    Available in OS X v10.4 and later.

    Deprecated in OS X v10.6.

  • Creates a Core Image context from a CGL context, using the specified options, color space, and pixel format object.

    Declaration

    Swift

    init(CGLContext cglctx: CGLContextObj, pixelFormat pixelFormat: CGLPixelFormatObj, colorSpace colorSpace: CGColorSpace?, options options: [String : AnyObject]?)

    Objective-C

    + (CIContext *)contextWithCGLContext:(CGLContextObj)ctx pixelFormat:(CGLPixelFormatObj)pf colorSpace:(CGColorSpaceRef)space options:(NSDictionary<NSString *,id> *)options

    Parameters

    ctx

    A CGL context obtained by calling the CGL function CGLCreateContext.

    pf

    A CGL pixel format object either obtained from the system or created by calling a CGL function such as CGLChoosePixelFormat. This parameter must be the same pixel format object used to create the CGL context. The pixel format object must be valid for the lifetime of the Core Image context. Don’t release the pixel format object until after you release the Core Image context.

    space

    A color space object encapsulating color space information that is used to specify how color values are interpreted.

    options

    A dictionary that contains options for creating a CIContext object. You can pass any of the keys defined in “Context Options” along with the appropriate value.

    Discussion

    After calling this method, Core Image draws content into the surface (drawable object) attached to the CGL context. A CGL context is an OS X OpenGL context. For more information, see OpenGL Programming Guide for Mac.

    When you create a CIContext object using a CGL context, all OpenGL states set for the CGL context affect rendering to that context. That means that coordinate and viewport transformations set on the CGL context, as well as the vertex color, affect drawing to that context.

    For best results, follow these guidelines when you use Core Image to render into an OpenGL context:

    • Ensure that a single unit in the coordinate space of the OpenGL context represents a single pixel in the output device.

    • The Core Image coordinate space has the origin in the bottom-left corner of the screen. You should configure the OpenGL context in the same way.

    • The OpenGL context blending state is respected by Core Image. If the image you want to render contains translucent pixels, it’s best to enable blending using a blend function with the parameters GL_ONE, GL_ONE_MINUS_SRC_ALPHA, as shown in the following code example.

    Some typical initialization code for a view with width W and height H is:

    glViewport (0, 0, W, H);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    glOrtho (0, W, 0, H, -1, 1);
    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();
    glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glEnable (GL_BLEND);

    Availability

    Available in OS X v10.6 and later.

  • Creates a CPU-based Core Image context using the specified options.

    Declaration

    Swift

    init(options options: [String : AnyObject]?)

    Objective-C

    + (CIContext *)contextWithOptions:(NSDictionary<NSString *,id> *)dict

    Parameters

    dict

    A dictionary that contains options for the context. You can pass any of the keys defined in “Context Options” along with the appropriate value.

    Return Value

    A Core Image context.

    Discussion

    You can create a CPU-based context by providing the key kCIContextUseSoftwareRenderer. A CPU-based context supports larger input and output images than a GPU-based context. It also allows your app to perform processing in the background, such as when saving the rendered output to the Photo Library.

    GPU rendering is faster than CPU rendering, but the resulting image is not displayed on the device until after it is copied to CPU memory and converted to another image type, such as a UIImage object.

    Availability

    Available in OS X v10.11 and later.
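A minimal sketch of requesting the software renderer through the options dictionary; whether to force CPU rendering is an application decision, and the default context is shown for comparison:

```swift
import CoreImage

// CPU-based context: larger image limits, usable for background processing.
let cpuContext = CIContext(options: [kCIContextUseSoftwareRenderer: true])

// Default context: GPU-based where available.
let defaultContext = CIContext(options: nil)
```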

  • Creates an OpenGL-based Core Image context using a GPU that is not currently driving a display.

    Declaration

    Swift

    init(forOfflineGPUAtIndex index: UInt32)

    Objective-C

    + (CIContext *)contextForOfflineGPUAtIndex:(unsigned int)index

    Parameters

    index

    The index of the offline GPU with which to create the context; a number greater than or equal to zero and less than the value returned by the offlineGPUCount method.

    Return Value

    A Core Image context.

    Discussion

    GPU devices that are not currently being used to drive a display can be used for Core Image rendering. Use the offlineGPUCount method to determine whether any such GPUs are available.

    Availability

    Available in OS X v10.10 and later.

  • Creates a Core Image context using a GPU that is not currently driving a display, with the specified options.

    Declaration

    Swift

    init(forOfflineGPUAtIndex index: UInt32, colorSpace colorSpace: CGColorSpace?, options options: [String : AnyObject]?, sharedContext sharedContext: CGLContextObj)

    Objective-C

    + (CIContext *)contextForOfflineGPUAtIndex:(unsigned int)index colorSpace:(CGColorSpaceRef)colorSpace options:(NSDictionary<NSString *,id> *)options sharedContext:(CGLContextObj)sharedContext

    Parameters

    index

    The index of the offline GPU with which to create the context; a number greater than or equal to zero and less than the value returned by the offlineGPUCount method.

    colorSpace

    A color space object encapsulating color space information that is used to specify how color values are interpreted.

    options

    A dictionary that contains options for creating a CIContext object. You can pass any of the keys defined in “Context Options” along with the appropriate value.

    sharedContext

    A CGL context with which to share OpenGL resources, obtained by calling the CGL function CGLCreateContext. Pass NULL to use a context that does not share OpenGL resources.

    Return Value

    A Core Image context.

    Discussion

    GPU devices that are not currently being used to drive a display can be used for Core Image rendering. Use the offlineGPUCount method to determine whether any such GPUs are available.

    Availability

    Available in OS X v10.10 and later.

  • Creates a Core Image context using the specified Metal device.

    Declaration

    Swift

    init(MTLDevice device: MTLDevice)

    Objective-C

    + (CIContext *)contextWithMTLDevice:(id<MTLDevice>)device

    Parameters

    device

    The Metal device object to use for rendering.

    Return Value

    A Core Image context.

    Discussion

    Use this method to choose a specific Metal device for rendering when a system contains multiple Metal devices. To create a Metal-based context using the system’s default Metal device, use the contextWithOptions: method.

    Availability

    Available in OS X v10.11 and later.
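For instance, a context on the system's default Metal device might be created as follows (a sketch; on a multi-GPU system you would pass whichever MTLDevice you want instead):

```swift
import CoreImage
import Metal

if let device = MTLCreateSystemDefaultDevice() {
    let metalContext = CIContext(MTLDevice: device)
    // Use metalContext to render on that device...
}
```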

  • Creates a Core Image context using the specified Metal device and options.

    Declaration

    Swift

    init(MTLDevice device: MTLDevice, options options: [String : AnyObject]?)

    Objective-C

    + (CIContext *)contextWithMTLDevice:(id<MTLDevice>)device options:(NSDictionary<NSString *,id> *)options

    Parameters

    device

    The Metal device object to use for rendering.

    options

    A dictionary that contains options for creating a CIContext object. You can pass any of the keys defined in “Context Options” along with the appropriate value.

    Return Value

    A Core Image context.

    Discussion

    Use this method to choose a specific Metal device for rendering when a system contains multiple Metal devices. To create a Metal-based context using the system’s default Metal device, use the contextWithOptions: method.

    Availability

    Available in OS X v10.11 and later.

  • Creates a Quartz 2D image from a region of a Core Image image object.

    Declaration

    Swift

    func createCGImage(_ image: CIImage, fromRect fromRect: CGRect) -> CGImage

    Objective-C

    - (CGImageRef)createCGImage:(CIImage *)im fromRect:(CGRect)r

    Parameters

    im

    A Core Image image object.

    r

    The region of the image to render.

    Return Value

    A Quartz 2D image. You are responsible for releasing the returned image when you no longer need it.

    Discussion

    Renders a region of an image into a temporary buffer using the context, then creates and returns a Quartz 2D image with the results.

    Availability

    Available in OS X v10.4 and later.
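As an illustrative sketch, the following renders a 100×100 region of a solid-color CIImage (an infinite-extent image, hence the crop) into a Quartz 2D image; the color and size are arbitrary:

```swift
import CoreImage

let context = CIContext(options: nil)
let red = CIImage(color: CIColor(red: 1, green: 0, blue: 0))
let rect = CGRect(x: 0, y: 0, width: 100, height: 100)

// Render the cropped region; the caller owns the returned CGImage.
let cgImage = context.createCGImage(red.imageByCroppingToRect(rect), fromRect: rect)
```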

  • Creates a Quartz 2D image from a region of a Core Image image object.

    Declaration

    Swift

    func createCGImage(_ image: CIImage, fromRect fromRect: CGRect, format format: CIFormat, colorSpace colorSpace: CGColorSpace?) -> CGImage

    Objective-C

    - (CGImageRef)createCGImage:(CIImage *)im fromRect:(CGRect)r format:(CIFormat)f colorSpace:(CGColorSpaceRef)cs

    Parameters

    im

    A Core Image image object.

    r

    The region of the image to render.

    f

    The format of the image.

    cs

    The color space of the image.

    Return Value

    A Quartz 2D image. You are responsible for releasing the returned image when you no longer need it.

    Discussion

    Renders a region of an image into a temporary buffer using the context, then creates and returns a Quartz 2D image with the results.

    Availability

    Available in OS X v10.5 and later.

  • Creates a CGLayer object from the provided parameters.

    Declaration

    Swift

    func createCGLayerWithSize(_ size: CGSize, info info: CFDictionary?) -> CGLayer

    Objective-C

    - (CGLayerRef)createCGLayerWithSize:(CGSize)size info:(CFDictionaryRef)d

    Parameters

    size

    The size, in default user space units, of the layer relative to the graphics context.

    d

    A dictionary, which is passed to CGLayerCreateWithContext as the auxiliaryInfo parameter. Pass NULL because this parameter is reserved for future use.

    Return Value

    A CGLayer object.

    Discussion

    After calling this method, Core Image draws content into the CGLayer object. Core Image creates a CGLayer object by calling the Quartz 2D function CGLayerCreateWithContext, whose prototype is:

    CGLayerRef CGLayerCreateWithContext (
        CGContextRef context,
        CGSize size,
        CFDictionaryRef auxiliaryInfo
    );

    Core Image passes the CIContext object as the context parameter, the size as the size parameter, and the dictionary as the auxiliaryInfo parameter. For more information on CGLayer objects, see Quartz 2D Programming Guide and CGLayer Reference.

    Availability

    Available in OS X v10.4 and later.

    Deprecated in OS X v10.11.

  • Renders a region of an image to a point in the context destination.

    Deprecation Statement

    Instead use drawImage:inRect:fromRect:.

    Declaration

    Objective-C

    - (void)drawImage:(CIImage *)im atPoint:(CGPoint)p fromRect:(CGRect)src

    Parameters

    im

    A Core Image image object.

    p

    The point in the context destination to draw to.

    src

    The region of the image to draw.

    Discussion

    This method is deprecated because the units of its dimensions are ambiguous, so it doesn't work as expected in a high-resolution environment; use drawImage:inRect:fromRect: instead.

    On iOS platforms, this method draws the image onto a render buffer for the OpenGL ES context. Use this method only if the CIContext object is created with contextWithEAGLContext:, and hence, you are rendering to a CAEAGLLayer.

    Availability

    Available in OS X v10.4 and later.

    Deprecated in OS X v10.8.

  • Renders a region of an image to a rectangle in the context destination.

    Declaration

    Swift

    func drawImage(_ image: CIImage, inRect inRect: CGRect, fromRect fromRect: CGRect)

    Objective-C

    - (void)drawImage:(CIImage *)im inRect:(CGRect)dest fromRect:(CGRect)src

    Parameters

    im

    A CIImage object.

    dest

    The rectangle in the context destination to draw into. The image is scaled to fill the destination rectangle.

    src

    The subregion of the image that you want to draw into the context, with the origin and target size defined by the dest parameter. This rectangle is always in pixel dimensions.

    Discussion

    On iOS, this method draws the CIImage object into a renderbuffer for the OpenGL ES context. Use this method only if the CIContext object is created with contextWithEAGLContext: and if you are rendering to a CAEAGLLayer.

    On OS X, you need to be aware of whether the CIContext object is created with a CGContextRef or a CGLContext object. If you create the CIContext object with a CGContextRef, the dimensions of the destination rectangle are in points. If you create the CIContext object with a CGLContext object, the dimensions are in pixels.

    On iOS 5, this method is synchronous; on iOS 6 and later, it is asynchronous. Apps linked against iOS 5 continue to get the synchronous behavior.

    Availability

    Available in OS X v10.4 and later.
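A short sketch of the scaling behavior described above, assuming a context and image already exist:

```swift
import CoreImage

func drawScaled(context: CIContext, image: CIImage) {
    // src is always in pixels; dest units depend on how the context was
    // created (points for CGContextRef-backed contexts on OS X, pixels
    // for CGLContext-backed ones).
    let src = CGRect(x: 0, y: 0, width: 200, height: 200)
    let dest = CGRect(x: 0, y: 0, width: 400, height: 400)  // scaled 2x
    context.drawImage(image, inRect: dest, fromRect: src)
}
```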

  • Renders to the given bitmap.

    Declaration

    Swift

    func render(_ image: CIImage, toBitmap data: UnsafeMutablePointer<Void>, rowBytes rowBytes: Int, bounds bounds: CGRect, format format: CIFormat, colorSpace colorSpace: CGColorSpace?)

    Objective-C

    - (void)render:(CIImage *)im toBitmap:(void *)data rowBytes:(ptrdiff_t)rb bounds:(CGRect)r format:(CIFormat)f colorSpace:(CGColorSpaceRef)cs

    Parameters

    im

    A Core Image image object.

    data

    Storage for the bitmap data.

    rb

    The bytes per row.

    r

    The bounds of the bitmap data.

    f

    The format of the bitmap data.

    cs

    The color space for the data. Pass NULL if you want to use the output color space of the context.

    Availability

    Available in OS X v10.5 and later.
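The sketch below renders a small solid-color image into a caller-owned RGBA8 byte buffer; the 16×16 size is arbitrary, and rowBytes is width × 4 because kCIFormatRGBA8 uses 4 bytes per pixel:

```swift
import CoreImage

let context = CIContext(options: nil)
let bounds = CGRect(x: 0, y: 0, width: 16, height: 16)
let green = CIImage(color: CIColor(red: 0, green: 1, blue: 0))
    .imageByCroppingToRect(bounds)

let rowBytes = 16 * 4                               // RGBA8: 4 bytes per pixel
var pixels = [UInt8](count: rowBytes * 16, repeatedValue: 0)
context.render(green, toBitmap: &pixels, rowBytes: rowBytes,
    bounds: bounds, format: kCIFormatRGBA8,
    colorSpace: CGColorSpaceCreateDeviceRGB())
```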

  • Renders an image into a pixel buffer.

    Declaration

    Swift

    func render(_ image: CIImage, toCVPixelBuffer buffer: CVPixelBuffer)

    Objective-C

    - (void)render:(CIImage *)image toCVPixelBuffer:(CVPixelBufferRef)buffer

    Parameters

    image

    A Core Image image object.

    buffer

    The destination pixel buffer.

    Availability

    Available in OS X v10.11 and later.

  • Renders a region of an image into a pixel buffer.

    Declaration

    Swift

    func render(_ image: CIImage, toCVPixelBuffer buffer: CVPixelBuffer, bounds bounds: CGRect, colorSpace colorSpace: CGColorSpace?)

    Objective-C

    - (void)render:(CIImage *)image toCVPixelBuffer:(CVPixelBufferRef)buffer bounds:(CGRect)r colorSpace:(CGColorSpaceRef)cs

    Parameters

    image

    A Core Image image object.

    buffer

    The destination pixel buffer.

    r

    The rectangle in the destination pixel buffer to draw into.

    cs

    The color space of the destination pixel buffer.

    Availability

    Available in OS X v10.11 and later.

  • Renders a region of an image into an IOSurface object.

    Declaration

    Swift

    func render(_ image: CIImage, toIOSurface surface: IOSurface, bounds bounds: CGRect, colorSpace colorSpace: CGColorSpace?)

    Objective-C

    - (void)render:(CIImage *)image toIOSurface:(IOSurfaceRef)surface bounds:(CGRect)r colorSpace:(CGColorSpaceRef)cs

    Parameters

    image

    A Core Image image object.

    surface

    The destination IOSurface object.

    r

    The rectangle in the destination IOSurface object to draw into.

    cs

    The color space of the destination IOSurface object.

    Availability

    Available in OS X v10.6 and later.

  • Renders a region of an image to a Metal texture.

    Declaration

    Swift

    func render(_ image: CIImage, toMTLTexture texture: MTLTexture, commandBuffer commandBuffer: MTLCommandBuffer?, bounds bounds: CGRect, colorSpace colorSpace: CGColorSpace)

    Objective-C

    - (void)render:(CIImage *)image toMTLTexture:(id<MTLTexture>)texture commandBuffer:(id<MTLCommandBuffer>)commandBuffer bounds:(CGRect)bounds colorSpace:(CGColorSpaceRef)colorSpace

    Parameters

    image

    A Core Image image object.

    texture

    The destination Metal texture object.

    commandBuffer

    The Metal command buffer into which to schedule Core Image rendering tasks.

    bounds

    The rectangle in the destination texture to draw into.

    colorSpace

    The color space of the destination texture.

    Discussion

    If you specify nil for the commandBuffer parameter, Core Image manages its own Metal command buffer. To combine Core Image rendering with other Metal rendering tasks—for example, to use Core Image filters on textures whose content is generated by a Metal render-to-texture operation, or to use Core Image output later in the same Metal rendering pass—pass the same MTLCommandBuffer object you use for those tasks.

    Availability

    Available in OS X v10.11 and later.
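A sketch of the shared-command-buffer case described above; the device, texture, and image are assumed to exist already, and the sRGB color space is an arbitrary choice:

```swift
import CoreImage
import Metal

func renderIntoSharedPass(context: CIContext, image: CIImage,
                          texture: MTLTexture, commandBuffer: MTLCommandBuffer) {
    let bounds = CGRect(x: 0, y: 0, width: texture.width, height: texture.height)
    context.render(image, toMTLTexture: texture,
        commandBuffer: commandBuffer,       // schedule among other Metal work
        bounds: bounds,
        colorSpace: CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!)
    // Later encoders on the same command buffer can consume `texture`.
}
```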

  • Frees any cached data, such as temporary images, associated with the context and runs the garbage collector.

    Declaration

    Swift

    func clearCaches()

    Objective-C

    - (void)clearCaches

    Discussion

    You can use this method to remove textures from the texture cache that reference deleted images.

    Availability

    Available in OS X v10.4 and later.

  • Runs the garbage collector to reclaim any resources that the context no longer requires.

    Declaration

    Swift

    func reclaimResources()

    Objective-C

    - (void)reclaimResources

    Discussion

    The system calls this method automatically after every rendering operation. You can use this method to remove textures from the texture cache that reference deleted images.

    Availability

    Available in OS X v10.4 and later.

    See Also

    – clearCaches

  • Returns the number of GPUs not currently driving a display.

    Declaration

    Swift

    class func offlineGPUCount() -> UInt32

    Objective-C

    + (unsigned int)offlineGPUCount

    Return Value

    The number of offline GPU devices.

    Discussion

    If this count is greater than zero, the system has attached GPU devices that are not currently driving a display. You can use these devices for Core Image rendering by creating a context with the contextForOfflineGPUAtIndex: or contextForOfflineGPUAtIndex:colorSpace:options:sharedContext: method.

    Availability

    Available in OS X v10.10 and later.
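For example, choosing an offline GPU when one exists and falling back to the default context otherwise (a sketch; index 0 is simply the first offline device):

```swift
import CoreImage

let context: CIContext
if CIContext.offlineGPUCount() > 0 {
    context = CIContext(forOfflineGPUAtIndex: 0)   // valid indices: 0 ..< count
} else {
    context = CIContext(options: nil)              // no offline GPU available
}
```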

  • The working color space of the Core Image context. (read-only)

    Declaration

    Swift

    var workingColorSpace: CGColorSpace { get }

    Objective-C

    @property(nonatomic, readonly) CGColorSpaceRef workingColorSpace

    Discussion

    The working color space determines the color space in which filter kernels execute; Core Image automatically converts between it and the source and destination color spaces of input images and output contexts. You specify a working color space when creating a context by using the kCIContextWorkingColorSpace key in the options dictionary of one of the methods listed in Creating a Context.

    Availability

    Available in OS X v10.11 and later.

  • Keys in the options dictionary for a CIContext object.

    Declaration

    Swift

    let kCIContextOutputColorSpace: String
    let kCIContextWorkingColorSpace: String
    let kCIContextUseSoftwareRenderer: String
    let kCIContextWorkingFormat: String

    Objective-C

    NSString *kCIContextOutputColorSpace;
    NSString *kCIContextWorkingColorSpace;
    NSString *kCIContextUseSoftwareRenderer;
    NSString *kCIContextWorkingFormat;

    Constants

    • kCIContextOutputColorSpace

      kCIContextOutputColorSpace

      A key for the color space to use for images before they are rendered to the context.

      By default, Core Image uses the GenericRGB color space, which leaves color matching to the system. You can specify a different output color space by providing a Quartz 2D CGColorSpace object (CGColorSpaceRef). (See Quartz 2D Programming Guide for information on creating and using CGColorSpace objects.)

      To request that Core Image perform no color management, specify the NSNull object as the value for this key. Use this option for images that don’t contain color data (such as elevation maps, normal vector maps, and sampled function tables).

      Available in OS X v10.4 and later.

    • kCIContextWorkingColorSpace

      kCIContextWorkingColorSpace

      A key for the color space to use for image operations.

      By default, Core Image assumes that processing nodes are 128 bits-per-pixel, linear light, premultiplied RGBA floating-point values that use the GenericRGB color space. You can specify a different working color space by providing a Quartz 2D CGColorSpace object (CGColorSpaceRef). Note that the working color space must be RGB-based. If you have YUV data as input (or other data that is not RGB-based), you can use ColorSync functions to convert to the working color space. (See Quartz 2D Programming Guide for information on creating and using CGColorSpace objects.)

      To request that Core Image perform no color management, specify the NSNull object as the value for this key. Use this option for images that don’t contain color data (such as elevation maps, normal vector maps, and sampled function tables).

      Available in OS X v10.4 and later.

    • kCIContextUseSoftwareRenderer

      kCIContextUseSoftwareRenderer

      A key for enabling software renderer use. If the associated NSNumber object is YES (true in Swift), the software renderer is required.

      Available in OS X v10.4 and later.

    • kCIContextWorkingFormat

      kCIContextWorkingFormat

      An option for the color format to use for intermediate results when rendering with the context.

      The value for this key is an NSNumber object containing a CIFormat value. The default working format is kCIFormatRGBA8 for CPU rendering and kCIFormatRGBAf for GPU rendering. GPU rendering also supports the kCIFormatRGBAh format for greater color precision, but this format requires twice as much memory and can be used only with color management enabled.

      Available in OS X v10.4 and later.

    Discussion

    For a discussion of when to use options and color management, see Core Image Programming Guide.
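Putting the keys together, an illustrative sketch of two option dictionaries: one selecting sRGB output with a half-float working format, and one disabling color management for non-color data:

```swift
import CoreImage

// sRGB output, half-float intermediates (needs color management enabled).
let options: [String: AnyObject] = [
    kCIContextOutputColorSpace: CGColorSpaceCreateWithName(kCGColorSpaceSRGB)!,
    kCIContextWorkingFormat: NSNumber(int: kCIFormatRGBAh)
]
let managedContext = CIContext(options: options)

// No color management, e.g. for normal maps or elevation data.
let unmanagedContext = CIContext(options: [kCIContextWorkingColorSpace: NSNull()])
```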