Convert assets with disparate color spaces and bit depths to a standard working format for applying vImage operations.
- iOS 13.0+
- Xcode 11.3+
To apply a vImage operation to an image, you must know its bit depth (the number of bits required to represent each pixel) and its color space (the number and organization of the color channels the image contains). For example, to apply an affine transform to a 32-bit image, you call
vImageAffineWarp_ARGBFFFF. To apply the same transform to an 8-bit image, you call
vImageAffineWarp_ARGB8888. Writing separate code paths for every combination of bit depth and color space quickly becomes unwieldy. The
vImageConverter.make(sourceFormat:destinationFormat:flags:) function helps solve this issue by enabling you to dynamically create converters based on the properties of a source image. By converting all source assets to a standardized working format, you need only one version of each operation.
This sample uses a vImage tent filter to apply a blur to
UIImage objects of arbitrary formats. Because a
UIImage object can contain image data in many formats and color spaces, this implementation converts all input to 8-bit ARGB format before processing.
The example below shows an original 16-bit CMYK image on the left, and the same image converted to 8-bit ARGB and blurred on the right.
This sample walks you through the steps for applying a blur to an image of any format:
1. Implementing the blurring function.
2. Creating the source and destination image formats.
3. Creating the source and destination buffers.
4. Performing the conversion.
5. Applying the blur operation.
6. Returning the blurred result.
7. Using the blurring function.
Implement the Blurring Function
The code to apply a blur to a
UIImage instance with an arbitrary image format is implemented in
blur. This function accepts three parameters: the image to blur, and the width and height (in pixels) of the blur:
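The sample's exact declaration isn't reproduced here, but a function matching that description might look like the following sketch. The parameter names and the choice of UInt32 for the kernel dimensions are assumptions, not taken from the sample itself:

```swift
import UIKit
import Accelerate

// Hypothetical signature for the blurring function described above.
// The parameter names and types are assumptions.
func blur(image sourceImage: UIImage,
          blurWidth: UInt32,
          blurHeight: UInt32) -> UIImage? {
    // Format creation, buffer setup, conversion, and convolution
    // are covered in the sections that follow.
    return nil
}
```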
Create the Source and Destination Image Formats
To learn how to create a
vImage_CGImageFormat structure from properties derived from the source image, see Creating a Core Graphics Image Format. The properties of the source format are derived from the source image. The properties of the destination format are hardcoded to match the convolution operation used later in the function:
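A sketch of this step, assuming the Swift vImage overlay introduced in iOS 13; the helper name is illustrative, and the destination format is hardcoded to 8-bit-per-channel ARGB to match the convolution used later:

```swift
import UIKit
import Accelerate

// Sketch: derive the source format from the image itself, and hardcode
// an 8-bit ARGB destination format. The function name is illustrative.
func makeFormats(for sourceImage: UIImage) -> (source: vImage_CGImageFormat,
                                               destination: vImage_CGImageFormat)? {
    guard let cgImage = sourceImage.cgImage,
          // The source format mirrors the image's own bit depth and color space.
          let sourceFormat = vImage_CGImageFormat(cgImage: cgImage),
          // The destination format is hardcoded: 8 bits per component,
          // 32 bits per pixel, alpha first — that is, 8-bit ARGB.
          let destinationFormat = vImage_CGImageFormat(
              bitsPerComponent: 8,
              bitsPerPixel: 32,
              colorSpace: CGColorSpaceCreateDeviceRGB(),
              bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.first.rawValue)) else {
        return nil
    }
    return (sourceFormat, destinationFormat)
}
```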
Create the Source and Destination Buffers
With the source and destination image formats defined, you create and initialize a buffer containing the source image, and a buffer that will contain the 8-bit, ARGB conversion of the source image. Be sure to free the memory allocated to these buffers when you’re finished working with them by using the
free() function. Because
blur may exit early, defer both of these free calls to ensure that they’re always called.
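The buffer setup and deferred cleanup might be sketched as follows; the variable names are assumptions, and the guard's early return stands in for the early exits the text mentions:

```swift
import UIKit
import Accelerate

// Sketch of the buffer setup inside the blurring function. If the
// function exits early, the deferred free() calls still run.
func bufferSetupExample(image sourceImage: UIImage) {
    guard let cgImage = sourceImage.cgImage,
          let sourceFormat = vImage_CGImageFormat(cgImage: cgImage),
          // Initialize the source buffer with the image's pixel data,
          // in the image's own format.
          var sourceBuffer = try? vImage_Buffer(cgImage: cgImage,
                                                format: sourceFormat),
          // The destination buffer has the same dimensions but holds
          // 32 bits per pixel — room for the 8-bit ARGB conversion.
          var destinationBuffer = try? vImage_Buffer(width: Int(sourceBuffer.width),
                                                     height: Int(sourceBuffer.height),
                                                     bitsPerPixel: 32) else {
        return
    }
    defer {
        // Deferred so the memory is released on every exit path.
        sourceBuffer.free()
        destinationBuffer.free()
    }
    // ... conversion and convolution happen here ...
    _ = destinationBuffer
}
```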
Perform the Conversion
Use the source and destination formats to create a converter using the
make(sourceFormat:destinationFormat:flags:) function. The converter's
convert(source:destination:) method performs the conversion.
On return, your destination buffer contains the image in 8-bit ARGB format.
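This step might be sketched as follows, assuming the formats and buffers from the previous steps are in scope; the function wrapper is illustrative:

```swift
import Accelerate

// Sketch: build an any-to-any converter for this exact format pair,
// then run the conversion into the 8-bit ARGB buffer.
func convertExample(sourceFormat: vImage_CGImageFormat,
                    destinationFormat: vImage_CGImageFormat,
                    sourceBuffer: vImage_Buffer,
                    destinationBuffer: inout vImage_Buffer) throws {
    // The converter is tailored to the source image's actual format.
    let converter = try vImageConverter.make(sourceFormat: sourceFormat,
                                             destinationFormat: destinationFormat)
    // Perform the pixel conversion; on return, destinationBuffer
    // holds the image as 8-bit ARGB.
    try converter.convert(source: sourceBuffer,
                          destination: &destinationBuffer)
}
```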
Apply the Blur Operation
Create a blurred version of the converted image using vImage’s tent filter. This filter calculates a weighted average of pixels within a surrounding grid, known as a kernel, with a size specified by the
blur parameters. The tent filter uses a fast algorithm that’s suited for real-time applications.
Create a buffer to receive the tent filter's result using the same technique that you used for the other buffers. Be sure to free the buffer's memory when you're finished using it.
The width and height values passed to
vImageTentConvolve_ARGB8888 must be odd so that the center of the kernel aligns with each pixel. To guarantee that the kernel sizes passed to the convolve function are odd, calculate odd values by adding one to any even values passed into the function.
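The odd-kernel adjustment and the tent filter call might look like this sketch; the buffer and parameter names are assumptions:

```swift
import Accelerate

// Sketch: force the kernel dimensions to be odd, then apply the
// tent filter. Names are illustrative.
func applyTentFilter(source: inout vImage_Buffer,
                     destination: inout vImage_Buffer,
                     blurWidth: UInt32,
                     blurHeight: UInt32) {
    // Bitwise OR with 1 adds one to any even value and leaves
    // odd values unchanged.
    let oddWidth = blurWidth | 1
    let oddHeight = blurHeight | 1

    // kvImageEdgeExtend replicates edge pixels, so no background
    // color is needed.
    let error = vImageTentConvolve_ARGB8888(&source,
                                            &destination,
                                            nil,      // let vImage manage temp storage
                                            0, 0,     // region-of-interest offsets
                                            oddHeight,
                                            oddWidth,
                                            nil,      // background color (unused here)
                                            vImage_Flags(kvImageEdgeExtend))
    assert(error == kvImageNoError)
}
```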
Return the Blurred Result
When
vImageTentConvolve_ARGB8888 has completed, the blur destination buffer contains a blurred version of the original image. Your function creates a
CGImage representation from the blurred result and returns a
UIImage instance from that.
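This final step might be sketched as follows, assuming the 8-bit ARGB destination format defined earlier; the wrapper function is illustrative:

```swift
import UIKit
import Accelerate

// Sketch: turn the blurred buffer back into a UIImage using the
// 8-bit ARGB destination format defined earlier.
func makeImage(from blurredBuffer: vImage_Buffer,
               format destinationFormat: vImage_CGImageFormat) -> UIImage? {
    // createCGImage builds a CGImage that describes the buffer's
    // pixels using the supplied format.
    guard let cgImage = try? blurredBuffer.createCGImage(format: destinationFormat) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```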
Use the Blurring Function
This code shows an example usage of the blur function:
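A hypothetical call site, assuming the blur function described in this article; the asset name, kernel sizes, and image view are assumptions:

```swift
import UIKit

// Hypothetical usage of the article's blur function. The asset name
// "Landscape" and the kernel size of 48 are assumptions.
func showBlurredImage(in imageView: UIImageView) {
    guard let sourceImage = UIImage(named: "Landscape"),
          let blurredImage = blur(image: sourceImage,
                                  blurWidth: 48,
                                  blurHeight: 48) else {
        return
    }
    imageView.image = blurredImage
}
```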