Use vImage’s convert-any-to-any function to perform real-time image processing of video frames streamed from your device’s camera.
- iOS 11.2+
- Xcode 10.0+
You can combine vImage operations with AVFoundation to process individual video frames in real time. This sample describes how to use the `vImageConvert_AnyToAny(_:_:_:_:_:)` function to create a single RGB image from the separate luminance (Y) and two-channel chrominance (CbCr) channels provided by an `AVCaptureSession`.
The example below shows how an image (top) is split into luminance (bottom left), Cb chrominance (bottom middle), and Cr chrominance (bottom right) channels:
This sample walks you through the steps for applying a histogram equalization operation to a video sample buffer:
1. Defining reusable variables.
2. Creating a converter.
3. Creating source vImage buffers.
4. Initializing a destination buffer.
5. Converting the video frame to an ARGB8888 image.
6. Applying a histogram equalization operation to the destination image.
7. Creating a displayable `UIImage` from the destination buffer.
For a complete discussion of how to manage the capture from a device such as a camera, see Still and Video Media Capture in the AVFoundation Programming Guide. For this sample, use this code to configure and start running a session:
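A minimal sketch of that configuration, assuming a planar YpCbCr pixel format; the queue label and `delegate` parameter are illustrative, not part of the original sample:

```swift
import AVFoundation

let captureSession = AVCaptureSession()

func configureSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) {
    captureSession.sessionPreset = .photo

    guard
        let camera = AVCaptureDevice.default(for: .video),
        let input = try? AVCaptureDeviceInput(device: camera)
    else {
        return
    }
    captureSession.addInput(input)

    let videoOutput = AVCaptureVideoDataOutput()
    // Request a planar YpCbCr format so frames arrive as separate
    // luminance and chrominance planes.
    videoOutput.videoSettings =
        [kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
    videoOutput.setSampleBufferDelegate(delegate,
                                        queue: DispatchQueue(label: "video"))
    captureSession.addOutput(videoOutput)

    captureSession.startRunning()
}
```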
After the capture session starts running, `captureOutput(_:didOutput:from:)` is called for each new video frame. Before you pass the pixel buffer (which contains the image data for the video frame) to vImage for processing, lock it to ensure that your processing code has exclusive access. When you’re finished with the pixel buffer, unlock it.
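The lock-and-unlock pattern in the delegate callback might look like this sketch, where `process(_:)` is a hypothetical helper standing in for the vImage work described in the following sections:

```swift
import AVFoundation

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }

    // Lock for exclusive access while vImage reads the planes…
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)

    process(pixelBuffer)

    // …and unlock once processing is finished.
    CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly)
}
```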
Define Reusable Variables
Reusing vImage buffers is especially important when working with video. If you try to reallocate and zero-fill the buffers with each frame, you’re likely to experience performance issues. To enable buffer reuse, declare the buffers outside of the `captureOutput(_:didOutput:from:)` method of the sample buffer’s delegate. (The same is true for the converter that defines the source and destination types for the convert-any-to-any function.)
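A sketch of those declarations at instance scope, using the `source` and `destination` names this article refers to:

```swift
import Accelerate

// Declared outside the delegate callback so the converter and
// buffers survive across frames instead of being re-created.
var converter: vImageConverter?
var source = [vImage_Buffer]()
var destination = vImage_Buffer()
```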
Create a Core Video-to-Core Graphics Converter
vImage’s convert-any-to-any function requires a converter that describes the source and destination formats. In this example, you’re converting from a Core Video pixel buffer to a Core Graphics image, so you use the `vImageConverter_CreateForCVToCGImageFormat(_:_:_:_:_:)` function to create a converter. You derive the source Core Video image format from the pixel buffer with `vImageCVImageFormat_CreateWithCVPixelBuffer(_:)`. If the error passed to `vImageConverter_CreateForCVToCGImageFormat(_:_:_:_:_:)` is `kvImageNoError`, the unmanaged converter that the function returns isn’t `nil`, and you can force unwrap it using `takeRetainedValue()`.
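A sketch of converter creation along these lines; the ARGB8888 destination format shown here is an assumption of this example:

```swift
import Accelerate
import CoreVideo

func makeConverter(for pixelBuffer: CVPixelBuffer) -> vImageConverter? {
    // Destination: 8-bit, 4-channel ARGB (an assumption of this sketch).
    var cgImageFormat = vImage_CGImageFormat(
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        colorSpace: Unmanaged.passRetained(CGColorSpaceCreateDeviceRGB()),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue),
        version: 0,
        decode: nil,
        renderingIntent: .defaultIntent)

    // Derive the source format from the pixel buffer itself.
    let cvImageFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer)
        .takeRetainedValue()
    vImageCVImageFormat_SetColorSpace(cvImageFormat,
                                      CGColorSpaceCreateDeviceRGB())

    var error = kvImageNoError
    let unmanagedConverter = vImageConverter_CreateForCVToCGImageFormat(
        cvImageFormat,
        &cgImageFormat,
        nil,                            // no background color
        vImage_Flags(kvImageNoFlags),
        &error)

    guard error == kvImageNoError else { return nil }
    return unmanagedConverter?.takeRetainedValue()
}
```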
Create and Reuse the Source Buffers
The converter may require more than one source or destination buffer. The number of buffers a converter requires is returned by `vImageConverter_GetNumberOfSourceBuffers(_:)` and `vImageConverter_GetNumberOfDestinationBuffers(_:)`.
In this example, the converter requires two source buffers that represent the separate luminance and chrominance planes. Because `source` is initialized as an empty array, the following code creates the correct number of buffers for the converter on the first pass of `captureOutput(_:didOutput:from:)`:
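That first-pass creation might be sketched as follows, with a hypothetical helper that builds one empty buffer per plane the converter expects:

```swift
import Accelerate

// Create as many empty source buffers as the converter requires
// (two for this YpCbCr example: luminance and chrominance).
func makeEmptySourceBuffers(for converter: vImageConverter) -> [vImage_Buffer] {
    let count = vImageConverter_GetNumberOfSourceBuffers(converter)
    return (0 ..< count).map { _ in vImage_Buffer() }
}
```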
You can query the type and order of the buffers required by a converter by using the `vImageConverter_GetSourceBufferOrder(_:)` and `vImageConverter_GetDestinationBufferOrder(_:)` functions. In this example, the source buffer order is the luminance plane followed by the chrominance plane.
Initialize the Source Buffers
The `vImageBuffer_InitForCopyFromCVPixelBuffer(_:_:_:_:)` function accepts the array of source buffers and initializes them in the correct order for conversion. You must pass the `kvImageNoAllocate` flag so that the function initializes the buffers to read directly from the locked pixel buffer.
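A sketch of that initialization call:

```swift
import Accelerate
import CoreVideo

// Passing kvImageNoAllocate makes the buffers point into the locked
// pixel buffer's planes rather than allocating and copying.
func initializeSource(_ source: inout [vImage_Buffer],
                      converter: vImageConverter,
                      pixelBuffer: CVPixelBuffer) -> vImage_Error {
    vImageBuffer_InitForCopyFromCVPixelBuffer(
        &source,
        converter,
        pixelBuffer,
        vImage_Flags(kvImageNoAllocate))
}
```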
Initialize the Destination Buffer
Check the `data` property of the destination buffer you instantiated earlier to find out whether it needs to be initialized. The destination buffer will contain the RGB image after conversion. This code initializes `destination` on the first pass and sets its size to match the luminance plane of the pixel buffer:
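A sketch of that one-time initialization, sized from plane 0 (the luminance plane) at 32 bits per pixel for ARGB8888:

```swift
import Accelerate
import CoreVideo

func initializeDestination(_ destination: inout vImage_Buffer,
                           pixelBuffer: CVPixelBuffer) {
    // A nil data pointer means the buffer hasn't been initialized yet.
    guard destination.data == nil else { return }

    vImageBuffer_Init(
        &destination,
        vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)),
        vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)),
        32,                             // bits per pixel for ARGB8888
        vImage_Flags(kvImageNoFlags))
}
```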
Note that the luminance and chrominance planes aren’t necessarily the same size. Chroma subsampling saves bandwidth by storing chroma information at a lower resolution than luminance. For example, a 2732 x 2048 pixel buffer can have a chroma plane that’s 1366 x 1024.
Convert YpCbCr Planes to RGB
With the converter, source buffers, and destination buffer prepared, you’re ready to convert the luminance and chrominance buffers in `source` to the single RGB buffer, `destination`. The `vImageConvert_AnyToAny(_:_:_:_:_:)` function accepts the converter and buffers, and populates the destination buffer with the conversion result:
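A sketch of the conversion call; passing `nil` for the temporary buffer lets vImage manage any scratch storage it needs:

```swift
import Accelerate

func convert(with converter: vImageConverter,
             source: inout [vImage_Buffer],
             destination: inout vImage_Buffer) -> vImage_Error {
    vImageConvert_AnyToAny(converter,
                           &source,
                           &destination,
                           nil,          // vImage allocates temporary storage
                           vImage_Flags(kvImageNoFlags))
}
```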
Apply an Operation to the RGB Image
The destination buffer now contains an RGB representation of the video frame, and you can apply vImage operations to it. In this example, a histogram equalization operation transforms the image so that it has a more uniform histogram, adding detail to low-contrast areas of the image.
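A sketch of an in-place equalization, leaving the alpha channel unchanged:

```swift
import Accelerate

// Histogram equalization can run in place, with the same buffer as
// source and destination.
func equalize(_ destination: inout vImage_Buffer) -> vImage_Error {
    vImageEqualization_ARGB8888(&destination,
                                &destination,
                                vImage_Flags(kvImageLeaveAlphaUnchanged))
}
```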
Display the Result
To display the processed image to the user, create a Core Graphics image from `destination`, and initialize a `UIImage` instance from that. The `vImageCreateCGImageFromBuffer(_:_:_:_:_:_:)` function returns an unmanaged `CGImage` instance based on the supplied buffer and the same format you used earlier. Because `captureOutput(_:didOutput:from:)` runs on a background thread, you must dispatch the call that updates the image view to the main thread.
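A sketch of the display step; the `imageView` and the `format` value are assumptions carried over from the converter-creation step:

```swift
import UIKit
import Accelerate

func display(_ destination: inout vImage_Buffer,
             format: inout vImage_CGImageFormat,
             in imageView: UIImageView) {
    var error = kvImageNoError
    let cgImage = vImageCreateCGImageFromBuffer(&destination,
                                                &format,
                                                nil,   // no cleanup callback
                                                nil,   // no callback user data
                                                vImage_Flags(kvImageNoFlags),
                                                &error)

    if let cgImage = cgImage, error == kvImageNoError {
        // The delegate callback runs on a background queue; UI work
        // belongs on the main queue.
        DispatchQueue.main.async {
            imageView.image = UIImage(cgImage: cgImage.takeRetainedValue())
        }
    }
}
```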
Free the Buffer Memory
After you’re finished with the destination buffer, it’s important that you free the memory allocated to it, as shown here. Because the source buffers were initialized with the `kvImageNoAllocate` flag, you don’t need to free their data.
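A sketch of that cleanup in a `deinit`, assuming a hypothetical owning class:

```swift
import Foundation
import Accelerate

class Processor {
    var destination = vImage_Buffer()

    deinit {
        // Only the destination buffer owns its memory; buffers
        // initialized with kvImageNoAllocate do not, so only this
        // pointer is freed. free(nil) is safe if init never ran.
        free(destination.data)
    }
}
```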