- iOS 10.3+
- tvOS 10.2+
- macOS 10.12+
- Xcode 10.0+ Beta
In the Basic Buffers sample, you learned how to render basic geometry in Metal.
In this sample, you’ll learn how to render a 2D image by applying a texture to a single quad. In particular, you’ll learn how to configure texture properties, interpret texture coordinates, and access a texture in a fragment function.
Images and Textures
A key feature of any graphics technology is the ability to process and draw images. Metal supports this feature in the form of textures that contain image data. Unlike regular 2D images, textures can be used in more creative ways and applied to more surface types. For example, textures can be used to displace select vertex positions, or they can be completely wrapped around a 3D object. In this sample, image data is loaded into a texture, applied to a single quad, and rendered as a 2D image.
Load Image Data
The Metal framework doesn’t provide an API that directly loads image data from a file to a texture. Instead, Metal apps or games rely on custom code or other frameworks, such as Image I/O, MetalKit, UIKit, or AppKit, to handle image files. Metal itself only allocates texture resources and then populates them with image data that was previously loaded into memory.
In this sample, for simplicity, the custom AAPLImage class loads image data from a TGA file into memory.
This sample uses the TGA file format for its simplicity. The file consists of a header describing metadata, such as the image dimensions, and the image data itself. The key takeaway from this file format is the memory layout of the image data; in particular, the layout of each pixel.
Metal requires all textures to be formatted with a specific MTLPixelFormat value. The pixel format describes the layout of each of the texture’s pixels (its texels). To populate a Metal texture with image data, the pixel data must already be in a Metal-compatible pixel format, defined by a single MTLPixelFormat enumeration value. This sample uses the MTLPixelFormatBGRA8Unorm pixel format, which indicates that each pixel has the following memory layout:
This pixel format uses 32 bits per pixel, arranged into 8 bits per component, in blue, green, red, and alpha order. TGA files that use 32 bits per pixel are already arranged in this format, so no further conversion operations are needed. However, this sample uses a 24-bit-per-pixel BGR image that needs an extra 8-bit alpha component added to each pixel. Because alpha typically defines the opacity of an image and the sample’s image is fully opaque, the additional 8-bit alpha component of a 32-bit BGRA pixel is set to 255.
After the AAPLImage class loads an image file, the formatted image data is accessible through a query to the class’s data property.
Create a Texture
An MTLTextureDescriptor object is used to configure properties, such as texture dimensions and pixel format, for an MTLTexture object. The newTextureWithDescriptor: method is then called to create an empty texture container and allocate enough memory for the texture data.
Unlike MTLBuffer objects, which store many kinds of custom data, MTLTexture objects are used specifically to store formatted image data. Although an MTLTextureDescriptor object specifies enough information to allocate texture memory, additional information is needed to populate the empty texture container. An MTLTexture object is populated with image data by the replaceRegion:mipmapLevel:withBytes:bytesPerRow: method.
Image data is typically organized in rows. This sample calculates the number of bytes per row as the number of bytes per pixel multiplied by the image width. This type of image data is considered to be tightly packed because the data of subsequent pixel rows immediately follows the previous row.
Textures have known dimensions that can be interpreted as regions of pixels. An MTLRegion structure is used to identify a specific region of a texture. This sample populates the entire texture with image data; therefore, the region of pixels that covers the entire texture is equal to the texture’s dimensions.
The number of bytes per row and the specific pixel region are required arguments for populating an empty texture container with image data. Calling the replaceRegion:mipmapLevel:withBytes:bytesPerRow: method performs this operation by copying the pixel data from the image’s data pointer into the texture.
The main task of the fragment function is to process incoming fragment data and calculate a color value for the drawable’s pixels. The goal of this sample is to display the color of each texel on the screen by applying a texture to a single quad. Therefore, the sample’s fragment function must be able to read each texel and output its color.
A texture can’t be rendered on its own; it must correspond to some geometric surface that’s output by the vertex function and turned into fragments by the rasterizer. This relationship is defined by texture coordinates: floating-point positions that map locations on a texture image to locations on a geometric surface.
For 2D textures, texture coordinates are values from 0.0 to 1.0 in both x and y directions. A value of (0.0, 0.0) maps to the texel at the first byte of the image data (the bottom-left corner of the image). A value of (1.0, 1.0) maps to the texel at the last byte of the image data (the top-right corner of the image). Following these rules, accessing the texel in the center of the image requires specifying a texture coordinate of (0.5, 0.5).
Map the Vertex Texture Coordinates
To render a complete 2D image, the texture that contains the image data must be mapped onto vertices that define a 2D quad. In this sample, each of the quad’s vertices specifies a texture coordinate that maps the quad’s corners to the texture’s corners.
The vertexShader vertex function passes these values along the pipeline by writing them into the textureCoordinate member of the RasterizerData output structure. These values are interpolated across the quad’s triangle fragments, similar to the interpolated color values in the Hello Triangle sample.
The signature of the samplingShader fragment function includes the colorTexture argument, which has a texture2d type and uses the [[texture(index)]] attribute qualifier. This argument is a reference to an MTLTexture object and is used to read its texels.
Reading a texel is also known as sampling. The fragment function uses the built-in texture sample() function to sample texel data. The sample() function takes two arguments: a sampler (textureSampler) and a texture coordinate (in.textureCoordinate). The sampler is used to calculate the color of a texel, and the texture coordinate is used to locate a specific texel.
When the area being rendered to isn’t the same size as the texture, the sampler can use different algorithms to calculate exactly what texel color the sample() function should return. The mag_filter mode specifies how the sampler should calculate the returned color when the area is larger than the texture; the min_filter mode specifies how the sampler should calculate the returned color when the area is smaller than the texture. Setting the linear mode for both filters makes the sampler average the color of the texels surrounding the given texture coordinate, resulting in a smoother output image.
Set a Fragment Texture
This sample uses the AAPLTextureIndexBaseColor index to identify the texture in both the Objective-C and Metal Shading Language code. Fragment functions also take arguments similarly to vertex functions: you call the setFragmentTexture:atIndex: method to set a texture at a specific index.
In this sample, you learned how to render a 2D image by applying a texture to a single quad.
In the Hello Compute sample, you’ll learn how to execute compute-processing workloads in Metal for image processing.