Improving Filtering Quality and Sampling Performance

Provide multiple levels of detail for your textures by using mipmaps.


Texture sampling in GPU hardware works best when the texture's dimensions are close to those of the rendered image. In that case, the texture is sampled at a frequency similar to that of the source data, and linear or bilinear filtering smooths out any remaining imperfections.

If the rendered image is much smaller than the texture, then to filter the source pixels down to a correct destination color, the GPU would need to fetch many pixels from the texture. For example, in the image below, for each pixel in the output, the GPU would need to fetch and blend together a quarter of the pixels in the texture. But GPUs don't work that way: with bilinear filtering, the GPU samples and blends only four pixels at a time. Because you usually can't control precisely which pixels are fetched, the GPU may produce an incorrect image, and the image may shimmer or change as you animate the content.

A figure showing a section of a larger texture being applied to a small primitive, with poor results.

To solve this problem, GPUs use mipmaps, sometimes called levels of detail (LOD). Mipmaps are progressively smaller versions of the same texture image, with contents already generated at the proper sizes, as shown in the figure below. The complete set of mipmaps in a texture is sometimes called the mipmap chain.

The mipmaps are numbered: mipmap 0, the full-size image, is at the top of the chain. Smaller mipmaps have larger indices and are lower on the chain. Each level is half the size of the previous level, down to a minimum of 1 pixel in each dimension.
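Under those rules, the number of levels in a full mipmap chain follows directly from the dimensions of mipmap 0. The sketch below computes it; the helper name `mipmapLevelCount(width:height:)` is illustrative, not part of the Metal API:

```swift
import Foundation

// Number of levels in a full mipmap chain: halve the larger
// dimension until it reaches 1 pixel. Equivalent to
// floor(log2(max(width, height))) + 1.
func mipmapLevelCount(width: Int, height: Int) -> Int {
    var size = max(width, height)
    var levels = 1
    while size > 1 {
        size /= 2      // each level is half the previous one
        levels += 1
    }
    return levels
}

// A 1024x512 texture has 11 levels: the longer axis shrinks
// through 1024, 512, 256, ..., 2, 1.
let levels = mipmapLevelCount(width: 1024, height: 512)
```

Metal performs the same calculation for you when you create a texture with a full mipmap chain, so in practice you rarely compute this by hand.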

When you render using a texture that has mipmaps, if the rendered primitive is significantly smaller than the image in mipmap 0, the GPU can sample pixels from smaller mipmaps instead. In effect, you've already prefiltered the images. And because the smaller mipmaps contain fewer pixels, the GPU samples fewer pixels overall, reducing the memory bandwidth and cache memory needed and further improving performance.

A figure showing a texture with a series of texture mipmaps getting progressively smaller.
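On the sampling side, a sampler state controls how the GPU chooses and combines mipmap levels. The following sketch, again assuming a default Metal device, configures a sampler that blends between adjacent levels and clamps the range of levels it may access:

```swift
import Metal

// Assumes a default Metal device is available.
let device = MTLCreateSystemDefaultDevice()!

let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .linear   // blend pixels within a level
samplerDescriptor.magFilter = .linear
samplerDescriptor.mipFilter = .linear   // also blend between adjacent levels;
                                        // .nearest picks a single level, and
                                        // .notMipmapped ignores the chain

// Optionally restrict which levels the sampler can access.
samplerDescriptor.lodMinClamp = 0
samplerDescriptor.lodMaxClamp = 4       // never sample a level beyond mipmap 4

let sampler = device.makeSamplerState(descriptor: samplerDescriptor)
```

The LOD clamp values shown here are illustrative; the sections below discuss when restricting the accessible range of mipmaps is useful.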



Creating a Mipmapped Texture

Decide whether a texture that you are creating needs mipmaps.

Copying Data into or out of Mipmaps

Specify which mipmaps are affected by the data transfer.

Generating Mipmap Data

Create your mipmaps either when you author content or at runtime.

Adding Mipmap Filtering to Samplers

Specify how the GPU samples mipmaps in your textures.

Mipmap Access

Restricting Access to Specific Mipmaps

Set the range of mipmap levels that a sampler can access.

Determining Which Mipmaps the GPU Will Try to Access

Get information at runtime about mipmaps the GPU will sample.

Dynamically Adjusting Texture Level of Detail

Defer generating or loading larger mipmaps until that level of detail is needed.

See Also

Working with Textures

Creating and Sampling Textures

Load image data into a texture and apply it to a quadrangle.

Understanding Color-Renderable Pixel Format Sizes

Know the size limits of pixel formats used by color render targets in iOS and tvOS GPUs.

Optimizing Texture Data

Optimize a texture’s data to improve GPU or CPU access.

Managing Texture Memory

Take direct control of memory allocation for texture data by using sparse textures.

protocol MTLTexture

A resource that holds formatted image data.

class MTLTextureDescriptor

An object that you use to configure new Metal texture objects.

class MTKTextureLoader

An object that decodes common image formats into Metal textures for use in your app.

class MTLSharedTextureHandle

A texture handle that can be shared across process address space boundaries.

enum MTLPixelFormat

The data formats that describe the organization and characteristics of individual pixels in a texture.