Enumeration Case

MTLPixelFormatDepth16Unorm

A pixel format with one 16-bit normalized unsigned integer component, used for a depth render target.

Declaration

MTLPixelFormatDepth16Unorm = 250
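A minimal sketch of creating a depth render target that uses this format. The 1024×1024 size is arbitrary, and the sketch assumes a system default Metal device is available:

```swift
import Metal

// Create a 2D texture suitable for use as a depth attachment.
// (Requires a Metal-capable device; force-unwrap is for brevity only.)
let device = MTLCreateSystemDefaultDevice()!

let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .depth16Unorm,   // Swift spelling of MTLPixelFormatDepth16Unorm
    width: 1024,
    height: 1024,
    mipmapped: false)
descriptor.usage = .renderTarget
descriptor.storageMode = .private  // depth attachments are typically GPU-only

let depthTexture = device.makeTexture(descriptor: descriptor)
```

Attach the resulting texture to a render pass's `depthAttachment` to render depth at half the memory cost of a 32-bit float depth buffer.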

Discussion

If your app runs on a device with an Apple A8 or earlier GPU, setting a depth bias (setDepthBias:slopeScale:clamp:) on a render target with this format produces incorrect results. If you need to apply a depth bias, use a different depth format.
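The fallback above can be sketched as a small format-selection helper. The `needsDepthBias` flag is a hypothetical app-level setting, not part of the Metal API, and the sketch assumes that a device that doesn't support `MTLGPUFamily.apple3` (A9-class and later) has an A8 or earlier GPU:

```swift
import Metal

// Choose a depth pixel format, avoiding depth16Unorm when depth bias
// is required on older Apple GPUs (A8 and earlier), where it produces
// incorrect results.
func depthFormat(for device: MTLDevice, needsDepthBias: Bool) -> MTLPixelFormat {
    // supportsFamily(.apple3) is false on A8-and-earlier GPUs
    // (assumption: family 3 corresponds to A9-class hardware and newer).
    if needsDepthBias && !device.supportsFamily(.apple3) {
        return .depth32Float   // safe fallback for depth bias
    }
    return .depth16Unorm       // smaller footprint when bias isn't an issue
}
```

On devices where the fallback isn't needed, the 16-bit format halves depth-buffer bandwidth and memory relative to `MTLPixelFormatDepth32Float`.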

See Also

Depth and Stencil Pixel Formats

MTLPixelFormatDepth32Float

A pixel format with one 32-bit floating-point component, used for a depth render target.

MTLPixelFormatStencil8

A pixel format with an 8-bit unsigned integer component, used for a stencil render target.

MTLPixelFormatDepth24Unorm_Stencil8

A 32-bit combined depth and stencil pixel format with a 24-bit normalized unsigned integer for depth and an 8-bit unsigned integer for stencil.

MTLPixelFormatDepth32Float_Stencil8

A 40-bit combined depth and stencil pixel format with a 32-bit floating-point value for depth and an 8-bit unsigned integer for stencil.

MTLPixelFormatX32_Stencil8

A stencil pixel format used to read the stencil value from a texture with a combined 32-bit depth and 8-bit stencil value.

MTLPixelFormatX24_Stencil8

A stencil pixel format used to read the stencil value from a texture with a combined 24-bit depth and 8-bit stencil value.