How do I feed a 2D texture based on MTLPixelFormatBGRA8Unorm to a Compute Shader?

How does one go about feeding a 2D texture based on MTLPixelFormatBGRA8Unorm to a compute shader? I tried what seemed obvious:

texture2d<uchar, ...

texture2d<unsigned char, ...

texture2d<uint8_t, ...

...with the intention of sampling a uchar4 out of the texture, but all three are flagged as errors.

Any help appreciated!

Answered by MikeAlpha in 275612022




If memory serves, you should use float with all unorm textures.

Michal

Mike,


Are you sure about that? When using float with a four-component texture, you have an element size of 16 bytes. My underlying texture, being 32-bit packed BGRA, has a 4-byte element size (a.k.a. pixel).


At the moment I am paying the price of copying the texture to a buffer, since no such restrictions exist on buffers. I am just wondering if there is a more efficient way to access those pixels, one that doesn't require an interim copy to a buffer. It seems crazy that the shading language would not support the most common 2D 8-bit-per-component texture format out there.
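
For reference, the shader side of that buffer workaround can be sketched roughly like this (names are mine, and it assumes the pixels were blitted into a tightly packed buffer and the dispatch grid matches the image dimensions):

    #include <metal_stdlib>
    using namespace metal;

    // Once the pixels sit in a plain buffer, no format restrictions apply,
    // at the price of the extra copy.
    kernel void fromBuffer(device const uchar4 *inPixels  [[buffer(0)]],
                           device uchar4       *outPixels [[buffer(1)]],
                           constant uint       &rowPixels [[buffer(2)]],
                           uint2 gid [[thread_position_in_grid]])
    {
        uint idx = gid.y * rowPixels + gid.x;
        uchar4 bgra = inPixels[idx];   // raw bytes, in B,G,R,A memory order
        outPixels[idx] = bgra.zyxw;    // e.g. reshuffle to R,G,B,A
    }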

Accepted Answer

Yeah, just looked it up. The source, "Metal Shading Language Specification Version 1.2", section 7.7 "Texture Addressing and Conversion Rules" on page 115, reads:

"For textures that have 8-bit, 10-bit or 16-bit normalized unsigned integer pixel values, the texture sample and read functions convert the pixel values from an 8-bit or 16-bit unsigned integer to a normalized single or half-precision floating-point value in the range [0.0 ... 1.0]."


Note that it doesn't mean that a texel (I guess this is what you're referring to when you wrote "element size") is 16 bytes! This is just what the sample and read conversion functions do, and this is how you declare the texture2d argument to the shader to avoid compile problems. They'll read 32 bits, dice them up into four byte values, and convert those to floating-point values within the 0-1 range. And I'd say this is actually sensible: the GPU prefers operating on 32-bit floats anyway, and this way you can use more than one normalized texture format with the same shader code.
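
As a concrete illustration (my own minimal sketch, not from the spec; the kernel and argument names are made up, and it assumes the destination format supports shader writes), the declaration that avoids the compile problems is texture2d<float>:

    #include <metal_stdlib>
    using namespace metal;

    kernel void invertPixels(texture2d<float, access::read>  src [[texture(0)]],
                             texture2d<float, access::write> dst [[texture(1)]],
                             uint2 gid [[thread_position_in_grid]])
    {
        if (gid.x >= src.get_width() || gid.y >= src.get_height())
            return; // guard against a grid larger than the texture

        // read() fetches the packed 32-bit BGRA texel and returns four floats
        // in [0.0, 1.0], in R,G,B,A component order.
        float4 c = src.read(gid);
        dst.write(float4(1.0f - c.rgb, c.a), gid);
    }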


If you really want to read actual integer values out of a normalized texture, then (I've never tried any of these, I just remember them from reading the docs, so take it with a grain of salt) you probably can do one of the following; rough shader-side sketches follow the list:

- Create a "view" of the original texture with a non-normalized integer format (views of a texture share the same memory but allow for different formats, within certain constraints), then read integers from the integer view

- Read the floating-point values and then use the pack/unpack functions (section 5.11.2 of the aforementioned spec) to get the original integers back out of the fp representation
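
Shader-side, the two options might look roughly like this (same grain of salt: sketches with made-up names; the integer view itself would be created host-side with MTLTexture's makeTextureView(pixelFormat:), subject to Metal's format-compatibility rules):

    #include <metal_stdlib>
    using namespace metal;

    // Option 1: read raw integers through an integer-format view of the texture.
    kernel void readViaView(texture2d<uint, access::read> view [[texture(0)]],
                            device uint4 *out [[buffer(0)]],
                            uint2 gid [[thread_position_in_grid]])
    {
        if (gid.x >= view.get_width() || gid.y >= view.get_height())
            return;
        out[gid.y * view.get_width() + gid.x] = view.read(gid); // un-normalized channel values
    }

    // Option 2: read the normalized floats, then repack them into the original
    // bytes with the pack functions from section 5.11.2.
    kernel void readViaPack(texture2d<float, access::read> tex [[texture(0)]],
                            device uint *out [[buffer(0)]],
                            uint2 gid [[thread_position_in_grid]])
    {
        if (gid.x >= tex.get_width() || gid.y >= tex.get_height())
            return;
        // pack_float_to_unorm4x8 maps each [0,1] component back to its 8-bit
        // integer, packing the four of them into one 32-bit word.
        out[gid.y * tex.get_width() + gid.x] = pack_float_to_unorm4x8(tex.read(gid));
    }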


Hope that helps

Michal



Thank you! :-) I feel terrible for overlooking that info.
