How does one go about feeding a 2D texture based on MTLPixelFormatBGRA8Unorm to a compute shader? I tried what seemed obvious:
texture2d<uchar, ...
texture2d<unsigned char, ...
texture2d<uint8_t, ...
...with the intention of sampling a uchar4 out of the texture, but all three are flagged as errors.
Any help appreciated!
Yeah, I just looked it up. The source is the "Metal Shading Language Specification, Version 1.2", section 7.7 "Texture Addressing and Conversion Rules" on page 115, which reads:
"For textures that have 8-bit, 10-bit or 16-bit normalized unsigned integer pixel values, the texture sample and read functions convert the pixel values from an 8-bit or 16-bit unsigned integer to a normalized single or half-precision floating-point value in the range [0.0 ... 1.0]."
Note that this doesn't mean that a texel (I guess this is what you were referring to when you wrote "element size") takes 16 bytes! This is just what the sample and read conversion functions do, and it's why you declare the texture2d argument to your compute shader with a float or half component type (texture2d<float> or texture2d<half>) to avoid the compile errors. They'll read 32 bits, dice them up into four 8-bit values and convert those to floating-point values in the 0-1 range. And I'd say this is actually sensible: the GPU prefers operating on 32-bit floats anyway, and this way you can use more than one normalized texture format with the same shader code.
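For the compute-shader case from the question, a minimal, untested sketch could look like the following (the kernel name, the buffer index and the idea of just copying colors out to a buffer are mine, purely for illustration):

#include <metal_stdlib>
using namespace metal;

// Declare the BGRA8Unorm texture with a float component type; read()
// returns the four 8-bit channels already converted to normalized
// floats in [0.0, 1.0].
kernel void readBGRA(texture2d<float, access::read> inTexture [[texture(0)]],
                     device float4 *outColors [[buffer(0)]],
                     uint2 gid [[thread_position_in_grid]])
{
    // Skip threads that fall outside the texture.
    if (gid.x >= inTexture.get_width() || gid.y >= inTexture.get_height())
        return;

    float4 color = inTexture.read(gid);   // e.g. a stored 0xFF comes back as 1.0
    outColors[gid.y * inTexture.get_width() + gid.x] = color;
}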
If you really want to read the actual integer values out of a normalized texture, then (I've never tried either of these, I just remember them from reading the docs, so take it with a grain of salt) you can probably do one of the following (rough shader-side sketches after the list):
- Create "view" of that original texture with the not normalized integer format (views of a texture share the same memory, but allow for different formats - within certain constraints), then read integers from integer view
- Read the floating-point values and then use the pack/unpack functions (section 5.11.2 of the aforementioned spec) to get the original integers back out of the floating-point representation.
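Untested sketches of the shader side of both ideas (kernel names and buffer indices are made up; for the first option the integer view has to be created on the CPU, e.g. with MTLTexture's newTextureViewWithPixelFormat: and an 8-bit unsigned integer format such as MTLPixelFormatRGBA8Uint, assuming the formats are view-compatible - and since there is no BGRA-ordered uint format, the channels would presumably come back in B, G, R, A order):

#include <metal_stdlib>
using namespace metal;

// Option 1: the texture is bound through an integer "view", so the
// component type is uint and read() returns the raw 0-255 values.
kernel void readViaUintView(texture2d<uint, access::read> inTexture [[texture(0)]],
                            device uint4 *outValues [[buffer(0)]],
                            uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= inTexture.get_width() || gid.y >= inTexture.get_height())
        return;
    uint4 raw = inTexture.read(gid);      // each component is 0...255
    outValues[gid.y * inTexture.get_width() + gid.x] = raw;
}

// Option 2: read normalized floats and pack them back into a 32-bit
// word holding the original four 8-bit values.
kernel void readViaPack(texture2d<float, access::read> inTexture [[texture(0)]],
                        device uint *outPacked [[buffer(0)]],
                        uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= inTexture.get_width() || gid.y >= inTexture.get_height())
        return;
    float4 color = inTexture.read(gid);               // values in [0.0, 1.0]
    uint packed = pack_float_to_unorm4x8(color);      // back to four 8-bit values
    outPacked[gid.y * inTexture.get_width() + gid.x] = packed;
}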
Hope that helps
Michal