Multisampling is pretty well covered, including under its own heading in the documentation for MTLRenderPipelineDescriptor, and that part is fairly straightforward.
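For reference, the setup I mean is roughly the following. This is just a sketch: the sample count of 4 and the shader function names are placeholders, not anything from my actual app.

```swift
import MetalKit

// Hypothetical setup: enable 4x MSAA on a pipeline that renders into an MTKView.
func makeMultisampledPipeline(device: MTLDevice,
                              view: MTKView,
                              library: MTLLibrary) throws -> MTLRenderPipelineState {
    let samples = 4
    precondition(device.supportsTextureSampleCount(samples))

    // The view allocates the intermediate multisample target and resolves it.
    view.sampleCount = samples

    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "vertexMain")
    descriptor.fragmentFunction = library.makeFunction(name: "fragmentMain")
    descriptor.colorAttachments[0].pixelFormat = view.colorPixelFormat
    descriptor.rasterSampleCount = samples   // must match the view's sampleCount
    return try device.makeRenderPipelineState(descriptor: descriptor)
}
```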
But multisampling usually means that only a limited subset of things is sampled multiple times — in a typical implementation, depth, stencil and coverage tests run at the higher resolution but the fragment shader runs only once per output pixel. So you reduce aliasing around things like edges, but polygon interiors are not improved.
Is that what Metal considers to be multisampling and, if so, is true supersampling available, in which everything including the fragment shader occurs at a higher frequency?
If it helps as to motivation: you can imagine my app as presenting a 2D framebuffer that does not necessarily match the output display's resolution or pixel aspect ratio. So I want some filtering on the internal pixels, and if a fixed hardware solution is available then that's definitely worth exploring.
Mine is an app that does partial updates of its display, and it uses an MTKView for output, so I already have a texture holding the current state of the backing buffer that I simply copy to the front. For the time being I'm doing a poor man's supersampling: I maintain that texture at twice the width and height of the target output and sample it linearly for output. Since it isn't mipmapped, each output fragment falls exactly between four texels, so a single linear sample gives me a 2×2 box filter.
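Concretely, the "poor man's" path is nothing more than a linear, non-mipmapped sampler over the double-resolution texture. A sketch of the sampler state (function name illustrative):

```swift
import Metal

// Hypothetical sketch: a linear, non-mipmapped sampler over a 2x-resolution
// backing texture. With the output texture coordinates landing exactly between
// texels, one bilinear fetch averages the surrounding 2x2 quad of texels,
// which is exactly a 2x2 box filter.
func makeBoxFilterSampler(device: MTLDevice) -> MTLSamplerState? {
    let descriptor = MTLSamplerDescriptor()
    descriptor.minFilter = .linear
    descriptor.magFilter = .linear
    descriptor.mipFilter = .notMipmapped   // no mip chain: minification is a plain bilinear fetch
    descriptor.sAddressMode = .clampToEdge
    descriptor.tAddressMode = .clampToEdge
    return device.makeSamplerState(descriptor: descriptor)
}
```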
That's not terrible, but on a lot of hardware supersampling is implemented with a more nuanced sampling grid, so real hardware supersampling would likely be a good first improvement.