Hi,
I'm very, very new to graphics/GPU programming.
I've spent the past couple of days immersing myself in Metal on a fairly simple task that I've had varying levels of success with.
I'm trying to render a waveform. I've put together a couple of convenience classes to pull PCM floats out of different audio files.
I was able to successfully render a live waveform using a fragment/vertex shader combo with the draw primitive set to lineStrip.
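For context, the line-strip version boiled down to mapping each PCM sample to a vertex on the CPU, roughly like this sketch (the names and struct are illustrative, not my actual host code — x sweeps across NDC and y is the raw amplitude):

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y; };

// Illustrative helper: turn N PCM samples into N line-strip vertices in
// normalized device coordinates. x sweeps [-1, 1] left to right; y is
// the sample amplitude (assumed already in [-1, 1]).
std::vector<Vertex> lineStripVertices(const std::vector<float> &samples) {
    std::vector<Vertex> verts;
    const size_t n = samples.size();
    verts.reserve(n);
    for (size_t i = 0; i < n; ++i) {
        float x = (n > 1) ? (2.0f * float(i) / float(n - 1) - 1.0f) : 0.0f;
        verts.push_back({x, samples[i]});
    }
    return verts;
}
```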
I wanted a bit more control over how the wave was rendered though, so I created a compute kernel.
This was interesting because my first attempts rendered something that resembled success but was obviously broken.
I tried to simplify my work by just rendering a simple sine wave. This uncovered quite a few problems.
I was able to render a sine wave generated in the shader, but could not do so using samples generated in CPU land.
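The CPU-side samples are generated with something like this sketch (names and sizes are illustrative — one sample per texture column, a few cycles of a sine in [-1, 1], which is what I then copy into the buffer the kernel reads):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Illustrative CPU-side generation of the `points` buffer: `count`
// samples covering `cycles` full periods of a sine wave in [-1, 1].
std::vector<float> makeSineSamples(size_t count, float cycles) {
    const float kPi = 3.14159265f;
    std::vector<float> samples(count);
    for (size_t i = 0; i < count; ++i) {
        float phase = float(i) / float(count);  // [0, 1)
        samples[i] = std::sin(phase * cycles * 2.0f * kPi);
    }
    return samples;
}
```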
Here's the compute kernel I came up with:
#include <metal_stdlib>
using namespace metal;

float2 normalizedCoords(texture2d<float, access::write> texture, uint2 threadID) {
    float width = texture.get_width();
    float height = texture.get_height();
    float2 uv = float2(threadID) / float2(width, height);
    uv = uv * 2.0 - 1.0;
    return uv;
}

kernel void renderWave(texture2d<float, access::write> output [[texture(0)]],
                       device float *points [[buffer(0)]],
                       const device float &time [[buffer(1)]],
                       uint2 gid [[thread_position_in_grid]]) {
    float2 uv = normalizedCoords(output, gid);
    float amp = points[gid.x + gid.y]; /// This stopped working for some reason :-\
    uv.x += time;
    float b = sin(uv.x * 8.0) / 2;
    float4 color = float4(1, 1, 1, 1);
    float wave = smoothstep(0.0, 0.1, abs(b - sin(uv.y)));
    color.r -= wave;
    color.g -= wave;
    color.b -= wave;
    output.write(color, gid);
}
This produces something closer to what I'm looking for: a wave that can be "styled".
In the above case there's a slight blur.
Anytime I attempt to bring my samples into the mix I'm getting some really strange distortions in the rendered texture.
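One thing I noticed while debugging (and I'm not sure it's the whole story): `points[gid.x + gid.y]` reads a different sample on every row, so each scanline sees a shifted copy of the wave. If I want one sample per column, it seems like the index should depend only on `gid.x`, with something like this mapping when the sample count and texture width differ (sizes here are made-up):

```cpp
#include <cstddef>

// Illustrative mapping from a pixel column to a sample index. The
// index depends only on the column (gid.x), never on the row (gid.y),
// and is clamped so it can't read past the end of the sample buffer.
size_t sampleIndexForColumn(size_t column, size_t textureWidth, size_t sampleCount) {
    if (textureWidth == 0 || sampleCount == 0) return 0;
    size_t idx = column * sampleCount / textureWidth;
    return (idx < sampleCount) ? idx : sampleCount - 1;
}
```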
So now I'm thinking that instead of writing out to the texture in the compute kernel, I should use the kernel to build an array of triangles from the samples. Off the top of my head, I'd imagine each triangle would have a shared vertex (the middle of the screen), and the other two vertices would come from the amplitude of the current sample and the amplitude of the previous sample.
I could then take the output of the compute kernel and pass it to a vertex function.
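A CPU-side sketch of what I mean (purely illustrative — the kernel version would do the same math per thread and write into a vertex buffer instead):

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y; };

// Illustrative triangle builder: for each pair of adjacent samples,
// emit one triangle sharing a vertex at the screen center (0, 0); the
// other two vertices carry the previous and current sample amplitudes.
std::vector<Vertex> waveTriangles(const std::vector<float> &samples) {
    std::vector<Vertex> verts;
    const size_t n = samples.size();
    if (n < 2) return verts;
    verts.reserve((n - 1) * 3);
    for (size_t i = 1; i < n; ++i) {
        float x0 = 2.0f * float(i - 1) / float(n - 1) - 1.0f;
        float x1 = 2.0f * float(i) / float(n - 1) - 1.0f;
        verts.push_back({0.0f, 0.0f});          // shared center vertex
        verts.push_back({x0, samples[i - 1]});  // previous sample
        verts.push_back({x1, samples[i]});      // current sample
    }
    return verts;
}
```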
Like I said, I'm very new to this, so I'm not even sure if this is a sane approach.
Does Metal include any convenience routines for triangle calculation? I've briefly looked at the tessellation stuff in Metal 2, but I'm not sure if that's what I need.
Am I going about this all wrong?