Bindless/GPU-Driven approach with dynamic scenes?

I have been experimenting with different rendering approaches in Metal and am hitting a wall when it comes to reconciling "bindless" or GPU-driven approaches* with a dynamic scene where meshes can be added, removed, and changed. All the examples of such approaches I have found use fixed scenes, where all the data is packed, before the first draw call, into something like a MeshBuffer that holds all scene geometry as Mesh objects (for instance).
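For concreteness, by MeshBuffer I mean something like the sketch below (Mesh/MeshBuffer are names from the examples I have seen, not Metal API): all geometry packed into shared buffers, plus a table of per-mesh records the GPU can index directly.

```swift
import Metal

// Sketch of the "fixed scene" layout: everything packed up front.
struct Mesh {
    var vertexOffset: UInt32  // first vertex in the shared vertex buffer
    var indexOffset: UInt32   // first index in the shared index buffer
    var indexCount: UInt32
}

struct MeshBuffer {
    var vertices: MTLBuffer   // all scene vertices, packed back to back
    var indices: MTLBuffer    // all scene indices
    var meshes: MTLBuffer     // array of Mesh records, one per object
}
```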

I assume that recreating a MeshBuffer from scratch each frame would be possible but completely undesirable, and that there may be clever pointer tricks for updating a MeshBuffer in place. Still, I would like to know whether there is an established or optimal solution to this problem, or whether these approaches are simply incompatible with dynamic geometry. Any example projects I may have missed that do what I am asking would be appreciated, too.

* I know these are not the same, but they seem to share some common characteristics, namely providing your entire geometry to the GPU at once. Looping over an array of meshes and calling drawIndexedPrimitives from the CPU does not pose any such obstacles, but it also precludes some of the benefits of offloading work to the GPU, or of having access to all geometry on the GPU for things like path tracing.
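For concreteness, that CPU loop is roughly the following (a sketch; SceneMesh is a hypothetical per-object record):

```swift
import Metal

// Sketch of the CPU-side loop: every object owns its own buffers,
// and we issue one draw call per object.
struct SceneMesh {
    var vertexBuffer: MTLBuffer
    var indexBuffer: MTLBuffer
    var indexCount: Int
}

func drawScene(_ meshes: [SceneMesh], with encoder: MTLRenderCommandEncoder) {
    for mesh in meshes {
        encoder.setVertexBuffer(mesh.vertexBuffer, offset: 0, index: 0)
        encoder.drawIndexedPrimitives(type: .triangle,
                                      indexCount: mesh.indexCount,
                                      indexType: .uint32,
                                      indexBuffer: mesh.indexBuffer,
                                      indexBufferOffset: 0)
    }
}
```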


Replies

I don't understand. If you are doing ray tracing, there are BLAS and TLAS BVH structures that wrap rigid models and allow a ray to quickly hit triangles and then resolve attributes. The only limitation there is that all positions must be in a single vertex buffer.
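For reference, building a BLAS over one vertex/index buffer pair looks roughly like this in Metal's ray tracing API (a sketch; positions are assumed to be stored as SIMD3<Float> with uint32 indices):

```swift
import Metal

// Sketch: build a primitive acceleration structure (BLAS) on the GPU.
func buildBLAS(device: MTLDevice, queue: MTLCommandQueue,
               vertexBuffer: MTLBuffer, indexBuffer: MTLBuffer,
               triangleCount: Int) -> MTLAccelerationStructure {
    let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
    geometry.vertexBuffer = vertexBuffer
    geometry.vertexStride = MemoryLayout<SIMD3<Float>>.stride
    geometry.indexBuffer = indexBuffer
    geometry.indexType = .uint32
    geometry.triangleCount = triangleCount

    let descriptor = MTLPrimitiveAccelerationStructureDescriptor()
    descriptor.geometryDescriptors = [geometry]

    // Allocate the structure and scratch space at the sizes Metal reports,
    // then encode the build.
    let sizes = device.accelerationStructureSizes(descriptor: descriptor)
    let blas = device.makeAccelerationStructure(size: sizes.accelerationStructureSize)!
    let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize,
                                    options: .storageModePrivate)!

    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeAccelerationStructureCommandEncoder()!
    encoder.build(accelerationStructure: blas, descriptor: descriptor,
                  scratchBuffer: scratch, scratchBufferOffset: 0)
    encoder.endEncoding()
    commandBuffer.commit()
    return blas
}
```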

Yes, I understand that. The part I am stuck on is whether anything like this could be done with procedural geometry, e.g. chunks of terrain that may have a totally different number of vertices than the ones from previous frames. This would result in a vertex array/buffer containing either stale vertices or garbage memory, or it would mean constantly shuffling array elements around to close the gaps and recreating the buffer, which sounds too expensive.

I can solve the problem with a models buffer instead of a vertices buffer, because I can designate the first n elements as representing chunks of terrain and update them as needed. But as far as I can tell, I can do no such thing with a shared vertex buffer.
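Is the answer to sub-allocate ranges out of one large vertex buffer with a free list, so a resized chunk just frees its old range and takes a new one without anything else moving? A rough sketch of what I am imagining (all names are mine):

```swift
import Metal

// Sketch of a free-list sub-allocator over one large shared vertex buffer.
// Holes left by freed chunks are recycled rather than compacted away.
struct BufferRange {
    var offset: Int
    var length: Int
}

final class VertexPool {
    let buffer: MTLBuffer
    private var freeRanges: [BufferRange]

    init(device: MTLDevice, length: Int) {
        buffer = device.makeBuffer(length: length, options: .storageModeShared)!
        freeRanges = [BufferRange(offset: 0, length: length)]
    }

    /// First-fit allocation; returns nil when no free range is big enough.
    func allocate(_ length: Int) -> BufferRange? {
        guard let i = freeRanges.firstIndex(where: { $0.length >= length }) else {
            return nil
        }
        let range = BufferRange(offset: freeRanges[i].offset, length: length)
        freeRanges[i].offset += length
        freeRanges[i].length -= length
        if freeRanges[i].length == 0 { freeRanges.remove(at: i) }
        return range
    }

    /// Return a range to the pool (coalescing adjacent ranges is omitted).
    func free(_ range: BufferRange) {
        freeRanges.append(range)
    }
}
```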

Thanks for your response and I would appreciate any other thoughts. I am indeed a beginner but using Metal has been a treat.

I am not quite sure what exactly the problem is that you are trying to solve, so it's difficult to give recommendations. Generally, if you have truly dynamic geometry (which is kind of difficult for me to imagine: why would your geometry change that radically every frame?), you can either compute the new vertex data on the CPU (I think Apple recommends triple buffering) and issue the appropriate draw call, or use mesh shaders and generate the geometry directly on the GPU.
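The triple-buffering pattern usually looks something like this sketch (buffer size and draw encoding are placeholders):

```swift
import Dispatch
import Metal

// Three copies of the dynamic vertex buffer, rotated each frame, with a
// semaphore so the CPU never overwrites a copy the GPU is still reading.
final class DynamicVertexStream {
    static let maxFramesInFlight = 3

    private let queue: MTLCommandQueue
    private let buffers: [MTLBuffer]
    private let inFlight = DispatchSemaphore(value: DynamicVertexStream.maxFramesInFlight)
    private var frameIndex = 0

    init(device: MTLDevice, queue: MTLCommandQueue, length: Int) {
        self.queue = queue
        self.buffers = (0..<Self.maxFramesInFlight).map { _ in
            device.makeBuffer(length: length, options: .storageModeShared)!
        }
    }

    /// Call once per frame: fill this frame's buffer on the CPU, then encode.
    func draw(fill: (MTLBuffer) -> Void) {
        inFlight.wait()                            // block if 3 frames queued
        let buffer = buffers[frameIndex]
        fill(buffer)                               // CPU writes the new vertices

        let commandBuffer = queue.makeCommandBuffer()!
        // ... encode draws that read `buffer` here ...
        let semaphore = inFlight
        commandBuffer.addCompletedHandler { _ in
            semaphore.signal()                     // GPU is done with this copy
        }
        commandBuffer.commit()
        frameIndex = (frameIndex + 1) % Self.maxFramesInFlight
    }
}
```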

  • Thanks. I'm starting to get an idea.

    I'm new to 3D rendering in general, and all the Metal examples one can find take one of two approaches:

    1. Naive approach: every game object has its own buffer(s), and we loop through them on the CPU and issue a draw call for each. Add/change/remove objects at will.
    2. More modern rendering techniques that fix geometry up front and offload work to the GPU. The geometry is expected to never change.

    My use case is loading/unloading terrain as the camera moves. Will look at triple buffering.

  • To then rephrase my question with that context in mind: "Are there best practices or examples for managing changes to geometry with modern rendering approaches in Metal?" I think I understand the point about triple buffering (three vertex buffers; update one while the previous one is being rendered), but I wonder about the performance cost of having to remake the buffer so often. I will just have to try implementing it. Thank you for the suggestions!
