Hi @HeoJin,
I certainly can't fancy myself an expert in Metal, or in working with LiDAR/point clouds in any way, but the help I received in this thread is what got me headed in the right direction toward understanding how to work with the data being gathered and rendered by the LiDAR scanner and Metal.
My suggestion is to begin with the Visualizing a Point Cloud Using Scene Depth sample project that Apple provides, and have a look at the comments in this thread to get an understanding of where the points are being saved. Namely, this code from @gchiste:
```swift
commandBuffer.addCompletedHandler { [self] _ in
    // Runs once the GPU has finished executing this command buffer,
    // so the particle data is safe to read back on the CPU.
    print(particlesBuffer[9].position) // Prints the 10th particle's position
}
```
If you have a look in Renderer.swift in that referenced sample project, you will find that particlesBuffer is already a variable: a Metal buffer containing an array of ParticleUniforms, which holds each point's position (that is, its coordinate), its color values, and its confidence, along with an index.
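For reference, the struct looks roughly like this. It is actually defined in C in the sample's ShaderTypes.h, so the Swift rendering below is just a sketch; check ShaderTypes.h for the exact fields:
```swift
import simd

// Rough Swift sketch of ParticleUniforms from the sample's ShaderTypes.h;
// field names and layout may differ slightly from the shipping sample.
struct ParticleUniforms {
    var position: simd_float3 // world-space coordinate of the point
    var color: simd_float3    // RGB color sampled from the camera image
    var confidence: Float     // ARKit depth-confidence value for the point
}
```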
What I ended up doing, per my comment to @JeffCloe, is to iterate over the particlesBuffer "array" using currentPointCount, which is another variable you will find in Renderer.swift. As an example:
```swift
for i in 0..<currentPointCount {
    let point = particlesBuffer[i] // one ParticleUniforms value per gathered point
}
```
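Building on that loop, here is a minimal sketch of how you might copy the gathered points into a plain Swift array, filtering out low-confidence points. This assumes you are inside Renderer.swift where particlesBuffer and currentPointCount live, and the threshold of 2 is my own assumption (it corresponds to ARConfidenceLevel.high):
```swift
// A minimal sketch: collect only high-confidence point positions.
// Assumes this runs in Renderer.swift, after the GPU work has completed.
var exportedPoints: [simd_float3] = []
exportedPoints.reserveCapacity(currentPointCount)
for i in 0..<currentPointCount {
    let point = particlesBuffer[i]
    if point.confidence >= 2 { // 2 == ARConfidenceLevel.high (my assumption)
        exportedPoints.append(point.position)
    }
}
```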
Doing that gives you access to each point gathered from the "scan" of the environment. That said, I still have a way to go in learning more about this topic myself, including improving efficiency, but exploring that particlesBuffer really helped me understand what's happening here.