Access to raw LiDAR point cloud

Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines the LiDAR measurements with visual data? In low-light environments the LiDAR scanner should still work and provide depth information, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using:

        // frame.sceneDepth is the fused (LiDAR + RGB) depth map
        guard let frame = arView.session.currentFrame,
              let depthData = frame.sceneDepth?.depthMap else {
            print("Depth data is unavailable.")
            return
        }

but this is the depth data after sensor fusion has occurred, and it fails in low-light conditions.

I don't think there's an API for that, but I'm not entirely sure (so don't count on this). Either way, the sensor fusion algorithms are there to improve the data, so I'm not sure you would get better results from the raw measurements.
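
If the goal is mainly to detect when the fused depth has degraded (for example in low light), the confidenceMap that ships alongside sceneDepth can serve as a proxy. A minimal sketch, assuming an existing ARView named arView; the helper highConfidenceRatio and the idea of using the ratio as a low-light signal are my own, not an official pattern:

    import ARKit

    // Request the fused depth map plus its per-pixel confidence.
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    arView.session.run(configuration)

    // Per frame: the fraction of pixels with high-confidence depth,
    // a rough proxy for how much the fusion is struggling.
    func highConfidenceRatio(in frame: ARFrame) -> Float? {
        guard let confidence = frame.sceneDepth?.confidenceMap else { return nil }
        CVPixelBufferLockBaseAddress(confidence, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(confidence, .readOnly) }
        let width = CVPixelBufferGetWidth(confidence)
        let height = CVPixelBufferGetHeight(confidence)
        let rowBytes = CVPixelBufferGetBytesPerRow(confidence)
        guard let base = CVPixelBufferGetBaseAddress(confidence) else { return nil }
        var high = 0
        for y in 0..<height {
            let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: UInt8.self)
            for x in 0..<width where row[x] == UInt8(ARConfidenceLevel.high.rawValue) {
                high += 1
            }
        }
        return Float(high) / Float(width * height)
    }

You still won't get the 576 raw points this way; it only tells you how much of the fused map ARKit itself trusts.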

“and fails in low light conditions”

Do you think the raw depth data would be better than the sensor-fusion-processed data?

Feedback FB15735753 is filed.

Any data processing (whether in hardware or software) irreversibly loses some of the original information.

The data processing steps:

  1. Acquisition of a ‘sparse’ set of 576 raw LiDAR distance points, even in dark lighting (no API; R1 chip inside?)
  2. Interpolation of the 576 distance points with the RGB image, producing a ‘dense’ 256x192 depthMap image at 60 Hz (API in iOS; see the sketch after this list)
  3. Generating and updating a ‘sparse’ MeshAnchor from the depthMap at about 2 Hz (API in iOS and visionOS).
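
For reference on step 2, the dense depthMap is an ordinary CVPixelBuffer of 32-bit floats in metres, so it can be sampled directly. A minimal sketch (the helper depthInMetres is hypothetical and assumes an ARFrame with sceneDepth available):

    import ARKit

    // Read one depth value (in metres) from the fused depth map.
    func depthInMetres(atColumn x: Int, row y: Int, in frame: ARFrame) -> Float32? {
        guard let depthMap = frame.sceneDepth?.depthMap else { return nil }
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }
        let width = CVPixelBufferGetWidth(depthMap)    // typically 256
        let height = CVPixelBufferGetHeight(depthMap)  // typically 192
        guard (0..<width).contains(x), (0..<height).contains(y),
              let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        // Pixel format is kCVPixelFormatType_DepthFloat32.
        return base.advanced(by: y * rowBytes)
            .assumingMemoryBound(to: Float32.self)[x]
    }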

Review of the data processing:

  • The 576 raw LiDAR distance points are the original data.
  • Object edges and textures cause artefacts in the depthMap image.
  • Low lighting conditions cause the existing original information to be lost.
  • Data density goes sparse -> dense -> sparse.
  • In summary, the 576 raw LiDAR distance points would be preferable to the MeshAnchor (a sketch of reading MeshAnchor vertices follows).
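
And for step 3, reading the vertex positions back out of an ARMeshAnchor on iOS looks roughly like this (a sketch following the documented ARGeometrySource layout; the helper name vertexPositions is mine, and visionOS's MeshAnchor API differs in detail):

    import ARKit

    // Extract anchor-local vertex positions (in metres) from an ARMeshAnchor.
    func vertexPositions(of meshAnchor: ARMeshAnchor) -> [SIMD3<Float>] {
        let vertices = meshAnchor.geometry.vertices  // ARGeometrySource, format .float3
        let pointer = vertices.buffer.contents()
        return (0..<vertices.count).map { index in
            // Each vertex is three packed 32-bit floats at offset + stride * index.
            let v = pointer
                .advanced(by: vertices.offset + vertices.stride * index)
                .assumingMemoryBound(to: (Float, Float, Float).self)
                .pointee
            return SIMD3<Float>(v.0, v.1, v.2)
        }
    }

Even then, the mesh is still derived from the fused depthMap, so none of these routes recover the original 576 raw measurements.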