Raw Lidar Data Access in visionOS 2 with Enterprise API

Hi,

I was wondering if the Enterprise API for visionOS 2 includes access to the raw Lidar data from the Apple Vision Pro, or to any intermediate data representation (like the depthMap shown in this post)? If not, is there any other way to get access to this data?

Thanks in advance!

Hi @bkosa

No, there is no visionOS API that provides access to raw Lidar data or a depth map. That said, SceneReconstructionProvider provides live data about the shape of a person's surroundings. The sample code project Incorporating real-world surroundings in an immersive experience shows how to create collision shapes for a person's surroundings. You may be able to achieve your goal using SceneReconstructionProvider and raycast(from:to:query:mask:relativeTo:). Can you elaborate on your use case so I can point you in the right direction?
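
For context, here's a minimal sketch of how you might run SceneReconstructionProvider and receive mesh anchor updates. Authorization and error handling are simplified, so treat it as a starting point rather than production code:

```swift
import ARKit

// Minimal sketch: run scene reconstruction and observe mesh anchor updates.
let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

func observeSurroundings() async throws {
    guard SceneReconstructionProvider.isSupported else { return }
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // update.anchor is a MeshAnchor whose geometry describes
            // one patch of the person's surroundings.
            print("Mesh anchor \(update.anchor.id): \(update.anchor.geometry.vertices.count) vertices")
        case .removed:
            print("Mesh anchor \(update.anchor.id) removed")
        }
    }
}
```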

Thanks for your suggestion!

Just to confirm: does SceneReconstructionProvider give a mesh of the surrounding environment?

I am trying to detect stairs, and I already have an algorithm that takes a point cloud or a depth map as input. If I understand correctly, I can reconstruct the point cloud from the environmental mesh. Is that the case?

Thanks!

Hi @bkosa

You can generate collision shapes and mesh data using the MeshAnchor instances SceneReconstructionProvider provides. Take a look at ShapeResource.generateStaticMesh(from:) and MeshResource.init(from:). It sounds like you need MeshResource.init(from:). The challenge is that SceneReconstructionProvider provides several instances of MeshAnchor, so if you need a single mesh you will need to combine them yourself. That's possible, but I don't have code offhand to do so. Give it a try and follow up if you get stuck.
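
If your stairs-detection algorithm only needs a point cloud, you may not need to merge MeshResource instances at all; you could read the vertex data directly from each MeshAnchor instead. Here's a rough, untested sketch of that idea. It assumes each vertex is stored as three packed Floats, which is the typical layout, but you should verify the source's format and stride on device:

```swift
import ARKit
import Metal
import simd

// Rough sketch: merge the vertices of several mesh anchors into a single
// world-space point cloud. Assumes each vertex is three packed Floats;
// check `vertices.format` and `vertices.stride` before relying on this.
func pointCloud(from meshAnchors: [MeshAnchor]) -> [SIMD3<Float>] {
    var points: [SIMD3<Float>] = []

    for anchor in meshAnchors {
        let vertices = anchor.geometry.vertices
        let base = vertices.buffer.contents()

        for index in 0..<vertices.count {
            let pointer = base.advanced(by: vertices.offset + index * vertices.stride)
            let position = pointer.assumingMemoryBound(to: (Float, Float, Float).self).pointee

            // Lift the local vertex into world space using the anchor's transform.
            let local = SIMD4<Float>(position.0, position.1, position.2, 1)
            let world = anchor.originFromAnchorTransform * local
            points.append(SIMD3<Float>(world.x, world.y, world.z))
        }
    }
    return points
}
```

You could keep the latest MeshAnchor per anchor identifier from anchorUpdates and pass that collection to a function like this whenever your algorithm needs a fresh point cloud.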
