PlaneDetection, ImageTracking, and SceneReconstruction support on visionOS Simulator NOT WORKING

Hello,

I am developing a visionOS-based application that uses the various ARKit data providers (image tracking, plane detection, scene reconstruction), but these are not supported on the visionOS Simulator. What is the workaround for this issue?

Answered by J0hn in 766462022

At the moment there is no workaround other than receiving a dev kit or purchasing a device when it releases. I am in the same boat as you.

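In the meantime, you can at least keep the app launchable in the Simulator by checking each provider's `isSupported` flag at runtime and only running the providers the current platform offers. Here is a minimal sketch using visionOS ARKit's `PlaneDetectionProvider` and `SceneReconstructionProvider`; the function name and fallback behavior are illustrative, not Apple's recommendation:

```swift
import ARKit

/// Starts an ARKitSession with whichever data providers the current
/// platform supports. In the Simulator the list may end up empty, so the
/// app can still launch and fall back to purely virtual content.
func startSupportedProviders() async throws -> ARKitSession {
    let session = ARKitSession()
    var providers: [any DataProvider] = []

    if PlaneDetectionProvider.isSupported {
        providers.append(PlaneDetectionProvider(alignments: [.horizontal, .vertical]))
    }
    if SceneReconstructionProvider.isSupported {
        providers.append(SceneReconstructionProvider())
    }
    // ImageTrackingProvider.isSupported can be checked the same way; that
    // provider additionally needs reference images from your asset catalog.

    if providers.isEmpty {
        print("No supported ARKit data providers; likely running in the Simulator.")
    } else {
        try await session.run(providers)
    }
    return session
}
```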

Even on a physical device, visionOS processes the primary sensor data internally and exposes only secondary spatial information: ARPlaneAnchor and ARMeshAnchor (plus scene understanding). In the visionOS Simulator, currently no sensor information is provided at all. Although the commercially available Vision Pro contains the primary physical sensors, an RGB camera and a LiDAR 3D camera, only plane and mesh anchors appear to be made available to developers via visionOS ARKit, apparently to protect personal data. Information such as the RGB stream, the LiDAR depth map, face recognition, and human body contours seems not to be provided. Apple has no reason to allow the development of apps that would let users wear a Vision Pro and secretly alter other people's faces and bodies.
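As a concrete illustration of how little is exposed, here is a minimal sketch of consuming plane updates from a `PlaneDetectionProvider` that is already part of a running session; the anchors carry only derived pose, extent, and classification, never camera frames (the function name is illustrative):

```swift
import ARKit

/// Observes plane anchors from a PlaneDetectionProvider that is already
/// part of a running ARKitSession. Only derived spatial data is available,
/// never the underlying RGB or depth frames.
func observePlanes(_ planeData: PlaneDetectionProvider) async {
    for await update in planeData.anchorUpdates {
        switch update.event {
        case .added, .updated:
            print("Plane \(update.anchor.id): \(update.anchor.classification)")
        case .removed:
            print("Plane \(update.anchor.id) removed")
        }
    }
}
```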

File a report in Feedback Assistant requesting that the visionOS Simulator also support these providers with virtual (simulated) sensor data.

Still not working in beta 4. Still haven't heard back on devkit application.

My feedback says no recent similar reports. FB12639395.
