Align a virtual copy of a real object with the real one

Hi, we are currently trying to implement a very simple test application on Vision Pro. We display a virtual copy of an object (based on CAD data) and then try to align the real object with the virtual one. It seems to be impossible! You can align them to a certain degree, but if you walk around the object to check the alignment, reality appears to warp and wobble by almost 2 cm.

Is there any way to fix this?

Your target problem is at the core of spatial computing: Apple Vision Pro seamlessly blends digital content with your physical space. https://www.apple.com/apple-vision-pro/

Both the known/controllable digital content and the unknown/uncontrollable physical object's surface have a shape, a size, and a 6DoF pose.

To seamlessly blend digital content with a physical object's surface, the shape, size, and 6DoF pose of that surface must be accurately recognized and measured in real time.
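Concretely, a 6DoF pose is just a rigid transform: 3 rotational plus 3 translational degrees of freedom, usually packed into a 4x4 matrix. A minimal Swift/simd sketch with made-up numbers, only to illustrate what "measuring the pose" has to deliver:

```swift
import simd

// 6DoF = 3 rotational + 3 translational degrees of freedom,
// commonly stored as a 4x4 rigid transform (rotation R, translation t).
let rotation = simd_quatf(angle: .pi / 2, axis: SIMD3<Float>(0, 1, 0)) // 90° about Y
let translation = SIMD3<Float>(0.10, 0.00, -0.50)                      // metres

var worldFromModel = simd_float4x4(rotation)            // upper-left 3x3 = R
worldFromModel.columns.3 = SIMD4<Float>(translation, 1) // last column   = t

// Map a CAD-model vertex into world coordinates with the measured pose.
let modelVertex = SIMD4<Float>(0.05, 0.02, 0.00, 1)
let worldVertex = worldFromModel * modelVertex
```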

Virtual ads inside Chungmuro station Line 3 - iPhone 12 Pro https://youtu.be/BmKNmZCiMkw

FindSurface 1 - Apple Vision Pro https://youtu.be/p5msrVsEpa0

Hi again, and thank you for your answer. You are outlining solutions for registration and the algorithms involved.

However, our problem actually starts one step earlier. Let me clarify: we want to MANUALLY align a virtual object with its real counterpart, simply by moving the virtual object until the two match on a table. We wanted to test how precise you can be. (Ideally, one can align the two objects intuitively, using only the visualization, to within 1 millimeter.) In the process you would walk around the object to check from all sides whether they really match precisely.

However, what we found is this: if you try to match the objects based only on the visualization, reality is displayed with a (for us) non-deterministic distortion as we walk around the object. The virtual object seems to stay rock-solid, but reality seems to wobble by around 2 cm. So you basically can never finish the task, because when the objects match from one perspective they no longer match from another.

Performing this use case manually, or just verifying the match from different perspectives using only the visualization, is very interesting to us.

We know this problem from other headsets as well. For example, the Quest Pro performed very well here (not talking about image quality, just the "wobbling"), and we are pretty surprised that the Vision Pro does so poorly in this regard.

We currently use Unity and a shared environment. Next we want to try a pure native implementation, but I doubt that this will change anything. (Although I hope there is something we can do on our side to improve the situation.)
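If you do try the native route, one thing that might be worth testing, with no guarantee it removes a wobble that largely comes from the device's world tracking itself, is pinning the virtual copy to an ARKit WorldAnchor placed at the real object, so the content follows ARKit's drift corrections rather than a fixed scene transform. A minimal visionOS sketch, assuming a RealityKit Entity `virtualCopy` built from your CAD data (the class and method names are made up for illustration):

```swift
import ARKit
import RealityKit
import simd

@MainActor
final class AlignmentAnchoring {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    /// Pins `virtualCopy` (already added to the RealityView content) to a
    /// world anchor at `initialPose`, then keeps it glued to ARKit's
    /// drift-corrected estimate of that anchor.
    func pin(_ virtualCopy: Entity, at initialPose: simd_float4x4) async throws {
        try await session.run([worldTracking])

        let anchor = WorldAnchor(originFromAnchorTransform: initialPose)
        try await worldTracking.addAnchor(anchor)

        // Follow anchor updates so the copy tracks ARKit's corrections
        // instead of a fixed transform in scene space.
        Task {
            for await update in worldTracking.anchorUpdates where update.anchor.id == anchor.id {
                virtualCopy.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                               relativeTo: nil)
            }
        }
    }
}
```

Whether this noticeably reduces the ~2 cm wobble is unclear; at least the copy is then re-localized the same way ARKit re-localizes the room, instead of inheriting whatever drift has accumulated in the scene origin.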

A solution would be:

  1. Align the virtual object horizontally and vertically to the real object from your point of view.
  2. Move to a position orthogonal to your original line of sight.
  3. Align the virtual object to the real object while letting the virtual object move only along your original line of sight (a small sketch of this constraint follows the list).
  4. Repeat if necessary.
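
A minimal Swift sketch of the constraint in step 3, assuming you stored the object position and the unit line-of-sight direction during step 1; `constrainAlongLineOfSight` is an illustrative helper, not an existing API:

```swift
import simd

/// Constrains a proposed object position so the object can only slide
/// along the line of sight captured in step 1.
/// - Parameters:
///   - proposed: position the user is dragging the virtual object to
///   - anchorPosition: object position fixed during step 1
///   - lineOfSight: unit direction of the original line of sight
func constrainAlongLineOfSight(proposed: SIMD3<Float>,
                               anchorPosition: SIMD3<Float>,
                               lineOfSight: SIMD3<Float>) -> SIMD3<Float> {
    let d = simd_normalize(lineOfSight)
    // Project the requested displacement onto the original viewing ray,
    // discarding any sideways component that would break the step-1 alignment.
    let along = simd_dot(proposed - anchorPosition, d)
    return anchorPosition + along * d
}

// Example: a drag applied while standing orthogonal to the original line
// of sight only moves the object in depth along that original ray.
let constrained = constrainAlongLineOfSight(
    proposed: SIMD3<Float>(0.25, 0.00, -1.10),
    anchorPosition: SIMD3<Float>(0.20, 0.00, -1.00),
    lineOfSight: SIMD3<Float>(0, 0, -1))
// constrained == SIMD3<Float>(0.20, 0.00, -1.10)
```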