Inputs Functionality in visionOS 2.0?

Updates to inputs on Apple Vision Pro let you decide whether the user's hands appear in front of or behind your digital content.


Trying to understand why this is being introduced. Why would one break the spatial experience by forcing the user's hands to appear in front of or behind digital content? Won't this be confusing to users? It should be a natural mixed-reality experience where occlusion occurs as expected: if your physical hand is in front of a virtual object, it remains visible, and likewise, if you move it behind, it disappears (rather than showing a semi-transparent view of your hand through the model).

Answered by Vision Pro Engineer in 790151022

Hello, there is new functionality in visionOS 2.0 with the upperLimbVisibility API that lets you choose whether the system blends the user's hands with your content, renders them on top of the content, or renders them behind it. You can change this to whatever works best for your app.
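A minimal sketch of how this might look in SwiftUI, using the `upperLimbVisibility(_:)` scene modifier. The app and view names here are illustrative placeholders, not from the thread:

```swift
import SwiftUI

@main
struct HandsDemoApp: App {  // hypothetical app name
    var body: some Scene {
        ImmersiveSpace(id: "immersive") {
            ImmersiveView()  // hypothetical RealityKit content view
        }
        // .automatic lets the system decide how to blend hands with content;
        // .visible keeps the user's hands rendered on top of digital content;
        // .hidden renders digital content over the user's hands.
        .upperLimbVisibility(.automatic)
    }
}
```

The modifier takes a standard `Visibility` value, so switching between the three behaviors is a one-line change.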

