Dragging coordinates issue in visionOS

I am attempting to place images on wall anchors and move their position using drag gestures. This seems pretty straightforward if the wall anchor is facing you when you start the app. But if you place an image on a wall anchor to the left, or on the wall behind the original position, then the logic stops working properly. The problem seems to be that the anchor's orientation and the drag's location3D orientation don't coincide once you are dealing with wall anchors that are not facing the user's original position. (Using Xcode beta 8.)

Question:

How do I apply drag gestures to an image regardless of where the wall anchor is located in relation to the user's original facing direction?

Using the following code:

var dragGesture: some Gesture {
    DragGesture(minimumDistance: 0)
        .targetedToAnyEntity()
        .onChanged { value in
            let entity = value.entity
            // Convert the gesture location into the parent's coordinate
            // space, scaled down to damp the movement.
            let convertedPos = value.convert(value.location3D, from: .local, to: entity.parent!) * 0.1
            // Map the converted point onto the wall. This mapping assumes
            // the wall faces the original user position, which is why it
            // breaks for walls to the side or behind.
            entity.position = SIMD3<Float>(x: convertedPos.x, y: 0, z: -convertedPos.y)
        }
}

Hello,

Every AnchorEntity (except for those that have a world target) exists in its own space. It is not possible for you to properly apply a drag to such an entity. Please file an enhancement request using Feedback Assistant for API that would enable this.

Also note that if you use ARKit for anchoring, you receive the anchors in world space, and you would then be able to properly apply the drag. This approach requires user permission.
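
To sketch that ARKit route (a rough illustration, not drop-in code; it assumes an NSWorldSensingUsageDescription entry in Info.plist and omits error handling):

import ARKit
import RealityKit

let session = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.vertical])

func detectWalls() async throws {
    // Running the session prompts the user for world-sensing permission.
    try await session.run([planeDetection])

    for await update in planeDetection.anchorUpdates {
        guard update.anchor.classification == .wall else { continue }
        // originFromAnchorTransform is expressed in world space, so an
        // entity placed with it lives in the same coordinate space as a
        // converted drag location.
        let worldTransform = update.anchor.originFromAnchorTransform
        // e.g. imageEntity.transform = Transform(matrix: worldTransform)
    }
}

Note that this only works on device; plane detection is not available in the simulator.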

Accepted Answer (by J0hn in 763619022)

Please file a feedback on this to increase pressure to get it added. I've done so on an adjacent issue related to non-"world targeting" entities.

The lack of the PlaneDetectionProvider and SceneReconstructionProvider support in the simulator is felt more and more as we run into these issues.

You're probably going through a moment of "What in the world? That wasn't mentioned anywhere!".

And yeah, a lot of the demonstrations use an AnchorEntity of type "plane" to insert a Reality Composer scene into the RealityView at a spot in the world that meets the size criteria, or just call content.add(_:) when the RealityView loads for an immersive scene.
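
That pattern looks roughly like this (the classification and minimum bounds here are illustrative, not from the original post):

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Anchor to any vertical wall at least 1 m x 1 m. The system
            // positions this entity; your code never learns where it is
            // in world space.
            let wallAnchor = AnchorEntity(.plane(.vertical,
                                                 classification: .wall,
                                                 minimumBounds: [1.0, 1.0]))
            wallAnchor.addChild(ModelEntity()) // stand-in for your content
            content.add(wallAnchor)
        }
    }
}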

It's important to note that these are not "world tracked" entities, and they will not give you accurate location3D values for interactions.

We can use "rotate" and "magnify" in these situations because those gestures change relative to their initial value. Tap gestures can even be used as a sort of boolean trigger, but the location of the tap is not reliable.
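
For instance, a magnify gesture stays usable on a plane-anchored entity because its value is relative to the gesture's start (a minimal sketch; the baseScale property and its bookkeeping are assumptions):

import SwiftUI
import RealityKit

struct MagnifyExample {
    // Scale the entity had before the gesture began; update this in
    // .onEnded in real code.
    var baseScale: SIMD3<Float> = .one

    var magnifyGesture: some Gesture {
        MagnifyGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                // magnification is relative to the pinch's start, so no
                // world-space position is ever needed.
                value.entity.scale = baseScale * Float(value.magnification)
            }
    }
}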

You're also probably asking, "How can I make anything interactive enough to feel immersive this way?" And yeah, I don't have a clue.

Maybe if we pay $4000 or get lucky with a developer kit we can figure it out. We can't ask anyone with a developer kit because they're banned from telling us.

It's sad that these issues still persist after so long; I've been hoping for months that they'd fix this or provide a decent alternative. It is so ridiculous that, if the AnchorEntity is on a left or right wall, the dragging direction switches midway through a drag along the wall.

I was able to get it working. In case anybody is trying to do something similar, these are the steps I followed to drag an entity positioned on a vertical plane while constraining its movement to the plane's surface (a sketch follows the list):

  1. Add the translation value from the drag gesture to the current position of the object being dragged. Don't forget to convert the drag value to world coordinates first.
  2. Raycast from the origin in the direction of the dragged object to hit the plane anchor. Remember that the direction vector (based on the dragged object's new position) has to be normalized.
  3. Use the hit position from the raycast result as the new position for the object. This keeps the object on the surface of the plane.
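
A rough sketch of those three steps (assumptions: the plane entity carries a CollisionComponent so the raycast can hit it, and the gesture's converted location3D stands in for start-position-plus-translation; this is not the poster's actual code):

import SwiftUI
import RealityKit
import simd

struct PlaneDragExample {
    var dragGesture: some Gesture {
        DragGesture(minimumDistance: 0)
            .targetedToAnyEntity()
            .onChanged { value in
                let entity = value.entity

                // Step 1: a proposed world-space position for the object,
                // here taken from the gesture location converted to scene
                // (world) coordinates.
                let proposed = value.convert(value.location3D,
                                             from: .local, to: .scene)

                // Step 2: raycast from the world origin toward the
                // proposed position; the direction must be normalized.
                let direction = simd_normalize(proposed)
                guard let hit = entity.scene?.raycast(origin: .zero,
                                                      direction: direction,
                                                      length: 10,
                                                      query: .nearest).first
                else { return }

                // Step 3: snapping to the hit position keeps the object
                // on the plane's surface.
                entity.setPosition(hit.position, relativeTo: nil)
            }
    }
}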

I know this explanation is a little abstract and lacking in details. If I find time I'll post some sample code on GitHub. The code in the ObjectPlacement project from Apple was pretty helpful.
