https://developer.apple.com/documentation/realitykit/model3d/ontapgesture(count:coordinatespace:perform:)
According to the linked page, onTapGesture(count:coordinateSpace:perform:) is deprecated in visionOS 2.0 and only remains available on watchOS. Is that right?
If so, how can I implement a double-tap gesture in visionOS?
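For illustration, one possible approach is to attach a TapGesture with count: 2 through the .gesture(_:) modifier instead of using onTapGesture. This is only a sketch, not necessarily the officially recommended replacement; the asset name "Robot" is a placeholder, and SpatialTapGesture(count:coordinateSpace:) can be used the same way if the tap location is needed.

```swift
import SwiftUI
import RealityKit

struct DoubleTapView: View {
    var body: some View {
        // "Robot" is a placeholder asset name.
        Model3D(named: "Robot")
            .gesture(
                TapGesture(count: 2)          // fires only on a double tap
                    .onEnded { _ in
                        print("Model3D was double-tapped")
                    }
            )
    }
}
```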
ARKit
Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.
In visionOS, I want to make a watch. After actually building it, the system's rendering of the hand hides the virtual watch, so the watch cannot be seen.
How can I set things up so that the watch is displayed with priority over the hand?
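For illustration, here is a minimal sketch of this kind of setup, assuming the watch model is anchored to the wrist with an AnchorEntity. The scene-level upperLimbVisibility(.hidden) modifier shown here is one way to stop the system from drawing the hands over virtual content; the asset and identifier names are placeholders.

```swift
import SwiftUI
import RealityKit

@main
struct VirtualWatchApp: App {
    var body: some Scene {
        // Placeholder immersive space that contains the watch.
        ImmersiveSpace(id: "WatchSpace") {
            RealityView { content in
                // Anchor a placeholder watch model to the left wrist.
                let wristAnchor = AnchorEntity(.hand(.left, location: .wrist))
                if let watch = try? await Entity(named: "Watch") {
                    wristAnchor.addChild(watch)
                }
                content.add(wristAnchor)
            }
        }
        // Hide the system's rendering of the user's hands and arms so that
        // virtual content (the watch) is drawn in front instead.
        .upperLimbVisibility(.hidden)
    }
}
```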
Topic: Spatial Computing
SubTopic: ARKit
I have an application running on visionOS 2.0 that uses the ARKit C API to create anchors and listen for updates.
I am running an ARKit session with a WorldTrackingProvider (and a CameraFrameProvider, if that is relevant).
Then I register a callback using ar_world_tracking_provider_set_anchor_update_handler_f.
When updates arrive I iterate over the updated anchors using ar_world_anchors_enumerate_anchors_f.
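For reference, the equivalent flow in the Swift ARKit API looks roughly like this (my app uses the C API, but the structure is the same; this is just an illustrative sketch):

```swift
import ARKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func observeWorldAnchors() async throws {
    try await session.run([worldTracking])

    // Corresponds to the anchor-update handler: each update carries a
    // WorldAnchor whose originFromAnchorTransform is expected to keep the
    // anchor fixed in the physical world, even after the origin changes.
    for await update in worldTracking.anchorUpdates {
        let anchor = update.anchor
        print(update.event, anchor.id, anchor.originFromAnchorTransform)
    }
}
```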
Then, as described in the https://developer.apple.com/documentation/visionos/tracking-points-in-world-space documentation, I walk around and hold down the Digital Crown to reposition the current space. This resets the world origin to my current position.
When this happens, anchor updates arrive. In most cases, the anchor updates return the new transform (using ar_world_anchor_get_origin_from_anchor_transform) but sometimes I get an anchor update that reports the transform of the anchor from before the world origin was repositioned. Meaning instead of staying in place in the physical world, the world anchor moves relative to me.
I can work around this by calling ar_world_tracking_provider_copy_all_world_anchors_f which provides me with the correct transform, but this async method also adds some noticeable delay to the anchor updates.
Is this already a known issue?