In ARKit for visionOS, I can track the user's head with a head-targeted AnchorEntity, but it doesn't expose the anchor's transform. However, I can get the device's transform by calling queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) on a WorldTrackingProvider.
Why the difference? If I know the device's transform, I effectively know the head's transform.
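For reference, here is a minimal sketch of the device-anchor query I mean, assuming a running ARKitSession; the `session`, `worldTracking`, and `currentHeadTransform` names are my own, not from Apple's sample:

```swift
import ARKit
import QuartzCore

// Assumed setup: an ARKitSession running a WorldTrackingProvider.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    // The provider must be running before device anchors can be queried.
    try await session.run([worldTracking])
}

func currentHeadTransform() -> simd_float4x4? {
    // A DeviceAnchor effectively stands in for the head pose that a
    // head-targeted AnchorEntity does not expose directly.
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()
    ) else { return nil }
    return deviceAnchor.originFromAnchorTransform
}
```

Calling `currentHeadTransform()` each frame (e.g. from a RealityKit update loop) gives a world-space transform I can apply to an entity manually, which is exactly the information the head AnchorEntity withholds.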
Apple has a neat sample project that shows how to have an entity follow the user's head movements. It touches on the distinction between the AnchorEntity and the DeviceAnchor:
https://developer.apple.com/documentation/visionos/placing-entities-using-head-and-device-transform
Hopefully, visionOS 3 will bring SpatialTrackingSession data to the head AnchorEntity, just as it now does for hand anchors. (Feedback: FB16870381)