World color information in visionOS so physical objects could serve as inputs

I would like to know whether it will be possible to access high-resolution textures coming from ARKit scene reconstruction, or to access the camera frames directly. In session 10091 it appears that ARFrame (with its camera data) is no longer available on visionOS.
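For context, here is a minimal sketch of what scene reconstruction does appear to expose, assuming the ARKitSession / SceneReconstructionProvider API shown in session 10091: per-anchor mesh geometry, but no color or texture for those surfaces.

```swift
import ARKit

// Minimal sketch, assuming the visionOS ARKit API from session 10091
// (ARKitSession + SceneReconstructionProvider). Scene reconstruction
// delivers mesh geometry per anchor, but no color or texture data,
// which is exactly the gap described above.
func observeSceneReconstruction() async throws {
    let session = ARKitSession()
    let provider = SceneReconstructionProvider()

    // Requires world-sensing authorization (NSWorldSensingUsageDescription).
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        let geometry = update.anchor.geometry
        // Vertices, normals and faces are available; there is no API here
        // for per-vertex color or a surface texture of the physical scene.
        print("Mesh anchor \(update.anchor.id) updated, \(geometry.vertices.count) vertices")
    }
}
```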

The use cases I have in mind are along the lines of:

  • Having a paper card with a QR code on a physical table and using the pixel data to recognize the code and place a corresponding virtual object on top of it (sketched in code below)
  • Having physical board game components recognized and used as inputs: for example, you control the white chess pieces physically while your opponent's black pieces are projected virtually onto your table
  • Having a user draw a crude map on physical paper and using that drawing as an image to be processed/recognized

These examples all have in common that the physical objects serve directly as inputs to the application, without the user having to manipulate a virtual representation.
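To make the first use case concrete: on iOS this pattern is straightforward, because ARFrame.capturedImage hands the app each camera frame, which can then be fed to Vision's barcode detector. The class name QRAnchorExample below is just a placeholder of mine; the point is that nothing equivalent to this per-frame pixel buffer access seems to exist on visionOS.

```swift
import ARKit
import Vision

// Sketch of the iOS-era pattern (apparently not possible on visionOS):
// read the camera pixel buffer from each ARFrame and run Vision's QR
// detector on it. QRAnchorExample is a placeholder name for illustration.
final class QRAnchorExample: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]   // find the tabletop
        session.delegate = self
        session.run(configuration)
    }

    // On iOS, ARKit delivers every camera frame here; visionOS provides no
    // comparable per-frame camera access to third-party apps.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let request = VNDetectBarcodesRequest { request, _ in
            guard let observation = request.results?.first as? VNBarcodeObservation,
                  let payload = observation.payloadStringValue else { return }
            // With the 2D bounding box plus the frame's camera intrinsics one
            // could raycast onto the detected table plane, add an ARAnchor at
            // the card's position, and attach the virtual object there.
            print("Detected QR payload: \(payload)")
        }
        request.symbologies = [.qr]
        // Running detection on every frame is simplified; real code would throttle.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right)
        try? handler.perform([request])
    }
}
```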

Ideally, in a privacy-preserving way, it would be possible to ask ARKit to provide texture information for a specially-defined volume in physical space, or at least for a given recognized surface (e.g. a table or a wall).
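To be clear about what I am asking for, here is a purely hypothetical sketch of such an API. Neither TexturedRegionProvider nor TexturedRegion exists in ARKit; they are names I made up to illustrate the shape of a privacy-preserving request.

```swift
import ARKit
import CoreVideo

/// Purely hypothetical: a region of physical space the app declares up front,
/// so the system never has to expose the full camera feed.
enum TexturedRegion {
    /// An axis-aligned box in world coordinates (meters).
    case volume(center: SIMD3<Float>, extent: SIMD3<Float>)
    /// A surface the system has already recognized (e.g. a tabletop or wall),
    /// referenced by its anchor identifier.
    case recognizedPlane(anchorID: UUID)
}

/// Purely hypothetical provider: the system could prompt the user, withhold
/// anything outside the declared region, and return only that region's texture.
protocol TexturedRegionProvider {
    func requestTexture(for region: TexturedRegion) async throws -> CVPixelBuffer
}
```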

Replies

They said that the only camera access is when the user explicitly takes a picture or captures video; there is no ambient capture.