How can I selectively create screen-space meshes in RealityKit AR mode using the new OrthographicCameraComponent?

I'd like to create meshes in RealityKit (AR mode on iPad) in screen space, i.e. for UI.

I noticed a lot of useful new functionality in RealityKit for the next OS versions, including the OrthographicCameraComponent here: https://developer.apple.com/documentation/realitykit/orthographiccameracomponent?changes=_3
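Here's roughly what I imagined trying with it. This is only a sketch, assuming the scale/near/far properties from the linked documentation page, and it's exactly the part I'm unsure about: whether this component can coexist with the AR session's own camera.

```swift
import RealityKit

// Sketch only: attach an orthographic camera to a dedicated entity.
// Whether this can run alongside the AR session's perspective camera
// is the open question.
let orthoCameraEntity = Entity()

var ortho = OrthographicCameraComponent()
ortho.scale = 2        // vertical extent of the visible volume, in meters
ortho.near = 0.001
ortho.far = 100
orthoCameraEntity.components.set(ortho)
```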

I think this would help, but I also need AR world tracking, as well as a regular perspective camera, to work with the 3D elements.

Firstly, can I attach a camera selectively to a few entities, so that it renders just those entities? This could be the orthographic camera. Secondly, can I make those entities always render in front, in screen space? (They'd need to follow the camera.)
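For the "follow the camera" part, the closest I can get today is a camera-targeted AnchorEntity, roughly like the sketch below, but that still renders through the perspective camera and can be occluded by closer geometry, which is why I'm asking about a true screen-space path:

```swift
import RealityKit
import ARKit
import UIKit

// Minimal sketch: an entity parented to a camera anchor follows the
// device camera, but it is still drawn by the AR perspective camera.
let arView = ARView(frame: .zero)
arView.session.run(ARWorldTrackingConfiguration())

let cameraAnchor = AnchorEntity(.camera)
let hudQuad = ModelEntity(
    mesh: .generatePlane(width: 0.05, height: 0.05),
    materials: [UnlitMaterial(color: .white)]
)
hudQuad.position = [0, 0, -0.25]   // 25 cm in front of the camera
cameraAnchor.addChild(hudQuad)
arView.scene.addAnchor(cameraAnchor)
```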

If I can't have multiple cameras, what can be done in that case?

Is it actually better to use a completely different view/API layered on top of RealityKit? For simplicity, though, I'd much rather keep everything in RealityKit.
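If layering is the recommended answer, I assume it would look something like the following, where ARViewContainer is a hypothetical UIViewRepresentable wrapping the ARView, and the "screen-space" UI is plain SwiftUI on top:

```swift
import SwiftUI
import RealityKit
import ARKit

// Hypothetical wrapper so SwiftUI can host the RealityKit AR scene.
struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.run(ARWorldTrackingConfiguration())
        return arView
    }
    func updateUIView(_ uiView: ARView, context: Context) {}
}

// 3D content stays in RealityKit; the 2D UI is layered on top in SwiftUI.
// This is the alternative I'm weighing against an all-RealityKit solution.
struct ContentView: View {
    var body: some View {
        ZStack {
            ARViewContainer()
                .ignoresSafeArea()
            VStack {
                Spacer()
                Text("Screen-space UI")
                    .padding()
                    .background(.ultraThinMaterial)
            }
        }
    }
}
```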
