I would like to implement the following, but based on the current documentation I am not sure whether it is a supported use case:
- Run one ARKitSession with a WorldTrackingProvider in Swift for mixed-immersion Metal rendering, i.e. to obtain the device anchor for the layer renderer drawable and the view matrix (see the first sketch after this list).
- Run a second ARKitSession with a WorldTrackingProvider and a CameraFrameProvider in a separate library (part of the same app) via the ARKit C API, and use the anchor transforms from that session to position objects rendered by the Swift part of the application (see the second sketch after this list).
- In general, is this a supported use case, or is it necessary to share a single ARKitSession between the two parts?
- Assuming this is supported, will the (device) anchors from both WorldTrackingProviders reference the same world coordinate system?
- Are there any performance downsides to having multiple ARKitSessions?
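For context, the Swift side of what I have in mind looks roughly like this. A minimal sketch only: the timestamp handling is simplified, since the real renderer would use the drawable's predicted presentation time rather than CACurrentMediaTime().

```swift
import ARKit
import QuartzCore
import simd

// Session 1: Swift side, world tracking only, used to pose the Metal camera.
let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

func queryViewMatrix() -> simd_float4x4? {
    // Simplified: a real compositor-services renderer would query at the
    // drawable's predicted presentation time, not CACurrentMediaTime().
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()
    ) else { return nil }

    // originFromAnchorTransform maps device space into the session's world
    // origin; inverting it gives a view matrix for rendering.
    return deviceAnchor.originFromAnchorTransform.inverse
}
```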
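And the second session in the C library would look roughly like this. Also just a sketch: the identifier names follow my reading of the visionOS ARKit C headers (ar_session.h, ar_world_tracking_provider.h, ...), so treat them as assumptions rather than verified signatures.

```c
// Session 2: C library side, world tracking + camera frames.
#include <ARKit/ARKit.h>   // assumption: exposes the C interface on visionOS
#include <CoreFoundation/CoreFoundation.h>
#include <simd/simd.h>

static ar_session_t g_session;
static ar_world_tracking_provider_t g_world_tracking;

void start_second_session(void) {
    g_session = ar_session_create();

    ar_world_tracking_configuration_t config =
        ar_world_tracking_configuration_create();
    g_world_tracking = ar_world_tracking_provider_create(config);

    ar_data_providers_t providers = ar_data_providers_create();
    ar_data_providers_add_data_provider(providers, g_world_tracking);
    // Creating and adding the camera frame provider is omitted for brevity.

    ar_session_run(g_session, providers);
}

// Query the device anchor from this *second* session; the open question is
// whether this transform lives in the same world origin as session 1's.
simd_float4x4 query_device_transform(CFTimeInterval timestamp) {
    ar_device_anchor_t anchor = ar_device_anchor_create();
    ar_world_tracking_provider_query_device_anchor_at_timestamp(
        g_world_tracking, timestamp, anchor);
    return ar_anchor_get_origin_from_anchor_transform(anchor);
}
```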
Thanks