Metal (Compositor Services) or RealityKit on visionOS

I'm developing a visionOS app. I'm very interested in Metal and Compositor Services, but I haven't explored them in depth yet. I know that Metal offers a greater degree of control. What I'm wondering is whether using Compositor Services gives up AR capabilities that RealityKit provides, such as scene reconstruction and understanding, the hover effect, and so on.

Answered by Vision Pro Engineer in 830784022

Accepted Answer

Hello @lijiaxu, thank you for your question and this thread! I left an answer to your question in your other thread, but I'll post it here as well so it doesn't get missed.

You should check out the sample for the article Interacting with virtual content blended with passthrough. That app shows how to track the user's hands (using ARKit) and render content with Metal and Compositor Services. Additionally, check out Drawing fully immersive content using Metal for a more in-depth article on creating your own render loop with Compositor Services.
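For a concrete feel of that render loop, here is a minimal sketch based on that article. The `Renderer` class and the Metal setup are placeholders for illustration, and the exact ordering of the frame-submission calls may differ slightly from the current sample, so treat this as an outline rather than a complete renderer:

```swift
import CompositorServices
import Metal

// Minimal render-loop sketch (placeholder Renderer type; see "Drawing
// fully immersive content using Metal" for the complete version).
final class Renderer {
    let layerRenderer: LayerRenderer
    let commandQueue: MTLCommandQueue

    init(_ layerRenderer: LayerRenderer) {
        self.layerRenderer = layerRenderer
        let device = MTLCreateSystemDefaultDevice()!
        self.commandQueue = device.makeCommandQueue()!
    }

    func renderLoop() {
        while true {
            switch layerRenderer.state {
            case .invalidated:
                return                           // Layer torn down; stop rendering.
            case .paused:
                layerRenderer.waitUntilRunning() // Block until the space is visible.
            default:
                autoreleasepool { renderFrame() }
            }
        }
    }

    func renderFrame() {
        guard let frame = layerRenderer.queryNextFrame() else { return }

        frame.startUpdate()
        // Frame-independent work: advance animations, simulation, etc.
        frame.endUpdate()

        // Sleep until the optimal moment to sample input and pose data.
        guard let timing = frame.predictTiming() else { return }
        LayerRenderer.Clock().wait(until: timing.optimalInputTime)

        guard let drawable = frame.queryDrawable() else { return }
        frame.startSubmission()
        let commandBuffer = commandQueue.makeCommandBuffer()!
        // Encode your render pass here, drawing once per view in
        // drawable.views into the drawable's color/depth textures.
        drawable.encodePresent(commandBuffer: commandBuffer)
        commandBuffer.commit()
        frame.endSubmission()
    }
}
```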

HoverEffectComponent is part of RealityKit, and there is no equivalent API in Compositor Services. I strongly recommend filing an enhancement request for this specific feature using Feedback Assistant.

ARKit and RealityKit are separate frameworks, so if you choose not to use RealityKit and instead render your content with Compositor Services, you can still use ARKit to track your user's hands, do scene reconstruction, and so on. However, anything related to RealityKit's ECS (which includes things like HoverEffectComponent, PhysicsBodyComponent, CollisionComponent, etc.) will not be available to you, and you'll need your own solutions for things like physics interactions.
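To make that concrete, here's a rough sketch of consuming ARKit data directly in a Compositor Services app. The provider types are real visionOS ARKit APIs, but the surrounding structure is illustrative and error handling is omitted; note that hand tracking and world sensing each require a usage-description string in your Info.plist (NSHandsTrackingUsageDescription and NSWorldSensingUsageDescription):

```swift
import ARKit

// Sketch: hand tracking and scene reconstruction without RealityKit.
let session = ARKitSession()
let handTracking = HandTrackingProvider()
let sceneReconstruction = SceneReconstructionProvider()

func startARKit() async throws {
    try await session.run([handTracking, sceneReconstruction])

    Task {
        for await update in handTracking.anchorUpdates {
            // originFromAnchorTransform places the hand in world space;
            // handSkeleton exposes the individual joints for rendering.
            let hand = update.anchor
            _ = hand.originFromAnchorTransform
        }
    }

    Task {
        for await update in sceneReconstruction.anchorUpdates {
            // MeshAnchor.geometry holds the reconstructed vertices and
            // faces; feed them into your own Metal buffers or physics.
            let mesh = update.anchor
            _ = mesh.geometry
        }
    }
}
```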

Hello @Vision Pro Engineer, thank you for your response. I'd like to follow up on this. Since Metal is primarily a graphics framework, it has no capability to play sound, so I'm curious whether Compositor Services can fulfill the requirement of playing spatial audio. I'd also like to understand whether ARKit features that perceive the user's surroundings, such as plane detection, are subject to any limitations (for example, being unable to access the geometry of a PlaneAnchor). I look forward to your response, and thank you for your attention to this matter.

@lijiaxu

Correct, you will need to spatialize audio without using the APIs available in RealityKit.

If you are unable to use RealityKit's APIs for spatialized audio, check out PHASE, Apple's sound spatialization framework. See Playing sound from a location in a 3D scene to learn how to spatialize audio without RealityKit. PHASE also offers functionality beyond what RealityKit exposes; for example, you can have a mesh that occludes sound.
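As a rough outline of what that looks like, here's a condensed sketch based on that article. The file name drums.wav and the asset identifiers are placeholders, and the full article also covers reverb, distance models, and volumetric sources:

```swift
import Foundation
import PHASE
import simd

// Condensed PHASE sketch: play a looping mono file from a fixed point in
// space. Keep strong references to the engine and event in a real app.
func startSpatialAudio() throws -> (PHASEEngine, PHASESoundEvent) {
    let engine = PHASEEngine(updateMode: .automatic)

    // Register the audio file (placeholder name) as a resident asset.
    let url = Bundle.main.url(forResource: "drums", withExtension: "wav")!
    _ = try engine.assetRegistry.registerSoundAsset(
        url: url, identifier: "drums", assetType: .resident,
        channelLayout: nil, normalizationMode: .dynamic)

    // A spatial mixer positions the sound in 3D relative to the listener.
    let pipeline = PHASESpatialPipeline(flags: [.directPathTransmission])!
    let spatialMixer = PHASESpatialMixerDefinition(spatialPipeline: pipeline)

    // A sampler node plays the registered asset through that mixer.
    let sampler = PHASESamplerNodeDefinition(
        soundAssetIdentifier: "drums", mixerDefinition: spatialMixer)
    sampler.playbackMode = .looping
    _ = try engine.assetRegistry.registerSoundEventAsset(
        rootNode: sampler, identifier: "drumEvent")

    // Listener at the origin; source two meters in front of it.
    let listener = PHASEListener(engine: engine)
    listener.transform = matrix_identity_float4x4
    try engine.rootObject.addChild(listener)

    let source = PHASESource(engine: engine)
    var transform = matrix_identity_float4x4
    transform.columns.3 = SIMD4<Float>(0, 0, -2, 1)
    source.transform = transform
    try engine.rootObject.addChild(source)

    try engine.start()

    // Bind the source and listener to the mixer, then start playback.
    let mixerParameters = PHASEMixerParameters()
    mixerParameters.addSpatialMixerParameters(
        identifier: spatialMixer.identifier, source: source, listener: listener)
    let event = try PHASESoundEvent(
        engine: engine, assetIdentifier: "drumEvent",
        mixerParameters: mixerParameters)
    event.start()
    return (engine, event)
}
```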

Apologies, I'm not sure I understand the second question. ARKit should behave the same way in an app that uses RealityKit as in one that uses Compositor Services, because these are all separate frameworks. You may be forgetting to add the NSWorldSensingUsageDescription key to your app's Info.plist. See Tracking specific points in world space for clarification on how this feature works.
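On the specific worry about plane geometry: that data is accessible with or without RealityKit. A rough sketch, assuming world-sensing permission has been granted and omitting error handling:

```swift
import ARKit

// Sketch: plane detection without RealityKit. PlaneAnchor.geometry is
// fully accessible; drawing the mesh is up to your own renderer.
let planeSession = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

func observePlanes() async throws {
    try await planeSession.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        let anchor = update.anchor
        // Each anchor carries a classification (wall, floor, table, ...)
        // and a triangle mesh you can copy into your own vertex buffers.
        print(anchor.classification, anchor.geometry.meshVertices.count)
    }
}
```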

Thanks for your questions!
