Unity on visionOS development - best practice on structuring a project

Hello,

I am experimenting with Unity to develop a mixed reality (MR) application for visionOS. I would like to understand the best approach for structuring my project:

  • Should I build the entire experience in Unity (both Windows and Volumes)?
  • Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode?

Also, how well do interactions (e.g., pinch, grab…) created in Unity integrate with Xcode?

If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle/integrate part of it in Xcode?

What's worked best for you? Please let me know if you have any recommendations. Thanks!

Hi @Mir46 , thanks for your question!

When building a visionOS app in Unity, the vast majority of your development will happen in the Unity Editor, not Xcode, and you will write code in C#, not Swift. You will not have direct access to RealityKit APIs to create volumes or windows, but you can create them in Unity with Unity's APIs. When your app is ready, you build it in Unity, which generates an Xcode project that should build and run without any further development (though you may need to configure things like entitlements and developer teams in Xcode).

You do have the ability to create native plugins that can be called from Unity to communicate with native Apple APIs. I recommend reading Unity's official documentation on this topic to see if it is relevant to you.
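For illustration, here is a minimal sketch of that pattern. The function name SetHoverEffectEnabled and the script are placeholders I made up, not an existing API; on visionOS, as on iOS, native plugins are statically linked, so the C# side binds against "__Internal", assuming Unity's documented UNITY_VISIONOS scripting define:

```csharp
// Minimal sketch of calling a native Swift/C function from Unity C#.
// "SetHoverEffectEnabled" is a hypothetical function you would export from a
// plugin placed under Assets/Plugins; the name and signature are illustrative.
using System.Runtime.InteropServices;
using UnityEngine;

public class NativeBridgeExample : MonoBehaviour
{
#if UNITY_VISIONOS && !UNITY_EDITOR
    // Statically linked native plugins use "__Internal" as the library name.
    [DllImport("__Internal")]
    private static extern void SetHoverEffectEnabled(bool enabled);
#else
    // Stub so the same script compiles in the Editor and on other platforms.
    private static void SetHoverEffectEnabled(bool enabled) { }
#endif

    void Start()
    {
        SetHoverEffectEnabled(true);
    }
}
```

On the Swift side you would expose a matching entry point (for example with @_cdecl("SetHoverEffectEnabled")) so the symbol is visible to the C# declaration above.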

Pinch and grab interactions are handled through Unity's APIs. You will need to use Unity's Input System package. You can use the familiar EnhancedTouch API it provides, which works the same way as it would on a 2D screen, just with a third spatial dimension.
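As a minimal sketch (assuming the Input System package, com.unity.inputsystem, is installed and active; PinchLogger is just an illustrative script name), reading touches through EnhancedTouch looks like this:

```csharp
// Minimal sketch of reading touches via the Input System's EnhancedTouch API.
// On visionOS with PolySpatial, a gaze-and-pinch on a collider surfaces as a
// touch, much like a tap would on a 2D screen.
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;

public class PinchLogger : MonoBehaviour
{
    void OnEnable()
    {
        // EnhancedTouch is opt-in and must be enabled before touches are reported.
        EnhancedTouchSupport.Enable();
    }

    void OnDisable()
    {
        EnhancedTouchSupport.Disable();
    }

    void Update()
    {
        foreach (var touch in Touch.activeTouches)
        {
            if (touch.phase == UnityEngine.InputSystem.TouchPhase.Began)
                Debug.Log($"Touch began at {touch.screenPosition}");
        }
    }
}
```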

You mentioned PolySpatial: it is required for mixed reality apps. You can create visionOS apps in Unity without PolySpatial, but those apps will be fully immersive ("full VR") with no passthrough.

Because most of the technologies you'll be using will be made by Unity, you will likely find more specific help on Unity's developer forums, so I recommend asking there as well. Good luck!

Hi, I have a few questions related to this topic. I'm working on a mixed reality app and I’m wondering:

  1. Is it possible to use only Unity’s Input System package to manage interactions, or do I also need to use the package/samples provided in the XR Interaction Toolkit?
  2. Regarding the Input System package, are there any specific samples you'd recommend importing for reference?

Thanks!

@fdell Thank you for your question. I strongly recommend asking these questions on Unity's official forums, because Apple does not develop the technologies you're referring to (UnityEngine.InputSystem, PolySpatial), so I don't have any special insight and can't speak with authority on this topic.

From one developer to another, Unity's Input System package will work for detecting gaze-and-pinch gestures on visionOS when using PolySpatial (they behave a lot like other touches in Unity's API).
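If it helps, here is a rough sketch of that pattern as I remember it from the PolySpatial package samples. The PolySpatial-specific names (Unity.PolySpatial.InputDevices, EnhancedSpatialPointerSupport.GetPointerState, SpatialPointerState and its targetObject/interactionPosition fields) are assumptions from memory and may differ between package versions, so verify them against the samples that ship with the package:

```csharp
// Rough sketch: gaze + pinch arrives as an EnhancedTouch touch, and the
// (assumed) spatial pointer state adds the 3D interaction data on top of it.
using Unity.PolySpatial.InputDevices;   // assumed PolySpatial input namespace
using UnityEngine;
using UnityEngine.InputSystem.EnhancedTouch;
using Touch = UnityEngine.InputSystem.EnhancedTouch.Touch;
using TouchPhase = UnityEngine.InputSystem.TouchPhase;

public class GazePinchProbe : MonoBehaviour   // illustrative script name
{
    void OnEnable() => EnhancedTouchSupport.Enable();
    void OnDisable() => EnhancedTouchSupport.Disable();

    void Update()
    {
        var touches = Touch.activeTouches;
        if (touches.Count == 0)
            return;

        // Assumed PolySpatial API for reading the spatial pointer behind a touch.
        var pointer = EnhancedSpatialPointerSupport.GetPointerState(touches[0]);

        if (touches[0].phase == TouchPhase.Began && pointer.targetObject != null)
            Debug.Log($"Pinched {pointer.targetObject.name} at {pointer.interactionPosition}");
    }
}
```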
