I have a RealityView displaying a Reality Composer Pro scene in a window. Things are generally working fine, but the content appears in front of and blocks the visionOS window rather than being contained inside it. Do I need to switch to a volumetric view for this to work? My scene simply contains a flat display that renders 3D content (it has a material that sends different imagery to each eye).
Custom 3D Window Using RealityView
Hi @aharriscrowne!
Yes, it sounds like you want to use a volumetric window. I recommend checking out the Hello World sample, which contains a mixture of window and view types that demonstrate opening and closing volumetric windows and immersive spaces in the same app. In particular, look at Globe.swift, which is rendered in a volumetric window and contains an "Earth" view (Earth.swift) that uses a RealityView to render a model of the Earth.
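For reference, here's a minimal sketch of a volumetric window hosting a RealityView. The entity name "Scene" and the realityKitContentBundle identifier come from the standard Reality Composer Pro package template, so substitute your own names:

```swift
import SwiftUI
import RealityKit
import RealityKitContent  // the Reality Composer Pro package from the Xcode template

@main
struct MyVolumetricApp: App {
    var body: some Scene {
        // A volumetric window gives the RealityView a bounded 3D volume
        // to render into, rather than a flat 2D plane.
        WindowGroup(id: "display") {
            RealityView { content in
                // "Scene" is a placeholder for your own root entity name.
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    content.add(scene)
                }
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}
```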
Generally speaking, as a developer you don't have control over window placement in your apps; that's something visionOS handles. You can review the documentation at this link for more details on positioning and sizing windows. In the case of a window blocking your immersive content, there isn't a one-size-fits-all solution, but I can suggest some options. Moving content to a volumetric window is one possibility. You can also simply close the obstructing window until it makes sense to bring it back. If your window contains UI and controls your users need, consider moving that UI onto an entity using attachments, so your SwiftUI content can be positioned someplace out of the way (check out this WWDC session for an overview of attachments, starting at 2:05).
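As a rough illustration of the attachments idea (the attachment id, button, and position below are made up for the example, not taken from your app):

```swift
RealityView { content, attachments in
    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
    // Pull the SwiftUI attachment in as an entity and place it
    // wherever it won't obstruct the 3D content.
    if let controls = attachments.entity(for: "controls") {
        controls.position = [0, -0.25, 0.1]
        content.add(controls)
    }
} attachments: {
    Attachment(id: "controls") {
        Button("Reset") {
            // hypothetical action
        }
        .padding()
        .glassBackgroundEffect()
    }
}
```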
Let me know if that helps!
Hi, thanks for the reply; I think I wasn't clear about what's going on. I have a window with a RealityView in it. Currently that RealityView presents a Reality Composer Pro scene. When I look at that window in the compiled app, the contents sit physically in front of the actual window, and moving them back in the scene has no effect at all.
Since posting this, I have experimented with calling findEntity on the scene and pulling out a Transform that parents a ModelEntity. Doing that lets me manipulate the depth of the ModelEntity relative to that Transform. But it is surprising that I can't do the same thing with the scene itself; I have to extract scene elements to adjust their depth.
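Roughly, here's what I mean (entity names are placeholders for my actual scene hierarchy):

```swift
// Inside the RealityView make closure, after loading the scene:
if let displayTransform = scene.findEntity(named: "DisplayTransform") {
    // Pushing the child ModelEntity back along z works as expected...
    displayTransform.children.first?.position.z = -0.2
}
// ...but adjusting the loaded scene's own z position has no visible effect.
scene.position.z = -0.2
```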
So you have a SwiftUI window with a RealityView rendering a scene you created in Reality Composer Pro, correct?
This seems like expected behavior: when 3D content is contained in a window, visionOS will attempt to place it on the z-axis between the window and the user. You can manually place your content after it is loaded (and it sounds like you have done so using findEntity), but my suspicion is that you may want to use an immersive space rather than a window to display your content.
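If you go that route, a minimal sketch might look like the following; the "ImmersiveScene" id and the entity name are placeholders:

```swift
// Declared alongside your existing WindowGroup in the App body.
ImmersiveSpace(id: "ImmersiveScene") {
    RealityView { content in
        if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
            // In an immersive space you place content yourself,
            // relative to the space's origin.
            scene.position = [0, 1.2, -1.5]
            content.add(scene)
        }
    }
}
```

You would then open it from your window UI with the openImmersiveSpace environment action, for example `await openImmersiveSpace(id: "ImmersiveScene")`.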
If there is something you need to do that is not achievable, please consider filing a feature request using Feedback Assistant. Thank you for your questions!