Hello,
I'm trying to view the components of an Entity I'm creating in RealityKit by reading from a USDZ file. I have the following code snippet in my app.
if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    let c = appleEntity.components
    for comp in c { // <- compiler error here
        print(comp)
    }
}
The compiler error I'm receiving says "For-in loop requires 'Entity.ComponentSet' to conform to 'Sequence'". However, I thought it did conform, based on the documentation for Entity.ComponentSet.
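In the meantime, the only way I've found to inspect things is to subscript the set by component type instead of iterating it. A minimal workaround sketch (ModelComponent and Transform are just the types I happen to check):

import RealityKit

if let appleEntity = try? Entity.loadModel(named: "apple_tile") {
    // Look up individual component types rather than looping over the set.
    if let model = appleEntity.components[ModelComponent.self] {
        print(model.mesh.bounds)
    }
    if let transform = appleEntity.components[Transform.self] {
        print(transform.translation)
    }
}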
Curious if anyone else has had this problem. I'm running Xcode 15.4, and my Swift version is:
xcrun swift -version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)
Target: x86_64-apple-macosx14.0
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process that creates an entity for each detected plane; each one gets a random color for its material.
Each plane is sized based on the bounds of the anchor provided by ARKit.
let mesh = MeshResource.generatePlane(
    width: anchor.geometry.extent.width,
    depth: anchor.geometry.extent.height
)
Then I'm using this to position each entity.
entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off.
Take this large green plane on the wall: it should span the entire wall, but it is offset along the X axis, pushed to the left of where the anchor's center is.
When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be.
Can you see what I'm getting wrong here? Full code below
struct Example068: View {
@State var session = ARKitSession()
@State private var planeAnchors: [UUID: Entity] = [:]
@State private var planeColors: [UUID: Color] = [:]
var body: some View {
RealityView { content in
} update: { content in
for (_, entity) in planeAnchors {
if !content.entities.contains(entity) {
content.add(entity)
}
}
}
.task {
try! await setupAndRunPlaneDetection()
}
}
func setupAndRunPlaneDetection() async throws {
let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])
if PlaneDetectionProvider.isSupported {
do {
try await session.run([planeData])
for await update in planeData.anchorUpdates {
switch update.event {
case .added, .updated:
let anchor = update.anchor
if planeColors[anchor.id] == nil {
planeColors[anchor.id] = generatePastelColor()
}
let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
planeAnchors[anchor.id] = planeEntity
case .removed:
let anchor = update.anchor
planeAnchors.removeValue(forKey: anchor.id)
planeColors.removeValue(forKey: anchor.id)
}
}
} catch {
print("ARKit session error \(error)")
}
}
}
private func generatePastelColor() -> Color {
let hue = Double.random(in: 0...1)
let saturation = Double.random(in: 0.2...0.4)
let brightness = Double.random(in: 0.8...1.0)
return Color(hue: hue, saturation: saturation, brightness: brightness)
}
private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
let mesh = MeshResource.generatePlane(
width: anchor.geometry.extent.width,
depth: anchor.geometry.extent.height
)
var material = PhysicallyBasedMaterial()
material.baseColor.tint = UIColor(color)
let entity = ModelEntity(mesh: mesh, materials: [material])
entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
return entity
}
}
Topic:
Spatial Computing
SubTopic:
ARKit
A popover presented from a view that is used as an attachment is displayed properly in the Xcode canvas preview, but not at runtime. I was wondering whether this is supported at all.
I am trying to create an object in an immersive space that is partially transparent (~50% opacity). I have implemented this in a few different ways, including creating a model entity and setting its opacity component to 0.5, and creating a custom material with blending set to a transparent opacity of 0.5. Both work in many cases, but seemingly at random they act like an occlusion material and block any other immersive content behind them, showing the real world instead.
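For reference, the opacity-component variant is essentially this (a minimal sketch; the sphere is just a placeholder for my actual model entity):

import RealityKit

// Sketch: make a model entity render at ~50% opacity.
let sphere = ModelEntity(
    mesh: .generateSphere(radius: 0.1),
    materials: [SimpleMaterial(color: .blue, isMetallic: false)]
)
sphere.components.set(OpacityComponent(opacity: 0.5))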
Some notes: I am using RealityKit to render both the semi-transparent object and an opaque object behind it. I am on visionOS 2.1, and I update the position of the semi-transparent object frequently. Both objects are ModelEntities.
I would appreciate any guidance on how to implement this. Please let me know if there are any other questions.
Hello,
I am experimenting with Unity to develop a mixed reality (MR) application for visionOS. I would like to understand the best approach for structuring my project:
Should I build the entire experience in Unity (both Windows and Volumes)?
Or is it better to create only certain elements (e.g., Volumes) in Unity while managing Windows separately in Xcode?
Also, how well do interactions (e.g. pinch, grab…) created in Unity integrate with Xcode?
If I use the PolySpatial plugin, does that allow me to manage all interactions entirely within Unity, or would I still need to handle/integrate part of it in Xcode?
What's worked best for you? Please let me know if you have any recommendations. Thanks!
Topic:
Spatial Computing
SubTopic:
General
Tags:
Vision
Reality Composer Pro
visionOS
iPad and iOS apps on visionOS
Description:
I'm developing a travel/panorama viewing app for visionOS that allows users to view 360° panoramic images in an immersive space. When users enter panorama viewing mode, I want to provide a fully immersive experience where the main interface window and Earth 3D globe window are hidden.
I've implemented the app following Apple's documentation on Creating Fully Immersive Experiences, but when users enter the immersive space, both the main window and the Earth 3D window remain visible, diminishing the immersive experience.
Implementation Details:
My app has three main components:
A main content window showing panorama thumbnails
A 3D globe window (volumetric) showing locations
An immersive space for viewing 360° panoramas
I'm using .immersionStyle(selection: $panoImageView, in: .full) to create a fully immersive experience, but other windows remain visible.
Relevant Code:
@main
struct Travel_ImmersiveApp: App {
@StateObject private var appModel = AppModel()
@State private var panoImageView: ImmersionStyle = .full
var body: some Scene {
WindowGroup {
ContentView()
.environmentObject(appModel)
}
.windowStyle(.automatic)
.defaultSize(width: 1280, height: 825)
WindowGroup(id: "Earth") {
Globe3DView()
.environmentObject(appModel)
.onAppear {
appModel.isGlobeWindowOpen = true
appModel.globeWindowOpen = true
}
.onDisappear {
if !appModel.shouldCloseApp {
appModel.handleGlobeWindowClose()
}
}
}
.windowStyle(.volumetric)
.defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)
.windowResizability(.contentSize)
ImmersiveSpace(id: "ImmersiveView") {
ImmersiveView()
.environmentObject(appModel)
}
.immersionStyle(selection: $panoImageView, in: .full)
}
}
Opening the Immersive Space:
func getPanoImageAndOpenImmersiveSpace() async {
appModel.clearMemoryCache()
do {
let canView = appModel.canViewImage(image)
if canView {
let downloadedImage = try await appModel.getPanoramaImage(for: image) { progress in
Task { @MainActor in
cardState = .loading(progress: progress)
}
}
await MainActor.run {
appModel.updateCurrentImage(image, panoramaImage: downloadedImage)
}
if !appModel.immersiveSpaceOpened {
try await openImmersiveSpace(id: "ImmersiveView")
await MainActor.run {
appModel.immersiveSpaceOpened = true
cardState = .normal
}
} else {
await MainActor.run {
appModel.updateImmersiveView = true
cardState = .normal
}
}
} else {
await MainActor.run {
appModel.errorMessage = "You do not have permission to view this image."
cardState = .normal
}
}
} catch {
// Error handling
}
}
Immersive View Implementation:
struct ImmersiveView: View {
@EnvironmentObject var appModel: AppModel
var body: some View {
RealityView { content in
let rootEntity = Entity()
content.add(rootEntity)
Task {
if let selectedImage = appModel.selectedImage,
appModel.canViewImage(selectedImage) {
await loadPanorama(for: rootEntity)
}
}
} update: { content in
if appModel.updateImmersiveView,
let selectedImage = appModel.selectedImage,
appModel.canViewImage(selectedImage),
let rootEntity = content.entities.first {
Task {
await loadPanorama(for: rootEntity)
appModel.updateImmersiveView = false
}
}
}
.onAppear {
print("ImmersiveView appeared")
}
.onDisappear {
appModel.resetImmersiveState()
}
}
// loadPanorama implementation...
}
What I've Tried
Set immersionStyle to .full as recommended in the documentation
Confirmed that the immersive space is properly opened and displaying panoramas
Verified that the state management for the immersive space is working correctly
Questions
How can I ensure that when the user enters the immersive panorama viewing experience, all other windows (main interface and Earth 3D globe) are automatically hidden?
Is there a specific API or approach I'm missing to properly implement a fully immersive experience that hides all other windows?
Do I need to manually dismiss the windows when opening the immersive space, and if so, what's the best approach for doing this?
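For question 3, this is roughly what I have in mind, though I'm not sure it's the intended approach (a sketch using the dismissWindow environment action; PanoramaLaunchButton is just a stand-in for my existing button):

import SwiftUI

struct PanoramaLaunchButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("View Panorama") {
            Task {
                // Open the immersive space first, then manually hide the globe window by id.
                if case .opened = await openImmersiveSpace(id: "ImmersiveView") {
                    dismissWindow(id: "Earth")
                }
            }
        }
    }
}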
Any guidance or sample code would be greatly appreciated. Thank you!
I’ve submitted the following feedback:
FB13820942 (List Outline View Not Using Accent Color on Disclosure Caret for visionOS)
I'd appreciate help on this, to see whether I'm doing something wrong or whether this is indeed how visionOS currently works, in which case the feedback stands as a suggestion.
Is there any way to render a RealityView to an Image/UIImage like we used to be able to do using SCNView.snapshot() ?
ImageRenderer doesn't work because it renders a SwiftUI view hierarchy; I need the currently presented RealityView, with its camera background and 3D scene content, the way the user sees it.
I tried UIHostingController and UIGraphicsImageRenderer, like this:
extension View {
    func snapshot() -> UIImage {
        let controller = UIHostingController(rootView: self)
        let view = controller.view
        let targetSize = controller.view.intrinsicContentSize
        view?.bounds = CGRect(origin: .zero, size: targetSize)
        view?.backgroundColor = .clear
        let renderer = UIGraphicsImageRenderer(size: targetSize)
        return renderer.image { _ in
            view?.drawHierarchy(in: view!.bounds, afterScreenUpdates: true)
        }
    }
}
but that leads to the app freezing and logging an endless stream of
[CAMetalLayer nextDrawable] returning nil because allocation failed.
Same thing happens when I try
return renderer.image { ctx in
    view.layer.render(in: ctx.cgContext)
}
Now that SceneKit is deprecated, I didn't want to start a new app using deprecated APIs.
I am working with MeshAnchors, and I am having trouble getting to the classification of the triangles/faces.
This post references the MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there?
I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any structs with this as a property.
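For context, this is the kind of thing I was hoping to write. It's only a sketch built on my assumption that the classifications source stores one UInt8 per face whose value maps onto MeshAnchor.MeshClassification, which may well be the part I have wrong:

import ARKit

// Assumption: one UInt8 per face in the classifications GeometrySource.
func classification(ofFace faceIndex: Int, in anchor: MeshAnchor) -> MeshAnchor.MeshClassification? {
    guard let source = anchor.geometry.classifications else { return nil }
    let pointer = source.buffer.contents()
        .advanced(by: source.offset + faceIndex * source.stride)
    let rawValue = pointer.assumingMemoryBound(to: UInt8.self).pointee
    return MeshAnchor.MeshClassification(rawValue: Int(rawValue))
}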
Hello,
In the GuessTogether source code, it seems the code assumes you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If I'm not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing beyond logging warnings. Is this the intended behavior?
If so, how do I make it so that pressing the button can also initiate FaceTime calls? Is this allowed?
Thank you!
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
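For reference, the restart attempt that has no effect is essentially this (simplified sketch; engine and player are my existing AVAudioEngine and AVAudioPlayerNode):

import AVFoundation

func restartAudio(engine: AVAudioEngine, player: AVAudioPlayerNode) {
    do {
        // Reactivate the session, then restart the engine and player.
        try AVAudioSession.sharedInstance().setActive(true)
        try engine.start()
        player.play()
    } catch {
        print("Failed to restart audio: \(error)")
    }
}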
Hello,
A lot of the RealityKit APIs (e.g. LowLevelMesh, LowLevelTexture) are marked with @MainActor, so they need to be accessed on the main thread.
This creates issues when we need to perform expensive GPU-related operations, since those now have to run on the main thread, which results in bottlenecks and hangs in our application. We would like to use a multi-threaded approach, but that is difficult here. We are constantly streaming data, whether the app is just appearing or the user is interacting with it, so we need to be able to perform these operations on a separate thread.
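The shape of what we'd like is roughly this (a sketch; prepare and apply are placeholders for our own streaming code and our MainActor-isolated RealityKit updates):

import Foundation

// Run expensive preparation off the main actor; hop to the main actor only for the RealityKit call.
func streamFrames(
    prepare: @escaping @Sendable () async -> Data,
    apply: @escaping @MainActor (Data) -> Void
) -> Task<Void, Never> {
    Task.detached(priority: .userInitiated) {
        while !Task.isCancelled {
            let frame = await prepare()          // heavy work, off the main actor
            await MainActor.run { apply(frame) } // RealityKit mutation, on the main actor
        }
    }
}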
Any advice on how to achieve this using RealityKit?
Thank you.
Hi!
I attempted to run a sample project for detecting human pose in photos, which can be found here:
https://developer.apple.com/documentation/vision/detecting-human-body-poses-in-3d-with-vision
The project works perfectly when run on my MacBook Pro M1, but it fails on Apple Vision Pro. After selecting the photo, an endless loading screen is presented and the following output is produced in the console:
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Network path is nil: (null)
Failed to initialize 2D Detection Algorithm.
Failed to initialize 2D Pose Estimation Algorithm.
Failed to initialize algorithm modules
Unable to perform the request: Error Domain=com.apple.Vision Code=9 "Async status object reported as failed but without an error" UserInfo={NSLocalizedDescription=Async status object reported as failed but without an error}.
de-activating session 70138 after timeout
It seems that VNDetectHumanBodyPose3DRequest is failing on Vision Pro for some reason. Are there any additional requirements for running the Vision framework on visionOS that I might be missing?
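For reference, the request itself is essentially the standard one from the sample project (simplified sketch):

import Foundation
import Vision

// Simplified version of what the sample runs on the selected photo.
func detectBodyPose(at url: URL) throws -> VNHumanBodyPose3DObservation? {
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: url)
    try handler.perform([request])
    return request.results?.first
}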
In several visionOS apps, we readjust our scenes to the user's eye level (their head). However, we have encountered issues where the WorldTrackingProvider returns bad/incorrect positions for the first x frames.
See the code below, which you can copy-paste into any immersive space. Relaunch the space and observe that the numberOfBadWorldInfos value is inconsistent.
a. what is the most reliable way to get the devices's position?
b. is this indeed a bug?
c. are we using worldInfo improperly?
d. as a workaround, our apps let 10 frames pass before using worldInfo; should we set this threshold differently? (A sketch of this workaround is at the end of the post.)
import ARKit
import Combine
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent
let SUBSYSTEM = Bundle.main.bundleIdentifier!
struct ImmersiveView: View {
let logger = Logger(subsystem: SUBSYSTEM, category: "ImmersiveView")
let session = ARKitSession()
let worldInfo = WorldTrackingProvider()
@State var sceneUpdateSubscription: EventSubscription? = nil
@State var deviceTransform: simd_float4x4? = nil
@State var numberOfBadWorldInfos = 0
@State var isBadWorldInfoLoged = false
var body: some View {
RealityView { content in
try? await session.run([worldInfo])
sceneUpdateSubscription = content.subscribe(to: SceneEvents.Update.self) { event in
guard let pose = worldInfo.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
return
}
// `worldInfo` does not return correct values for the first few frames (exact number of frames is unknown)
// - known SO: https://stackoverflow.com/questions/78396187/how-to-determine-the-first-reliable-position-of-the-apple-vision-pro-device
deviceTransform = pose.originFromAnchorTransform
if deviceTransform!.columns.3.y < 1.6 {
numberOfBadWorldInfos += 1
logger.warning("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
} else {
if !isBadWorldInfoLoged {
logger.info("\(#function) \(#line) deviceTransform.columns.3.y \(deviceTransform!.columns.3.y), numberOfBadWorldInfos \(numberOfBadWorldInfos)")
}
isBadWorldInfoLoged = true // stop logging.
}
}
}
}
}
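For completeness, the workaround mentioned in (d) is nothing more than a frame counter (sketch):

// Workaround sketch: ignore worldInfo for the first N scene updates.
struct WorldInfoWarmup {
    var remaining: Int = 10

    mutating func isReady() -> Bool {
        if remaining > 0 {
            remaining -= 1
            return false
        }
        return true
    }
}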
I have a RealityView displaying a Reality Composer Pro scene in a window. Things are generally working fine, but the content seems to appear in front of and block the visionOS window, rather than being contained inside it. Do I need to switch to a volumetric view for this to work? My scene simply contains a flat display that renders 3D content (it has a material that sends different imagery to each eye).
Hi there,
I'm trying to merge the mesh anchors into a single mesh, but I couldn't find any resources on this. Here is the code where I make the mesh from each mesh anchor and assign it to a model component with a shader graph material.
func run(_ sceneRec: SceneReconstructionProvider) async {
for await update in sceneRec.anchorUpdates {
switch update.event {
case .added, .updated:
// Get or create entity for this anchor
let anchorEntity = anchors[update.anchor.id] ?? {
let entity = ModelEntity()
root?.addChild(entity)
anchors[update.anchor.id] = entity
return entity
}()
// Remove any existing children
for child in anchorEntity.children {
child.removeFromParent()
}
// Generate the mesh from the anchor
guard let mesh = try? await MeshResource(from: update.anchor) else { return }
guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")
// Get the material to use
var material: RealityKit.Material
if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
material = loadedMaterial
} else {
// Use a temporary material until the shader loads
var tempMaterial = UnlitMaterial()
tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
material = tempMaterial
}
await MainActor.run {
anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
// Add collision component with static flag - required for spatial interactions
anchorEntity.components.set(CollisionComponent(
shapes: [shape],
isStatic: true,
filter: .default
))
// Make entity interactive - enables spatial taps, drags, etc.
anchorEntity.components.set(InputTargetComponent())
let shadowComponent = GroundingShadowComponent(
castsShadow: true,
receivesShadow: true
)
anchorEntity.components.set(shadowComponent)
}
I then use a spatial tap gesture to set the position parameter in the shader graph material, which creates a nice gradient from the tap position on the mesh to the rest of the mesh.
SpatialTapGesture()
.targetedToAnyEntity()
.onEnded { value in
let tappedEntity = value.entity
// Check if the tapped entity is a child of tracking.meshAnchors
if isChildOfMeshAnchors(entity: tappedEntity) {
// Get local position (in the entity's coordinate space)
let localPosition = value.location3D
// Convert to world position (scene coordinate space)
let worldPosition = value.convert(localPosition, from: .local, to: .scene)
print("Tapped mesh anchor at local position: \(localPosition)")
print("Tapped mesh anchor at world position: \(worldPosition)")
// Update the material parameter with the tap position
updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
} else {
print("Tapped entity is not a mesh anchor")
}
}
}
My issue is that because there are several mesh anchors, the gradient often gets cut off at the edge of the mesh generated from a single anchor, as opposed to being a continuous gradient across the entire scene-reconstructed mesh. I couldn't find any documentation on how to merge the meshes from the mesh anchors; any tips would be helpful! Thank you!
The goal is to achieve precise joint tracking for clinical assessment. The doctor is wearing the Apple Vision Pro and observing the patient's movement.
Do you have any recommended best practices for integrating real-time joint tracking and displaying them on the patient within visionOS?
We attempted to use VNHumanBodyPose3DObservation, which theoretically should work, but we are unable to display the detected joints in an immersive space for real-time validation. This makes it difficult for the doctor to ensure accurate tracking; if possible, a photo or video of the range-of-motion assessment would also be needed for the patient record.
Are there alternative methods to achieve precise real-time joint tracking without requiring main camera access (com.apple.developer.arkit.main-camera-access.allow)?
Hello,
I am looking to create a shader to update an entity's rendering. As a basic example, say I want to recolour an entity but leave its original textures showing through.
I understand that with visionOS I need to use Reality Composer Pro to create the shader, but I'm lost as to how to reference the original colour I'm trying to update in the node graph. All my attempts completely override the textures of the entity (and its sub-entities) that I want to affect. Also, the tutorials and examples I've looked at appear to create new materials, rather than add an effect on top of existing materials.
Any hints or pointers?
Assuming this is possible, I've been trying to load the material in code and apply it to an entity. But do I need to do this for all child entities, or just the topmost one?
do {
    let entity = MyAssets.createModelEntity(.plane) // Loads from bundle and performs config
    let material = try await ShaderGraphMaterial(named: "/Root/TestMaterial", from: "Test", in: realityKitContentBundle)
    entity.applyToChildren {
        $0.components[ModelComponent.self]?.materials = [material]
    }
    root.addChild(entity)
} catch {
    fatalError(error.localizedDescription)
}
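(applyToChildren isn't an API call; it's just a small recursive helper of my own, roughly this:)

import RealityKit

// Recursive helper: run a closure on an entity and all of its descendants.
extension Entity {
    func applyToChildren(_ body: (Entity) -> Void) {
        body(self)
        for child in children {
            child.applyToChildren(body)
        }
    }
}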
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
visionOS
I am planning to build a visionOS app and need access to the user's persona (avatar). I have not found any information about integration possibilities in the docs. Does anyone know if and how I can access the user's persona?
Other applications like Zoom and Teams for visionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with an integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I
Any help is very welcome, thanks.
Hello,
There are three issues I am running into with a default template project + additional minimal code changes:
the Sphere_Left entity always overlaps the Sphere_Right entity.
when I release the Sphere_Left entity, it does not remain sticking to the Sphere_Right entity
when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity
When I manipulate the Sphere_Right entity, these above 3 issues do not occur: I get a correct and expected behavior.
These issues are simple to replicate:
Create a new project in Xcode
Choose visionOS -> App, then click Next
Name your project, and leave all other options as defaults: Initial Scene: Window, Immersive Space Renderer: RealityKit, Immersive Space: Mixed, then click Next
Save your project anywhere...
Replace the entire ImmersiveView.swift file with the below code.
Run.
Try to manipulate the left sphere; you should see the same issues I mentioned above
If you restart the project, and manipulate only the right sphere, you should get the correct expected behaviors, and no issues.
I am running this on macOS 26 and Xcode 26, targeting visionOS 26, all recently released.
ImmersiveView Code:
//
// ImmersiveView.swift
//
import OSLog
import SwiftUI
import RealityKit
import RealityKitContent
struct ImmersiveView: View {
private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView")
@State var collisionBeganUnfiltered: EventSubscription?
var body: some View {
RealityView { content in
// Add the initial RealityKit content
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
content.add(immersiveContentEntity)
// Add manipulation components
setupManipulationComponents(in: immersiveContentEntity)
collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
Task { @MainActor in
handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB)
}
}
}
}
}
private func setupManipulationComponents(in rootEntity: Entity) {
logger.info("\(#function) \(#line) ")
let sphereNames = ["Sphere_Left", "Sphere_Right"]
for name in sphereNames {
guard let sphere = rootEntity.findEntity(named: name) else {
logger.error("\(#function) \(#line) Failed to find \(name) entity")
assertionFailure("Failed to find \(name) entity")
continue
}
ManipulationComponent.configureEntity(sphere)
var manipulationComponent = ManipulationComponent()
manipulationComponent.releaseBehavior = .stay
sphere.components.set(manipulationComponent)
}
logger.info("\(#function) \(#line) Successfully set up manipulation components")
}
private func handleCollision(entityA: Entity, entityB: Entity) {
logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)")
guard entityA !== entityB else { return }
if entityB.isAncestor(of: entityA) {
logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent")
return
}
if entityA.isAncestor(of: entityB) {
logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)")
return
}
reparentEntities(child: entityA, parent: entityB)
entityA.components[ParticleEmitterComponent.self]?.burst()
}
private func reparentEntities(child: Entity, parent: Entity) {
let childBounds = child.visualBounds(relativeTo: nil)
let parentBounds = parent.visualBounds(relativeTo: nil)
let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x)
let childPosition = child.position(relativeTo: nil)
let parentPosition = parent.position(relativeTo: nil)
let currentDistance = distance(childPosition, parentPosition)
child.setParent(parent, preservingWorldTransform: true)
logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)")
child.components.remove(ManipulationComponent.self)
logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)")
if currentDistance > maxEntityWidth {
let direction = normalize(childPosition - parentPosition)
let newPosition = parentPosition + direction * maxEntityWidth
child.setPosition(newPosition - parentPosition, relativeTo: parent)
logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)")
}
}
}
fileprivate extension Entity {
func isAncestor(of other: Entity) -> Bool {
var current: Entity? = other.parent
while let node = current {
if node === self { return true }
current = node.parent
}
return false
}
}
#Preview(immersionStyle: .mixed) {
ImmersiveView()
.environment(AppModel())
}