In the WWDC session "Dive deep into volumes and immersive spaces", the presenters discussed adding a SpatialTrackingSession and an AnchorEntity to detect the floor, then glossed over some important details. They added a spatial tap gesture to let the user place content relative to the floor anchor, but left out a lot of information:
.gesture(
    SpatialTapGesture(
        coordinateSpace: .immersiveSpace
    )
    .targetedToAnyEntity()
    .onEnded { value in
        handleTapOnFloor(value: value)
    }
)
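The session never shows what handleTapOnFloor actually does. My best guess, from the EntityTargetValue docs, is converting the tap into scene space and placing content at that point; marker is a placeholder entity of mine, and the convert call is the part I'm least sure of:

// Guessed handler for the WWDC snippet above.
func handleTapOnFloor(value: EntityTargetValue<SpatialTapGesture.Value>) {
    // Convert the tap from the tapped entity's local space into scene space.
    let worldPosition = value.convert(value.location3D, from: .local, to: .scene)
    marker.setPosition(worldPosition, relativeTo: nil) // `marker` is hypothetical
}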
My understanding is that an entity has to have input and collision components for gestures like this to work. How can we add a collision component to an AnchorEntity when we don't know the plane's size or shape?
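On an entity whose size I do know, that setup is straightforward:

// What (I understand) a gesture target needs: input + collision with a known shape.
let box = ModelEntity(mesh: .generateBox(size: 0.2),
                      materials: [SimpleMaterial(color: .blue, isMetallic: false)])
box.components.set(InputTargetComponent())
box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))

But with the floor AnchorEntity there are no dimensions to feed into the shape.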
I've been trying for days to understand what is happening here and I just don't get it. It is even more frustrating that the example project that Apple released does not contain any of these features.
I would like to be able to (my rough attempt at this with plain ARKit is sketched after this list):
- Detect the floor plane
- Get the position/transform of the floor plane
- Add a collider to the floor plane
- Enable collisions and physics on the floor plane
- Enable gestures on the floor plane
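The closest I've gotten to the actual plane is bypassing SpatialTrackingSession entirely and reading the plane from ARKit's PlaneDetectionProvider. This is a rough sketch based on Apple's plane-detection sample, not working code I'm confident in; runFloorDetection and floorEntity are my own names, and it needs NSWorldSensingUsageDescription in Info.plist:

import ARKit
import RealityKit

// Sketch: read the detected floor plane directly from ARKit, since
// AnchorEntity doesn't expose the plane's extents.
func runFloorDetection(root: Entity) async throws {
    let session = ARKitSession()
    let planeData = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planeData])

    for await update in planeData.anchorUpdates {
        // Ignoring .updated/.removed for brevity.
        guard update.event == .added,
              update.anchor.classification == .floor else { continue }

        let anchor = update.anchor
        let extent = anchor.geometry.extent

        // A visible plane plus a thin box collider matching the detected extent.
        // (The extent's orientation convention may need a rotation here.)
        let floorEntity = ModelEntity(
            mesh: .generatePlane(width: extent.width, depth: extent.height),
            materials: [SimpleMaterial(color: .gray, isMetallic: false)])
        let shape = ShapeResource.generateBox(
            width: extent.width, height: 0.01, depth: extent.height)
        floorEntity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
        floorEntity.components.set(PhysicsBodyComponent(shapes: [shape], mass: 0, mode: .static))
        floorEntity.components.set(InputTargetComponent())

        // Anchor origin, then the extent's offset within the anchor.
        floorEntity.transform = Transform(
            matrix: anchor.originFromAnchorTransform * extent.anchorFromExtentTransform)
        root.addChild(floorEntity)
    }
}

But I'd much rather stay with the higher-level SpatialTrackingSession/AnchorEntity API if it can give me the same information.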
It seems to me that the AnchorEntity is placed at an entirely arbitrary position. It bears no relationship to the rectangle with the floor label that I can see in the Xcode visualization; it's just a point, not a plane or rect that I can use.
I've tried manually calculating the collision shape after the anchor is detected, but nothing I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do ANYTHING at all with this floor anchor other than place an entity at that arbitrary location somewhere on the floor.
Is there any way at all, with SpatialTrackingSession and AnchorEntity, to get the actual plane that was detected?
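For concreteness, here's one of those manual attempts: when AnchoredStateChanged fires, I bolt a guessed collider onto the anchor (the 5 m extent is invented, which is exactly the problem):

_ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
    guard event.isAnchored else { return }
    // Invented size - there's no way (that I can find) to query the real extent here.
    let guessed = ShapeResource.generateBox(width: 5, height: 0.01, depth: 5)
    event.anchor.components.set(CollisionComponent(shapes: [guessed], isStatic: true))
}

Here is the full view I've been testing with: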
import SwiftUI
import RealityKit
import RealityKitContent

struct FloorExample: View {
    @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
    @State var subject: Entity?
    @State var floor: AnchorEntity?

    var body: some View {
        RealityView { content, attachments in
            // Start plane tracking so the floor AnchorEntity can resolve.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)
            self.trackingSession = session

            let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
            floorAnchor.anchoring.physicsSimulation = .none
            floorAnchor.name = "FloorAnchorEntity"
            floorAnchor.components.set(InputTargetComponent())
            // Empty shapes array: I have no dimensions to give it, and presumably
            // this is why gestures never hit the anchor.
            floorAnchor.components.set(CollisionComponent(shapes: .init()))
            content.add(floorAnchor)
            self.floor = floorAnchor

            // This is just here to let me see where visionOS decided to "place" the floor anchor.
            let floorPlaced = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .black, isMetallic: false)])
            floorAnchor.addChild(floorPlaced)

            if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                content.add(scene)

                if let subject = scene.findEntity(named: "StepSphereRed") {
                    self.subject = subject
                }

                // I can see when the anchor is added
                _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                    event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                    print("**anchor changed** \(event)")
                    print("**anchor** \(event.anchor)")
                }

                // place the reset button near the user
                if let panel = attachments.entity(for: "Panel") {
                    panel.position = [0, 1, -0.5]
                    content.add(panel)
                }
            }
        } update: { content, attachments in
        } attachments: {
            Attachment(id: "Panel") {
                Button(action: {
                    print("**button pressed**")
                    if let subject = self.subject {
                        subject.position = [-0.5, 1.5, -1.5]
                        // Remove the physics body and assign a new one - hack to remove momentum
                        if let physics = subject.components[PhysicsBodyComponent.self] {
                            subject.components.remove(PhysicsBodyComponent.self)
                            subject.components.set(physics)
                        }
                    }
                }, label: {
                    Text("Reset Sphere")
                })
            }
        }
    }
}
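In case the scene setup matters: my understanding is that anchors like this only track inside an immersive space, so the view is presented roughly like this (the app and space names here are placeholders):

import SwiftUI

@main
struct AnchorLabApp: App {
    var body: some Scene {
        // Placeholder names; the point is just that FloorExample runs in an ImmersiveSpace.
        ImmersiveSpace(id: "FloorLab") {
            FloorExample()
        }
    }
}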