I used the extensions above and got the classification for each face. But how do I apply a unique material based on the classification? For example, I want the faces of a table to use an unlit material, walls to use a simple material, and so on. Can I break the mesh into several sub-meshes based on the classification value and then assign them as MeshResources for entities?
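To make the question concrete, here is a rough, untested sketch of what I mean by splitting a MeshAnchor into per-classification sub-meshes. The materialFor(_:) helper is a placeholder, and I'm assuming the classification buffer holds one UInt8 per face (as on iOS), that face indices are 32-bit, and that the SceneReconstructionProvider was created with the classification mode enabled:

import ARKit
import RealityKit
import UIKit

// Untested sketch: bucket a MeshAnchor's triangles by their per-face classification
// and build one ModelEntity per classification.
@MainActor
func entityGroupedByClassification(for anchor: MeshAnchor) throws -> Entity {
    let geometry = anchor.geometry

    // Copy vertex positions out of the Metal buffer.
    let vertexSource = geometry.vertices
    let vertexBase = vertexSource.buffer.contents().advanced(by: vertexSource.offset)
    var positions: [SIMD3<Float>] = []
    positions.reserveCapacity(vertexSource.count)
    for i in 0..<vertexSource.count {
        positions.append(vertexBase.advanced(by: i * vertexSource.stride)
            .assumingMemoryBound(to: SIMD3<Float>.self).pointee)
    }

    // Triangle indices, three per face (assuming 32-bit indices).
    let faces = geometry.faces
    let indexBase = faces.buffer.contents().assumingMemoryBound(to: UInt32.self)

    // Per-face classification values (nil if classification wasn't requested).
    guard let classSource = geometry.classifications else { return Entity() }
    let classBase = classSource.buffer.contents().advanced(by: classSource.offset)

    // Group each face's indices by its classification value (assuming UInt8 per face).
    var indicesByClass: [UInt8: [UInt32]] = [:]
    for face in 0..<faces.count {
        let value = classBase.advanced(by: face * classSource.stride)
            .assumingMemoryBound(to: UInt8.self).pointee
        for corner in 0..<3 {
            indicesByClass[value, default: []].append(indexBase[face * 3 + corner])
        }
    }

    // Build one sub-mesh per classification; they all share the same position array.
    let root = Entity()
    root.transform = Transform(matrix: anchor.originFromAnchorTransform)
    for (value, indices) in indicesByClass {
        var descriptor = MeshDescriptor(name: "classification-\(value)")
        descriptor.positions = MeshBuffer(positions)
        descriptor.primitives = .triangles(indices)
        let mesh = try MeshResource.generate(from: [descriptor])
        // Assuming the buffer's raw values line up with MeshClassification's raw values.
        let classification = MeshAnchor.MeshClassification(rawValue: Int(value)) ?? .none
        root.addChild(ModelEntity(mesh: mesh, materials: [materialFor(classification)]))
    }
    return root
}

// Placeholder material lookup; swap in whatever materials you actually want.
func materialFor(_ classification: MeshAnchor.MeshClassification) -> any Material {
    switch classification {
    case .table: return UnlitMaterial(color: .brown)
    case .wall:  return SimpleMaterial(color: .gray, isMetallic: false)
    default:     return SimpleMaterial(color: .white, isMetallic: false)
    }
}

The idea would be to call something like this from the mesh-anchor update loop, in place of generating a single model entity per anchor.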
I realized that having headAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) right after running the AR session is problematic. I moved this call into the closure component in ImmersiveView, and it works fine now:

root.components.set(ClosureComponent(closure: { deltaTime in
    let headAnchor = tracking.worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
    let translation = ConvertTrackingTransform
        .GetHeadPosition(headAnchor)
    sphere.transform.translation = translation
    // print("Head position \(translation)")
}))
I also got scene reconstruction to work at the same time.
I'm having the same issue: world tracking is initialized, but it stops running. I also have scene reconstruction and hand tracking, but had both commented out.
Tracking script
import ARKit
import RealityKit
import SwiftUI
import QuartzCore

@Observable
@MainActor class Tracking {
    let arSession = ARKitSession()
    let handTracking = HandTrackingProvider()
    let worldTracking = WorldTrackingProvider()
    let sceneReconstruction = SceneReconstructionProvider()
    var latestLeftHand: HandAnchor?
    var meshAnchors = Entity()
    var latestRightHand: HandAnchor?
    var headAnchor: DeviceAnchor?

    func StartTracking() async {
        guard WorldTrackingProvider.isSupported else {
            print("WorldTrackingProvider is not supported on this device.")
            return
        }
        let meshGenerator = MeshAnchorGenerator(root: meshAnchors)
        do {
            print("head tracking \(worldTracking.state)")
            try await arSession.run([worldTracking])
            // try await arSession.run([handTracking, worldTracking, sceneReconstruction])
        } catch let error as ARKitSession.Error {
            print("Encountered an error while running providers: \(error.localizedDescription)")
        } catch let error {
            print("Encountered an unexpected error: \(error.localizedDescription)")
        }
        if worldTracking.state != .running {
            print("world tracking not running, stop the session.")
        }
        headAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        print("start head tracking")
        await meshGenerator.run(sceneReconstruction)
        print("start scene reconstruction")
        for await anchorUpdate in handTracking.anchorUpdates {
            switch anchorUpdate.anchor.chirality {
            case .left:
                self.latestLeftHand = anchorUpdate.anchor
            case .right:
                self.latestRightHand = anchorUpdate.anchor
            }
            print("hand tracking update")
        }
    }
}
ImmersiveView
import SwiftUI
import RealityKit
import RealityKitContent
import ARKit

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @Environment(Tracking.self) var tracking

    var body: some View {
        let meshAnchors = Entity()
        RealityView { content in
            // Add the initial RealityKit content
            // content.add(meshAnchors)
            let sphere = ModelEntity()
            if let mat = try? await ShaderGraphMaterial(
                named: "/Root/Material",
                from: "Blur",
                in: realityKitContentBundle
            ) {
                sphere.components.set([
                    ModelComponent(
                        mesh: .generateSphere(radius: 1),
                        materials: [mat]),
                    CollisionComponent(shapes: [
                        ShapeResource.generateSphere(radius: 0.02)
                    ], mode: .trigger)
                ])
            }
            let root = Entity()
            root.addChild(sphere)
            content.add(root)

            root.components.set(ClosureComponent(closure: { deltaTime in
                let translation = ConvertTrackingTransform
                    .GetHeadPosition(tracking.headAnchor)
                sphere.transform.translation = translation
                // print("Head position \(translation)")
            }))
        }
        .task {
            await tracking.StartTracking()
        }
    }
}
App
ImmersiveSpace(id: appModel.immersiveSpaceID) {
    ImmersiveView()
        .environment(appModel)
        .environment(Tracking())
        .onAppear {
            appModel.immersiveSpaceState = .open
            avPlayerViewModel.play()
        }
        .onDisappear {
            appModel.immersiveSpaceState = .closed
            avPlayerViewModel.reset()
        }
}
.immersionStyle(selection: .constant(.mixed), in: .full, .mixed)
}
Thanks for the response! If I were to implement a design as below:
where my hand can grab entity A, which sits inside a transparent entity B, and a spotlight effect fires when the hand gets close to entity B's bounds. It seems like having both entity A and entity B as input targets won't work? Should I shoot a raycast from the index finger and link the hit position on entity B's collider to a custom shader material? I'm still quite new to Swift and wondering if there is a simpler solution.
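In case it clarifies the raycast idea, here is the rough, untested shape of what I'm imagining. "spotCenter" is a placeholder parameter name that my shader graph would have to define, and the scene would come from something like root.scene inside the ClosureComponent:

import ARKit
import RealityKit
import simd

// Untested sketch: cast a ray from the index finger tip toward entity B and feed
// the hit point into a custom ShaderGraphMaterial parameter for the spotlight.
@MainActor
func updateSpotlight(hand: HandAnchor?, entityB: ModelEntity, scene: RealityKit.Scene) {
    guard let hand,
          let tip = hand.handSkeleton?.joint(.indexFingerTip) else { return }

    // Index finger tip position in world space.
    let tipTransform = hand.originFromAnchorTransform * tip.anchorFromJointTransform
    let origin = SIMD3<Float>(tipTransform.columns.3.x,
                              tipTransform.columns.3.y,
                              tipTransform.columns.3.z)

    // Aim the ray from the finger tip toward entity B's center.
    let direction = normalize(entityB.position(relativeTo: nil) - origin)

    // Raycast against the scene's colliders and keep the first hit on entity B.
    let hits = scene.raycast(origin: origin, direction: direction,
                             length: 2.0, query: .all)
    guard let hit = hits.first(where: { $0.entity == entityB }) else { return }

    // Push the hit position into the custom shader ("spotCenter" is a placeholder
    // parameter that the shader graph would need to expose).
    if var material = entityB.model?.materials.first as? ShaderGraphMaterial {
        try? material.setParameter(name: "spotCenter", value: .simd3Float(hit.position))
        entityB.model?.materials = [material]
    }
}

I'd call this every frame from the ClosureComponent, passing in tracking.latestRightHand (or the left hand) and the root entity's scene.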