Has anyone gotten their 3D models to render in separate windows? I tried following the code in the video for creating a separate window group, but I get a ton of obscure errors. I was able to get the models to render in my 2D windows, but as soon as I try making a separate window group the errors appear.
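Not a definitive answer, but here is a minimal sketch of the usual setup: a second WindowGroup with the .volumetric window style, opened by id. All type names, ids, and the asset name are placeholders.

```swift
import SwiftUI
import RealityKit

@main
struct ExampleApp: App {  // hypothetical app; names are placeholders
    var body: some Scene {
        // The ordinary 2D window.
        WindowGroup(id: "main") {
            ContentView()
        }

        // A separate window with depth for 3D content.
        WindowGroup(id: "modelViewer") {
            ModelViewer()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}

struct ModelViewer: View {
    var body: some View {
        RealityView { content in
            // "toy_biplane" stands in for whatever asset you are loading.
            if let model = try? await ModelEntity(named: "toy_biplane") {
                content.add(model)
            }
        }
    }
}
```

The second window is then opened from the 2D one via `@Environment(\.openWindow)` and `openWindow(id: "modelViewer")`.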
Hi,
I implemented it as shown in the link below, but it does not animate.
https://developer.apple.com/videos/play/wwdc2023/10080/?time=1220
The following message was displayed:
No bind target found for played animation.
import SwiftUI
import RealityKit

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            if let entity = try? await ModelEntity(named: "toy_biplane_idle") {
                let bounds = entity.model!.mesh.bounds.extents
                entity.components.set(CollisionComponent(shapes: [.generateBox(size: bounds)]))
                entity.components.set(HoverEffectComponent())
                entity.components.set(InputTargetComponent())

                if let toy = try? await ModelEntity(named: "toy_drummer_idle") {
                    let orbit = OrbitAnimation(
                        name: "orbit",
                        duration: 30,
                        axis: [0, 1, 0],
                        startTransform: toy.transform,
                        bindTarget: .transform,
                        repeatMode: .repeat)
                    if let animation = try? AnimationResource.generate(with: orbit) {
                        toy.playAnimation(animation)
                    }
                    content.add(toy)
                }
                content.add(entity)
            }
        }
    }
}
Is this a known bug, or is there a fundamental misunderstanding on my part?
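I don't know whether it's a bug, but one workaround worth trying (an untested assumption on my part, not a confirmed fix) is to play the orbit on a bare parent entity rather than on the loaded model, so the animation's bind target is exactly the entity it was generated for:

```swift
// Sketch: animate a plain pivot entity and let the model ride along as a child.
let pivot = Entity()
pivot.addChild(toy)

let orbit = OrbitAnimation(
    name: "orbit",
    duration: 30,
    axis: [0, 1, 0],
    startTransform: pivot.transform,
    bindTarget: .transform,
    repeatMode: .repeat)

if let animation = try? AnimationResource.generate(with: orbit) {
    pivot.playAnimation(animation)
}
content.add(pivot)
```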
In the screenshot I've attached below I would expect the blue box to be perpendicular to the floor. It is the yAxisEntity in my code, which I instantiate with a mesh of height 3. Instead it runs parallel to the floor, along what I'd expect to be the z axis.
Here is my code
struct ImmersiveContentDebugView: View {
    @Environment(ViewModel.self) private var model

    @State var wallAnchor: AnchorEntity = {
        return AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: SIMD2<Float>(0.1, 0.1)))
    }()

    @State var originEntity: Entity = {
        let originMesh = MeshResource.generateSphere(radius: 0.2)
        return ModelEntity(mesh: originMesh, materials: [SimpleMaterial(color: .orange, isMetallic: false)])
    }()

    @State var xAxisEntity: Entity = {
        let line = MeshResource.generateBox(width: 3, height: 0.1, depth: 0.1)
        return ModelEntity(mesh: line, materials: [SimpleMaterial(color: .red, isMetallic: false)])
    }()

    @State var yAxisEntity: Entity = {
        let line = MeshResource.generateBox(width: 0.1, height: 3, depth: 0.1)
        return ModelEntity(mesh: line, materials: [SimpleMaterial(color: .blue, isMetallic: false)])
    }()

    @State var zAxisEntity: Entity = {
        let line = MeshResource.generateBox(width: 0.1, height: 0.1, depth: 3)
        return ModelEntity(mesh: line, materials: [SimpleMaterial(color: .green, isMetallic: false)])
    }()

    var body: some View {
        RealityView { content in
            content.add(wallAnchor)
            wallAnchor.addChild(originEntity)
            wallAnchor.addChild(xAxisEntity)
            wallAnchor.addChild(yAxisEntity)
            wallAnchor.addChild(zAxisEntity)
        }
    }
}
And here is what the simulator renders:
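A possible explanation (an assumption about the anchor's convention, not something I've confirmed in the docs): for a vertical-plane AnchorEntity, local +Y is the plane normal, i.e. it points out of the wall, which is exactly "parallel to the floor". World-up then lies along the anchor's local z direction. If that's the cause, a sketch of a fix is to rotate the debug entities a quarter turn about x:

```swift
import simd
import RealityKit

// Assumption: the wall anchor's +Y is the wall normal. A quarter turn about x
// swaps each entity's local y and z, so the box built tall along y now runs
// up the wall instead of out of it.
let uprightFix = simd_quatf(angle: .pi / 2, axis: SIMD3<Float>(1, 0, 0))
for entity in [originEntity, xAxisEntity, yAxisEntity, zAxisEntity] {
    entity.orientation = uprightFix
}
```

The sign of the angle may need flipping depending on which way the wall faces.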
One thing that was not very clear to me in the WWDC videos on visionOS app development:
If I want to trigger an action (say, changing the scene) based on the user's position relative to an object, is that possible?
Example: if the user comes too close to an object, it starts playing an animation.
Reference video:
wwdc2023-10080
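One approach (a sketch, assuming an immersive space is open; the function name, threshold, and polling interval are all placeholders) is to read the device pose from ARKit's WorldTrackingProvider and compare it with the object's world position:

```swift
import ARKit
import RealityKit
import simd
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

/// Polls the headset position and fires `onApproach` whenever the user
/// is within `threshold` meters of `target`.
func watchProximity(to target: Entity,
                    threshold: Float = 0.5,
                    onApproach: @escaping () -> Void) async throws {
    try await session.run([worldTracking])
    while !Task.isCancelled {
        if let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
            // Translation column of the device (head) transform.
            let t = device.originFromAnchorTransform.columns.3
            let headPosition = SIMD3<Float>(t.x, t.y, t.z)
            if simd_distance(headPosition, target.position(relativeTo: nil)) < threshold {
                onApproach()  // e.g. play an animation or switch scenes
            }
        }
        try await Task.sleep(for: .milliseconds(100))
    }
}
```

Note that ARKit data on visionOS is only delivered while an ImmersiveSpace is open; in a plain window `queryDeviceAnchor` returns nothing.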