RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.



Raycasting to Surface Returns Inconsistent Results?
On Xcode 15.1.0b2, when raycasting to a collision surface, the collisions appear to be inconsistent. Here are my results: green cylinders are hits, and red cylinders are raycasts that returned no collision results. NOTE: This raycast is triggered by a tap gesture recognizer registering on the cube... so it's weird to me that the tap would work, but the raycast not collide with anything. Is this something that just performs poorly in the simulator?

My raycasting command is:

    guard let pose = self.arSessionController.worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        print("FAILED TO GET POSITION")
        return
    }
    let transform = Transform(matrix: pose.originFromAnchorTransform)
    let locationOfDevice = transform.translation
    let raycastResult = scene.raycast(from: locationOfDevice, to: destination, relativeTo: nil)

where destination is retrieved in a tap gesture handler via:

    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

Any findings would be appreciated.
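One thing worth ruling out (a hedged sketch, not a confirmed fix): scene.raycast also takes query and mask parameters, and a cast can come back empty if the target's CollisionGroup is filtered out or its CollisionComponent shape doesn't cover the tapped point. A minimal sketch that passes both explicitly and inspects the hit, assuming the same scene, locationOfDevice, and destination as in the post:

```swift
import RealityKit

// Hedged sketch: pass `query` and `mask` explicitly so nothing is filtered
// out by collision groups. `scene`, `locationOfDevice`, and `destination`
// are assumed to exist as in the post above.
let hits = scene.raycast(
    from: locationOfDevice,
    to: destination,
    query: .nearest,    // or .all to see every surface along the segment
    mask: .all,         // ensure the cube's collision group is included
    relativeTo: nil     // world space
)

if let hit = hits.first {
    print("Hit \(hit.entity.name) at \(hit.position), distance \(hit.distance)")
} else {
    print("No collision: check the target's CollisionComponent shape")
}
```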
Replies: 2 · Boosts: 0 · Views: 464 · Nov ’23
RealityKit: ECS System.update() method not being called every frame on hardware
Hi, I'm trying to use the ECS System class, and noticing on hardware that update() is not being called every frame, as it is in the Simulator and as described in the documentation. To reproduce, simply create a System like so:

    class MySystem: System {
        var count: Int = 0

        required init(scene: Scene) { }

        func update(context: SceneUpdateContext) {
            count = count + 1
            print("Update \(count)")
        }
    }

Then, inside the RealityView, register the System with:

    MySystem.registerSystem()

Notice that while it'll reliably be called every frame in the Simulator, on hardware it'll run for a few seconds, freeze, and then only be called when indirectly doing something like moving a window or performing other visionOS actions, analogous to those that call "invalidate" to refresh a window in other operating systems. Thanks in advance, -Rob.
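If the root cause is visionOS pausing the render loop when nothing in the scene changes (which would match the "wakes up when a window moves" symptom), one hedged workaround is to keep a trivial repeating animation alive so RealityKit keeps rendering, and therefore keeps ticking registered Systems. A sketch, with all names illustrative:

```swift
import RealityKit

// Hedged workaround sketch: visionOS appears to pause the render loop when
// nothing in the scene changes, which would also stop System.update() from
// being called. Keeping a trivial repeating animation running forces a
// redraw every frame.
func keepSceneAlive(in content: RealityViewContent) {
    let driver = Entity()
    content.add(driver)

    // A tiny repeating transform animation whose only job is to keep
    // RealityKit rendering (and therefore ticking registered Systems).
    var target = driver.transform
    target.translation.x += 0.0001
    if let animation = try? AnimationResource.generate(
        with: FromToByAnimation<Transform>(to: target,
                                           duration: 1.0,
                                           bindTarget: .transform)
    ) {
        driver.playAnimation(animation.repeat())
    }
}
```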
Replies: 1 · Boosts: 2 · Views: 425 · Nov ’23
How to access custom Mesh buffers inside a custom ShaderGraphMaterial?
My app is built using SceneKit/Metal on iOS, iPadOS, and tvOS, and users have generated tons of content. To bring that content to visionOS with fidelity, I have to port a particle emitter system. I have been able to successfully re-use my Metal code via CompositorServices, but I'd like to get it working in RealityKit for all the UX/UI affordances it can provide. To that end, I have also been successful in getting the particle geometry data rendering via a Component/System that replaces mesh geometry content in real time.

The last major step is to find a way to color and texture each particle via a ShaderGraphMaterial. Like any good particle emitter system, particle colors can change and vary over time. In Metal, the shader looks like this:

    fragment half4 CocosFragmentFunctionDefaultTextureColor(const CocosFragData in [[stage_in]],
                                                            texture2d<half> cc_MainTexture [[texture(0)]],
                                                            sampler cc_MainTextureSampler [[sampler(0)]]) {
        return in.color * cc_MainTexture.sample(cc_MainTextureSampler, in.texCoord);
    }

Basically, I multiply a texture sample by a vertex color. Fairly simple stuff in GL shader-speak. So, how do I achieve this via ShaderGraphMaterial? In another post, I saw that I can pass in vertex colors via a custom mesh buffer like so:

    let vertexColor: MeshBuffers.Semantic = MeshBuffers.custom("vertexColor", type: SIMD4<Float>.self)
    let meshResource = MeshDescriptor()
    meshResource[vertexColor] = ...

Unfortunately, that doesn't appear to work for me. I'm sure I missed a step, but what I really want/need is a way to access this custom buffer from inside a ShaderGraphMaterial and multiply it against a sample of the texture. How? Any pointers, sample code, or a sample Reality Composer Pro project would be most appreciated!
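For the texture half of the equation, a hedged sketch of how a Reality Composer Pro shader-graph material can be loaded and fed a texture parameter at runtime. The material path, file name, and the "MainTexture" parameter name are illustrative assumptions, and reading a custom per-vertex buffer would still have to be wired up on the Reality Composer Pro graph side:

```swift
import RealityKit
import RealityKitContent   // assumption: the Reality Composer Pro package

// Hedged sketch: load a shader-graph material authored in Reality Composer
// Pro and feed it the particle texture as a material parameter. All names
// below are hypothetical placeholders.
func makeParticleMaterial() async throws -> ShaderGraphMaterial {
    var material = try await ShaderGraphMaterial(
        named: "/Root/ParticleMaterial",   // hypothetical graph path
        from: "Scene.usda",
        in: realityKitContentBundle
    )
    let texture = try TextureResource.load(named: "particle_atlas") // hypothetical asset
    try material.setParameter(name: "MainTexture",
                              value: .textureResource(texture))
    return material
}
```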
Replies: 1 · Boosts: 3 · Views: 524 · Nov ’23
How to implement a drag/grab effect for an entity in a RealityView?
I want to implement a simple object-grabbing effect for an entity defined inside a RealityView. I'm not sure what the recommended way is. The following can move the object, but the coordinates seem to be messed up:

    RealityView { content in
        self.myEntity = ModelEntity(...)
    }
    .gesture(
        DragGesture(minimumDistance: 0)
            .onChanged { value in
                let trans = value.translation3D
                self.myEntity.move(
                    to: Transform(
                        scale: SIMD3(repeating: 1.0),
                        rotation: simd_quaternion(0, 0, 0, 1),
                        translation: SIMD3<Float>(Float(trans.x), Float(trans.y), -Float(trans.z))),
                    relativeTo: cards[item])
            }
    )

My wild guess is that value.translation3D is defined in view space, while move(to:) should use 3D scene space. I saw RealityCoordinateSpaceConverting but have no idea how to use it. Any suggestions/links/examples are appreciated :)
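A hedged sketch of the usual pattern: target the gesture at the entity (which needs input-target and collision components), then convert the gesture's 3D location from SwiftUI view space into RealityKit scene space before moving the entity. Names are illustrative:

```swift
import SwiftUI
import RealityKit

// Hedged sketch: drag an entity by converting the gesture location into
// scene space. The entity needs InputTargetComponent and CollisionComponent
// to receive gestures.
struct DraggableView: View {
    var body: some View {
        RealityView { content in
            let entity = ModelEntity(mesh: .generateBox(size: 0.2))
            entity.components.set(InputTargetComponent())
            entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
            content.add(entity)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert from the gesture's local space to scene space.
                    let scenePosition = value.convert(value.location3D,
                                                      from: .local,
                                                      to: .scene)
                    // Set a world-space position regardless of parenting.
                    value.entity.setPosition(scenePosition, relativeTo: nil)
                }
        )
    }
}
```

Note this snaps the entity's origin to the drag location; offsetting by the position at gesture start would give a smoother grab.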
Replies: 0 · Boosts: 0 · Views: 292 · Nov ’23
Check whether an Entity is currently undergoing any collisions
I'm working on a game where it would be helpful to know whether a given Entity is currently colliding with any other Entities. The collider shape is not guaranteed to be simple; each is constructed from multiple ShapeResources for accuracy. The Entity in question does not have a physics body, can be dragged freely, and should be able to overlap with other Entities with or without physics bodies, but all with CollisionComponents.

The problem I'm running into is that using CollisionEvents.Began and CollisionEvents.Ended creates a situation where an Entity can be dragged over another, briefly switches to my "overlapping" state (the red semitransparent object), but then immediately switches back as soon as the object is dragged any further (the pink semitransparent object), indicating CollisionEvents.Ended is being called while the Entities are still colliding. Both should be in the "overlapping" state on the left.

tl;dr: Is there a simple way I'm unaware of to check whether there are any currently active collisions on an Entity? Or some other way of thinking about this that may be beneficial?
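There doesn't appear to be a built-in "is currently colliding" query, so the usual approach is bookkeeping over the Began/Ended events. A hedged sketch (it still depends on the events firing correctly, which is exactly what the post doubts, but it at least centralizes the state):

```swift
import RealityKit

// Hedged sketch: track currently-active collisions by recording Began/Ended
// events per entity pair. This is app-level bookkeeping, not an official API.
final class CollisionTracker {
    private var activeCollisions: [ObjectIdentifier: Set<ObjectIdentifier>] = [:]
    private var subscriptions: [EventSubscription] = []

    func start(in content: RealityViewContent) {
        subscriptions.append(content.subscribe(to: CollisionEvents.Began.self, on: nil, componentType: nil) { [weak self] event in
            self?.activeCollisions[ObjectIdentifier(event.entityA), default: []]
                .insert(ObjectIdentifier(event.entityB))
        })
        subscriptions.append(content.subscribe(to: CollisionEvents.Ended.self, on: nil, componentType: nil) { [weak self] event in
            self?.activeCollisions[ObjectIdentifier(event.entityA)]?
                .remove(ObjectIdentifier(event.entityB))
        })
    }

    /// True if any collision has Begun but not Ended for this entity,
    /// checking both sides of each recorded pair.
    func isColliding(_ entity: Entity) -> Bool {
        let id = ObjectIdentifier(entity)
        if let others = activeCollisions[id], !others.isEmpty { return true }
        return activeCollisions.values.contains { $0.contains(id) }
    }
}
```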
Replies: 1 · Boosts: 0 · Views: 262 · Oct ’23
Anchoring a view in visionOS
Hi community, I'm developing a visionOS app where I want to anchor a View so that it follows the user's movement. The View contains clickable Buttons. However, I've been looking through docs/wikis, and it seems that currently we can only anchor an Entity instance. Any ideas about how I can anchor a view?

References:
https://www.youtube.com/watch?v=NZ-TJ8Ln7NY
https://www.reddit.com/r/visionosdev/comments/152ycqr/using_anchors_in_the_vision_os_simulator/
https://developer.apple.com/documentation/realitykit/entity
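A hedged sketch of one direction: SwiftUI views themselves can't be anchored, but a RealityView attachment (a SwiftUI view hosted as an entity) can be parented to an AnchorEntity, for example one tracking the head. Whether head-anchored content reliably receives input on visionOS is a separate question worth testing:

```swift
import SwiftUI
import RealityKit

// Hedged sketch: host a SwiftUI view as a RealityView attachment and parent
// it to a head-tracking AnchorEntity so it follows the user. Names are
// illustrative.
struct FollowingPanelView: View {
    var body: some View {
        RealityView { content, attachments in
            let headAnchor = AnchorEntity(.head)
            content.add(headAnchor)
            if let panel = attachments.entity(for: "panel") {
                panel.position = [0, 0, -1]   // 1 m in front of the head
                headAnchor.addChild(panel)
            }
        } attachments: {
            Attachment(id: "panel") {
                Button("Tap me") { print("tapped") }
                    .glassBackgroundEffect()
            }
        }
    }
}
```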
Replies: 1 · Boosts: 0 · Views: 790 · Oct ’23
How to set a size for an entity that is composed of a 3D model?
Hello everyone, I'm facing a challenge related to resizing an entity built from a 3D model. Although I can manipulate the size of the mesh, the entity's overall dimensions seem to remain static and unchangeable. Here's a snippet of my code:

    let giftEntity = try await Entity(named: "gift")

I've come across an operator that allows for scaling the entity. However, I'm uncertain about the appropriate value to use, especially since the RealityView is encapsulated within an HStack, which is further nested inside a ScrollView. Would anyone have experience or guidance on this matter? Any recommendations or resources would be invaluable. Thank you in advance for your assistance!
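A hedged sketch of a common approach: since a loaded model's mesh dimensions are fixed, "resizing" is done via the entity's scale, and a target size in meters can be derived from its visual bounds. Names are illustrative:

```swift
import RealityKit

// Hedged sketch: compute a uniform scale so the entity's visual bounds fit
// a target size (in meters), independent of how large the source model is.
func fit(_ entity: Entity, toLargestDimension target: Float) {
    let bounds = entity.visualBounds(relativeTo: nil)
    let size = bounds.extents                       // width, height, depth in meters
    let largest = max(size.x, max(size.y, size.z))
    guard largest > 0 else { return }
    entity.scale *= SIMD3<Float>(repeating: target / largest)
}

// Usage with the post's entity:
// let giftEntity = try await Entity(named: "gift")
// fit(giftEntity, toLargestDimension: 0.3)   // fit within roughly 30 cm
```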
Replies: 1 · Boosts: 0 · Views: 810 · Oct ’23
Placing Item on plane, position is great, but trouble with rotation
The Goal

My goal is to place an item where the user taps on a plane, and have that item match the outward-facing normal vector where the user tapped. In beta 3, a 3D Spatial Tap Gesture now returns an accurate Location3D, so determining the position to place an item is working great. I simply do:

    let worldPosition: SIMD3<Float> = value.convert(value.location3D, from: .local, to: .scene)

The Problem

Now, I notice that my entities aren't oriented correctly: the placed item always 'faces' the camera. So if the camera isn't looking straight at the target plane, then the orientation of the new entity is off. If I retrieve the transform of my newly placed item, it says the rotation relative to nil is 0,0,0, which... doesn't look correct? I know I'm dealing with the different coordinate systems of the plane being tapped, the world coordinate system, and the item being placed, and I'm getting a bit lost in it all. Not to mention my API intuition is still pretty low, so quats are still new to me. So, I'm curious: what rotation information can I use to "correct" the placed entity's orientation?

What I tried

I've tried investigating the tap-target entity like so:

    let rotationRelativeToWorld = value.entity.convert(transform: value.entity.transform, to: nil).rotation

I believe this returns the rotation of the "plane entity" the user tapped, relative to the world. While that gets me the following, I'm not sure if it's useful:

    rotationRelativeToWorld:
    ▿ simd_quatf(real: 0.7071068, imag: SIMD3<Float>(-0.7071067, 6.600024e-14, 6.600024e-14))
      ▿ vector : SIMD4<Float>(-0.7071067, 6.600024e-14, 6.600024e-14, 0.7071068)

If anyone has better intuition than me about the coordinate spaces involved, I would appreciate some help. Thanks!
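A hedged sketch of the usual fix: express the plane's normal in world space, then build a quaternion that rotates the placed entity's up axis onto that normal. Which local axis is the normal depends on how the plane entity is authored, so the +Y choice below is an assumption to verify (that quat dump above, a 90° rotation about X, suggests a plane whose local +Z points up in the world):

```swift
import RealityKit

// Hedged sketch: orient a placed entity to a tapped surface's outward
// normal. The +Y axis choice is an assumption about the plane's authoring.
func place(_ placed: Entity, on tappedPlane: Entity, at worldPosition: SIMD3<Float>) {
    // The plane's local up axis expressed in world space.
    let worldNormal = normalize(tappedPlane.convert(direction: [0, 1, 0], to: nil))

    // Quaternion that rotates the entity's +Y onto the surface normal.
    placed.setOrientation(simd_quatf(from: [0, 1, 0], to: worldNormal), relativeTo: nil)
    placed.setPosition(worldPosition, relativeTo: nil)
}
```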
Replies: 2 · Boosts: 0 · Views: 773 · Oct ’23
Is it possible to get 6DOF information from PhotogrammetrySession.Pose with iOS's ObjectCaptureSession?
I am trying to extract the 6DOF (six degrees of freedom) information from PhotogrammetrySession.Pose using the ObjectCaptureSession in iOS. In the API documentation for PhotogrammetrySession.Pose, it is mentioned that it supports iOS 17 and later. However, in the GuidedCapture sample program, the following comment is written:

    case .modelEntity(_, _), .bounds, .poses, .pointCloud:
        // Not supported yet
        break

Does this mean it's impossible to get 6DOF information from PhotogrammetrySession.Pose at this time? Or is there any other way to achieve this? Any guidance would be greatly appreciated.
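For reference, a hedged sketch of where pose data would surface if .poses requests are honored; whether iOS 17 actually delivers them is exactly what the post questions, and the GuidedCapture comment suggests it may not:

```swift
import RealityKit

// Hedged sketch: request .poses from a PhotogrammetrySession and watch the
// output stream for the result. Assumes an already-configured session.
func requestPoses(session: PhotogrammetrySession) async throws {
    try session.process(requests: [.poses])
    for try await output in session.outputs {
        if case .requestComplete(_, let result) = output,
           case .poses(let poses) = result {
            // Each PhotogrammetrySession.Pose carries a 4x4 transform from
            // which the full 6DOF can be read, e.g.:
            //   let t = Transform(matrix: pose.transform)
            //   print(t.translation, t.rotation)
            print("Received poses: \(poses)")
        }
    }
}
```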
Replies: 0 · Boosts: 0 · Views: 434 · Oct ’23
Collision between entities that have anchors is not working properly?
Hi, I'm new to RealityKit and still learning. I'm trying to implement a feature on visionOS that triggers specific logic when the user's head comes into contact with another entity. When two entities are added directly to the RealityView, I am able to subscribe to their collision events correctly, but when I add one of the entities to an AnchorEntity that is anchored on the user's head, I no longer receive the collision events. I also found that if an entity conforms to the HasAnchor protocol, it cannot participate in collision detection normally either. Why does this happen? Is this a feature or a bug?

Here is how I subscribe to collision events:

    collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: nil, componentType: nil) { collisionEvent in
        print("💥 Collision between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
    }

The following two entities collide fine:

    @State private var anotherEntity: Entity = CollisionEntity(model: MeshResource.generateSphere(radius: 1),
                                                               materials: [SimpleMaterial(color: .white, isMetallic: false)],
                                                               position: [-2, 1, 0])
    @State private var headEntity: Entity = CollisionEntity(model: MeshResource.generateSphere(radius: 0.5),
                                                            materials: [SimpleMaterial(color: .yellow, isMetallic: false)],
                                                            position: [0, -0.35, -3])

But with anchoring, I can't get collision notifications:

    @State private var anotherEntity: Entity = CollisionEntity(model: MeshResource.generateSphere(radius: 1),
                                                               materials: [SimpleMaterial(color: .white, isMetallic: false)],
                                                               position: [-2, 1, 0])
    @State private var headEntity: Entity = {
        let headAnchor = AnchorEntity(.head)
        headAnchor.addChild(
            CollisionEntity(model: MeshResource.generateSphere(radius: 0.5),
                            materials: [SimpleMaterial(color: .yellow, isMetallic: false)],
                            position: [0, -0.35, -3]))
        return headAnchor
    }()

Any information or suggestions are welcome, thanks!
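A hedged workaround sketch: content under an AnchorEntity may be simulated in its own space, which could explain the missing events. Instead of AnchorEntity(.head), the "head" entity can live in the main scene and be moved every update from ARKit's device pose, so it participates in the same collision space as everything else. `worldTracking` here is an assumed, already-running WorldTrackingProvider:

```swift
import ARKit
import QuartzCore
import RealityKit

// Hedged sketch: drive a regular scene entity from the device pose instead
// of parenting it under a head AnchorEntity. Call this from a System's
// update() or a per-frame callback.
func updateHeadEntity(_ headEntity: Entity, worldTracking: WorldTrackingProvider) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
        return
    }
    headEntity.setTransformMatrix(device.originFromAnchorTransform, relativeTo: nil)
}
```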
Replies: 2 · Boosts: 0 · Views: 579 · Oct ’23
SharePlay (GroupActivities) vs MultipeerConnectivityService in visionOS Apps
SharePlay & Group Activities I was able to implement entity position synchronisation via SharePlay (Group Activities) in my visionOS app by following the tutorials on SharePlay in the "Draw Together" app from these WWDC sessions: https://developer.apple.com/wwdc21/10187 https://developer.apple.com/wwdc22/10140 While referencing the sample code at: https://developer.apple.com/documentation/groupactivities/drawing_content_in_a_group_session MultipeerConnectivityService However, it seems that RealityKit has something called MultipeerConnectivityService for Entity position synchronisation and it seems to be a pretty robust solution that will sync not only positions but also other things like Codable components. 🤔 See docs at: https://developer.apple.com/documentation/realitykit/multipeerconnectivityservice Call for help Can anyone share example code that implements MultipeerConnectivityService ? I wonder if this is the recommended approach by Apple? I must say, writing custom messages to sync the Entity positions via Group Activities was very hard 😅 I was just thinking what I should do for all the entity components now...
Replies: 1 · Boosts: 0 · Views: 451 · Oct ’23
PhotogrammetrySession crashes on iPhone 15 Pro Max
My app NFC.cool is using the Object Capture API, and I fully developed the feature with an iPhone 13 Pro Max. On that phone everything works fine. Now I have a new iPhone 15 Pro Max, and I get crashes when the photogrammetry session is at around 1%. This happens when I have completed all three scan passes. When I prematurely end a scan with around 10 images, the reconstruction runs fine and I get a 3D model.

    com.apple.corephotogrammetry.tracking:0 (40): EXC_BAD_ACCESS (code=1, address=0x0)

Anyone else seeing these crashes?
Replies: 1 · Boosts: 0 · Views: 478 · Oct ’23
Convert Blender to .usdz
I have a Blender project; for simplicity, a black hole. The way that it is modeled is a sphere on top of a round plane, and then a bunch of effects on that. I have tried multiple ways:

- convert to USD from the file menu
- convert to OBJ and then import

But all of them have resulted in just the body, not any of the effects. Does anybody know how to do this properly? I seem to have no clue, except for going through Reality Composer Pro (which I planned on going through already, but modeling it there).
Replies: 0 · Boosts: 1 · Views: 427 · Oct ’23
How can I determine the precise starting position of the portal?
Hi, I've been working on a spatial image design, guided by this Apple developer video: https://developer.apple.com/videos/play/wwdc2023/10081?time=792. I've hit a challenge: I'm trying to position a label to the left of the portal. Although I've used an attachment for the label within the content, pinpointing the exact starting position of the portal to align the label is proving challenging. Any insights or suggestions would be appreciated.

Below is the URL of the image used:
https://cdn.polyhaven.com/asset_img/primary/rural_asphalt_road.png?height=780

    struct PortalView: View {
        let radius = Float(0.3)
        var world = Entity()
        var portal = Entity()

        init() {
            world = makeWorld()
            portal = makePortal(world: world)
        }

        var body: some View {
            RealityView { content, attachments in
                content.add(world)
                content.add(portal)
                if let attachment = attachments.entity(for: 0) {
                    portal.addChild(attachment)
                    attachment.position.x = -radius/2.0
                    attachment.position.y = radius/2.0
                }
            } attachments: {
                Attachment(id: 0) {
                    Text("Title")
                        .background(Color.red)
                }
            }
        }

        func makeWorld() -> Entity {
            let world = Entity()
            world.components[WorldComponent.self] = .init()

            let imageEntity = Entity()
            var material = UnlitMaterial()
            let texture = try! TextureResource.load(named: "road")
            material.color = .init(texture: .init(texture))
            imageEntity.components.set(
                ModelComponent(mesh: .generateSphere(radius: radius), materials: [material])
            )
            imageEntity.position = .zero
            imageEntity.scale = .init(x: -1, y: 1, z: 1)
            world.addChild(imageEntity)
            return world
        }

        func makePortal(world: Entity) -> Entity {
            let portal = Entity()
            let portalMaterial = PortalMaterial()
            let planeMesh = MeshResource.generatePlane(width: radius, height: radius, cornerRadius: 0)
            portal.components[ModelComponent.self] = .init(mesh: planeMesh, materials: [portalMaterial])
            portal.components[PortalComponent.self] = .init(target: world)
            return portal
        }
    }

    #Preview {
        PortalView()
    }
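One hedged observation that may answer the positioning question: generatePlane(width:height:) is centered on the entity's origin, so in the portal's local space the edges sit at plus or minus half the width and height. Rather than hard-coding radius, the edge can be read off the model's local bounds:

```swift
// Hedged sketch, using the `portal` entity from the post's PortalView.
if let model = portal.components[ModelComponent.self] {
    let bounds = model.mesh.bounds        // local-space bounding box
    let leftEdgeX = bounds.min.x          // equals -radius / 2 for this plane
    // e.g. attachment.position = [leftEdgeX, 0, 0] pins the label to the
    // portal's left edge; subtract the label's own width to sit outside it.
}
```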
Replies: 0 · Boosts: 0 · Views: 306 · Oct ’23
With RealityKit in visionOS, is it possible to display floating views near entities?
Hey folks, I'm wondering if I can make views float near some entities, right above their heads, and have them always face towards the user. Embarrassingly, I've spent some time going through the documentation, but haven’t come across any API that would allow me to achieve this. I thought there might be a component to attach a view, or some other mechanism to provide a callback of sorts, letting me create a view for an entity whenever needed. However, I haven't found any such API so far. Any insights or guidance would be super appreciated. Thank you!
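A hedged sketch of the closest built-in mechanism: RealityView attachments host SwiftUI views as entities, so a label can simply be parented above a model. Keeping it facing the user would additionally require re-orienting it toward the viewer (for example from ARKit's device pose each frame), which is left out here; all names are illustrative:

```swift
import SwiftUI
import RealityKit

// Hedged sketch: float a SwiftUI label above an entity by parenting a
// RealityView attachment to it.
struct LabeledEntityView: View {
    var body: some View {
        RealityView { content, attachments in
            let character = ModelEntity(mesh: .generateSphere(radius: 0.1))
            content.add(character)
            if let label = attachments.entity(for: "nameTag") {
                label.position = [0, 0.18, 0]   // float just above the "head"
                character.addChild(label)
            }
        } attachments: {
            Attachment(id: "nameTag") {
                Text("Hello!")
                    .padding(8)
                    .glassBackgroundEffect()
            }
        }
    }
}
```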
Replies: 1 · Boosts: 0 · Views: 491 · Oct ’23