RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

How do I use RoomAnchor?
What I want to do: I want to turn only the walls of a room into RealityKit entities that I can collide with, or turn into occlusion surfaces. This requires adding and maintaining RealityKit entities with mesh information from the RoomAnchor, and creating a collision shape from that mesh information.

What I've explored: A RoomAnchor can provide me with MeshAnchor.Geometry values that match only the "wall" portions of a room. I can use this mesh information to create RealityKit entities and add them to my immersive view. But those meshes don't come with UUIDs, so I'm not sure how I could know which entities' meshes need to be updated as the RoomAnchor is updated. As a result, I just keep adding duplicate wall entities. A RoomAnchor also provides me with the UUIDs of its plane anchors, but no way that I've discovered so far to connect those to the provided meshes.

Here is how I add the green walls from the RoomAnchor wall meshes. Note: I don't like that I need to wrap this in a Task to satisfy the async nature of making a shape from a mesh; I could be stuck with it, though. Warning: this code will keep adding walls, even if there are duplicates, and will likely cause performance issues :D

```swift
func updateRoom(_ anchor: RoomAnchor) async throws {
    print("ROOM ID: \(anchor.id)")
    anchor.geometries(of: .wall).forEach { mesh in
        Task {
            let newEntity = Entity()
            newEntity.components.set(InputTargetComponent())
            realityViewContent?.addEntity(newEntity)
            newEntity.components.set(PlacementUtilities.PlacementSurfaceComponent())
            collisionEntities[anchor.id]?.components.set(OpacityComponent(opacity: 0.2))
            collisionEntities[anchor.id]?.transform = Transform(matrix: anchor.originFromAnchorTransform)

            // Generate a mesh for the plane.
            do {
                let contents = MeshResource.Contents(planeGeometry: mesh)
                let meshResource = try MeshResource.generate(from: contents)
                // Make this plane occlude virtual objects behind it.
                // entity.components.set(ModelComponent(mesh: meshResource, materials: [OcclusionMaterial()]))
                collisionEntities[anchor.id]?.components.set(ModelComponent(
                    mesh: meshResource,
                    materials: [SimpleMaterial(color: .green, roughness: 1.0, isMetallic: false)]))
            } catch {
                print("Failed to create a mesh resource for a plane anchor: \(error).")
                return
            }

            // Generate a collision shape for the plane (for object placement and physics).
            var shape: ShapeResource? = nil
            do {
                let vertices = anchor.geometry.vertices.asSIMD3(ofType: Float.self)
                shape = try await ShapeResource.generateStaticMesh(
                    positions: vertices,
                    faceIndices: anchor.geometry.faces.asUInt16Array())
            } catch {
                print("Failed to create a static mesh for a plane anchor: \(error).")
                return
            }

            if let shape {
                let collisionGroup = PlaneAnchor.verticalCollisionGroup
                collisionEntities[anchor.id]?.components.set(CollisionComponent(
                    shapes: [shape],
                    isStatic: true,
                    filter: CollisionFilter(group: collisionGroup, mask: .all)))
                // The plane needs to be a static physics body so that objects come to rest on it.
                let physicsMaterial = PhysicsMaterialResource.generate()
                let physics = PhysicsBodyComponent(shapes: [shape], mass: 0.0, material: physicsMaterial, mode: .static)
                collisionEntities[anchor.id]?.components.set(physics)
            }
            collisionEntities[anchor.id]?.components.set(InputTargetComponent())
        }
    }
}
```
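One possible workaround, sketched below under stated assumptions: since the wall meshes carry no stable identifiers, treat them as disposable and rebuild all wall entities for a room whenever its RoomAnchor updates, rather than trying to diff them. `wallRoots` and `rebuildWalls` are hypothetical names, not part of any API.

```swift
// A minimal sketch, not the poster's code: parent each room's walls under one
// root entity so the whole generation can be replaced atomically on update.
var wallRoots: [UUID: Entity] = [:]

func rebuildWalls(for anchor: RoomAnchor) {
    // Drop the previous generation of wall entities for this room.
    wallRoots[anchor.id]?.removeFromParent()

    let root = Entity()
    root.transform = Transform(matrix: anchor.originFromAnchorTransform)
    for mesh in anchor.geometries(of: .wall) {
        let wall = Entity()
        // ... build the ModelComponent / CollisionComponent from `mesh` as above ...
        root.addChild(wall)
    }
    wallRoots[anchor.id] = root
    realityViewContent?.addEntity(root)
}
```

This trades some churn on every update for correctness, and avoids the duplicate-wall buildup.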
1 reply · 0 boosts · 355 views · Jun ’24
How to ship SDK RealityKit entity components that can be used and applied within a customer's application?
We deliver an SDK that enables rich spatial computing experiences. We want to enable our customers to develop apps using Swift or RealityComposer Pro. Composer Pro allows the creation of custom components via the add-components button in the inspector panel; these source files are dropped into the RealityComposer Pro package directory. We would like our customers to be able to import our SDK components into their application's RealityComposer Pro package, and have our components be visible so that customers can apply them to their own scene compositions. How can we achieve this? We believe this would lead to a rich ecosystem of component extensions for RealityComposer Pro.
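For context, a custom RealityKit component is just a Codable Swift type, so the sketch below shows the shape of what an SDK might export; all names here are illustrative, and whether RealityComposer Pro's inspector will list components coming from a dependency package is exactly the open question.

```swift
import RealityKit

// A hypothetical SDK-provided component. Conforming to Component and Codable
// is what allows a scene file to serialize and restore it.
public struct SpatialAudioZoneComponent: Component, Codable {
    public var radius: Float = 1.0
    public init() {}
}

public enum MySDK {
    // The SDK would ask host apps to call this once at launch so RealityKit
    // can decode the component when loading scenes that reference it.
    public static func registerComponents() {
        SpatialAudioZoneComponent.registerComponent()
    }
}
```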
4 replies · 0 boosts · 357 views · Jun ’24
RealityKit on iOS: New anchor entity takes ages to show up
I'm implementing an AR app with Image Tracking capabilities. I noticed that it takes very long for the entities I want to overlay on a detected image to show up in the video feed. When debugging using debugOptions.insert(.showAnchorOrigins), I realized that the image is actually detected very quickly: the anchor origins show up almost immediately, and I can see that my code reacts by adding new anchors for my ModelEntities there. However, it takes ages for these ModelEntities to actually show up; only if I move the camera a lot do they appear after a while. What might be the reason for this behaviour? I also noticed that for the first image target, a huge number of anchors are created; they start from the image and go all the way up towards the user. This does not happen with subsequent (other) image targets.
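For comparison, a minimal sketch of the declarative AnchorEntity route (the resource group and image name are placeholders): anchoring the overlay with AnchorEntity(.image(...)) lets RealityKit manage the entity's visibility itself as tracking starts and stops, instead of adding anchors manually from session callbacks.

```swift
import ARKit
import RealityKit

// Sketch only: "AR Resources" / "poster" are placeholder asset names.
func addOverlay(to arView: ARView) {
    let imageAnchor = AnchorEntity(.image(group: "AR Resources", name: "poster"))
    let marker = ModelEntity(mesh: .generatePlane(width: 0.1, depth: 0.1),
                             materials: [UnlitMaterial(color: .green)])
    imageAnchor.addChild(marker)
    arView.scene.addAnchor(imageAnchor)
}
```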
1 reply · 0 boosts · 229 views · Jun ’24
State-of-the-Art 3D (no AR) on macOS using RealityKit?
What is the current recommendation for creating high-quality 3D content? The context is a hobbyist, specialised CAD app for macOS (with an iPadOS companion) that is mostly 2D but also offers a 3D visualization option (currently OpenGL). Somewhere down the line there might be an AR view, but at the moment, certainly for macOS, it's purely generated 3D visualization, all rendered content. So, starting a rewrite of the 3D visualization in 2024 targeting macOS Sequoia/iPadOS 18, is RealityKit the suggested way forward? Cheers, Jay
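For what it's worth, RealityKit does run without any AR session on macOS; below is a minimal sketch of a purely rendered scene with a user-supplied camera (everything here is illustrative, not a recommendation):

```swift
import AppKit
import RealityKit

// Non-AR RealityKit on macOS: ARView works without an AR session there,
// and you provide your own virtual camera.
let arView = ARView(frame: NSRect(x: 0, y: 0, width: 800, height: 600))

let root = AnchorEntity(world: .zero)
root.addChild(ModelEntity(mesh: .generateBox(size: 0.2),
                          materials: [SimpleMaterial(color: .systemBlue, isMetallic: true)]))

// Virtual camera looking at the box from one meter away.
let camera = PerspectiveCamera()
camera.look(at: .zero, from: [0, 0.3, 1], relativeTo: nil)
root.addChild(camera)

arView.scene.addAnchor(root)
```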
4 replies · 0 boosts · 494 views · Jun ’24
Tap gesture collisions not fully detecting for distant large entities
I am working on an app where we are attempting to place large entities quite far away from the user; when trying to recognise a tap gesture on them, though, the gesture isn't being picked up for part of the model. It seems as though the larger and further away a model is placed, the more offset the collision shape seems to be: it responds to taps in a region that shrinks towards the bottom right. The actual size of the collision shape appears to be correct when viewed with the collision shape debug visualisation. I've been able to replicate this behaviour in the simulator and on a physical device. It's hard to explain in words; there's a video in the README for the repo here. I've been able to replicate the issue in a simple sample app. Not sure if I might be using it wrong or if it is expected behaviour for tap gestures to be a bit off when placed a large distance from the user. Appreciate any help, thanks.

```swift
struct ImmersiveView: View {
    @State private var tapCount = 0

    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(mesh: .generateSphere(radius: 50),
                                     materials: [UnlitMaterial(color: .red)])
            sphere.setPosition([500, 0, 0], relativeTo: nil)
            sphere.components.set([
                InputTargetComponent(),
                CollisionComponent(shapes: [.generateBox(width: 250, height: 250, depth: 250)]),
            ])
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    tapCount += 1
                    print(tapCount)
                }
        )
    }
}
```
2 replies · 0 boosts · 492 views · Jun ’24
Entering full immersion without grab bar and close icon
I'd like to enter a fully immersive scene without the grab bar and close icon. The full-immersion app template that comes with Xcode doesn't exit the immersion state when the "x" is hit, but the grab bar etc. all disappears; if only I could do that programmatically! I've tried conditionally removing the View that launches the ImmersiveSpace, but the WindowGroup seems to be the thing that puts out the UI I'm trying to hide...

```swift
WindowGroup {
    if gameState.immersiveSpaceOpened {
        ContentView()
            .environmentObject(gameState)
    }
}
```
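One approach worth sketching (assumptions: a window id of "MainWindow" and a GameState like the poster's; both are illustrative): dismiss the launching window entirely once the immersive space is open, rather than emptying its content, so its chrome goes away with it.

```swift
import SwiftUI

final class GameState: ObservableObject {
    @Published var immersiveSpaceOpened = false
}

struct LaunchView: View {
    @EnvironmentObject var gameState: GameState
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Enter") {
            Task {
                _ = await openImmersiveSpace(id: "ImmersiveSpace")
                gameState.immersiveSpaceOpened = true
                // Dismissing the window removes its grab bar and close
                // button along with it; "MainWindow" is a placeholder id.
                dismissWindow(id: "MainWindow")
            }
        }
    }
}
```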
1 reply · 0 boosts · 283 views · Jun ’24
Does anyone know if HDR video is supported in a RealityView?
I have attempted to use VideoMaterial with an HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial. I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float. I don't believe it's displaying HDR properly or behaving like a raw AVPlayer. Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough? Is expecting the 64RGBAHalf capture to handle HDR properly a mistake, and should I capture YUV and do the conversion myself? Thank you
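For reference, a minimal sketch of the plain VideoMaterial path the poster mentions trying first (the stream URL is a placeholder); whether this path preserves HDR at all in a RealityView is exactly what's in question.

```swift
import AVFoundation
import RealityKit

// Sketch only: plays an HLS stream on a sphere via VideoMaterial.
func makeVideoSphere() -> ModelEntity {
    let player = AVPlayer(url: URL(string: "https://example.com/stream.m3u8")!)
    let entity = ModelEntity(mesh: .generateSphere(radius: 1),
                             materials: [VideoMaterial(avPlayer: player)])
    player.play()
    return entity
}
```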
1 reply · 0 boosts · 463 views · Jun ’24
RealityView and Anchors
Hello, in the documentation for an ARView we see a diagram that shows that all entities are connected via AnchorEntities to the Scene: https://developer.apple.com/documentation/realitykit/arview What happens when we are using a RealityView? Here the documentation suggests we directly add entities: https://developer.apple.com/documentation/realitykit/realityview/ Three questions:
1. Do we need to add entities to an AnchorEntity first and add that via content.add(...)?
2. Is an Entity ignored by the physics engine if attached via an anchor?
3. If both an AnchorEntity and an attached Entity are added via content.add(...), is the anchor's position ignored?
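Both patterns are possible in a RealityView; a short sketch of the two side by side (purely illustrative, and not an answer to the physics questions):

```swift
import SwiftUI
import RealityKit

struct AnchoringExample: View {
    var body: some View {
        RealityView { content in
            // 1. Direct add: the entity is positioned in the scene's world space.
            let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            box.position = [0, 1, -1]
            content.add(box)

            // 2. Anchored add: the AnchorEntity tracks a real-world surface
            //    and its children inherit that transform.
            let tableAnchor = AnchorEntity(.plane(.horizontal,
                                                  classification: .table,
                                                  minimumBounds: [0.2, 0.2]))
            tableAnchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.05),
                                             materials: [SimpleMaterial(color: .red, isMetallic: false)]))
            content.add(tableAnchor)
        }
    }
}
```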
1 reply · 0 boosts · 451 views · May ’24
GroundingShadowComponent tanks performance even when set to false
Working on a visionOS app, I've noticed that even when castsShadow is false, performance goes down the drain when there are more than a few dozen entities that have GroundingShadowComponent. I managed to hard-crash the Vision Pro with about 200 or so entities that each had two ModelEntities with GroundingShadowComponent attached but set to castsShadow = false. My solution is to add and remove the GroundingShadowComponent from entities as needed, but I thought maybe someone at Apple might want to look into this. I don't expect great performance with that many entities casting shadows, but I'd think turning the shadow off would effectively disable the component and not incur a performance penalty.
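The workaround the poster describes would look roughly like this (a sketch, with a hypothetical helper name): attach the component only while a shadow is actually wanted, and remove it entirely otherwise.

```swift
import RealityKit

// Sketch of the add/remove workaround: removing the component avoids the
// cost of a present-but-disabled GroundingShadowComponent.
func setGroundingShadow(_ enabled: Bool, on entity: Entity) {
    if enabled {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    } else {
        entity.components.remove(GroundingShadowComponent.self)
    }
}
```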
0 replies · 0 boosts · 452 views · May ’24
glassMaterialEffect not working in immersive view with a skybox
I seem to be running into an issue where the .glassBackgroundEffect modifier doesn't render correctly. The issue occurs when it is attached to a view shown in a RealityKit immersive view with a skybox texture. The glass effect is applied, but doesn't let any of the colour of the skybox behind it through. I have created a sample project which is just the immersive space template with the addition of a skybox texture and an attachment with the glassBackgroundEffect modifier. The RealityView itself is:

```swift
struct ImmersiveView: View {
    var body: some View {
        RealityView { content, attachments in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                let attachment = attachments.entity(for: "foo")!
                let leftSphere = immersiveContentEntity.findEntity(named: "Sphere_Left")!
                attachment.position = [0, 0.2, 0]
                leftSphere.addChild(attachment)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                var skyboxMaterial = UnlitMaterial()
                let skyboxTexture = try! await TextureResource(named: "pano")
                skyboxMaterial.color = .init(texture: .init(skyboxTexture))
                let skyboxEntity = Entity()
                skyboxEntity.components.set(ModelComponent(mesh: .generateSphere(radius: 1000), materials: [skyboxMaterial]))
                skyboxEntity.scale *= .init(x: -1, y: 1, z: 1)
                content.add(skyboxEntity)
            }
        } update: { _, _ in
        } attachments: {
            Attachment(id: "foo") {
                Text("Hello")
                    .font(.extraLargeTitle)
                    .padding(48)
                    .glassBackgroundEffect()
            }
        }
    }
}
```

The effect is shown below. I've tried this both in the simulator and on a physical device and get the same behaviour. Not sure if this is an issue with RealityKit or if I'm just holding it wrong; would greatly appreciate any help. Thanks.
1 reply · 0 boosts · 344 views · May ’24
How to get selected usdz model thumbnail image using QuickLookThumbnailing
I am using the code below to get a thumbnail from a USDZ model using QuickLookThumbnailing, but I don't get the proper output.

```swift
func generateThumbnail(for resource: String, withExtension: String, size: CGSize) {
    guard let url = Bundle.main.url(forResource: resource, withExtension: withExtension) else {
        print("Unable to create url for resource.")
        return
    }
    let request = QLThumbnailGenerator.Request(fileAt: url, size: size, scale: 10.0, representationTypes: .all)
    let generator = QLThumbnailGenerator.shared
    generator.generateRepresentations(for: request) { thumbnail, type, error in
        DispatchQueue.main.async {
            if thumbnail == nil || error != nil {
                print(error)
            } else {
                let tempImage = Image(uiImage: thumbnail!.uiImage)
                print(tempImage)
                self.thumbnailImage = Image(uiImage: thumbnail!.uiImage)
                print("=============")
            }
        }
    }
}
```

Below is a screenshot of the selected model. Below that is the thumbnail image, which doesn't show the guitar; I only get the generic USDZ icon.
1 reply · 0 boosts · 354 views · May ’24