RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.



OpacityComponent does not work on device
I've been trying to animate the OpacityComponent to fade entities in my scene in and out. I've tried animating the component with an AnimationResource, and I've also tried animating it with a custom System. Both worked fine in the simulator but failed on device.

AnimationResource: When I animated the opacity of an entity using an animation with an opacity bind target, the entity would not change opacity until I physically looked away from the object. It's almost as if the device keeps an entity visible for as long as you keep looking at it, but once you look away it plays the animation.

System: I created a custom system that manually changes the opacity over time. However, on device the gradual fade doesn't work; the entity literally pops in and out of view instead of fading.

Can someone explain exactly how this component is supposed to be used? The simulator plays the animations exactly the way I would expect, but on device it's completely different.

Edit: I'm trying to change the opacity of entities with a VideoMaterial added to a ModelComponent. The fade animations are performed at certain points in the video that are triggered by an AVPlayer time boundary observer.
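
For reference, a minimal sketch of the bind-target approach described above (the function and entity are placeholders, not the poster's code; untested on device):

    import RealityKit

    // Sketch: fade an entity out via the opacity bind target.
    func fadeOut(_ entity: Entity) throws {
        // Ensure the component exists so the bind target has something to drive.
        entity.components.set(OpacityComponent(opacity: 1.0))
        let fade = FromToByAnimation<Float>(
            from: 1.0,
            to: 0.0,
            duration: 1.0,
            timing: .easeInOut,
            bindTarget: .opacity
        )
        let resource = try AnimationResource.generate(with: fade)
        entity.playAnimation(resource)
    }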
1 reply · 0 boosts · 339 views · Mar ’24
Photogrammetry CLI Tool Error
Hi all, I took a bunch of photos using Apple's 'Capture Sample' iOS app. Even though all of the images are in .HEIC/HEIF format, the CLI tool logs a bunch of the following errors, and I couldn't find any solution:

1) HEIF file is expected.
2) *** Assertion failure in OCReturn OCNonModularSPI_CMPhoto_readResolution(const OCHeicReadHandle, const NSURL *__strong, uint64_t *, uint64_t *)(), CMPhoto+NonModularSPI.m:1271
1 reply · 1 boost · 384 views · Mar ’24
PhysX cache crash using generated static collision with many entities
I'm using RealityKit for a scene with many static and dynamic ModelEntities simulating physics. When all the entities have simple collision generated from .generateCollisionShapes I don't see any issues, but for some entities I need much more complex and accurate collision. For this I've been using ShapeResource.generateStaticMesh with the mesh's data (2769 positions, 16272 face indices in this case), which works exactly as desired with a low entity count.

However, once there are 600+ dynamic entities, introducing even one static entity with complex collision will reliably trigger a crash when it collides with one of the dynamic entities (not necessarily on first contact, but inevitably after multiple collisions). If I arbitrarily limit the number of entities to a max of around 500, that seems to prevent the issue, though the likelihood seems to increase with the number of entities, so there may be a low probability of it triggering even at 500 entities that I haven't hit while testing.

If PhysX imposes some kind of entity or collision face/shape limit, I'd at least like to know exactly what it is, but ideally there's a way to work around this. Right now my "fix" is just arbitrarily restricting the entity count in a way that limits what my app can do.

The crash triggers inside 0x00000001a6790dfc in physx::PxcDiscreteNarrowPhasePCM(physx::PxcNpThreadContext&, physx::PxcNpWorkUnit const&, physx::Gu::Cache&, physx::PxsContactManagerOutput&) (), which looks like this (the crashing line has an -> arrow at the bottom):

    CoreRE`physx::PxcDiscreteNarrowPhasePCM:
    ...
    0x1a6790df0 <+668>: mov x1, x24
    0x1a6790df4 <+672>: bl 0x1a67913d8 ; physx::PxcNpCacheStreamPair::reserve(unsigned int)
    0x1a6790df8 <+676>: ldrb w8, [x23]
    -> 0x1a6790dfc <+680>: str w8, [x0, #0x20]
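
For context, a minimal sketch of the complex-collision path described above (names like staticEntity, positions, and faceIndices are placeholders for the poster's data, not their actual code):

    // Sketch: build an exact static collider from raw mesh data.
    let shape = try await ShapeResource.generateStaticMesh(
        positions: positions,       // [SIMD3<Float>] vertex positions
        faceIndices: faceIndices    // [UInt16] triangle indices, three per face
    )
    staticEntity.components.set(CollisionComponent(shapes: [shape]))
    staticEntity.components.set(PhysicsBodyComponent(mode: .static))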
1 reply · 0 boosts · 211 views · Mar ’24
Adding ModelComponent to Reality Composer Pro's "Primitive Shape" entity
Is there a way to give a "Primitive Shape" entity created through Reality Composer Pro a ModelComponent? I have a custom ShaderGraphMaterial assigned to a primitive shape in my RC Pro scene hierarchy, and I'd like to tweak the inputs of this material programmatically. I found a great example of the behavior I'm looking for here: https://developer.apple.com/videos/play/wwdc2023/10273/?time=1862

    @State private var sliderValue: Float = 0.0

    Slider(value: $sliderValue, in: (0.0)...(1.0))
        .onChange(of: sliderValue) { _, _ in
            guard let terrain = rootEntity.findEntity(named: "DioramaTerrain"),
                  var modelComponent = terrain.components[ModelComponent.self],
                  var shaderGraphMaterial = modelComponent.materials.first as? ShaderGraphMaterial
            else { return }
            do {
                try shaderGraphMaterial.setParameter(name: "Progress", value: .float(sliderValue))
                modelComponent.materials = [shaderGraphMaterial]
                terrain.components.set(modelComponent)
            } catch {}
        }

However, when I try applying this example to my use case, my project's equivalent of this line fails to execute:

    var modelComponent = terrain.components[ModelComponent.self]

The only difference I can see between my case and this example is that my entity is a primitive shape, whereas the example uses a model reference to a .usdz file. Is there some way to update a primitive shape entity to contain this ModelComponent in its set of components so I can reference and update its materials programmatically?
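
One thing worth sketching: if the primitive shape really has no ModelComponent at runtime, you could try attaching one yourself and re-applying the material. An unverified sketch; the entity name, material path, and "Scene.usda" are all assumptions:

    import RealityKit
    import RealityKitContent   // assumed RC Pro package; provides realityKitContentBundle

    // Unverified sketch: attach an explicit ModelComponent to a primitive-shape
    // entity so its materials can be read back and updated later.
    func attachModel(to rootEntity: Entity) async throws {
        guard let shapeEntity = rootEntity.findEntity(named: "MyPrimitiveShape"),  // hypothetical name
              shapeEntity.components[ModelComponent.self] == nil else { return }

        // Re-load the ShaderGraphMaterial the shape uses in RC Pro.
        let material = try await ShaderGraphMaterial(
            named: "/Root/MyMaterial",
            from: "Scene.usda",
            in: realityKitContentBundle
        )

        // The generated mesh must approximate the RC Pro primitive (a box here).
        shapeEntity.components.set(
            ModelComponent(mesh: .generateBox(size: 0.2), materials: [material])
        )
    }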
1 reply · 0 boosts · 360 views · Feb ’24
Non-convex (torus) collision shapes for visionOS/RealityKit
I have tried entity.generateCollisionShapes (generating simple box-shaped collision) and ShapeResource.generateConvex(from: entity) (generating convex-shaped collision, as the name suggests). Unfortunately, neither suits my case, where I have a torus entity inside which no collision should happen with other entities; namely, smaller entities should be able to "fall through" the torus, thus the title of this post. I was wondering if there's any solution that I overlooked. Thanks 🙏
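
A common workaround (an assumption on my part, not something the poster tried) is to approximate the torus with a ring of convex segments so the center hole stays collision-free. A rough sketch, assuming a torus lying in the XZ plane:

    import Foundation
    import RealityKit

    // Sketch: approximate a torus collider with a ring of convex box segments.
    func torusCollision(ringRadius: Float, pipeRadius: Float, segments: Int = 16) -> CollisionComponent {
        var shapes: [ShapeResource] = []
        for i in 0..<segments {
            let angle = Float(i) / Float(segments) * 2 * Float.pi
            let segmentLength = 2 * Float.pi * ringRadius / Float(segments)
            let box = ShapeResource
                .generateBox(size: [segmentLength, pipeRadius * 2, pipeRadius * 2])
                .offsetBy(
                    // Align each box along the ring's tangent at this angle.
                    rotation: simd_quatf(angle: -(angle + Float.pi / 2), axis: [0, 1, 0]),
                    translation: [cos(angle) * ringRadius, 0, sin(angle) * ringRadius]
                )
            shapes.append(box)
        }
        return CollisionComponent(shapes: shapes)
    }

Usage would be something like torusEntity.components.set(torusCollision(ringRadius: 0.3, pipeRadius: 0.05)); more segments trade performance for a tighter fit around the hole.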
2 replies · 0 boosts · 353 views · Feb ’24
visionOS Torus Collision Shape
Hi, I have a .usdz asset of a torus / hoop shape that I would like to pass another RealityKit entity (a cube-like object) through in visionOS, without touching the torus, similar to how a basketball goes through a hoop.

Whenever I pass the cube through, I am getting a collision notification, even if the objects are not actually colliding. I want to be able to detect when the objects are actually colliding, versus when the cube passes cleanly through the opening in the torus. I am using entity.generateCollisionShapes(recursive: true) to generate the collision shapes. I believe the issue lies in the fact that the collision shape of the torus is a rectangular box, not the actual shape of the torus; I know this because I can see the collision shape in the visionOS simulator by enabling "Collision Shapes".

Does anyone know how to programmatically create a torus collision shape in SwiftUI / RealityKit for visionOS? As a follow-up, can I create a torus in RealityKit, so I don't even have to use a .usdz asset?
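
On the follow-up question: RealityKit has no built-in torus primitive, but one can be generated with MeshDescriptor. A sketch (winding order and normals may need adjustment; untested):

    import Foundation
    import RealityKit

    // Sketch: procedurally generate a torus MeshResource.
    func torusMesh(ringRadius: Float, pipeRadius: Float,
                   ringSegments: Int = 48, pipeSegments: Int = 24) throws -> MeshResource {
        var positions: [SIMD3<Float>] = []
        var indices: [UInt32] = []

        // One ring of vertices around the pipe, swept around the main ring.
        for i in 0..<ringSegments {
            let u = Float(i) / Float(ringSegments) * 2 * Float.pi
            for j in 0..<pipeSegments {
                let v = Float(j) / Float(pipeSegments) * 2 * Float.pi
                positions.append([
                    (ringRadius + pipeRadius * cos(v)) * cos(u),
                    pipeRadius * sin(v),
                    (ringRadius + pipeRadius * cos(v)) * sin(u)
                ])
            }
        }

        // Two triangles per quad between neighboring rings.
        for i in 0..<ringSegments {
            for j in 0..<pipeSegments {
                let a = UInt32(i * pipeSegments + j)
                let b = UInt32(((i + 1) % ringSegments) * pipeSegments + j)
                let c = UInt32(((i + 1) % ringSegments) * pipeSegments + (j + 1) % pipeSegments)
                let d = UInt32(i * pipeSegments + (j + 1) % pipeSegments)
                indices += [a, b, c, a, c, d]
            }
        }

        var descriptor = MeshDescriptor(name: "torus")
        descriptor.positions = MeshBuffers.Positions(positions)
        descriptor.primitives = .triangles(indices)
        return try MeshResource.generate(from: [descriptor])
    }

Note this only affects rendering; for accurate collision you'd still need something like ShapeResource.generateStaticMesh from the same positions (with UInt16 face indices), or a segmented-convex approximation as discussed in the previous post.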
3 replies · 1 boost · 740 views · Dec ’23
Adding custom material to sceneReconstruction mesh
I wanted to add a custom material over the mesh detected by the sceneReconstruction provider, but I can't find a way to convert the MeshAnchor to a usable MeshResource:

    func processReconstructionUpdates() async {
        for await update in sceneReconstruction.anchorUpdates {
            let meshAnchor = update.anchor
            guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor)
            else { continue }
            switch update.event {
            case .added:
                let entity = ModelEntity(
                    mesh: /* somehow get the mesh from the mesh anchor here */,
                    materials: [material]
                )
                contentEntity.addChild(entity)
            case .updated:
                ...
            case .removed:
                ...
            @unknown default:
                fatalError("Unsupported anchor event")
            }
        }
    }
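
One possible way to fill that gap, sketched from the MeshAnchor.Geometry API (untested; assumes packed float3 vertex positions and 32-bit triangle indices):

    import ARKit
    import RealityKit

    extension MeshResource {
        // Sketch: rebuild a MeshResource from a MeshAnchor's Metal buffers.
        static func make(from meshAnchor: MeshAnchor) throws -> MeshResource {
            let geometry = meshAnchor.geometry

            // Copy vertex positions out of the vertex buffer.
            var positions: [SIMD3<Float>] = []
            positions.reserveCapacity(geometry.vertices.count)
            let vertexBase = geometry.vertices.buffer.contents()
                .advanced(by: geometry.vertices.offset)
            for i in 0..<geometry.vertices.count {
                let p = vertexBase.advanced(by: i * geometry.vertices.stride)
                    .assumingMemoryBound(to: Float.self)
                positions.append([p[0], p[1], p[2]])
            }

            // Copy triangle indices (three per face).
            let indexCount = geometry.faces.count * 3
            let indexBase = geometry.faces.buffer.contents()
                .assumingMemoryBound(to: UInt32.self)
            let indices = (0..<indexCount).map { indexBase[$0] }

            var descriptor = MeshDescriptor(name: "sceneReconstruction")
            descriptor.positions = MeshBuffers.Positions(positions)
            descriptor.primitives = .triangles(indices)
            return try MeshResource.generate(from: [descriptor])
        }
    }

The resulting entity would still need its transform set from meshAnchor.originFromAnchorTransform so the mesh lands where the room geometry actually is.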
2 replies · 0 boosts · 370 views · Feb ’24
How can I simultaneously apply the drag gesture to multiple entities?
I wanted to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but it only recognizes the latest drag gesture:

    RealityView { content, attachments in
        ...
    }
    .gesture(
        DragGesture()
            .targetedToEntity(EntityA)
            .onChanged { value in ... }
    )
    .gesture(
        DragGesture()
            .targetedToEntity(EntityB)
            .onChanged { value in ... }
    )

I also tried using .simultaneously(with:), but that didn't work either; maybe I'm missing something:

    .gesture(
        DragGesture()
            .targetedToEntity(EntityA)
            .onChanged { value in ... }
            .simultaneously(with: DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in ... }
            )
    )
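
One pattern that may help (an assumption, not a confirmed fix): attach a single gesture with .targetedToAnyEntity() and branch on the entity that was hit, so one recognizer serves both entities:

    // Sketch: one drag recognizer that routes by the entity under the drag.
    RealityView { content, attachments in
        // ...
    }
    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                if value.entity == EntityA {
                    // drag handling for EntityA
                } else if value.entity == EntityB {
                    // drag handling for EntityB
                }
            }
    )

Whether two hands can drive two independent drags at once on visionOS is something to verify separately.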
0 replies · 1 boost · 280 views · Feb ’24
How to use drag gestures on objects with inverted normals?
I want to build a panorama sphere around the user. The idea is that users can interact with this panorama, i.e. pan it around and select markers placed on it, like on a map. So I set up a sphere that works like a skybox and inverted its normals, which makes the material inward-facing, using this code I found online:

    import Combine
    import Foundation
    import RealityKit
    import SwiftUI

    extension Entity {
        func addSkybox(for skybox: Skybox) {
            let subscription = TextureResource
                .loadAsync(named: skybox.imageName)
                .sink(receiveCompletion: { completion in
                    switch completion {
                    case .finished:
                        break
                    case let .failure(error):
                        assertionFailure("\(error)")
                    }
                }, receiveValue: { [weak self] texture in
                    guard let self = self else { return }
                    var material = UnlitMaterial()
                    material.color = .init(texture: .init(texture))
                    let sphere = ModelComponent(mesh: .generateSphere(radius: 5), materials: [material])
                    self.components.set(sphere)
                    /// flip sphere inside out so the texture is inside
                    self.scale *= .init(x: -1, y: 1, z: 1)
                    self.transform.translation += SIMD3(0.0, 1.0, 0.0)
                })
            components.set(Entity.SubscriptionComponent(subscription: subscription))
        }

        struct SubscriptionComponent: Component {
            var subscription: AnyCancellable
        }
    }

This works fine and looks awesome. However, I can't get a gesture to work on it. If the sphere is "normally" oriented, i.e. the user drags it "from the outside", I can do it like this:

    import RealityKit
    import SwiftUI

    struct ImmersiveMap: View {
        @State private var rotationAngle: Float = 0.0

        var body: some View {
            RealityView { content in
                let rootEntity = Entity()
                rootEntity.addSkybox(for: .worldmap)
                rootEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
                rootEntity.generateCollisionShapes(recursive: true)
                rootEntity.components.set(InputTargetComponent())
                content.add(rootEntity)
            }
            .gesture(DragGesture().targetedToAnyEntity().onChanged({ _ in
                log("drag gesture")
            }))
        }
    }

But if the user drags it from the inside (i.e. with the negative x scale in place), I get no drag events. Is there a way to achieve this?
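
One workaround worth trying (an untested assumption, not a confirmed fix): keep the mirrored sphere purely visual, and host the collision and input components on a sibling entity that is not negatively scaled, so hit-testing sees an ordinary sphere:

    // Sketch: separate the mirrored visual from the hit-test target.
    RealityView { content in
        let visual = Entity()
        visual.addSkybox(for: .worldmap)   // mirrored via the negative x scale

        let hitTarget = Entity()
        hitTarget.components.set(CollisionComponent(shapes: [.generateSphere(radius: 5)]))
        hitTarget.components.set(InputTargetComponent())
        hitTarget.position.y = 1.0         // match the skybox's translation

        let root = Entity()
        root.addChild(visual)
        root.addChild(hitTarget)
        content.add(root)
    }
    .gesture(DragGesture().targetedToAnyEntity().onChanged { _ in
        // handle panning here
    })

Whether RealityKit registers input rays that originate inside the collision sphere is the open question here.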
1 reply · 0 boosts · 295 views · Feb ’24
How to use the video playing in an AVPlayerViewController as a light source
I want to implement an immersive environment similar to Apple TV's Cinema environment for the video that plays in my app. Currently, I want to use an AVPlayerViewController so that I don't have to build a control view or deal with aspect ratios (which I would have to do if I used VideoMaterial). To do this, it looks like I'll need to use the imagery from the video stream itself as an image for an ImageBasedLightComponent, but the API for that class seems to restrict its input to an EnvironmentResource, which looks like it's meant to use an equirectangular still image that has to be part of the app bundle.

Does anyone know how to achieve this effect, where the "light" from the video being played in an AVPlayerViewController's player can be cast on 3D objects in the RealityKit scene? Is Apple TV doing something wild like combining an AVPlayerViewController and VideoMaterial, where the VideoMaterial is layered onto the objects in the scene to simulate a light source? Thanks in advance!
0 replies · 0 boosts · 456 views · Feb ’24
PhotogrammetrySession chops off feet
We have a custom photo booth for taking photos of people for use with photogrammetry: the usual vertical cylinder of cameras with the human subject stood in the middle. We've found that often the lower legs of the subject are missing; this is particularly likely if the subject is wearing dark pants. The API for PhotogrammetrySession is really very limited, but we've tried all the combinations of detail, sensitivity, and object masking we can think of, and nothing results in a reliable scan.

Personally I think this is related to the automatic isolation of the subject, rather than the photogrammetry itself. Often we get just the person, perfectly modelled. Occasionally we get everything the cameras can see, including the booth itself and the room it's in! But sometimes we get this footless result. Is there anything we can try to improve the situation?
0 replies · 0 boosts · 184 views · Feb ’24
Is it possible to have a custom light blend with environment light in RealityKit?
I was able to add a spotlight effect to my entities using ImageBasedLightComponent and the sample code. However, I noticed that whenever you set ImageBasedLightComponent, the environmental lighting is completely turned off. Is it possible to merge them somehow? Imagine you have a toy in the real world and you shine a flashlight on it; the environment light should still have an effect, right?
0 replies · 0 boosts · 302 views · Feb ’24
Occlusion material and progressive ImmersiveSpace
In a progressive ImmersiveSpace, I created an object (a cylinder) and applied an OcclusionMaterial to it. It does hide my virtual content behind it, but it does not show the content of my room; the cylinder just appears black. In a progressive (or full?) ImmersiveSpace, is it possible to apply an occlusion material (or something else) so I can see the room behind the virtual content? Basically, I want to punch a hole through the virtual content and see the room behind it. As a practical example, imagine being in a progressive ImmersiveSpace, but you have a plane with an occlusion mesh applied to it above your Apple Magic Keyboard so you can see your keyboard. Is this possible?
0 replies · 0 boosts · 320 views · Feb ’24
Portals and ImmersiveSpace?
I've added a simple visionOS portal to an app's initial WindowGroup (a window with an attached portal is all that is displayed), but I've had trouble adding a portal to an ImmersiveSpace. For example, using the boilerplate code that Xcode creates for a mixed spatial experience, I'd like to turn the ImmersiveSpace, which has a portal in it, on and off. So far, the portal isn't showing up. Is it possible to add a portal to an ImmersiveSpace? Are there any restrictions on where portals can be added?
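
For reference, a minimal sketch of the standard portal setup inside a RealityView; whether this behaves differently inside an ImmersiveSpace is exactly the open question:

    import RealityKit
    import SwiftUI

    // Sketch: a world entity viewed through a portal plane.
    struct PortalTestView: View {
        var body: some View {
            RealityView { content in
                // The hidden world the portal reveals.
                let world = Entity()
                world.components.set(WorldComponent())
                // ... add skybox, models, and lights to `world` here ...

                // The portal surface that looks into `world`.
                let portal = Entity()
                portal.components.set(ModelComponent(
                    mesh: .generatePlane(width: 1, height: 1),
                    materials: [PortalMaterial()]
                ))
                portal.components.set(PortalComponent(target: world))

                content.add(world)
                content.add(portal)
            }
        }
    }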
1 reply · 0 boosts · 322 views · Feb ’24
visionOS VideoMaterial on 3D Mesh
I'm trying to get a VideoMaterial to work on an imported 3D asset, a USDC file. There's actually an example of this in a WWDC video from Apple; you can see it running on the flag of the airplane at 10:34 in that session. But there's no sample code for it, and I can't find any other examples on the internet. Does anybody know how to do this? https://developer.apple.com/documentation/realitykit/videomaterial
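
A sketch of the usual approach (usdcEntity, videoURL, and the submodel name "Flag" are hypothetical; the real name depends on the USDC hierarchy):

    import AVFoundation
    import RealityKit

    // Sketch: swap an imported submodel's material for a VideoMaterial.
    let player = AVPlayer(url: videoURL)            // hypothetical URL
    let videoMaterial = VideoMaterial(avPlayer: player)

    if let flag = usdcEntity.findEntity(named: "Flag"),   // hypothetical name
       var model = flag.components[ModelComponent.self] {
        model.materials = [videoMaterial]   // replaces every material slot on the model
        flag.components.set(model)
    }
    player.play()

The video maps onto whatever UV coordinates the mesh already has, so the asset needs sensible UVs for the footage to land where you expect.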
1 reply · 0 boosts · 259 views · Feb ’24