RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


Posts under RealityKit subtopic

Post · Replies · Boosts · Views · Activity

RealityKit SIMD3<Float> precision decreases with distance?
Does positioning become less accurate the farther away the center of a large entity is? For example, I am changing only the y-axis position of an entity that is tens of meters long, but I notice x and z drifting slowly the farther away the center of the entity is. I would not expect x and z to move. It might be compounding rounding errors somewhere, or maybe the RealityKit engine decides not to be very precise about distant objects? Otherwise I just have a bug somewhere.
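For context on the rounding-error theory, here is a minimal sketch (plain Swift, nothing RealityKit-specific assumed) of how the spacing between representable Float values grows with magnitude, which matches the "drift grows with distance" symptom:

```swift
// Single-precision floats have a fixed number of significand bits, so the
// spacing between adjacent representable values (the "ulp") grows with
// magnitude. Any transform math done in Float can round to this grid.
for distance: Float in [1, 10, 100, 1_000, 10_000] {
    print("ulp at \(distance) m: \(distance.ulp) m")
}
// ulp at 1.0 m     ≈ 1.2e-7 m
// ulp at 100.0 m   ≈ 7.6e-6 m
// ulp at 10000.0 m ≈ 9.8e-4 m (about a millimeter)
```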
5 replies · 0 boosts · 538 views · Mar ’25
visionOS: getting real geometry reflections in RealityKit?
As in the system Environments, where we have real-time reflections of movies on a screen, or reflections of the surrounding Mt. Hood scenery in the background: could I get a metallic surface showing accurate reflections of a box sitting on top of it? I don't mean a reflection probe or an HDR cubemap; I mean the same accurate reflections as the Mt. Hood water reflecting the movie I'm watching in another app.
1 reply · 0 boosts · 89 views · Mar ’25
Reality Composer Pro Transparent Textures
Hey everyone, I am currently developing an app for visionOS and using Reality Composer Pro to create scenes to put in my app. I have a humanoid model with hair strands, and each strand of hair has an opacity map. However, some reflections are still visible even though the opacity is zero. There is also some odd culling among hair strands (in the left circle) and odd reflections on the hair cards (in the right circle). Here are my settings for the materials. Since all the hair strands are interconnected with each other, it is hard to decide the drawing order in Xcode, so I am wondering if there's an easier way to handle transparent objects. Please let me know if you know anything helpful, much appreciated!
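One commonly suggested workaround for blended-transparency sort artifacts (a sketch, not a confirmed fix for this asset): switch the hair materials from alpha blending to alpha masking via opacityThreshold, which makes each fragment either fully kept or discarded, so draw order between strands stops mattering, at the cost of hard edges:

```swift
import RealityKit

// Sketch: convert blended materials on a model to alpha masking.
// Assumes the materials are PhysicallyBasedMaterial; tune the 0.5
// cutoff against your opacity maps.
func applyAlphaMask(to entity: ModelEntity) {
    guard var model = entity.model else { return }
    model.materials = model.materials.map { material -> any Material in
        guard var pbm = material as? PhysicallyBasedMaterial else { return material }
        // Fragments below the threshold are discarded instead of blended,
        // so per-strand draw order no longer affects the result.
        pbm.opacityThreshold = 0.5
        return pbm
    }
    entity.model = model
}
```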
0 replies · 0 boosts · 92 views · Apr ’25
Is Using Metal Compute Shaders for Efficient Resource Copying to RealityKit the Best Approach for Streaming Data in Real-Time Rendering?
Hi Apple, on visionOS, for real-time streaming of large 3D scenes, I plan to create Metal buffers and textures on multiple threads and then use a compute shader on the main thread to copy the Metal resources into RealityKit, minimizing main-thread usage. Given that most of RealityKit's default APIs require execution on the main actor (main thread), they are not ideal for streaming data. Is this approach the best way to handle streaming data and real-time rendering? Thank you very much.
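If it helps, here is a minimal sketch of the threading shape only: encode GPU work off the main actor, then hop to the main actor just for the RealityKit-side hand-off. updateRealityKitResource is a hypothetical placeholder for whatever RealityKit publishing API is used (e.g. a LowLevelMesh/LowLevelTexture update), and the dispatch sizes are arbitrary:

```swift
import Metal

// Hypothetical hand-off point; replace with your actual RealityKit update.
@MainActor func updateRealityKitResource(with buffer: MTLBuffer) { /* ... */ }

// Runs on a background task: record and execute the copy/transform pass,
// then touch RealityKit only on the main actor.
func streamFrame(queue: MTLCommandQueue,
                 pipeline: MTLComputePipelineState,
                 source: MTLBuffer,
                 destination: MTLBuffer) async {
    let commandBuffer = queue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setBuffer(source, offset: 0, index: 0)
    encoder.setBuffer(destination, offset: 0, index: 1)
    encoder.dispatchThreadgroups(MTLSize(width: 64, height: 1, depth: 1),
                                 threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    encoder.endEncoding()
    commandBuffer.commit()
    // Blocking here is fine off the main actor; a completion handler
    // would avoid even that.
    commandBuffer.waitUntilCompleted()

    await MainActor.run {
        updateRealityKitResource(with: destination)
    }
}
```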
0 replies · 0 boosts · 76 views · Apr ’25
How to configure RealityKit entities for animations on a modular character?
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app. The character has customization such as clothing items and hair, and all objects are properly weighted to the rig. The model is set up in Blender like so: groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting entity hierarchy, viewed in Reality Composer Pro: My problem is that when I export with the Armature modifier applied to the objects, so that animations get exported, the ModelComponent gets flattened into the armature, and swapping entities is no longer as simple as removing the entity with the corresponding name. What's the best practice here? Should the animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
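On the "export the animation separately" idea, here is a minimal sketch of the usual shape, with hypothetical file names ("character", "walkAnim") and assuming the animation-only USD shares the same skeleton/joint naming as the base rig, since the clip only retargets cleanly onto a matching armature:

```swift
import RealityKit

// Sketch: load an animation exported in a separate USD and play it on
// the base character's armature, leaving the swappable children intact.
func loadCharacterWithSeparateAnimation() async throws -> Entity {
    let character = try await Entity(named: "character")       // base rig + parts
    let animationSource = try await Entity(named: "walkAnim")  // animation only

    if let clip = animationSource.availableAnimations.first,
       let armature = character.findEntity(named: "Armature") {
        armature.playAnimation(clip.repeat())
    }
    return character
}
```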
1 reply · 0 boosts · 51 views · May ’25
How to add and remove child entities to a rigged entity in RealityKit?
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app. The character has customization such as clothing items and hair, and all objects are properly weighted to the rig. The model is set up in Blender like so: groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting hierarchy: Before exporting for the animation (Armature modifier applied), I simply had to store the Model entities and swap them in, but now when I export with the Armature modifier applied, so that animations get exported, the ModelComponent gets flattened into the armature, and swapping entities and applying new materials to them is no longer as simple. Here's a demo blend file and usdc export with a setup like mine, with an animated bone swinging a cube and a sphere, to be swapped so that only one is visible: https://www.dropbox.com/scl/fo/be2q6qcztc83z7c4gj1w0/AMapxWc_ip2KZ8oTOYDUMv8?rlkey=rcdaggcxq06dyen09mw5mqmem&st=bnc0d7j0&dl=0

This is how I'm loading the entity and removing a part, with the demo files:

```swift
import SwiftUI
import RealityKit

struct SwapDemoView: View {
    var body: some View {
        RealityView { content in
            let camera = PerspectiveCamera()
            camera.transform.translation = SIMD3(x: 0, y: 0.1, z: 3)

            guard let root = try? await Entity(named: "simpleSwapDemo") else {
                fatalError("simpleSwapDemo.usdc is not present")
            }
            print(root) // Get initial hierarchy

            guard let cube = root.findEntity(named: "Cube") else {
                fatalError("Entity cube doesn't exist")
            }
            cube.removeFromParent() // <-- Cube is still visible after removal
            print(root) // Get hierarchy to confirm removal of cube

            let resource = root.availableAnimations[0]
            root.playAnimation(resource.repeat())

            content.add(root)
            content.add(camera)
        }
        .background(.white)
    }
}
```

And this is what the entity hierarchy looks like in RealityKit before cube removal:

```
▿ 'root' : Entity, children: 1
  ⟐ SynchronizationComponent
  ⟐ AnimationLibraryComponent
  ⟐ Transform
  ▿ 'Armature' : ModelEntity, children: 2
    ⟐ SynchronizationComponent
    ⟐ ModelComponent
    ⟐ SkeletalPosesComponent
    ⟐ AnimationLibraryComponent
    ⟐ Transform
    ▿ 'Armature' : Entity
      ⟐ SynchronizationComponent
      ⟐ Transform
    ▿ 'Primitives' : Entity, children: 2
      ⟐ SynchronizationComponent
      ⟐ Transform
      ▿ 'Sphere' : Entity, children: 1
        ⟐ SynchronizationComponent
        ⟐ Transform
        ▿ 'Sphere' : Entity
          ⟐ SynchronizationComponent
          ⟐ Transform
      ▿ 'Cube' : Entity, children: 1
        ⟐ SynchronizationComponent
        ⟐ Transform
        ▿ 'Cube' : Entity
          ⟐ SynchronizationComponent
          ⟐ Transform
```

And here's the hierarchy after removal:

```
▿ 'root' : Entity, children: 1
  ⟐ SynchronizationComponent
  ⟐ AnimationLibraryComponent
  ⟐ Transform
  ▿ 'Armature' : ModelEntity, children: 2
    ⟐ SynchronizationComponent
    ⟐ ModelComponent
    ⟐ SkeletalPosesComponent
    ⟐ AnimationLibraryComponent
    ⟐ Transform
    ▿ 'Armature' : Entity
      ⟐ SynchronizationComponent
      ⟐ Transform
    ▿ 'Primitives' : Entity, children: 1
      ⟐ SynchronizationComponent
      ⟐ Transform
      ▿ 'Sphere' : Entity, children: 1
        ⟐ SynchronizationComponent
        ⟐ Transform
        ▿ 'Sphere' : Entity
          ⟐ SynchronizationComponent
          ⟐ Transform
```

And this is the result: What's the best practice here? Should the animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
1 reply · 0 boosts · 103 views · May ’25
Improving person segmentation and occlusion quality in RealityKit
I’m building an app that uses RealityKit and specifically ARConfiguration.FrameSemantics.personSegmentationWithDepth. The goal is to insert an AR object into the scene behind a person, and an additional AR object in front of the person, while being as photorealistic as possible. Through testing, I’ve noticed that many times the edges of the person segmentation mask are not well matched to the actual person, and parts of the person are transparent, with the AR object bleeding through. It’s sort of like a “bad green screen” effect, which I’d expect to see a little bit, but not to this extent. I’ve been testing on iPhone 16, iPhone 14 Pro, iPad Pro 12.9-inch (6th generation), and iPhone 12 Pro, with similar results across all devices. I’m wondering what else I can do to improve this: either code changes, platform (like different iPhone models), or environment (like lighting, distance, etc.). Attaching some example screen grabs and a minimal reproducible code sample. Appreciate any insights!

```swift
import ARKit
import SwiftUI
import RealityKit

struct RealityViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        arView.renderOptions.insert(.disableMotionBlur)
        arView.renderOptions.insert(.disableDepthOfField)

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(configuration)

        arView.session.delegate = context.coordinator
        context.coordinator.arView = arView
        return arView // was missing in the original snippet
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(self) }

    class Coordinator: NSObject, ARSessionDelegate {
        var parent: RealityViewContainer
        weak var arView: ARView? // was referenced but not declared in the original snippet
        var floorAnchor: ARPlaneAnchor?

        init(_ parent: RealityViewContainer) {
            self.parent = parent
        }

        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            if let arView, floorAnchor == nil {
                for anchor in anchors {
                    if let horizontalPlaneAnchor = anchor as? ARPlaneAnchor,
                       horizontalPlaneAnchor.alignment == .horizontal,
                       horizontalPlaneAnchor.transform.columns.3.y < arView.cameraTransform.translation.y { // filter out ceiling
                        floorAnchor = horizontalPlaneAnchor

                        // BackgroundEntity and ForegroundEntity are the poster's custom types, not shown here.
                        let backgroundEntity = BackgroundEntity()
                        let anchorEntity = AnchorEntity(anchor: horizontalPlaneAnchor)
                        anchorEntity.addChild(backgroundEntity)
                        let foregroundEntity = ForegroundEntity()
                        backgroundEntity.addChild(foregroundEntity)
                        arView.scene.addAnchor(anchorEntity)
                        arView.installGestures([.rotation, .translation], for: backgroundEntity)
                        break // Stop after adding the first horizontal plane (floor)
                    }
                }
            }
        }
    }
}
```
1 reply · 0 boosts · 86 views · May ’25
How to use CharacterControllerComponent.
I am trying to implement a CharacterControllerComponent using the following URL: https://developer.apple.com/documentation/realitykit/charactercontrollercomponent I have written sample code, but PhysicsSimulationEvents.WillSimulate is not executed and nothing happens.

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    let gravity: SIMD3<Float> = [0, -50, 0]
    let jumpSpeed: Float = 10

    enum PlayerInput {
        case none, jump
    }

    @State private var testCharacter: Entity = Entity()
    @State private var myPlayerInput = PlayerInput.none

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
                testCharacter = immersiveContentEntity.findEntity(named: "Capsule")!
                testCharacter.components.set(CharacterControllerComponent())
                let _ = content.subscribe(to: PhysicsSimulationEvents.WillSimulate.self, on: testCharacter) { event in
                    print("subscribe run")
                    let deltaTime: Float = Float(event.deltaTime)
                    var velocity: SIMD3<Float> = .zero
                    var isOnGround: Bool = false
                    // RealityKit automatically adds `CharacterControllerStateComponent` after moving the character for the first time.
                    if let ccState = testCharacter.components[CharacterControllerStateComponent.self] {
                        velocity = ccState.velocity
                        isOnGround = ccState.isOnGround
                    }
                    if !isOnGround {
                        // Gravity is a force, so you need to accumulate it for each frame.
                        velocity += gravity * deltaTime
                    } else if myPlayerInput == .jump {
                        // Set the character's velocity directly to launch it in the air when the player jumps.
                        velocity.y = jumpSpeed
                    }
                    testCharacter.moveCharacter(by: velocity * deltaTime, deltaTime: deltaTime, relativeTo: nil) { event in
                        print("playerEntity collided with \(event.hitEntity.name)")
                    }
                }
            }
        }
    }
}
```

The scene is loaded from RCP. It is simple, just a capsule on a pedestal. Do I need separate code to run testCharacter from this state?
0 replies · 0 boosts · 100 views · May ’25
iOS Simulator can only render 1 RealityView
I'm using RealityView in my iOS game, mixed with SwiftUI. For the following two example usages, the Simulator will only render the first RealityView, and the second one is either super laggy or shows a black model. Running on a real device is all good; only the Simulator has this issue.

1. A TabView where each tab has a RealityView.
2. A root view and a detail view connected via push navigation, where both the root and the detail have a RealityView.

In the Simulator, the second RealityView is very choppy and basically unusable, but on a real iPhone everything looks great. Is this a known Simulator issue, or did I do something wrong?
0 replies · 0 boosts · 96 views · Jun ’25
ProjectiveTransformCameraComponent with custom matrix
I'm looking to create an effect on iOS that tracks the user's face position with ARKit and shifts nearer, more prominent geometry in the scene around, while more "distant" geometry stays fixed to the XY plane, making it look like the geometry on screen "sticks out". I've managed to implement most of this successfully, but it's not perfect when using PerspectiveCameraComponent in RealityKit, because as I shift the camera (and change its field of view based on the user's distance) the backplane changes its orientation (it's always orthogonal to the camera's direction). I've tried adopting ProjectiveTransformCameraComponent instead. The idea is that the camera shifts around the scene, mirroring the user's head's position, looking at (0,0,0), and the back plane is adjusted to be parallel with the XY plane (animation replicated in Blender below). However, I can't manage to set up ProjectiveTransformCameraComponent with an appropriate matrix or update its transform property in a RealityKit System correctly. I also tried setting many simpler projection matrices, as described in a number of guides on camera projection matrices on the internet, and all I get is a blank view. Does anyone have guidance on what the projection matrix that ProjectiveTransformCameraComponent expects is meant to look like, or how I would go about accomplishing my goal?
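For reference, here is a sketch of a standard off-axis (asymmetric-frustum) perspective matrix of the kind head-coupled "window" effects use. The conventions assumed here are right-handed eye space looking down -Z, column-major simd matrices, and clip depth mapped to [0, 1]; RealityKit's exact clip-space conventions (including any reverse-Z) are an assumption to verify, and a mismatch in the depth row is exactly the kind of thing that produces a blank view:

```swift
import simd

// Off-axis perspective projection, glFrustum-style, depth in [0, 1].
func offAxisProjection(left l: Float, right r: Float,
                       bottom b: Float, top t: Float,
                       near n: Float, far f: Float) -> float4x4 {
    float4x4(columns: (
        SIMD4<Float>(2 * n / (r - l), 0, 0, 0),
        SIMD4<Float>(0, 2 * n / (t - b), 0, 0),
        SIMD4<Float>((r + l) / (r - l), (t + b) / (t - b), f / (n - f), -1),
        SIMD4<Float>(0, 0, n * f / (n - f), 0)
    ))
}

// For a screen of half-width w and half-height h at distance ez from a
// head offset (ex, ey), the per-frame frustum bounds at the near plane are:
//   l = (-w - ex) * n / ez,   r = (w - ex) * n / ez
//   b = (-h - ey) * n / ez,   t = (h - ey) * n / ez
```

If ProjectiveTransformCameraComponent accepts this through its projectionMatrix property, as the name suggests, the result should be settable per frame from a System; I would verify the depth-range convention first.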
0 replies · 0 boosts · 109 views · Jun ’25
How to customize shader code for visionOS?
Hello experts, I'm trying to implement a material with custom shader code, but I saw that visionOS doesn't allow you to inject custom Metal functions or use CustomMaterial like iOS/macOS, nor can you directly write Metal Shading Language (.metal) and use it through ShaderGraphMaterial. So my question is: if I want to implement my own shader code, how should I do it?
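For reference, the supported route on visionOS is a shader graph authored in Reality Composer Pro and loaded at runtime via ShaderGraphMaterial; a minimal sketch, where the material path "/Root/MyMaterial", the scene file name, and the parameter "tintStrength" are assumptions that must match what you author in RCP:

```swift
import RealityKit
import RealityKitContent

// Sketch: load an RCP shader-graph material and drive one of its
// exposed inputs from code.
func applyGraphMaterial(to entity: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    try material.setParameter(name: "tintStrength", value: .float(0.8))
    entity.model?.materials = [material]
}
```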
1 reply · 0 boosts · 398 views · Jul ’25
visionOS + Unity PolySpatial: Is 15,970 MeshFilters the True Upper Limit for Industrial Scenes?
Breaking through PolySpatial's ~8k object limit: seeking alternative approaches for large-scale digital twins. Confirmed: PolySpatial's mirroring doubles the MeshFilter count; hard limit at ~8k active objects (15.9k total).

Project context & research goals: I'm developing an industrial digital twin application for Apple Vision Pro using Unity's PolySpatial framework (RealityKit rendering in Unbounded_Volume mode). The scene contains complex factory environments with:

- production line equipment (many fragmented grid objects that need to be merged)
- dynamic product racks (state-switchable assets)
- animated worker avatars

To optimize performance, I'm systematically testing visionOS's rendering capacity limits. Through controlled stress tests, I've identified a critical threshold.

Key finding: when the total MeshFilter count reaches 15,970 (system baseline + 7,985 user-created objects × 2 due to PolySpatial cloning), the application crashes consistently. This suggests that PolySpatial's mirroring mechanism effectively doubles GameObject overhead, and that an apparent hard limit exists around ~8k active mesh objects in practice.

Objectives for this discussion:

- Verify whether others have encountered similar limits with PolySpatial/RealityKit.
- Understand whether this is a memory constraint (per-app allocation), a render pipeline limit (Metal draw calls), or Unity-specific PolySpatial behavior.
- Explore optimization strategies beyond brute-force object reduction.

Why this matters: industrial metaverse applications require rendering thousands of interactive objects. Confirming these limits will help our team design safer content guidelines, prioritize GPU instancing/LOD investments, and potentially contribute back to PolySpatial's optimization.

I'd appreciate insights from engineers who've pushed similar large-scale scenes in visionOS, worked around PolySpatial's cloning overhead, or discovered alternative capacity limits (vertices/draw calls).
2 replies · 0 boosts · 534 views · Jul ’25
RealityKit and USDZ: Winding Order Issue with Negatively Scaled Meshes
Hi all, I've encountered a potential issue with how the winding order of geometry is handled when its transformation involves negative scaling. I created a simple test asset, a single triangle, to demonstrate this. The triangle's vertices are defined in a counter-clockwise ("right-handed") winding order, and its transform has a negative scale on the X-axis. According to the OpenUSD specification, this negative determinant in the transformation matrix should effectively reverse the winding order of the geometry:

"However, any given gprim's local-to-world transformation can flip its effective orientation, when it contains an odd number of negative scales. This condition can be reliably detected using the (Jacobian) determinant of the local-to-world transform: if the determinant is less than zero, then the gprim's orientation has been flipped, and therefore one must apply the opposite handedness rule when computing its surface normals (or just flip the computed normals) for the purposes of hidden surface detection and lighting calculations."

When I view the asset in tools like Blender or Preview on macOS, it behaves as expected: the triangle's effective orientation is flipped to CW. However, when the same asset is viewed in Reality Composer Pro or with QuickLook on iOS, its effective orientation remains CCW; in other words, the triangle faces the opposite direction. My questions for the community and Apple are:

- Is this behavior in RealityKit a known issue?
- If this is a known issue, is there official guidance for DCC tools on how to export USDZ assets to ensure they appear correctly in the Apple ecosystem?

Any insights or recommendations would be greatly appreciated.
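Since the OpenUSD passage above reduces to a determinant test, here is a minimal sketch of that check in plain simd, useful for confirming what a given transform should do to the winding order:

```swift
import simd

// If the determinant of the upper-left 3x3 (linear) part of the
// local-to-world matrix is negative, the prim's effective orientation is
// flipped and the opposite handedness rule (or flipped normals) applies.
func orientationIsFlipped(localToWorld m: float4x4) -> Bool {
    let linear = float3x3(
        SIMD3(m.columns.0.x, m.columns.0.y, m.columns.0.z),
        SIMD3(m.columns.1.x, m.columns.1.y, m.columns.1.z),
        SIMD3(m.columns.2.x, m.columns.2.y, m.columns.2.z)
    )
    return linear.determinant < 0
}

// A single negative scale axis flips orientation:
let flipX = float4x4(diagonal: SIMD4<Float>(-1, 1, 1, 1))
print(orientationIsFlipped(localToWorld: flipX)) // true
```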
2 replies · 0 boosts · 293 views · Jul ’25
Regression: RealityKit spatial audio crackles and pops on iOS 26.0 beta 5 (FB19423059)
RealityKit spatial audio crackles and pops on iOS 26.0 beta 5. It works correctly on iOS 18.6 and visionOS 26.0 beta 5. The APIs used are AudioPlaybackController, Entity.prepareAudio, Entity.play Videos of the expected and observed behavior are attached to the feedback FB19423059. The audio should be a consistent, repeating sound, but it seems oddly abbreviated and the volume varies unexpectedly. Thank you for investigating this issue.
0 replies · 0 boosts · 226 views · Aug ’25
moveCharacter reports collision with itself
I'm running into an issue with collisions between two entities that have a character controller component. In the collision handler for moveCharacter, the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter(). The example below configures three objects:

- a stationary sphere with a character controller
- a falling sphere with a character controller
- a stationary cube with a collision component

If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity to be the falling sphere. I would expect hitEntity to be the stationary sphere and characterEntity to be the falling sphere. If the falling sphere hits the cube with a collision component, the hitEntity is the cube and the characterEntity is the falling sphere, as expected. Is this the expected behavior? The entities act as expected visually; however, if I want the spheres to react differently depending on which character they collided with, I am not getting the expected results. E.g.: if a player-controlled character collides with an NPC, exchange a resource with the NPC; if the player collides with an enemy, take damage.

```swift
import SwiftUI
import RealityKit

struct ContentView: View {
    @State var root: Entity = Entity()
    @State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
    @State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
    @State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)

    // relative to root
    @State var fallFrom: SIMD3<Float> = [0, 0.5, 0]

    var body: some View {
        RealityView { content in
            content.add(root)
            root.position = [0, -0.5, 0.0]

            root.addChild(stationary)
            stationary.position = [0, 0.05, 0]

            root.addChild(falling)
            falling.position = fallFrom

            root.addChild(collisionCube)
            collisionCube.position = [0.2, 0, 0]
            collisionCube.components.set(InputTargetComponent())
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
            let tapPosition = tap.entity.position(relativeTo: root)
            falling.components.remove(FallComponent.self)
            falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
        })
        .toolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                HStack {
                    Button("Drop") {
                        falling.components.set(FallComponent(speed: 0.4))
                    }
                    Button("Reset") {
                        falling.components.remove(FallComponent.self)
                        falling.teleportCharacter(to: fallFrom, relativeTo: root)
                    }
                }
            }
        }
    }
}

@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
    let character = ModelEntity(mesh: .generateSphere(radius: radius),
                                materials: [SimpleMaterial(color: color, isMetallic: false)])
    character.name = name
    character.components.set(CharacterControllerComponent(radius: radius, height: radius))
    return character
}

@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
    let cube = ModelEntity(mesh: .generateBox(size: size),
                           materials: [SimpleMaterial(color: color, isMetallic: false)])
    cube.name = name
    cube.generateCollisionShapes(recursive: true)
    return cube
}

struct FallComponent: Component {
    let speed: Float
}

struct FallSystem: System {
    static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
    static let query: EntityQuery = .init(where: predicate)
    let down: SIMD3<Float> = [0, -1, 0]

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            let speed = entity.components[FallComponent.self]?.speed ?? 0.5
            entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
                if collision.hitEntity == collision.characterEntity {
                    print("hit entity has collided with itself")
                }
                print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name)")
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
```
1 reply · 0 boosts · 114 views · Aug ’25
How to apply the same SystemImage to both mainEmitter and spawnedEmitter without clipping in ParticleEmitterComponent?
Hi everyone, I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the "Simulating particles in your visionOS app" documentation. In the sample app, when I set the EmitterPreset to fireworks in the settings panel on the left side of the window and choose SystemImage, I noticed two issues:

1. The image applied to mainEmitter appears clipped or cropped.
2. The image on spawnedEmitter does not update to the selected SystemImage.

What I want to achieve:

- Apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping.
- Remove the animation that changes the size of spawnedEmitter over time and keep it at a constant size.

Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated. Thanks in advance!
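As a starting point, here is a hedged sketch of setting the same texture on both emitters; the property names (image, spawnedEmitter, sizeMultiplierAtEndOfLifeSpan) are my best reading of the ParticleEmitterComponent API and should be verified against the current SDK:

```swift
import RealityKit

// Sketch: share one particle texture between the main and spawned
// emitters, and keep spawned particles at a constant size.
func applySharedParticleImage(to entity: Entity, texture: TextureResource) {
    guard var emitter = entity.components[ParticleEmitterComponent.self] else { return }

    emitter.mainEmitter.image = texture
    // The spawned emitter is configured separately; presets do not
    // inherit the main emitter's image automatically.
    emitter.spawnedEmitter?.image = texture

    // 1.0 means "same size at end of life as at birth" (name assumed).
    emitter.spawnedEmitter?.sizeMultiplierAtEndOfLifeSpan = 1.0

    entity.components.set(emitter)
}
```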
0 replies · 0 boosts · 407 views · 3w