RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit subtopic

Post · Replies · Boosts · Views · Activity

iOS Simulator can only render 1 RealityView
I'm using RealityView in my iOS game mixed with SwiftUI. For the following two example usages, the Simulator will only render the first RealityView, and the second one is either extremely laggy or shows a black model. Running on a real device is all good; only the Simulator has this issue. 1) A TabView where each tab has a RealityView. 2) A root view and a detail view connected via push navigation, where both the root and the detail view have a RealityView. In the Simulator, the second RealityView is very choppy and basically unusable, but on a real iPhone everything looks great. Is this a known Simulator issue, or did I do something wrong?
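A minimal sketch of the first setup described above (two tabs, each hosting its own RealityView); the view names and box contents are illustrative and not from the original post:

    import SwiftUI
    import RealityKit

    // Hypothetical repro: a TabView where each tab hosts its own RealityView.
    struct TwoRealityViewsDemo: View {
        var body: some View {
            TabView {
                BoxRealityView(color: .red)
                    .tabItem { Label("First", systemImage: "1.circle") }
                BoxRealityView(color: .blue)
                    .tabItem { Label("Second", systemImage: "2.circle") }
            }
        }
    }

    struct BoxRealityView: View {
        let color: UIColor
        var body: some View {
            RealityView { content in
                // A simple box so each tab has something to render.
                let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                      materials: [SimpleMaterial(color: color, isMetallic: false)])
                content.add(box)
            }
        }
    }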
Replies: 0 · Boosts: 0 · Views: 93 · Activity: Jun ’25

ModelEntity(named:in:) fails to load USD file from RealityKitContent bundle with misleading error?
My experience has been that ModelEntity(named:in:) can be used to load a USD file with a simple structure consisting of entities and model entities, and, critically, it will flatten the entity hierarchy down to a single ModelEntity, presumably reducing the number of draw calls. However, can anyone verify that the following is true? If ModelEntity(named:in:) is used to load a USD file from a RealityKit content bundle, it may fail when the USD file contains more complex data, such as shader graph material definitions, or perhaps for some other reason; I am not sure. AND the error that ModelEntity(named:in:) throws in this case is: Cannot load RealityKitContent entity: Failed to find resource with name "<name>" in bundle, which would literally suggest that the file does not exist, instead of what I assume the error actually is, which is "the file exists but its entity hierarchy could not be flattened to a single ModelEntity". Is that an accurate description of the known behavior of ModelEntity(named:in:)? I understand that I could use Entity(named:in:) instead, without the flattening feature. My question is really more about the seemingly misleading error message. Thank you for any clarification you can provide.
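For reference, a sketch of the two loading paths being compared, assuming a USD file in the RealityKitContent bundle; the resource name "CharacterModel" and the fallback logic are illustrative only:

    import RealityKit
    import RealityKitContent

    // Try the flattening loader first, then fall back to the hierarchy-preserving one.
    func loadCharacter() async -> Entity? {
        do {
            // Flattens the USD hierarchy into a single ModelEntity (fewer draw calls).
            return try await ModelEntity(named: "CharacterModel", in: realityKitContentBundle)
        } catch {
            // The error quoted above surfaces here, even when the file exists.
            print("ModelEntity load failed:", error)
            // Entity(named:in:) keeps the full hierarchy and handles more complex USD content.
            return try? await Entity(named: "CharacterModel", in: realityKitContentBundle)
        }
    }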
Replies: 2 · Boosts: 0 · Views: 128 · Activity: May ’25

How to use CharacterControllerComponent.
I am trying to implement a CharacterControllerComponent using the following documentation page: https://developer.apple.com/documentation/realitykit/charactercontrollercomponent I have written sample code, but PhysicsSimulationEvents.WillSimulate is not executed and nothing happens.

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ImmersiveView: View {
        let gravity: SIMD3<Float> = [0, -50, 0]
        let jumpSpeed: Float = 10

        enum PlayerInput {
            case none, jump
        }

        @State private var testCharacter: Entity = Entity()
        @State private var myPlayerInput = PlayerInput.none

        var body: some View {
            RealityView { content in
                // Add the initial RealityKit content
                if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                    testCharacter = immersiveContentEntity.findEntity(named: "Capsule")!
                    testCharacter.components.set(CharacterControllerComponent())

                    let _ = content.subscribe(to: PhysicsSimulationEvents.WillSimulate.self, on: testCharacter) { event in
                        print("subscribe run")
                        let deltaTime: Float = Float(event.deltaTime)
                        var velocity: SIMD3<Float> = .zero
                        var isOnGround: Bool = false

                        // RealityKit automatically adds `CharacterControllerStateComponent` after moving the character for the first time.
                        if let ccState = testCharacter.components[CharacterControllerStateComponent.self] {
                            velocity = ccState.velocity
                            isOnGround = ccState.isOnGround
                        }

                        if !isOnGround {
                            // Gravity is a force, so you need to accumulate it for each frame.
                            velocity += gravity * deltaTime
                        } else if myPlayerInput == .jump {
                            // Set the character's velocity directly to launch it in the air when the player jumps.
                            velocity.y = jumpSpeed
                        }

                        testCharacter.moveCharacter(by: velocity * deltaTime, deltaTime: deltaTime, relativeTo: nil) { event in
                            print("playerEntity collided with \(event.hitEntity.name)")
                        }
                    }
                }
            }
        }
    }

The scene is loaded from RCP. It is simple, just a capsule on a pedestal. Do I need separate code to run testCharacter from this state?
Replies: 0 · Boosts: 0 · Views: 97 · Activity: May ’25

Improving person segmentation and occlusion quality in RealityKit
I’m building an app that uses RealityKit and specifically ARConfiguration.FrameSemantics.personSegmentationWithDepth. The goal is to insert an AR object into the scene behind a person, and an additional AR object in front of the person, while being as photorealistic as possible. Through testing, I’ve noticed that many times the edges of the person segmentation mask are not well matched to the actual person, and parts of the person are transparent, with the AR object bleeding through. It’s sort of like a “bad green screen” effect, which I’d expect to see a little bit, but not to this extent. I’ve been testing on iPhone 16, iPhone 14 Pro, iPad Pro 12.9-inch (6th generation), and iPhone 12 Pro, with similar results across all devices. I’m wondering what else I can do to improve this, whether code changes, platform (like different iPhone models), or environment (like lighting, distance, etc.). Attaching some example screen grabs and a minimum reproducible code sample. Appreciate any insights!

    import ARKit
    import SwiftUI
    import RealityKit

    struct RealityViewContainer: UIViewRepresentable {
        func makeUIView(context: Context) -> ARView {
            let arView = ARView(frame: .zero)
            arView.environment.sceneUnderstanding.options.insert(.occlusion)
            arView.renderOptions.insert(.disableMotionBlur)
            arView.renderOptions.insert(.disableDepthOfField)

            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal]
            if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
                configuration.frameSemantics.insert(.personSegmentationWithDepth)
            }

            arView.session.run(configuration)
            arView.session.delegate = context.coordinator
            context.coordinator.arView = arView
            return arView
        }

        func makeCoordinator() -> Coordinator {
            Coordinator(self)
        }

        class Coordinator: NSObject, ARSessionDelegate {
            var parent: RealityViewContainer
            weak var arView: ARView? // Set from makeUIView(context:)
            var floorAnchor: ARPlaneAnchor?

            init(_ parent: RealityViewContainer) {
                self.parent = parent
            }

            func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
                if let arView, floorAnchor == nil {
                    for anchor in anchors {
                        if let horizontalPlaneAnchor = anchor as? ARPlaneAnchor,
                           horizontalPlaneAnchor.alignment == .horizontal,
                           horizontalPlaneAnchor.transform.columns.3.y < arView.cameraTransform.translation.y { // filter out ceiling
                            floorAnchor = horizontalPlaneAnchor

                            let backgroundEntity = BackgroundEntity()
                            let anchorEntity = AnchorEntity(anchor: horizontalPlaneAnchor)
                            anchorEntity.addChild(backgroundEntity)

                            let foregroundEntity = ForegroundEntity()
                            backgroundEntity.addChild(foregroundEntity)

                            arView.scene.addAnchor(anchorEntity)
                            arView.installGestures([.rotation, .translation], for: backgroundEntity)
                            break // Stop after adding the first horizontal plane (floor)
                        }
                    }
                }
            }
        }
    }
Replies: 1 · Boosts: 0 · Views: 81 · Activity: May ’25

Pink Screen with VideoMaterial in ARKit
Hi everyone, I'm developing an ARKit app using RealityKit and encountering an issue where a video displayed on a 3D plane shows up as a pink screen instead of the actual video content. Here's a simplified version of my setup:

    import RealityKit
    import AVFoundation

    func createVideoScreen(video: AVPlayerItem, canvasWidth: Float, canvasHeight: Float, aspectRatio: Float, fitsWidth: Bool = true) -> ModelEntity {
        let width = fitsWidth ? canvasWidth : canvasHeight * aspectRatio
        let height = fitsWidth ? canvasWidth * (1 / aspectRatio) : canvasHeight
        let screenPlane = MeshResource.generatePlane(width: width, depth: height)
        let videoMaterial: Material = createVideoMaterial(videoItem: video)
        let videoScreenModel = ModelEntity(mesh: screenPlane, materials: [videoMaterial])
        return videoScreenModel
    }

    func createVideoMaterial(videoItem: AVPlayerItem) -> VideoMaterial {
        let player = AVPlayer(playerItem: videoItem)
        let videoMaterial = VideoMaterial(avPlayer: player)
        player.play()
        return videoMaterial
    }

Despite following the standard process, the video plane renders pink. Has anyone encountered this before, or does anyone know what might be causing it? Thanks in advance!
Replies: 16 · Boosts: 3 · Views: 1.1k · Activity: May ’25

How to add and remove child entities to a rigged entity in RealityKit?
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app. The character has customization such as clothing items and hair, and all objects are properly weighted to the rig. The way the model is set up in Blender is like so: groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting hierarchy: Before exporting for the animation (armature modifier applied), I simply had to store the Model entities and swap them in, but now when I export with the Armature Modifier applied, so that animations get exported, the ModelComponent gets flattened to the armature, and swapping entities and applying new materials to them is no longer as simple.

Here's a demo blend file and usdc export with a setup like mine, having an animated bone to swing a cube and sphere, to be swapped so that only one is visible: https://www.dropbox.com/scl/fo/be2q6qcztc83z7c4gj1w0/AMapxWc_ip2KZ8oTOYDUMv8?rlkey=rcdaggcxq06dyen09mw5mqmem&st=bnc0d7j0&dl=0

This is how I'm loading the entity and removing a part, with the demo files:

    import SwiftUI
    import RealityKit

    struct SwapDemoView: View {
        var body: some View {
            RealityView { content in
                let camera = PerspectiveCamera()
                camera.transform.translation = SIMD3(x: 0, y: 0.1, z: 3)

                guard let root = try? await Entity(named: "simpleSwapDemo") else { fatalError("simpleSwapDemo.usdc is not present") }
                print(root) // Get initial hierarchy

                guard let cube = root.findEntity(named: "Cube") else { fatalError("Entity cube doesn't exist") }
                cube.removeFromParent() // <-- Cube is still visible after removal
                print(root) // Get hierarchy to confirm removal of cube

                let resource = root.availableAnimations[0]
                root.playAnimation(resource.repeat())

                content.add(root)
                content.add(camera)
            }
            .background(.white)
        }
    }

And this is what the entity hierarchy looks like in RealityKit before cube removal:

    ▿ 'root' : Entity, children: 1
        ⟐ SynchronizationComponent
        ⟐ AnimationLibraryComponent
        ⟐ Transform
      ▿ 'Armature' : ModelEntity, children: 2
          ⟐ SynchronizationComponent
          ⟐ ModelComponent
          ⟐ SkeletalPosesComponent
          ⟐ AnimationLibraryComponent
          ⟐ Transform
        ▿ 'Armature' : Entity
            ⟐ SynchronizationComponent
            ⟐ Transform
        ▿ 'Primitives' : Entity, children: 2
            ⟐ SynchronizationComponent
            ⟐ Transform
          ▿ 'Sphere' : Entity, children: 1
              ⟐ SynchronizationComponent
              ⟐ Transform
            ▿ 'Sphere' : Entity
                ⟐ SynchronizationComponent
                ⟐ Transform
          ▿ 'Cube' : Entity, children: 1
              ⟐ SynchronizationComponent
              ⟐ Transform
            ▿ 'Cube' : Entity
                ⟐ SynchronizationComponent
                ⟐ Transform

And here's the hierarchy after removal:

    ▿ 'root' : Entity, children: 1
        ⟐ SynchronizationComponent
        ⟐ AnimationLibraryComponent
        ⟐ Transform
      ▿ 'Armature' : ModelEntity, children: 2
          ⟐ SynchronizationComponent
          ⟐ ModelComponent
          ⟐ SkeletalPosesComponent
          ⟐ AnimationLibraryComponent
          ⟐ Transform
        ▿ 'Armature' : Entity
            ⟐ SynchronizationComponent
            ⟐ Transform
        ▿ 'Primitives' : Entity, children: 1
            ⟐ SynchronizationComponent
            ⟐ Transform
          ▿ 'Sphere' : Entity, children: 1
              ⟐ SynchronizationComponent
              ⟐ Transform
            ▿ 'Sphere' : Entity
                ⟐ SynchronizationComponent
                ⟐ Transform

And this is the result: What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
Replies: 1 · Boosts: 0 · Views: 101 · Activity: May ’25

How to configure RealityKit entities for animations on a modular character?
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app. The character has customization such as clothing items and hair, and all objects are properly weighted to the rig. The way the model is set up in Blender is like so: groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting entity hierarchy, viewed in Reality Composer Pro: My problem is that when I export with the Armature Modifier applied to the objects, so that animations get exported, the ModelComponent gets flattened to the armature, and swapping entities is no longer as simple as removing the entity with the corresponding name. What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
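For context, a sketch of the simple name-based swap described above, which works while each clothing item is still its own entity (i.e. before the hierarchy is flattened into the armature); the entity names are hypothetical:

    import RealityKit

    // Swap the currently equipped shoes for a replacement entity.
    func swapShoes(on characterRoot: Entity, with newShoes: Entity) {
        // Remove the old item by name.
        if let currentShoes = characterRoot.findEntity(named: "Shoes_Sneakers") {
            currentShoes.removeFromParent()
        }
        // Attach the replacement under the same group so it follows the rig.
        if let shoeGroup = characterRoot.findEntity(named: "Shoes") {
            shoeGroup.addChild(newShoes)
        }
    }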
Replies: 1 · Boosts: 0 · Views: 48 · Activity: May ’25

Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface. On the visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, either in the Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console: If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes above, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2. Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]." Thank you.
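For reference, a minimal sketch of the loading call in question; the scene file and material path are hypothetical and would need to match the names in your Reality Composer Pro project:

    import RealityKit
    import RealityKitContent

    func applyUnlitTextureMaterial(to model: ModelEntity) async {
        do {
            // Loads the shader graph material authored in Reality Composer Pro.
            let material = try await ShaderGraphMaterial(named: "/Root/UnlitTextureMaterial",
                                                         from: "Scene.usda",
                                                         in: realityKitContentBundle)
            model.model?.materials = [material]
        } catch {
            // On visionOS 1.x this is where the reported failure surfaces.
            print("ShaderGraphMaterial load failed:", error)
        }
    }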
Replies: 7 · Boosts: 0 · Views: 1.4k · Activity: May ’25

You cannot debug in simulator
I can't create any breakpoint in my Xcode after I upgraded to macOS 15.4.

macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)

My app works well without any breakpoints. But if I create any breakpoint it shows me this:

Couldn't find the Objective-C runtime library in loaded images.
Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.
Replies: 3 · Boosts: 5 · Views: 357 · Activity: Apr ’25

RealityKit fails with EXC_BAD_ACCESS at CMClockGetAnchorTime in the simulator
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView. I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085 I've included a crash log with the feedback. If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior. Please let me know if there's anything I can do to help. Thank you.
Replies: 1 · Boosts: 0 · Views: 551 · Activity: Apr ’25

Is Using Metal Compute Shaders for Efficient Resource Copying to RealityKit the Best Approach for Streaming Data in Real-Time Rendering?
Hi Apple, In visionOS, for real-time streaming of large 3D scenes, I plan to create Metal buffers and textures on multiple threads and then use a compute shader on the main thread to copy the Metal resources into RealityKit, minimizing main-thread usage. Given that most of RealityKit's default APIs require execution on the main actor (main thread), it is not ideal for streaming data. Is this approach the best way to handle streaming data and real-time rendering? Thank you very much.
Replies: 0 · Boosts: 0 · Views: 74 · Activity: Apr ’25

Reality Composer Pro Transparent Textures
Hey everyone, I am currently developing an app for visionOS and using Reality Composer Pro to create scenes to put in my app. I have a humanoid model with hair strands, and each strand of hair has an opacity map. However, some reflections are still visible even though the opacity is zero. There is also some weird culling among hair strands (in the left circle) and weird reflections in hair cards (in the right circle). Here are my settings for the materials. Since all the hair strands are interconnected with each other, it is hard to decide the drawing order in Xcode, so I am wondering if there's an easier way to handle transparent objects. Please let me know if you know anything helpful, much appreciated!
Replies: 0 · Boosts: 0 · Views: 77 · Activity: Apr ’25

Portals do not occlude CollisionComponent and InputTargetComponent
Hello. If you add a ModelEntity to a world inside a portal, the drawing of the model is properly occluded by the portal bounds. However, the invisible shapes of the InputTargetComponent and CollisionComponent are not occluded. They are able to cross the portal, and if you have gestures on your ModelEntity you can trigger them in areas outside the portal bounds. This happens even if the ModelEntity has no PortalCrossingComponent.
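A minimal sketch of the setup described above (visionOS), with illustrative sizes and names; the box inside the portal world carries the collision and input-target components in question:

    import RealityKit

    // Build a portal plane plus the hidden world it reveals.
    // Add both returned entities to the RealityView content.
    func makePortalScene() -> (portal: Entity, world: Entity) {
        // The hidden world that is only rendered through the portal.
        let world = Entity()
        world.components.set(WorldComponent())

        // A tappable model inside the world; per the report above, its collision and
        // input-target shapes are not clipped to the portal bounds.
        let box = ModelEntity(mesh: .generateBox(size: 0.3),
                              materials: [SimpleMaterial(color: .orange, isMetallic: false)])
        box.components.set(InputTargetComponent())
        box.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.3])]))
        box.position = [0, 0, -0.5]
        world.addChild(box)

        // The portal surface that reveals the world.
        let portalPlane = ModelEntity(mesh: .generatePlane(width: 1, height: 1),
                                      materials: [PortalMaterial()])
        portalPlane.components.set(PortalComponent(target: world))
        return (portalPlane, world)
    }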
Replies: 0 · Boosts: 1 · Views: 48 · Activity: Mar ’25

visionOS: getting real geometry reflections in RealityKit?
As in the Environments, where we get real-time reflections of movies on a screen, or reflections of the surrounding Mt. Hood scenery in the background... could I get a metallic surface showing accurate reflections of a box placed on top of it? I don't mean using a probe or HDR cubemap; I mean the same accurate reflections as the water of Mt. Hood reflecting the movie I'm watching in another app.
Replies: 1 · Boosts: 0 · Views: 83 · Activity: Mar ’25

Gestures not working correctly when setting the fov orientation to .horizontal
Hi there, I've discovered an issue with gesture handling in RealityKit when setting the camera’s fieldOfViewOrientation to horizontal. For instance, if I render a simple box at the center of the view with a collision shape that exactly matches its dimensions, the actual hit area behaves as if it's smaller than the box. Additionally, when attempting to drag the box away from the center, the hit area appears misaligned—offset slightly towards the center. Since the default fieldOfViewOrientation is vertical and everything works as expected in that mode, it seems that the gesture system might be assuming a vertical FOV. Given that the API allows setting it to horizontal, perhaps gestures should function correctly regardless of the orientation? Thank you!
Replies: 1 · Boosts: 0 · Views: 443 · Activity: Mar ’25

How can I simultaneously apply the drag gesture to multiple entities?
I wanted to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but it only recognizes the latest drag gesture.

    RealityView { content, attachments in
        ...
    }
    .gesture(
        DragGesture()
            .targetedToEntity(EntityA)
            .onChanged { value in
                ...
            }
    )
    .gesture(
        DragGesture()
            .targetedToEntity(EntityB)
            .onChanged { value in
                ...
            }
    )

I also tried using simultaneously(with:), but that didn't work either; maybe I'm missing something:

    .gesture(
        DragGesture()
            .targetedToEntity(EntityA)
            .onChanged { value in
                ...
            }
            .simultaneously(with:
                DragGesture()
                    .targetedToEntity(EntityB)
                    .onChanged { value in
                        ...
                    }
            )
    )
Replies: 1 · Boosts: 1 · Views: 617 · Activity: Mar ’25

Game Center breaks RealityView world tracking
Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5? I'm in the process of upgrading my app to make use of the much-appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:

    content.camera = .worldTracking

However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. Then I discovered that the issue disappeared when I removed the line:

    GKLocalPlayer.local.authenticateHandler = { viewController, error in
        // ... some more code ...
    }

So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
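A sketch of the combination described above, for illustration only (the view name, sphere, and handler body are made up):

    import SwiftUI
    import RealityKit
    import GameKit

    struct TrackingWithGameCenterView: View {
        var body: some View {
            RealityView { content in
                // World tracking drives the camera from device motion on iOS 18.
                content.camera = .worldTracking

                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                         materials: [SimpleMaterial(color: .green, isMetallic: false)])
                sphere.position = [0, 0, -0.5]
                content.add(sphere)
            }
            .onAppear {
                // Setting this handler is what reportedly breaks tracking.
                GKLocalPlayer.local.authenticateHandler = { viewController, error in
                    // Present `viewController` if Game Center needs the player to sign in.
                    if let error { print("Game Center auth error:", error) }
                }
            }
        }
    }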
Replies: 5 · Boosts: 1 · Views: 834 · Activity: Mar ’25

RealityKit SIMD3<Float> precision decreases with distance?
The farther away the center of a large entity is, the less accurate the positioning is? For example, I am changing only the y-axis position of an entity that is tens of meters long, but I notice x and z drifting slowly the farther away the center of the entity is. I would not expect x and z to move. It might be compounding rounding errors somewhere, or maybe the RealityKit engine is deciding not to be super precise about distant objects? Otherwise I just have a bug somewhere.
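As a quick illustration of why precision could plausibly drop with distance: the spacing between adjacent representable Float values (the ulp) grows with magnitude, so positions far from the origin are stored more coarsely. A tiny sketch, not from the original post:

    let nearOrigin: Float = 1.0
    let midRange: Float = 50.0
    let farAway: Float = 5000.0

    // .ulp is the gap to the next representable value at that magnitude.
    print(nearOrigin.ulp) // ≈ 1.2e-07
    print(midRange.ulp)   // ≈ 3.8e-06
    print(farAway.ulp)    // ≈ 4.9e-04, roughly half a millimeter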
Replies: 5 · Boosts: 0 · Views: 533 · Activity: Mar ’25