RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

Each post below is listed with its reply count, boosts, views, and most recent activity.

Presenting images in RealityKit sample No Longer Builds
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the "Presenting images in RealityKit" sample from the following link no longer runs: https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit Expected (previous behavior): the application builds and runs on device, working as described in the documentation. Actual: the application builds, but does not run on device due to an error (shown in the screenshot), "Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)". The application still runs in the simulator, but not on device. When launching the app from Xcode, it builds and installs correctly but hangs on that error. When launching the app from the Home Screen, it does not load and immediately returns to the Home Screen. This Xcode project previously ran with no changes to the code; the only change was updating the visionOS system software to the latest version, visionOS 26 Beta 4 (23M5300g). Is anyone else experiencing this issue?
4 replies · 0 boosts · 189 views · Aug ’25
How to Achieve Volumetric Lighting (Light Shafts) in RealityKit on visionOS?
Hello everyone, I am currently developing an experience for visionOS using RealityKit and I would like to achieve volumetric light effects, such as visible light rays or shafts through fog or dust. I found this GitHub project, https://github.com/robcupisz/LightShafts, which demonstrates the kind of visual style I am aiming for. I would like to know if there is a way to create similar effects using RealityKit on visionOS. So far, I have experimented with DirectionalLight, SpotLight, ImageBasedLight, and custom materials (e.g., additive blending on translucent meshes), but none of these approaches can replicate the volumetric light-shaft look shown in the repository above. Questions:
- Is there a recommended technique or workaround in RealityKit to simulate light shafts or volumetric lighting?
- Is creating a custom mesh (e.g., a cone or volume geometry with gradient alpha and additive blending) the only feasible method?
- Are there any examples, best practices, or sample projects from Apple or other developers that showcase a similar visual style?
Any advice or hints would be greatly appreciated. Thank you in advance!
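Not part of the original post: a minimal sketch of the "fake shaft" approach from the second question — a translucent, unlit cone parented under the light so it points along the beam. This only approximates the look; a true additive blend would, as far as I know, need a custom ShaderGraphMaterial authored in Reality Composer Pro, and generateCone availability depends on the SDK version.

    import RealityKit
    import UIKit

    // A hedged sketch: a soft, unlit, translucent cone to suggest a light shaft.
    func makeLightShaft(length: Float = 1.0, radius: Float = 0.25) -> ModelEntity {
        var material = UnlitMaterial(color: UIColor.white.withAlphaComponent(0.15))
        material.blending = .transparent(opacity: .init(floatLiteral: 0.15))

        let cone = MeshResource.generateCone(height: length, radius: radius)
        let shaft = ModelEntity(mesh: cone, materials: [material])

        // generateCone builds along +Y, so rotate the cone to point down -Z
        // like a spotlight, and push it forward so it spans the beam.
        shaft.orientation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
        shaft.position = [0, 0, -length / 2]
        return shaft
    }

Parenting the returned entity under the spotlight's entity keeps the shaft aligned with the light direction.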
9 replies · 1 boost · 802 views · Aug ’25
RealityKit fullscreen layer
Hi! I'm currently trying to render another XR scene in front of a RealityKit one. At the moment, I'm anchoring a plane to the head, with a shader that displays side-by-side images for the left and right eyes. By default the camera has a near plane, so I can't draw directly at z = 0. Is there a way to change the camera's near plane? Or is there a better solution for overlaying an image/texture per eye? Ideally, I would layer some kind of CompositorLayer on top of RealityKit, but as far as I know that's sadly not possible. Thanks in advance and have a good day!
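Not part of the original post: a hedged sketch of the head-anchored overlay described above, with the quad pushed slightly forward so it isn't clipped (the 0.25 m offset is a guess, not a documented near-plane value).

    import RealityKit

    // A minimal sketch: a quad locked to the head, offset forward along -Z
    // so it sits past the near clipping distance instead of at z = 0.
    func makeHeadLockedOverlay(material: Material) -> AnchorEntity {
        let headAnchor = AnchorEntity(.head)
        let quad = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.2),
                               materials: [material])
        quad.position = [0, 0, -0.25] // illustrative distance, tune to taste
        headAnchor.addChild(quad)
        return headAnchor
    }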
2 replies · 0 boosts · 286 views · Jul ’25
Struggles with attaching a ModelEntity to the skeleton joints of another ModelEntity
In SceneKit, when creating an .scn file from a rigged model, the framework created an SCNNode for each bone/joint, so you could add and remove child nodes directly on joints, and, like any other SCNNode, you could read each joint's world position and world orientation. The analog would be for joints to be accessible as child entities of a ModelEntity in RealityKit. I am unable to proceed with migrating my project from SceneKit because of this: there does not seem to be a way to even access the true world position of a joint with the current jointNames/jointTransforms paradigm. The translation in a single joint transform is insufficient to determine the joint's location at any given time, and other approaches, like creating a GeometricPin for the joint name and attaching it to another entity, do not seem to work. Attaching an item to the hand of a rigged model was trivial in SceneKit and now feels impossible in RealityKit. I am not the first person to notice this, and I am feeling demoralized about proceeding with RealityKit with such a critical piece of functionality blocked: https://stackoverflow.com/questions/76726241/how-do-i-attach-an-entity-to-a-skeletons-joint-in-realitykit Will this be addressed in some way?
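Not from the original post: one partial workaround is to accumulate the local jointTransforms up the joint path yourself to approximate a joint's model-space, and from there world-space, transform. A minimal sketch, assuming jointNames contains full slash-separated paths (e.g. "root/hips/spine/hand_L") and that jointTransforms reflects the current pose:

    import RealityKit
    import simd

    // Approximates the world-space transform of a skeleton joint by walking the
    // joint path in jointNames and accumulating the local jointTransforms.
    func worldTransform(ofJoint jointPath: String, in model: ModelEntity) -> simd_float4x4? {
        guard let index = model.jointNames.firstIndex(of: jointPath) else { return nil }
        var matrix = model.jointTransforms[index].matrix

        // Walk up the chain: "root/hips/spine/hand_L" -> "root/hips/spine" -> ...
        var path = jointPath
        while let slash = path.range(of: "/", options: .backwards) {
            path = String(path[..<slash.lowerBound])
            if let parentIndex = model.jointNames.firstIndex(of: path) {
                matrix = model.jointTransforms[parentIndex].matrix * matrix
            }
        }

        // Lift the accumulated model-space transform into world space.
        return model.transformMatrix(relativeTo: nil) * matrix
    }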
5 replies · 2 boosts · 717 views · Jul ’25
BlendShapes don’t animate while playing animation in RealityKit
Hi everyone, I'm running into an issue with RealityKit when trying to animate BlendShapes (shape keys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender). The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that the weights are being updated, but no visual change is visible. The exact same blend-shape animation works perfectly when no animation is playing. In SceneKit the same model works as expected: shape keys animate during animation playback. In RealityKit, as soon as an animation starts, the shape keys no longer animate. Here's a test project on GitHub that demonstrates the issue clearly: https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing. Is this a known limitation in RealityKit? Or is there a recommended way to combine skeletal animations with real-time BlendShape updates? Thanks in advance for any insights.
3 replies · 3 boosts · 257 views · Jul ’25
USDZ Security
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them on the cloud? I want to ensure no one can inject any sort of exploit through these file types.
0 replies · 0 boosts · 457 views · Jul ’25
RealityKit generates an excessive amount of logging
During regular use, RealityKit generates an excessive amount of internal logging that is not actionable by third party developers. When developing an iOS RealityKit/ARKit app, this makes the Xcode console challenging to use for regular work. (FB19173812) See screenshots below. Xcode does have an option for filtering out logging from specific SDKs, but enabling this feature to suppress the logging of RealityKit and related SDKs like PHASE is something developers have to do dozens of times each day. After a year of developing a RealityKit app, this process becomes frustrating. If SDKs like Foundation, UIKit, and SwiftUI generated as much logging as RealityKit and related SDKs, Xcode's console would be unusable. Is there any way to disable the logging of RealityKit and PHASE permanently? Thank you for any help you provide.
1 reply · 0 boosts · 292 views · Jul ’25
Entities moved with Manipulation Component in visionOS Beta 4 are clipped by volume bounds
In Betas 1, 2, and 3, we could pick up and inspect entities, bringing them closer while moving them outside the bounds of a volume. As of Beta 4, these entities are clipped by the bounds of the volume. I'm not sure if this is a bug or an intended change, but I filed a Feedback report (FB19005083). The release notes don't mention a change in behavior — at least not that I can find. Is this an intentional change or a bug? Here is a video that shows the issue: https://youtu.be/ajBAaSxLL2Y In previous versions of visionOS 26, I could move these entities out of the volume and inspect them close up; releasing would return them to the volume. Now they are clipped as soon as they reach the edge of the volume. I haven't had a chance to test with windows or with the SwiftUI modifier version of manipulation.
1 reply · 4 boosts · 361 views · Jul ’25
RealityKit battery drain intuition?
Hi all, is there a standard, good intuition about RealityKit battery drain versus how the Xcode profiler tends to display things? I have a RealityKit ARView permanently in my view hierarchy, and the profiler reports a constant ~40% CPU, roughly 160 MB more memory than when the view isn't used, and the battery gauge rising from "Low" to the middle of the yellow "High" bar. Yet the actual battery drain attributed to my app is roughly an order of magnitude lower than that of something like TikTok for the same amount of foreground time (about 2% vs. 15%). So I'm a little confused whether to trust the Xcode battery profiler when working with RealityKit and ARViews (unless TikTok really is in very high usage all the time!). Is this a largely ignorable signal, and more generally, how useful is this profiler? It doesn't seem immediately intuitive at first use. Thanks!
0 replies · 0 boosts · 136 views · Jul ’25
When placing a TextField within a RealityViewAttachment, the virtual keyboard does not appear in front of the user as expected.
Hello, Thank you for your time. I have a question regarding visionOS app development. When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area. However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments. We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance. Best regards, Sadao Tokuyama
3 replies · 1 boost · 604 views · Jul ’25
Portal crossing causes inconsistent lighting and visual artifacts between virtual and real spaces (visionOS 2.0)
Hello, I'm working with the new PortalComponent introduced in visionOS 2.0, and I've encountered some issues when transitioning entities between virtual and real-world spaces using crossingMode. Specifically:
- Lighting inconsistency: when CG content (ModelEntities with PhysicallyBasedMaterial) crosses the portal from virtual space into the real environment, the way light reflects on the objects changes noticeably. This causes a jarring visual effect, as the same material appears differently depending on the space it's in.
- Unnatural transition visuals: during the transition, the CG models often appear to "emerge from the wall," especially when crossing from virtual to real. This ruins the immersive illusion and feels visually unnatural.
- IBL adjustment attempts: I've tried adding an ImageBasedLightComponent to the world entity, and while it slightly improves the lighting consistency, the issue still remains to a noticeable degree.
My goal is to create a seamless visual experience when CG entities cross between spaces, without sudden lighting shifts or immersion-breaking geometry reveals. Has anyone else experienced similar issues? Is there a recommended setup or workaround to better control lighting and visual fidelity when using crossingMode with portals in visionOS 2.0? Any guidance would be greatly appreciated. Thank you!
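Not part of the original post: one thing that has helped in similar setups is making both sides of the portal share a single image-based light, so the shading doesn't switch as an entity crosses. A minimal sketch, assuming an environment resource named "StudioIBL" exists in the app bundle (the name is illustrative):

    import RealityKit

    // Apply one IBL source and give both the portal world and the crossing
    // entity a receiver, so their shading stays consistent across the crossing.
    func applySharedIBL(to portalWorld: Entity, and crossingEntity: Entity) async throws {
        let environment = try await EnvironmentResource(named: "StudioIBL")

        // One entity owns the light source...
        let iblSource = Entity()
        iblSource.components.set(ImageBasedLightComponent(source: .single(environment),
                                                          intensityExponent: 1.0))
        portalWorld.addChild(iblSource)

        // ...and every entity that should be lit by it gets a receiver component.
        for entity in [portalWorld, crossingEntity] {
            entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblSource))
        }
    }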
5 replies · 0 boosts · 207 views · Jul ’25
How To Move Camera with PS5 Game Controller
Hello again! I am trying my best to figure out how to get a simple, fully immersive scene that is navigable via a PS5 controller working on Vision Pro. I have been using ChatGPT (both externally and in Xcode when it makes fix recommendations), but I keep going in circles: I send error messages back, try the new "confirmed to work" code, and get more errors. I have also told the AI explicitly that I'm on all the current betas, only to learn that lots of what it suggests has been deprecated (of course!). My initial query was: "How can I create a fully immersive VR experience for Apple Vision Pro, using a PS5 game controller to navigate the user via first person view through a 3D scene?" And then down the rabbit hole the past few days. Seriously, I thought AI was supposed to be helpful! Per the Xcode logs, I'm seeing thumbstick movement, but I simply cannot get the camera to move. Is this even possible? Is the camera defined in Reality Composer Pro, or does it need to be defined outside of it? I'm trying to follow along with the AI, but it keeps going back and forth between solutions. Here is the latest it's spitting out:

"Thanks for this super clear log — and you're absolutely right: the gamepad input is working, CameraRoot is moving as expected, but your actual camera view is not changing. This strongly suggests that RealityKit is not using your custom camera (PerspectiveCameraComponent) for rendering the view. Even though CameraRoot is moving (with correct logs), RealityKit will default to its own internal camera unless (1) you explicitly assign your camera entity to be the one used, or (2) your camera is the only one and RealityKit picks it up automatically. But in visionOS 2.0 / Xcode 26 there seems to be a bug or behavior change where RealityKit no longer auto-uses your custom PerspectiveCameraComponent in RealityView unless it's properly set with content.camera, or part of a .usda/Reality Composer camera hierarchy that's already active. Unfortunately, RealityView.camera = .entity(cameraRoot) is no longer allowed (we confirmed this breaks with a type error). The reliable workaround is to attach your camera to the immersive scene itself. Option A: embed your camera into the Reality Composer Pro scene — open your Immersive.usda scene, add a Camera entity directly to it, name it something like "PlayerCamera", position it at [0, 1.6, 0], and leave it selected as the default camera; then in Swift, find the camera entity with .findEntity(named:) and move that entity instead of creating a new one. Option B: force RealityKit to use your custom camera via transform parenting — since direct camera assignment is unavailable, move the whole RealityView scene via a parent entity with a camera. But let's go with Option A, since you've already got a Reality Composer Pro scene."

Can any developer over there make a suggestion on how to move forward? I just want to create a sandbox template so I can start exploring 3D scenes created in Maya and sent over to the headset. I'll deal with animation next, but come on, why is this so difficult to get working? I'm not a programmer, but I have been trying to wrap my head around Xcode and SwiftUI. This needs to be much simpler. Or you need to provide us creatives with better sample templates and non-programmer explanations of how to set this up properly. Ideally, you hire 3D professionals to work side by side with the programmers to help make these tools usable — especially Reality Composer Pro. Seriously, I am making a concerted effort to use the native tools, even though I would love to be porting Unreal Engine scenes over. If anyone can help point me in the right direction, coming from a 3D creator/animator/modeler perspective, I, and my fellow peers in the XR/AR/VR community, would greatly appreciate it. Thank you.
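Not from the thread itself, but one note that may save some circles: in a visionOS immersive space the rendered viewpoint follows the headset, so apps generally cannot drive it with a PerspectiveCamera; the usual workaround is to read the controller with the GameController framework and move a root entity that contains the whole scene in the opposite direction. A minimal sketch under that assumption (worldRoot and the speed value are illustrative):

    import GameController
    import RealityKit

    // Reads the left thumbstick of a connected extended gamepad. Because the
    // viewer's camera can't be moved on Vision Pro, apply the *opposite* of the
    // desired camera motion to a root entity that contains the whole scene.
    final class ControllerLocomotion {
        private(set) var moveInput = SIMD2<Float>.zero
        private var connectObserver: NSObjectProtocol?

        init() {
            // Pick up controllers that are already paired, then listen for new ones.
            GCController.controllers().compactMap(\.extendedGamepad).forEach(configure)
            connectObserver = NotificationCenter.default.addObserver(
                forName: .GCControllerDidConnect, object: nil, queue: .main
            ) { [weak self] note in
                guard let pad = (note.object as? GCController)?.extendedGamepad else { return }
                self?.configure(pad)
            }
        }

        private func configure(_ pad: GCExtendedGamepad) {
            pad.leftThumbstick.valueChangedHandler = { [weak self] _, x, y in
                self?.moveInput = SIMD2(x, y)
            }
        }

        // Call once per frame (for example from a RealityKit System's update).
        func update(worldRoot: Entity, speed: Float, deltaTime: Float) {
            // Moving the world opposite to the stick reads as moving the camera:
            // pushing the stick up (+y) pulls the world toward the viewer (+z).
            let delta = SIMD3<Float>(-moveInput.x, 0, moveInput.y) * speed * deltaTime
            worldRoot.position += delta
        }
    }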
8 replies · 0 boosts · 698 views · Jul ’25
Unable to Drag 3D Model(Entity) in visionOS when UITextView Is in the Background
I'm running into an issue in Xcode when working on a visionOS app. Whenever I try to drag a 3D model entity in my scene, the drag gesture doesn't work if there's a UITextView (or SwiftUI TextEditor) behind the 3D entity. It seems the UITextView is intercepting the gesture or preventing the drag interaction from reaching the 3D content. Interestingly, when the 3D entity is placed in front of a ScrollView, the drag works as expected. Has anyone else experienced this behavior? Is this a known limitation or a bug in the current tooling? Any workarounds or fixes would be appreciated. Thanks!
0 replies · 0 boosts · 163 views · Jul ’25
Bouncy ball in RealityKit - game
I'm developing a visionOS app with bouncing-ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would. With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics; it's been impossible to make it work by applying custom impulses.

Ball physics setup (following Apple forum recommendations):

    // From PhysicsManager.swift
    private func createBallEntityRealityKit() -> Entity {
        let ballRadius: Float = 0.05
        let ballEntity = Entity()
        ballEntity.name = "bouncingBall"

        // Mesh and material
        let mesh = MeshResource.generateSphere(radius: ballRadius)
        var material = PhysicallyBasedMaterial()
        material.baseColor = .init(tint: .cyan)
        material.roughness = .float(0.3)
        material.metallic = .float(0.8)
        ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

        // Physics setup from Apple Developer Forums
        let physics = PhysicsBodyComponent(
            massProperties: .init(mass: 0.624), // Seems too heavy for a 5 cm ball
            material: PhysicsMaterialResource.generate(
                staticFriction: 0.8,
                dynamicFriction: 0.6,
                restitution: 1.0 // Perfect elasticity, yet still loses energy
            ),
            mode: .dynamic
        )
        ballEntity.components.set(physics)
        ballEntity.components.set(PhysicsMotionComponent())

        // Collision setup
        let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
        ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))

        return ballEntity
    }

Ground plane physics:

    // From GroundPlaneView.swift
    let groundPhysics = PhysicsBodyComponent(
        massProperties: .init(mass: 1000),
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.7,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect bounce
        ),
        mode: .static
    )
    entity.components.set(groundPhysics)

Wall physics:

    // From WalledBoxManager.swift
    let wallPhysics = PhysicsBodyComponent(
        massProperties: .init(mass: 1000),
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.7,
            dynamicFriction: 0.6,
            restitution: 0.85 // Slightly less than ground
        ),
        mode: .static
    )
    wall.components.set(wallPhysics)

Collision detection:

    // From GroundPlaneView.swift
    content.subscribe(to: CollisionEvents.Began.self) { event in
        guard physicsMode == .realityKit else { return }
        let currentTime = Date().timeIntervalSince1970
        guard currentTime - lastCollisionTime > 0.1 else { return }
        if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
            let normal = event.collision.normal
            // Distinguish between wall and ground collisions
            if abs(normal.y) < 0.3 {
                // Wall bounce
                print("Wall collision detected")
            } else if normal.y > 0.7 {
                // Ground bounce
                print("Ground collision detected")
            }
            lastCollisionTime = currentTime
        }
    }

Issues observed:
- Energy loss: despite restitution = 1.0 (perfect elasticity), the ball loses roughly 20-30% of its energy per bounce.
- Wall sliding: the ball tends to slide down walls instead of bouncing naturally.
- No damping control: comments mention damping values, but they don't seem to affect the physics. Changing the mass also doesn't do much.

Custom physics system (workaround): I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:

    // From BouncingBallComponent.swift
    struct BouncingBallComponent: Component {
        var velocity: SIMD3<Float> = .zero
        var angularVelocity: SIMD3<Float> = .zero
        var bounceState: BounceState = .idle
        var lastBounceTime: TimeInterval = 0
        var bounceCount: Int = 0
        var peakHeight: Float = 0
        var totalFallDistance: Float = 0

        enum BounceState {
            case idle
            case falling
            case justBounced
            case bouncing
            case settled
        }
    }

My questions:
- Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)?
- Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior?
- Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit?
Even in the last video here, https://stepinto.vision/example-code/collisions-physics-physics-material/, the bounce of the ball is very unnatural — it stops after 3-4 bounces. I apply custom impulses, but with walls around the ball it's almost impossible to make it look natural. I also saw this post, https://developer.apple.com/forums/thread/759422, and the ball is still not bouncing naturally.
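Not part of the original post: one hedged suggestion worth trying in this setup — newer RealityKit releases expose linearDamping and angularDamping on PhysicsBodyComponent (check availability in your SDK), and non-zero damping removes velocity every step regardless of restitution. A minimal sketch reusing the ball entity from the snippet above, with a lighter, more ball-like mass:

    var ballPhysics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.05), // lighter mass for a 5 cm sphere
        material: .generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 0.9
        ),
        mode: .dynamic
    )
    // Damping bleeds velocity out of the body every step, independently of the
    // contact restitution, so zero it out while tuning bounce height.
    ballPhysics.linearDamping = 0
    ballPhysics.angularDamping = 0
    ballEntity.components.set(ballPhysics)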
8 replies · 0 boosts · 499 views · Jul ’25
GestureComponent does not support DragGesture
The following code using the new GestureComponent demonstrates inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code; I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors — it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

    RealityView { content in
        let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
        testEntity.position = SIMD3<Float>(0, 0, -1)
        testEntity.components.set(InputTargetComponent())
        testEntity.components.set(CollisionComponent(
            shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
        ))

        let testGesture = TapGesture()
            .onEnded { value in
                print("Tapped")
            }
        testEntity.components.set(GestureComponent(testGesture))

        let dragGesture = DragGesture()
            .onEnded { value in
                print("Dragged")
            }
        testEntity.components.set(GestureComponent(dragGesture))

        content.add(testEntity)
    }
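Not part of the original post: until GestureComponent recognizes DragGesture, a commonly used fallback is to attach the drag to the RealityView itself and target whichever entity was hit. A minimal sketch under that assumption:

    import SwiftUI
    import RealityKit

    struct DragWorkaroundView: View {
        var body: some View {
            RealityView { content in
                // Same input/collision setup the built-in gestures need.
                let box = ModelEntity(mesh: .generateBox(size: .init(x: 0.2, y: 0.2, z: 0.2)))
                box.position = [0, 0, -1]
                box.components.set(InputTargetComponent())
                box.components.set(CollisionComponent(
                    shapes: [.generateBox(size: .init(x: 0.2, y: 0.2, z: 0.2))]
                ))
                content.add(box)
            }
            .gesture(
                DragGesture()
                    .targetedToAnyEntity()
                    .onChanged { value in
                        // Convert the drag location into scene space and follow it.
                        value.entity.position = value.convert(value.location3D,
                                                              from: .local, to: .scene)
                    }
                    .onEnded { _ in print("Dragged") }
            )
        }
    }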
3 replies · 1 boost · 360 views · Jul ’25
Subject: Handling Z-Up Blender USDZ Models in RealityKit (visionOS) for Transform Updates
Hello everyone, I'm working on a visionOS application using RealityKit and am encountering a common coordinate-system challenge when integrating 3D models created in Blender. My goal is to display models created in Blender and dynamically update their Transform (position, rotation, scale) in RealityKit. The issue is that Blender's default coordinate system is Z-up, and when exporting to USD/USDZ I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention, so I'm essentially exporting models whose "up" direction is the Z-axis. When I load these Z-up exports into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context. My questions are:
- What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, and scale components) from a Z-up context, how should I apply it to a RealityKit Entity so it appears and behaves correctly in a Y-up world?
- Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up conversion? Or is manually applying a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach?
Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated! Thank you.
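Not from the original post, but a sketch of the conjugation approach the question hints at: treat the -90° rotation about X as a change of basis and apply it on both sides of the Z-up matrix, so translation, rotation, and scale are all re-expressed in Y-up axes. The helper names are illustrative, not an Apple API:

    import RealityKit
    import simd

    // Change of basis from a Blender-style Z-up frame to RealityKit's Y-up frame:
    // rotating -90° about X maps +Z (up) to +Y and +Y (forward) to -Z.
    let zUpToYUp = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])

    // Re-expresses a transform authored in Z-up coordinates in Y-up coordinates
    // by conjugating with the change-of-basis rotation: M_yUp = C * M_zUp * C⁻¹.
    func convertedToYUp(_ zUpTransform: Transform) -> Transform {
        let c = simd_float4x4(zUpToYUp)
        return Transform(matrix: c * zUpTransform.matrix * c.inverse)
    }

    // Alternatively, parent the imported model under a fixed "adapter" entity and
    // keep authoring its local transforms in the original Z-up terms.
    func makeZUpAdapter() -> Entity {
        let adapter = Entity()
        adapter.orientation = zUpToYUp
        return adapter
    }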
1 reply · 1 boost · 298 views · Jul ’25
How to Configure angularLimitInYZ for PhysicsSphericalJoint in RealityKit (Pendulum/Swing Behavior)
Hello RealityKit developers, I'm currently working on physics simulations in my visionOS app and am trying to adapt the concepts from the official sample "Simulating physics joints in your RealityKit app". In the sample, a sphere is connected to the ceiling using a PhysicsRevoluteJoint to create a hinge-like simulation. I've successfully modified this setup to use a PhysicsSphericalJoint instead. The basic replacement works as expected: pin1 (attached to the sphere) rotates freely around pin0 (attached to the ceiling), much like a ball-and-socket joint should, removing all translational degrees of freedom. My challenge lies with the PhysicsSphericalJoint's angularLimitInYZ property. The documentation mentions that this property limits rotation around the Y and Z axes, defining an "elliptical cone shape around the x-axis of pin0." However, I'm struggling to understand how to specify these values to achieve a desired rotational limit. If I have a sphere that can currently rotate 360 degrees around pin0 (like a free-spinning ball on a string), how would I use angularLimitInYZ to restrict its rotation to a certain height or angular range, preventing it from completing a full circle? Specifically, I'm trying to achieve swing-like behavior, where the sphere oscillates back and forth but cannot rotate completely overhead or underfoot. What values or approach should I use for the angularLimitInYZ tuple to define such a restricted, pendulum-like motion? Any insights, code examples, or explanations on how to properly configure angularLimitInYZ for this kind of behavior would be incredibly helpful! The following code is modified from the sample.

    extension MainView {
        func addPinsTo(ballEntity: Entity, attachmentEntity: Entity) throws {
            let hingeOrientation = simd_quatf(from: [1, 0, 0], to: [0, 0, 1])
            let attachmentPin = attachmentEntity.pins.set(
                named: "attachment_hinge",
                position: .zero,
                orientation: hingeOrientation
            )
            let relativeJointLocation = attachmentEntity.position(
                relativeTo: ballEntity
            )
            let ballPin = ballEntity.pins.set(
                named: "ball_hinge",
                position: relativeJointLocation,
                orientation: hingeOrientation
            )

            // Create a PhysicsSphericalJoint between the two pins.
            let revoluteJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin)
            try revoluteJoint.addToSimulation()
        }
    }

The screenshot below shows the behavior after switching to PhysicsSphericalJoint. Thank you in advance for your assistance.
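Not part of the original post, and hedged: I have not verified the exact spelling of the initializer, so treat the angularLimitInYZ argument below as an assumption to check against the current PhysicsSphericalJoint documentation (it may instead be a settable property). The idea is that the two values are the half-angles, in radians, of the elliptical cone around pin0's x-axis — the first limiting sway in pin0's local Y, the second in its local Z:

    // Assumption: angularLimitInYZ can be supplied at joint creation; the values
    // are cone half-angles in radians around pin0's x-axis.
    let swingJoint = PhysicsSphericalJoint(
        pin0: attachmentPin,
        pin1: ballPin,
        angularLimitInYZ: (.pi / 4, .pi / 4) // roughly ±45° of swing in each direction
    )
    try swingJoint.addToSimulation()

Note that in the sample the pins' x-axes point along world Z (simd_quatf(from: [1, 0, 0], to: [0, 0, 1])), so the cone would open sideways; for a pendulum that swings around the hanging direction, orienting pin0's x-axis downward (for example simd_quatf(from: [1, 0, 0], to: [0, -1, 0])) should center the cone on gravity.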
1 reply · 0 boosts · 242 views · Jul ’25