RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit subtopic

RealityKit generates an excessive amount of logging
During regular use, RealityKit generates an excessive amount of internal logging that is not actionable by third-party developers. When developing an iOS RealityKit/ARKit app, this makes the Xcode console challenging to use for regular work (FB19173812). See screenshots below. Xcode does have an option for filtering out logging from specific SDKs, but enabling it to suppress the logging of RealityKit and related SDKs like PHASE is something developers have to do dozens of times each day. After a year of developing a RealityKit app, this process becomes frustrating. If SDKs like Foundation, UIKit, and SwiftUI generated as much logging as RealityKit and related SDKs, Xcode's console would be unusable. Is there any way to permanently disable the logging of RealityKit and PHASE? Thank you for any help you can provide.
Replies: 1 · Boosts: 0 · Views: 343 · Activity: Jul ’25
moveCharacter reports collision with itself
I'm running into an issue with collisions between two entities that have a character controller component. In the collision handler for moveCharacter, the collision has both hitEntity and characterEntity set to the same object: the entity that was moved with moveCharacter(). The example below configures three objects:

- a stationary sphere with a character controller
- a falling sphere with a character controller
- a stationary cube with a collision component

If the falling sphere hits the stationary sphere, the collision handler reports both hitEntity and characterEntity as the falling sphere. I would expect hitEntity to be the stationary sphere and characterEntity to be the falling sphere. If the falling sphere hits the cube with a collision component, the hit entity is the cube and the characterEntity is the falling sphere, as expected. Is this the expected behavior? The entities act as expected visually; however, if I want the spheres to react differently depending on what they collided with, I am not getting the expected results. For example: if a player-controlled character collides with an NPC, exchange a resource with the NPC; if the player collides with an enemy, take damage.

import SwiftUI
import RealityKit

struct ContentView: View {
    @State var root: Entity = Entity()
    @State var stationary: Entity = createCharacter(named: "stationary", radius: 0.05, color: .blue)
    @State var falling: Entity = createCharacter(named: "falling", radius: 0.05, color: .red)
    @State var collisionCube: Entity = createCollisionCube(named: "cube", size: 0.1, color: .green)

    // relative to root
    @State var fallFrom: SIMD3<Float> = [0, 0.5, 0]

    var body: some View {
        RealityView { content in
            content.add(root)
            root.position = [0, -0.5, 0.0]

            root.addChild(stationary)
            stationary.position = [0, 0.05, 0]

            root.addChild(falling)
            falling.position = fallFrom

            root.addChild(collisionCube)
            collisionCube.position = [0.2, 0, 0]
            collisionCube.components.set(InputTargetComponent())
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { tap in
            let tapPosition = tap.entity.position(relativeTo: root)
            falling.components.remove(FallComponent.self)
            falling.teleportCharacter(to: tapPosition + fallFrom, relativeTo: root)
        })
        .toolbar {
            ToolbarItemGroup(placement: .bottomOrnament) {
                HStack {
                    Button("Drop") {
                        falling.components.set(FallComponent(speed: 0.4))
                    }
                    Button("Reset") {
                        falling.components.remove(FallComponent.self)
                        falling.teleportCharacter(to: fallFrom, relativeTo: root)
                    }
                }
            }
        }
    }
}

@MainActor
func createCharacter(named name: String, radius: Float, color: UIColor) -> Entity {
    let character = ModelEntity(mesh: .generateSphere(radius: radius),
                                materials: [SimpleMaterial(color: color, isMetallic: false)])
    character.name = name
    character.components.set(CharacterControllerComponent(radius: radius, height: radius))
    return character
}

@MainActor
func createCollisionCube(named name: String, size: Float, color: UIColor) -> Entity {
    let cube = ModelEntity(mesh: .generateBox(size: size),
                           materials: [SimpleMaterial(color: color, isMetallic: false)])
    cube.name = name
    cube.generateCollisionShapes(recursive: true)
    return cube
}

struct FallComponent: Component {
    let speed: Float
}

struct FallSystem: System {
    static let predicate: QueryPredicate<Entity> = .has(FallComponent.self) && .has(CharacterControllerComponent.self)
    static let query: EntityQuery = .init(where: predicate)
    let down: SIMD3<Float> = [0, -1, 0]

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        let deltaTime = Float(context.deltaTime)
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            let speed = entity.components[FallComponent.self]?.speed ?? 0.5
            entity.moveCharacter(by: down * speed * deltaTime, deltaTime: deltaTime, relativeTo: nil) { collision in
                if collision.hitEntity == collision.characterEntity {
                    print("hit entity has collided with itself")
                }
                print("\(collision.characterEntity.name) collided with \(collision.hitEntity.name)")
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
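A minimal sketch of one way to structure the game logic described above while guarding against the self-collision case. The Role enum, RoleComponent, and the move(_:by:deltaTime:) helper are hypothetical names introduced for illustration; this is not a confirmed fix for the reported behavior, just a pattern for branching on what was hit.

import RealityKit

enum Role {
    case player, npc, enemy
}

struct RoleComponent: Component {
    var role: Role
    // Register once at app startup if needed: RoleComponent.registerComponent()
}

/// Moves a character-controller entity and branches on the role of whatever it hit.
func move(_ entity: Entity, by delta: SIMD3<Float>, deltaTime: Float) {
    entity.moveCharacter(by: delta, deltaTime: deltaTime, relativeTo: nil) { collision in
        // Skip the self-collision case described above.
        guard collision.hitEntity !== collision.characterEntity else { return }

        // Hypothetical game logic keyed off the hit entity's role.
        switch collision.hitEntity.components[RoleComponent.self]?.role {
        case .npc:
            print("\(collision.characterEntity.name) trades with \(collision.hitEntity.name)")
        case .enemy:
            print("\(collision.characterEntity.name) takes damage from \(collision.hitEntity.name)")
        default:
            break
        }
    }
}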
Replies: 1 · Boosts: 0 · Views: 163 · Activity: Aug ’25
RealityKit - How to change the camera target in response to a touch event?
Hello, I’m porting my UIKit/SceneKit app to SwiftUI/RealityKit, and I’m wondering how to change the camera target programmatically. I created a simple scene in Reality Composer Pro with two spheres. My goal is straightforward: when the user taps a sphere, the camera should look at it as its main target. Following Apple’s videos, I implemented the .gesture modifier, and it prints the tapped sphere correctly, but updating my targetEntity state doesn’t change anything, so the camera won’t update its target. Is there a way to access the scene content at that level? Or what else should I do? Here’s my current code implementation: Thanks!
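A minimal sketch of one possible approach, assuming an iOS RealityView whose default virtual camera respects a PerspectiveCamera entity in the scene; "Scene" and the RealityKitContent bundle are placeholders for the actual Reality Composer Pro package and scene name. Keeping a reference to the camera entity and re-aiming it with look(at:from:relativeTo:) avoids having to rebuild the scene when the target changes.

import SwiftUI
import RealityKit
import RealityKitContent

struct CameraTargetView: View {
    // Keep the camera entity around so the gesture can re-aim it later.
    @State private var camera = PerspectiveCamera()

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
            camera.position = [0, 0.5, 2]
            content.add(camera)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Point the existing camera at the tapped sphere instead of
                    // mutating state the RealityView make closure never re-reads.
                    let target = value.entity.position(relativeTo: nil)
                    camera.look(at: target,
                                from: camera.position(relativeTo: nil),
                                relativeTo: nil)
                }
        )
    }
}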
Replies: 1 · Boosts: 0 · Views: 242 · Activity: Sep ’25
SceneKit is deprecated, but RealityKit seems aimed at spatial computing: what should I use?
Hello, I am new to the Apple ecosystem and to Apple development, though I have some coding experience with C#. I'd like to develop my game for iPhone 16 and up (due to the ability to run AI models), but I'm having a hard time figuring out what to use. There are a lot of resources for SceneKit, but its documentation page says it is deprecated. So I looked at the RealityKit docs and tutorials, and they mostly explain how to develop for visionOS, which leaves me confused: I can't find tutorials that show how to develop a game for iOS with RealityKit without focusing on visionOS. I just want to develop for iPhone 16 and up, but I can't find resources aimed at that.
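For what it's worth, RealityKit itself is not limited to visionOS. Below is a minimal sketch of RealityKit content in a plain iOS SwiftUI app with no AR session, assuming an iOS 18+ deployment target (where RealityView is available on iOS) and that the default virtual camera uses a PerspectiveCamera entity placed in the scene.

import SwiftUI
import RealityKit

struct GamePrototypeView: View {
    var body: some View {
        RealityView { content in
            // A simple box as stand-in game content.
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            content.add(box)

            // A virtual camera looking at the box.
            let camera = PerspectiveCamera()
            camera.look(at: .zero, from: [0, 0.3, 1], relativeTo: nil)
            content.add(camera)
        }
    }
}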
Replies: 1 · Boosts: 0 · Views: 210 · Activity: Oct ’25
ARView ignores multi-touch events
Hi, how do I enable multi-touch on ARView? The touch functions (touchesBegan, touchesMoved, ...) seem to only handle one touch at a time. In order to handle multiple simultaneous touches with ARView, I have to either:

- use SwiftUI .simultaneousGesture on top of an ARView representable, or
- position a UIView on top of the ARView to capture touches and do hit testing by passing it a reference to the ARView.

Expected behavior: ARView should capture all touches via touchesBegan/Moved/Ended/Cancelled. Here is what I tried, on iOS 26.1 and macOS 26.1:

ARView Multitouch

The setup below is a minimal ARView presented by SwiftUI, with touch events handled inside the ARView. Multi-touch doesn't work with this setup. Note that multi-touch wouldn't work either if the ARView were presented with a UIViewController instead of SwiftUI.

import RealityKit
import SwiftUI

struct ARViewMultiTouchView: View {
    var body: some View {
        ZStack {
            ARViewMultiTouchRepresentable()
                .ignoresSafeArea()
        }
    }
}

#Preview {
    ARViewMultiTouchView()
}

// MARK: Representable ARView

struct ARViewMultiTouchRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARViewMultiTouch(frame: .zero)

        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)

        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

// MARK: ARView

class ARViewMultiTouch: ARView {
    required init(frame: CGRect) {
        super.init(frame: frame)

        /// Enable multi-touch
        isMultipleTouchEnabled = true

        cameraMode = .nonAR
        automaticallyConfigureSession = false
        environment.background = .color(.gray)

        /// Disable gesture recognizers so they don't conflict with touch events
        /// (but it doesn't fix the issue)
        gestureRecognizers?.forEach { $0.isEnabled = false }
    }

    required dynamic init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            /// # Problem
            /// This should print for every new touch, up to 5 simultaneously on an iPhone (multi-touch),
            /// but it only fires for one touch at a time (single-touch).
            print("Touch began at: \(touch.location(in: self))")
        }
    }
}

Multitouch with an Overlay

This setup works, but it doesn't seem right. There must be a way to make ARView handle multi-touch directly, right?
import SwiftUI
import RealityKit

struct MultiTouchOverlayView: View {
    var body: some View {
        ZStack {
            MultiTouchOverlayRepresentable()
                .ignoresSafeArea()
            Text("Multi touch with overlay view")
                .font(.system(size: 24, weight: .medium))
                .foregroundStyle(.white)
                .offset(CGSize(width: 0, height: -150))
        }
    }
}

#Preview {
    MultiTouchOverlayView()
}

// MARK: Representable Container

struct MultiTouchOverlayRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        /// The view that SwiftUI will present
        let container = UIView()

        /// ARView
        let arView = ARView(frame: container.bounds)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        arView.cameraMode = .nonAR
        arView.automaticallyConfigureSession = false
        arView.environment.background = .color(.gray)

        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)

        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)

        /// The view that will capture touches
        let touchOverlay = TouchOverlayView(frame: container.bounds)
        touchOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        touchOverlay.backgroundColor = .clear

        /// Pass an arView reference to the overlay for hit testing
        touchOverlay.arView = arView

        /// Add views to the container.
        /// ARView goes in first, at the bottom.
        container.addSubview(arView)
        /// TouchOverlay goes in last, on top.
        container.addSubview(touchOverlay)

        return container
    }

    func updateUIView(_ uiView: UIView, context: Context) { }
}

// MARK: Touch Overlay View

/// A UIView to handle multi-touch on top of ARView
class TouchOverlayView: UIView {
    weak var arView: ARView?

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Began --- (New: \(touches.count), Total: \(totalTouches))")

        for touch in touches {
            let location = touch.location(in: self)

            /// Hit testing.
            /// ARView and the touch view must be the same size.
            if let arView = arView {
                let entity = arView.entity(at: location)
                if let entity = entity {
                    print("Touched entity: \(entity.name)")
                } else {
                    print("Touched: none")
                }
            }
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Cancelled --- (Cancelled: \(touches.count), Total: \(totalTouches))")
    }
}
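One untested alternative sketch: UIKit also delivers touches to gesture recognizers attached to a view, so a minimal UIGestureRecognizer subclass that only relays raw touches might receive every touch without an overlay view. MultiTouchRelayRecognizer and attachMultiTouchRelay(to:) are hypothetical names, and whether this sidesteps ARView's single-touch behavior is an assumption, not confirmed behavior.

import UIKit
import RealityKit

/// A recognizer that never recognizes anything; it only forwards raw touches.
final class MultiTouchRelayRecognizer: UIGestureRecognizer {
    var touchesBeganHandler: ((Set<UITouch>, UIView?) -> Void)?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        touchesBeganHandler?(touches, view)
        // Stay in .possible so the view and other recognizers keep working.
    }
}

/// Usage sketch, e.g. called from makeUIView(context:).
func attachMultiTouchRelay(to arView: ARView) {
    let relay = MultiTouchRelayRecognizer()
    relay.cancelsTouchesInView = false
    relay.touchesBeganHandler = { touches, view in
        for touch in touches {
            print("Relay touch began at: \(touch.location(in: view))")
        }
    }
    arView.addGestureRecognizer(relay)
}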
Replies: 1 · Boosts: 0 · Views: 311 · Activity: Nov ’25
TapGesture stops responding on ViewAttachmentComponent after disabling or removing and re-adding the Entity (visionOS 26)
Issue: When an Entity with a ViewAttachmentComponent is disabled using isEnabled = false, or removed using removeFromParent(), and then enabled or added back again, the attached SwiftUI view is rendered correctly, but tap interactions stop working. Specifically: Button actions inside the attached view do not fire, and TapGesture closures on child views do not respond.

Expected Behavior: Tap interactions inside the attached view should continue to work after the Entity is re-enabled or re-added.

Actual Behavior: After being disabled or removed once, all tap interactions stop responding.

Comparison: When displaying the same SwiftUI view using RealityViewAttachments, this issue does not occur. Removing and re-displaying the attachment still allows taps to work correctly.

Reproduction: The attached sample code reproduces the issue: a RealityView with an Entity that has a ViewAttachmentComponent; the attached SwiftUI view contains a Toggle; the toggle updates isEnabled on the Entity; after toggling off and on, tap interactions stop responding.

Environment: Xcode 26, visionOS 26

Question: Is this expected behavior of ViewAttachmentComponent, or a bug? Is there a recommended way to temporarily hide or disable an Entity with a ViewAttachmentComponent without breaking tap interactions?

import SwiftUI
import RealityKit

struct GestureTestView: View {
    @State var sampleEnabled = true
    @State var sampleEntity: Entity?

    var body: some View {
        RealityView { contents, attachments in
            // After deleting and re-displaying it, taps no longer respond.
            let sample = Entity(components: ViewAttachmentComponent(rootView: SampleView()))
            // Executes successfully:
            // let sample = attachments.entity(for: "SampleView")!
            contents.add(sample)
            sample.position = [0, 1.2, -1]
            sampleEntity = sample

            let toggleButton = Entity(components: ViewAttachmentComponent(rootView: ToggleButtonView(isOn: $sampleEnabled)))
            contents.add(toggleButton)
            toggleButton.position = [0, 1, -1]
        } update: { _, _ in
            // run update closure
            print(sampleEnabled)
            // update sample entity enabled state
            sampleEntity?.isEnabled = sampleEnabled
        } attachments: {
            Attachment(id: "SampleView") {
                SampleView()
            }
        }
    }
}

struct ToggleButtonView: View {
    @Binding var isOn: Bool

    var body: some View {
        VStack {
            Toggle(isOn: $isOn) {
                Text("Toggle")
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

struct SampleView: View {
    var body: some View {
        VStack {
            Button {
                print("Hello, World!")
            } label: {
                Text("Hello, World!")
                    .padding()
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

#Preview(immersionStyle: .mixed) {
    GestureTestView()
}
Replies: 1 · Boosts: 0 · Views: 146 · Activity: 3d
visionOS: VideoMaterial on a 3D Mesh
I'm trying to get a video material to work on an imported 3D asset (a USDC file). There is an example of this in a WWDC video from Apple: you can see it running on the flag of the airplane at 10:34 in the video. But there is no sample code for it, and I can't find any other examples on the internet. Does anybody know how to do this? https://developer.apple.com/documentation/realitykit/videomaterial
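A minimal sketch of the usual pattern, assuming the USDC is bundled as "Airplane" and contains a mesh prim named "Flag"; both names are placeholders, as is the flag.mp4 resource. The idea is to find the ModelEntity inside the imported hierarchy and replace its materials with a VideoMaterial driven by an AVPlayer (the mesh's existing UVs determine how the video maps onto it).

import AVFoundation
import RealityKit
import SwiftUI

struct VideoOnMeshView: View {
    var body: some View {
        RealityView { content in
            guard let asset = try? await Entity(named: "Airplane"),   // placeholder asset name
                  let url = Bundle.main.url(forResource: "flag", withExtension: "mp4") else { return }

            let player = AVPlayer(url: url)
            let videoMaterial = VideoMaterial(avPlayer: player)

            // "Flag" is a placeholder; use the actual prim name from the USDC hierarchy.
            if let flag = asset.findEntity(named: "Flag") as? ModelEntity {
                flag.model?.materials = [videoMaterial]
            }

            content.add(asset)
            player.play()
        }
    }
}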
Replies: 2 · Boosts: 0 · Views: 953 · Activity: Dec ’25
Walking an entity around an immersive space in visionOS like the window drag bar
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance. It seems Apple quickly transitions from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull the windows around again like a superhero. Am I missing something obvious about how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking head movement and, if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
Replies: 2 · Boosts: 0 · Views: 1.1k · Activity: Dec ’25
Subdivision shows in RealityComposerPro but not when loaded in Simulator
Hello, I am trying to use the subdivision mesh rendering option. I can see it working in Reality Composer Pro, but not when loading the asset and displaying it in the Simulator. I'm using this code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}

Any ideas why?
Replies: 2 · Boosts: 1 · Views: 570 · Activity: Feb ’25
Morphing Animation Not Playing in an Xcode VisionOS App
I have a 3D model with a morphing animation that works correctly in Blender. I exported this model as a USDZ file and tried to display it in an Xcode-developed visionOS app, but the morphing animation does not play.

What I Have Tried:
- The morphing animation works correctly in Blender.
- After exporting to USDZ, the morphing animation does not play in the Xcode app.
- Linear motion animations (such as object movement) work fine.

Behavior in Reality Converter:
- GLB files do not display.
- USDZ files load, but morphing animations do not play.

What I Want to Know:
- Is there a way to play morphing animations in an Xcode-developed app?
- Does RealityKit support morphing animations?
- If RealityKit does not support morphing animations, what alternative methods can be used to play them?

I am looking for a way to use the existing animations without recreating them.

Additional Information:
- I have both the Blender file (where animations work) and the USDZ file (where animations do not play).
- I am developing a visionOS app using Xcode.

Any advice or solutions would be greatly appreciated. Thank you in advance!
Replies: 2 · Boosts: 0 · Views: 509 · Activity: Feb ’25
ModelEntity(named:in:) fails to load USD file from RealityKitContent bundle with misleading error?
My experience has been that ModelEntity(named:in:) can be used to load a USD file with a simple structure consisting of entities and model entities, and, critically, it will flatten the entity hierarchy down to a single ModelEntity, presumably reducing the number of draw calls. However, can anyone verify that the following is true? If ModelEntity(named:in:) is used to load a USD file from a RealityKit content bundle, it may fail when the USD file contains more complex data, such as shader graph material definitions, or perhaps for some other reason; I am not sure. AND the error that ModelEntity(named:in:) throws in this case is: Cannot load RealityKitContent entity: Failed to find resource with name "<name>" in bundle which would literally suggest that the file does not exist, instead of what I assume the error actually is, namely "the file exists but its entity hierarchy could not be flattened to a single ModelEntity". Is that an accurate description of the known behavior of ModelEntity(named:in:)? I understand that I could use Entity(named:in:) instead, without the flattening feature. My question is really more about the seemingly misleading error message. Thank you for any clarification you can provide.
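Not an answer to why the message is worded that way, but here is a small sketch of the fallback mentioned at the end: try the flattening ModelEntity initializer first, then fall back to Entity(named:in:) when it throws. It assumes the standard RealityKitContent package from the visionOS template; loadPreferFlattened is a hypothetical helper name.

import RealityKit
import RealityKitContent

/// Attempt the flattened load first; fall back to the full hierarchy if it fails
/// (for example, when the USD uses shader-graph materials).
func loadPreferFlattened(named name: String) async -> Entity? {
    if let flattened = try? await ModelEntity(named: name, in: realityKitContentBundle) {
        return flattened
    }
    return try? await Entity(named: name, in: realityKitContentBundle)
}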
Replies: 2 · Boosts: 0 · Views: 179 · Activity: May ’25
Can spatial scene function be used outside of the Photo App?
I'm new here, so I don't know which topic this feature belongs to... sorry about that! I watched the WWDC stream and I am really interested in this feature, and I'm wondering whether it could be used in my apps. I looked up the documentation, but it seems to only support visionOS (I'm not sure about that, but the demo I saw was based on visionOS).
Replies: 2 · Boosts: 0 · Views: 215 · Activity: Jun ’25
How to detect which entity was tapped?
Hi, I'm rewriting my game from SceneKit to RealityKit, and I'm having trouble implementing the following scenario: I tap the iPhone screen to select an Entity that I want to drag. If an Entity was tapped, it should then be possible to drag it left, right, etc.

My SceneKit solution:

func CGPointToSCNVector3(_ view: SCNView, depth: Float, point: CGPoint) -> SCNVector3 {
    let projectedOrigin = view.projectPoint(SCNVector3Make(0, 0, Float(depth)))
    let locationWithz = SCNVector3Make(Float(point.x), Float(point.y), Float(projectedOrigin.z))
    return view.unprojectPoint(locationWithz)
}

and then I was calling:

SCNView().hitTest(location, options: [SCNHitTestOption.firstFoundOnly: true])

The code was called inside a UIPanGestureRecognizer in my UIViewController. Could I reuse that code, or should I go with the SwiftUI approach, something like this?

var body: some View {
    RealityView {
        ....
    }
    .gesture(TapGesture().onEnded { })
}

I already have this code:

@State private var location: CGPoint?

.onTapGesture { location in
    self.location = location
}

I'm trying to identify the entity that was tapped within the RealityView like this:

RealityView { content in
    let box: ModelEntity = createBox() // for now there is only one box, however there will be many boxes
    content.add(box)
    let anchor = AnchorEntity(world: [0, 0, 0])
    content.add(anchor)

    _ = content.subscribe(to: SceneEvents.Update.self) { event in
        // TODO: find the tapped entity, so that it can be dragged inside of the DragGesture()
    }
}

Any help would be appreciated. I also noticed that if I create a TapGesture like this:

TapGesture(count: 1)
    .targetedToAnyEntity()

and add it to my view using .gesture(), then it is not triggered.
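A minimal sketch of the SwiftUI route, assuming iOS 18+ where RealityView and targeted gestures are available on iPhone. The key detail is that targeted gestures only fire for entities that have both a CollisionComponent and an InputTargetComponent, which may be why the TapGesture above never triggers.

import SwiftUI
import RealityKit

struct DraggableBoxView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            // Without these two components, targeted gestures are never delivered.
            box.generateCollisionShapes(recursive: true)
            box.components.set(InputTargetComponent())
            content.add(box)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Convert the 3D gesture location into the entity's parent space.
                    if let parent = value.entity.parent {
                        value.entity.position = value.convert(value.location3D,
                                                              from: .local,
                                                              to: parent)
                    }
                }
        )
    }
}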
Replies: 2 · Boosts: 0 · Views: 187 · Activity: Aug ’25
RealityKit Blend Modes
I have two planes with textures on them. I want these planes to intersect [ –|– ], and I want the blend mode to be additive. Currently I get z-fighting on the planes, and I can't see how to set blend modes. I've done this before in Unity and Godot in a fairly straightforward manner. How do I accomplish this with RealityKit, preferably using code only (my scene is quite dynamic)? Do I need to do it manually with a shader? How can I stop the z-fighting?
Replies: 2 · Boosts: 0 · Views: 758 · Activity: Aug ’25
RealityKit migration
Hey there, I’m currently planning to use RealityKit in a new multiplatform app I’m building. Unfortunately, I noticed that watchOS is not supported by RealityKit, while SceneKit is being deprecated. However, I’d like to maintain the same codebase across platforms. What are my options?
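A hedged sketch of one common pattern: keep game and model code renderer-agnostic and gate the RealityKit layer behind conditional compilation, since RealityKit is unavailable on watchOS. It assumes deployment targets where RealityView exists (iOS 18, macOS 15, visionOS); the type names are placeholders.

import SwiftUI

// Shared, renderer-agnostic state every platform can use.
struct GameState {
    var score = 0
}

#if canImport(RealityKit)
import RealityKit

struct GameRealityView: View {
    @Binding var state: GameState

    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                  materials: [SimpleMaterial(color: .purple, isMetallic: false)])
            content.add(box)
        }
    }
}
#endif

struct RootView: View {
    @State private var state = GameState()

    var body: some View {
        #if canImport(RealityKit)
        GameRealityView(state: $state)      // iOS, macOS, visionOS
        #else
        Text("Score: \(state.score)")       // watchOS fallback with no 3D renderer
        #endif
    }
}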
Replies: 2 · Boosts: 0 · Views: 429 · Activity: Oct ’25
RealityView postProcess effect depth texture
Hello, a question re: iOS RealityView postProcess. I've got a working postProcess kernel and I'd like to add some depth-based effects to it. Theoretically I should be able to just do:

encoder.setTexture(context.sourceDepthTexture, index: 1)

and then in the kernel:

texture2d<float, access::read> depthIn [[texture(1)]]
...
outTexture.write(depthIn.read(gid), gid);

And I consistently see all black rendered to the view. The postProcess shader works, so that's not the issue; it just seems to not be receiving actual depth information. (If I set a breakpoint at the encoder setTexture step, I can preview the color texture of the scene, but the context's depthTexture looks all NaN / blank.) I've looked at all the WWDC samples, but they use ARView for all the depth sample code, which has a different set of configuration options than RealityView. So far I haven't seen anywhere to explicitly tell RealityView to include the depth information, so I'm not sure if I'm missing something there. It appears that there is indeed a depth texture being passed, but it looks blank. Is there a working example somewhere that we can reference?
Replies: 2 · Boosts: 0 · Views: 556 · Activity: Nov ’25