RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit subtopic

Post · Replies · Boosts · Views · Activity

ARKit body tracking (ARBodyAnchor) broken in iOS 18.0 + 18.1
I'm wondering if anyone can suggest a workaround for the broken ARKit body tracking in iOS / iPadOS 18.0 and 18.1. The orientation of the foot bones (and possibly other bones) is incorrect in the ARBodyAnchor returned via the ARView.session.delegate update method. It works correctly in iOS / iPadOS 17.x. The same failure occurs in a SceneKit app via an ARBodyAnchor in ARSCNViewDelegate. You can easily verify the problem using Apple’s own sample app: https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/capturing_body_motion_in_3d Any help would be greatly appreciated.
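For reference, a minimal delegate sketch (assuming a standard ARBodyTrackingConfiguration session) that logs the left-foot joint rotation from each ARBodyAnchor update, so the 17.x and 18.x values can be compared side by side:

import ARKit
import simd

// Logs the left-foot joint rotation from every ARBodyAnchor update so the
// orientation can be compared between iOS 17.x and 18.x builds.
class BodyTrackingLogger: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            guard let foot = bodyAnchor.skeleton.modelTransform(for: .leftFoot) else { continue }
            let rotation = simd_quatf(foot) // joint orientation relative to the body anchor
            print("leftFoot rotation:", rotation)
        }
    }
}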
Replies: 2 · Boosts: 1 · Views: 535 · Activity: Nov ’24
PhotogrammetrySession on non-Pro iPhones
Hello, I'm creating an app that uses the PhotogrammetrySession class to build 3D objects from photographs (https://developer.apple.com/documentation/realitykit/creating-3d-objects-from-photographs). I'm wondering why this class works only on Pro iPhones (12 Pro, 13 Pro, 14 Pro, 15 Pro, and 16 Pro) and not on any non-Pro iPhone. My app does not use LiDAR, so that isn't the issue. I thought it could be power-related, but the A18 SoC in the iPhone 16 is more powerful than the A14 Bionic in the iPhone 12 Pro (the iPhone 13 Pro and iPhone 14 both have the A15 Bionic, yet only the former is supported). Did I miss something that could explain this restriction? Is there any plan to make this class usable on every iPhone powerful enough to run it? Thanks in advance.
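Whatever the reason for the device list, the support status can at least be queried at runtime; a minimal sketch:

import RealityKit

// PhotogrammetrySession exposes isSupported, so the reconstruction UI can be
// gated per device instead of hard-coding a list of Pro models.
if PhotogrammetrySession.isSupported {
    // Safe to create a session and reconstruct locally on this device.
} else {
    // Fall back, for example by reconstructing the captured photos on a Mac.
}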
Replies: 0 · Boosts: 1 · Views: 573 · Activity: Nov ’24
RealityKit Crash with Orthographic Camera
In a macOS project with RealityKit and SwiftUI, adding an OrthographicCameraComponent causes crashes in both the Xcode Preview and at runtime.

import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        RealityView { content in
            let camera = Entity()
            var component = OrthographicCameraComponent()
            component.scale = 5
            camera.position = [0, 0, 5]
            camera.components.set(component)
            content.add(camera)

            content.add(ModelEntity(mesh: .generateSphere(radius: 1)))
        }
    }
}

#Preview {
    ContentView()
}

Has anyone faced this issue or know a fix?
Replies: 5 · Boosts: 1 · Views: 660 · Activity: Nov ’24
How to Disable Object Occlusion in a RealityView?
Hi everyone, I’m working on a project using RealityKit and encountering an issue with object occlusion. Specifically, I need to disable the occlusion of real-world objects (e.g., tables, walls) in my RealityView. I want virtual entities to render fully, even if real-world objects would normally block their view. I’ve explored options in ARSession and ARWorldTrackingConfiguration but haven’t found anything that affects occlusion in RealityView. I suspect there might be a setting or approach I’ve missed. Has anyone dealt with a similar scenario or know how to achieve this? Any insights or pointers would be greatly appreciated! Thanks in advance, Nicolas
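For comparison, with an ARView the real-world occlusion comes from the scene-understanding options and can be switched off as sketched below; whether RealityView with a spatial-tracking camera exposes an equivalent knob is exactly the open question here.

import ARKit
import RealityKit

// ARView path: remove mesh-based occlusion and people occlusion.
let arView = ARView(frame: .zero)
arView.environment.sceneUnderstanding.options.remove(.occlusion)

let configuration = ARWorldTrackingConfiguration()
configuration.frameSemantics.remove(.personSegmentationWithDepth)
arView.session.run(configuration)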
Replies: 1 · Boosts: 1 · Views: 487 · Activity: Nov ’24
Collision Shapes not working in Reality Composer
I’m having issues getting Collision Shapes to work in Reality Composer on iPadOS and in Reality Composer Pro via Xcode on macOS. I’ve posted a video recorded through my Vision Pro showing the issue. The project I’m working on is a dice-rolling application. The dice don’t behave correctly when set to Collision Shape = Automatic, which I assume takes the actual silhouette of the shape into account. https://youtu.be/upPtQY4QOAk?si=yyx6rbSSmVkLxBLg They also don’t rest on their faces when they land. Has anyone experienced this type of behavior and found a solution? I’m currently doing this with Reality Composer, but I will most likely also want to get it working properly in Reality Composer Pro. Thx!
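One workaround to try while the Automatic shape misbehaves (a sketch only, assuming the dice are loaded as ModelEntity instances; the helper name is hypothetical): give each die an explicit box collision shape and a dynamic physics body in code.

import RealityKit

// Hypothetical helper: replace the Automatic collision shape with an explicit
// box matching the die's visual bounds, plus a dynamic physics body.
func configureDie(_ die: ModelEntity) {
    let extents = die.visualBounds(relativeTo: nil).extents
    die.components.set(CollisionComponent(shapes: [.generateBox(size: extents)]))
    die.components.set(PhysicsBodyComponent(massProperties: .default,
                                            material: nil,
                                            mode: .dynamic))
}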
Replies: 6 · Boosts: 1 · Views: 733 · Activity: Dec ’24
RealityView session not terminating when leaving view on iOS
Everything works fine, except that when tapping the navigation Back link and returning to the previous view, the AR session inside the RealityView does not terminate. The green camera-indicator dot stays on, the session is still scanning the environment, and if the package has audio in it, the audio keeps playing, albeit panned hard to the right channel. I have no issues terminating QuickLook or an ARSCNView. I have a simple NavigationLink opening the RealityView:

NavigationLink(destination: MyRealityView()) {
    Text("Open AR")
}

struct MyRealityView: View {
    var body: some View {
        RealityView { content in
            // Create horizontal plane anchor for the content
            let anchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5]))
            let scene = await loadEntity(named: "Scene")

            // Add model to anchor
            anchor.addChild(scene!)
            content.add(anchor)

            // View settings
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .onDisappear {
            // print("RealityView is disappearing. Cleanup actions here.")
        }
        .edgesIgnoringSafeArea(.all)
        // Activate onTap from Reality Composer Pro
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            _ = value.entity.applyTapForBehaviors()
        })
    }
}

I have experimented with several ways of trying to close it and can't figure it out. I have tried State variables and custom Back buttons. I also tried working with pause(), but I can't get that to function either. Does anyone else have this issue or know of a solution? What am I missing?
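One avenue worth trying (a sketch only, assuming the iOS 18 SpatialTrackingSession API and not verified against this exact symptom): own the tracking session explicitly so it can be stopped from onDisappear, instead of relying on content.camera = .spatialTracking to tear itself down.

import SwiftUI
import RealityKit

// Sketch (hypothetical view name): run the spatial tracking session yourself and
// stop it when the view disappears, so nothing keeps the camera alive after
// navigating back.
struct MyRealityViewWithExplicitSession: View {
    @State private var trackingSession = SpatialTrackingSession()

    var body: some View {
        RealityView { content in
            _ = await trackingSession.run(SpatialTrackingSession.Configuration(tracking: [.plane]))
            content.camera = .spatialTracking
            // ... add anchors and content here ...
        }
        .onDisappear {
            trackingSession.stop()
        }
    }
}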
Replies: 1 · Boosts: 1 · Views: 524 · Activity: Jan ’25
ARMeshAnchor Data with RealityView
I want to use SwiftUI and RealityView to get AR scene-understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way). However, in previous iOS 18 builds there was this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:), which worked with a SpatialTrackingSession and a custom ARSession together. In the latest iOS and Xcode this function has been removed from the RealityKit framework, although it is still in the documentation.

I also wanted to get ARFaceAnchor data, which I still cannot get without ARSession. The closest I can get is by using:

let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)

But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.

Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session's didUpdate and didAdd callbacks only run for a few frames before getting interrupted. And if I completely remove the SpatialTrackingSession configuration and just run the ARSession, there is still a valid tracked entity for the AnchoringComponent.Target.face component if the ARSession's configuration is an ARWorldTrackingConfiguration with face tracking enabled, and I still get updated facial data each frame. But the ARSession didUpdate and didAdd callbacks stop being called after the first few frames.

Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without the SpatialTrackingSession configuration.

My overarching question is: what is the proper way to access ARMeshAnchors and other ARAnchors created by the system, and track them live, while also using SwiftUI? A GitHub repo with a sample project can be found here: https://github.com/bpate75/RealityViewTesting
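For reference, the plain ARKit path for receiving ARMeshAnchors (no RealityView involved) is sketched below; the unresolved part is keeping a delegate like this alive alongside a RealityView without the session being interrupted.

import ARKit

// Standalone ARKit scene reconstruction: ARMeshAnchors arrive through the
// ARSessionDelegate callbacks while the session runs with mesh reconstruction on.
final class MeshReceiver: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        session.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        let meshAnchors = anchors.compactMap { $0 as? ARMeshAnchor }
        print("Added \(meshAnchors.count) mesh anchors")
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        // Mesh geometry refinements arrive here as the LiDAR scan progresses.
    }
}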
Replies: 1 · Boosts: 1 · Views: 431 · Activity: Feb ’25
Subdivision shows in RealityComposerPro but not when loaded in Simulator
Hello, I am trying to use the subdivision mesh rendering option. I can see it working in Reality Composer Pro, but not when loading the asset and displaying it in the Simulator, using this code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}

Any ideas why?
Replies: 2 · Boosts: 1 · Views: 519 · Activity: Feb ’25
RealityView iOS Navigation
I have a visionOS app that I’m adding iOS support to, and I’d like to keep using RealityView. I know there are the following modifiers to add some navigation:

.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)

But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve that? Thanks, Guillermo
Replies: 0 · Boosts: 1 · Views: 410 · Activity: Feb ’25
Portals do not occlude CollisionComponent and InputTargetComponent
Hello. If you add a ModelEntity to a world inside a portal, the drawing of the model is occluded properly at the portal bounds. However, the invisible shapes of the InputTargetComponent and CollisionComponent are not occluded. They are able to cross the portal, and if you have gestures on your ModelEntity you can trigger them in areas outside the portal bounds. This happens even if the ModelEntity has no PortalCrossingComponent.
Replies: 0 · Boosts: 1 · Views: 49 · Activity: Mar ’25
You cannot debug in simulator
I can't create any breakpoints in Xcode after I upgraded to macOS 15.4.

macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)

My app works fine without breakpoints, but if I create any breakpoint it shows me this: Couldn't find the Objective-C runtime library in loaded images. Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.
Replies: 3 · Boosts: 5 · Views: 364 · Activity: Apr ’25
Feature Request: Support .reality File Export in Reality Composer Pro for Mac
I am an AR developer working on Apple Silicon Macs. Currently, Reality Composer Pro does not allow exporting .reality files, and Reality Composer (classic) is not available for Apple Silicon. This creates a gap in the workflow for ARKit/RealityKit developers who need interactive .reality files for use in Xcode projects. Having the ability to export .reality files directly from Reality Composer Pro on Mac would greatly streamline development and enable a fully native workflow on modern Macs. Alternatively, bringing Reality Composer (classic) to Apple Silicon would also resolve this issue. I have submitted this as a feature request via Feedback Assistant (FB17900386). I encourage others with similar needs to reply or submit feedback as well. Thank you!
Replies: 4 · Boosts: 1 · Views: 155 · Activity: Jul ’25
RealityView content scale factor
Hi, following the recent deprecation of SceneKit, I'm trying to move a couple of my SceneKit projects to RealityKit. One thing I can't seem to find is how to change the content scale factor when using a RealityView in SwiftUI. It was really easy to do in SceneKit with just an SCNView property, and it seems to also be possible when using ARView, but I can't find a way to do it with a RealityView. Maybe it's a SwiftUI limitation?
Replies: 3 · Boosts: 1 · Views: 120 · Activity: Jul ’25
GestureComponent does not support DragGesture
The following code, using the new GestureComponent, demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code; I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors; it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
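A variation worth testing (a sketch only, not a confirmed fix, assuming GestureComponent accepts a composed SwiftUI gesture): attach a single GestureComponent that carries both gestures, so only one component instance is ever set on the entity.

// Hypothetical variation on the code above: one GestureComponent carrying both gestures.
let combined = TapGesture()
    .onEnded { _ in print("Tapped") }
    .simultaneously(with:
        DragGesture()
            .onEnded { _ in print("Dragged") }
    )
testEntity.components.set(GestureComponent(combined))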
Replies: 3 · Boosts: 1 · Views: 347 · Activity: Jul ’25
Subject: Handling Z-Up Blender USDZ Models in RealityKit (visionOS) for Transform Updates
Hello everyone, I'm working on a visionOS application using RealityKit and am encountering a common coordinate-system challenge when integrating 3D models created in Blender.

My goal is to display and dynamically update the Transform (position, rotation, scale) of models created in Blender within RealityKit. The issue arises because Blender's default coordinate system is Z-up, and while exporting to USD/USDZ, I don't have a reliable "Y-up" export option that correctly reorients the model and its transform data for RealityKit's Y-up convention. This means I'm essentially exporting models with their "up" direction along the Z-axis. When I load these Z-up exported models into RealityKit, they are often oriented incorrectly. To then programmatically update their Transform (e.g., move them, rotate them based on game logic, or apply physics), I need to ensure that the Transform values I set align with RealityKit's Y-up system, even though the original model data was authored in a Z-up context.

My questions are:

1. What is the recommended transformation process (e.g., using simd_quatf or simd_float4x4) to convert a Transform that was conceptually defined in a Z-up coordinate system to RealityKit's Y-up coordinate system? Specifically, when I have a Transform (or its translation, rotation, scale components) from a Z-up context, how should I apply it to a RealityKit Entity so it appears and behaves correctly in a Y-up world?

2. Are there any existing convenience APIs or helper functions within RealityKit, simd, or other Apple frameworks that simplify this Z-up to Y-up Transform conversion? Or is a manual application of a transformation quaternion (e.g., simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])) the standard approach?

Any guidance, code examples, or best practices from those who have faced similar challenges would be greatly appreciated! Thank you.
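A sketch of the manual approach mentioned in question 2 (a rotation of -90° about X to take a Z-up transform into RealityKit's Y-up frame); whether this alone is sufficient depends on how the USD was exported.

import simd
import RealityKit

// Rotate a transform authored in a Z-up (Blender-style) frame into Y-up.
let zUpToYUp = simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))

func convertZUpTransform(_ zUp: Transform) -> Transform {
    // Apply the Z-up-to-Y-up rotation to the whole transform matrix.
    let conversion = Transform(rotation: zUpToYUp).matrix
    return Transform(matrix: conversion * zUp.matrix)
}

// Usage (hypothetical): entity.transform = convertZUpTransform(transformFromBlender)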
Replies: 1 · Boosts: 1 · Views: 288 · Activity: Jul ’25
Moving from SceneKit - fog missing
I am rewriting an unfinished SceneKit project as RealityKit (non-AR). As far as I can see, RealityKit is missing basic fog functionality. Fog was simple and easy to implement in SceneKit (fogStartDistance / fogEndDistance / fogDensityExponent / fogColor). Are there any plans to implement something like this in RealityKit? Are there any simple workarounds?
Replies: 3 · Boosts: 1 · Views: 545 · Activity: 3w
Showing a MTLTexture on an Entity in RealityKit
Is there any standard way of efficiently showing an MTLTexture on a RealityKit Entity? I can't find anything proper on how to, for example, generate a LowLevelTexture out of an MTLTexture. The closest match was this two-year-old thread. In the old SceneKit app, we would just do:

guard let material = someNode.geometry?.materials.first else { return }
material.diffuse.contents = mtlTexture

Our flow is as follows (for visualizing the currently detected object): camera stream -> Core ML segmentation -> send the relevant part of the MLShapedArray tensor to a MTLComputeShader that returns an MTLTexture -> show the resulting texture on a 3D object to the user.
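A rough sketch of the LowLevelTexture route (iOS 18 / visionOS 2 API; the descriptor fields and initializers here are from memory and should be checked against the documentation): create a LowLevelTexture matching the source texture, blit the MTLTexture into it, then wrap it in a TextureResource for a material.

import Metal
import RealityKit

// Sketch: copy an existing MTLTexture into a LowLevelTexture-backed TextureResource
// and show it on the entity with an UnlitMaterial.
func apply(_ source: MTLTexture, to entity: ModelEntity, commandQueue: MTLCommandQueue) throws {
    let descriptor = LowLevelTexture.Descriptor(
        pixelFormat: source.pixelFormat,
        width: source.width,
        height: source.height,
        textureUsage: [.shaderRead, .shaderWrite]
    )
    let lowLevelTexture = try LowLevelTexture(descriptor: descriptor)

    if let commandBuffer = commandQueue.makeCommandBuffer() {
        // replace(using:) hands back a writable MTLTexture tied to this command buffer.
        let destination = lowLevelTexture.replace(using: commandBuffer)
        if let blit = commandBuffer.makeBlitCommandEncoder() {
            blit.copy(from: source, to: destination)
            blit.endEncoding()
        }
        commandBuffer.commit()
    }

    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(try TextureResource(from: lowLevelTexture)))
    entity.model?.materials = [material]
}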
Replies: 5 · Boosts: 0 · Views: 948 · Activity: 1w
RealityKit captureHighResolutionFrame from session is broken on iOS 26?
A bit of background on what our app is doing: we have a RealityKit ARView session running, and during this period we place objects in RealityKit. At some point the user can "take a photo" and we use session.captureHighResolutionFrame to capture a frame. We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D.

The issue we found is that on devices running iOS 26, the first photo the user takes, i.e. the first frame received from session.captureHighResolutionFrame, gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo from the same camera position, the second frame received from session.captureHighResolutionFrame gives the correct CGPoint from frame.camera.projectPoint.

I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the camera's yaw value (frame.camera.eulerAngles.y) on the first frame is not correct (it is inconsistent with any subsequent frame).

I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames. Note: the yaw value seems to differ more if we start the session in portrait but take the photo in landscape.

Example result for 3 captured frames:

Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839

Example code:

import SwiftUI
import UIKit
import RealityKit
import ARKit

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch {
            }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) {}
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
Replies: 1 · Boosts: 1 · Views: 285 · Activity: 21h