Discuss augmented reality and virtual reality app capabilities.

Posts under AR / VR tag

48 Posts

Post

Replies

Boosts

Views

Activity

Sticky Horizontal AnchorEntity
Hi, I'm trying to use AnchorEntity for horizontal surfaces. It works when the entity is created, but I'm looking for a way to snap the entity to the nearest surface after translating it, for example with a DragGesture. What would be the best way to achieve this? Using a raycast, creating a new anchor, setting trackingMode to continuous, etc.? Do I need to use ARKitSession, since I want continuous tracking?
1
0
134
1w
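A possible direction for the question above, as a minimal sketch rather than a confirmed answer: run a PlaneDetectionProvider alongside the RealityView and, when the drag ends, snap the entity's height to the closest detected horizontal plane. Assumes visionOS with world-sensing authorization already granted; PlaneSnapper and its snap(_:) method are hypothetical names, not Apple API.

import ARKit
import RealityKit

// Hypothetical helper: tracks horizontal planes and snaps an entity to the closest one.
@MainActor
final class PlaneSnapper {
    private let session = ARKitSession()
    private let planes = PlaneDetectionProvider(alignments: [.horizontal])
    private var planeHeights: [UUID: Float] = [:]

    // Run once, e.g. from a .task on the RealityView; loops for the life of the session.
    // Assumes world-sensing authorization has already been granted.
    func start() async throws {
        try await session.run([planes])
        for await update in planes.anchorUpdates {
            switch update.event {
            case .added, .updated:
                // Column 3 of the anchor's world transform holds its translation.
                planeHeights[update.anchor.id] = update.anchor.originFromAnchorTransform.columns.3.y
            case .removed:
                planeHeights[update.anchor.id] = nil
            }
        }
    }

    // Call from DragGesture's onEnded: keep x/z, snap y to the nearest detected plane.
    func snap(_ entity: Entity) {
        let position = entity.position(relativeTo: nil)
        guard let nearestY = planeHeights.values.min(by: {
            abs($0 - position.y) < abs($1 - position.y)
        }) else { return }
        entity.setPosition([position.x, nearestY, position.z], relativeTo: nil)
    }
}

Usage would be calling snapper.snap(draggedEntity) from the drag gesture's onEnded closure once the translation is finished.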
Nested RealityKit entity collisions priority
Hello, I'm struggling to interact with a RealityKit entity that is nested (or at least visually nested) inside a second one. I think I have the same issue as the one mentioned in this StackOverflow post, but I can't manage to reproduce the solution. https://stackoverflow.com/questions/79244424/how-to-prioritize-a-specific-entity-when-collision-boxes-overlap-in-realitykit What I'd like to achieve is to translate the red box using a DragGesture while still being able to interact with the sphere using a TapGesture. Currently, I can only do one at a time. Does anyone know the solution?

extension CollisionGroup {
    static let parent: CollisionGroup = CollisionGroup(rawValue: 1 << 0)
    static let child: CollisionGroup = CollisionGroup(rawValue: 1 << 1)
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let boxMesh = MeshResource.generateBox(size: 0.35)
            let boxMaterial = SimpleMaterial(color: .red.withAlphaComponent(0.25), isMetallic: false)
            let boxEntity = ModelEntity(mesh: boxMesh, materials: [boxMaterial])

            let sphereMesh = MeshResource.generateSphere(radius: 0.05)
            let sphereMaterial = SimpleMaterial(color: .blue, isMetallic: false)
            let sphereEntity = ModelEntity(mesh: sphereMesh, materials: [sphereMaterial])

            content.add(sphereEntity)
            content.add(boxEntity)

            boxEntity.components.set(InputTargetComponent())
            boxEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateBox(size: SIMD3<Float>(repeating: 0.35))],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .parent,
                        mask: .parent.subtracting(.child)
                    )
                )
            )

            sphereEntity.components.set(InputTargetComponent())
            sphereEntity.components.set(HoverEffectComponent())
            sphereEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateSphere(radius: 0.05)],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .child,
                        mask: .child.subtracting(.parent)
                    )
                )
            )
        }
    }
}
3
0
206
1w
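One way the drag/tap conflict above is sometimes untangled, sketched under assumptions (entity references kept in view state, hypothetical view name TargetedGesturesView): target each gesture at a specific entity with targetedToEntity(_:) instead of relying on overlapping collision shapes alone.

import SwiftUI
import RealityKit

// Minimal sketch, not from the original post: target each gesture at a specific
// entity so the drag on the box and the tap on the sphere stay independent.
struct TargetedGesturesView: View {
    // Hypothetical state: the entities are created once and kept here.
    @State private var boxEntity = ModelEntity(
        mesh: .generateBox(size: 0.35),
        materials: [SimpleMaterial(color: .red.withAlphaComponent(0.25), isMetallic: false)])
    @State private var sphereEntity = ModelEntity(
        mesh: .generateSphere(radius: 0.05),
        materials: [SimpleMaterial(color: .blue, isMetallic: false)])

    var body: some View {
        RealityView { content in
            // InputTarget + Collision components are still required, as in the post.
            boxEntity.components.set(InputTargetComponent())
            boxEntity.components.set(CollisionComponent(shapes: [.generateBox(size: .init(repeating: 0.35))]))
            sphereEntity.components.set(InputTargetComponent())
            sphereEntity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.05)]))
            content.add(boxEntity)
            content.add(sphereEntity)
        }
        .gesture(
            DragGesture()
                .targetedToEntity(boxEntity)
                .onChanged { value in
                    // Move the box to the gesture location, converted into its parent space.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
        .gesture(
            TapGesture()
                .targetedToEntity(sphereEntity)
                .onEnded { _ in print("sphere tapped") }
        )
    }
}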
How to visualize AR-dependent app if not supported on Simulator for Swift Student Challenge?
Recently, applications for the Swift Student Challenge opened up. I noticed that when selecting "where to run your app" (mine was developed in Xcode 26) and you choose Xcode 26, there is a note underneath that basically says all Xcode projects will be run on the Simulator. What if my project depends on AR? How would I let the judges test my submission?
2
0
226
1w
Lighting disabled inside PortalComponent in Progressive/Full Immersion (visionOS 26.2)
Hello, I am experiencing an issue where lighting is disabled within a PortalComponent specifically when using Progressive or Full Immersion styles in visionOS 26.2.

[Steps to Reproduce]
Create a scene in Reality Composer Pro with custom lighting (Directional Light, IBL, etc.) and load it as an Entity.
Add a PortalComponent to this Entity.
Observe the Entity in an ImmersiveSpace under different immersion styles.

[Observed Behavior]
Mixed Immersion: Lighting works as expected.
Progressive / Full Immersion: Lighting is disabled, causing the content to render incorrectly.

[Additional Context]
Without PortalComponent: The same Reality Composer Pro scene renders lighting correctly across all immersion styles (Mixed, Progressive, and Full).
Regression: In applications built with Xcode 16.4 / visionOS 2.5, lighting inside the Portal functioned correctly. This issue appears to have emerged with Xcode 26.2 and visionOS 26.2.

[Questions]
Is the disabling of lighting inside Portals during Full/Progressive Immersion an intended architectural change or optimization in visionOS 26.2?
Are there any workarounds, such as specific PortalComponent configurations or new API flags, to re-enable lighting in these immersion modes?
Thank you.
1
0
444
1w
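Not a confirmed fix for the portal issue above, but a hedged sketch of a workaround sometimes used when scene lighting is unavailable: give the portal's world an explicit image-based light so its content no longer depends on the current immersion style. The resource name "SkyboxIBL" and the default realityKitContentBundle are assumptions.

import RealityKit
import RealityKitContent  // assumes the default Reality Composer Pro package name

// Sketch only: attach an explicit IBL to the portal's world entity.
// "SkyboxIBL" is a placeholder resource name that must exist in the RCP package.
func applyExplicitIBL(to portalWorld: Entity) async throws {
    let environment = try await EnvironmentResource(named: "SkyboxIBL", in: realityKitContentBundle)

    // One entity carries the light source...
    portalWorld.components.set(ImageBasedLightComponent(source: .single(environment)))

    // ...and every entity that should be lit by it needs a receiver component.
    func addReceiver(_ entity: Entity) {
        entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: portalWorld))
        entity.children.forEach(addReceiver)
    }
    addReceiver(portalWorld)
}

Whether this sidesteps the 26.2 regression is untested here; it only removes the dependence on environment lighting.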
Showing 3D/AR content on multiple pages in iOS with RealityView
Hi. I'm trying to show 3D or AR content on multiple pages of an iOS app, but I have found that if a RealityView is used earlier in a flow, later uses will never display the camera feed, even if those earlier uses do not use any spatial tracking. For example, in the following simplified view, the RealityView on the second page will not display the camera feed even though the first view does not use it nor start a tracking session.

struct ContentView: View {
    var body: some View {
        NavigationStack {
            VStack {
                RealityView { content in
                    content.camera = .virtual
                    // showing model skipped for brevity
                }
                NavigationLink("Second Page") {
                    RealityView { content in
                        content.camera = .spatialTracking
                    }
                }
            }
        }
    }
}

What is the way around this so that 3D content can be displayed in multiple places in the app without preventing AR content from working? I have also found the same problem when wrapping an ARView for use in SwiftUI.
0
0
232
2w
Spatial Computing, ARPointCloud (rawFeaturePoints)
https://developer.apple.com/documentation/arkit/arpointcloud
https://developer.apple.com/documentation/arkit/arframe/rawfeaturepoints
The point cloud (the collection of points/features) is mainly intended as a debug visualization of what the underlying tracking algorithm processes, and it is not designed for additional algorithms built on top of it. However, we are utilizing the information contained in the points/features collected by ARKit. Currently, the range of rawFeaturePoints is limited to about 10 meters from the device. We see a great opportunity if this range were unlocked: global localization would become more robust and accurate.
ARPointCloud - Apple ARKit - FindSurface (YouTube: SIdQRiLj2jY)
7
0
1.4k
Jan ’26
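For context on the thread above, a minimal sketch of how rawFeaturePoints is typically read per frame through the standard ARSessionDelegate callback; the collector type name is illustrative.

import ARKit

// Sketch: read the sparse feature points ARKit exposes on each frame.
final class FeaturePointCollector: NSObject, ARSessionDelegate {
    private(set) var latestPoints: [SIMD3<Float>] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // rawFeaturePoints is nil until tracking has produced features for this frame.
        guard let cloud = frame.rawFeaturePoints else { return }
        // World-space positions of the features used by world tracking;
        // matching stable identifiers are available in cloud.identifiers.
        latestPoints = cloud.points
    }
}

Assigning an instance as the ARSession's delegate is enough to receive the per-frame callbacks.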
RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I’m encountering a memory overflow issue in my visionOS app and I’d like to confirm if this is expected behavior or if I’m missing something in cleanup.

App Context
The app showcases apartments in real scale using AR. Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures). Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow
The app starts in a full immersive space (RealityView) for selecting the apartment. When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes. When the user dismisses the experience, we attempt cleanup: nulling out all entity references, removing ModelComponents, clearing cached textures and skyboxes, and forcing dictionaries/collections to empty. Despite this cleanup, memory usage remains very high.

Problem
After dismissing the ImmersiveSpace, memory does not return to baseline. Check the attached screenshot of the profiling made using Instruments:
Initial state: ~30MB (main menu).
After loading models sequentially: ~3.3GB.
Skybox textures bring it near ~4GB.
After dismissing the experience (at the ~01:00 mark): memory only drops slightly (to ~2.66GB).
When loading the second apartment, memory continues to increase until ~5GB, at which point the app crashes due to memory pressure.
The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures, and does not free them when the RealityView is ended. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.

Cleanup Code Example
Here’s a simplified version of the cleanup I’m doing:

func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}

Questions
Is this expected RealityKit behavior (textures/meshes cached internally)?
Is there a way to force RealityKit to release GPU resources tied to USDZ models when they’re no longer used?
Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?
Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
9
0
2.1k
Jan ’26
ParticleEmitterComponent Position Offset Issue After iOS 26.1 Update – Seeking Solutions & Workarounds
Problem Summary
After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.

Environment Details
Operating System: iOS 26.1 & iOS 26.2
Framework: RealityKit
Xcode Version: 16.2 (16C5032a)

Expected vs. Actual Behavior
Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
Actual: Particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.

Steps to Reproduce
Create or open an AR application with RealityKit that uses particle components.
Attach a ParticleEmitterComponent to an entity via a custom system.
Run the application on iOS 26.1 or iOS 26.2.
Observe that particles render at an offset position away from the entity.

Minimal Code Example
Here's the setup from my test case:

Custom Component & System:

struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }
            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}

AR Setup:

let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"

let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)

Questions for the Community
Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?

Additional Information
I've already submitted this issue via Feedback Assistant (FB21346746).
0
0
558
Dec ’25
Build failed with error in Reality Kit Content
I have an arguably massive project and am not sure if the issue is with the assets or with my approach in the code. The error says: Tool terminated due to error "SIGNAL 6:Abort trap:6". Basically, I have around 15-20 assets (usda files built out of usdz files). In the code I am loading a scene with all the usda files and then have functions to enable and disable a particular asset when needed. This was working as intended when I was using dummy assets (with fewer polygons and fewer textures), but when I placed the actual assets the error appears and persists. Is loading all the scenes at once a bad approach? Previously I used an approach that loads scenes only when needed, which involved some lag before rendering the assets. But my current approach (with the dummies) works like a charm, rendering and hiding the assets in real time with no lag. Kindly suggest any workarounds.
0
0
215
Dec ’25
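A hedged sketch of the on-demand alternative the post above mentions: load each heavy scene asynchronously the first time it is needed and cache it, then toggle visibility with isEnabled. The SceneLibrary name and the default realityKitContentBundle are assumptions.

import RealityKit
import RealityKitContent  // assumes the default Reality Composer Pro package

// Sketch: load heavy USDA/USDZ scenes lazily and keep them in a cache,
// instead of loading every asset up front.
@MainActor
final class SceneLibrary {
    private var cache: [String: Entity] = [:]

    // Returns a cached entity if present; otherwise loads it asynchronously.
    func entity(named name: String) async throws -> Entity {
        if let cached = cache[name] { return cached }
        let loaded = try await Entity(named: name, in: realityKitContentBundle)
        cache[name] = loaded
        return loaded
    }

    // Toggle visibility without unloading, which avoids a reload hitch later.
    func setAsset(_ name: String, hidden: Bool, under parent: Entity) async throws {
        let entity = try await self.entity(named: name)
        if entity.parent == nil { parent.addChild(entity) }
        entity.isEnabled = !hidden
    }
}

The first toggle of each asset still pays a loading cost, but only one scene's worth rather than the whole set at launch.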
Augmented Reality app unable to load the image from the camera
I have had an app on the App Store for many years that lets users post text into clouds in augmented reality. Yet last week, abruptly, upon installing the app on the iPhone the screen started going totally dark and a list of barely comprehensible logs came up, of the kind:

ARSCNCompositor <0x300ad0e00>: ARSCNCompositor (0, 0) initialization failed. Matting is not set up properly.

many times, then

ARWorldTrackingTechnique <0x106235180>: Unable to update pose [PredictorFailure] for timestamp 870.392108
ARWorldTrackingTechnique <0x106235180>: Unable to predict pose [1] for timestamp 870.392108

again several times, and then:

ARWorldTrackingTechnique <0x106235180>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 870.413247, Last known timestamp: 865.350198, Delta: 5.063049, System timestamp: 870.415781, Delta between system and frame: 0.002534. }

and then again the pose issues several times. I hoped the new beta version would have solved the issue, but it did not. Unfortunately I do not know whether this depends on the beta version or on some other issue, given that the app may not be installed on the Mac simulator.
16
2
2.5k
Dec ’25
ARView environment.lighting IBL from HDR file
I have an iOS app that can display a USDZ model downloaded from the Internet (and cached locally) via an ARView. I would like to light that model with an image based light (IBL) also downloaded from the Internet. However, as far as I can tell, ARView can only create an IBL from a resource that has been compiled into the Xcode project and loaded with EnvironmentResource(named:in:) or EnvironmentResource.load(named:in:). Is there a way to create an EnvironmentResource from an HDRI via a file URL to use in ARView in iOS?
0
0
730
Nov ’25
Occlusion issues in Immersive Space - Breaking User Input Interaction
I'm developing a custom gesture-based visionOS project that uses hand tracking with collision detection spheres on fingers to register user interactions through collision components. I'm experiencing a critical occlusion issue where collision detection spheres are intermittently occluded by the background/depth buffer, causing fingers to pass through the 3D model entities without registering interactions.

Detailed Description:
I have added 3D entities in an immersive scene with collision spheres attached to fingers for detecting user interactions.
Each sphere has:
CollisionComponent with sphere shape
Proper collision masks and groups configured
Real-time position updates from hand joint transforms
Each entity has:
InputTarget components to register collisions

The Issue:
When users move their fingers to the entity to interact, some collision spheres (particularly on the pinkie and ring fingers) become occluded and pass directly through the 3D model without triggering collision events. Meanwhile, other fingers (like the index finger) continue to work correctly. This appears to be a depth perception/z-buffer issue between the model entity and the hand tracking collision spheres.

Questions:
Is there a recommended approach for maintaining consistent depth ordering between hand-tracking entities and 3D models in immersive spaces to prevent occlusion issues?
Should I be using AnchorEntities to anchor the entity to a plane or world position to establish a more stable depth reference?
Are there specific RenderingComponent or material settings that could help ensure collision entities maintain their depth priority and don't get occluded?
Could this be related to z-fighting when collision spheres and entity geometry occupy similar depth ranges? If so, what's the recommended depth bias approach?
Is there a better architectural approach for implementing interactions with custom hand gesture tracking that avoids these depth perception issues?

What Would Help:
Implementation guidance for ensuring reliable collision detection between hand-tracked entities through custom gestures and 3D models.
Best practices for depth management in immersive spaces with custom hand gesture tracking.
Sample code demonstrating stable hand-to-object interaction patterns.
Information about whether this is a known limitation or if there are specific APIs I should be leveraging.

This issue is significantly impacting the reliability of our app experience, as users cannot consistently interact with all model components. Any guidance from Apple engineers or developers who have solved similar depth/occlusion challenges would be greatly appreciated.

Additional Context: This is for a productivity-focused application where accuracy and reliability are critical. Thank you for any assistance!
0
0
370
Nov ’25
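For reference on the thread above, a minimal sketch of the fingertip collision-sphere setup it describes, assuming visionOS hand tracking through ARKitSession; the sphere radius, joint selection, and single-hand handling are illustrative assumptions, and this sketch does not by itself resolve the occlusion question.

import ARKit
import RealityKit

// Sketch: keep one collision sphere per fingertip, updated from hand anchors.
@MainActor
final class FingerColliders {
    private let session = ARKitSession()
    private let hands = HandTrackingProvider()
    private var spheres: [HandSkeleton.JointName: ModelEntity] = [:]
    private let joints: [HandSkeleton.JointName] = [
        .thumbTip, .indexFingerTip, .middleFingerTip, .ringFingerTip, .littleFingerTip
    ]

    // Assumes hand-tracking authorization has already been granted.
    func start(in root: Entity) async throws {
        for joint in joints {
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.008))
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.008)]))
            root.addChild(sphere)
            spheres[joint] = sphere
        }
        try await session.run([hands])
        for await update in hands.anchorUpdates {
            let hand = update.anchor
            // Sketch handles one hand only; a real implementation keys spheres by chirality too.
            guard hand.chirality == .right, hand.isTracked,
                  let skeleton = hand.handSkeleton else { continue }
            for joint in joints {
                let j = skeleton.joint(joint)
                guard j.isTracked, let sphere = spheres[joint] else { continue }
                // World transform = hand anchor transform * joint-in-hand transform.
                let world = hand.originFromAnchorTransform * j.anchorFromJointTransform
                sphere.setTransformMatrix(world, relativeTo: nil)
            }
        }
    }
}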
Help Configuring Unity for Immersive VR on Vision Pro with Pinch Teleport
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me). So far, I've reviewed Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps. Metal examples exist but don't include teleportation. Specifically:
visionOS sample "XRI_SimpleRig" – deploys to device/simulator, but no full immersion or teleport.
XRI Toolkit sample "XR Origin Hands (XR Rig)" – pinch gestures are detected, but not linked to movement.
visionOS "XR Plugin" sample "Metal Sample URP" – Metal setup works, but a static scene without locomotion.
I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials for "teleport to gaze on pinch" in VR mode, with no extra features. I do have some experience with Unreal, World Toolkit, Cosmo, etc. from the '90s, and I'm OK with code.
Please include steps for:
Setting up immersive VR (disabling spatial defaults if needed).
Integrating pinch detection with ray-based teleport.
Any config changes or basic scripts.
Project Configuration:
Unity Editor Version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
Installed Packages:
Apple visionOS XR Plugin: 2.3.1
AR Foundation: 6.2.0
PolySpatial XR: 2.3.1
XR Core Utilities: 2.5.3
XR Hands: 1.6.1
XR Interaction Toolkit: 3.2.1
XR Legacy Input Helpers: 2.1.12
XR Plugin Management: 4.5.1
Imported Samples:
Apple visionOS XR Plugin 2.3.1: Metal Sample - URP
XR Hands 1.6.1
XR Interaction Toolkit 3.2.1: Hands Interaction Demo, Starter Assets, visionOS
Build Platform Settings:
Target: Apple visionOS
App Mode: Metal Rendering with Compositor Services
Selected Validation Profiles: visionOS Metal
Documentation: Enabled
Xcode Version: 26.01
visionOS SDK: 26
Mac Hardware: Apple M1 Max
Target visionOS Version: 20 or 26
Test Environment:
Model: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max
No errors in builds so far; just missing the desired functionality. Thanks for a complete response with actionable steps.
0
0
252
Oct ’25
RealityKit captureHighResolutionFrame from session is broken on iOS26?
A bit of background on what our app is doing: we have a RealityKit ARView session running. During this period we place objects in RealityKit. At some point the user can "take a photo" and we use session.captureHighResolutionFrame to capture a frame. We then use the captured frame and frame.camera.projectPoint to project our objects back to 2D.

The issue we found is that on devices running iOS 26, the first photo the user takes (the first frame received from session.captureHighResolutionFrame) gives an incorrect CGPoint from frame.camera.projectPoint. If the user takes a second photo from the same camera position, the second frame received from session.captureHighResolutionFrame gives a correct CGPoint from frame.camera.projectPoint.

I noticed a difference between the first and subsequent frames that I believe corresponds to the issue: the yaw value of the camera (frame.camera.eulerAngles.y) on the first frame is not correct (inconsistent with any subsequent frame).

I also created a small example app, following the Building an Immersive Experience with RealityKit example. The issue exists in this app on iOS 26, while iOS 18.* has consistent values between the first and subsequent captured frames.

Note: The yaw value seems to differ more if we start the session in portrait but take the photo in landscape.

Example result for 3 captured frames:
Frame captured with yaw: 1.4855177402496338
Frame captured with yaw: -0.08803760260343552
Frame captured with yaw: -0.08179682493209839

Example code:

class CustomARView: ARView, ARSessionDelegate {
    required init(frame: CGRect) {
        super.init(frame: frame)
    }

    required init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func setup() {
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handleTap))
        addGestureRecognizer(singleTap)
    }

    @objc func handleTap(_ gestureRecognizer: UIGestureRecognizer) {
        Task {
            do {
                let frame = try await session.captureHighResolutionFrame()
                print("Frame captured with yaw: \(Double(frame.camera.eulerAngles.y))")
            } catch {
            }
        }
    }
}

struct CustomARViewUIViewRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> some UIView {
        let arView = CustomARView(frame: .zero)
        arView.setup()
        return arView
    }

    func updateUIView(_ uiView: UIViewType, context: Context) { }
}

struct ContentView: View {
    var body: some View {
        CustomARViewUIViewRepresentable()
            .frame(maxWidth: .infinity, maxHeight: .infinity)
            .ignoresSafeArea()
    }
}
3
1
641
Sep ’25
WWDC 25 RemoteImmersiveSpace - Support for Passthrough Mode? RealityKit?
This is related to the WWDC presentation "What's new in Metal rendering for immersive apps", specifically the macOS spatial streaming to visionOS feature (for reference: the page in the docs). The presentation demonstrates it using a full immersive space and Metal rendering with Compositor Services. I'd like clarity on a few things:
Is the remote device wireless, or must the visionOS device be connected via a wired connection?
Is there a limit to the number of remote devices, and if not, could macOS render different things per remote device simultaneously?
Can I also use mixed mode with passthrough enabled, instead of just a fully immersive mode?
Can I use RealityKit instead of Metal? If so, may I have an example, or would someone point me to one?
5
0
640
Sep ’25