Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic
When running Unity applications in the shared space of visionOS, the Mask/RectMask2D of ScrollView does not function properly
Problem description: I am developing an application that runs in the Shared Space on Apple Vision Pro using Unity. When using the UI ScrollView component, I found that Mask / RectMask2D does not function in the Shared Space: scrolling content is not masked or cropped, and it renders directly beyond the view boundary. The same UI works correctly in the Unity Editor and on platforms such as iOS and macOS; the issue occurs only in the shared space on Vision Pro.

Reproduction steps:
1. Create a ScrollView in Unity.
2. Add a Mask or RectMask2D to the viewport.
3. Deploy the application to Apple Vision Pro and run it in Shared Space mode.
4. Scrolled content is not clipped by the mask; the masked area is entirely ineffective.

Expected behavior: the content of the ScrollView should be clipped by the Mask / RectMask2D and should not render outside the mask boundary.

Actual result: in the shared space on Vision Pro, the mask has no effect, so scrolling content extends beyond the designated area, resulting in severe UI distortion.

Environment:
Device: Apple Vision Pro
Mode: Shared Space
Unity version: 6000.0.40f1
visionOS version: visionOS 26.0
Unity PolySpatial version: 2.0.4

Impact: this issue causes Unity UI to display incorrectly on Vision Pro and prevents ScrollView from clipping its content, which degrades the UI experience and interactions in real applications.

Expected result: when running a Unity app in the shared space of visionOS, the Mask / RectMask2D of a ScrollView functions correctly.
0 replies · 0 boosts · 90 views · 1w
Is it possible to raycast with PSVR2 controllers when developing in Unity for the Apple Vision Pro?
Hi all, I am currently developing a game in Unity for visionOS, and for my specific use case I'd prefer to use the PSVR2 controllers as the source of the raycast for menu selection, instead of the default visionOS gaze. Is there a way to access the IMU of the PSVR2 controllers to do this, rather than relying on eye gaze plus a controller click for selection? Is there perhaps a specific configuration for GCController from within Unity? Thank you!
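(Not an answer, just a data point from the native side.) Outside of Unity's input system, controllers surface through Apple's GameController framework. Below is a minimal Swift sketch of reading IMU-derived orientation, assuming the Sense controllers expose a GCMotion profile with attitude data; I have not verified that assumption on hardware, and Unity would normally route this through its own input layer:

import GameController

// Observe controller connection and poll the IMU-derived attitude.
// Assumes (unverified) that the connected controller exposes GCMotion.
NotificationCenter.default.addObserver(
    forName: .GCControllerDidConnect, object: nil, queue: .main
) { notification in
    guard let controller = notification.object as? GCController,
          let motion = controller.motion else { return }
    if motion.sensorsRequireManualActivation {
        motion.sensorsActive = true // start IMU updates
    }
    motion.valueChangedHandler = { motion in
        guard motion.hasAttitude else { return }
        let q = motion.attitude // GCQuaternion (x, y, z, w)
        print("controller attitude:", q.x, q.y, q.z, q.w)
    }
}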
1 reply · 0 boosts · 517 views · 2w
Recurring World Tracking / Scene Size Limit Exceeded Errors
Hi,

We've been successfully using the RoomPlan API in our application for over two years. Recently, however, users have reported persistent capture errors during their sessions. Specifically, the errors observed are:

CaptureError.worldTrackingFailure
CaptureError.exceedSceneSizeLimit

What we have observed:

Persistent errors: the errors continue to occur even after initiating new capture sessions.
Normal usage: our implementation adheres to typical usage patterns of the RoomPlan API without exceeding any documented room size limits.
Limited feature usage: we are not utilizing the WorldTracking feature for the StructureBuilder functionality to stitch rooms together.
Potential state caching: given that these errors persist across sessions, we suspect that memory or state may be cached between sessions and not cleared, particularly since we are not taking advantage of StructureBuilder.

Request: could you please advise whether there is any internal caching or memory retention between capture sessions that might lead to these errors? Additionally, we would appreciate guidance on how to clear or manage this state when the StructureBuilder feature is not in use.

Here is a generalised version of our capture-session initialization code to help diagnose the issue.

struct RoomARCaptureView: UIViewRepresentable {
    typealias Handler = (CapturedRoom, Error?) -> Void

    @Binding var stop: Bool
    @Binding var done: Bool
    let completion: Handler?

    func makeUIView(context: Self.Context) -> RoomCaptureView {
        let view = RoomCaptureView(frame: .zero)
        view.delegate = context.coordinator
        view.captureSession.run(configuration: .init())
        return view
    }

    func updateUIView(_ uiView: RoomCaptureView, context: Self.Context) {
        if stop {
            // Stop the session only once; stopping multiple times causes issues with the final presentation
            uiView.captureSession.stop()
            stop = false
            done = true
        }
    }

    static func dismantleUIView(_ uiView: RoomCaptureView, coordinator: Self.Coordinator) {
        uiView.captureSession.stop()
    }

    func makeCoordinator() -> ARViewCoordinator {
        ARViewCoordinator(completion)
    }

    @objc(ARViewCoordinator)
    class ARViewCoordinator: NSObject, RoomCaptureViewDelegate {
        var completion: Handler?

        public required init?(coder: NSCoder) {}
        public func encode(with coder: NSCoder) {}

        public init(_ completion: Handler?) {
            super.init()
            self.completion = completion
        }

        public func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: (Error)?) -> Bool {
            return true
        }

        public func captureView(didPresent processedResult: CapturedRoom, error: (Error)?) {
            completion?(processedResult, error)
        }
    }
}

Thank you for your assistance.
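For anyone comparing notes: a minimal sketch of where those two errors can be inspected, assuming the RoomCaptureSession's own delegate is also wired up. The CaptureError cases are the ones named in the report above; the recovery comments are guesses, not confirmed behavior:

import RoomPlan

// RoomCaptureSessionDelegate callback where the concrete CaptureError surfaces.
func captureSession(_ session: RoomCaptureSession,
                    didEndWith data: CapturedRoomData, error: Error?) {
    guard let captureError = error as? RoomCaptureSession.CaptureError else { return }
    switch captureError {
    case .worldTrackingFailure:
        print("World tracking failed; a full session teardown before retrying may help.")
    case .exceedSceneSizeLimit:
        print("Scene size limit exceeded; check whether state persists across sessions.")
    default:
        print("Other capture error: \(captureError)")
    }
}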
6 replies · 3 boosts · 691 views · 2w
visionOS 2 - Screen capture with passthrough
We're trying to switch from using main camera access via ARKit to screen capture with passthrough, but we're facing some issues, and it seems difficult to debug. We have set up a broadcast extension and added some logs in the sample handler, but we get nothing in the console, nor does the recording start. We set up the picker as well, and we can see our extension in Control Center as one of the choices, but tapping start results in it stopping in less than one second. The only messages we see in Console.app are rather contradictory, one right after the other:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
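For reference, the enterprise screen-capture-with-passthrough API is gated on an entitlement plus the license file, and to the best of my recollection both the app target and the broadcast extension target need them. The entitlement key below is written from memory, so treat it as an assumption to verify against the Enterprise API documentation:

<!-- Assumed entitlement key; verify against the enterprise API docs -->
<key>com.apple.developer.screen-capture.include-passthrough</key>
<true/>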
1 reply · 0 boosts · 284 views · 2w
window loses resize feature on reopening
I have this problem on visionOS: when I dismiss and reopen a window from an ImagePresentationComponent, the window is missing the resize UI elements when I look at the window corners. The rest of the window UI elements (drag, close, ...) are there. Resizing was possible before the window was dismissed. The code is something like this:

WindowGroup(id: "image-display-window", .....
}
.windowResizability(.automatic)
.windowStyle(.plain)

I call dismissWindow() from the window view and it is dismissed correctly. Then I call openWindow(id: "image-display-window", value: data) from another view to reopen it. It reopens, but it is missing the ability to resize. Does anyone know how to fix this? Thanks.
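For comparison, here is a minimal sketch of the dismiss/reopen flow described above; ImageData and the view names are placeholders I made up, in case someone can spot a difference from their own setup:

import SwiftUI

// Placeholder payload type; WindowGroup(id:for:) requires Codable & Hashable.
struct ImageData: Codable, Hashable {
    var url: URL
}

struct ImageDisplayScene: Scene {
    var body: some Scene {
        WindowGroup(id: "image-display-window", for: ImageData.self) { $data in
            ImageDisplayView(data: $data)
        }
        .windowStyle(.plain)
        .windowResizability(.automatic)
    }
}

struct ImageDisplayView: View {
    @Binding var data: ImageData?
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        // ... image presentation goes here ...
        Button("Close") {
            dismissWindow(id: "image-display-window")
        }
    }
}

// Reopening from another view:
// @Environment(\.openWindow) private var openWindow
// openWindow(id: "image-display-window", value: ImageData(url: someURL))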
1 reply · 0 boosts · 237 views · 2w
visionOS 2 - Passthrough in screen capture
I'm trying to develop an app that broadcasts what the user sees (previously we were using main camera access), and now we'd like to investigate this option. I have set up the broadcast extension and added the picker. When I tap my button I can see my broadcast extension in the options list in Control Center, but once I tap start, it stops after roughly one second. I'm not able to get anything in the console from my sample handler (no prints, logs, or anything). I can see, however, some contradictory information in Console.app (one message right after the other):

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license

We have the enterprise license and the capability, and I added the capability on the extension target as well.
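One debugging angle: print() from a broadcast upload extension won't show up in the host app's Xcode console, so a way to confirm whether the extension process launches at all is to log via os.Logger and filter Console.app by subsystem. A minimal sketch (the subsystem string is a placeholder):

import ReplayKit
import CoreMedia
import os

class SampleHandler: RPBroadcastSampleHandler {
    // Placeholder subsystem; filter Console.app on it to find these entries.
    private let log = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted")
    }

    override func broadcastFinished() {
        log.info("broadcastFinished")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        log.debug("sample buffer, type \(sampleBufferType.rawValue)")
    }
}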
0 replies · 0 boosts · 176 views · 2w
visionOS Enterprise API not working
My development team admin requested the Enterprise API for camera access on the Vision Pro. We got it granted, received a license for usage, and got instructions with next steps for integrating it.

Even when I download and run the sample project for "Accessing the Main Camera" and follow the exact instructions here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera I am unable to receive camera frames. I added the capabilities, created a new provisioning profile with this access, added the entitlements to the Info.plist and the entitlements file, replaced the dummy license file with the one we were sent, and have a matching bundle identifier and development certificate, but it still does not show camera access for some reason. "Main Camera Access" shows up in our Signing & Capabilities tab, we also added the NSMainCameraDescription to the Info.plist, and we allow access when opening the app.

None of this works, neither in my app nor in the sample app that I downloaded and ran on the Vision Pro after replacing the dummy license file.
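For cross-checking, the core of that sample reduces to roughly the sketch below (written from memory, so verify names against the current sample; error handling trimmed). If the for-await loop never fires, logging the authorization result and the format list may at least narrow down where it stalls:

import ARKit

// Hedged sketch of the visionOS enterprise main-camera pipeline.
func runCameraFeed() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Requires the main-camera entitlement, the enterprise license file,
    // and the user approving the camera permission prompt.
    print(await session.queryAuthorization(for: [.cameraAccess]))

    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first else {
        print("no supported camera formats reported")
        return
    }

    try await session.run([provider])

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        if let sample = frame.sample(for: .left) {
            _ = sample.pixelBuffer // hand the CVPixelBuffer to your pipeline
        }
    }
}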
3 replies · 0 boosts · 739 views · 2w
Accessing a depth map during a RoomCaptureSession
I'm capturing a room via the RoomPlan API and would like to access the depth map (sceneDepth) or smoothed depth map (smoothedSceneDepth) from my own ARSession provided to the RoomCaptureSession. But both depth maps are empty when handling the delegate calls, and I have not found a solution yet. Is it even possible? I have not found any documentation of what RoomCaptureSession overrides in the ARSession when I provide my own ARSession instance. Here is an example code snippet of what I'm trying to do:

private let arSession = ARSession()
private lazy var roomPlanCaptureSession = RoomCaptureSession(arSession: arSession)

let arConfig = ARWorldTrackingConfiguration()

// Create semantics for the ARConfiguration used by the ARSession
var semantics: ARWorldTrackingConfiguration.FrameSemantics = []
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    semantics.insert(.sceneDepth)
}
if ARWorldTrackingConfiguration.supportsFrameSemantics(.smoothedSceneDepth) {
    semantics.insert(.smoothedSceneDepth)
}
arConfig.frameSemantics = semantics

// Set delegates
roomPlanCaptureSession.delegate = self
arSession.delegate = self

// Check device support for the depth map
if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
    arSession.run(arConfig)
} else {
    print(".sceneDepth is unsupported.")
}

// Run the room-capture scan configuration
let captureConfig = RoomCaptureSession.Configuration()
roomPlanCaptureSession.run(configuration: captureConfig)

// Trying to read sceneDepth
public func session(_ session: ARSession, didUpdate frame: ARFrame) {
    print("session delegate capture: sceneDepth: \(String(describing: frame.sceneDepth))")
    // Prints: session delegate capture: sceneDepth: nil
}

Also, in this video from 2023 it is said that I can pass a custom ARSession to RoomPlan: Explore enhancements to RoomPlan. Quote at 3:00: "Here is the init and stop function in previous RoomPlan. And here is how you pass over a custom ARSession to init function. Any custom ARSession with ARWorldTrackingConfiguration will be honored inside RoomCaptureSession."

Anyway, I welcome any input; maybe I'm doing something wrong. :)
0 replies · 0 boosts · 307 views · 3w
ShaderGraphMaterial with Occlusion Surface Output fails to load on iOS and macOS
A ShaderGraphMaterial with an Occlusion Surface Output generated with Reality Composer Pro 2 fails to load on iOS 18 and macOS 15 with the following error:

RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)

This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)

RealityView { content in
    do {
        let bgEntity = ModelEntity(
            mesh: .generateCone(height: 0.5, radius: 0.1),
            materials: [SimpleMaterial(color: .red, isMetallic: true)]
        )
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial", from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4), materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)

Feedback ID: FB15081296
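Until the loader accepts that surface output on iOS/macOS, one possible workaround sketch is falling back to RealityKit's built-in OcclusionMaterial when the shader-graph load throws. This assumes plain occlusion (no shader-graph logic) is acceptable on those platforms:

import RealityKit

// Try the shader-graph occlusion first; fall back to the built-in material.
func makeOcclusionMaterial() async -> any Material {
    do {
        return try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial",
                                             from: "OcclusionMaterial")
    } catch {
        // OcclusionMaterial hides geometry behind the mesh without drawing color.
        return OcclusionMaterial()
    }
}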
2 replies · 1 boost · 1.3k views · 3w
Bouncy ball in RealityKit - game
I'm developing a visionOS app with bouncing-ball physics and struggling to achieve natural bouncing behavior using RealityKit's physics system. Despite following Apple's recommended parameters, the ball loses significant energy on each bounce and doesn't behave like a real basketball, tennis ball, or football would. With identical physics parameters (restitution = 1.0), RealityKit shows significant energy loss. I've had to implement a custom physics system to compensate, but I want to use native RealityKit physics; it's impossible to make it work by applying custom impulses.

Ball physics setup (following Apple forum recommendations):

// From PhysicsManager.swift
private func createBallEntityRealityKit() -> Entity {
    let ballRadius: Float = 0.05
    let ballEntity = Entity()
    ballEntity.name = "bouncingBall"

    // Mesh and material
    let mesh = MeshResource.generateSphere(radius: ballRadius)
    var material = PhysicallyBasedMaterial()
    material.baseColor = .init(tint: .cyan)
    material.roughness = .float(0.3)
    material.metallic = .float(0.8)
    ballEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))

    // Physics setup from Apple Developer Forums
    let physics = PhysicsBodyComponent(
        massProperties: .init(mass: 0.624), // Seems too heavy for a 5 cm ball
        material: PhysicsMaterialResource.generate(
            staticFriction: 0.8,
            dynamicFriction: 0.6,
            restitution: 1.0 // Perfect elasticity, yet still loses energy
        ),
        mode: .dynamic
    )
    ballEntity.components.set(physics)
    ballEntity.components.set(PhysicsMotionComponent())

    // Collision setup
    let collisionShape = ShapeResource.generateSphere(radius: ballRadius)
    ballEntity.components.set(CollisionComponent(shapes: [collisionShape]))

    return ballEntity
}

Ground plane physics:

// From GroundPlaneView.swift
let groundPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 1.0 // Perfect bounce
    ),
    mode: .static
)
entity.components.set(groundPhysics)

Wall physics:

// From WalledBoxManager.swift
let wallPhysics = PhysicsBodyComponent(
    massProperties: .init(mass: 1000),
    material: PhysicsMaterialResource.generate(
        staticFriction: 0.7,
        dynamicFriction: 0.6,
        restitution: 0.85 // Slightly less than ground
    ),
    mode: .static
)
wall.components.set(wallPhysics)

Collision detection:

// From GroundPlaneView.swift
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard physicsMode == .realityKit else { return }
    let currentTime = Date().timeIntervalSince1970
    guard currentTime - lastCollisionTime > 0.1 else { return }

    if event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall" {
        let normal = event.collision.normal
        // Distinguish between wall and ground collisions
        if abs(normal.y) < 0.3 {
            // Wall bounce
            print("Wall collision detected")
        } else if normal.y > 0.7 {
            // Ground bounce
            print("Ground collision detected")
        }
        lastCollisionTime = currentTime
    }
}

Issues observed:

Energy loss: despite restitution = 1.0 (perfect elasticity), the ball loses ~20-30% of its energy per bounce.
Wall sliding: the ball tends to slide down walls instead of bouncing naturally.
No damping control: comments mention damping values, but they don't seem to affect the physics.
Changing the mass also doesn't do much.

Custom physics system (workaround): I've implemented a custom physics system that manually calculates velocities and applies more realistic restitution values:

// From BouncingBallComponent.swift
struct BouncingBallComponent: Component {
    var velocity: SIMD3<Float> = .zero
    var angularVelocity: SIMD3<Float> = .zero
    var bounceState: BounceState = .idle
    var lastBounceTime: TimeInterval = 0
    var bounceCount: Int = 0
    var peakHeight: Float = 0
    var totalFallDistance: Float = 0

    enum BounceState {
        case idle
        case falling
        case justBounced
        case bouncing
        case settled
    }
}

Is this energy loss expected behavior in RealityKit, even with perfect restitution (1.0)? Are there additional physics parameters (damping, solver iterations, etc.) that could improve bounce behavior? Would switching to Unity be necessary for more realistic ball physics, or am I missing something in RealityKit? Even in the last video here: https://stepinto.vision/example-code/collisions-physics-physics-material/ the bounce of the ball is very unnatural; it stops after 3-4 bounces. I apply custom impulses, but then if I have walls around the ball, it's almost impossible to make it look natural. I also saw this post https://developer.apple.com/forums/thread/759422 and the ball is still not bouncing naturally.
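Since the native restitution seems to under-deliver, one possible stopgap is topping up the rebound manually. A hedged sketch under these assumptions: ball is the ModelEntity from the setup above (mass 0.624) with a dynamic body and PhysicsMotionComponent, content is the same RealityViewContent, and applyLinearImpulse(_:relativeTo:) is available on the deployment target:

var preBounceSpeed: Float = 0

// Cache the largest downward speed each frame so it is available at contact time.
content.subscribe(to: SceneEvents.Update.self) { _ in
    if let motion = ball.components[PhysicsMotionComponent.self] {
        preBounceSpeed = max(preBounceSpeed, -motion.linearVelocity.y)
    }
}

// On contact, apply an impulse that restores a chosen fraction of the
// pre-bounce speed, emulating the restitution the engine won't deliver.
content.subscribe(to: CollisionEvents.Began.self) { event in
    guard event.entityA.name == "bouncingBall" || event.entityB.name == "bouncingBall",
          let motion = ball.components[PhysicsMotionComponent.self] else { return }
    let desiredRestitution: Float = 0.85
    let deficit = preBounceSpeed * desiredRestitution - motion.linearVelocity.y
    if deficit > 0 {
        ball.applyLinearImpulse([0, deficit * 0.624, 0], relativeTo: nil) // impulse = mass * delta-v
    }
    preBounceSpeed = 0
}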
9 replies · 0 boosts · 856 views · 3w
Best approach for high-quality textured room reconstruction using ARKit / RoomPlan / Object Capture?
I am developing an iOS app that allows users to scan rooms, view the scans on device, and add notes. I need to preserve the actual geometry (odd angles, chamfers, fixtures), not simplified RoomPlan boxes. Are there any easy ways to incorporate high-quality texture mapping or PBR? And where is the documentation for scene reconstruction?
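On the last question: scene reconstruction is documented under ARKit rather than RoomPlan (see ARWorldTrackingConfiguration.sceneReconstruction and ARMeshAnchor). A minimal sketch of turning it on alongside a world-tracking session, assuming a LiDAR device and an existing ARSession named session:

import ARKit

// Enable dense mesh reconstruction; ARMeshAnchor preserves real geometry
// (odd angles, chamfers, fixtures) rather than RoomPlan's parametric boxes.
let config = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) {
    config.sceneReconstruction = .meshWithClassification
}
session.run(config)

// ARSessionDelegate: mesh anchors stream in as the scan progresses.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for case let meshAnchor as ARMeshAnchor in anchors {
        // geometry exposes vertices, normals, faces, and per-face classification.
        _ = meshAnchor.geometry.vertices
    }
}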
1 reply · 0 boosts · 810 views · 3w
How to transition between spatial3D and spatial3DImmersive?
Hi, when viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top-right corner to transition from the window presenting the image as spatial3D to an immersive photo scene using spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried it, but once I transition from spatial3D to spatial3DImmersive I can still see a rectangle around the spatial image. Thanks.
1 reply · 0 boosts · 829 views · 4w
I built Apple.PHASE with Unity targeting visionOS, but reverb does not sound
Environment versions:
・macOS 15.6.1
・visionOS 26.0.1
・Xcode 16.1 or 26.0.1
・Unity 6000.2.9f1
・Apple.Core 3.2.0
・Apple.PHASE 1.2.7
・PolySpatial 2.4.2

With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio plays and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when they are enabled and their parameters are adjusted. What is required to make Early Reflection and Late Reverb take effect in a visionOS device build?

Actions taken:
・Created a SoundEvent.
・In the composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
・Attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
・Attached a PHASE Listener to the main camera and set the ReverbPreset to a value other than None.
・In Project Settings > Audio, set the Spatializer plugin to PHASE Spatializer.
・From there, built for visionOS.
0 replies · 0 boosts · 709 views · Nov ’25
ARView environment.lighting IBL from HDR file
I have an iOS app that can display a USDZ model downloaded from the Internet (and cached locally) via an ARView. I would like to light that model with an image based light (IBL) also downloaded from the Internet. However, as far as I can tell, ARView can only create an IBL from a resource that has been compiled into the Xcode project and loaded with EnvironmentResource(named:in:) or EnvironmentResource.load(named:in:). Is there a way to create an EnvironmentResource from an HDRI via a file URL to use in ARView in iOS?
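Not a full answer, but the first half (getting the downloaded HDRI into a CGImage) is straightforward with Image I/O; the final step is left as a comment because a public file-URL/CGImage initializer for EnvironmentResource on iOS is exactly what is in question here, so verify availability before relying on it:

import ImageIO

// Decode a downloaded .hdr file into a CGImage, keeping float/HDR pixel data.
func loadHDRImage(from fileURL: URL) -> CGImage? {
    guard let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil) else { return nil }
    let options: [CFString: Any] = [kCGImageSourceShouldAllowFloat: true]
    return CGImageSourceCreateImageAtIndex(source, 0, options as CFDictionary)
}

// Hypothetical final step, the missing piece being asked about:
// let ibl = try await EnvironmentResource(equirectangular: cgImage)
// arView.environment.lighting.resource = ibl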
0 replies · 0 boosts · 593 views · Nov ’25
Access Main Camera not working in visionOS 26.1
I downloaded the official sample project "Accessing the Main Camera", but found that it is not able to retrieve the camera feed on visionOS 26.1. After checking the debug logs, it seems the issue is caused by the system being unable to find the expected format. I tested on a device running visionOS 2, and the camera feed worked correctly, but only when using the sample code from the visionOS 2 version, not the current one. I also noticed that some of the APIs have changed between versions. Has anyone managed to successfully access the camera feed on visionOS 26.1?
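One way to narrow this down, assuming the failure really is the format lookup: dump whatever formats 26.1 actually reports before the sample picks one. A short sketch:

import ARKit

// Log every main-camera format the current OS offers; if the list is empty or
// differs from what the sample expects, that points at the format lookup.
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
print("supported main-camera formats: \(formats.count)")
for format in formats {
    print(format)
}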
4 replies · 0 boosts · 737 views · Nov ’25
ARFrame.sceneDepth not correctly registered with ARFrame.capturedImage for iPad Pro (6th Gen) for high resolution capture.
Hi team, I believe I’ve found a registration issue between ARFrame.sceneDepth and ARFrame.capturedImage when using high-resolution frame capture on a 2022 iPad Pro (6th gen). When enabling high-resolution capture:

if let highResFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    config.videoFormat = highResFormat
}
…
arView.session.captureHighResolutionFrame { ... }

the depth map provided by ARFrame.sceneDepth no longer aligns correctly with the corresponding high-resolution capturedImage. This misalignment results in consistently over-estimated distance measurements in my app (which relies on mapping depth to 2D pixel coordinates).

iPad Pro (6th gen): misalignment occurs only when capturing high-resolution frames.
iPhone 16 Pro: depth is correctly registered for both standard and high-resolution captures.

It appears the camera intrinsics, specifically the FOV, change between the “regular” resolution stream and the high-resolution capture on the iPad. My suspicion is that the depth data continues using the intrinsics of the lower-resolution stream, resulting in an unregistered depth-to-RGB mapping. Once I have the iPad in hand again, I will confirm whether camera.intrinsics or the FOV differ between the low-res and high-res frames.

Is this a known issue with high-resolution frame capture on the 2022 iPad Pro? If not, I’m happy to provide more thorough sample code. Thanks for your time!
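For when the iPad is back in hand, a small sketch for the intrinsics comparison described above (standard ARKit API, nothing assumed beyond an existing arView):

import ARKit
import RealityKit

// Print intrinsics and resolution for the streaming frame vs. a high-res capture.
func compareIntrinsics(arView: ARView) {
    if let streamFrame = arView.session.currentFrame {
        print("stream intrinsics:", streamFrame.camera.intrinsics,
              "resolution:", streamFrame.camera.imageResolution)
    }
    arView.session.captureHighResolutionFrame { frame, error in
        guard let frame else {
            print("high-res capture failed:", error ?? "unknown error")
            return
        }
        print("high-res intrinsics:", frame.camera.intrinsics,
              "resolution:", frame.camera.imageResolution)
    }
}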
0 replies · 0 boosts · 143 views · Nov ’25