Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Best approach for high-quality textured room reconstruction using ARKit / RoomPlan / Object Capture?
I am developing an iOS app that allows users to scan rooms, view the scans on device, and add notes. I need to preserve the actual geometry (odd angles, chamfers, fixtures), not simplified RoomPlan boxes. Are there any easy ways to incorporate high-quality texture mapping or PBR? Where is the documentation for scene reconstruction?
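A minimal starting-point sketch for getting the raw (non-boxed) geometry via ARKit scene reconstruction, assuming a LiDAR-capable device; texturing/PBR beyond environment texturing is not covered here:

import ARKit
import RealityKit

func startMeshReconstruction(on arView: ARView) {
    // Scene reconstruction requires a LiDAR-equipped device.
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) else {
        print("Scene reconstruction is not supported on this device")
        return
    }

    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .meshWithClassification  // raw mesh, not simplified boxes
    config.environmentTexturing = .automatic

    arView.session.run(config)
}

// ARMeshAnchor instances delivered through ARSessionDelegate carry the raw geometry
// (vertices, faces, classifications) that you can export or texture yourself.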
Replies: 1 · Boosts: 0 · Views: 462 · Activity: 2h
How to transition from spatial3D to spatial3DImmersive?
Hi, when viewing a spatial photo scene in the Apple Vision Pro Photos app, you can tap the immersive icon in the top right corner to transition from the window presenting the image as spatial3D to an immersive photo scene with spatial3DImmersive, where the window borders disappear. Could someone explain how to achieve that? I tried it, but once I transition from spatial3D to spatial3DImmersive I can still see a rectangle around the spatial image. Thanks.
Replies: 1 · Boosts: 0 · Views: 552 · Activity: 2d
I built Apple PHASE with Unity targeting visionOS, but reverb does not sound.
Environment versions: macOS 15.6.1, visionOS 26.0.1, Xcode 16.1 or 26.0.1, Unity 6000.2.9f1, Apple.Core 3.2.0, Apple.PHASE 1.2.7, PolySpatial 2.4.2.
With the above environment, after installing Apple.PHASE into Unity and building to a visionOS device, audio is available and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when enabled and their parameters are adjusted. What is required to make Early Reflection and Late Reverb take effect on a visionOS device build?
Actions taken:
- Created a SoundEvent.
- In the composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
- Attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
- Attached a PHASE Listener to the mainCamera and set the ReverbPreset to a value other than None.
- In Project Settings > Audio, set the Spatializer plugin to PHASE Spatializer.
- From there, built for visionOS.
Replies: 0 · Boosts: 0 · Views: 553 · Activity: 4d
ARView environment.lighting IBL from HDR file
I have an iOS app that can display a USDZ model downloaded from the Internet (and cached locally) via an ARView. I would like to light that model with an image-based light (IBL) also downloaded from the Internet. However, as far as I can tell, ARView can only create an IBL from a resource that has been compiled into the Xcode project and loaded with EnvironmentResource(named:in:) or EnvironmentResource.load(named:in:). Is there a way to create an EnvironmentResource from an HDRI via a file URL to use in ARView in iOS?
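Not a confirmed answer, but iOS 18's RealityKit appears to add an EnvironmentResource initializer that accepts an equirectangular CGImage, which could be fed from a downloaded HDR. A hedged sketch; the initializer name, signature, and HDR decode options are assumptions to verify against the current documentation:

import RealityKit
import ImageIO

@available(iOS 18.0, *)
func environmentResource(fromHDRAt url: URL) async throws -> EnvironmentResource {
    // Decode the downloaded HDR file into a CGImage (full-precision decode may need
    // additional CGImageSource options; not shown here).
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        throw CocoaError(.fileReadCorruptFile)
    }
    // Assumed iOS 18 initializer that builds an IBL from an equirectangular image.
    return try await EnvironmentResource(equirectangular: image)
}

// Usage (sketch): arView.environment.lighting.resource = try await environmentResource(fromHDRAt: localFileURL)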
Replies: 0 · Boosts: 0 · Views: 549 · Activity: 4d
Access Main Camera not working in visionOS 26.1
I downloaded the official sample project "Accessing the Main Camera", but it's not able to retrieve the camera feed on visionOS 26.1. Checking the debug logs, the issue seems to be that the system cannot find the expected format. I tested on a device running visionOS 2 and the camera feed worked correctly, but only when using the sample code from the visionOS 2 version, not the current one. I also noticed that some of the APIs have changed between versions. Has anyone managed to successfully access the camera feed on visionOS 26.1?
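Since the log points at a missing expected format, one thing worth trying is to enumerate whatever formats the device reports instead of assuming a fixed one. A rough sketch along the lines of the enterprise camera APIs (exact behaviour on 26.1 is the open question here):

import ARKit

func startMainCameraFeed() async throws {
    let session = ARKitSession()
    let provider = CameraFrameProvider()

    // Ask the system which formats it actually supports instead of hard-coding one.
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
    guard let format = formats.first else {
        print("No supported main-camera formats reported on this OS build")
        return
    }

    try await session.run([provider])

    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        // frame.sample(for: .left)?.pixelBuffer gives the camera image.
        _ = frame
    }
}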
Replies: 4 · Boosts: 0 · Views: 696 · Activity: 5d
ARFrame.sceneDepth not correctly registered with ARFrame.capturedImage on iPad Pro (6th gen) for high-resolution capture
Hi team, I believe I've found a registration issue between ARFrame.sceneDepth and ARFrame.capturedImage when using high-resolution frame capture on a 2022 iPad Pro (6th gen). When enabling high-resolution capture:
if let highResFormat = ARWorldTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
    config.videoFormat = highResFormat
}
…
arView.session.captureHighResolutionFrame { ... }
the depth map provided by ARFrame.sceneDepth no longer aligns correctly with the corresponding high-resolution capturedImage. This misalignment results in consistently over-estimated distance measurements in my app (which relies on mapping depth to 2D pixel coordinates). iPad Pro (6th gen): misalignment occurs only when capturing high-resolution frames. iPhone 16 Pro: depth is correctly registered for both standard and high-resolution captures. It appears the camera intrinsics, specifically the FOV, change between the "regular" resolution stream and the high-resolution capture on the iPad. My suspicion is that the depth data continues using the intrinsics of the lower-resolution stream, resulting in an unregistered depth-to-RGB mapping. Once I have the iPad in hand again, I will confirm whether camera.intrinsics or the FOV differs between the low-res and high-res frames. Is this a known issue with high-resolution frame capture on the 2022 iPad Pro? If not, I'm happy to provide more thorough sample code. Thanks for your time!
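For what it's worth, the intrinsics hypothesis can be checked by logging both frames' camera parameters side by side; a small sketch assuming an already-running ARSession:

import ARKit

func compareIntrinsics(session: ARSession) {
    // Intrinsics of the regular-resolution frame that sceneDepth is presumably registered to.
    if let current = session.currentFrame {
        print("standard res:", current.camera.imageResolution, current.camera.intrinsics)
    }

    // Intrinsics of the high-resolution capture.
    session.captureHighResolutionFrame { frame, error in
        guard let frame else {
            print("high-res capture failed:", error?.localizedDescription ?? "unknown")
            return
        }
        print("high res:", frame.camera.imageResolution, frame.camera.intrinsics)
        // If the two intrinsics matrices differ, the depth-to-pixel mapping needs to be
        // rescaled or re-projected for the high-resolution image rather than reused as-is.
    }
}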
Replies: 0 · Boosts: 0 · Views: 105 · Activity: 5d
visionOS Enterprise API Not Working
My development team admin requested the Enterprise API for camera access on the Vision Pro. We got that granted, received a license for usage, and got instructions with next steps for integrating it. Even when I download and run the sample project for "Accessing the Main Camera" and follow the exact instructions here: https://developer.apple.com/documentation/visionos/accessing-the-main-camera, I am unable to receive camera frames. I added the capabilities, created a new provisioning profile with this access, added the entitlement to the Info.plist and the entitlements file, replaced the dummy license file with the one we were sent, and have a matching bundle identifier and development certificate, but it is still not showing camera access for some reason. "Main Camera Access" shows up in our Signing & Capabilities tab, and we also added the NSMainCameraDescription to the Info.plist and allow access when opening the app. None of this works: not in my app, and not in the sample app that I downloaded and ran on the Vision Pro after replacing the dummy license file.
Replies: 3 · Boosts: 0 · Views: 326 · Activity: 5d
Where can we access the new enterprise license files mentioned in the WWDC session?
Hi everyone, I'm trying to verify something mentioned in the WWDC session "Explore enhancements to your spatial business app." At timestamp 3:36, the presenter states: "You can now access your enterprise license files directly within your Apple Developer account." I've checked every section of my Developer account, including:
• Membership and Agreements
• Certificates, Identifiers & Profiles
• App Store Connect
• Additional Resources
• Account settings
…but no UI or section exposes these enterprise license files. Since the Vision Entitlement Services framework actively checks these licenses (for example, mainCameraAccess entitlement approval), I need to confirm the location of the new license file. Could someone from Apple, or anyone who has seen this feature, clarify:
1. Where exactly do these enterprise license files appear in the Developer account UI, or
2. Whether this feature has not rolled out yet?
Any guidance or screenshots from those who have access would be invaluable. Thanks!
Replies: 1 · Boosts: 0 · Views: 201 · Activity: 1w
Spatial-backdrop standards process
Apple's WWDC video What's new for the spatial web says the spatial-backdrop markup may change as it goes through the standards process (at the 27:26 mark). I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates from Apple and follow the standards progress. Is there anywhere I can track this standards process? Has Apple announced any feature updates or news on spatial-backdrops?
Replies: 0 · Boosts: 0 · Views: 54 · Activity: 1w
Can't establish spatial connection after visionOS update
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro. This was working fine before the update. To test, I've created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue: the Vision Pro is discovered but won't connect. Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it? Thanks!
Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291):+[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]: Failed to find any binary shader archive
Replies: 0 · Boosts: 0 · Views: 59 · Activity: 1w
Blender Geometry Nodes to Reality Composer Pro
Hello! Back from last week's amazing visit to Cupertino for the Game Dev session and diving back into Vision Pro experimentation. I've exported a simple geometry-nodes animation test from Blender for use in RCP, with intended output to Vision Pro. I've attached a few screenshots showing the node setup and how it animates over time. I select the Cube mesh and export as .usdc with animation. In the Finder via Quick Look, I can actually see it working! If I try exporting as .usdz, however, I'm not seeing any animation in the Finder preview. Next, I import the .usdc file into RCP and add an Animation Library component to the cube mesh, but no animation is selectable, even though I see animation playing back in the preview. Next, I import the .usdc into Maya (via a proper USD Stage pipeline - I'm learning to be USD compliant for authoring!) to verify whether the animation works, and it does. What step(s) am I missing to get this working in Reality Composer Pro? My goal is to experiment with animating these geometry node instances - along with color animation if possible - over to Vision Pro for full-scale, immersive presentation. Of particular note, I am not a programmer, so I am trying my best to brute-force this the only way I currently know how, by keyframe animation and importing through Reality Composer Pro. I realize that, ideally, I should be learning how to leverage the code side so I can start programmatically controlling my 3D entities (with animation), but I need more hand-holding and real-world examples to help me get there. Thx!
Replies: 1 · Boosts: 0 · Views: 280 · Activity: 1w
How to integrate Apple Immersive Video into the app you are developing.
Hello, Let me ask you a question about Apple Immersive Video. https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/ I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format. First, I would like to know if it is possible to integrate Apple Immersive Video into an app. Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app? It would be great if you could also share any helpful website resources. I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos. As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements. I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed? I’ve asked several questions, and that’s all for now. Thank you in advance!
Replies: 2 · Boosts: 1 · Views: 723 · Activity: 1w
Object tracking capability not available
Hi there, I received an enterprise license file to enable the enhanced object tracking configuration for the Vision Pro. My account is part of the team that was granted permission by Apple to use this capability. Unfortunately, although I followed the guide, I cannot find the Object Tracking capability when I try to add it to my project. Other capabilities, like Main Camera on the Vision Pro, are listed, but not Object Tracking. I am using Xcode 26.1 and visionOS 26.1. What am I missing here? Thanks in advance, Matthias
Replies: 0 · Boosts: 0 · Views: 84 · Activity: 1w
Occlusion issues in Immersive Space - Breaking User Input Interaction
I'm developing a custom gesture-based visionOS project that uses hand tracking with collision detection spheres on fingers to register user interactions through collision components. I'm experiencing a critical occlusion issue where collision detection spheres are intermittently occluded by the background/depth buffer, causing fingers to pass through the 3D model entities without registering interactions.
Detailed description: I have added 3D entities in an immersive scene with collision spheres attached to fingers for detecting user interactions. Each sphere has a CollisionComponent with a sphere shape, proper collision masks and groups configured, and real-time position updates from hand joint transforms. Each entity has InputTarget components to register collisions.
The issue: When users move their fingers to the entity to interact, some collision spheres (particularly on the pinkie and ring fingers) become occluded and pass directly through the 3D model without triggering collision events. Meanwhile, other fingers (like the index finger) continue to work correctly. This appears to be a depth perception/z-buffer issue between the model entity and the hand tracking collision spheres.
Questions:
1. Is there a recommended approach for maintaining consistent depth ordering between hand-tracking entities and 3D models in immersive spaces to prevent occlusion issues?
2. Should I be using AnchorEntities to anchor the entity to a plane or world position to establish a more stable depth reference?
3. Are there specific RenderingComponent or material settings that could help ensure collision entities maintain their depth priority and don't get occluded?
4. Could this be related to z-fighting when collision spheres and entity geometry occupy similar depth ranges? If so, what's the recommended depth bias approach?
5. Is there a better architectural approach for implementing interactions with custom hand gesture tracking that avoids these depth perception issues?
What would help: implementation guidance for ensuring reliable collision detection between hand-tracked entities and 3D models through custom gestures; best practices for depth management in immersive spaces with custom hand gesture tracking; sample code demonstrating stable hand-to-object interaction patterns; and information about whether this is a known limitation or whether there are specific APIs I should be leveraging.
This issue is significantly impacting the reliability of our app experience, as users cannot consistently interact with all model components. Any guidance from Apple engineers or developers who have solved similar depth/occlusion challenges would be greatly appreciated. Additional context: this is for a productivity-focused application where accuracy and reliability are critical. Thank you for any assistance!
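For reference, a minimal sketch of the fingertip-collider setup described above using HandTrackingProvider, with world-space repositioning on every anchor update; the joint list, sphere radius, and right-hand-only restriction are illustrative assumptions, and this does not by itself resolve the occlusion question:

import ARKit
import RealityKit

// Hypothetical helper: one collision sphere per tracked fingertip, repositioned on
// every hand anchor update so collisions are evaluated in world space.
final class FingertipColliders {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()
    private var spheres: [HandSkeleton.JointName: Entity] = [:]

    func start(in root: Entity) async throws {
        for joint: HandSkeleton.JointName in [.indexFingerTip, .ringFingerTip, .littleFingerTip] {
            let sphere = Entity()
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.008)]))
            sphere.components.set(InputTargetComponent())
            root.addChild(sphere)
            spheres[joint] = sphere
        }

        try await session.run([handTracking])
        for await update in handTracking.anchorUpdates {
            let hand = update.anchor
            // Restrict to one hand in this sketch so joint names map to one sphere each.
            guard hand.chirality == .right, hand.isTracked, let skeleton = hand.handSkeleton else { continue }
            for (jointName, sphere) in spheres {
                let joint = skeleton.joint(jointName)
                guard joint.isTracked else { continue }
                // World transform = hand anchor transform * joint transform.
                let world = hand.originFromAnchorTransform * joint.anchorFromJointTransform
                sphere.setTransformMatrix(world, relativeTo: nil)
            }
        }
    }
}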
Replies: 0 · Boosts: 0 · Views: 207 · Activity: 1w
ARSkeleton3D modelTransform always returns nil
I use ARKit for motion tracking. I get the skeleton joint coordinates and use them for animation. I didn't make any changes to the code, but after updating the iOS version from 18 to 26, modelTransform now always returns nil. https://developer.apple.com/documentation/arkit/arskeleton3d/modeltransform(for:) For example: bodyAnchor.skeleton.modelTransform(for: .init(rawValue: "head_joint")), where bodyAnchor is an ARBodyAnchor. I see the default skeleton on the screen, but I can't get the coordinates out of it. I'm using an example from Apple's WWDC presentation. https://developer.apple.com/documentation/arkit/capturing-body-motion-in-3d Are there any changes in the API, or is this just a bug?
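A quick diagnostic sketch that compares the raw-string lookup with the predefined ARSkeleton.JointName constant and dumps the joint names the current OS actually reports (whether iOS 26 renamed "head_joint" is an open question, not something confirmed here):

import ARKit

func logHeadTransform(for bodyAnchor: ARBodyAnchor) {
    let skeleton = bodyAnchor.skeleton

    // Predefined joint name instead of a raw string.
    if let head = skeleton.modelTransform(for: .head) {
        print("head (predefined name):", head)
    } else {
        print("modelTransform(for: .head) returned nil")
    }

    // Compare against the raw-string lookup the older sample used.
    let raw = skeleton.modelTransform(for: ARSkeleton.JointName(rawValue: "head_joint"))
    print("head (raw string):", raw as Any)

    // List the joint names this OS version actually reports.
    print(skeleton.definition.jointNames)
}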
Replies: 5 · Boosts: 0 · Views: 670 · Activity: 1w
Developer Strap Gen 2 - Only USB2 Speeds
I am testing out the Gen 2 developer strap on my M2 Vision Pro, and I have only been able to get USB 2 speeds when connecting it to my M3 Max MacBook Pro. I used the official Apple Thunderbolt 4 cable, which does get Thunderbolt speeds with my T7 Touch drive. Has anyone figured out a solution for this issue? The Gen 2 developer strap does advertise 20 Gb/s speeds.
Replies: 5 · Boosts: 2 · Views: 1.2k · Activity: 1w
PSVR2 controller button quirks
I have an open Feedback conversation with Apple on this topic, but I am curious whether others have run into this, or want to try out my sample code in their setup. There are two APIs for reading controller buttons, axes, and D-pads: GCPhysicalInputProfile and GCControllerLiveInput. There are inconsistencies in behaviour between the two. Apple recommends we use GCControllerLiveInput; however, some capabilities of these controllers are only accessible through GCPhysicalInputProfile, as I'll discuss below.
- PSVR2 R2/L2 buttons (the triggers) have analogue force input values. These can only be accessed on GCPhysicalInputProfile.
- PSVR2 thumbstick direction values are read through "axes" on GCPhysicalInputProfile, but only through "dpads" on GCControllerLiveInput.
- On both GCPhysicalInputProfile and GCControllerLiveInput, all pressed events of all buttons fire properly using the generic aliases (Trigger, Grip, Menu, Right Thumbstick, Left Thumbstick, Right Button A & B (Circle & Cross), Left Button A & B (Triangle and Square)). Apple reserves the system button as the equivalent of a home button for the OS.
- On GCPhysicalInputProfile, touch events are fired when the button is also pressed, but not for touch-only input.
- On GCControllerLiveInput, touch events only work for the following buttons: Left Thumbstick, Right Thumbstick, Right Button A (Circle), and Right Button B (Cross). But the Right Button B touch event isn't labelled correctly; it fires as the Right Button A event.
I observed this inside ALVR, which uses a polling-based approach to event processing: https://github.com/alvr-org/alvr-visionos/blob/17b5968f9d894944b53e97134b39dfce0993302a/ALVRClient/WorldTracker.swift#L301
To see this in a very simple app, I used the Apple example TrackingAccessories application: https://developer.apple.com/documentation/ARKit/tracking-accessories-in-volumetric-windows
I've attached the code that replaces the AccessoryTrackingModel class. I added code that prints out what is touched/pressed; see the trackAllConnectedSpatialControllers method: https://github.com/svrc/TrackingAccessories
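For a smaller repro than the full TrackingAccessories sample, here is a rough GCPhysicalInputProfile-only sketch that logs pressed/touched/value changes for every element; which elements actually report isTouched on PSVR2 is exactly the inconsistency under discussion, so the output is device-dependent:

import GameController

func observeController(_ controller: GCController) {
    let profile = controller.physicalInputProfile

    // Fires whenever any element (button, axis, dpad) changes value.
    profile.valueDidChangeHandler = { _, element in
        if let button = element as? GCControllerButtonInput {
            print("\(button.localizedName ?? "button"): pressed=\(button.isPressed) touched=\(button.isTouched) value=\(button.value)")
        } else if let dpad = element as? GCControllerDirectionPad {
            print("\(dpad.localizedName ?? "dpad"): x=\(dpad.xAxis.value) y=\(dpad.yAxis.value)")
        } else if let axis = element as? GCControllerAxisInput {
            print("\(axis.localizedName ?? "axis"): \(axis.value)")
        }
    }
}

// Usage (sketch):
// NotificationCenter.default.addObserver(forName: .GCControllerDidConnect, object: nil, queue: .main) { note in
//     if let controller = note.object as? GCController { observeController(controller) }
// }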
Replies: 4 · Boosts: 0 · Views: 571 · Activity: 1w