Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there is a general guideline: "we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space" (https://developer.apple.com/videos/play/wwdc2024/10186/?time=147). Is there a revised recommendation for the M5 Apple Vision Pro?
2
1
767
Nov ’25
Game Controller Input Limitations in visionOS Volumetric Windows - Need Clarification
Hello Apple Developer Community,

I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices.

🧩 Issue Summary
When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs is available to the app. The remaining inputs appear to be reserved by the system for UI navigation.

✅ Working Inputs (Volumetric Window)
- D-Pad (all directions)
- L3 (left thumbstick button click)
- R3 (right thumbstick button click)
- Menu button
- Options button

❌ Not Working Inputs (Volumetric Window)
- Left thumbstick analog movement (used for UI scrolling instead)
- Right thumbstick analog movement (used for UI scrolling instead)
- Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y)
- Shoulder buttons (L1, R1)
- Triggers (L2, R2)

Key observation: when moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, the face buttons appear to be reserved for system UI interactions.

⚙️ Implementation Details
I'm using the standard GameController framework (a sketch of this registration appears after this post):
- Connect to the controller via GCController.controllers()
- Access the extendedGamepad profile
- Set up valueChangedHandler and pressedChangedHandler for all inputs
- Handlers confirmed registered via logging

Working inputs (D-Pad, L3, R3) trigger immediately and consistently; non-working inputs (thumbsticks, face buttons) never trigger.

🧠 Critical Finding: ImmersiveSpace Works Perfectly
When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly:
- ✅ Both thumbsticks provide full analog input
- ✅ All face buttons trigger their handlers
- ✅ All shoulder buttons and triggers work correctly
- ✅ 100% success rate with no intermittent issues

This suggests the issue isn't in my code, but in how visionOS handles controller input differently between volumetric windows and ImmersiveSpace.

🧪 Test Environment
I created a minimal test project (Controller-Playground) to isolate the issue:
- A simple ControllerTester class that registers all GameController handlers
- A visual UI showing real-time input state
- No game logic, RealityKit physics, or other complexity

Results:
- In a volumetric window: only D-Pad, L3, R3, Menu, and Options work
- In an ImmersiveSpace: all inputs work perfectly

This confirms the limitation exists at the visionOS platform level, not in app code.

🧰 Attempted Workarounds
I tried the following without success:
- Setting GCSupportsControllerUserInteraction = false in Info.plist
- Setting UIRequiresFullScreen = true
- Changing window styles (.plain, .volumetric)
- Polling vs. handler-based input approaches
- Various threading models (MainActor, separate thread)

Result: the only way to enable full controller support is to switch to an ImmersiveSpace.

❓ Questions for Apple
1. Is this input reservation behavior in volumetric windows intended and documented?
2. Are game controllers expected to have limited functionality in volumetric windows, with full functionality reserved for ImmersiveSpace?
3. Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support?
4. Where can I find official documentation about controller input differences between window types?
5. Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows?

🎯 Impact
This limitation has a significant effect on game design and architecture:
- Volumetric windows offer a multitasking-friendly, less immersive experience
- ImmersiveSpace provides full controller support but may be more immersive than some games require
- Games that only need basic D-Pad and button input can work fine in volumetric windows
- Games requiring analog sticks or face buttons must currently use an ImmersiveSpace

It would be very helpful if Apple could clarify or point to existing documentation on controller input handling across visionOS window types. If such documentation doesn't exist yet, it would be valuable to include this information in future developer guides or best-practice documents.

🕹 Current Workaround
For now, I'm using:
- D-Pad for character movement (digital 8-direction)
- R3 (right stick click) as a substitute for the "X" button

This setup allows the game to function within a volumetric window, though full controller support still requires an ImmersiveSpace.

📄 Request
If this is expected behavior, I may simply have missed the relevant documentation — please point me to any existing resources that explain this design. If there isn't one yet, it would be great if future visionOS documentation could:
- Clearly outline controller input behavior across window types
- Provide guidance on when to use volumetric windows vs. ImmersiveSpace for games
- Consider adding an API option to request full controller access when appropriate

If this is not expected behavior, I'm happy to file a detailed bug report with sample code.

💻 System Information
- visionOS: latest simulator
- Xcode: latest version
- Controller: Sony DualSense
- Framework: GameController (standard extendedGamepad profile)
- Test project: minimal reproducible example available

Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.
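For reference, a minimal Swift sketch of the handler registration described above (function names, button choices, and prints are illustrative, not the poster's actual code):

```swift
import GameController

// Register handlers on the extendedGamepad profile, as described above.
func registerHandlers(for controller: GCController) {
    guard let gamepad = controller.extendedGamepad else { return }

    // D-Pad: reported to fire reliably in both volumetric windows
    // and ImmersiveSpace.
    gamepad.dpad.valueChangedHandler = { _, xValue, yValue in
        print("dpad: \(xValue), \(yValue)")
    }

    // Left thumbstick: reported to never fire in a volumetric window
    // (the system uses it for UI scrolling), but works in ImmersiveSpace.
    gamepad.leftThumbstick.valueChangedHandler = { _, xValue, yValue in
        print("left stick: \(xValue), \(yValue)")
    }

    // Face button (Cross/A): also reported not to fire in a volumetric window.
    gamepad.buttonA.pressedChangedHandler = { _, _, isPressed in
        print("cross/A pressed: \(isPressed)")
    }
}

// Register for every currently connected controller.
GCController.controllers().forEach(registerHandlers(for:))
```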
1
0
562
Oct ’25
[xrsimulator] Exception thrown during compile: cannotGetRkassetsContents
My friend cannot build my visionOS project in the simulator. He gets the following error:

Error: [xrsimulator] Exception thrown during compile: cannotGetRkassetsContents(path: "/Users/path/to/Packages/RealityKitContent/Sources/RealityKitContent/RealityKitContent.rkassets")

In Xcode, he is able to open the RealityKitContent package in Reality Composer Pro by clicking the Package.realitycomposerpro file. No warnings related to this error show up in RCP either, and all scenes appear to be usable and navigable there. The error only comes up when he builds the project in Xcode (⌘B). There is no other information about it in the Report Navigator's build logs, and it is always followed by this second error:

Error: Tool exited with code 1

Yikes, please help!
1
1
506
Feb ’25
Spaceship sample code does not compile
I'm getting the following error message when compiling the Apple-provided Spaceship game sample for the Apple Vision Pro. I've already tried deleting derived data, resetting the package cache, and restarting Xcode, but I still get the following error:

[xrsimulator] Exception thrown during compile: Cannot get rkassets content for path /Users/myoungkang/Downloads/CreatingASpaceshipGame/Packages/Studio/Sources/Studio/Studio.rkassets because 'The file “Studio.rkassets” couldn’t be opened because you don’t have permission to view it.'
error: Tool exited with code 1
0
1
68
Apr ’25
Please provide access to face tracking blend shapes on visionOS
Apple, please provide access to face tracking blend shapes on visionOS, just like you do on iOS. You have the best eye and face tracking implementation on the market; please let us use it. There is a sizable audience who will buy the headset just for this. I personally know multiple people who are not buying the headset simply because you locked those features out. No raw camera access is needed, just abstracted blend shape values. You will make the headset so much more useful if you do this simple thing.
1
1
134
Jun ’25
Getting to MeshAnchor.MeshClassification from MeshAnchor?
I am working with MeshAnchors and am having trouble getting to the classification of the triangles/faces. This post references MeshAnchor.Geometry, and that struct does have a property named "classifications", but it is of type GeometrySource. I cannot find any classification information in GeometrySource. Am I missing something there? I think I am looking for something of type MeshAnchor.MeshClassification, but I cannot find any struct with this as a property.
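One hedged possibility, assuming the classifications GeometrySource stores one UInt8 per face (as iOS ARKit's ARMeshAnchor does) and that MeshAnchor.MeshClassification has an integer raw value: read the values straight out of the source's Metal buffer.

```swift
import ARKit

// Hedged sketch: map the raw per-face bytes in the classifications
// GeometrySource back to MeshAnchor.MeshClassification values.
func faceClassifications(of geometry: MeshAnchor.Geometry) -> [MeshAnchor.MeshClassification] {
    guard let source = geometry.classifications else { return [] }
    let base = source.buffer.contents()
    return (0..<source.count).map { index in
        let address = base + source.offset + index * source.stride
        let rawValue = address.assumingMemoryBound(to: UInt8.self).pointee
        return MeshAnchor.MeshClassification(rawValue: Int(rawValue)) ?? .none
    }
}
```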
3
0
1.3k
Feb ’25
Force quit apps on Vision Pro simulator
How do you force quit apps on the Vision Pro simulator? I've read online that you can double press the digital crown on a real Vision Pro device, but there is no such button on the simulator. So far I've tried:
- Pressing the home button twice (⇧⌘H)
- Pressing the Siri button twice (⌥⇧⌘H)
Neither of them works. I'm on Xcode 15.0 beta 5 (15A5209g) and visionOS 1.0 beta 2 (21N5207e).
6
2
3.1k
Feb ’25
Is ARGeoTrackingConfiguration always more accurate than ARWorldTrackingConfiguration for world scale AR?
We are working on a world-scale AR app that leverages the device location and heading to place objects in the streets, so that they are correctly and stably anchored to certain locations. Since geo-tracking imagery is only available in certain cities and areas, we are trying to figure out how to fall back when geo-tracking becomes unavailable as the device moves away, while still retaining good AR camera accuracy. We might need to come up with some algorithm using the device GPS to line up the ARCamera with our objects. Question: does geo-tracking always provide accuracy greater than or equal to that of world tracking for a GPS outdoor AR experience? If so, we can simply use ARGeoTrackingConfiguration the entire time and rely on the ARView keeping itself aligned. Otherwise, we need to switch between it and ARWorldTrackingConfiguration when geo-tracking is not available and/or its accuracy is low, then roll our own algorithm to keep the camera aligned. Thanks.
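As a starting point, here is a hedged sketch (not an official recommendation) of checking geo-tracking availability up front and monitoring its status, which is one way to structure the fallback decision:

```swift
import ARKit

// Run geo tracking where available, fall back to world tracking otherwise.
func configureSession(_ session: ARSession) {
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        DispatchQueue.main.async {
            if available {
                session.run(ARGeoTrackingConfiguration())
            } else {
                session.run(ARWorldTrackingConfiguration())
            }
        }
    }
}

// ARSessionDelegate: state and accuracy change as the device moves, so this
// callback could drive a switch to a custom GPS-based alignment.
func session(_ session: ARSession, didChange geoTrackingStatus: ARGeoTrackingStatus) {
    print("geo state: \(geoTrackingStatus.state), accuracy: \(geoTrackingStatus.accuracy)")
}
```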
3
0
1.1k
Oct ’25
Issue with tracking multiple images using ARKit on visionOS
We are using the ARKit image tracking feature on visionOS 2.0 with three pre-registered images. Image tracking works, but only one image is actively tracked at a time. When more than one target image is visible to the camera, the system has difficulty detecting and tracking the others. Is this the expected behavior on visionOS, or is there something we need to do to resolve this issue?
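For context, a minimal sketch of the setup in question, assuming the visionOS ARKit API and a hypothetical reference-image group named "TargetImages" in the asset catalog:

```swift
import ARKit

let referenceImages = ReferenceImage.loadReferenceImages(inGroupNamed: "TargetImages")
let imageTracking = ImageTrackingProvider(referenceImages: referenceImages)
let arkitSession = ARKitSession()

func trackImages() async throws {
    try await arkitSession.run([imageTracking])
    for await update in imageTracking.anchorUpdates {
        // In practice, only one anchor at a time reports isTracked == true.
        print("\(update.anchor.referenceImage.name ?? "unnamed"): tracked = \(update.anchor.isTracked)")
    }
}
```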
4
1
510
Mar ’25
visionOS 2 - Screen Capture with passthrough
We're trying to switch from main camera access with ARKit to screen capture with passthrough, but we're facing some issues that are complicated to debug. We set up a broadcast extension and added logs to the SampleHandler, but we get nothing in the console, nor does the recording start. We also set up the picker, and we can see our extension in Control Center as one of the choices, but tapping start results in it stopping in less than one second. The only messages we see in Console.app are rather contradictory:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license

followed immediately by:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
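For reference, a minimal sketch of the picker setup described above (the extension bundle identifier is hypothetical):

```swift
import UIKit
import ReplayKit

// System broadcast picker pointed at our broadcast upload extension.
let picker = RPSystemBroadcastPickerView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
picker.preferredExtension = "com.example.MyApp.BroadcastExtension" // hypothetical ID
picker.showsMicrophoneButton = false
```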
2
1
452
1w
How to fix "Sample 0 missing LiDAR point cloud!" error?
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files. When I load the files, the error "Sample 0 missing LiDAR point cloud!" shows up for each individual sample. Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data that can be extracted, e.g. with heif-convert on my Mac. Comparing a .heic I created to one from ObjectCaptureSession (which doesn't produce the LiDAR warning), I noticed the only difference was in the HEIC metadata. So my questions are: Is this the information missing from my manual capture that causes the warning? Can I somehow add it in an AVCaptureSession? Would it allow better photogrammetry results?
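For context, a hedged sketch of the capture configuration involved, assuming a LiDAR-capable device is already wired into the AVCaptureSession: depth delivery has to be enabled on both the photo output and the per-photo settings for depth to be embedded in the HEIC.

```swift
import AVFoundation

// Enable depth delivery on the output, then request HEVC/HEIC photos that
// embed the depth data.
func makePhotoSettings(for output: AVCapturePhotoOutput) -> AVCapturePhotoSettings {
    output.isDepthDataDeliveryEnabled = output.isDepthDataDeliverySupported

    let settings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
    settings.isDepthDataDeliveryEnabled = output.isDepthDataDeliveryEnabled
    settings.embedsDepthDataInPhoto = true
    return settings
}
```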
2
0
320
3w
Partial Occlusion Material
I am looking for a material that functions the same way Occlusion Material does, except that it only partially occludes whatever is behind it. One way I thought of doing this was to change the opacity of the entity covered in Occlusion Material; however, this did not change anything. Please let me know if this is possible.
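One hedged idea, not a true partial OcclusionMaterial (RealityKit does not appear to offer one): a semi-transparent unlit material blends with the virtual content behind it instead of hiding it completely, which may approximate the effect within a scene.

```swift
import RealityKit
import UIKit

// Semi-transparent unlit "dimmer": content behind it shows through at
// reduced intensity rather than being fully occluded.
var dimmer = UnlitMaterial(color: .black)
dimmer.blending = .transparent(opacity: .init(floatLiteral: 0.5))
let panel = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.3),
                        materials: [dimmer])
```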
2
1
137
Apr ’25
Why VideoMaterial can't show transparency on Apple Vision Pro
https://developer.apple.com/documentation/realitykit/videomaterial The documentation says: "Video materials support transparency if the source video’s file format also supports transparency." I have a transparency video (Hand.mov, HEVC with alpha). I can show the video with a transparent background correctly in the Vision Pro simulator, but on the physical device the video has a black background. I'm sure the video format is fine, because I can get the texture from the video and display it on an UnlitMaterial. How can I show the transparency video correctly with RealityKit/VideoMaterial?
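For reference, a minimal sketch of the setup in question (assuming Hand.mov is bundled with the app):

```swift
import RealityKit
import AVFoundation

// VideoMaterial from an HEVC-with-alpha asset; per the docs, transparency
// should carry through from the source file.
let url = Bundle.main.url(forResource: "Hand", withExtension: "mov")!
let player = AVPlayer(url: url)
let material = VideoMaterial(avPlayer: player)
let plane = ModelEntity(mesh: .generatePlane(width: 0.4, height: 0.4),
                        materials: [material])
player.play()
```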
2
0
239
Dec ’25
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes input video from UVC, processes it using Metal, and eventually produces an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but MTLTexture -> CGImage -> TextureResource only achieves 7 fps... Is there any way I can make it faster?
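One hedged possibility, assuming the visionOS 2 LowLevelTexture API is available: copy each processed MTLTexture into a texture RealityKit samples directly, avoiding the MTLTexture -> CGImage -> TextureResource round trip entirely.

```swift
import RealityKit
import Metal

// Create a RealityKit texture backed by a GPU-writable Metal texture.
func makeStreamingTexture(width: Int, height: Int) throws -> (TextureResource, LowLevelTexture) {
    let descriptor = LowLevelTexture.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width, height: height,
        textureUsage: [.shaderRead, .shaderWrite])
    let lowLevel = try LowLevelTexture(descriptor: descriptor)
    let resource = try TextureResource(from: lowLevel)
    return (resource, lowLevel)
}

// Per frame: blit the processed frame on the GPU, with no CPU round trip.
func write(frame: MTLTexture, to lowLevel: LowLevelTexture, queue: MTLCommandQueue) {
    guard let commandBuffer = queue.makeCommandBuffer() else { return }
    let target = lowLevel.replace(using: commandBuffer)
    guard let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: frame, to: target)
    blit.endEncoding()
    commandBuffer.commit()
}
```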
2
0
419
Oct ’25
Request File Access from Unity for Apple Vision Pro
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin, not the PolySpatial package). So far, I've tried UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro, and neither works there), and then implemented my own naive file browser that at least allows me to view directories visible from the app sandbox. This is of course very limited: gray folders can't be accessed, and the only three available ones don't contain anything where a user would put files through the Files app. I know that an app can request access to these "Files & Folders" permissions. So my question is: is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
5
0
343
Oct ’25
Displaying multiple immersive movies in spheres in an immersive environment
In visionOS, I'm trying to create an immersive environment featuring several spheres in which immersive movies are visible. I'm starting from sample code that creates a sphere, sets an immersive movie as its material, and opens it as an immersive environment. This works fine. But if I create a sphere in an open immersive environment using Reality Composer Pro and set its material to an immersive movie, I can see the movie on the sphere while I am outside of it, but if I try to get inside the sphere, it disappears. What would be the right way to do this?
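A hedged sketch of the usual workaround: RealityKit culls back faces, so a sphere's material vanishes once the viewer is inside it; inverting the sphere with a negative scale flips its faces inward (videoMaterial here stands in for the immersive movie material):

```swift
import RealityKit

let sphere = ModelEntity(mesh: .generateSphere(radius: 5))
sphere.model?.materials = [videoMaterial] // the immersive movie material
sphere.scale = SIMD3<Float>(1, 1, -1)     // negative scale: view from inside
```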
1
1
696
Oct ’25
Xcode Cloud builds don't work with *.usdz files in a RealityComposer package
In sessions like "Compose interactive 3D content in Reality Composer Pro," RealityKit engineers recommended working with Reality Composer Pro to create RealityKit packages to embed in our RealityKit Xcode projects. Comparing the workflow to Unity/Unreal, I can see the reasoning, since it is nice to prepare scenes, materials, and assets visually. But when we also want to run an Xcode Cloud CI/CD pipeline, this seems to come into conflict: when adding a basic *.usdz to the RealityKitContent.rkassets folder, every build we run on Xcode Cloud fails with:

Compile Reality Asset RealityKitContent.rkassets
❌ realitytool requires Metal for this operation and it is not available in this build environment

I have also found a related forum post here, but it was specifically about compiling a *.skybox.
4
1
477
Sep ’25
Issue in TabletopKit Sample scene + shared immersive space
Hello Community, I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game in which participants can engage with their persona avatars. I tried to use TabletopKit for this setup but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial persona anymore, despite the green SharePlay button indicating that the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both placed in the default seat. I tried to remove the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but at a totally wrong place. It seems like it isn't just my environment occluding the seats, but a flaw in the seating process as well. When I tried the FaceTime feature in the simulator on the official test scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template." So my questions are: What needs to be changed so that TabletopKit can handle seating correctly? How can I correctly use immersive scenes in combination with TabletopKit? I kept my implementation as close as possible to the TabletopKit example, so I think it will be enough to look into this codebase for now. I debugged the positions of the seats and they are placed correctly in front of their equipment; the personas are just not placed on them.
6
4
646
Mar ’25
How to integrate Apple Immersive Video into the app you are developing.
Hello, Let me ask you a question about Apple Immersive Video. https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/ I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format. First, I would like to know if it is possible to integrate Apple Immersive Video into an app. Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app? It would be great if you could also share any helpful website resources. I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos. As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements. I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed? I’ve asked several questions, and that’s all for now. Thank you in advance!
2
1
758
Nov ’25