visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post

Replies

Boosts

Views

Activity

Occlusion issues in Immersive Space - Breaking User Input Interaction
I'm developing a custom gesture-based visionOS project that uses hand tracking with collision-detection spheres on the fingers to register user interactions through collision components. I'm experiencing a critical occlusion issue where the collision spheres are intermittently occluded by the background/depth buffer, causing fingers to pass through 3D model entities without registering interactions.

Detailed Description: I have added 3D entities in an immersive scene with collision spheres attached to the fingers for detecting user interactions. Each sphere has:
A CollisionComponent with a sphere shape
Proper collision masks and groups configured
Real-time position updates from the hand joint transforms
Each entity has an InputTargetComponent to register collisions.

The Issue: When users move their fingers to the entity to interact, some collision spheres (particularly on the pinkie and ring fingers) become occluded and pass directly through the 3D model without triggering collision events, while other fingers (like the index finger) continue to work correctly. This appears to be a depth perception/z-buffer issue between the model entity and the hand-tracking collision spheres.

Questions:
Is there a recommended approach for maintaining consistent depth ordering between hand-tracking entities and 3D models in immersive spaces to prevent occlusion issues?
Should I be using AnchorEntities to anchor the entity to a plane or world position to establish a more stable depth reference?
Are there specific rendering-component or material settings that could help ensure collision entities maintain their depth priority and don't get occluded?
Could this be related to z-fighting when collision spheres and entity geometry occupy similar depth ranges? If so, what's the recommended depth-bias approach?
Is there a better architectural approach for implementing interactions with custom hand-gesture tracking that avoids these depth perception issues?

What Would Help:
Implementation guidance for ensuring reliable collision detection between hand-tracked entities (via custom gestures) and 3D models.
Best practices for depth management in immersive spaces with custom hand-gesture tracking.
Sample code demonstrating stable hand-to-object interaction patterns.
Information about whether this is a known limitation or whether there are specific APIs I should be leveraging.

This issue is significantly impacting the reliability of our app experience, as users cannot consistently interact with all model components. Any guidance from Apple engineers or developers who have solved similar depth/occlusion challenges would be greatly appreciated.

Additional Context: This is for a productivity-focused application where accuracy and reliability are critical. Thank you for any assistance!
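For illustration, here is a minimal sketch of the fingertip collision-sphere setup the post describes, assuming a running ARKitSession with hand-tracking authorization. The entity name fingerTipSphere and the collision group value are placeholders; the sphere still needs to be added to the RealityView content and paired with a CollisionEvents.Began subscription.

    import ARKit
    import RealityKit

    let session = ARKitSession()
    let handTracking = HandTrackingProvider()

    // Invisible fingertip sphere used purely for collision detection.
    let fingerTipSphere = ModelEntity(
        mesh: .generateSphere(radius: 0.01),
        materials: [UnlitMaterial(color: .clear)]
    )
    fingerTipSphere.components.set(CollisionComponent(
        shapes: [.generateSphere(radius: 0.01)],
        isStatic: false,
        filter: CollisionFilter(group: CollisionGroup(rawValue: 1 << 1), mask: .all)
    ))

    func runHandTracking() async throws {
        try await session.run([handTracking])
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.chirality == .right, anchor.isTracked,
                  let tip = anchor.handSkeleton?.joint(.indexFingerTip),
                  tip.isTracked else { continue }
            // World-space transform = origin_from_anchor * anchor_from_joint.
            let world = anchor.originFromAnchorTransform * tip.anchorFromJointTransform
            fingerTipSphere.setTransformMatrix(world, relativeTo: nil)
        }
    }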
0
0
12
6h
How to integrate Apple Immersive Video into the app you are developing.
Hello, let me ask a question about Apple Immersive Video: https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/

I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into the Apple Immersive Video format.

First, I would like to know if it is possible to integrate Apple Immersive Video into an app. Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app? It would be great if you could also share any helpful website resources.

I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.

As I mentioned earlier, I'm planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements. I'm also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?

I've asked several questions, and that's all for now. Thank you in advance!
1
1
455
7h
Severe Performance Issue with URLSessionConfiguration.background on Vision Pro (10× slower than default config)
Hi everyone, I've run into a consistent issue on multiple Apple Vision Pro devices where downloads using URLSessionConfiguration.background are between 4× and 10× slower than when using URLSessionConfiguration.default. The issue is systematic and easy to reproduce. It only happens on device; in the simulator, both configurations download files at the expected speed for the network.

Details:
Tested on visionOS 26.0.1 and 26.1 (public releases)
Reproduced across 2 Vision Pro devices (currently testing on a third one)
Reproduced on 2 different Wi-Fi networks (50 Mb/s and 880 Mb/s)
From my tests this speed issue seems to affect multiple apps on my device: Stobo Vision (our app), Immersive India, Amplium
Not server-related (reproduces with Apple CDN, S3, and DigitalOcean)

I've built a small sample project that makes this easy to reproduce. It downloads a large file (a 1.1 GB video) using two managers:
One with URLSessionConfiguration.default
One with URLSessionConfiguration.background
You can also try it with your own file URL (from an S3 bucket, for example).

Expected behavior: Background sessions should behave similarly to default sessions in terms of throughput, just as they do in the simulator. To be clear, I am comparing both configurations while the app is running in the foreground, not in the background.

Actual behavior: Background sessions on Vision Pro are significantly slower, making them less usable for large file downloads. In the attached screenshot it even reaches 27× slower than the expected speed: the default config takes ~97 s to download and the background config takes ~2640 s. I do not have the fastest internet connection, but 44 min to download 90.5 MB is extremely slow.

Has anyone else seen this behavior or found a workaround? Or is this expected behavior for URLSessionConfiguration.background? If I'm doing something wrong, please let me know.

Repo link: https://github.com/stobo-app/DownloadConfigTesting
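For reference, a minimal sketch of the two-manager comparison described above, assuming delegate-based downloads (background sessions do not support the async convenience APIs). The URL, session identifier, and type names are placeholders, and in practice the two downloads should be run one at a time so they do not share bandwidth.

    import Foundation

    final class DownloadTimer: NSObject, URLSessionDownloadDelegate {
        let label: String
        let start = Date()
        init(label: String) { self.label = label }

        func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                        didFinishDownloadingTo location: URL) {
            print("\(label): finished in \(Date().timeIntervalSince(start)) s")
        }

        func urlSession(_ session: URLSession, task: URLSessionTask,
                        didCompleteWithError error: Error?) {
            if let error { print("\(label): failed – \(error)") }
        }
    }

    func startComparison(url: URL) {
        // Foreground-priority default session.
        let defaultDelegate = DownloadTimer(label: "default")
        let defaultSession = URLSession(configuration: .default,
                                        delegate: defaultDelegate, delegateQueue: nil)
        defaultSession.downloadTask(with: url).resume()

        // Background session, started while the app is in the foreground.
        // Run this separately from the default download to keep timings comparable.
        let bgConfig = URLSessionConfiguration.background(withIdentifier: "com.example.bgDownload")
        let bgDelegate = DownloadTimer(label: "background")
        let bgSession = URLSession(configuration: bgConfig,
                                   delegate: bgDelegate, delegateQueue: nil)
        bgSession.downloadTask(with: url).resume()
    }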
3
0
63
8h
Sudden Drop in Vision Pro (M5) Frame Rate
Hello, I'd like to ask about abnormal frame rate behavior encountered while using Vision Pro, along with some questions regarding refresh rates, in the hope of gaining insights or solutions.

My Vision Pro (M5) device experiences sudden frame rate drops during operation, with the following symptoms: during initial operation, the frame rate is approximately 90 frames per second, providing a smooth and normal user experience; however, it subsequently drops unexpectedly to 45 FPS, causing noticeable lag that severely impacts usability.

Actual usage logs provide a clear timeline:
During the first session (23:47:35–23:57:41), the frame rate remained consistently stable at 90 FPS.
After pausing usage for a period and restarting the device (00:22:30–00:30:47), the frame rate initially remained stable at 90 FPS. However, starting at 00:30:48, it suddenly dropped to 45 FPS and remained at this low rate until 00:36:18.

Additionally, we have two further questions:
What could cause this sudden, unexplained drop from 90 FPS to 45 FPS?
Our team has multiple Vision Pro (M2) devices of the same model and noticed that some default refresh rates are 90 Hz while others are 100 Hz. What factors determine the refresh rate of these devices? Why do refresh rate differences occur among identical models?

Have others encountered similar sudden frame rate drops? If so, could you share potential causes and troubleshooting approaches for resolving such issues? Thank you very much!
0
0
156
2d
Where can we access the new enterprise license files mentioned in the WWDC session?
Hi everyone, I'm trying to verify something mentioned in the WWDC session "Explore enhancements to your spatial business app." At timestamp 3:36, the presenter states: "You can now access your enterprise license files directly within your Apple Developer account."

I've checked every section of my Developer account, including:
• Membership and Agreements
• Certificates, Identifiers & Profiles
• App Store Connect
• Additional Resources
• Account settings
…but no UI or section exposes these enterprise license files.

Since the Vision Entitlement Services framework actively checks these licenses (for example, for mainCameraAccess entitlement approval), I need to confirm the location of the new license file. Could someone from Apple, or anyone who has seen this feature, clarify:
1. Where exactly do these enterprise license files appear in the Developer account UI, or
2. Whether this feature has not rolled out yet?

Any guidance or screenshots from those who have access would be invaluable. Thanks!
0
0
22
2d
Access Main Camera not working in visionOS 26.1
I downloaded the official sample project “Accessing the Main Camera”, but I found that it’s not able to retrieve the camera feed on visionOS 26.1. After checking the debug logs, it seems the issue is caused by the system being unable to find the expected format. I tested on a device running visionOS 2, and the camera feed worked correctly — but only when using the sample code from the visionOS 2 version, not the current one. I also noticed that some of the APIs have changed between versions. Has anyone managed to successfully access the camera feed on visionOS 26.1?
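For context, a hedged sketch of how the main-camera feed is typically requested through the enterprise CameraFrameProvider API, querying the formats the running OS actually reports instead of hard-coding one. The exact signatures changed between visionOS versions, so treat them as assumptions to verify against the current sample; the Enterprise API entitlement, license file, and user permission are still required.

    import ARKit

    func streamMainCamera() async throws {
        let session = ARKitSession()
        let provider = CameraFrameProvider()
        try await session.run([provider])

        // Inspect what this OS version actually offers before picking a format.
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main,
                                                              cameraPositions: [.left])
        print("Supported formats: \(formats)")
        guard let format = formats.first,
              let updates = provider.cameraFrameUpdates(for: format) else { return }

        for await frame in updates {
            if let sample = frame.sample(for: .left) {
                _ = sample.pixelBuffer   // hand the CVPixelBuffer to your pipeline
            }
        }
    }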
2
0
401
3d
ARKit / visionOS - hand tracking with 3D objects attached to the hand
I use ARKit's hand tracking to attach a 3D model of a remote control to the left hand. The user is supposed to press buttons on the remote control. In the Vision Pro settings, I have removed the left hand from Hands & Eye Tracking. Only the right hand is used. The problem now is that the left hand appears and the 3D model of the remote control fades out. I want the remote control to be completely visible. The user should feel like they really have the remote control in their hand. Can I prevent the fading out?
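One possible approach, sketched under the assumption that the fading comes from the system rendering the real hand over virtual content: hide the upper limbs for the immersive space and anchor the model to the left palm. The asset name RemoteControl is a placeholder, and whether this fully removes the fade should be verified on device.

    import SwiftUI
    import RealityKit

    struct RemoteImmersiveSpace: Scene {
        var body: some Scene {
            ImmersiveSpace(id: "remote") {
                RealityView { content in
                    // Hypothetical model name; replace with the real USDZ asset.
                    if let remote = try? await Entity(named: "RemoteControl") {
                        let leftPalm = AnchorEntity(.hand(.left, location: .palm))
                        leftPalm.addChild(remote)
                        content.add(leftPalm)
                    }
                }
            }
            // Render virtual content over the user's hands instead of letting
            // the passthrough hands occlude (and fade) it.
            .upperLimbVisibility(.hidden)
        }
    }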
1
0
125
5d
visionOS 3D interactions like the native keyboard when no longer observed in passthrough
While using Apple Vision Pro, we noticed that we can continue to use the visionOS keyboard even when we no longer actually see it in passthrough. In other words, when we focus on a field to type, visionOS displays the keyboard in such a way that we can see it. Then we noticed that if we look away a little bit, either up, down, left, or right, so that the keyboard is no longer visible to us in passthrough, the keyboard still remains responsive to taps from our fingers at the location where it is. It seems the keyboard remains functional and responsive to taps even though we can no longer observe/see it.

We are trying to figure out how to implement similar functionality in our app, whereby the user can continue to manipulate a 3D entity when they can no longer actually observe it in passthrough (as the visionOS keyboard appears to allow). I assume the visionOS keyboard has this functionality thanks to the downward-facing sensors on the hardware, which allow hand tracking even when the hands can no longer be observed by the user. That is likely how we can rest our hands on our lap and still be able to interact with visionOS.

How can we implement similar functionality for 3D entities? Is there a way to tap into, or allow hand tracking from, those downward-facing cameras? Is it possible to manipulate a 3D entity when it is no longer observed by the user, for example when they shift their attention somewhere else in the field of vision? How does the visionOS keyboard achieve this?
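As a hedged sketch of the idea discussed above: ARKit hand tracking keeps delivering joint transforms regardless of where the user is looking, so a custom pinch-drag can keep manipulating an entity outside the user's view. The 2 cm pinch threshold and the entity passed in are illustrative assumptions, not a confirmed explanation of how the system keyboard works.

    import ARKit
    import RealityKit
    import simd

    func drive(_ entity: Entity, with handTracking: HandTrackingProvider) async {
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }

            let origin = anchor.originFromAnchorTransform
            let thumb = origin * skeleton.joint(.thumbTip).anchorFromJointTransform
            let index = origin * skeleton.joint(.indexFingerTip).anchorFromJointTransform
            let thumbPos = SIMD3(thumb.columns.3.x, thumb.columns.3.y, thumb.columns.3.z)
            let indexPos = SIMD3(index.columns.3.x, index.columns.3.y, index.columns.3.z)

            // Treat fingertips closer than ~2 cm as a pinch and drag the entity,
            // independent of gaze or whether the entity is in view.
            if simd_distance(thumbPos, indexPos) < 0.02 {
                entity.setPosition((thumbPos + indexPos) / 2, relativeTo: nil)
            }
        }
    }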
1
0
250
5d
RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I'm encountering a memory overflow issue in my visionOS app and I'd like to confirm whether this is expected behavior or if I'm missing something in cleanup.

App Context
The app showcases apartments in real scale using AR. Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures). Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow
The app starts in a full immersive space (RealityView) for selecting the apartment.
When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
When the user dismisses the experience, we attempt cleanup: nulling out all entity references, removing ModelComponents, clearing cached textures and skyboxes, and forcing dictionaries/collections to empty.
Despite this cleanup, memory usage remains very high.

Problem
After dismissing the ImmersiveSpace, memory does not return to baseline. Check the attached screenshot of the profiling made using Instruments:
Initial state: ~30 MB (main menu).
After loading models sequentially: ~3.3 GB.
Skybox textures bring it near ~4 GB.
After dismissing the experience (at the ~01:00 mark): memory only drops slightly (to ~2.66 GB).
When loading the second apartment, memory continues to increase until ~5 GB, at which point the app crashes due to memory pressure.
The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures and does not free them when the RealityView is ended. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.

Cleanup Code Example
Here's a simplified version of the cleanup I'm doing:

    func clearAllRoomEntities() {
        for (entityName, entity) in entityFromMarker {
            entity.removeFromParent()
            if let modelEntity = entity as? ModelEntity {
                modelEntity.components.removeAll()
                modelEntity.children.forEach { $0.removeFromParent() }
                modelEntity.clearTexturesAndMaterials()
            }
            entityFromMarker[entityName] = nil
            removeSkyboxPortals(from: entityName)
        }
        entityFromMarker.removeAll()
    }

    extension ModelEntity {
        func clearTexturesAndMaterials() {
            guard var modelComponent = self.model else { return }
            for index in modelComponent.materials.indices {
                removeTextures(from: &modelComponent.materials[index])
            }
            modelComponent.materials.removeAll()
            self.model = modelComponent
            self.model = nil
        }

        private func removeTextures(from material: inout any Material) {
            if var pbr = material as? PhysicallyBasedMaterial {
                pbr.baseColor.texture = nil
                pbr.emissiveColor.texture = nil
                pbr.metallic.texture = nil
                pbr.roughness.texture = nil
                pbr.normal.texture = nil
                pbr.ambientOcclusion.texture = nil
                pbr.clearcoat.texture = nil
                material = pbr
            } else if var simple = material as? SimpleMaterial {
                simple.color.texture = nil
                material = simple
            }
        }
    }

Questions
Is this expected RealityKit behavior (textures/meshes cached internally)?
Is there a way to force RealityKit to release GPU resources tied to USDZ models when they're no longer used?
Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?

Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
5
0
1.4k
5d
Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://developer.apple.com/videos/play/wwdc2024/10186/?time=147 Is there a revised recommendation for the M5 Apple Vision Pro?
2
1
722
5d
visionOS 26 - Rendering Issues related to Transparency
Summary
After updating to visionOS 26, we've encountered severe transparency rendering issues in RealityKit that did not exist in visionOS 2.6 and earlier. These regressions affect applications that dynamically control scene opacity (via OpacityComponent). Our app renders ultra-realistic apartment environments in real time, where users can walk or teleport inside 3D spaces. When the user moves above a speed threshold, we apply a global transparency effect to prevent physical collisions with real-world objects. Everything worked perfectly in visionOS 2.6 — the problems appeared only after upgrading to 26.

Scene Setup Overview
The environment consists of multiple USDZ models (e.g., architecture, rooms, furniture).
We manage LODs manually for performance (e.g., walls and floors always visible in full res, while rooms swap between low- and high-res versions based on user position and field of view).
Transparency is achieved using OpacityComponent, applied dynamically when the user moves.
Some meshes (e.g., portals to skyboxes, glass windows) use alpha materials.
We also use OcclusionMaterials to prevent things from being seen through walls when the scene is transparent.

Observed Behavior by Scenario
(I can share a video showing the results of each scenario if needed.)

Scenario 1 — Severe Flickering (Root Opacity)
Setup: OpacityComponent applied to the root entity; NO ModelSortGroupComponent used.
Symptoms: Strong flickering when transparency is active. Triangles within the same mesh render at inconsistent opacity levels; it appears as if per-triangle alpha sorting is broken.
Workaround: Moving the OpacityComponent from the root to each individual USDZ entity removes the per-triangle flicker.
Pros: No conflicts with portals or alpha materials.

Scenario 2 — Partially Stable, But Alpha Conflicts
Setup: OpacityComponent applied per USDZ entity; ModelSortGroupComponent(planarUIAlwaysBehind) applied to portal meshes; other entities have NO ModelSortGroupComponent.
Symptoms: Frequent alpha blending conflicts. Transparent surfaces behind other transparent surfaces flicker or disappear (example: wine glasses behind glass doors — sometimes neither is rendered, or only one). Even opaque meshes behind glass flicker due to depth buffer confusion. Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
Analysis: Appears related to internal changes in alpha sorting or depth pre-pass behavior introduced in visionOS 26.
Pros: Most stable setup so far.
Cons: Still unreliable when OpacityComponent is active.

Scenario 3 — Layer Separation Attempt (Regression)
Setup: Same as Scenario 2, but entities with alpha materials are moved to separate USDZs and an explicit ModelSortGroupComponent order is set (alpha surfaces rendered last).
Symptoms: Transparent surfaces behind other transparent surfaces flicker or disappear. Depth is completely broken when there is a large transparent surface. Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
Workaround attempt: Re-ordering and further separating models did not solve it.
Pros: None — this setup makes transparency unusable.

Conclusion
There appears to be a regression in RealityKit's handling of transparency and sorting in visionOS 26, particularly when OpacityComponent is applied dynamically and scenes rely on multiple overlapping transparent materials. These issues did not exist prior to 26, and the same project (with no code changes) behaves correctly on previous versions.

Request
We'd appreciate any insight or confirmation from Apple engineers regarding:
Whether alpha sorting or opacity blending behavior changed in visionOS 26
Whether there are new recommended practices for combining OpacityComponent with transparent materials
Whether a bug report already exists for this regression
Thanks in advance!
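For clarity, a small sketch of the Scenario 2 arrangement described above (opacity applied per USDZ entity, portals kept in a planarUIAlwaysBehind sort group). The function and parameter names are illustrative, and this reflects the poster's workaround rather than an Apple-recommended fix.

    import RealityKit

    func setSceneOpacity(_ opacity: Float,
                         apartmentEntities: [Entity],
                         portalMeshes: [ModelEntity]) {
        for entity in apartmentEntities {
            // Applying opacity per USDZ entity avoided the per-triangle flicker
            // seen when the component sat on the scene root.
            entity.components.set(OpacityComponent(opacity: opacity))
        }
        for portal in portalMeshes {
            // Keep skybox portals sorted behind other geometry.
            portal.components.set(
                ModelSortGroupComponent(group: .planarUIAlwaysBehind, order: 0)
            )
        }
    }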
0
0
140
6d
Feature / Workaround wanted: Seamless, Automated AirPlay Screen Streaming on visionOS for Demos
Hello Apple team and developer community,

I am preparing a visionOS app for a fair environment, where we want to automatically stream the current experience to a nearby monitor via AirPlay, without requiring guests or staff to manually interact with Control Center or the AirPlay pickers all the time. The goal is to provide a smooth, frictionless setup so attendees can focus on the demo, not the configuration.

Feature Request: A supported API or method to programmatically start/stop AirPlay video streaming (mirroring or external playback) from within a visionOS app, allowing the current experience to be instantly displayed on an external monitor or Apple TV for the audience.

Context & Rationale: In a trade fair or exhibition setting, rapid guest turnaround and minimal staff intervention are crucial. Having to manually guide each visitor through AirPlay setup is impractical. As I understand it, AVRoutePickerView can be used for this on iOS/macOS, but it is not available on visionOS. Enabling similar automated streaming on visionOS would make the device far more suitable for live demos and public showcases.

Questions:
Are there any supported workarounds or best practices for enabling automated screen streaming or AirPlay initiation on visionOS in public demo environments that I missed?
Is Apple considering adding programmatic AirPlay control or accessibility features to support such use cases in future visionOS releases?

Thank you for considering this request! If there are recommended patterns, entitlements, or accessibility solutions we could explore for trade fair scenarios, your guidance would be greatly appreciated.

Best regards, Julian Zürn - IPI, HS Kempten
2
0
294
1w
How to best manage ARKitSession in concurrent code
I have a visionOS app where I instantiate an ARKitSession and various providers (HandTrackingProvider and WorldTrackingProvider) in my appModel. That way, I can pass these providers to a Task which runs a gRPC server for sending the data from these providers to a client. When the user enters the immersive space of the app, the ARKitSession runs the providers if they are not running already.

I am now trying to implement AccessoryTrackingProvider with the PS VR2 Sense controllers, but it does not fit with my current framework because the controllers may not be connected when ARKitSession.run is called. So I need to find a new place to start the session.

My question is: if I already have a session which is running the hand and world tracking providers, can I start another session to run the accessory tracking? Should they all be running on the same session? Is there a way to stop the session and restart it when the controllers are connected? When I tried this, I got an error that says "It is not possible to re-run a stopped data provider (<ar_hand_tracking_provider_t: ", but if I instantiate a new HandTrackingProvider, then the one that got passed to the gRPC task would no longer be the one running in the new session.

Any advice on how best to manage the various providers and ARKit sessions would be greatly appreciated.
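One possible arrangement, sketched as an assumption rather than a confirmed pattern: keep the providers behind an observable model so the gRPC task always reads the current instances, and rebuild the session together with fresh providers when the controllers connect (since stopped providers cannot be re-run). Whether a second, separate ARKitSession is preferable is exactly the open question; the type and property names below are illustrative.

    import ARKit
    import Observation

    @Observable
    @MainActor
    final class TrackingModel {
        private(set) var session = ARKitSession()
        private(set) var handTracking = HandTrackingProvider()
        private(set) var worldTracking = WorldTrackingProvider()

        func runBaseProviders() async throws {
            try await session.run([handTracking, worldTracking])
        }

        // Providers cannot be re-run after a stop, so rebuild the session and
        // its providers together; consumers (e.g. the gRPC task) that read
        // `handTracking` through this model pick up the new instance.
        func restartIncludingAccessories(_ accessoryProvider: AccessoryTrackingProvider) async throws {
            session.stop()
            session = ARKitSession()
            handTracking = HandTrackingProvider()
            worldTracking = WorldTrackingProvider()
            try await session.run([handTracking, worldTracking, accessoryProvider])
        }
    }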
1
0
143
1w
How to speed up build time when placing large USDZ files in RCP scenes
I'm currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2 GB). Each time I make adjustments to the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and then build the app using Xcode. However, because the USDZ file is quite large, the build process takes a long time, significantly slowing down my development speed.

For example, I'd like to know if there are any effective ways to:
Improve overall build performance
Reduce the time between updating the USDZ file and completing the build

Any advice or best practices for optimizing this workflow would be greatly appreciated.

Best regards, Sadao
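One workaround worth trying, offered as an assumption rather than an official recommendation: keep the 2 GB USDZ out of the Reality Composer Pro package and load it at runtime, so model edits do not force the RCP scene to be reprocessed on every build. The file name and location below are placeholders.

    import RealityKit

    func loadApartmentModel(into content: RealityViewContent) async {
        // The model could also live in Application Support and be downloaded
        // on demand, removing it from the app bundle (and the build) entirely.
        guard let url = Bundle.main.url(forResource: "LargeModel", withExtension: "usdz") else { return }
        do {
            let model = try await Entity(contentsOf: url)
            content.add(model)
        } catch {
            print("Failed to load USDZ: \(error)")
        }
    }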
1
0
130
1w
Xcode shows alert about unknown com.apple.quicklook.preview extension point when running on Apple Vision Pro Simulator
I have an iOS app with a QuickLook extension. I also added Apple Vision Pro in the target's General > Supported Destinations section. About one year ago, I was able to run the app on the iPhone, iPad and Apple Vision Pro Simulators. Today I tried running it again on Apple Vision Pro with Xcode 26.0.1, but Xcode shows this error:

Try again later. Appex bundle at ~/Library/Developer/CoreSimulator/Devices/F6B3CCA8-82FA-485F-A306-CF85FF589096/data/Library/Caches/com.apple.mobile.installd.staging/temp.PWLT59/extracted/problem.app/PlugIns/problemQuickLook.appex with id org.example.problem.problemQuickLook specifies a value (com.apple.quicklook.preview) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.

I tried again later a couple of times, even after running Clean Build Folder Immediately, without any change. I can reproduce this with a fresh Xcode project to which I add a Quick Look Preview Extension and Apple Vision Pro as a supported destination. The error doesn't happen when running on the Apple Vision Pro (Designed for iPad) or iPad Pro 13-inch (M4) destinations. What is the problem? I created FB20448815.
5
0
240
1w
Why don’t the dinosaurs in “Encounter Dinosaurs” respond to real-world light intensity?
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.” In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment. Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same. If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
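If the app does use a fixed lighting setup, it could look roughly like this hedged sketch: content lit by its own image-based light, with the real-world lighting contribution dialed down. This is an assumption about the technique, not a confirmed description of how Encounter Dinosaurs is implemented, and the EnvironmentResource name is a placeholder.

    import RealityKit

    func applyFixedLighting(to root: Entity) async throws {
        // Load a bundled environment map and use it as the only light source.
        let environment = try await EnvironmentResource(named: "StudioLighting")
        root.components.set(ImageBasedLightComponent(source: .single(environment),
                                                     intensityExponent: 1.0))
        // The receiver points at the entity that owns the image-based light.
        root.components.set(ImageBasedLightReceiverComponent(imageBasedLight: root))
        // Reduce the influence of the real-world environment lighting to zero,
        // so room brightness changes no longer affect the content's shading.
        root.components.set(EnvironmentLightingConfigurationComponent(
            environmentLightingWeight: 0
        ))
    }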
1
0
533
1w
[WebXR] Support for AR module in visionOS 2.x
Thank you again for pushing the web forward in visionOS 2, super exciting! The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences. Samples such as this one are not supported: https://immersive-web.github.io/webxr-samples/immersive-ar-session.html

In Settings > Safari, there is a feature flag for the WebXR AR module, but enabling it did not seem to change anything. Is this the expected behavior at this time? Are there any developer previews we could try?
10
6
2.3k
1w