visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post · Replies · Boosts · Views · Activity

TapGesture stops responding on ViewAttachmentComponent after disabling or removing and re-adding the Entity (visionOS 26)
Issue
When an Entity with a ViewAttachmentComponent is:
- disabled using isEnabled = false, or
- removed using removeFromParent()
and then enabled or added back again, the attached SwiftUI view is rendered correctly, but tap interactions stop working. Specifically:
- Button actions inside the attached view do not fire
- TapGesture closures on child views do not respond

Expected Behavior
Tap interactions inside the attached view should continue to work after the Entity is re-enabled or re-added.

Actual Behavior
After being disabled or removed once, all tap interactions stop responding.

Comparison
When displaying the same SwiftUI view using RealityViewAttachments, this issue does not occur. Removing and re-displaying the attachment still allows taps to work correctly.

Reproduction
The attached sample code reproduces the issue:
- A RealityView with an Entity that has a ViewAttachmentComponent
- The attached SwiftUI view contains a Toggle
- The toggle updates isEnabled on the Entity
- After toggling off and on, tap interactions stop responding

Environment
- Xcode 26
- visionOS 26

Question
- Is this expected behavior of ViewAttachmentComponent, or a bug?
- Is there a recommended way to temporarily hide or disable an Entity with a ViewAttachmentComponent without breaking tap interactions?

import SwiftUI
import RealityKit

struct GestureTestView: View {
    @State var sampleEnabled = true
    @State var sampleEntity: Entity?

    var body: some View {
        RealityView { contents, attachments in
            // After deleting and re-displaying it, taps no longer respond.
            let sample = Entity(components: ViewAttachmentComponent(rootView: SampleView()))
            // Executed successfully
            //let sample = attachments.entity(for: "SampleView")!
            contents.add(sample)
            sample.position = [0, 1.2, -1]
            sampleEntity = sample

            let toggleButton = Entity(components: ViewAttachmentComponent(rootView: ToggleButtonView(isOn: $sampleEnabled)))
            contents.add(toggleButton)
            toggleButton.position = [0, 1, -1]
        } update: { _, _ in
            // run update closure
            print(sampleEnabled)
            // update sample entity enable
            sampleEntity?.isEnabled = sampleEnabled
        } attachments: {
            Attachment(id: "SampleView") {
                SampleView()
            }
        }
    }
}

struct ToggleButtonView: View {
    @Binding var isOn: Bool

    var body: some View {
        VStack {
            Toggle(isOn: $isOn) {
                Text("Toggle")
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

struct SampleView: View {
    var body: some View {
        VStack {
            Button {
                print("Hello, World!")
            } label: {
                Text("Hello, World!")
                    .padding()
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

#Preview(immersionStyle: .mixed) {
    GestureTestView()
}
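For comparison, here is a minimal sketch of the RealityViewAttachments-based variant that the post above reports as still handling taps after re-enabling. It reuses SampleView and ToggleButtonView from the sample; this reflects the reporter's observed behavior, not a documented fix.

import SwiftUI
import RealityKit

struct AttachmentsVariantView: View {
    @State private var sampleEnabled = true
    @State private var sampleEntity: Entity?

    var body: some View {
        RealityView { contents, attachments in
            // Use the entities built from the attachments closure instead of
            // creating ViewAttachmentComponent instances manually.
            if let sample = attachments.entity(for: "SampleView") {
                contents.add(sample)
                sample.position = [0, 1.2, -1]
                sampleEntity = sample
            }
            if let toggle = attachments.entity(for: "ToggleButton") {
                contents.add(toggle)
                toggle.position = [0, 1, -1]
            }
        } update: { _, _ in
            // Toggling isEnabled here reportedly keeps Button/TapGesture interactions working.
            sampleEntity?.isEnabled = sampleEnabled
        } attachments: {
            Attachment(id: "SampleView") { SampleView() }
            Attachment(id: "ToggleButton") { ToggleButtonView(isOn: $sampleEnabled) }
        }
    }
}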
Replies: 2 · Boosts: 0 · Views: 195 · Activity: 7h

'__abort_with_payload' from CompositorNonUI on visionOS 26.2 (device + simulator, Omniverse streaming)
I am developing a custom app for Apple Vision Pro using Compositor Services to stream content from NVIDIA Omniverse. The app is based on: https://github.com/NVIDIA-Omniverse/apple-configurator-sample

Environment:
- Device: Apple Vision Pro
- OS Version: visionOS 26.2
- Xcode Version: 26.2

The Issue:
The application crashes hard (__abort_with_payload) in libsystem_kernel.dylib on Task 6 immediately after initialization. This appears to be a deliberate abort triggered by the compositor, not a typical crash. The issue occurs on both the physical device and the simulator.

Important detail: the console output shows a specific CLIENT BUG assertion. By checking the metadata of the warning, I found that it comes from "Library: CompositorNonUI".

Relevant console output before the abort:
Missed 'FrameLimiter' target of 90.0 Hz
running compositor services to get IPD, FOV, etc
fence tx observer 14f27 timed out after 0.600000
fence tx observer bc1b timed out after 0.600000
BUG IN CLIENT: For mixed reality experiences please use cp_drawable_compute_projection API
Replies: 0 · Boosts: 0 · Views: 43 · Activity: 19h

Debugging help: BUG IN CLIENT: For mixed reality experiences please use cp_drawable_compute_projection API
Hi, we've been developing an XR application for Apple Vision Pro which has worked fine so far. Now that the SDKs have updated to 26.2 (for Xcode and AVP versions), we've run into an error that prevents the app from launching. I get the following error when running the application in the AVP Simulator (building for destination Apple Vision Pro (26.2)), and my colleague gets the same error when building for the device itself and launching there.

BUG IN CLIENT: For mixed reality experiences please use cp_drawable_compute_projection API
Type: Error | Timestamp: 2026-01-13 09:21:57.242191+02:00 | Process: My XR App | Library: CompositorNonUI | TID: 0x75e2c (copied with "all metadata")

How can we debug this further? The error in the console doesn't give any stack trace or a clear pointer to the code it relates to. I've tried searching for CompositorNonUI, but that doesn't yield any results in our project (nor on Google or the Apple Developer Forums). There is one post on the forums with a similar error (https://developer.apple.com/forums/thread/788500?answerId=845039022#845039022), but searching our project and its dependencies, we don't seem to use ".tangent" anywhere either. Any help in debugging to find more details on where the issue happens, or pointers to fixing it, would be much appreciated. Thanks!
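For context on the assertion text itself: in recent Compositor Services SDKs the per-view projection matrix is expected to come from cp_drawable_compute_projection rather than being assembled manually from each view's tangents and the drawable's depth range. A minimal sketch of that call path, assuming the Swift overlay exposes it as computeProjection(viewIndex:) on LayerRenderer.Drawable (the helper name here is illustrative, not from the post):

import CompositorServices
import simd

// Illustrative helper: fetch the projection matrix for one view of the current drawable.
// Older code built this matrix from view.tangents + drawable.depthRange, which is what
// the "please use cp_drawable_compute_projection" assertion points at.
func projectionMatrix(for drawable: LayerRenderer.Drawable, viewIndex: Int) -> simd_float4x4 {
    drawable.computeProjection(viewIndex: viewIndex)
}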
Replies: 0 · Boosts: 0 · Views: 35 · Activity: 19h

fileImporter issue in visionOS with iPhone app (that can run on visionOS)
Happy new year to all! I have created an iOS app that also runs on Apple Vision Pro. On iOS, when you activate the fileImporter modal, you can swipe the modal down to dismiss it. However, in visionOS, this same modal CANNOT be swiped down to cancel/dismiss. If you are drilled deep into a file hierarchy, you have to navigate back to the top level to tap the X to dismiss. Is there a way to add swipe-down to the visionOS implementation of fileImporter, or any other workaround, so the user doesn't have to navigate back to the top to dismiss? Again, this is not a visionOS app but an iOS app compatible for use on Vision Pro. Thanks!
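One possible workaround to try (a sketch only, untested on visionOS, not an Apple-recommended fix): present UIDocumentPickerViewController inside a regular SwiftUI sheet so the app can supply its own always-visible Cancel control, rather than relying on fileImporter's built-in dismissal. All type and function names below are illustrative.

import SwiftUI
import UIKit
import UniformTypeIdentifiers

// Wrap the UIKit document picker so it can be hosted in a SwiftUI sheet.
struct DocumentPicker: UIViewControllerRepresentable {
    var onPick: ([URL]) -> Void

    func makeUIViewController(context: Context) -> UIDocumentPickerViewController {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.item])
        picker.delegate = context.coordinator
        return picker
    }

    func updateUIViewController(_ controller: UIDocumentPickerViewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(onPick: onPick) }

    final class Coordinator: NSObject, UIDocumentPickerDelegate {
        let onPick: ([URL]) -> Void
        init(onPick: @escaping ([URL]) -> Void) { self.onPick = onPick }

        func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
            onPick(urls)
        }
    }
}

struct PickerHost: View {
    @State private var showPicker = false

    var body: some View {
        Button("Import") { showPicker = true }
            .sheet(isPresented: $showPicker) {
                DocumentPicker { urls in
                    showPicker = false
                    print("picked:", urls)
                }
                .overlay(alignment: .topTrailing) {
                    // App-provided dismiss control, reachable regardless of folder depth.
                    Button("Cancel") { showPicker = false }
                        .padding()
                }
            }
    }
}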
Replies: 2 · Boosts: 0 · Views: 760 · Activity: 1d

visionOS Bluetooth LE limited to 2 connections?
Hello, is there a 2-device limit for CoreBluetooth on visionOS 2.1? My app connects to 4 BLE peripherals on iOS but fails at the 3rd device on Vision Pro. The 3rd call to centralManager.connect() succeeds and the peripheral enters the .connecting state, but didConnect never fires and it stays in .connecting forever. No errors are reported. The first 2 devices work perfectly, and the same code on iOS connects all 4. Has anyone else had this problem? Is there any documentation I can refer to that states a limit like this? Environment: visionOS 2.1, CoreBluetooth, Apple Vision Pro. My BLE peripherals are running on nRF52840.
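A small diagnostic sketch that may help narrow this down: connect to the peripherals strictly one at a time and log each outcome, to confirm whether the third attempt ever completes, fails, or silently stalls in .connecting. Discovery is assumed to have happened elsewhere; the class and method names below are illustrative only.

import CoreBluetooth
import Foundation

// Diagnostic sketch: serialize connections and log the result of each one.
final class ConnectionProbe: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager?
    private var pending: [CBPeripheral] = []

    func start(central: CBCentralManager, peripherals: [CBPeripheral]) {
        self.central = central
        self.pending = peripherals
        central.delegate = self
        connectNext()
    }

    private func connectNext() {
        guard let central, let next = pending.first else { return }
        print("connecting to", next.identifier)
        central.connect(next, options: nil)
        // Flag connections that are still pending after 10 seconds.
        DispatchQueue.main.asyncAfter(deadline: .now() + 10) {
            if next.state == .connecting {
                print("still .connecting after 10s:", next.identifier)
            }
        }
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        print("central state:", central.state.rawValue)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        print("didConnect:", peripheral.identifier)
        pending.removeFirst()
        connectNext()
    }

    func centralManager(_ central: CBCentralManager, didFailToConnect peripheral: CBPeripheral, error: Error?) {
        print("didFailToConnect:", peripheral.identifier, error?.localizedDescription ?? "no error")
        pending.removeFirst()
        connectNext()
    }
}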
Replies: 1 · Boosts: 0 · Views: 25 · Activity: 4d

onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case this information is helpful to other developers. As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues:

- Memory leak (FB21557639): When onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked.
- Multiple callbacks: When the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view body execution, including those of immersive space views that have been previously dismissed.

Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances. It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution.
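A minimal sketch of the workaround described above: attach onWorldRecenter once, on a tiny view inside the main WindowGroup, and forward recenter events via NotificationCenter with a debounce. The exact onWorldRecenter signature (a no-argument closure here) is assumed and may need adjusting to the SDK; the notification name and view names are illustrative.

import SwiftUI

extension Notification.Name {
    static let worldDidRecenter = Notification.Name("worldDidRecenter")
}

struct WorldRecenterForwarder: View {
    // Simple debounce so duplicate callbacks collapse into a single notification.
    @State private var pendingPost: Task<Void, Never>?

    var body: some View {
        Color.clear
            .frame(width: 1, height: 1)
            .onWorldRecenter {
                pendingPost?.cancel()
                pendingPost = Task {
                    try? await Task.sleep(for: .milliseconds(100))
                    guard !Task.isCancelled else { return }
                    NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
                }
            }
    }
}

// In the primary WindowGroup:
//     ContentView().overlay(WorldRecenterForwarder())
// In an ImmersiveSpace view:
//     .onReceive(NotificationCenter.default.publisher(for: .worldDidRecenter)) { _ in
//         // re-anchor or rebuild content here
//     }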
Replies: 0 · Boosts: 0 · Views: 516 · Activity: 4d

Help: Compiled Timeline Issues
I have developed a fun living diorama world using Reality Composer Pro and Xcode. Everything is as it should be, and it looks/works great ... until it does not. If I make any change to any of the 10 timelines that I am using (all on the same scene, no nested scenes), running the app in the simulator, on device, and via TestFlight throws errors around compiled timelines, leading to the black screen of death. Every time I clean and run, the timelines in question might change. It's very frustrating and impossible to track down. Here are some examples:

AssetLoadRequest failed because asset failed to load '/ (3661553931319769725 Timeline (RealityFileAsset)URL/file:///var/containers/Bundle/Application/F4408256-6014-4264-9E4B-F74AEF0EDE53/SantasVillage.app/RealityKitContent_RealityKitContent.bundle/RealityKitContent.reality/Timeline_779.compiledtimeline)' (failed to register asset)

Asset / (13631856135570808851 AnimationLibraryAsset (RealityFileAsset)URL/file:///var/containers/Bundle/Application/F4408256-6014-4264-9E4B-F74AEF0EDE53/SantasVillage.app/RealityKitContent_RealityKitContent.bundle/RealityKitContent.reality/AnimationLibraryAsset_1.compiledanimationlibraryasset) failure: failed to register asset

Asset 10430065658338454790 AnimationScene (RealityFileAsset)URL/file:///var/containers/Bundle/Application/F4408256-6014-4264-9E4B-F74AEF0EDE53/SantasVillage.app/RealityKitContent_RealityKitContent.bundle/RealityKitContent.reality/AnimationScene_14.compiledanimationscene failure: failed to register asset

I went with the recommended fixes of closing RCP > Clean Build Folder > Delete Derived Data (multiple ways) > re-open Xcode > Reset Package Cache > re-open RCP via Xcode > make a change > save > Clean Build Folder again > run. Sometimes it works. Most times it does not. I then found my own little workaround, but it doesn't always work and is literally costing me days of wasted time messing around with this: I DISABLE all timelines, do the above clean method, and rerun with no timelines, which resolves it. Then I turn timelines on ONE BY ONE and run until I get another error, then rebuild that timeline and nothing else. This is not sustainable. There must be some better way to do this? Or perhaps I am doing something wrong? Please help if you can.
Replies: 3 · Boosts: 0 · Views: 213 · Activity: 6d

Building a Full Space app that enables sharing a visionOS experience with nearby users.
Hello, I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.

Intended Features
- A Mixed Full Space app in which dozens of 3D models are placed in the space. These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
- The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space. Some media elements play automatically, while others are triggered by user interaction.

However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as:
- audio only
- spatial audio
- video without audio
- video with audio

I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.

Questions
- Is it technically possible to implement the experience described above using visionOS? Are there any important implementation considerations or limitations that should be taken into account?
- For example, when two participants experience the app simultaneously, how is the content positioned for each participant? Is the spatial placement of content shared across participants, or is it positioned relative to each participant's viewpoint?
- For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible for nearby users during the experience; is this correct?
- When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant's contact information?

I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details: https://developer.apple.com/videos/play/wwdc2025/318/

Thank you.
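On the AVPlaybackCoordinator part of the question: each AVPlayer-backed media element (video with or without audio, or audio-only items played through an AVPlayer) can be attached to the same GroupSession through its playbackCoordinator, which keeps rate and time coordinated across participants; custom playback engines would need AVDelegatingPlaybackCoordinator instead. A minimal sketch, with a hypothetical activity type standing in for the app's own (this does not address the nearby-participant or spatial-placement questions):

import AVFoundation
import GroupActivities

// Hypothetical activity type for the shared experience described above.
struct SharedSceneActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Shared Scene"
        meta.type = .generic
        return meta
    }
}

// Attach every AVPlayer that must stay in sync to the same group session.
func coordinatePlayback(players: [AVPlayer], session: GroupSession<SharedSceneActivity>) {
    for player in players {
        player.playbackCoordinator.coordinateWithSession(session)
    }
}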
Replies: 1 · Boosts: 0 · Views: 130 · Activity: 6d

RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I’m encountering a memory overflow issue in my visionOS app and I’d like to confirm if this is expected behavior or if I’m missing something in cleanup.

App Context
- The app showcases apartments in real scale using AR.
- Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures).
- Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow
- The app starts in a full immersive space (RealityView) for selecting the apartment.
- When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
- When the user dismisses the experience, we attempt cleanup: nulling out all entity references, removing ModelComponents, clearing cached textures and skyboxes, and forcing dictionaries/collections to empty.

Despite this cleanup, memory usage remains very high.

Problem
After dismissing the ImmersiveSpace, memory does not return to baseline. Check the attached screenshot of the profiling made using Instruments:
- Initial state: ~30MB (main menu).
- After loading models sequentially: ~3.3GB.
- Skybox textures bring it near ~4GB.
- After dismissing the experience (at the ~01:00 mark): memory only drops slightly (to ~2.66GB).
- When loading the second apartment, memory continues to increase until ~5GB, at which point the app crashes due to memory pressure.

The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures, and does not free them when the RealityView is ended. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.

Cleanup Code Example
Here’s a simplified version of the cleanup I’m doing:

func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}

Questions
- Is this expected RealityKit behavior (textures/meshes cached internally)?
- Is there a way to force RealityKit to release GPU resources tied to USDZ models when they’re no longer used?
- Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?

Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
Replies: 9 · Boosts: 0 · Views: 2.0k · Activity: 6d

Hover effects w/ Compositor Services w/ PSVR2 controllers
Hi, I would like clarification on whether the new hover effects feature introduced in visionOS 26 supports pinch gestures through the PSVR2 controllers. In your sample application, I was not able to confirm that this works; only pinch-clicking with my hands worked. Pulling the trigger on the controller while looking at a 3D object did not activate the hover effect spatial event in the sample application (the object does show the highlight, though). This is inconsistent with the hover effect behavior with PSVR2 controllers on SwiftUI views, where the trigger press does count as a button click. The sample I used was this one: https://developer.apple.com/documentation/compositorservices/rendering_hover_effects_in_metal_immersive_apps
Replies: 0 · Boosts: 0 · Views: 379 · Activity: 1w

Displaying spatial photos and videos on web pages in Safari
Cross-posting from Spatial Computing; apologies if this is not the appropriate forum. The purpose is to create a simple web-based gallery of spatial photos and videos using static HTML files. I have successfully displayed spatial photos using the img tag and IMG.heic files. I can tap and hold the image to bring up the contextual menu and from there select View Spatial Photo. Is there any way to add a control to the image, like a link or an overlay on the image itself, that a user can simply tap to show the image in 3D? And how can I host a (small!) video file on a web page without going through a CDN/streaming service? Sample HTML would be much appreciated.
Replies: 0 · Boosts: 0 · Views: 633 · Activity: 1w

VisionOS 2 - Screen Capture with passthrough
We're trying to switch from using main camera access in ARKit to screen capture with passthrough, but we're facing some issues and it seems a bit complicated to debug. We have set up a broadcast extension and added some logs in the sample handler, but we get nothing in the console, nor does the recording start. We set up the picker as well, and we can see our extension in Control Center as one of the choices, but clicking start results in it stopping in less than one second. The only message we see in Console.app is rather contradictory:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license

and just right after:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
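One way to confirm whether the upload extension process launches at all is to log from the sample handler with os.Logger and an explicit subsystem, then filter on that subsystem in Console.app; print() output from extension processes is often not captured. A minimal sketch (the subsystem string is a placeholder):

import ReplayKit
import CoreMedia
import os

// Diagnostic sketch for the broadcast upload extension's sample handler.
class SampleHandler: RPBroadcastSampleHandler {
    private let log = Logger(subsystem: "com.example.app.broadcast", category: "capture")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted, setupInfo: \(String(describing: setupInfo), privacy: .public)")
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            log.debug("received a video sample buffer")
        }
    }

    override func broadcastFinished() {
        log.info("broadcastFinished")
    }
}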
Replies: 2 · Boosts: 1 · Views: 454 · Activity: 2w

Multi-device coordination is cumbersome
During a livestream we have to operate the Vision Pro (capture), a Mac (streaming), and a control console (scene switching) at the same time. There is no unified control entry point, and operations such as adjusting 3D models or calibrating image quality require switching between multiple devices, which is error-prone and inefficient.

Request: For livestreaming scenarios, provide dedicated desktop control software that supports one-stop management of the Vision Pro's capture parameters, 3D model switching, mixed-reality compositing effects, and so on, making multi-device coordination visual and convenient.
Replies: 0 · Boosts: 0 · Views: 260 · Activity: 3w

Captured image brightness is unstable (dynamic fluctuation)
Image brightness fluctuates irregularly (alternating brighter and darker) and there is no manual control, which distorts product color reproduction and causes abnormal exposure of the host's face (over- or under-exposed), seriously affecting the livestream presentation.

Request:
- Improve the auto-exposure algorithm in live-streaming mode to make brightness more stable in complex lighting conditions.
- Add a dedicated brightness-lock feature for a "live-streaming mode" that allows brightness parameters to be set and locked manually, so image quality remains controllable during a livestream.
Replies: 0 · Boosts: 0 · Views: 158 · Activity: 3w

Significant image-quality differences when switching between cameras
After switching, the two cameras differ noticeably in brightness, color saturation, contrast, and other image-quality parameters, which makes the picture feel visually disjointed, breaks the continuity of the livestream, and hurts viewers' sense of immersion.

Request:
- Benchmark against the image quality of a standard DSLR livestream camera and improve the Vision Pro's image brightness and color reproduction.
- Provide custom image-quality adjustment (brightness, contrast, color temperature, etc.) on the device or in companion software, with support for manual calibration before the livestream, so the look stays consistent with the DSLR camera.
Replies: 0 · Boosts: 0 · Views: 112 · Activity: 3w

Resolution is low after the Vision Pro image is transmitted to a Mac
The resolution of the transmitted livestream drops significantly; image detail is lost and sharpness is insufficient, so key information about 3D furniture products, such as textures and dimensions, cannot be shown accurately, affecting viewers' judgment of the products.

Request: Optimize the resolution-compression strategy used during stream transmission, reduce quality loss in transit, and improve the sharpness of the livestream received on the Mac to match the high-precision requirements of 3D product showcases.
Replies: 0 · Boosts: 0 · Views: 82 · Activity: 3w

Image shake makes viewers dizzy
When the wearer's head moves naturally, the footage captured by the device shakes noticeably, which makes livestream viewers feel dizzy and seriously hurts the immersive experience and the efficiency of purchase decisions.

Request: Improve the device's built-in stabilization algorithm to reduce the impact of normal head movement on image stability and make the livestream footage smoother.
Replies: 0 · Boosts: 0 · Views: 78 · Activity: 3w