Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post | Replies | Boosts | Views | Activity

onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case this information is helpful to other developers: As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues:

(FB21557639) Memory Leak: When onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked.

Multiple Callbacks: When the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view body execution, including those of immersive space views that have been previously dismissed.

Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances. It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution.
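A minimal sketch of that overlay-and-forward workaround, assuming onWorldRecenter takes a trailing closure as described above; the notification name, window contents, and space ID are all hypothetical, and the debouncer is omitted:

```swift
import SwiftUI
import Combine
import RealityKit

// Hypothetical notification used to forward recenter events; not an Apple API.
extension Notification.Name {
    static let worldDidRecenter = Notification.Name("WorldDidRecenter")
}

@main
struct RecenterWorkaroundApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Main window")
                // Keep onWorldRecenter on a stable overlay in the main window,
                // not on the RealityView inside the ImmersiveSpace.
                .overlay {
                    Color.clear.onWorldRecenter {
                        NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
                    }
                }
        }

        ImmersiveSpace(id: "Immersive") {
            RealityView { _ in
                // Build immersive content here.
            }
            // The immersive content listens for the forwarded event; a debouncer
            // (omitted) can absorb the duplicate callbacks described above.
            .onReceive(NotificationCenter.default.publisher(for: .worldDidRecenter)) { _ in
                // Re-anchor or reposition content here.
            }
        }
    }
}
```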
0 | 0 | 160 | 21h
fileImporter issue in visionOS with iPhone app (that can run on visionOS)
Happy new year to all! I have created an iOS app that also runs on Apple Vision Pro. On iOS, when you activate the fileImporter modal, you can swipe the modal down to dismiss it. In visionOS, however, this same modal cannot be swiped down to cancel/dismiss. If you are drilled deep into a file hierarchy, you have to navigate back to the top level to tap X to dismiss. Is there a way to add swipe-down to the visionOS implementation of fileImporter, or any other workaround so the user doesn't have to navigate back to the top to dismiss? Again, this is not a visionOS app but an iOS app compatible for use on Vision Pro. Thanks!
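For context, a minimal sketch of the kind of fileImporter presentation being discussed; the view, content types, and result handling are placeholders, not taken from the poster's app:

```swift
import SwiftUI
import UniformTypeIdentifiers

struct ImportView: View {
    @State private var showingImporter = false

    var body: some View {
        Button("Import File") { showingImporter = true }
            // On iOS this sheet can be swiped down to dismiss; on visionOS the
            // same presentation offers no swipe-to-dismiss gesture, per the post.
            .fileImporter(
                isPresented: $showingImporter,
                allowedContentTypes: [.pdf, .image],
                allowsMultipleSelection: false
            ) { result in
                switch result {
                case .success(let urls):
                    print("Picked:", urls)
                case .failure(let error):
                    print("Import failed:", error)
                }
            }
    }
}
```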
1 | 0 | 663 | 2d
Building a Full Space app that enables sharing a visionOS experience with nearby users.
Hello, I am currently considering developing a Full Space app that enables a shared visionOS experience with nearby users.

Intended Features
- A Mixed Full Space app in which dozens of 3D models are placed in the space. These 3D models may play embedded animations when tapped, be programmatically moved or rotated, or be controlled via Reality Composer Pro timelines.
- The app also includes audio, spatial audio, videos with audio, and videos without audio, which are rendered as VideoTextures on planes and played back in the space. Some media elements play automatically, while others are triggered by user interaction.

However, it is unclear whether AVPlaybackCoordinator supports shared playback across multiple types of media, such as:
- audio only
- spatial audio
- video without audio
- video with audio

I am also unsure whether there are alternative or recommended approaches for synchronizing playback in this scenario.

Questions
- Is it technically possible to implement the experience described above using visionOS?
- Are there any important implementation considerations or limitations that should be taken into account? For example, when two participants experience the app simultaneously, how is the content positioned for each participant? Is the spatial placement of content shared across participants, or is it positioned relative to each participant's viewpoint?
- For nearby participants, is it necessary to register a spatial Persona? My understanding is that spatial Personas are not visible for nearby users during the experience; is this correct?
- When experiencing SharePlay with nearby users, is it possible to share the experience without registering the other participant's contact information?

I have watched the following session, but I was unable to fully understand the feasibility of the above use case or the concrete implementation details: https://developer.apple.com/videos/play/wwdc2025/318/

Thank you.
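Not an answer to the placement questions, but for the playback-synchronization piece, here is a rough sketch of how an AVPlayer is typically tied to a SharePlay session through its playback coordinator; the activity type and names are hypothetical:

```swift
import AVFoundation
import GroupActivities

// Hypothetical GroupActivity describing the shared Full Space experience.
struct SharedSpaceActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Shared Space Experience"
        metadata.type = .generic
        return metadata
    }
}

final class SharedPlaybackController {
    let player = AVPlayer()

    // Wait for incoming SharePlay sessions and hand each one to the player's
    // coordinator, which keeps play/pause/seek in sync across participants.
    func observeSessions() async {
        for await session in SharedSpaceActivity.sessions() {
            player.playbackCoordinator.coordinateWithSession(session)
            session.join()
        }
    }
}
```

Note that this only covers media routed through an AVPlayer; audio played through RealityKit's own entity audio isn't coordinated by AVPlaybackCoordinator and would need separate synchronization.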
1 | 0 | 123 | 2d
Displaying spatial photos and videos on web pages in Safari
The purpose is to create a simple web-based gallery of spatial photos and videos using static HTML files. I have successfully displayed spatial photos using the img tag and IMG.heic files. I can tap and hold the image to bring up the contextual menu and from there select View Spatial Photo. Is there any way to add a control to the image, like a link or overlay on the image itself, that a user can simply tap to show the image in 3D? And how can I host a video file on a web page without going through a CDN/streaming service? Sample HTML would be much appreciated.
1 | 0 | 685 | 2d
RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I'm encountering a memory overflow issue in my visionOS app, and I'd like to confirm whether this is expected behavior or whether I'm missing something in cleanup.

App Context
The app showcases apartments at real scale using AR. Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures). Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow
- The app starts in a full immersive space (RealityView) for selecting the apartment.
- When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
- When the user dismisses the experience, we attempt cleanup: nulling out all entity references, removing ModelComponents, clearing cached textures and skyboxes, and forcing dictionaries/collections to empty.

Despite this cleanup, memory usage remains very high.

Problem
After dismissing the ImmersiveSpace, memory does not return to baseline. See the attached screenshot of the profiling made using Instruments:
- Initial state: ~30 MB (main menu).
- After loading models sequentially: ~3.3 GB.
- Skybox textures bring it near ~4 GB.
- After dismissing the experience (at the ~01:00 mark): memory only drops slightly (to ~2.66 GB).
- When loading the second apartment, memory continues to increase until ~5 GB, at which point the app crashes due to memory pressure.

The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures and does not free them when the RealityView is ended. For my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments load entirely different models and textures.

Cleanup Code Example
Here's a simplified version of the cleanup I'm doing:

```swift
func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}
```

Questions
- Is this expected RealityKit behavior (textures/meshes cached internally)?
- Is there a way to force RealityKit to release GPU resources tied to USDZ models when they're no longer used?
- Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?

Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
9 | 0 | 1.9k | 3d
Degraded RoomPlan performance
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis. In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms. It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock our users who are being affected by this. I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
2 | 1 | 1.6k | 1w
VisionOS 2 - Screen Capture with passthrough
We're trying to switch from using main camera access in ARKit to screen capture with passthrough, but we're facing some issues and it seems a bit complicated to debug. We have set up a broadcast extension and added some logs in the sample handler, but we get nothing in the console, nor does the recording start. We set up the picker as well, and we can see our extension in Control Center as one of the choices, but tapping Start results in it stopping in less than one second. The only message we see in Console.app is rather contradictory:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1333 Extension has passthrough license

and just right after:

[INFO] -[RPRecordingManager getSystemBroadcastExtensionInfo:]_block_invoke:1336 Extension does not have passthrough license
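For reference, a minimal broadcast extension sample handler with logging at each lifecycle callback, roughly the setup being described; the logger subsystem and category names are made up:

```swift
import ReplayKit
import CoreMedia
import os

class SampleHandler: RPBroadcastSampleHandler {
    // Hypothetical subsystem/category; adjust to your own bundle identifier.
    private let log = Logger(subsystem: "com.example.broadcast", category: "SampleHandler")

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        log.info("broadcastStarted, setupInfo: \(String(describing: setupInfo))")
    }

    override func broadcastPaused() { log.info("broadcastPaused") }
    override func broadcastResumed() { log.info("broadcastResumed") }
    override func broadcastFinished() { log.info("broadcastFinished") }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        if sampleBufferType == .video {
            log.debug("received video sample buffer")
        }
    }
}
```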
2 | 1 | 447 | 1w
Multi-device coordination is cumbersome
During a live stream, the operator has to control the Vision Pro (capture), a Mac (streaming), and a switcher console (scene switching) at the same time. There is no unified control entry point, and operations such as adjusting 3D models or calibrating image quality require switching between multiple devices, which is error-prone and inefficient.

Expectation: For live-streaming scenarios, provide dedicated desktop control software that offers one-stop management of the Vision Pro's capture parameters, 3D model switching, and mixed-reality compositing effects, making multi-device coordination visual and convenient.
0 | 0 | 256 | 2w
Unstable capture brightness (dynamic fluctuation)
The captured image's brightness fluctuates irregularly (alternately brighter and darker) and there is no manual control, so product colors are rendered inaccurately and the host's face is incorrectly exposed (overexposed or underexposed), severely affecting the live-stream presentation.

Expectations:
- Optimize the auto-exposure algorithm for live-streaming mode to improve brightness stability in complex lighting conditions.
- Add a dedicated brightness-lock feature for a "live-streaming mode" that supports manually setting and locking brightness parameters, keeping image quality controllable during live streams.
0 | 0 | 153 | 2w
Significant image-quality differences when switching between multiple cameras
After switching, the two cameras differ considerably in brightness, color saturation, contrast, and other image-quality parameters, creating a jarring visual break that disrupts the continuity of the live stream and hurts viewers' sense of immersion.

Expectations:
- Benchmark against the image quality of the DSLR cameras normally used for live streaming, and improve the Vision Pro's image brightness and color reproduction.
- Provide on-device or companion-software controls for customizing image quality (brightness, contrast, color temperature, etc.), supporting manual calibration before the stream so the output matches the DSLR camera's look.
0 | 0 | 108 | 2w
Low resolution after transmitting the Vision Pro feed to a Mac
After transmission, the live-stream resolution drops significantly, with loss of detail and insufficient sharpness, so key information about 3D furniture products, such as textures and dimensions, cannot be presented accurately, affecting users' judgment of the products.

Expectation: Optimize the resolution-compression strategy during stream transmission to reduce quality loss, and improve the clarity of the stream received on the Mac to match the high-precision needs of 3D product presentation.
0 | 0 | 77 | 2w
Camera shake causes viewer motion sickness
When the wearer's head moves naturally, the captured footage shakes noticeably, causing motion sickness for viewers watching the live stream and seriously affecting immersion and purchase-decision efficiency.

Expectation: Improve the device's built-in stabilization algorithm to reduce the impact of normal head movement on image stability and make the live-stream footage smoother.
0 | 0 | 74 | 2w
Implementing Foveated Streaming with Apple Vision Pro
Hello, I want to understand the current state of developing for Apple Vision Pro. I want to stream a video from a remote server in real time; it is a video stream, so I can't download it. I want to serve a low-quality stream and a high-resolution stream, where the server only sends the high-resolution "box" around where the user is looking. Are there any APIs to track where the user is looking in the experience? Thanks.
1 | 0 | 656 | 3w
How to fix "Sample 0 missing LiDAR point cloud!" error?
I'm trying to run a PhotogrammetrySession based on photos taken in an AVCaptureSession and stored as .heic files. When I load the files, I always see the error "Sample 0 missing LiDAR point cloud!" for each individual sample. Debugging shows that sample.depthDataMap is populated, and the .heic contains depth data which can be extracted using e.g. heif-convert on my Mac. Comparing the .heic I created to one produced by ObjectCaptureSession, which doesn't show the LiDAR warning, I noticed the only difference being the HEIC information here:

So my questions are:
- Is this the information missing from my manual capture that causes the warning?
- Can I somehow add this information in an AVCaptureSession?
- Does this information lead to better photogrammetry results?
2 | 0 | 319 | 3w
Real Time Spatial Video Streaming with Vision Pro
Hello, I am trying to build an AVP app for real-time, "zero-latency" spatial video streaming. I am trying to figure out, at a high level, the best way to do this. Currently my approach is:
- The server sends stereo images via a WebRTC service (e.g., LiveKit).
- The WebRTC stream is converted to CVPixelBuffers, written to file, played via AVPlayer, and rendered by applying a VideoMaterial to a plane entity.

However, this is a bit hacky, and it seems like it won't be compatible with Apple's spatial experiences. To my understanding, Apple supports HLS streaming for spatial experiences and APMP content. However, HLS (and even Low-Latency HLS) introduces a second or more of latency, likely due to the segmented nature of HLS, so HLS will not work for us. An alternative I've thought of is streaming the live video via WebRTC from the server to a local computer on the AVP's network and then using LL-HLS to stream from the local computer to the Vision Pro. Still, it seems like this would introduce latency on the order of seconds.

Is my current approach the best way to implement this? Or could anyone suggest a better way, perhaps something compatible with AVP's spatial experiences?
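For reference, a minimal sketch of the AVPlayer + VideoMaterial step described above; the plane size and the player item URL are placeholders:

```swift
import AVFoundation
import RealityKit

// Build a plane entity whose surface is driven by an AVPlayer, as in the
// pipeline described above. The URL stands in for whatever file the decoded
// WebRTC frames are written into.
func makeVideoPlane(url: URL) -> ModelEntity {
    let player = AVPlayer(playerItem: AVPlayerItem(url: url))
    let material = VideoMaterial(avPlayer: player)
    let plane = ModelEntity(
        mesh: .generatePlane(width: 1.6, height: 0.9),
        materials: [material]
    )
    player.play()
    return plane
}
```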
0 | 0 | 48 | 3w
PSVR2 controller button quirks
I have an open Feedback conversation with Apple on this topic, but I am curious if others have run into this, or want to try out my sample code in their setup.

There are two APIs for reading controller buttons, axes, and D-pads: GCPhysicalInputProfile and GCControllerLiveInput. There are inconsistencies in behaviour between the two. Apple recommends we use GCControllerLiveInput; however, some capabilities of these controllers are only accessible through GCPhysicalInputProfile, as discussed below.

- The PSVR2 R2/L2 buttons (triggers) have analogue force values. These can only be accessed on GCPhysicalInputProfile.
- PSVR2 thumbstick direction values are read through "axes" on GCPhysicalInputProfile, but only through "dpads" on GCControllerLiveInput.
- On both GCPhysicalInputProfile and GCControllerLiveInput, pressed events for all buttons are fired properly using the generic aliases (Trigger, Grip, Menu, Right Thumbstick, Left Thumbstick, Right Button A & B (Circle & Cross), Left Button A & B (Triangle & Square)). Apple reserves the system button as the equivalent of a home button for the OS.
- On GCPhysicalInputProfile, touch events are fired when the button is also pressed, but not for touches alone.
- On GCControllerLiveInput, touch events only work for the following buttons: Left Thumbstick, Right Thumbstick, Right Button A (Circle), and Right Button B (Cross). But the Right Button B touch event isn't labelled correctly; it fires as the Right Button A event.

I observed this inside ALVR, which uses a polling-based approach to event processing: https://github.com/alvr-org/alvr-visionos/blob/17b5968f9d894944b53e97134b39dfce0993302a/ALVRClient/WorldTracker.swift#L301

To see this in a very simple app, I used the Apple example TrackingAccessories application: https://developer.apple.com/documentation/ARKit/tracking-accessories-in-volumetric-windows

I've attached the code that replaces the AccessoryTrackingModel class. I added code that prints out what is touched/pressed; see the trackAllConnectedSpatialControllers method: https://github.com/svrc/TrackingAccessories
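For anyone who wants to poke at the same thing, here is a rough polling sketch using GCPhysicalInputProfile only; the element names are the generic aliases mentioned above, and which PSVR2 elements actually surface under them is exactly what is in question:

```swift
import GameController

// Poll the connected controllers' physical input profiles and print the
// trigger force and thumbstick axes, the values reported above as being
// accessible only through GCPhysicalInputProfile.
func dumpPhysicalInputProfiles() {
    for controller in GCController.controllers() {
        let profile = controller.physicalInputProfile

        // Analogue trigger values (R2/L2 force).
        for (name, button) in profile.buttons where name.localizedCaseInsensitiveContains("trigger") {
            print("\(name): pressed=\(button.isPressed) value=\(button.value)")
        }

        // Thumbstick directions exposed as axes on this API.
        for (name, axis) in profile.axes {
            print("axis \(name): \(axis.value)")
        }
    }
}
```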
6 | 0 | 787 | 3w
Pinning a pushed window to a wall breaks pushWindow for all other apps on the system
I posted https://developer.apple.com/forums/thread/809481 yesterday about an issue I discovered with pushWindow in visionOS 26.2 RC, but today I discovered a second problem with pushWindow. If window A calls pushWindow to present window B, and the user pins window B to a wall, the following unexpected behaviors are observed:

- Window B spontaneously disappears.
- If the user re-launches the (still running) app from the visionOS home view, both window A and window B appear simultaneously. I assume only window B should be visible at this point, since window A pushed window B.
- If the user closes window B, it's now impossible to present window B again. Calls to pushWindow appear to be ignored.
- If the user force-quits the app and relaunches it, and pushWindow is called again, window B appears, but window A remains visible.

I also noticed this surprising behavior: this broken state of pushWindow behavior now affects all other apps on the system that may call pushWindow in the future, not just the app whose pushed window was pinned above. A workaround is to reboot the device, and then the system will behave as expected until the next time the user pins a pushed window.
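For anyone reproducing this, a minimal sketch of the window A → window B setup being described, using the pushWindow environment action; the window IDs and views are hypothetical:

```swift
import SwiftUI

struct WindowAView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        // Pushing replaces window A with window B; pinning window B to a wall
        // is what triggers the behavior described above.
        Button("Show Window B") {
            pushWindow(id: "windowB")
        }
    }
}

@main
struct PushWindowReproApp: App {
    var body: some Scene {
        WindowGroup(id: "windowA") { WindowAView() }
        WindowGroup(id: "windowB") { Text("Window B") }
    }
}
```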
2 | 0 | 303 | 3w