Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


Apple Vision Pro - hand tracking with gloves
I'm considering adding finger-pad haptics (the haptic-feedback data would flow from the Vision Pro to the fingers, not vice versa): simple piezos wired to a wrist-mounted connector that holds the driver and battery. I'm concerned, though, that this will interfere with hand tracking. Is there any guidance on gloves and/or the size of peripherals attached to the fingers? Or, if anyone knows of another inexpensive, low-profile option on the market, please let me know. Thanks!
0 replies · 0 boosts · 231 views · Feb ’25
Help Configuring Unity for Immersive VR on Vision Pro with Pinch Teleport
How do I configure a Unity project for a fully immersive VR app on Apple Vision Pro using Metal rendering, and add a simple pinch-to-teleport-where-looking feature? I've tried the available samples and docs, but they don't cover this clearly (to me). So far I've reviewed the Unity XR docs, Apple dev guides, and tutorials, but most emphasize spatial apps; Metal examples exist but don't include teleportation. Specifically:

- visionOS sample "XRI_SimpleRig": deploys to device/simulator, but no full immersion or teleport.
- XRI Toolkit sample "XR Origin Hands (XR Rig)": pinch gestures are detected, but not linked to movement.
- visionOS XR Plugin sample "Metal Sample - URP": the Metal setup works, but the scene is static, with no locomotion.

I'm new to Unity XR development and would appreciate a simple, standalone scene or document focused only on the essentials of "teleport to gaze on pinch" in VR mode, with no extra features. (I do have some experience with Unreal, WorldToolKit, Cosmo, etc. from the '90s, and I'm OK with code.) Please include steps for:

- Setting up immersive VR (disabling spatial defaults if needed).
- Integrating pinch detection with ray-based teleport.
- Any config changes or basic scripts.

Project configuration:
- Unity Editor version: 6000.2.5f1.2588.7373 (Revision: 6000.2/staging 43d04cd1df69)
- Installed packages: Apple visionOS XR Plugin 2.3.1, AR Foundation 6.2.0, PolySpatial XR 2.3.1, XR Core Utilities 2.5.3, XR Hands 1.6.1, XR Interaction Toolkit 3.2.1, XR Legacy Input Helpers 2.1.12, XR Plugin Management 4.5.1
- Imported samples: Apple visionOS XR Plugin 2.3.1 (Metal Sample - URP), XR Hands 1.6.1, XR Interaction Toolkit 3.2.1 (Hands Interaction Demo, Starter Assets, visionOS)
- Build platform settings: target Apple visionOS; app mode: Metal Rendering with Compositor Services; selected validation profiles: visionOS Metal; documentation enabled
- Xcode version: 26.0.1; visionOS SDK: 26
- Mac hardware: Apple M1 Max
- Target visionOS version: 2.0 or 26
- Test environment: Apple Vision Pro, visionOS 26.0.1 (23M341), Apple M1 Max

No errors in builds so far; I'm just missing the desired functionality. Thanks in advance for a complete response with actionable steps.
0 replies · 0 boosts · 232 views · Oct ’25
360° image quality too low even at 72 MP: how to improve it or decrease the sphere size
Using a 72 MP 360° image taken with an Insta360 X3, I would like to load it on my Vision Pro and have it surround me completely, as you'd expect of a 360° image. I was able to do this by following a tutorial. The problem is the quality: in a 2D window the image looks great, but on the sphere it doesn't. Here is the code:

```swift
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            content.add(createImmersivePicture(imageName: appModel.activeSpace))
        }
    }

    func createImmersivePicture(imageName: String) -> Entity {
        let sphereRadius: Float = 1000
        let modelEntity = Entity()
        let texture = try? TextureResource.load(named: imageName, options: .init(semantic: .raw, compression: .none))
        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture!))
        modelEntity.components.set(
            ModelComponent(
                mesh: .generateSphere(radius: sphereRadius),
                materials: [material]
            )
        )
        modelEntity.scale = .init(x: -1, y: 1, z: 1)
        modelEntity.transform.translation += SIMD3<Float>(0.0, 10.0, 0.0)
        return modelEntity
    }
}
```

Since the quality is the problem, I thought about reducing the radius of the sphere or decreasing the scale, but in both cases nothing changes. I have tried:

```swift
modelEntity.scale = .init(x: -0.5, y: 0.5, z: 0.5)
```

and also `let sphereRadius: Float = 2000` and `let sphereRadius: Float = 500`, but nothing changed. I also get this warning:

```
IOSurface creation failed: e00002c2 parentID: 00000000 properties: {
    IOSurfaceAddress = 4651830624;
    IOSurfaceAllocSize = 35478941;
    IOSurfaceCacheMode = 0;
    IOSurfaceMapCacheAttribute = 1;
    IOSurfaceName = CMPhoto;
    IOSurfacePixelFormat = 1246774599;
}
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceCacheMode
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfacePixelFormat
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceMapCacheAttribute
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAddress
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceAllocSize
IOSurface creation failed: e00002c2 parentID: 00000000 property: IOSurfaceName
```

Is there anything I can do to reduce the radius, or just improve the quality itself?
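One guess worth ruling out: the repeated IOSurface failures could mean the decoded 72 MP image exceeds the GPU's maximum texture dimension, in which case the quality loss would come from the texture upload rather than the sphere size. A minimal sketch of pre-scaling the equirectangular image before creating the texture, under that assumption (the helper name and the 8192-pixel limit are illustrative, not Apple guidance):

```swift
import RealityKit
import UIKit

enum TextureError: Error { case resizeFailed }

// Downscale a CGImage so its longest side fits an assumed GPU texture limit,
// then build a TextureResource from the result.
func makeSphereTexture(from image: CGImage, maxDimension: CGFloat = 8192) throws -> TextureResource {
    let scale = min(1, maxDimension / CGFloat(max(image.width, image.height)))
    let size = CGSize(width: CGFloat(image.width) * scale,
                      height: CGFloat(image.height) * scale)
    let resized = UIGraphicsImageRenderer(size: size).image { _ in
        UIImage(cgImage: image).draw(in: CGRect(origin: .zero, size: size))
    }
    guard let cgImage = resized.cgImage else { throw TextureError.resizeFailed }
    // Synchronous generator; newer SDKs also offer an async TextureResource initializer.
    return try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
}
```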
0 replies · 0 boosts · 390 views · Jan ’25
Significant image-quality differences when switching between multiple cameras
After switching, the two feeds differ substantially in brightness, color saturation, contrast, and other image-quality parameters, which makes the picture feel visually disjointed, breaks the continuity of the live stream, and undermines viewers' sense of immersion. Requests:
- Benchmark against the image quality of conventional live-streaming DSLR cameras and improve the Vision Pro's picture brightness and color reproduction.
- Provide custom image-quality adjustment (brightness, contrast, color temperature, etc.) on the device or in companion software, supporting manual calibration before going live so the picture matches the DSLR's look.
0 replies · 0 boosts · 109 views · 2w
New Spatial Rendering App on macOS doesn't display on visionOS device
I created a new Spatial Rendering App from the template in Xcode 26.0.1. When I run the app, click 'Show Immersive Space' and select my Vision Pro from the pop-up dialog, the content in the dialog flickers (which seems to indicate something crashed) and nothing appears on my Vision Pro. I'm running the released macOS 26.0 (25A354) and visionOS 26.0 (23M336). Filed as FB20397093.
0 replies · 0 boosts · 340 views · Sep ’25
Spatial-backdrop standards process
Apple's WWDC video "What’s new for the spatial web" says the spatial-backdrop markup may change as it goes through the standards process (at the 27:26 mark). I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates from Apple and follow the standards progress. Is there anywhere I can track this standards process? Has Apple announced any feature updates or news on spatial-backdrop?
0 replies · 0 boosts · 97 views · Nov ’25
white gap between objects in RealityView
I want to display a huge image in 3D space in a RealityView on Vision Pro. Instead of one giant file, I'm using many large images: I generate multiple planes exactly beside each other and put one image on each. Although the planes are positioned exactly adjacent, there is still a white gap between them (image below). Does anybody know how to fix this issue?
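For illustration, a minimal sketch of the tiling described above, with assumed names (`makeTiledWall`, `imageNames`, `tileSize`): each tile is offset by exactly one tile size, so any remaining gap would come from rendering rather than positioning.

```swift
import RealityKit

// Lay plane tiles out edge to edge: tile (row, col) sits exactly one tile
// size away from its neighbors, so the geometry itself leaves no gap.
func makeTiledWall(imageNames: [[String]], tileSize: Float) throws -> Entity {
    let wall = Entity()
    for (row, rowNames) in imageNames.enumerated() {
        for (col, name) in rowNames.enumerated() {
            let texture = try TextureResource.load(named: name)
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))
            let tile = ModelEntity(
                mesh: .generatePlane(width: tileSize, height: tileSize),
                materials: [material]
            )
            tile.position = SIMD3<Float>(Float(col) * tileSize,
                                         -Float(row) * tileSize,
                                         0)
            wall.addChild(tile)
        }
    }
    return wall
}
```

If a hairline seam still shows with positions like these, one possible cause (a guess, not confirmed) is texture filtering sampling past the image edge at tile borders; giving neighboring images a texel of overlap is a common mitigation.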
0 replies · 0 boosts · 142 views · May ’25
Depth matrix accuracy with the iPhone 14 Pro and LiDAR
Hello Community, I'm currently working with the sample code "CapturingDepthUsingTheLiDARCamera" and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters. I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example, an object whose surfaces are perpendicular to each other appears with a sharper angle in the point cloud, around 60° instead of 90°. My question is: is this due to the general accuracy limitations of the LiDAR sensor, or could it be related to the sample code? To obtain the depth map, I'm using:

```swift
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
```

Thanks in advance for your help!
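For context, a minimal sketch of the standard pinhole unprojection this kind of point-cloud generation uses (the function and parameter names are illustrative, not from the sample code):

```swift
import simd

// Unproject a row-major Float32 depth map into camera-space points using the
// pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
// fx, fy, cx, cy come from the camera intrinsic matrix.
func unproject(depth: [Float], width: Int, height: Int,
               fx: Float, fy: Float, cx: Float, cy: Float) -> [SIMD3<Float>] {
    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)
    for v in 0..<height {
        for u in 0..<width {
            let z = depth[v * width + u]
            guard z.isFinite, z > 0 else { continue } // skip holes in the map
            points.append(SIMD3<Float>((Float(u) - cx) * z / fx,
                                       (Float(v) - cy) * z / fy,
                                       z))
        }
    }
    return points
}
```

One detail worth checking as a cause of skewed angles: AVCameraCalibrationData's intrinsicMatrix is expressed at intrinsicMatrixReferenceDimensions, which is usually much larger than the depth map, so fx, fy, cx, and cy must be scaled by the ratio of the depth map's resolution to those reference dimensions before unprojecting. Unscaled intrinsics warp angles in exactly this way.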
0 replies · 0 boosts · 99 views · Apr ’25
visionOS 26 - Rendering Issues related to Transparency
Summary

After updating to visionOS 26, we've encountered severe transparency rendering issues in RealityKit that did not exist in visionOS 2.6 and earlier. These regressions affect applications that dynamically control scene opacity (via OpacityComponent). Our app renders ultra-realistic apartment environments in real time, where users can walk or teleport inside 3D spaces. When the user moves above a speed threshold, we apply a global transparency effect to prevent physical collisions with real-world objects. Everything worked perfectly in visionOS 2.6; the problems appeared only after upgrading to 26.

Scene setup overview:
- The environment consists of multiple USDZ models (e.g., architecture, rooms, furniture).
- We manage LODs manually for performance (e.g., walls and floors always visible in full resolution, while rooms swap between low- and high-res versions based on user position and field of view).
- Transparency is achieved using OpacityComponent, applied dynamically when the user moves.
- Some meshes (e.g., portals to skyboxes, glass windows) use alpha materials.
- We also use OcclusionMaterial to prevent things from being seen through walls when the scene is transparent.

Observed behavior by scenario (I can share a video showing the results of each scenario if needed):

Scenario 1: severe flickering (root opacity)
- Setup: OpacityComponent applied to the root entity; no ModelSortGroupComponent used.
- Symptoms: strong flickering when transparency is active; triangles within the same mesh render at inconsistent opacity levels; it appears as if per-triangle alpha sorting is broken.
- Workaround: moving the OpacityComponent from the root to each individual USDZ entity removes the per-triangle flicker.
- Pros: no conflicts with portals or alpha materials.

Scenario 2: partially stable, but alpha conflicts
- Setup: OpacityComponent applied per USDZ entity; ModelSortGroupComponent(planarUIAlwaysBehind) applied to portal meshes; other entities have no ModelSortGroupComponent.
- Symptoms: frequent alpha-blending conflicts. Transparent surfaces behind other transparent surfaces flicker or disappear (for example, wine glasses behind glass doors: sometimes neither is rendered, or only one). Even opaque meshes behind glass flicker due to depth-buffer confusion. Alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
- Analysis: appears related to internal changes in alpha sorting or depth pre-pass behavior introduced in visionOS 26.
- Pros: most stable setup so far. Cons: still unreliable when OpacityComponent is active.

Scenario 3: layer-separation attempt (regression)
- Setup: same as Scenario 2, except entities with alpha materials were moved to separate USDZs and an explicit ModelSortGroupComponent order was set (alpha surfaces rendered last).
- Symptoms: transparent surfaces behind other transparent surfaces flicker or disappear; depth is completely broken when there is a large transparent surface; alpha materials sometimes render portals or the real world behind them, ignoring other geometry entirely.
- Workaround attempt: re-ordering and further separating models did not solve it.
- Pros: none; this setup makes transparency unusable.

Conclusion

There appears to be a regression in RealityKit's handling of transparency and sorting in visionOS 26, particularly when OpacityComponent is applied dynamically and scenes rely on multiple overlapping transparent materials. These issues did not exist prior to 26, and the same project (with no code changes) behaves correctly on previous versions.

Request

We'd appreciate any insight or confirmation from Apple engineers regarding:
- whether alpha-sorting or opacity-blending behavior changed in visionOS 26;
- whether there are new recommended practices for combining OpacityComponent with transparent materials;
- whether a bug report already exists for this regression.

Thanks in advance!
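For reference, a minimal sketch of the per-entity workaround from Scenario 1, with an assumed name (`fadeTargets` stands in for the individually loaded USDZ entities):

```swift
import RealityKit

// Apply the global fade to each loaded USDZ entity rather than the shared
// root, which is the workaround that removed the per-triangle flicker above.
func setGlobalTransparency(_ opacity: Float, on fadeTargets: [Entity]) {
    for entity in fadeTargets {
        if opacity >= 1.0 {
            // Fully opaque: drop the component so the renderer treats the
            // entity as opaque again instead of keeping it in the blend pass.
            entity.components.remove(OpacityComponent.self)
        } else {
            entity.components.set(OpacityComponent(opacity: opacity))
        }
    }
}
```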
0 replies · 0 boosts · 185 views · Nov ’25
onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case this information is helpful to other developers. As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues:

1. Memory leak (FB21557639): when onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked.

2. Multiple callbacks: when the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view-body execution, including those of immersive-space views that have previously been dismissed.

Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances. It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world-recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution; a sketch of this follows.
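A minimal sketch of that workaround, assuming onWorldRecenter takes a plain trailing closure as described; the notification name, view names, and debounce interval are illustrative:

```swift
import Combine
import SwiftUI

extension Notification.Name {
    // Hypothetical notification used only for forwarding recenter events.
    static let worldDidRecenter = Notification.Name("worldDidRecenter")
}

// Lives as an empty overlay in the primary WindowGroup, outside any
// ImmersiveSpace, so the leaky retention stays out of immersive views.
struct WorldRecenterForwarder: View {
    var body: some View {
        Color.clear
            .onWorldRecenter {
                NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
            }
    }
}

// The immersive view observes the notification, debounced as an extra
// precaution against the duplicate-callback issue.
struct ImmersiveContent: View {
    private let recenterEvents = NotificationCenter.default
        .publisher(for: .worldDidRecenter)
        .debounce(for: .milliseconds(100), scheduler: RunLoop.main)

    var body: some View {
        RealityView { _ in
            // Scene setup goes here.
        }
        .onReceive(recenterEvents) { _ in
            // React to the world recenter.
        }
    }
}
```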
0 replies · 0 boosts · 211 views · 1d
Odd image placeholder appearing when dismissing an ImmersiveSpace with an ImagePresentationComponent
Hello, there are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets. Below is our simple code displaying the ImagePresentationComponent; the artifacts appear briefly when the immersive space is dismissed.

```swift
import OSLog
import RealityKit
import SwiftUI

struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()
                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive
                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }
                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
```
0 replies · 1 boost · 205 views · Jul ’25
Logitech Muse: OS-level support?
I've been experimenting with the Muse pen and understand that my app can access it through a SpatialTrackingSession, but is there any current or planned OS-level support for devices like this as general UI input, the way game controllers are supported? For example, using the button as a tap analogue for SwiftUI views.
0 replies · 0 boosts · 103 views · Oct ’25
Material showing only half?
Hi, I'm currently implementing 180°/360° support for immersive video in my app. Implementing 360° was easy: I just assign a VideoMaterial to a flipped sphere. However, I'm a bit stuck on 180°. I want to implement it by setting the VideoMaterial on a hemisphere mesh, but since RealityKit doesn't yet provide a built-in generator such as MeshResource.generateHemisphere, my plan is to apply the VideoMaterial so the front half of the sphere is visible and the back half is transparent. I thought this would make my sphere look like a hemisphere, but I can't find a way to implement it. I would appreciate any advice, ideas, or information that might help.
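In case it helps while there's no built-in generator, a minimal sketch of building a hemisphere directly with MeshDescriptor, assuming a standard UV-sphere tessellation (the function name and tessellation counts are illustrative):

```swift
import Foundation
import RealityKit

// Build only the front half of a UV sphere (longitude -90°...+90°), so a
// VideoMaterial applied to it covers 180° with nothing drawn behind.
func generateHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let phi = Float.pi * Float(stack) / Float(stacks) // latitude, pole to pole
        for slice in 0...slices {
            // Sweep longitude over the front half only.
            let theta = -Float.pi / 2 + Float.pi * Float(slice) / Float(slices)
            positions.append(SIMD3<Float>(radius * sinf(phi) * sinf(theta),
                                          radius * cosf(phi),
                                          -radius * sinf(phi) * cosf(theta)))
            uvs.append(SIMD2<Float>(Float(slice) / Float(slices),
                                    1 - Float(stack) / Float(stacks)))
        }
    }
    for stack in 0..<stacks {
        for slice in 0..<slices {
            let a = UInt32(stack * (slices + 1) + slice)
            let b = a + UInt32(slices + 1)
            indices += [a, b, a + 1, a + 1, b, b + 1] // two triangles per quad
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```

If the video shows on the outside of the shell instead of the inside, flip the triangle winding (swap two indices per triangle) or mirror the entity's x scale, the same trick used with full video spheres.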
0 replies · 0 boosts · 77 views · Oct ’25
Vision Pro stuck on "Retrieving configuration"
Hi, after upgrading to visionOS 2.4.1 (from 1.0), my Vision Pro gets stuck on the "Retrieving configuration" screen. The Apple Store didn't take my case, since the device was sold in the USA and the product isn't yet available on the Italian market. I don't have a developer strap; how can I resolve the issue? Thank you.
0 replies · 0 boosts · 111 views · May ’25
How to make .blur(radius:) visually affect RealityView content?
According to the official documentation, the .blur(radius:) modifier applies a Gaussian blur to a view, so it should be applicable to a RealityView. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears blurred. Here's the test code:

```swift
struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}
```

My question: how can I make .blur(radius:) visually affect the content rendered in a RealityView? Can you provide a working example where .blur() visually affects any part of a RealityView? Thanks!
0 replies · 0 boosts · 113 views · May ’25
visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS, and watchOS, but it hasn't been updated for visionOS: https://developer.apple.com/design/human-interface-guidelines/widgets My potential widget use case is image-based, so I'm looking to better understand the optimal size, resolution, etc. that I'd need, particularly for the new visionOS-specific extra-large widget size.
0 replies · 0 boosts · 572 views · Jul ’25
Can't establish spatial connection after visionOS update
After updating to visionOS 26.2 Beta 2 (and Beta 3), I'm unable to establish a spatial connection to Vision Pro; this was working fine before the update. To test, I created a fresh spatialApp project from the Xcode template with zero modifications, but I'm hitting the same issue: the Vision Pro is discovered but won't connect. Am I forgetting to update the config somewhere? Any ideas what might be causing this and how to fix it? Thanks!

```
Warning: -[NSWindow makeKeyWindow] called on <NSWindow: 0xa1f811900> windowNumber=1b9 which returned NO from -[NSWindow canBecomeKeyWindow].
((processConfiguration != nil && configuration != nil) || (processConfiguration == nil && configuration == nil)) - /AppleInternal/Library/BuildRoots/4~CBS0ugAIF7BrQZjLe6r0lhPXO4GJmNDTovxYoV0/Library/Caches/com.apple.xbs/Sources/ExtensionKit/ExtensionKit/Source/HostViewController/Internal/EXHostSessionDriver.m:80: `processConfiguration` and `configuration` must be both non-nil or both nil
Unable to obtain a task name port right for pid 415: (os/kern) failure (0x5)
CCContextDeviceGroup.mm(291):+[CCContextDeviceGroup checkBinaryArchivesForDevice:withBundle:]: Failed to find any binary shader archive
```
0 replies · 0 boosts · 90 views · Nov ’25
RoomPlan crash on iOS 16 devices even when using available checks
I've encountered an unexpected crash with RoomPlan on iOS 16 devices. The odd part is that the code is protected by an availability check, since I'm using newer RoomPlan features.

Xcode error:

```
dyld[40588]: Symbol not found: _$s8RoomPlan08CapturedA0V16USDExportOptionsV5modelAEvgZ
```

I can reproduce it using the Apple sample code: https://developer.apple.com/documentation/roomplan/create-a-3d-model-of-an-interior-room-by-guiding-the-user-through-an-ar-experience

Modify RoomCaptureViewController.swift as follows. Remove:

```swift
try finalResults?.export(to: destinationURL, exportOptions: .parametric)
```

Add:

```swift
if #available(iOS 17.0, *) {
    try finalResults?.export(to: destinationURL, exportOptions: .model)
} else {
    try finalResults?.export(to: destinationURL, exportOptions: .parametric)
}
```

I would have expected this code to at least compile and run on older devices. When the app was targeting iOS 15, the availability checks worked as expected and the app was able to launch properly.
0 replies · 0 boosts · 155 views · Apr ’25