Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

ARMeshAnchors are very unreliable on iPad Pro (4th gen)
Hello, we are developing an AR app that requires the LiDAR meshes. Unfortunately, the ARMeshAnchors that allow us to retrieve the mesh data are very unreliable. Very often, the ARSession removes all ARMeshAnchors, and it takes anywhere from 5 s to 30 s for them to reappear. Plane detection (ARPlaneAnchors) still works fine, and camera tracking also behaves normally. I tried a basic ARKit sample app and got the same behavior as in our own app. Is this a known issue? Is there anything we can do to mitigate it? Thank you
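For anyone trying to reproduce or quantify the dropouts, a minimal logging sketch like the one below may help. It assumes a LiDAR-equipped device and simply timestamps mesh-anchor additions and removals so the 5–30 s gaps become visible in the console.

```swift
import ARKit

// Minimal session setup to reproduce and log mesh-anchor churn.
final class MeshAnchorLogger: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // Scene reconstruction requires a LiDAR device (e.g. iPad Pro 4th gen).
        guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else { return }
        config.sceneReconstruction = .mesh
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    // Timestamped logging makes the removal/reappearance gaps measurable.
    func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
        let meshes = anchors.compactMap { $0 as? ARMeshAnchor }
        if !meshes.isEmpty {
            print("\(Date()): removed \(meshes.count) ARMeshAnchor(s)")
        }
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        let meshes = anchors.compactMap { $0 as? ARMeshAnchor }
        if !meshes.isEmpty {
            print("\(Date()): added \(meshes.count) ARMeshAnchor(s)")
        }
    }
}
```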
1
0
293
Mar ’25
iOS needs to allow background Bluetooth scanning. I can't fully build my app.
iOS currently restricts background Bluetooth advertising and scanning to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences. The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep their app in the foreground.

Under the current system:
- Background BLE advertising is heavily throttled and can only transmit a limited payload.
- Background scanning intervals are sparse and unpredictable.
- Peer-to-peer proximity detection cannot be maintained reliably when apps are in the background.
- Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.

A proposed enhancement would be an "Enhanced Proximity Permission." This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other's proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.

Unlocking this capability would open up new categories of applications. Live events could offer automatic attendance tracking at concerts, conferences, or sports venues. Retail environments could support opt-in foot traffic analysis and dwell-time insights. Social apps could let users find friends at festivals, campuses, or other large venues. Safety applications could extend to crowd density monitoring and contact tracing beyond COVID-era needs. Gaming could offer real-world multiplayer experiences based on physical proximity, and transportation providers could verify rideshare pickups or measure public transit flows automatically.

Privacy safeguards would remain central: permissions would be time-boxed and expire after an event or session; a mandatory visual indicator would be displayed whenever proximity tracking is active; a user-facing dashboard would show all apps granted enhanced proximity access; permissions would automatically be revoked after a period of non-use; and only ephemeral tokens, not permanent identifiers, would be broadcast.

The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple's leadership in privacy through explicit user control and transparency. The current alternatives, requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications. Can anyone from Apple consider this change? Having to buy iBeacons is brutal and slows adoption. Please reconsider this for users who opt in.
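For context, here is roughly what today's constrained CoreBluetooth API allows in the background, as a sketch. The service UUID is a hypothetical placeholder; the real constraints are that background scans must filter on specific services (wildcard scans are not delivered) and duplicate filtering is forced on, so discovery callbacks arrive sparsely.

```swift
import CoreBluetooth

// Sketch of today's constrained background scanning. Requires the
// "bluetooth-central" entry in UIBackgroundModes in Info.plist.
final class ProximityScanner: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!
    // Hypothetical service UUID your app advertises from peer devices.
    private let serviceUUID = CBUUID(string: "D4F2A9E0-0000-1000-8000-00805F9B34FB")

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // In the background, a scan MUST name specific services;
        // `services: nil` (wildcard) discoveries are not delivered.
        central.scanForPeripherals(withServices: [serviceUUID], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        // In the background these callbacks are sparse: duplicate filtering
        // is forced on, so repeated sightings of a peer are coalesced.
        print("Nearby peer \(peripheral.identifier), RSSI \(RSSI)")
    }
}
```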
1
0
1.1k
Sep ’25
Realistic Water Shading
Hi, I'm trying to create a water shader using the shader graph in Reality Composer Pro, but quite a few of the features you would need for realistic water rendering appear to be missing. One big issue is the lack of a way to create refraction. We can easily control the transparency of the water by changing the opacity, but how can we distort what we see through the water? I can't find any obvious solution for that. In Unity, there is a node called HD Scene Color, which is basically the scene rendered to an offscreen buffer; you can sample it on the water surface and distort it to get a refraction effect. I guess the Background Blur node could be used for something like this if we could turn off the blur and distort it, but there's no control for the blur and no control for the texture coordinates. Am I missing something? Any ideas are welcome :)
1
0
507
Jan ’25
Need to rotate child of a 3D mesh
I am creating a Vision Pro app with a 3D model that has a mesh hierarchy of head, hands, feet, etc. I want the character to look toward the camera, but I am not able to access the head of the character through SceneKit or RealityKit. When I try to print the names of the child meshes, it only prints down to the character; it does not iterate through the individual body parts. Can anyone help?
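One likely explanation: a rigged character's bones are skeletal joints on the ModelEntity, not child entities, so they never appear when you iterate children. A sketch of inspecting the skeleton and driving a joint directly follows; the joint names depend entirely on your asset's rig, and the "head" suffix here is an assumption.

```swift
import RealityKit

// The bones are not child entities; they are joints on the ModelEntity's
// skeleton, which is why printing children stops at the character.
func inspectJoints(of root: Entity) {
    if let model = root as? ModelEntity {
        // Joint names are full paths, e.g. "root/hips/spine/neck/head".
        print("Joints:", model.jointNames)
    }
    for child in root.children { inspectJoints(of: child) }
}

// Hypothetical head rotation: find the head joint by name suffix and
// overwrite its local rotation (rig names vary per asset).
func rotateHead(of model: ModelEntity, to rotation: simd_quatf) {
    guard let index = model.jointNames.firstIndex(where: { $0.hasSuffix("head") }) else { return }
    var transform = model.jointTransforms[index]
    transform.rotation = rotation
    model.jointTransforms[index] = transform
}
```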
1
0
155
Sep ’25
Unexpected behavior when writing entities and loading .reality files
I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, when the entity file gets overwritten, the app can no longer load it correctly. Here is my code for saving the entity:

```swift
import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing")
                    }
                }
            }
        }
        .padding()
    }
}
```

Every time I press "Save Entity", I see a warning similar to:

```
Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.
```

When I open the immersive space, I attempt to load the same file:

```swift
import SwiftUI
import RealityKit
import UniformTypeIdentifiers

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            guard let type = UTType.realityFile.preferredFilenameExtension else { return }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")
            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }
            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}
```

I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the immersive space attempts to load the new entity are:

```
Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
```
The order of operations to make this happen:

1. Launch app.
2. Press "Save Entity" to save the entity.
3. "Open Immersive Space" to view the entity.
4. Press "Save Entity" to overwrite the entity.
5. "Open Immersive Space" to view the entity: failed asset load request.

Also:

1. Launch app; the entity should still be saved from the last time the app ran.
2. "Open Immersive Space" to view the entity.
3. Press "Save Entity" to overwrite the entity.
4. "Open Immersive Space" to view the entity: failed asset load request.

NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
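A workaround worth trying, sketched under the assumption that the failure comes from overwriting the compiled archive in place: write each save to a fresh temporary URL and atomically swap it over the old file, so the loader never sees a partially replaced archive. This reuses the same Entity.write(to:) call as the code above; whether it addresses the NetworkAssetManager warning is untested.

```swift
import Foundation
import RealityKit

// Hedged workaround sketch: write to a temporary file, then atomically
// replace the existing .reality file so stale compiled entries are never
// read mid-overwrite. Assumes the root cause is the in-place overwrite.
func saveEntityAtomically(_ entity: Entity, to fileURL: URL) async throws {
    let tempURL = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension(fileURL.pathExtension)
    try await entity.write(to: tempURL)
    _ = try FileManager.default.replaceItemAt(fileURL, withItemAt: tempURL)
}
```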
0
0
165
Aug ’25
How to mix Animation and IKRig in RealityKit
I want an AR character to be able to look at a position while still playing the character's animation. So far, I have managed to manually adjust a single bone rotation using

```swift
skeletalComponent.poses.default = Transform(
    scale: baseTransform.scale,
    rotation: lookAtRotation,
    translation: baseTransform.translation
)
```

which I run at every rendering update while a full-body animation is playing. But of course, hardcoding a single joint to point in a direction (in my case the head) does not look as nice as running some inverse kinematics that includes the hips, neck, and head joints. I found some good IKRig code in "Composing interactive 3D content with RealityKit and Reality Composer Pro." But when I try to adjust rigs while animations are playing, the animations usually win over the IKRig's changes to the mesh.
1
0
494
Aug ’25
visionOS: Unable to programmatically close child WindowGroup when parent window closes
Hi, I'm struggling with visionOS window management and need help with closing child windows programmatically.

App Structure

My app has a main/sub window hierarchy:
- AWindow (Home/Main)
- BWindow (Main feature window)
- CWindow (Tool window, child of BWindow)

Navigation flow:
- AWindow → BWindow (switch, 1 window on screen)
- BWindow → CWindow (opens child, 2 windows on screen)

I want BWindow and CWindow to be separate movable windows (not a sheet or popover) so users can position them independently in space.

The Problem

- CWindow doesn't close when BWindow is closed by tapping the X button below the app (next to the window bar): the user clicks X on BWindow, BWindow closes, but CWindow remains orphaned on screen.
- I can close CWindow programmatically when switching BWindow back to AWindow.

App launch issue:
- After closing both windows, CWindow is remembered as the last window.
- Reopening the app shows only CWindow instead of BWindow, so the user gets stuck in CWindow with no way back to BWindow.

What I've Tried

The dismissWindow environment action in cleanup, but it's not working:

```swift
// In BWindow.swift
.onDisappear {
    if windowManager.isWindowOpen("cWindow") {
        dismissWindow(id: "cWindow")
    }
}
```

My app structure code now:

```swift
// in MyNameApp.swift
@main
struct MyNameApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") { AWindow() }
        WindowGroup(id: "bWindow") { BWindow() }
        WindowGroup(id: "cWindow") { CWindow() }
    }
}

// WindowStateManager.swift
class WindowStateManager: ObservableObject {
    static let shared = WindowStateManager()
    @Published private var openWindows: Set<String> = []
    @Published private var windowDependencies: [String: String] = [:]

    private init() {}

    func markWindowAsOpen(_ id: String) {
        markWindowAsOpen(id, parent: nil)
    }

    func markWindowAsClosed(_ id: String) {
        openWindows.remove(id)
        windowDependencies[id] = nil
    }

    func isWindowOpen(_ id: String) -> Bool {
        let isOpen = openWindows.contains(id)
        return isOpen
    }

    func markWindowAsOpen(_ id: String, parent: String? = nil) {
        openWindows.insert(id)
        if let parentId = parent {
            windowDependencies[id] = parentId
        }
    }

    func getParentWindow(of childId: String) -> String? {
        let parent = windowDependencies[childId]
        return parent
    }

    func getChildWindows(of parentId: String) -> [String] {
        let children = windowDependencies.compactMap { key, value in
            value == parentId ? key : nil
        }
        return children
    }

    func setNextWindowParent(_ parentId: String) {
        UserDefaults.standard.set(parentId, forKey: "nextWindowParent")
    }

    func getAndClearNextWindowParent() -> String? {
        let parent = UserDefaults.standard.string(forKey: "nextWindowParent")
        UserDefaults.standard.removeObject(forKey: "nextWindowParent")
        return parent
    }

    func forceCloseChildWindows(of parentId: String) {
        let children = getChildWindows(of: parentId)
        for child in children {
            markWindowAsClosed(child)
            NotificationCenter.default.post(
                name: Notification.Name("ForceCloseWindow"),
                object: nil,
                userInfo: ["windowId": child]
            )
            forceCloseChildWindows(of: child)
        }
    }

    func hasMainWindowOpen() -> Bool {
        let mainWindows = ["main", "bWindow"]
        return mainWindows.contains { isWindowOpen($0) }
    }

    func cleanupOrphanWindows() {
        for (child, parent) in windowDependencies {
            if isWindowOpen(child) && !isWindowOpen(parent) {
                NotificationCenter.default.post(
                    name: Notification.Name("ForceCloseWindow"),
                    object: nil,
                    userInfo: ["windowId": child]
                )
                markWindowAsClosed(child)
            }
        }
    }
}

// BWindow.swift
struct BWindow: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @ObservedObject private var windowManager = WindowStateManager.shared

    var body: some View {
        VStack {
            Button("Open C Window") {
                windowManager.setNextWindowParent("bWindow")
                openWindow(id: "cWindow")
            }
        }
        .onAppear {
            windowManager.markWindowAsOpen("bWindow")
        }
        .onDisappear {
            windowManager.markWindowAsClosed("bWindow")
            windowManager.forceCloseChildWindows(of: "bWindow")
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            if newValue == .background || newValue == .inactive {
                windowManager.forceCloseChildWindows(of: "bWindow")
            }
        }
    }
}

// CWindow.swift
import SwiftUI

struct cWindow: View {
    @ObservedObject private var windowManager = WindowStateManager.shared
    @State private var shouldClose = false

    var body: some View {
        // Content
    }
    .onDisappear {
        windowManager.markWindowAsClosed("cWindow")
        NotificationCenter.default.removeObserver(
            self,
            name: Notification.Name("ForceCloseWindow"),
            object: nil
        )
    }
    .onChange(of: scenePhase) { oldValue, newValue in
        if newValue == .background {
        }
    }
    .onAppear {
        let parent = windowManager.getAndClearNextWindowParent()
        windowManager.markWindowAsOpen("cWindow", parent: parent)
        NotificationCenter.default.addObserver(
            forName: Notification.Name("ForceCloseWindow"),
            object: nil,
            queue: .main
        ) { notification in
            if let windowId = notification.userInfo?["windowId"] as? String,
               windowId == "cWindow" {
                shouldClose = true
            }
        }
    }
    .onChange(of: shouldClose) { _, newValue in
        if newValue {
            dismissWindow()
        }
    }
}
```

The logs show everything executes correctly, but CWindow remains visible on screen.

Questions

1. Why doesn't dismissWindow(id:) work in cleanup scenarios?
2. Is there a proper way to create window relationships, like parent-child relationships, in visionOS?
3. How can I ensure main windows open on app launch instead of tool windows?
4. What's the recommended pattern for dependent windows in visionOS?

Environment: Xcode 16.2, visionOS 2.0, SwiftUI
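One possible direction for questions 2 and 4, sketched under the assumption that BWindow and CWindow do not strictly need to be visible at the same moment: visionOS 2's pushWindow environment action creates a system-managed relationship between the originating window and the pushed one, so no manual dependency bookkeeping is needed and the parent is restored when the child closes.

```swift
import SwiftUI

// Hedged sketch using pushWindow (visionOS 2+). The trade-off versus
// openWindow is that the originating window is hidden while the pushed
// window is open, and the system brings it back when the child dismisses.
struct BWindowAlternative: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Open C Window") {
            pushWindow(id: "cWindow") // "cWindow" is the WindowGroup id from above
        }
    }
}
```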
2
0
332
Aug ’25
Volumetric window not sharing in SharePlay session on visionOS
I've been struggling with this for far too long, so I've decided to finally come here and see if anyone can point me to the documentation I'm missing. I'm sure it's something simple, but I just can't figure it out. I can SharePlay our test app with my brother (device to device), but when I open a volumetric window, it says "not shared" under it. I assume fixing this will likely fix the video sharing problem we have as well. Everything else works so smoothly, but SharePlay has been such a struggle for me. It's the last piece of the puzzle before we can put it on the App Store.
2
0
119
Sep ’25
Build not working
```
[xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
[xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
Tool exited with code 1
```
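A likely fix, sketched under the assumption that the error comes from a Reality Composer Pro content package still declaring visionOS 1.0: raise the deployment target in the package's platforms array so the newer component is allowed. The package and target names below are the template defaults and may differ in your project.

```swift
// swift-tools-version:5.9
// Package.swift for the Reality Composer Pro content package.
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        // EnvironmentLightingConfiguration requires visionOS 2.0 or later,
        // so .v1 here triggers the "compatibility faults" compile error.
        .visionOS(.v2)
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)
```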
1
0
410
Jan ’25
RealityKit fullscreen layer
Hi! I'm currently trying to render another XR scene in front of a RealityKit one. Specifically, I'm anchoring a plane to the head, with a shader that displays side-by-side images for the left and right eyes. By default, the camera has a near plane, so I can't directly draw at z=0. Is there a way to change the camera's near plane? Or maybe there is a better solution for overlaying an image/texture per eye? Ideally, I would layer some kind of CompositorLayer on top of RealityKit, but as far as I know that's sadly not possible. Thanks in advance, and have a good day!
2
0
299
Jul ’25
"Presenting images in RealityKit" sample no longer runs on device
After updating to the latest visionOS beta, visionOS 26 Beta 4 (23M5300g), the "Presenting images in RealityKit" sample from the following link no longer works on device due to an error: https://developer.apple.com/documentation/RealityKit/presenting-images-in-realitykit

Expected / Previous: The application builds and runs on device, working as described in the documentation.

Actual: The application builds, but does not run on device due to an error (shown in screenshot): "Thread 1: EXC_BAD_ACCESS (code=1, address=0xb)". The application still runs on the simulator, but not on device. When launching the app from Xcode, it builds and installs correctly but hangs with that error. When launching the app from the Home Screen, it does not load and immediately returns to the Home Screen.

This Xcode project previously ran with no changes to the code; the only change was updating the device to visionOS 26 Beta 4 (23M5300g). Is anyone else experiencing this issue?
4
0
198
Aug ’25
visionOS Main Camera Enterprise API: Development license into distribution for Business Store
Hello, we've been working for months now on an app for the Vision Pro (it's been great, btw!). We already have an app in the App Store for iOS and have been migrating our platform from the Microsoft HoloLens 2 to the AVP: https://apps.microsoft.com/detail/9NPPP031VHD1 We require Main Camera access and have already obtained the Enterprise license for development purposes. Unfortunately, we cannot publish our business app (which uses an Enterprise API) under the same name/bundle ID as our iOS app, because it would conflict with our current distribution method. We arrived at the conclusion that we need a new Enterprise license under a different bundle ID to create a new app for the Business Store. Has anyone been in the same boat and tried to publish to the Business Store while already having an app in the public App Store under the same name? We applied for another license for distribution under another name (with "Pro" at the end), but the application has been stuck in limbo for over a month now (probably because the new bundle ID doesn't have any track record). Anyhow, thanks for any help; we're open to suggestions on how to proceed!
0
0
441
Feb ’25
[WWDC25] For GuessTogether, can you initiate a FaceTime call via the custom SharePlay button?
Hello, in the GuessTogether source code, it seems the code assumes that you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If I'm not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing after throwing warnings. Is this intended behavior? If so, how do I make it so that pressing the button can also initiate a FaceTime call? Is this allowed? Thank you!
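A sketch of what I would expect to work, assuming the sample's activity type is named GuessTogetherActivity (the name is an assumption): activating the GroupActivity directly, rather than relying on an existing session, should let the system offer to start a FaceTime call when none is active.

```swift
import SwiftUI
import GroupActivities

// Hedged sketch: GroupActivity.activate() prompts the system to start a
// call/session if one isn't already running, instead of silently failing.
struct StartSharePlayButton: View {
    var body: some View {
        Button("Play Guess Together") {
            Task {
                do {
                    // GuessTogetherActivity is assumed to be the sample's
                    // GroupActivity-conforming type.
                    _ = try await GuessTogetherActivity().activate()
                } catch {
                    print("SharePlay activation failed: \(error)")
                }
            }
        }
    }
}
```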
3
0
111
Sep ’25
Immersive environment learning material
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
1
0
73
Jul ’25
Build Vision Pro failed
```
error: [xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
error: [xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
error: Tool exited with code 1
```
5
0
622
Jul ’25
Setting clip shape of a RealityView
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos

I would also like to add a corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):

```swift
struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0
        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)
            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}
```

This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to the containers. How can I properly apply a clip shape to RealityViews like this? Thanks!
3
0
462
Feb ’25
new algorithm significantly improves PhotogrammetrySession?
I noticed that the latest macOS beta 3 release notes include this update: "A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn't available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)" I am not noticing any difference from before with the reconstructions I tested, so I am assuming it's falling back to the old model, but there is no way to tell from the logs whether the new model's download succeeded or failed. Do you have any more information on what was improved here, with some examples of what we should be looking for? Also, how can I confirm that the download of the new model has not failed?
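For anyone wanting to A/B the same image set across OS versions, a minimal reconstruction run might look like the sketch below. Both URLs are placeholders, and this assumes macOS, where PhotogrammetrySession is available.

```swift
import RealityKit

// Minimal sketch of a reconstruction run for comparing output quality
// before and after the beta update; paths are placeholders.
func reconstruct() async throws {
    let imagesFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
    let outputModel = URL(fileURLWithPath: "/path/to/model.usdz")

    let session = try PhotogrammetrySession(input: imagesFolder)
    try session.process(requests: [.modelFile(url: outputModel, detail: .full)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished:", outputModel.path)
        case .requestError(let request, let error):
            print("Request \(request) failed:", error)
        default:
            break
        }
    }
}
```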
0
0
317
Jul ’25