Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Having an issue with using a custom component with Reality Composer Pro
We have a project that is currently built as an XCFramework. The framework contains a custom component to be used with entities in Reality Composer Pro. I have tried to set the RCP Package.swift file to reference the framework package in its dependencies, but nothing I do with the folder path to reference the code works. Do I need to change the project to use Swift source code instead of an XCFramework? The component needs to live in the framework because a class in the framework works directly with the custom component. I am able to reference the XCFramework as a Swift package from other projects.
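For reference, a minimal Package.swift sketch (names and paths hypothetical) of wrapping a prebuilt XCFramework as a binary target that another package could list in its dependencies; note that a binary target exposes compiled code only, which may be exactly why RCP cannot see the component's source:

// swift-tools-version: 5.9
// Hypothetical sketch: "MyComponentKit" and the artifact path are placeholders.
import PackageDescription

let package = Package(
    name: "MyComponentKit",
    platforms: [.visionOS(.v1)],
    products: [
        .library(name: "MyComponentKit", targets: ["MyComponentKitBinary"])
    ],
    targets: [
        // A binary target ships the compiled XCFramework, not its sources.
        .binaryTarget(
            name: "MyComponentKitBinary",
            path: "Artifacts/MyComponentKit.xcframework")
    ]
)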
2
0
125
Apr ’25
Shared/GroupImmersive Space – Query Local Device Transform
Hi, I am in the process of implementing SharePlay in our app. The shared experience opens an immersive space, and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true, so visionOS establishes a shared coordinate space for the immersive space. From the docs: "To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session". There are cases where we want to position content in front of the user (independent of the shared session, and for each user individually). Normally we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor.originFromAnchorTransform to position content in front of the user (plus some Z offset and smooth interpolation). This works fine in non-SharePlay instances, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, and I end up with a transform that might be offset. Here is a video of the issue in action: https://streamable.com/205r2p The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous); this works regardless of the call, and the entity is always placed based on the head position. The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor; as you can see, it's offset. Is there any way I can query this transform locally for the user during a FaceTime call? I would also like to know whether it's possible to disable this automatic entity transform syncing behavior; setting entity.synchronization = nil results in the entity not showing up at all. https://developer.apple.com/documentation/realitykit/synchronizationcomponent Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach? Thank you!
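For context, a minimal sketch of the non-SharePlay positioning flow described above (not a fix for the offset). It assumes the ARKitSession has already been started from an immersive space with try await session.run([worldTracking]):

import ARKit
import QuartzCore
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func placeInFrontOfDevice(_ entity: Entity) {
    guard let deviceAnchor = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()) else { return }

    // During SharePlay, this transform is expressed relative to the *shared*
    // immersive-space origin, which is what produces the offset in the video.
    var transform = Transform(matrix: deviceAnchor.originFromAnchorTransform)
    transform.translation += transform.rotation.act([0, 0, -1]) // 1 m ahead
    entity.transform = transform
}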
2
0
292
Oct ’25
Volumetric window not sharing in SharePlay session for VisionOS
I've been struggling with this for far too long, so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something simple, but I just can't figure it out. I can SharePlay our test app with my brother (device to device), but when I open a volumetric window, it says "not shared" under it. I assume fixing this will likely fix the video-sharing problem we have as well. Everything else works smoothly, but SharePlay has been such a struggle for me. It's the last piece of the puzzle before we can put it on the App Store.
2
0
132
Sep ’25
Using Vision Pro to detect distance to real-life objects
Is it possible to detect the distance from the Vision Pro to real-life objects and people? I tried using scene.raycast to perform a raycast forward from the center of the viewport, but it doesn't seem to react to real-life objects, only entities. I see it mentioned here: https://developer.apple.com/forums/thread/776807?answerId=829576022#829576022 that a raycast combined with scene reconstruction should allow me to measure that distance, as long as the object is non-moving. How could I accomplish that?
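For what it's worth, a hedged sketch of the approach from the linked thread: feed SceneReconstructionProvider meshes into collision-only entities so that scene.raycast has real-world geometry to hit (immersive-space setup and session lifetime handling omitted):

import ARKit
import RealityKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()
var meshEntities: [UUID: ModelEntity] = [:]

// Mirror each scene-reconstruction MeshAnchor as an invisible collision entity.
func mirrorSceneMeshes(into root: Entity) async throws {
    try await session.run([sceneReconstruction])
    for await update in sceneReconstruction.anchorUpdates {
        let anchor = update.anchor
        if update.event == .removed {
            meshEntities[anchor.id]?.removeFromParent()
            meshEntities[anchor.id] = nil
            continue
        }
        guard let shape = try? await ShapeResource.generateStaticMesh(from: anchor)
        else { continue }
        let entity = meshEntities[anchor.id] ?? ModelEntity()
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
        if meshEntities[anchor.id] == nil {
            meshEntities[anchor.id] = entity
            root.addChild(entity)
        }
    }
}

With these entities in place, a raycast from the viewport center should report hit distances against static real-world surfaces, which matches the caveat in the linked answer.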
2
0
173
Apr ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation covers how to attach a custom TabletopNetworkSessionCoordinator, which can be used in addition to TabletopGame.MultiplayerDelegate. But so far, we have been unable to create these custom coordinators or get a delegate that works. Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we haven't found a way to access and control, for example, the arbiter, seats, or player events, among other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
2
0
246
Apr ’25
TextComponent on iOS/macOS pixelated when viewed from short distance
Hello, I've been tinkering a bit with TextComponent. Based on the docs, it seems like this component should always render sharp text, no matter how close the user gets: "RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location." And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just rendered once as a plain image texture. Can anyone tell me if this is expected behavior or a bug? Here are two screenshots for comparison (iPhone and Vision Pro). Thanks!
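For context, a minimal repro sketch of the setup described above. The exact TextComponent property names are an assumption here (the API takes an AttributedString, as far as I know), so treat this as illustrative only:

import Foundation
import RealityKit

// Minimal sketch: one entity with a TextComponent. Per the report above, the
// same entity renders crisply on visionOS but pixelates up close on iOS/macOS.
func makeTextEntity() -> Entity {
    let entity = Entity()
    var text = TextComponent()            // assuming the default initializer
    text.text = AttributedString("Hello") // assuming the `text` property
    entity.components.set(text)
    return entity
}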
2
2
127
Aug ’25
Reading image files outside of app content
Hi there. Thanks to amazing help from you guys, I've managed to code a 360 image carousel where the user can browse 360 images located inside the project package. Is there a way to access the filesystem on AVP outside the app? I know about FileManager, and I can get access to .documentsDirectory, but how do I access the documents folder from the "Files" app on the AVP? My goal is to read images from a hardcoded folder location on the AVP, so that the user never has to select the images themselves. I know this may not be the "right" way to do this; the app is supposed to be "foolproof" with a minimum of user interaction. The only way to change the images should be to change the contents of the hardcoded image folder. I hope this makes sense =) Thanks in advance! Regards, Kim
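One hedged sketch of a common pattern for this: adding UIFileSharingEnabled and LSSupportsOpeningDocumentsInPlace (both YES) to Info.plist makes the app's Documents folder visible in the Files app, after which a fixed subfolder can be read directly. "Panoramas" is a hypothetical folder name:

import Foundation
import UIKit

// Read every image from a fixed subfolder of Documents. With the two
// Info.plist keys above, the user can drop files into this folder via Files.
func loadPanoramaImages() -> [UIImage] {
    let folder = URL.documentsDirectory.appending(path: "Panoramas")
    let urls = (try? FileManager.default.contentsOfDirectory(
        at: folder, includingPropertiesForKeys: nil)) ?? []
    return urls
        .filter { ["jpg", "jpeg", "png"].contains($0.pathExtension.lowercased()) }
        .compactMap { UIImage(contentsOfFile: $0.path) }
}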
2
0
282
Feb ’25
Difference in ARKit plane detection from iPhone 8 to iPhone 15
I am developing an ARKit-based application that requires plane detection of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8 Plus. With those devices, ARKit rapidly detected the plane of the tabletop when it was only 8 to 10 inches away. Using an iPhone 15 with the same code, it seems to require me to move the phone more like 15 to 16 inches away before detecting the plane of the table. This is an awkward motion for a user seated at a table. To confirm that it was not a quirk of my own code, I verified that the same behavior occurs with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, do you have suggestions to improve the situation?
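For reference, a sketch of the baseline configuration this comparison assumes: stock horizontal plane detection with no options that should affect detection distance between devices:

import ARKit

// Baseline plane-detection setup. `session` would be the ARView/ARSCNView
// session already running in the app.
func runPlaneDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal]
    session.run(configuration)
}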
2
0
505
Feb ’25
visionOS: Unable to programmatically close child WindowGroup when parent window closes
Hi, I'm struggling with visionOS window management and need help with closing child windows programmatically.

App Structure
My app has a main/sub window hierarchy:
AWindow (home/main)
BWindow (main feature window)
CWindow (tool window, child of BWindow)
Navigation flow: AWindow → BWindow (switch, 1 window on screen), then BWindow → CWindow (opens child, 2 windows on screen). I want BWindow and CWindow to be separate movable windows (not a sheet/popover) so users can position them independently in space.

The Problem
CWindow doesn't close when BWindow is closed by tapping the X button below the app (next to the window bar): the user clicks X on BWindow, BWindow closes, but CWindow remains and becomes orphaned on screen. I can close CWindow programmatically when switching BWindow back to AWindow.
App launch issue: after closing both windows, CWindow is remembered as the last window, so reopening the app shows only CWindow instead of BWindow, and the user gets stuck in CWindow with no way back to BWindow.

I've Tried
Environment dismissWindow in cleanup, but it's not working:

// In BWindow.swift
.onDisappear {
    if windowManager.isWindowOpen("cWindow") {
        dismissWindow(id: "cWindow")
    }
}

My current code (reformatted; missing @Environment declarations added so it compiles):

// MyNameApp.swift
@main
struct MyNameApp: App {
    var body: some Scene {
        WindowGroup(id: "aWindow") { AWindow() }
        WindowGroup(id: "bWindow") { BWindow() }
        WindowGroup(id: "cWindow") { CWindow() }
    }
}

// WindowStateManager.swift
class WindowStateManager: ObservableObject {
    static let shared = WindowStateManager()
    @Published private var openWindows: Set<String> = []
    @Published private var windowDependencies: [String: String] = [:]

    private init() {}

    func markWindowAsOpen(_ id: String) {
        markWindowAsOpen(id, parent: nil)
    }

    func markWindowAsOpen(_ id: String, parent: String? = nil) {
        openWindows.insert(id)
        if let parentId = parent {
            windowDependencies[id] = parentId
        }
    }

    func markWindowAsClosed(_ id: String) {
        openWindows.remove(id)
        windowDependencies[id] = nil
    }

    func isWindowOpen(_ id: String) -> Bool {
        openWindows.contains(id)
    }

    func getParentWindow(of childId: String) -> String? {
        windowDependencies[childId]
    }

    func getChildWindows(of parentId: String) -> [String] {
        windowDependencies.compactMap { key, value in
            value == parentId ? key : nil
        }
    }

    func setNextWindowParent(_ parentId: String) {
        UserDefaults.standard.set(parentId, forKey: "nextWindowParent")
    }

    func getAndClearNextWindowParent() -> String? {
        let parent = UserDefaults.standard.string(forKey: "nextWindowParent")
        UserDefaults.standard.removeObject(forKey: "nextWindowParent")
        return parent
    }

    func forceCloseChildWindows(of parentId: String) {
        for child in getChildWindows(of: parentId) {
            markWindowAsClosed(child)
            NotificationCenter.default.post(
                name: Notification.Name("ForceCloseWindow"),
                object: nil,
                userInfo: ["windowId": child])
            forceCloseChildWindows(of: child)
        }
    }

    func hasMainWindowOpen() -> Bool {
        let mainWindows = ["main", "bWindow"]
        return mainWindows.contains { isWindowOpen($0) }
    }

    func cleanupOrphanWindows() {
        for (child, parent) in windowDependencies {
            if isWindowOpen(child) && !isWindowOpen(parent) {
                NotificationCenter.default.post(
                    name: Notification.Name("ForceCloseWindow"),
                    object: nil,
                    userInfo: ["windowId": child])
                markWindowAsClosed(child)
            }
        }
    }
}

// BWindow.swift
struct BWindow: View {
    @Environment(\.openWindow) private var openWindow       // added for compilation
    @Environment(\.dismissWindow) private var dismissWindow // added for compilation
    @Environment(\.scenePhase) private var scenePhase       // added for compilation
    @ObservedObject private var windowManager = WindowStateManager.shared

    var body: some View {
        VStack {
            Button("Open C Window") {
                windowManager.setNextWindowParent("bWindow")
                openWindow(id: "cWindow")
            }
        }
        .onAppear {
            windowManager.markWindowAsOpen("bWindow")
        }
        .onDisappear {
            windowManager.markWindowAsClosed("bWindow")
            windowManager.forceCloseChildWindows(of: "bWindow")
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            if newValue == .background || newValue == .inactive {
                windowManager.forceCloseChildWindows(of: "bWindow")
            }
        }
    }
}

// CWindow.swift
import SwiftUI

struct CWindow: View {
    @Environment(\.dismissWindow) private var dismissWindow // added for compilation
    @ObservedObject private var windowManager = WindowStateManager.shared
    @State private var shouldClose = false

    var body: some View {
        VStack {
            // Content
        }
        .onAppear {
            let parent = windowManager.getAndClearNextWindowParent()
            windowManager.markWindowAsOpen("cWindow", parent: parent)
            NotificationCenter.default.addObserver(
                forName: Notification.Name("ForceCloseWindow"),
                object: nil,
                queue: .main) { notification in
                if let windowId = notification.userInfo?["windowId"] as? String,
                   windowId == "cWindow" {
                    shouldClose = true
                }
            }
        }
        .onDisappear {
            windowManager.markWindowAsClosed("cWindow")
            NotificationCenter.default.removeObserver(
                self,
                name: Notification.Name("ForceCloseWindow"),
                object: nil)
        }
        .onChange(of: shouldClose) { _, newValue in
            if newValue {
                dismissWindow()
            }
        }
    }
}

The logs show everything executes correctly, but CWindow remains visible on screen.

Questions
Why doesn't dismissWindow(id:) work in cleanup scenarios?
Is there a proper way to create parent-child window relationships in visionOS?
How can I ensure main windows open on app launch instead of tool windows?
What's the recommended pattern for dependent windows in visionOS?

Environment: Xcode 16.2, visionOS 2.0, SwiftUI
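Two hedged sketches that might be relevant here, not confirmed fixes. The launch issue looks addressable with visionOS 2's scene-restoration modifiers, and the cleanup issue may come down to calling dismissWindow from a scene that is still alive (by the time a closing window's onDisappear runs, its Environment actions may no longer do anything):

import SwiftUI

// Sketch 1: keep the tool window out of state restoration so the app never
// relaunches into CWindow (visionOS 2 scene modifiers).
WindowGroup(id: "cWindow") {
    CWindow()
}
.restorationBehavior(.disabled)      // don't restore this window at launch
.defaultLaunchBehavior(.suppressed)  // never use it as the launch window

// Sketch 2: dismiss the child in reaction to B's scene phase, from a view
// whose scene still exists. "BWindowRootView" is a hypothetical wrapper.
struct BWindowRootView: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        BWindow()
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    dismissWindow(id: "cWindow")
                }
            }
    }
}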
2
0
349
Aug ’25
Apply a mesh to real-world people
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2
0
106
Apr ’25
spatial-backdrop feature available yet?
In the WWDC25 session "What's new for the spatial web", the presenter showed how to create an immersive environment for a web page by adding this to the page's HEAD section: <link rel="spatial-backdrop" href="office.usdz" environmentmap="lighting.hdr"> My first attempt failed, and I am trying to track down why. Before I search all the potential failure paths, I wanted to ask the community: is this feature available in the latest visionOS 26 beta? I haven't seen anyone talk about their use of the feature yet.
2
0
233
Aug ’25
Multi-frame BlendShape animation failing in Reality Composer Pro
Goal: to render, in an Apple Vision Pro app, the solid-mechanics 3D simulation results coming from an FEA code.
Starting point: I have surface VTKs with deformations on each node. Each time step has a mesh with the nodal coordinates. This translates straightforwardly to a USD MeshSequence. Unfortunately, the results cannot be simplified to a scaling or linear transformation, as you would do with other game-oriented animations.
Tools: right now, I am using Xcode and Reality Composer Pro (RCP) to build the scenes.
Technical limitations: I am aware that RCP can do animations with BlendMesh and skeletons, and that MeshSequence is not a problem.
Progress: converting the sequence of VTK meshes to a USD MeshSequence is straightforward, and this animates correctly in Preview and Blender (see screenshot). I managed to convert from the MeshSequence to multiple keys and a BlendMesh; this also animates correctly in Blender and Preview. Unfortunately, the BlendMesh of multiple blended meshes shows a zero animation time in RCP (see screenshot below). Also see below the usda file scheme for the animation; of course, I am not showing full vectors such as faceVertexCounts, faceVertexIndices, and normals.
Question: what is the right setup to create a BlendMesh animation that RCP will correctly import and animate, from a set of meshes or multiple key shapes?
(Screenshots: the animation playing in Blender; RCP showing zero animation time.)

#usda 1.0
(
    defaultPrim = "BlendMeshRoot"
    doc = "Blender v4.5.3 LTS"
    endTimeCode = 48
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Z"
)

def Xform "BlendMeshRoot" (
    customData = {
        dictionary Blender = {
            bool generated = 1
        }
    }
)
{
    def SkelRoot "Mesh"
    {
        custom string userProperties:blender:object_name = "Mesh"
        float3 xformOp:rotateXYZ = (89.99999, -0, 0)
        float3 xformOp:scale = (0.009999999, 0.01, 0.01)
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]

        def Mesh "Mesh" (
            active = true
            prepend apiSchemas = ["MaterialBindingAPI", "SkelBindingAPI"]
        )
        {
            uniform bool doubleSided = 1
            float3[] extent = [(25.091871, -34.121277, -13.298501), (299.94482, 245.10088, 202.35126)]
            int[] faceVertexCounts = [3, 3, ...
            int[] faceVertexIndices = [0, 10293, ...
            rel material:binding = </BlendMeshRoot/_materials/MeshSequence_Default>
            normal3f[] normals = [(-0.3632836, -0.9102419, -0.19870725), ...
            point3f[] points = [(244.41148, 155.42062, 70.454926), ...
            float3[] primvars:node_displacement = [(93.54703, 110.9341, 48.37992), ...
            float3[] primvars:Normals = [(-0.0050530406, -0.9910114, -0.13368203), ...
            int[] primvars:skel:jointIndices = [0, 0, 0, 0, 0, ...
            float[] primvars:skel:jointWeights = [1, 1, 1, 1, 1, ...
            uniform token[] skel:blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
            rel skel:blendShapeTargets = [
                </BlendMeshRoot/Mesh/Mesh/frame_0000>,
                ...
                </BlendMeshRoot/Mesh/Mesh/frame_0005>,
            ]
            prepend rel skel:skeleton = </BlendMeshRoot/Mesh/Skel>
            uniform token subdivisionScheme = "none"
            custom string userProperties:blender:data_name = "Mesh"
            custom float userProperties:originalTime
            float userProperties:originalTime.timeSamples = {
                0: 0,
            }

            def BlendShape "frame_0000"
            {
                uniform vector3f[] offsets = [(0, 0, 0), (0, 0, 0), ...
                uniform int[] pointIndices = [0, 1, 2, ...
            }
            ...
            #### BlendShape frames up to 0005
            ...
        }

        def Skeleton "Skel" (
            prepend apiSchemas = ["SkelBindingAPI"]
        )
        {
            uniform matrix4d[] bindTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            uniform token[] joints = ["joint1"]
            uniform matrix4d[] restTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            prepend rel skel:animationSource = </BlendMeshRoot/Mesh/Skel/Anim>

            def SkelAnimation "Anim"
            {
                uniform token[] blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
                float[] blendShapeWeights.timeSamples = {
                    0: [1, 0, 0, 0, 0, 0],
                    1: [0.9697085, 0.03029152, 0, 0, 0, 0],
                    2: [0.88787615, 0.11212383, 0, 0, 0, 0],
                    ...
                    46: [0, 0, 0, 0, 0.11212379, 0.8878762],
                    47: [0, 0, 0, 0, 0.030291557, 0.96970844],
                    48: [0, 0, 0, 0, 0, 1],
                }
            }
        }
    }

    def Scope "_materials"
    {
        def Material "MeshSequence_Default"
        {
            token outputs:surface.connect = </BlendMeshRoot/_materials/MeshSequence_Default/Principled_BSDF.outputs:surface>
            custom string userProperties:blender:data_name = "MeshSequence_Default"

            def Shader "Principled_BSDF"
            {
                uniform token info:id = "UsdPreviewSurface"
                float inputs:clearcoat = 0
                float inputs:clearcoatRoughness = 0.03
                color3f inputs:diffuseColor = (0.8, 0.4, 0.3)
                float inputs:ior = 1.5
                float inputs:metallic = 0
                float inputs:opacity = 1
                float inputs:roughness = 0.5
                float inputs:specular = 0.2
                token outputs:surface
            }
        }
    }

    def Scope "AnimationClips"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
    }

    def RealityKitComponent "AnimationLibrary"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
        custom token info:id = "RealityKit.AnimationLibrary"
        custom double realitykit:approximateDuration = 2
        custom double[] realitykit:clipDurations = [2]
        custom string[] realitykit:clipNames = ["Anim"]
        custom rel realitykit:clipTargets = </BlendMeshRoot/Mesh/Skel/Anim>
        custom double realitykit:frameRate = 24
        custom bool realitykit:isAnimationLibrary = 1
    }
}
2
1
414
Oct ’25
Misaligned visionOS Simulator Home Position
Using Xcode 26 beta 6 on macOS 26 beta (25A5349a). When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room as I normally would be. This occurred after moving around a lot in the space to find an element added to an ImmersiveSpace. How to resolve: restart the simulator device. See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
2
0
884
Sep ’25
How to configure Spatial Audio on a Video Material? Compile error
I've tried following Apple's documentation to apply a video material to a model entity, but I encountered a compile error while attempting to specify the spatial audio type. It is a 360 video on a sphere, which plays just fine, but the audio is too quiet compared to the volume I get when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, but it gives me a compile error: "'audioInputMode' is unavailable in visionOS; 'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)". https://developer.apple.com/documentation/realitykit/videomaterial/

Code:

let player = AVPlayer(url: url)

// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)

// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won't compile

visionOS 2.4, Xcode 16.4 (also tried Xcode 26 beta 2). The videos are HEVC in MPEG-4 containers. Is there any other way to do this, or is a workaround available? Thank you.
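While audioInputMode is unavailable on visionOS, one hedged avenue (not a confirmed fix for the low volume) is to configure gain and spatialization on the AVFoundation side of the same pipeline:

import AVFoundation
import RealityKit

// `url` is the video URL from the post above.
let item = AVPlayerItem(url: url)
// Opt the item into multichannel spatialization where the system supports it.
item.allowedAudioSpatializationFormats = .monoStereoAndMultichannel

let player = AVPlayer(playerItem: item)
player.volume = 1.0 // AVPlayer gain, independent of the system volume

let material = VideoMaterial(avPlayer: player)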
2
0
241
Jul ’25
RealityKit fullscreen layer
Hi! I'm currently trying to render another XR scene in front of a RealityKit one. At the moment, I'm anchoring a plane to the head and using a shader to display side-by-side images for the left/right eyes. By default, the camera has a near plane, so I can't directly draw at z = 0. Is there a way to change the camera's near plane? Or maybe there is a better solution for overlaying an image/texture per eye? Ideally, I would layer some kind of CompositorLayer on top of RealityKit, but as far as I know that's sadly not possible. Thanks in advance and have a good day!
2
0
317
Jul ’25
How to get the floor plane with Spatial Tracking Session and Anchor Entity
In the WWDC session titled "Deep dive into volumes and immersive spaces", the developers discussed adding a SpatialTrackingSession and an AnchorEntity to detect the floor, then glossed over some important details. They added a spatial tap gesture to let the user place content relative to the floor anchor, but they left out a lot of information.

.gesture(
    SpatialTapGesture(coordinateSpace: .immersiveSpace)
        .targetedToAnyEntity()
        .onEnded { value in
            handleTapOnFloor(value: value)
        }
)

My understanding is that an entity has to have input and collision components for gestures like this to work. How can we add a collision to an AnchorEntity when we don't know its size or shape? I've been trying for days to understand what is happening here, and I just don't get it. It is even more frustrating that the example project Apple released does not contain any of these features. I would like to be able to:

Detect the floor plane
Get the position/transform of the floor plane
Add a collider to the floor plane
Enable collisions and physics on the floor plane
Enable gestures on the floor plane

It seems to me that the AnchorEntity is placed at an entirely arbitrary position. It has absolutely no relationship to the rectangle with the floor label that I can see in the Xcode visualization; it is just a point, not a plane or rect that I can use. I've tried manually calculating the collision shape after the anchor is detected, but nothing I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do anything at all with this floor anchor other than place an entity at a totally arbitrary location somewhere on the floor. Is there any way at all, with SpatialTrackingSession and AnchorEntity, to get the actual plane that was detected?

struct FloorExample: View {
    @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
    @State var subject: Entity?
    @State var floor: AnchorEntity?

    var body: some View {
        RealityView { content, attachments in
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)
            self.trackingSession = session

            let floorAnchor = AnchorEntity(
                .plane(.horizontal, classification: .floor,
                       minimumBounds: SIMD2(x: 0.1, y: 0.1)))
            floorAnchor.anchoring.physicsSimulation = .none
            floorAnchor.name = "FloorAnchorEntity"
            floorAnchor.components.set(InputTargetComponent())
            floorAnchor.components.set(CollisionComponent(shapes: .init()))
            content.add(floorAnchor)
            self.floor = floorAnchor

            // This is just here to let me see where visionOS decided to "place" the floor anchor.
            let floorPlaced = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .black, isMetallic: false)])
            floorAnchor.addChild(floorPlaced)

            if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                content.add(scene)

                if let subject = scene.findEntity(named: "StepSphereRed") {
                    self.subject = subject
                }

                // I can see when the anchor is added.
                _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                    event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                    print("**anchor changed** \(event)")
                    print("**anchor** \(event.anchor)")
                }

                // Place the reset button near the user.
                if let panel = attachments.entity(for: "Panel") {
                    panel.position = [0, 1, -0.5]
                    content.add(panel)
                }
            }
        } update: { content, attachments in
        } attachments: {
            Attachment(id: "Panel") {
                Button(action: {
                    print("**button pressed**")
                    if let subject = self.subject {
                        subject.position = [-0.5, 1.5, -1.5]
                        // Remove the physics body and assign a new one - hack to remove momentum.
                        if let physics = subject.components[PhysicsBodyComponent.self] {
                            subject.components.remove(PhysicsBodyComponent.self)
                            subject.components.set(physics)
                        }
                    }
                }, label: {
                    Text("Reset Sphere")
                })
            }
        }
    }
}
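Since AnchorEntity(.plane) only surfaces a point, a hedged alternative sketch using ARKit's PlaneDetectionProvider, which does expose the detected plane's extent and classification (update/removal bookkeeping elided; exact geometry property names per the visionOS ARKit API as I understand it):

import ARKit
import RealityKit

let arSession = ARKitSession()
let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])

func trackFloor(into root: Entity) async throws {
    try await arSession.run([planeDetection])
    for await update in planeDetection.anchorUpdates {
        let anchor = update.anchor
        guard anchor.classification == .floor, update.event == .added else { continue }

        let floor = Entity()
        floor.name = "Floor"
        floor.transform = Transform(matrix: anchor.originFromAnchorTransform)

        // Thin collision box sized to the detected extent. For exact placement,
        // the extent's own anchorFromExtentTransform should also be applied.
        let extent = anchor.geometry.extent
        let shape = ShapeResource.generateBox(
            width: extent.width, height: 0.001, depth: extent.height)
        floor.components.set(CollisionComponent(shapes: [shape], isStatic: true))
        floor.components.set(InputTargetComponent())
        root.addChild(floor)
    }
}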
2
0
853
Jan ’25