Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

[WebXR] Support for AR module in visionOS 2.x
Thank you again for pushing the web forward in visionOS 2, super exciting! The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR, but there was no mention of passthrough AR experiences. Samples such as this one are not supported: https://immersive-web.github.io/webxr-samples/immersive-ar-session.html In Settings > Safari, there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything. Is this the expected behavior at this time? Are there any developer previews we could try?
Replies: 10 · Boosts: 6 · Views: 2.4k · Activity: 3w
How to get multiple animations into USDZ
Most models are only available as GLB or FBX, so I usually re-export them to USDZ using Blender. When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one default subtree animation. In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation. In fact, when I open the nonlinear animation view in Blender and select a different animation as the default, the exported USDZ shows the newly selected animation as the default subtree animation. I can see in the Apple sample apps that models can have multiple animations in their Animation Library. I'm using the latest Blender 4.5, so shouldn't the USDZ exporter be handling this correctly?
Replies: 3 · Boosts: 1 · Views: 548 · Activity: Oct ’25
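One way to sanity-check the export described above is to load the USDZ with RealityKit and print what actually landed in its animation library; if Blender only wrote one clip, only one will show up here. A minimal sketch, assuming a placeholder resource name ("ExportedModel") rather than any file from the post:

import RealityKit

// Minimal sketch: list the clips RealityKit sees in a bundled USDZ and loop the first one.
// "ExportedModel" is a hypothetical resource name.
func inspectAnimations() async throws {
    let entity = try await Entity(named: "ExportedModel")

    // Each element corresponds to an entry in Reality Composer Pro's Animation Library.
    for animation in entity.availableAnimations {
        print("Clip:", animation.name ?? "unnamed")
    }

    // Play the first clip on a loop, if the exporter produced one.
    if let clip = entity.availableAnimations.first {
        _ = entity.playAnimation(clip.repeat(), transitionDuration: 0.25)
    }
}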
Reality Composer Project and Xcode 16
After upgrading to Xcode 16, my app, which uses project files imported from my iPad's Reality Composer app, now has two issues that I have found so far. I am using an ARView as a UIViewRepresentable with SwiftUI. (Prior to upgrading to Xcode 16, everything worked well.)

First, there are now several duplicate rcp_export.usdz resources in the "Copy Bundle Resources" build phase. Even though each file is in a separate folder with a unique UUID, this caused a compile error saying there are duplicate files. I was able to open the RC project folder and delete the older rcp_project versions, which now allows the app to compile. I mention it because it may or may not be related to the second issue.

Second, Xcode isn't generating the project code for the .rcproject, so when I call the RCProject.loadSceneAsync function I get an error that says "Cannot find 'RCProject' in scope".
Replies: 5 · Boosts: 6 · Views: 951 · Activity: Jan ’25
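A possible stopgap while the RCProject code generation is broken, sketched below under the assumption that the bundled asset is the rcp_export.usdz mentioned above: load the exported scene directly with RealityKit's generic loader. This only brings in the scene content; behaviors that the generated RCProject API exposed are not restored this way, so it is a workaround sketch rather than a fix.

import Foundation
import RealityKit

// Hedged workaround sketch: load the exported USDZ directly instead of the
// generated RCProject wrapper. The resource name is assumed from the post above.
func loadExportedScene() async -> Entity? {
    guard let url = Bundle.main.url(forResource: "rcp_export", withExtension: "usdz") else {
        return nil
    }
    return try? await Entity(contentsOf: url)
}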
WebXR and PSVR2
We got very excited when we saw support for the PSVR2 at WWDC! WebXR is particularly interesting to us, so we got the controllers to give it a try. Unfortunately, they only register as gamepads in the navigator, not as XRInputDevices. We went through the experimental flags and didn't find anything that is directly related. Is there a flag we missed? If not, when do you have PSVR2 support planned for WebXR?
Replies: 1 · Boosts: 5 · Views: 219 · Activity: Jun ’25
How do we use the new Unified Coordinate Conversion features in visionOS 26?
The landing page for visionOS 26 mentions:

"The Unified Coordinate Conversion API makes moving views and entities between scenes straightforward — even between views and ARKit accessory anchors."

This WWDC session very briefly shows a single example of using this, but with no context. For example, they discuss a way to tell the distance between a Model3D and an entity in a RealityView, but they don't provide any details for how they are referencing the entity (bolts in the slide). The session used the BOT-anist example project that we saw in visionOS 2, but the version in the Sample Code library has not been updated with these examples.

I was able to put together a simple example where we can get the position of a window relative to the world origin. It even updates when the user recenters.

struct Lab080: View {
    @State private var posX: Float = 0
    @State private var posY: Float = 0
    @State private var posZ: Float = 0

    var body: some View {
        GeometryReader3D { geometry in
            VStack {
                Text("Unified Coordinate Conversion")
                    .font(.largeTitle)
                    .padding(24)
                VStack {
                    Text("X: \(posX)")
                    Text("Y: \(posY)")
                    Text("Z: \(posZ)")
                }
                .font(.title)
                .padding(24)
            }
            .onGeometryChange3D(for: Point3D.self) { proxy in
                try! proxy
                    .coordinateSpace3D()
                    .convert(value: Point3D.zero, to: .worldReference)
            } action: { old, new in
                posX = Float(new.x)
                posY = Float(new.y)
                posZ = Float(new.z)
            }
        }
    }
}

This is all that I've been able to figure out so far. What other features are included in this new Unified Coordinate Conversion? Can we use this to get the position of one window relative to another? Can we use this to get the position of a view in a window relative to an entity in a RealityView, for example in a Volume or Immersive Space? What else can Unified Coordinate Conversion do? Are there documentation pages that I'm missing? I'm not sure what to search for. Are there any sample projects that use these features? Any additional information would be very helpful.
Replies: 2 · Boosts: 5 · Views: 1.4k · Activity: Sep ’25
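On the "position of one window relative to another" question above: one possible pattern, building directly on Lab080, is to have each window convert its own origin into .worldReference with onGeometryChange3D and publish that Point3D into shared app state; the relative placement is then plain arithmetic on the two points. The helpers below are a hedged sketch of that last step only (the function names are made up for illustration), not a documented part of the Unified Coordinate Conversion API.

import Spatial

// Hypothetical helpers: given each window's origin already converted into the
// .worldReference space (the value Lab080 computes above), derive the offset
// and straight-line distance between the two windows.
func offset(from a: Point3D, to b: Point3D) -> SIMD3<Double> {
    SIMD3(b.x - a.x, b.y - a.y, b.z - a.z)
}

func distance(from a: Point3D, to b: Point3D) -> Double {
    let d = offset(from: a, to: b)
    return (d * d).sum().squareRoot()
}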
Entities moved with Manipulation Component in visionOS Beta 4 are clipped by volume bounds
In Betas 1, 2, and 3, we could pick up and inspect entities, bringing them closer while moving them outside the bounds of a volume. As of Beta 4, these entities are now clipped by the bounds of the volume. I'm not sure if this is a bug or an intended change, but I filed a Feedback report (FB19005083). The release notes don't mention a change in behavior, at least not that I can find. Is this an intentional change or a bug? Here is a video that shows the issue: https://youtu.be/ajBAaSxLL2Y In previous versions of visionOS 26, I could move these entities out of the volume and inspect them close up. Releasing would return them to the volume. Now they are clipped as soon as they reach the edge of the volume. I haven't had a chance to test with windows or with the SwiftUI modifier version of manipulation.
Replies: 1 · Boosts: 4 · Views: 373 · Activity: Jul ’25
ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input never translates the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?

Environment: visionOS 26 Beta 2, built with macOS 26 Beta 2 and Xcode 26 Beta 2.

Attached is reproducible sample code; I have tried this in other projects with the same results.

var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
Replies: 11 · Boosts: 3 · Views: 962 · Activity: Oct ’25
What's in Reality Composer Pro 26
That title would have made a great WWDC session. Unfortunately, it seems like nothing is new in Reality Composer Pro this year. I've noticed that all versions of the Xcode beta this summer have shipped with Reality Composer Pro version 2.0. There have been slight bumps in the build number. I haven't found any new features or seen any documentation to indicate that anything has changed. So the question is: what is the state of Reality Composer Pro? Should we continue to use this tool or start doing everything in code? A huge number of sample projects use Reality Composer Pro, so it seems like Apple is still using it even if they didn't update it this year.
Replies: 1 · Boosts: 3 · Views: 438 · Activity: Aug ’25
TextComponent on iOS/macOS pixelated when viewed from short distance
Hello, I've been tinkering a bit with TextComponent. Based on the docs, it seems like this component should always render sharp text, no matter how close the user gets:

"RealityKit dynamically adjusts the backing size to a value that results in high-fidelity text at its current location."

And it does on visionOS, but on iOS and macOS the text gets pixelated when I get close to it, as if it's just rendered once as a plain image texture. Can anyone tell me whether this is expected behavior or a bug? Here are two screenshots for comparison (iPhone and Vision Pro). Thanks!
Replies: 2 · Boosts: 2 · Views: 101 · Activity: Aug ’25
visionOS – Starting GroupActivity FaceTime Call dismisses Immersive Space
Hello, I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual; then, when I start a new FaceTime call, my ImmersiveSpace gets dismissed. This only happens when the app is distributed via TestFlight; when I run it locally, the ImmersiveSpace stays active as expected. Looking at the console on my Mac, I found this log:

Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050

Does anyone have an idea how I can fix this? It's driving me nuts, and I've wasted over a day looking for a workaround but have so far been unsuccessful. Thanks!
Replies: 6 · Boosts: 1 · Views: 886 · Activity: 4w
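For reference, a minimal sketch of the activation flow the post above describes, with a hypothetical activity type. It only illustrates the setup around activity.activate(); it is not a fix for the TestFlight-only ImmersiveSpace dismissal.

import GroupActivities

// Hypothetical GroupActivity used only to illustrate the activation flow.
struct InspectTogether: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Inspect Together"
        meta.type = .generic
        return meta
    }
}

func startSharePlay() async {
    let activity = InspectTogether()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // Presents the system SharePlay sheet; a FaceTime call may start here.
        _ = try? await activity.activate()
    default:
        break
    }
}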
Barcode Detection Enterprise API
I am attempting to use the Barcode Detection enterprise API. I have the necessary entitlements and license file. I'm following the sample code online, and whenever I attempt to run the barcode detection using arKitSession.run I get the following error message:

ar_barcode_detection_provider_t <0x300d82130>: Failed to run provider with transient error code: 1

It obviously isn't running the barcode detection, even though it's running in an immersive space in mixed mode. Any idea what might be going on?
Replies: 18 · Boosts: 3 · Views: 1.6k · Activity: Dec ’24
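For comparison, a heavily hedged sketch of the enterprise barcode-detection flow, assuming the BarcodeDetectionProvider API and symbology values shown in Apple's enterprise API material (the exact initializer is an assumption here); checking isSupported before running can help narrow down whether the entitlement and license file are actually being picked up at runtime.

import ARKit

// Hedged sketch, based on an assumed BarcodeDetectionProvider(symbologies:) initializer.
func runBarcodeDetection() async {
    guard BarcodeDetectionProvider.isSupported else {
        print("Barcode detection not supported here; check the entitlement and license file.")
        return
    }
    let session = ARKitSession()
    let barcodeDetection = BarcodeDetectionProvider(symbologies: [.qr, .code128])
    do {
        try await session.run([barcodeDetection])
        for await update in barcodeDetection.anchorUpdates {
            if case .added = update.event {
                print("Detected barcode anchor:", update.anchor.id)
            }
        }
    } catch {
        print("Failed to run barcode detection:", error)
    }
}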
Can we access a "Locked in Place" value when a window has been locked without being snapped to a surface?
Starting in visionOS 26, users can snap windows to surfaces. These windows are locked in place and are later restored by visionOS. We can access the snapping data via surfaceSnappingInfo (see the docs). Users can also lock a free-floating (unsnapped) window from a context menu in the window controls. Is there a way to tell when a window has been locked without being snapped to a surface?
Replies: 7 · Boosts: 3 · Views: 170 · Activity: Jun ’25
Developer Strap Gen 2 - Only USB2 Speeds
I am testing the Gen 2 developer strap on my Vision Pro (M2), and I have only been able to get USB 2 speeds when connecting it to my M3 Max MacBook Pro. I used the official Apple Thunderbolt 4 cable, which does get Thunderbolt speeds with my T7 Touch drive. Has anyone figured out a solution for this issue? The Gen 2 developer strap does advertise 20 Gb/s speeds.
Replies: 5 · Boosts: 2 · Views: 1.2k · Activity: 1w
Where can we access the new enterprise license files mentioned in the WWDC session?
Hi everyone, I’m trying to verify something mentioned in the WWDC session “Explore enhancements to your spatial business app.” At timestamp 3:36, the presenter states: “You can now access your enterprise license files directly within your Apple Developer account.” I’ve checked every section of my Developer account, including:
• Membership and Agreements
• Certificates, Identifiers & Profiles
• App Store Connect
• Additional Resources
• Account settings
…but no UI or section exposes these enterprise license files. Since the Vision Entitlement Services framework actively checks these licenses (for example, mainCameraAccess entitlement approval), I need to confirm the location of the new license file. Could someone from Apple, or anyone who has seen this feature, clarify:
1. Where exactly these enterprise license files appear in the Developer account UI, or
2. Whether this feature has not rolled out yet?
Any guidance or screenshots from those who have access would be invaluable. Thanks,
Replies: 1 · Boosts: 0 · Views: 201 · Activity: 1w
Multiple-frames BlendShape (failed) Animation in Reality Composer Pro
Goal: To render, in an Apple Vision Pro app, the solid-mechanics 3D simulation results coming from an FEA code.

Starting point: I have surface VTKs with deformations on each node. Each time step has a mesh with the nodal coordinates. This is straightforwardly translatable to a USD MeshSequence. Unfortunately, the results cannot be simplified to a scaling or linear transformation as you would do with other game-oriented animations.

Tools: Right now, I am using Xcode and Reality Composer Pro (RCP) to build the scenes.

Technical limitations: I am aware that RCP can do animations with BlendMesh and skeletons and that MeshSequence is not a problem.

Progress: Converting the sequence of VTK meshes to a USD MeshSequence is straightforward. This animates correctly in Preview and Blender (see screenshot). I managed to convert from MeshSequence to multiple keys and BlendMesh. This also animates correctly in Blender and Preview. Unfortunately, the BlendMesh of multiple blended meshes shows a zero animation time in RCP (see screenshot below). Also, see the USDA file outline for the animation below. Of course, I am not showing full vectors such as faceVertexCounts, faceVertexIndices, normals.

Question: What is the right setup to create a BlendMesh animation that RCP will correctly import and animate, from a set of meshes or multiple key shapes?

(Screenshots: Blender animation; time-zero RCP "animations")

#usda 1.0
(
    defaultPrim = "BlendMeshRoot"
    doc = "Blender v4.5.3 LTS"
    endTimeCode = 48
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Z"
)

def Xform "BlendMeshRoot" (
    customData = {
        dictionary Blender = {
            bool generated = 1
        }
    }
)
{
    def SkelRoot "Mesh"
    {
        custom string userProperties:blender:object_name = "Mesh"
        float3 xformOp:rotateXYZ = (89.99999, -0, 0)
        float3 xformOp:scale = (0.009999999, 0.01, 0.01)
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]

        def Mesh "Mesh" (
            active = true
            prepend apiSchemas = ["MaterialBindingAPI", "SkelBindingAPI"]
        )
        {
            uniform bool doubleSided = 1
            float3[] extent = [(25.091871, -34.121277, -13.298501), (299.94482, 245.10088, 202.35126)]
            int[] faceVertexCounts = [3, 3, ...
            int[] faceVertexIndices = [0, 10293, ...
            rel material:binding = </BlendMeshRoot/_materials/MeshSequence_Default>
            normal3f[] normals = [(-0.3632836, -0.9102419, -0.19870725), ....
            point3f[] points = [(244.41148, 155.42062, 70.454926), .....
            float3[] primvars:node_displacement = [(93.54703, 110.9341, 48.37992) ....
            float3[] primvars:Normals = [(-0.0050530406, -0.9910114, -0.13368203), ...
            int[] primvars:skel:jointIndices = [0, 0, 0, 0, 0 ...
            float[] primvars:skel:jointWeights = [1, 1, 1, 1, 1 ...
            uniform token[] skel:blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
            rel skel:blendShapeTargets = [
                </BlendMeshRoot/Mesh/Mesh/frame_0000>,
                .......
                </BlendMeshRoot/Mesh/Mesh/frame_0005>,
            ]
            prepend rel skel:skeleton = </BlendMeshRoot/Mesh/Skel>
            uniform token subdivisionScheme = "none"
            custom string userProperties:blender:data_name = "Mesh"
            custom float userProperties:originalTime
            float userProperties:originalTime.timeSamples = {
                0: 0,
            }

            def BlendShape "frame_0000"
            {
                uniform vector3f[] offsets = [(0, 0, 0), (0, 0, 0), .....
                uniform int[] pointIndices = [0, 1, 2, .....
            }
            .....
            ..... #### BlendShape frames to 0005
            .....
        def Skeleton "Skel" (
            prepend apiSchemas = ["SkelBindingAPI"]
        )
        {
            uniform matrix4d[] bindTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            uniform token[] joints = ["joint1"]
            uniform matrix4d[] restTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            prepend rel skel:animationSource = </BlendMeshRoot/Mesh/Skel/Anim>

            def SkelAnimation "Anim"
            {
                uniform token[] blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
                float[] blendShapeWeights.timeSamples = {
                    0: [1, 0, 0, 0, 0, 0],
                    1: [0.9697085, 0.03029152, 0, 0, 0, 0],
                    2: [0.88787615, 0.11212383, 0, 0, 0, 0],
                    .....
                    46: [0, 0, 0, 0, 0.11212379, 0.8878762],
                    47: [0, 0, 0, 0, 0.030291557, 0.96970844],
                    48: [0, 0, 0, 0, 0, 1],
                }
            }
        }
    }

    def Scope "_materials"
    {
        def Material "MeshSequence_Default"
        {
            token outputs:surface.connect = </BlendMeshRoot/_materials/MeshSequence_Default/Principled_BSDF.outputs:surface>
            custom string userProperties:blender:data_name = "MeshSequence_Default"

            def Shader "Principled_BSDF"
            {
                uniform token info:id = "UsdPreviewSurface"
                float inputs:clearcoat = 0
                float inputs:clearcoatRoughness = 0.03
                color3f inputs:diffuseColor = (0.8, 0.4, 0.3)
                float inputs:ior = 1.5
                float inputs:metallic = 0
                float inputs:opacity = 1
                float inputs:roughness = 0.5
                float inputs:specular = 0.2
                token outputs:surface
            }
        }
    }

    def Scope "AnimationClips"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
    }

    def RealityKitComponent "AnimationLibrary"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
        custom token info:id = "RealityKit.AnimationLibrary"
        custom double realitykit:approximateDuration = 2
        custom double[] realitykit:clipDurations = [2]
        custom string[] realitykit:clipNames = ["Anim"]
        custom rel realitykit:clipTargets = </BlendMeshRoot/Mesh/Skel/Anim>
        custom double realitykit:frameRate = 24
        custom bool realitykit:isAnimationLibrary = 1
    }
}
Replies: 2 · Boosts: 1 · Views: 351 · Activity: Oct ’25
How do we author a "reality file" like the ones on Apple's Gallery?
How do we author a Reality File like the ones under Examples with animations at https://developer.apple.com/augmented-reality/quick-look/ ? For example, "The Hab": https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer, and I don't see any way to export/compile to a "reality file" from within Xcode.
How can I use multiple animations within a single GLTF file?
How can I set up multiple "tap targets" on a single object, where each one triggers a different action?
How do we author something similar? What tools do we use? Thanks
Replies: 6 · Boosts: 2 · Views: 1.9k · Activity: Nov ’24
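On the "multiple tap targets, each triggering a different action" part of the post above: Reality Composer (the iOS app) is the usual tool for authoring .reality behaviors, but if building in code is an option, per-entity tap actions can be wired up in RealityKit roughly as sketched below. This is RealityKit app code, not a Quick Look .reality authoring workflow, and the entity names and actions are made up for illustration.

import SwiftUI
import RealityKit

// Rough sketch: several tappable entities in one RealityView, each with its own action.
struct TapTargetsView: View {
    var body: some View {
        RealityView { content in
            for (index, item) in [("hatch", SimpleMaterial.Color.red), ("antenna", .blue)].enumerated() {
                let (name, color) = item
                let entity = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: color, isMetallic: false)])
                entity.name = name
                entity.position = [Float(index) * 0.3 - 0.15, 1.0, -1.0]
                // Collision + input target make the entity hittable by spatial gestures.
                entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))
                entity.components.set(InputTargetComponent())
                content.add(entity)
            }
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Branch on which entity was tapped and trigger a different action per target.
                    switch value.entity.name {
                    case "hatch":
                        print("Play the hatch-opening animation here.")
                    case "antenna":
                        print("Play the antenna-deploy animation here.")
                    default:
                        break
                    }
                }
        )
    }
}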
Can not remove final World Anchor
I’ve been having some issues removing anchors. I can add anchors with no issue. They will be there the next time I run the scene. I can also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I’m using is to call removeAnchor() on the data provider.

worldTracking.removeAnchor(forID: uuid) // Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`

This works if there is more than one anchor in a scene. When I’m down to one remaining anchor, I can remove it. It seems to succeed (it does not raise an error), but the next time I run the scene the removed anchor is back. This only happens when there is only one remaining anchor.

do {
    // This always runs, but it doesn't seem to "save" the removal when there is only one anchor left.
    try await worldTracking.removeAnchor(forID: uuid)
} catch {
    // I have never seen this block fire!
    print("Failed to remove world anchor \(uuid) with error: \(error).")
}

I posted a video on my website if you want to see it happening: https://stepinto.vision/labs/lab-051-issues-with-world-tracking/

Here is the full code. Can you see if I’m doing something wrong? Is this a bug?

struct Lab051: View {
    @State var session = ARKitSession()
    @State var worldTracking = WorldTrackingProvider()
    @State var worldAnchorEntities: [UUID: Entity] = [:]
    @State var placement = Entity()
    @State var subject: ModelEntity = {
        let subject = ModelEntity(
            mesh: .generateSphere(radius: 0.06),
            materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
        subject.setPosition([0, 0, 0], relativeTo: nil)
        let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
        let input = InputTargetComponent()
        subject.components.set([collision, input])
        return subject
    }()

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
            content.add(scene)
            if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                placement = placementEntity
            }
        } update: { content in
            for (_, entity) in worldAnchorEntities {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .modifier(DragGestureImproved())
        .gesture(tapGesture)
        .task {
            try! await setupAndRunWorldTracking()
        }
    }

    var tapGesture: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                if value.entity.name == "PlacementPreview" {
                    // If we tapped the placement preview cube, create an anchor
                    Task {
                        let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                        try await worldTracking.addAnchor(anchor)
                    }
                } else {
                    Task {
                        // Get the UUID we stored on the entity
                        let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                        do {
                            try await worldTracking.removeAnchor(forID: uuid)
                        } catch {
                            print("Failed to remove world anchor \(uuid) with error: \(error).")
                        }
                    }
                }
            }
    }

    func setupAndRunWorldTracking() async throws {
        if WorldTrackingProvider.isSupported {
            do {
                try await session.run([worldTracking])
                for await update in worldTracking.anchorUpdates {
                    switch update.event {
                    case .added:
                        let subjectClone = subject.clone(recursive: true)
                        subjectClone.isEnabled = true
                        subjectClone.name = update.anchor.id.uuidString
                        subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        worldAnchorEntities[update.anchor.id] = subjectClone
                        print("🟢 Anchor added \(update.anchor.id)")
                    case .updated:
                        guard let entity = worldAnchorEntities[update.anchor.id] else {
                            print("No entity found to update for anchor \(update.anchor.id)")
                            return
                        }
                        entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        print("🔵 Anchor updated \(update.anchor.id)")
                    case .removed:
                        worldAnchorEntities[update.anchor.id]?.removeFromParent()
                        worldAnchorEntities.removeValue(forKey: update.anchor.id)
                        print("🔴 Anchor removed \(update.anchor.id)")
                        if let remainingAnchors = await worldTracking.allAnchors {
                            print("Remaining Anchors: \(remainingAnchors.count)")
                        }
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }
}
Replies: 1 · Boosts: 2 · Views: 171 · Activity: May ’25
Accessing LiDAR Depth Data and Scene Reconstruction on Apple Vision Pro
Hello, I'm developing a visionOS application for Apple Vision Pro that aims to scan unknown physical objects, capture their 3D data (such as meshes or point clouds), and export them as 3D models. Ideally, I'd also like to visualize these reconstructions in real time within the headset. This functionality is similar to what's available in Reality Composer on iPad and iPhone, but I'm seeking to implement it natively on Vision Pro. I've reviewed the visionOS documentation but haven't found clear guidance on accessing LiDAR depth data or performing scene reconstruction. Specifically, I'm interested in:
1. Accessing LiDAR or depth data from Vision Pro's sensors.
2. Utilizing ARKit's scene reconstruction capabilities on visionOS.
3. Exporting captured 3D data as models (e.g., USDZ or OBJ formats).
Are there APIs or frameworks in visionOS that support these features?
Replies: 0 · Boosts: 2 · Views: 113 · Activity: May ’25
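On points 1 and 2 of the post above: as far as I know, visionOS does not expose raw LiDAR depth frames to third-party apps, but ARKit's SceneReconstructionProvider does deliver the reconstructed scene mesh. A minimal sketch of receiving mesh anchors follows; it requires a running ImmersiveSpace and the world-sensing permission, and exporting to USDZ/OBJ would be a separate step built on the returned geometry.

import ARKit

// Minimal sketch: receive scene-reconstruction mesh anchors on visionOS.
func runSceneReconstruction() async {
    guard SceneReconstructionProvider.isSupported else { return }
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()
    do {
        try await session.run([sceneReconstruction])
        for await update in sceneReconstruction.anchorUpdates {
            // update.anchor.geometry exposes vertex and face buffers that could be
            // accumulated and later written out as a 3D model.
            print("Mesh anchor \(update.event): \(update.anchor.id)")
        }
    } catch {
        print("Failed to run scene reconstruction:", error)
    }
}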