Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Possible to detect multiple images at the same time on VisionPro?
I'm working on a project that uses ImageTrackingProvider through ARKit on Vision Pro, and I want to detect multiple images (about 5) and show info for each of them at the same time. However, it seems that the device only detects one image at a time, and the maximumNumberOfTrackedImages API that controls this appears to be available only on iOS, not visionOS. Does anyone know a possible way to detect multiple images at the same time on Vision Pro?
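For reference, a minimal sketch of how several reference images can be registered with a single ImageTrackingProvider on visionOS; how the images are loaded and whether the system actually tracks more than one at a time is left open, since that is exactly the question:

    import ARKit

    // Minimal sketch: register several reference images with one provider and
    // observe their anchors. Simultaneous tracking is up to the system.
    func runImageTracking(referenceImages: [ReferenceImage]) async throws {
        let provider = ImageTrackingProvider(referenceImages: referenceImages)
        let session = ARKitSession()
        try await session.run([provider])

        for await update in provider.anchorUpdates {
            switch update.event {
            case .added, .updated:
                print("Image \(update.anchor.id) tracked at \(update.anchor.originFromAnchorTransform)")
            case .removed:
                print("Image \(update.anchor.id) lost")
            }
        }
    }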
0 replies · 0 boosts · 443 views · Oct ’25
Displaying multiple immersive movies in spheres in an immersive environment
In visionOS, I'm trying to create an immersive environment that features several spheres, each showing an immersive movie. I'm starting from sample code that creates a sphere, sets an immersive movie as its material, and opens it as an immersive environment; this works fine. But if I create a sphere in an open immersive environment using Reality Composer Pro and set its material to an immersive movie, I can see the movie on the sphere while I'm outside of it, yet as soon as I move inside the sphere it disappears. What would be the right way of doing this?
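A minimal sketch of the usual workaround when building the sphere in code, assuming the movie plays through an AVPlayer (the radius is illustrative): the sphere is inverted so its surface stays visible from the inside.

    import RealityKit
    import AVFoundation

    // Build a sphere whose inside surface shows a video, so the viewer can stand within it.
    func makeVideoSphere(player: AVPlayer, radius: Float = 3.0) -> ModelEntity {
        let sphere = ModelEntity(
            mesh: .generateSphere(radius: radius),
            materials: [VideoMaterial(avPlayer: player)]
        )
        // Flip the sphere on one axis so its triangles face inward; otherwise
        // back-face culling hides the material once the camera is inside.
        sphere.scale = SIMD3<Float>(x: -1, y: 1, z: 1)
        return sphere
    }

In Reality Composer Pro the equivalent trick is typically the same idea: flip the sphere's scale on one axis or use a material that does not cull back faces. With only outward-facing geometry, stepping inside culls the surface exactly as described.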
1 reply · 1 boost · 671 views · Oct ’25
HoverEffectStyle in visionOS 26.0
This is no longer highlighting my entity when I look at it:

    RealityView { content in
        let hoverComponent = HoverEffectComponent(.spotlight(
            HoverEffectComponent.SpotlightHoverEffectStyle(
                color: .white,
                strength: 2.0
            )
        ))
        entity.components.set(hoverComponent)
    }

The entity is in a window. The same code works in an immersive view. The CollisionComponent and input type are set in Reality Composer Pro. It has also stopped working in my published app (built under visionOS 2.x) on my visionOS 26 device, yet it works in a 2.x simulator. Is this a bug, or is there something I'm missing? Thanks.
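This doesn't explain the 2.x vs 26 difference, but for completeness, a minimal sketch of the component set an entity usually needs for a gaze hover effect (the collision shape is illustrative); attaching all three in code is a quick way to rule out a component that didn't make it over from Reality Composer Pro:

    import RealityKit

    // Attach everything a gaze-driven hover effect typically requires.
    func configureHover(on entity: Entity) {
        // The entity must be an input target and have collision geometry,
        // otherwise gaze never "hits" it and no hover effect is shown.
        entity.components.set(InputTargetComponent())
        entity.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))

        let spotlight = HoverEffectComponent.SpotlightHoverEffectStyle(color: .white, strength: 2.0)
        entity.components.set(HoverEffectComponent(.spotlight(spotlight)))
    }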
2 replies · 0 boosts · 830 views · Oct ’25
ManipulationComponent in both parent and child entities
Hello,

In my project I have attached a ManipulationComponent to Entity A and, as expected, I'm able to interact with it using the built-in gestures. I have another Entity B, a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B: I can still interact with A, but B cannot be interacted with despite both entities having ManipulationComponents. I'm wondering whether I'm doing something wrong, whether this is an issue with ManipulationComponent, or whether it's a limitation of the API. Below is the code used to add the ManipulationComponent to an entity; it was run on both A and B:

    let mc = ManipulationComponent()
    model.components.set(mc)

    var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
    boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
    ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

    if var mc = model.components[ManipulationComponent.self] {
        mc.releaseBehavior = .stay
        mc.dynamics.inertia = .low
        model.components.set(mc)
    }

I am using visionOS 26.0; let me know if any additional information is needed.
1 reply · 0 boosts · 344 views · Oct ’25
Multi-frame BlendShape animation fails in Reality Composer Pro
Goal: To render, in an Apple Vision Pro app, the solid-mechanics 3D simulation results coming from an FEA code.

Starting point: I have surface VTKs with deformations at each node. Each time step has a mesh with the nodal coordinates. This is straightforwardly translatable to a USD MeshSequence. Unfortunately, the results cannot be simplified to a scaling or linear transformation as you would do with other game-oriented animations.

Tools: Right now I am using Xcode and Reality Composer Pro (RCP) to build the scenes.

Technical limitations: I am aware that RCP can do animations with BlendMesh and skeletons, and that MeshSequence is not a problem.

Progress: Converting the sequence of VTK meshes to a USD MeshSequence is straightforward. This animates correctly in Preview and Blender (see screenshot). I managed to convert from MeshSequence to multiple keys and BlendMesh; this also animates correctly in Blender and Preview. Unfortunately, the BlendMesh of multiple blended meshes shows a zero animation time in RCP (see screenshot below). Also see below the usda file scheme for the animation. Of course, I am not showing full vectors such as faceVertexCounts, faceVertexIndices, and normals.

Question: What is the right setup to create a BlendMesh animation that RCP will correctly import and animate, from a set of meshes or multiple key shapes?

Blender animation (screenshot)
Time zero RCP "animations" (screenshot)

#usda 1.0
(
    defaultPrim = "BlendMeshRoot"
    doc = "Blender v4.5.3 LTS"
    endTimeCode = 48
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Z"
)

def Xform "BlendMeshRoot" (
    customData = {
        dictionary Blender = {
            bool generated = 1
        }
    }
)
{
    def SkelRoot "Mesh"
    {
        custom string userProperties:blender:object_name = "Mesh"
        float3 xformOp:rotateXYZ = (89.99999, -0, 0)
        float3 xformOp:scale = (0.009999999, 0.01, 0.01)
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]

        def Mesh "Mesh" (
            active = true
            prepend apiSchemas = ["MaterialBindingAPI", "SkelBindingAPI"]
        )
        {
            uniform bool doubleSided = 1
            float3[] extent = [(25.091871, -34.121277, -13.298501), (299.94482, 245.10088, 202.35126)]
            int[] faceVertexCounts = [3, 3, ...
            int[] faceVertexIndices = [0, 10293, ...
            rel material:binding = </BlendMeshRoot/_materials/MeshSequence_Default>
            normal3f[] normals = [(-0.3632836, -0.9102419, -0.19870725), ....
            point3f[] points = [(244.41148, 155.42062, 70.454926), .....
            float3[] primvars:node_displacement = [(93.54703, 110.9341, 48.37992) ....
            float3[] primvars:Normals = [(-0.0050530406, -0.9910114, -0.13368203), ...
            int[] primvars:skel:jointIndices = [0, 0, 0, 0, 0 ...
            float[] primvars:skel:jointWeights = [1, 1, 1, 1, 1 ...
            uniform token[] skel:blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
            rel skel:blendShapeTargets = [
                </BlendMeshRoot/Mesh/Mesh/frame_0000>,
                .......
                </BlendMeshRoot/Mesh/Mesh/frame_0005>,
            ]
            prepend rel skel:skeleton = </BlendMeshRoot/Mesh/Skel>
            uniform token subdivisionScheme = "none"
            custom string userProperties:blender:data_name = "Mesh"
            custom float userProperties:originalTime
            float userProperties:originalTime.timeSamples = {
                0: 0,
            }

            def BlendShape "frame_0000"
            {
                uniform vector3f[] offsets = [(0, 0, 0), (0, 0, 0), .....
                uniform int[] pointIndices = [0, 1, 2, .....
            }
            .....
            ..... #### BlendShape frames to 0005
            .....
        }
def Skeleton "Skel" ( prepend apiSchemas = ["SkelBindingAPI"] ) { uniform matrix4d[] bindTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )] uniform token[] joints = ["joint1"] uniform matrix4d[] restTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )] prepend rel skel:animationSource = </BlendMeshRoot/Mesh/Skel/Anim> def SkelAnimation "Anim" { uniform token[] blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"] float[] blendShapeWeights.timeSamples = { 0: [1, 0, 0, 0, 0, 0], 1: [0.9697085, 0.03029152, 0, 0, 0, 0], 2: [0.88787615, 0.11212383, 0, 0, 0, 0], ..... 46: [0, 0, 0, 0, 0.11212379, 0.8878762], 47: [0, 0, 0, 0, 0.030291557, 0.96970844], 48: [0, 0, 0, 0, 0, 1], } } } } def Scope "_materials" { def Material "MeshSequence_Default" { token outputs:surface.connect = </BlendMeshRoot/_materials/MeshSequence_Default/Principled_BSDF.outputs:surface> custom string userProperties:blender:data_name = "MeshSequence_Default" def Shader "Principled_BSDF" { uniform token info:id = "UsdPreviewSurface" float inputs:clearcoat = 0 float inputs:clearcoatRoughness = 0.03 color3f inputs:diffuseColor = (0.8, 0.4, 0.3) float inputs:ior = 1.5 float inputs:metallic = 0 float inputs:opacity = 1 float inputs:roughness = 0.5 float inputs:specular = 0.2 token outputs:surface } } } def Scope "AnimationClips" { custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim> } def RealityKitComponent "AnimationLibrary" { custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim> custom token info:id = "RealityKit.AnimationLibrary" custom double realitykit:approximateDuration = 2 custom double[] realitykit:clipDurations = [2] custom string[] realitykit:clipNames = ["Anim"] custom rel realitykit:clipTargets = </BlendMeshRoot/Mesh/Skel/Anim> custom double realitykit:frameRate = 24 custom bool realitykit:isAnimationLibrary = 1 } }
2 replies · 1 boost · 351 views · Sep ’25
Shared/GroupImmersive Space – Query Local Device Transform
Hi, I am in the process of implementing SharePlay in our app. The shared experience opens an immersive space, and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true, so visionOS establishes a shared coordinate space for the immersive space. From the docs: "To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session."

There are cases where we want to position content in front of the user, independent of the shared session and for each user individually. Normally we use the transform retrieved via worldTrackingProvider.queryDeviceAnchor(...).originFromAnchorTransform to position content in front of the user (plus some Z offset and smooth interpolation). This works fine outside of SharePlay, and the device transform is where I would expect it to be, but during the FaceTime call deviceAnchor.originFromAnchorTransform seems to use the shared origin of the immersive space, so I end up with a transform that may be offset. Here is a video of the issue in action: https://streamable.com/205r2p

The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call, and the entity is always placed based on the head position. The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor; as you can see, it's offset. Is there any way I can query this transform locally for the user during a FaceTime call?

I would also like to know whether it's possible to disable this automatic entity transform syncing behavior. Setting entity.synchronization = nil results in the entity not showing up at all. https://developer.apple.com/documentation/realitykit/synchronizationcomponent Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach? Thank you!
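For reference, a minimal sketch of the per-frame device-anchor query being described (the 1 m forward offset and the entity are illustrative). It reproduces the setup rather than fixing the shared-origin behavior, but it makes the comparison against the head AnchorEntity easy to test:

    import ARKit
    import RealityKit
    import QuartzCore
    import simd

    // Place an entity roughly 1 m in front of the device, using the device anchor transform.
    func updateFollowEntity(_ entity: Entity, worldTracking: WorldTrackingProvider) {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }

        let deviceTransform = deviceAnchor.originFromAnchorTransform
        // -Z is "forward" for the device; push the target 1 m ahead of the wearer.
        var offset = matrix_identity_float4x4
        offset.columns.3.z = -1.0

        entity.setTransformMatrix(deviceTransform * offset, relativeTo: nil)
    }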
2 replies · 0 boosts · 255 views · Sep ’25
CapturedRoom.Section is missing a lot of information
The Section struct only makes the center property publicly available, but this is a SIMD3 that doesn't seem to line up with the rest of the model. All other objects have a 4x4 transform matrix that accurately gives their position and rotation. When inspecting a Section in the debugger, many more properties are visible, such as polygon and transform. Why aren't these exposed? The transform in particular seems necessary to make any real use of the sections.
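For context, a minimal sketch of what can be done with the public API today: lifting each section's center into a translation-only matrix as a stopgap, which is exactly what loses the rotation being asked about.

    import RoomPlan
    import simd

    // Build a translation-only transform from each section's public center.
    // Rotation isn't recoverable from the public API, which is the limitation above.
    func sectionTransforms(in room: CapturedRoom) -> [simd_float4x4] {
        room.sections.map { section in
            var transform = matrix_identity_float4x4
            transform.columns.3 = SIMD4<Float>(section.center.x, section.center.y, section.center.z, 1)
            return transform
        }
    }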
1 reply · 0 boosts · 342 views · Sep ’25
VisionPro Enterprise.license file
I have read in the Apple documentation and on the forums that, in order to access the camera and capture images on Vision Pro, both an entitlement and an Enterprise.license file are required. I already have the entitlement, but I don't yet have the Enterprise.license. Is the Enterprise.license strictly required to gain camera access for capturing images? How can I obtain this file, and does it require an enterprise account? Currently I have a regular $99 Apple Developer account, not an enterprise account.
2 replies · 0 boosts · 361 views · Sep ’25
Header Blur Effect on visionOS SwiftUI
Hi, I'm looking to build something similar to the header blur in the App Store and the Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content scrolls underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS? Thanks!
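Not necessarily how the App Store does it, but a minimal sketch of one common approach on visionOS: pin a header over the scroll view and give it a material background so it blurs whatever scrolls beneath it (the view names, row content, and 80-point height are illustrative).

    import SwiftUI

    // A scroll view with a pinned header that blurs content passing beneath it.
    struct BlurredHeaderView: View {
        var body: some View {
            ScrollView {
                LazyVStack(spacing: 12) {
                    ForEach(0..<40) { index in
                        Text("Row \(index)")
                            .frame(maxWidth: .infinity, minHeight: 44)
                    }
                }
                .padding(.top, 80) // leave room for the pinned header
            }
            .overlay(alignment: .top) {
                Text("Settings")
                    .font(.title2)
                    .frame(maxWidth: .infinity, minHeight: 80)
                    .background(.ultraThinMaterial) // blurs rows scrolling underneath
            }
        }
    }

With nothing behind it, the thin material reads close to the window's glass background; whether it matches exactly is something to verify on device.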
0 replies · 0 boosts · 110 views · Sep ’25
Spatial Computing, ARPointCloud (rawFeaturePoints)
https://developer.apple.com/documentation/arkit/arpointcloud
https://developer.apple.com/documentation/arkit/arframe/rawfeaturepoints

The point cloud (the collection of points/features) is mainly intended as a debug visualization of what the underlying tracking algorithm processes, and it is not designed for building additional algorithms on top of. Still, we are utilizing the information contained in the points/features collected by ARKit. Currently, the range of rawFeaturePoints is limited to about 10 meters from the device. We see a great opportunity if that range limit is unlocked: global localization would become more robust and accurate.

ARPointCloud - Apple ARKit - FindSurface (YouTube: SIdQRiLj2jY)
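For readers unfamiliar with the API under discussion, a minimal sketch that measures how far the current frame's raw feature points actually reach from the camera, which is one way to observe the ~10 m limit described above:

    import ARKit
    import simd

    // Return the distance from the camera to the farthest raw feature point in the current frame.
    func featurePointRange(from session: ARSession) -> Float? {
        guard let frame = session.currentFrame,
              let cloud = frame.rawFeaturePoints else { return nil }

        let cameraPosition = simd_make_float3(frame.camera.transform.columns.3)
        return cloud.points.map { simd_distance($0, cameraPosition) }.max()
    }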
6 replies · 0 boosts · 1k views · Sep ’25
New Spatial Rendering App on macOS doesn't display on visionOS device
I created a new Spatial Rendering App from the template in Xcode 26.0.1. When I run the app, click 'Show Immersive Space' and select my Vision Pro from the pop-up dialog, the content in the dialog flickers (which seems to indicate something crashed) and nothing appears on my Vision Pro. I'm running the released macOS 26.0 (25A354) and visionOS 26.0 (23M336). Filed as FB20397093.
0 replies · 0 boosts · 325 views · Sep ’25
ARKit with 422 pixel format and Apple Log colorspace
Hi, I'm trying to configure the camera feed in ARKit to use the Apple Log color space. I can change the capture device's format to one that supports Apple Log, and I see one frame in the proper log-gray colors, but then all AR tracking stops and the tracking state hangs at "initializing". With other combinations I see the error "sensor failed to initialize" and the session restarts with the default format. I suspect this is because the normal AR capture formats are 420f, whereas the ones that support Apple Log are 422. Could someone confirm whether it's even possible to run an ARKit session with the camera feed in a different pixel format? I'm trying this on an iPhone 15 Pro.
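A minimal diagnostic sketch (standard ARKit/AVFoundation calls; the configurable device requires iOS 16+ and Apple Log iOS 17+) that dumps the primary camera's formats with their pixel format and Apple Log support, to check whether any 420f format also offers Apple Log. It doesn't resolve the underlying question of whether ARKit tolerates a 422 format at all.

    import ARKit
    import AVFoundation
    import CoreMedia

    // List the formats of ARKit's configurable primary camera, printing the pixel
    // format FourCC and whether Apple Log is offered by each.
    func inspectPrimaryCameraFormats() {
        guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else {
            print("No configurable capture device available")
            return
        }
        for format in device.formats {
            let fourCC = CMFormatDescriptionGetMediaSubType(format.formatDescription)
            let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            let appleLog = format.supportedColorSpaces.contains(.appleLog)
            print("\(dims.width)x\(dims.height) pixelFormat=\(fourCC) appleLog=\(appleLog)")
        }
    }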
0 replies · 0 boosts · 187 views · Sep ’25
Xcode Cloud builds don't work with *.usdz files in a RealityComposer package
In sessions like "Compose interactive 3D content in Reality Composer Pro", RealityKit engineers recommend working with Reality Composer Pro to create RealityKit packages to embed in our RealityKit Xcode projects. Comparing the workflow to Unity/Unreal, I can see the reasoning, since it is nice to prepare scenes, materials, and assets visually. But when we also want to run an Xcode Cloud CI/CD pipeline, this seems to come into conflict: after adding a basic *.usdz to the RealityKitContent.rkassets folder, every build we run on Xcode Cloud fails with:

    Compile Reality Asset RealityKitContent.rkassets
    ❌ realitytool requires Metal for this operation and it is not available in this build environment

I have also found a related forum post, but it was specifically about compiling a *.skybox.
4 replies · 1 boost · 407 views · Sep ’25
RealityView doesn't free up memory after disappearing
Basically, take just the Xcode 26 AR App template and put the ContentView at the detail end of a NavigationStack. On opening, the app uses < 20 MB of memory. Tapping "Open AR", the memory usage goes up to ~700 MB for the AR scene. Tapping back, the memory stays at ~700 MB. Checking with the Debug Memory Graph, I can still see all the RealityKit classes in memory, such as ARView, ARRenderView, and ARSessionManager. Here's the sample app to illustrate the issue.

PS: To keep memory pressure on the system low, there should be a way of freeing all the memory AR uses, for apps that only occasionally show AR scenes.
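Not a fix for the retained object graph, but a sketch of the teardown people usually try first, assuming a UIViewRepresentable wrapper like the template's (the container name and configuration are illustrative): pause the session and drop anchors when SwiftUI dismantles the view, so at least tracking and capture resources stop when the AR screen is popped.

    import SwiftUI
    import RealityKit
    import ARKit

    // Hypothetical wrapper around the template's ARView that pauses the session
    // and drops scene content when SwiftUI dismantles the view.
    struct ARViewContainer: UIViewRepresentable {
        func makeUIView(context: Context) -> ARView {
            let arView = ARView(frame: .zero)
            arView.session.run(ARWorldTrackingConfiguration())
            return arView
        }

        func updateUIView(_ uiView: ARView, context: Context) {}

        static func dismantleUIView(_ uiView: ARView, coordinator: ()) {
            // Stop tracking and release anchors; otherwise the session keeps
            // its buffers alive after the view leaves the hierarchy.
            uiView.session.pause()
            uiView.scene.anchors.removeAll()
        }
    }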
0 replies · 0 boosts · 105 views · Sep ’25
visionOS SharePlay nearby codes expire
It looks like the pairing expires one week after accepting another AVP device as "nearby". Since we provide our clients with an app meant to be used indefinitely to walk inside architecture, it's a shame that non-technical staff have to reconnect five devices every week to work together. Is there any workaround for this issue, or should it go straight to the wishlist? Thanks for the support!
0 replies · 0 boosts · 41 views · Sep ’25
Does Apple Spatial Audio Format documentation exist
The WWDC25 video and notes titled "Learn About Apple Immersive Video Technologies" introduced the Apple Spatial Audio Format (ASAF) and codec (APAC). However, despite references throughout to using immersive video, there is scant information on ASAF/APAC (no code examples and no framework references), and months on I've found no documentation in Apple's APIs/frameworks about its implementation and use.

I want to leverage ambisonic audio in my app, and I don't want to write a custom AU if APAC will be opened up to developers. If you read the notes below along with the iPhone 17 advertising ("Video is captured with Spatial Audio for immersive listening"), it sounds like this is very much a live feature in iOS 26. Does anyone know the state of play? I'm across how the PHASE engine works, which is unrelated to what I'm asking about here.

Original quotes from the video referenced above:

"ASAF enables truly externalized audio experiences by ensuring acoustic cues are used to render the audio. It's composed of new metadata coupled with linear PCM, and a powerful new spatial renderer that's built into Apple platforms. It produces high resolution Spatial Audio through numerous point sources and high resolution sound scenes, or higher order ambisonics."

"ASAF is carried inside of broadcast Wave files with linear PCM signals and metadata. You typically use ASAF in production, and to stream ASAF audio, you will need to encode that audio as an mp4 APAC file."

"APAC efficiently distributes ASAF, and APAC is required for any Apple immersive video experience. APAC playback is available on all Apple platforms except watchOS, and supports Channels, Objects, Higher Order Ambisonics, Dialogue, Binaural audio, interactive elements, as well as provisioning for extendable metadata."
4 replies · 0 boosts · 629 views · Sep ’25
SharePlay on visionOS with remote participants
Hi everyone, I’m building a visualization app for VisionPro that uses SharePlay and GroupActivities to explore datasets collaboratively. I’ve successfully implemented the new SharedWorldAnchor feature, and everything works well with nearby, local participants. However, I’m stuck on one point: How can I share a world anchor with remote participants who join via FaceTime as spatial personas? Apple’s demo app (where multiple users move a plane model around) seems to suggest that this is possible. For context, I’m building an immersive app with Metal rendering. Any guidance or examples would be greatly appreciated! Thanks, Jens
2 replies · 1 boost · 358 views · Sep ’25