Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic

iOS needs to allow background Bluetooth scanning. I can't fully build my app.
iOS currently restricts background Bluetooth advertising and scanning to preserve battery life and protect user privacy. While these restrictions serve important purposes, they also limit legitimate use cases where users have explicitly opted in to proximity-based experiences. The core challenge is that modern social applications need a way to detect when users are physically present at the same location or event without requiring every participant to keep the app in the foreground.

Under the current system (a sketch of what this looks like in code follows at the end of this post):
- Background BLE advertising is heavily throttled and can only transmit a limited payload.
- Background scanning intervals are sparse and unpredictable.
- Peer-to-peer proximity detection cannot be maintained reliably when apps are in the background.
- Background App Refresh is non-deterministic, making any kind of time-based proximity validation impossible.

A proposed enhancement would be to introduce an "Enhanced Proximity Permission." This would allow developers to enable reliable background BLE advertising and scanning for declared time windows, such as a maximum of eight hours. It would also allow devices running the same app to detect each other's proximity using ephemeral, rotating identifiers that preserve privacy, with clear user consent and prominent indicators whenever the feature is active.

Unlocking this capability would open up new categories of applications:
- Live events: automatic attendance tracking at concerts, conferences, or sports venues.
- Retail: opt-in foot traffic analysis and dwell-time insights.
- Social: finding friends at festivals, campuses, or other large venues.
- Safety: crowd density monitoring and contact tracing beyond COVID-era needs.
- Gaming: real-world multiplayer experiences based on physical proximity.
- Transportation: verifying rideshare pickups or measuring public transit flows automatically.

Privacy safeguards would remain central:
- Permissions would be time-boxed and expire after an event or session.
- A mandatory visual indicator would be displayed whenever proximity tracking is active.
- A user-facing dashboard would show all apps granted enhanced proximity access.
- Permissions would be revoked automatically after a period of non-use.
- Only ephemeral tokens, never permanent identifiers, would be broadcast.

The industry impact would be significant. With this enhancement, iOS could power the next generation of location-aware social platforms while maintaining Apple's leadership in privacy through explicit user control and transparency. Current alternatives, such as requiring users to keep apps in the foreground or deploying dedicated hardware beacons, produce poor user experiences and constrain innovation in spatial computing and social applications.

Can anyone from Apple consider this change? Having to buy iBeacons is brutal and slows adoption. Please reconsider this for users who opt in.
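For context, a minimal sketch of what developers are limited to today, assuming the bluetooth-central and bluetooth-peripheral background modes are declared in Info.plist (the service UUID here is a hypothetical placeholder):

    import CoreBluetooth

    // Hypothetical service UUID used only for this illustration.
    let proximityServiceUUID = CBUUID(string: "E20A39F4-73F5-4BC4-A12F-17D1AD07A961")

    final class ProximityManager: NSObject, CBCentralManagerDelegate, CBPeripheralManagerDelegate {
        private var central: CBCentralManager!
        private var peripheral: CBPeripheralManager!

        override init() {
            super.init()
            central = CBCentralManager(delegate: self, queue: nil)
            peripheral = CBPeripheralManager(delegate: self, queue: nil)
        }

        func centralManagerDidUpdateState(_ manager: CBCentralManager) {
            guard manager.state == .poweredOn else { return }
            // Background scans must filter on explicit service UUIDs;
            // wildcard scans are foreground-only, and delivery is coalesced.
            manager.scanForPeripherals(withServices: [proximityServiceUUID], options: nil)
        }

        func peripheralManagerDidUpdateState(_ manager: CBPeripheralManager) {
            guard manager.state == .poweredOn else { return }
            // In the background, iOS strips the local name and moves the
            // service UUID into a low-visibility "overflow" area that only
            // other iOS devices can discover: the limited payload described above.
            manager.startAdvertising([
                CBAdvertisementDataServiceUUIDsKey: [proximityServiceUUID]
            ])
        }

        func centralManager(_ manager: CBCentralManager,
                            didDiscover peripheral: CBPeripheral,
                            advertisementData: [String: Any],
                            rssi RSSI: NSNumber) {
            print("Nearby peer \(peripheral.identifier), RSSI \(RSSI)")
        }
    }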
Replies: 1 · Boosts: 0 · Views: 933 · Activity: 5d
RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I'm encountering a memory overflow issue in my visionOS app and I'd like to confirm whether this is expected behavior or I'm missing something in cleanup.

App context: The app showcases apartments at real scale using AR. Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures). Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow:
1. The app starts in a full immersive space (RealityView) for selecting the apartment.
2. When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
3. When the user dismisses the experience, we attempt cleanup: nulling out all entity references, removing ModelComponents, clearing cached textures and skyboxes, and forcing dictionaries/collections to empty.

Despite this cleanup, memory usage remains very high.

Problem: After dismissing the ImmersiveSpace, memory does not return to baseline. See the attached screenshot of the profiling made with Instruments:
- Initial state: ~30 MB (main menu).
- After loading models sequentially: ~3.3 GB.
- Skybox textures bring it near ~4 GB.
- After dismissing the experience (at the ~01:00 mark): memory only drops slightly, to ~2.66 GB.
- When loading the second apartment, memory continues to increase until ~5 GB, at which point the app crashes due to memory pressure.

The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures, and does not free them when the RealityView ends. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.

Cleanup code example. Here's a simplified version of the cleanup I'm doing:

    func clearAllRoomEntities() {
        for (entityName, entity) in entityFromMarker {
            entity.removeFromParent()
            if let modelEntity = entity as? ModelEntity {
                modelEntity.components.removeAll()
                modelEntity.children.forEach { $0.removeFromParent() }
                modelEntity.clearTexturesAndMaterials()
            }
            entityFromMarker[entityName] = nil
            removeSkyboxPortals(from: entityName)
        }
        entityFromMarker.removeAll()
    }

    extension ModelEntity {
        func clearTexturesAndMaterials() {
            guard var modelComponent = self.model else { return }
            for index in modelComponent.materials.indices {
                removeTextures(from: &modelComponent.materials[index])
            }
            modelComponent.materials.removeAll()
            self.model = modelComponent
            self.model = nil
        }

        private func removeTextures(from material: inout any Material) {
            if var pbr = material as? PhysicallyBasedMaterial {
                pbr.baseColor.texture = nil
                pbr.emissiveColor.texture = nil
                pbr.metallic.texture = nil
                pbr.roughness.texture = nil
                pbr.normal.texture = nil
                pbr.ambientOcclusion.texture = nil
                pbr.clearcoat.texture = nil
                material = pbr
            } else if var simple = material as? SimpleMaterial {
                simple.color.texture = nil
                material = simple
            }
        }
    }

Questions:
1. Is this expected RealityKit behavior (textures/meshes cached internally)?
2. Is there a way to force RealityKit to release GPU resources tied to USDZ models when they're no longer used?
3. Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?

Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
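One thing worth ruling out before blaming framework caches is a lingering strong reference: if any closure, system, or piece of view state still holds an entity or TextureResource, the GPU memory behind it can never be freed. A minimal sketch of scoping everything behind a single optional so one assignment drops every reference at once (ApartmentSession is a hypothetical holder type, not an API):

    import SwiftUI
    import RealityKit

    // Hypothetical holder: keeps all loaded resources behind one optional
    // root so a single assignment releases every strong reference.
    @MainActor
    final class ApartmentSession {
        var root: Entity?

        func load(named names: [String], into content: RealityViewContent) async {
            let root = Entity()
            for name in names {
                // Entity(named:) loads from the app bundle; failures are skipped here.
                if let model = try? await Entity(named: name) {
                    root.addChild(model)
                }
            }
            content.add(root)
            self.root = root
        }

        func unload() {
            root?.removeFromParent()
            root = nil   // last strong reference gone; RealityKit may now free GPU memory
        }
    }

If memory still does not drop after a structure like this, that points more strongly at internal caching in RealityKit itself.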
Replies: 3 · Boosts: 0 · Views: 999 · Activity: 6d
Misaligned visionOS Simulator Home Position
Using Xcode 26 beta 6 on macOS 26 beta (build 25a5349a). When pressing the home button of the visionOS simulator, I am not positioned in the middle of the room as I normally would be. This occurred after moving around the space a lot to find an element added to an ImmersiveSpace. How to resolve: restart the simulator device. See the attached pictures, visionOSSimulatorCorrectHomePosition and visionOSSimulatorMisallignedHomePosition.
Replies: 2 · Boosts: 0 · Views: 845 · Activity: 1w
Bones/joints data issue - USD file export from Blender to RCP
Hi, I'm developing a prototype visionOS game. How can I access bone/joint information when exporting a USD file from Blender to Reality Composer Pro? The animation plays fine in RCP, and the joint information is correctly embedded in the USDA file (verified with usdchecker). However, RCP does not read it from USDA, USDC, or USDZ. It should be possible based on the Apple WWDC24 session "Compose interactive 3D content in Reality Composer Pro". I want to attach and detach an entity to a particular bone at certain moments, so I need the bone data. They are standard Mixamo animations, and my mesh is a single unified mesh. Using Blender 4.4.
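In case it helps to verify whether the skeleton survives the export at all, a minimal runtime check, assuming the model loads as a ModelEntity (the entity and joint names here are hypothetical):

    import RealityKit

    func inspectSkeleton(in root: Entity) {
        // Search the hierarchy for the entity that exposes the skeleton.
        guard let model = root.findEntity(named: "Character") as? ModelEntity else {
            print("No ModelEntity named Character found")
            return
        }
        // jointNames is empty if the skeleton was not imported at all.
        print("Joints: \(model.jointNames)")
        if let index = model.jointNames.firstIndex(where: { $0.hasSuffix("Head") }) {
            print("Head joint transform: \(model.jointTransforms[index])")
        }
    }

If jointNames comes back empty at runtime too, the problem is in the import path rather than in RCP's UI.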
Replies: 1 · Boosts: 0 · Views: 544 · Activity: 1w
visionOS plane anchor rotation and wall direction are inconsistent
I have a problem with wall plane detection using visionOS/ARKit. I am using ARKitSession's PlaneDetectionProvider with wall detection in a visionOS immersive space. I recorded the position and rotation of the first detected plane, but found that the rotation value deviates depending on the direction the user is facing when the space starts. That is, even for planes on the same wall, the rotation quaternion differs between runs. I would like to obtain the true orientation of the wall no matter which direction the user is facing when scanning starts, so that virtual content can be accurately aligned with the wall. I have tried using anchor.originFromAnchorTransform and Transform.rotation directly, but the rotation value is still affected by the user's initial orientation. In addition, I would like to know whether the user's initial orientation also affects the position information. If so, please provide a solution. Thank you!
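A sketch of one possible approach, under the assumption (worth checking against the PlaneAnchor documentation) that the anchor's local Y axis is the plane normal: derive the wall orientation from the normal itself, which is purely geometric, instead of from the anchor's in-plane rotation, which can vary per detection:

    import ARKit
    import simd

    func wallRotation(for anchor: PlaneAnchor) -> simd_quatf {
        let m = anchor.originFromAnchorTransform
        // Assumption: the wall's normal is the anchor's local Y axis in world space.
        let normal = simd_normalize(simd_make_float3(m.columns.1))
        // Heading of the normal projected onto the horizontal plane.
        let yaw = atan2f(normal.x, normal.z)
        // A rotation about world up that depends only on the wall's geometry,
        // not on the orientation in which the plane was first detected.
        return simd_quatf(angle: yaw, axis: SIMD3<Float>(0, 1, 0))
    }

The world origin itself is fixed when the space opens, so position should be consistent for the same wall; only the anchor's in-plane axes are free to vary.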
Replies: 1 · Boosts: 0 · Views: 409 · Activity: 1w
How to mix Animation and IKRig in RealityKit
I want an AR character to be able to look at a position while still playing the character's animation. So far, I've managed to manually adjust a single bone rotation using

    skeletalComponent.poses.default = Transform(
        scale: baseTransform.scale,
        rotation: lookAtRotation,
        translation: baseTransform.translation
    )

which I run on every render update while a full-body animation is playing. But of course, hardcoding a single joint (in my case the head) to point in a direction does not look as nice as running some inverse kinematics that includes the hips, neck, and head joints. I found some good IKRig code in "Composing interactive 3D content with RealityKit and Reality Composer Pro". But when I try to adjust rigs while animations are playing, the animations usually win over the IKRig changes to the mesh.
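Until the IK-versus-animation ordering is sorted out, one small thing that can soften the hard override is blending toward the look-at target instead of writing it directly. This is not real IK, just a damped version of the manual approach, assuming the same context as the snippet above (skeletalComponent, baseTransform, lookAtRotation):

    import simd

    // How strongly the look-at wins over the animated rotation, 0...1.
    let weight: Float = 0.6
    skeletalComponent.poses.default = Transform(
        scale: baseTransform.scale,
        // simd_slerp interpolates between the animated and target rotations,
        // so the head follows the animation while leaning toward the target.
        rotation: simd_slerp(baseTransform.rotation, lookAtRotation, weight),
        translation: baseTransform.translation
    )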
Replies: 1 · Boosts: 0 · Views: 465 · Activity: 2w
Ornaments in Presentations
We can add ornaments to popovers shown by PresentationComponent, but I'm not sure if we should.

While working on the editor for entities in a volume-based app, I had the idea to add ornaments to the presented views. The entire app exists inside a volume. A user can tap an item to present a popover UI to edit it, displayed using the new PresentationComponent in visionOS 26.

Ornaments have a new attachment anchor option this year: .parent().

    .ornament(attachmentAnchor: .parent(.top), ornament: {...})

This works well in the Simulator. We can add ornaments around this popover view just like we would with a window. Unfortunately, when I run this on device I get a different experience. Any part of the ornament that overlaps the popover content isn't rendered correctly. Sometimes it disappears entirely; other times it becomes partially transparent.

We could use content alignment to try to keep the ornament from overlapping the popover content:

    .ornament(attachmentAnchor: .parent(.top), contentAlignment: .bottom, ornament: {...})

This works sometimes, but not all the time. It's not clear whether this is a bug, because I'm not sure we are even supposed to be able to use ornaments in this way.

Here is my hierarchy:
1. An app opens as a volume.
2. The volume presents a RealityView, with its own ornament using the .scene() anchor.
3. Multiple entities with PresentationComponent show an edit view.
4. The view uses the .parent() anchor to add ornaments.

What makes me unsure is that other methods for drawing UI in a RealityView don't seem to work with ornaments. For example, if I add an attachment to show a view with an ornament, even when I use the .parent() anchor, the ornament is anchored to the volume, not the attachment view.

So what do we think? Is this a rendering bug? Are ornaments intended to work with attachments and presentations?
Replies: 2 · Boosts: 0 · Views: 322 · Activity: 2w
What's in Reality Composer Pro 26
That title would have made a great WWDC session. Unfortunately, it seems nothing is new in Reality Composer Pro this year. I've noticed that all versions of the Xcode beta this summer have shipped with Reality Composer Pro version 2.0. There have been slight bumps in the build number, but I haven't found any new features or seen any documentation indicating that anything has changed. So the question is: what is the state of Reality Composer Pro? Should we continue to use this tool, or start doing everything in code? A huge number of sample projects use Reality Composer Pro, so it seems Apple is still using it even if they didn't update it this year.
Replies: 1 · Boosts: 3 · Views: 374 · Activity: 2w
How to get multiple animations into USDZ
Most models are only available as glb or fbx, so I usually re-export them to USDZ using Blender. When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one "default subtree animation". In Blender I can see all the available animations and play them individually. The default subtree animation just plays the default idle animation. In fact, when I open the nonlinear animation view in Blender and select a different animation as the default, the exported USDZ shows the newly selected animation as the default subtree animation. I can see that in the Apple sample apps, models can have multiple animations in their Animation Library. I'm using the latest Blender 4.5, and the USDZ exporter should be working properly, shouldn't it?
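One workaround I've seen, not an official recommendation, is to export each Blender action as its own USDZ and attach the extra clips to the base model at runtime. A sketch, assuming both files share the same rig (the file names Character and Character_Walk are hypothetical):

    import RealityKit

    // Load the base model plus an animation-only export, then play a clip
    // from the extra file on the base model's skeleton.
    @MainActor
    func loadCharacterWithClips() async throws -> Entity {
        let base = try await Entity(named: "Character")            // mesh + skeleton
        let walkSource = try await Entity(named: "Character_Walk") // same rig, walk action
        if let walk = walkSource.availableAnimations.first {
            // Playing a clip from a matching rig on the base entity.
            base.playAnimation(walk.repeat())
        }
        return base
    }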
Replies: 2 · Boosts: 0 · Views: 340 · Activity: 3w
FromToByAnimation triggers availableAnimations instead of the single-bone animation
So, I was trying to animate a single bone using FromToByAnimation, but when I start the animation, the model instead plays the full-body animation stored in availableAnimations. If I don't run testAnimation, nothing happens. If I run testAnimation, I see the same animation as if I had called entity.playAnimation(entity.availableAnimations[0], ...).

Here's the full code I use to animate a single bone:

    func testAnimation() {
        guard let jawAnim = jawAnimation(mouthOpen: 0.4) else {
            print("Failed to create jawAnim")
            return
        }
        guard let creature,
              let animResource = try? AnimationResource.generate(with: jawAnim) else { return }
        let controller = creature.playAnimation(animResource, transitionDuration: 0.02, startsPaused: false)
        print("controller: \(controller)")
    }

    func jawAnimation(mouthOpen: Float) -> FromToByAnimation<JointTransforms>? {
        guard let basePose else { return nil }
        guard let index = basePose.jointNames.firstIndex(of: jawBoneName) else {
            print("Target joint \(self.jawBoneName) not found in default pose joint names")
            return nil
        }
        let fromTransforms = basePose.jointTransforms
        let baseJawTransform = fromTransforms[index]
        let maxAngle: Float = 40
        let angle: Float = maxAngle * mouthOpen * (.pi / 180)
        let extraRot = simd_quatf(angle: angle, axis: simd_float3(x: 0, y: 0, z: 1))
        var toTransforms = basePose.jointTransforms
        toTransforms[index] = Transform(
            scale: baseJawTransform.scale * 2,
            rotation: baseJawTransform.rotation * extraRot,
            translation: baseJawTransform.translation
        )
        let fromToBy = FromToByAnimation<JointTransforms>(
            jointNames: basePose.jointNames,
            name: "jaw-anim",
            from: fromTransforms,
            to: toTransforms,
            duration: 0.1,
            bindTarget: .jointTransforms,
            repeatMode: .none
        )
        return fromToBy
    }

PS: I can confirm that I can set this bone to a specific position if I use

    guard let index = newPose.jointNames.firstIndex(of: boneName) ...
    let baseTransform = basePose.jointTransforms[index]
    newPose.jointTransforms[index] = Transform(
        scale: baseTransform.scale,
        rotation: baseTransform.rotation * extraRot,
        translation: baseTransform.translation
    )
    skeletalComponent.poses.default = newPose
    creatureMeshEntity.components.set(skeletalComponent)

This works for manually setting the bone position, so the jawBoneName and the joint transformation can't be that wrong.
Replies: 1 · Boosts: 0 · Views: 228 · Activity: 3w
Manipulation stops working when changing rooms
This post documents an issue I reported in feedback FB19610114, to see if anyone knows of a workaround. Here is a copy of the feedback.

Short version: Manipulation (SwiftUI OR RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a person wearing an Apple Vision Pro leaving one room and entering another. Once this issue occurs, it impacts all apps that use these features. A device restart is the only fix I have found.

Feedback FB19610114: This is an odd one. I'm using the new ManipulationComponent in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset. When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc.).

When this bug occurs, it affects every app across the entire operating system that is using manipulation, including the RealityKit component AND the SwiftUI version. This is not limited to one app and is not limited to apps that I am working on. Once this error occurs, it affects literally any application across the operating system that is using this API, including apps from Apple.

I won't speculate on the cause, but I do know one way to always make it happen. Here is how to reproduce it:
1. Make an Xcode project with a single entity that uses the ManipulationComponent (a minimal sketch follows this post). There is no need to customize the configuration of this component; the default implementation will work.
2. Build and run this app on device. You can keep running from Xcode or quit and launch the app normally on device.
3. Open the app and manipulate the entity. It should work as expected.
4. Physically walk into another room. It is vital that you leave the room you are in and enter a different room entirely.
5. Use the Digital Crown to recenter your view and bring your window or volume to you.
6. Test the manipulation on the entity again. It should still work as expected at this point.
7. Physically move yourself and your headset back into the original room.
8. Use the Digital Crown to recenter your view and bring your window or volume to you.
9. Test the manipulation on the entity again. You should now see the issue.

When I follow the steps above, manipulation translation stops working 100% of the time at this point. It will impact any application using this API. The only way to fix it is to restart my headset.

A few points to keep in mind:
- It does not matter if an app is actively being run from Xcode.
- When this occurs, it impacts every single app, not just one.
- When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
- This impacts BOTH the SwiftUI version and the RealityKit version.
- When this occurs, the only way to "fix" it is to reboot the device.
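For anyone wanting to reproduce this, a minimal setup sketch. It assumes the ManipulationComponent.configureEntity(_:) convenience from visionOS 26, which adds the input-target and collision components the gesture needs; treat the exact call as an assumption and check the current docs:

    import SwiftUI
    import RealityKit

    struct ManipulationTestView: View {
        var body: some View {
            RealityView { content in
                // A simple box entity to drag, rotate, and scale.
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.2),
                    materials: [SimpleMaterial(color: .orange, isMetallic: false)]
                )
                // Default configuration: adds the supporting components and
                // enables the system manipulation gestures on the entity.
                ManipulationComponent.configureEntity(box)
                content.add(box)
            }
        }
    }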
Replies: 0 · Boosts: 3 · Views: 125 · Activity: 3w
Unexpected behavior when writing entities and loading .reality files
I have a simple visionOS app that creates an entity, writes it to the device, and then attempts to load it. However, when the entity file gets overwritten, the app can no longer load it correctly.

Here is my code for saving the entity:

    import SwiftUI
    import RealityKit
    import UniformTypeIdentifiers

    struct ContentView: View {
        var body: some View {
            VStack {
                ToggleImmersiveSpaceButton()
                Button("Save Entity") {
                    Task {
                        // if let entity = await buildEntityHierarchy(from: urdfPath) {
                        let type = UTType.realityFile
                        let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                        let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                        let fileURL = documentsURL.appendingPathComponent(filename)
                        do {
                            let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                            let material = SimpleMaterial(color: .blue, isMetallic: true)
                            let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                            let entity = Entity()
                            entity.components.set(modelComponent)
                            print("Writing \(fileURL)")
                            try await entity.write(to: fileURL)
                        } catch {
                            print("Failed writing")
                        }
                    }
                }
            }
            .padding()
        }
    }

Every time I press "Save Entity", I see a warning similar to:

    Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
    Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.

When I open the immersive space, I attempt to load the same file:

    import SwiftUI
    import RealityKit
    import UniformTypeIdentifiers

    struct ImmersiveView: View {
        @Environment(AppModel.self) private var appModel

        var body: some View {
            RealityView { content in
                guard let type = UTType.realityFile.preferredFilenameExtension else { return }
                let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                let fileURL = documentsURL.appendingPathComponent("testing.\(type)")
                guard FileManager.default.fileExists(atPath: fileURL.path) else {
                    print("❌ File does not exist at path: \(fileURL.path)")
                    return
                }
                if let entity = try? await Entity(contentsOf: fileURL) {
                    content.add(entity)
                }
            }
        }
    }

I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the immersive space attempts to load the new entity are:

    Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
    Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
    AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
The order of operations to make this happen:
1. Launch the app.
2. Press "Save Entity" to save the entity.
3. "Open Immersive Space" to view the entity.
4. Press "Save Entity" to overwrite the entity.
5. "Open Immersive Space" to view the entity: the asset load request fails.

Also:
1. Launch the app; the entity should still be saved from the last time the app ran.
2. "Open Immersive Space" to view the entity.
3. Press "Save Entity" to overwrite the entity.
4. "Open Immersive Space" to view the entity: the asset load request fails.

NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
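Since the archive errors suggest the reader may be seeing a partially replaced file, one hedge worth trying (purely an assumption about the cause, not a confirmed fix) is to write to a temporary URL and atomically swap it over the old file instead of overwriting in place:

    import Foundation
    import RealityKit

    // Sketch: write the entity to a temporary file, then replace the old
    // file in one step so a reader never sees a half-written archive.
    func safelyWrite(_ entity: Entity, to fileURL: URL) async throws {
        let tempURL = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension(fileURL.pathExtension)
        try await entity.write(to: tempURL)
        // replaceItemAt moves the temp file over the destination atomically.
        _ = try FileManager.default.replaceItemAt(fileURL, withItemAt: tempURL)
    }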
Replies: 0 · Boosts: 0 · Views: 114 · Activity: 3w
Perspective problem
Hi, I called it a "perspective problem", but I'm not quite sure what it is. I have a tag that I track with the built-in camera. I calculate its pose, then use the camera extrinsics and the device anchor to calculate where to place an entity with a model. When I place an entity so it overlaps a physical object and start looking at it from different angles, the virtual object appears to move. Initially I thought something was wrong with my calculations, or that image distortion toward the camera edges was affecting tag detection. To check, I calculated the position only once and displayed the entity there; the physical tracked object is not moving. Now, when I move my head so the object is more to the left or right in my field of view, the virtual object becomes misaligned to the left or right. It feels like a parallax effect, but the distances from me to the entity and to the physical object are exactly the same. Is that expected, perhaps because of some passthrough-correction magic? And if so, can I somehow correct for it so the entity always overlaps the object? I'm currently on v26 beta 5.

I also don't quite understand the camera extrinsics, because it seems I need to flip them around X by 180 degrees to make deviceAnchor * extrinsics.inverse * tag work (shouldn't the result be in the same coordinates as everything else in RealityKit?).
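On the extrinsics question: computer-vision camera conventions usually put +Y down and +Z forward, while RealityKit cameras have +Y up and look down -Z, so a 180-degree rotation about X converts between the two, which would explain the flip you need. A sketch of that correction, with all names being hypothetical placeholders for the matrices described above:

    import simd

    // Converts a pose expressed in the computer-vision camera convention
    // (x right, y down, z forward) into RealityKit's camera convention
    // (x right, y up, z backward) by rotating 180 degrees about X.
    let cvToRealityKit = simd_float4x4(simd_quatf(angle: .pi, axis: SIMD3<Float>(1, 0, 0)))

    func tagWorldTransform(worldFromDevice: simd_float4x4,
                           cameraExtrinsics: simd_float4x4,
                           tagInCamera: simd_float4x4) -> simd_float4x4 {
        // Mirrors the composition from the post, world <- device <- camera <- tag,
        // with the convention flip applied between the camera and the tag pose.
        worldFromDevice * cameraExtrinsics.inverse * cvToRealityKit * tagInCamera
    }

Whether the flip belongs before or after the extrinsics depends on which convention the extrinsics themselves use, so treat the exact placement as something to verify against your pipeline.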
Replies: 3 · Boosts: 0 · Views: 185 · Activity: 3w