Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Nested RealityKit entity collisions priority
Hello, I'm struggling to interact with a RealityKit entity nested (or at least visually nested) in a second one. I think I have the same issue as the one described in this StackOverflow post, but I can't manage to reproduce the solution: https://stackoverflow.com/questions/79244424/how-to-prioritize-a-specific-entity-when-collision-boxes-overlap-in-realitykit

What I'd like to achieve is to translate the red box using a DragGesture, while still being able to interact with the sphere using a TapGesture. Currently, I can only do one at a time. Does anyone know the solution?

    extension CollisionGroup {
        static let parent: CollisionGroup = CollisionGroup(rawValue: 1 << 0)
        static let child: CollisionGroup = CollisionGroup(rawValue: 1 << 1)
    }

    struct ImmersiveView: View {
        var body: some View {
            RealityView { content in
                let boxMesh = MeshResource.generateBox(size: 0.35)
                let boxMaterial = SimpleMaterial(color: .red.withAlphaComponent(0.25), isMetallic: false)
                let boxEntity = ModelEntity(mesh: boxMesh, materials: [boxMaterial])

                let sphereMesh = MeshResource.generateSphere(radius: 0.05)
                let sphereMaterial = SimpleMaterial(color: .blue, isMetallic: false)
                let sphereEntity = ModelEntity(mesh: sphereMesh, materials: [sphereMaterial])

                content.add(sphereEntity)
                content.add(boxEntity)

                boxEntity.components.set(InputTargetComponent())
                boxEntity.components.set(
                    CollisionComponent(
                        shapes: [ShapeResource.generateBox(size: SIMD3<Float>(repeating: 0.35))],
                        isStatic: true,
                        filter: CollisionFilter(
                            group: .parent,
                            mask: .parent.subtracting(.child)
                        )
                    )
                )

                sphereEntity.components.set(InputTargetComponent())
                sphereEntity.components.set(HoverEffectComponent())
                sphereEntity.components.set(
                    CollisionComponent(
                        shapes: [ShapeResource.generateSphere(radius: 0.05)],
                        isStatic: true,
                        filter: CollisionFilter(
                            group: .child,
                            mask: .child.subtracting(.parent)
                        )
                    )
                )
            }
        }
    }
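
One approach that may work (a minimal, untested sketch): scope each gesture to a single entity with targetedToEntity(_:), so the two overlapping collision shapes never compete for the same gesture. The entity references below are illustrative; both entities still need the InputTargetComponent and CollisionComponent setup from the post.

    import SwiftUI
    import RealityKit

    struct GestureRoutingView: View {
        // Illustrative references; in practice these would be the box and
        // sphere configured exactly as in the post.
        @State private var boxEntity = ModelEntity()
        @State private var sphereEntity = ModelEntity()

        var body: some View {
            RealityView { content in
                content.add(boxEntity)
                content.add(sphereEntity)
            }
            // The drag only ever resolves against the box...
            .gesture(
                DragGesture()
                    .targetedToEntity(boxEntity)
                    .onChanged { value in
                        guard let parent = value.entity.parent else { return }
                        value.entity.position = value.convert(
                            value.location3D, from: .local, to: parent)
                    }
            )
            // ...and the tap only against the sphere, so the overlapping
            // collision shapes no longer compete for the same input.
            .gesture(
                TapGesture()
                    .targetedToEntity(sphereEntity)
                    .onEnded { _ in
                        print("sphere tapped")
                    }
            )
        }
    }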
Replies: 2 · Boosts: 0 · Views: 89 · Activity: 53m

Sample code not working as expected: Implementing SharePlay for immersive spaces in visionOS
The following sample code project does not seem to work as expected: https://developer.apple.com/documentation/visionos/implementing-shareplay-for-immersive-spaces-in-visionos

We have tried to get this project working with a client, but while we were able to see nearby users and make FaceTime calls, the color-changing cube experience always remained a single color. Are there step-by-step instructions Apple has used to verify this sample code, so I can try to recreate the sample's expected behavior for both nearby participants and those on a FaceTime call?
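
For anyone comparing against their own build: this is roughly the messaging layer such a sample depends on. A minimal sketch, not the sample's actual code, with hypothetical type names (CubeActivity, ColorMessage); if the cube never changes color for other participants, the messenger send/receive path is the first thing to verify.

    import GroupActivities

    // Hypothetical activity and payload types for illustration.
    struct CubeActivity: GroupActivity {
        var metadata: GroupActivityMetadata {
            var meta = GroupActivityMetadata()
            meta.title = "Color Cube"
            meta.type = .generic
            return meta
        }
    }

    struct ColorMessage: Codable, Sendable {
        let red: Double
        let green: Double
        let blue: Double
    }

    func run(session: GroupSession<CubeActivity>) {
        let messenger = GroupSessionMessenger(session: session)

        // Receive color changes from other participants.
        Task {
            for await (message, _) in messenger.messages(of: ColorMessage.self) {
                // Apply `message` to the cube's material here.
                _ = message
            }
        }

        session.join()
    }

    // Send a local color change to everyone in the session.
    func broadcast(_ color: ColorMessage, via messenger: GroupSessionMessenger) async {
        try? await messenger.send(color)
    }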
Replies: 0 · Boosts: 0 · Views: 79 · Activity: 16h

Help with visionOS pushWindow issues requested
I first started using the SwiftUI pushWindow API in visionOS 26.2, and I've reported several bugs I discovered, listed below:

- Under certain circumstances, pushed window relationships may break, and this behavior affects all other apps, not just the app that caused the problem, until the next device reboot. In other cases, the system may crash and restart. (FB21287011)
- When a window presented with pushWindow is dismissed, its parent window reappears in the wrong location. (FB21294645)
- Pinning a pushed window to a wall breaks pushWindow for all other apps on the system. (FB21594646)
- pushWindow interacts poorly with the window bar's close-app option. (FB21652261)
- If a window locked to a wall calls pushWindow, the original window becomes unlocked. (FB21652271)
- If a window locked in place calls pushWindow and the pushed window is closed, the system freezes. (FB21828413)
- pushWindow, UIApplication.open, and a dismissed immersive space result in multiple failures that require a device reboot. (FB21840747)
- visionOS randomly foregrounds a backgrounded immersive space app with a pushed window's parent window visible instead of the pushed window. (FB21864652)
- When a running app is selected in the visionOS home view, windows presented with pushWindow spontaneously close. (FB21873482)
- Pushed windows use the fixed scaling behavior instead of the dynamic scaling behavior.

I'm posting these issues here in case the information is helpful to other developers. I'd also like to hear about other pushWindow issues developers have encountered, so I can watch out for them.

Questions:

- I've discovered that some of the issues above can be partially worked around by applying the defaultLaunchBehavior and restorationBehavior scene modifiers to suppress window restoration and locking, which pushWindow appears to interact poorly with (a sketch follows at the end of this post). Are there other recommended workarounds?
- I've observed that the Photos and Settings apps, which predate the pushWindow API, are not affected by the issues I reported. Are there other, more reliable ways I could achieve the same behavior as pushWindow without relying on that API?

I'd appreciate any guidance Apple engineers could provide. Thank you.
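
For illustration, here is roughly what the partial workaround mentioned above could look like. A sketch only, with hypothetical scene IDs and views, assuming the defaultLaunchBehavior and restorationBehavior scene modifiers available in recent SwiftUI:

    import SwiftUI

    @main
    struct PushWindowApp: App {
        var body: some Scene {
            WindowGroup(id: "main") {
                MainView()
            }

            // Hypothetical pushed-window scene. Suppressing launch and
            // restoration keeps the system from restoring or locking this
            // window, the interactions the post identifies as problematic.
            WindowGroup(id: "detail") {
                Text("Pushed window")
            }
            .defaultLaunchBehavior(.suppressed)
            .restorationBehavior(.disabled)
        }
    }

    struct MainView: View {
        @Environment(\.pushWindow) private var pushWindow

        var body: some View {
            Button("Push detail") {
                pushWindow(id: "detail")
            }
        }
    }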
Replies: 0 · Boosts: 2 · Views: 69 · Activity: 20h

Buttons become unresponsive after using .windowStyle(.plain) with auto-hiding menu
I'm developing a visionOS panorama viewer app where I need to implement an auto-hiding floating menu in immersive space. The menu should:

- Show for 3 seconds when entering immersive mode
- Auto-hide after 3 seconds
- Reappear when the user taps anywhere (using SpatialTapGesture)
- Respond to gaze + pinch interaction

The problem: when I add .windowStyle(.plain) to achieve a transparent window background for the auto-hide effect, all buttons in the menu become completely unresponsive to gaze + pinch interaction. The buttons only respond to direct finger touch (poking).

Without .windowStyle(.plain): buttons work correctly with gaze + pinch, but I cannot achieve a transparent window background for hiding. With .windowStyle(.plain): the window can be transparent, but buttons lose gaze + pinch interaction.

App.swift:

    @main
    struct MyApp: App {
        @StateObject private var model = AppModel()

        var body: some Scene {
            WindowGroup(id: "MainWindow") {
                ContentView()
                    .environmentObject(model)
            }
            .defaultSize(width: 900, height: 700)
            .windowResizability(.contentSize)
            .windowStyle(.plain) // <-- This causes the interaction issue

            ImmersiveSpace(id: "ImmersiveSpace") {
                ImmersiveView()
                    .environmentObject(model)
            }
        }
    }

ContentView.swift (simplified):

    struct ContentView: View {
        @EnvironmentObject var model: AppModel
        @State private var isMenuVisible: Bool = true

        var body: some View {
            VStack {
                if model.isImmersiveViewActive {
                    if isMenuVisible {
                        // This menu's buttons don't respond to gaze + pinch
                        immersiveControlMenu
                    }
                } else {
                    mainMenuButtons
                }
            }
            .glassBackgroundEffect()
        }

        private var immersiveControlMenu: some View {
            HStack {
                Button("Exit") {
                    exitImmersiveSpace()
                }
                .buttonStyle(.bordered) // Also tried .plain, same issue
            }
            .padding()
            .glassBackgroundEffect()
        }
    }

ImmersiveView.swift:

    struct ImmersiveView: View {
        @EnvironmentObject var model: AppModel

        var body: some View {
            RealityView { content in
                // Panorama sphere
                let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
                content.add(sphere)

                // Tap detector for menu toggle
                let tapDetector = Entity()
                tapDetector.components.set(CollisionComponent(shapes: [.generateSphere(radius: 900)]))
                tapDetector.components.set(InputTargetComponent())
                content.add(tapDetector)
            }
            .gesture(
                SpatialTapGesture()
                    .targetedToAnyEntity()
                    .onEnded { _ in
                        model.shouldShowMenu = true
                    }
            )
        }
    }

Environment: Xcode 26.2, visionOS 26.3, Vision Pro device.

Questions:

- Is .windowStyle(.plain) expected to affect button interaction behavior?
- What is the recommended approach to achieve a transparent/hidden window in immersive mode while maintaining button interactivity?
- Is there an alternative to .windowStyle(.plain) for hiding window chrome in visionOS?

Thank you for any guidance!
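
One alternative that avoids the plain window entirely (a sketch, not verified against this exact setup): host the auto-hiding menu as a RealityView attachment inside the immersive space. Attachments are rendered by the system without any window chrome, so no .windowStyle(.plain) is needed; whether they fully restore gaze + pinch behavior in this scenario is an assumption to test.

    import SwiftUI
    import RealityKit

    struct ImmersiveMenuView: View {
        @State private var isMenuVisible = true

        var body: some View {
            RealityView { content, attachments in
                // Add the panorama sphere and tap detector as in the post,
                // then place the menu attachment in front of the viewer.
                if let menu = attachments.entity(for: "menu") {
                    menu.position = [0, 1.4, -1.5] // hypothetical placement
                    content.add(menu)
                }
            } attachments: {
                Attachment(id: "menu") {
                    Button("Exit") {
                        // Dismiss the immersive space here.
                    }
                    .padding()
                    .glassBackgroundEffect()
                    .opacity(isMenuVisible ? 1 : 0)
                }
            }
        }
    }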
Replies: 5 · Boosts: 0 · Views: 914 · Activity: 3d

ModelEntity position values are stale after world recenter (Crown long press)
Hi everyone, I’m working with RealityKit on visionOS, and I’m seeing unexpected behavior when the user long-presses the Digital Crown, which recenters the world.

Observed behavior: when the world is recentered via a long press of the Crown, the models remain visually in the correct place (as expected). However, if I query the model’s position or transform immediately after recentering (e.g. entity.position or similar), I still get the old values from before the recenter. As soon as I interact with the model using a gesture (drag/rotate/scale), the position updates, and querying it then returns the correct, updated values.

So effectively:

1. Recenter happens
2. Visual position is correct
3. Programmatic position remains stale
4. First gesture causes the position to “snap” to the correct updated value

Questions:

- Is there any event, notification, or callback that fires when the world is recentered due to a long press of the Crown button?
- Is there a recommended way to get the updated world-space transform immediately after recenter, without waiting for a gesture?
- Is this expected behavior due to deferred/lazy transform updates in RealityKit?

Right now it feels like recentering updates the coordinate system but doesn’t immediately commit new transform values to entities until some interaction occurs. Any guidance or best-practice patterns for handling this would be appreciated. Thanks!
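
Regarding the first question: visionOS 26 added an onWorldRecenter view modifier (discussed in another post further down, along with its current caveats). A minimal sketch, assuming that modifier's trailing-closure form:

    import SwiftUI
    import RealityKit

    struct RecenterAwareView: View {
        var body: some View {
            RealityView { content in
                // Add entities here.
            }
            .onWorldRecenter {
                // Fires after a Digital Crown long press re-establishes the
                // world origin; re-read entity transforms here rather than
                // relying on values cached before the recenter.
            }
        }
    }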
Replies: 2 · Boosts: 0 · Views: 586 · Activity: 4d

Guided Access - Detect when setup (Eyes + Hands) is done
Hello, I am building a kiosk-style app for visionOS which will be used in Guided Access mode and given to various visitors. Each of them will do the hands + eyes setup, the standard Guided Access flow. I want my experience to auto-start playing content when setup is done. I looked everywhere but found no way to detect whether setup is complete. Adding any kind of interface to start the app manually is also risky, since buttons etc. remain visible and interactable WHILE setup takes place. A delay-based approach also won't work, since setup can be skipped, or fail, or be done quickly or slowly, so it takes anywhere between 10 seconds and a few minutes. So the question is: is there any way to get a notification, or check some bool or something, that will tell me that the hands + eyes setup in Guided Access mode is complete (or skipped)? Thanks in advance!
Replies: 2 · Boosts: 1 · Views: 586 · Activity: 1w

With manipulation component, once you let go, how to prevent the entity from disappearing while animating it back into the volume
So with the new ManipulationComponent, we can choose the "stay" release behavior, and then if you drag the entity out of your volume, it will instantly disappear once you let go. We can "animate" it back to inside the volume, e.g.:

    content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
        Entity.animate(
            .easeInOut(duration: 1),
            body: { event.entity.position = [0, 0.2, 0] },
            completion: {}
        )
    }

However, for the duration that it travels outside of the volume, it's invisible the whole time. In this Apple video, it seems to be visible when dragging and when letting go, but perhaps that's not a volume they're dragging it out of? https://youtu.be/VtenPKrvPOU?si=y1zoZOs2IMyDzOm6&t=1748

Does anyone know how to keep the entity visible after letting it go, while animating it back toward the inside of your volume?
Replies: 1 · Boosts: 1 · Views: 900 · Activity: 1w

Nearby Sharing a Volume won't work
Hi, we've developed an app for Vision Pro that utilises the GroupActivities SDK to provide shared experiences for our users. Remote participation works great, but we can't get nearby sharing to work. The behaviour we're observing:

- User 1 engages the share sheet from the Volume; the 2nd Vision Pro is visible.
- User 1 starts nearby sharing.
- Session initialisation runs for approx. 30 seconds, then fails.
- Sometimes, the nearby participant doesn't show up at all after the initialisation has failed once.

As stated in the Configure your visionOS app for sharing with people nearby article, we didn't make any changes to our implementation to support nearby sharing. Any help would be greatly appreciated.

Kind regards, David
Replies: 3 · Boosts: 0 · Views: 835 · Activity: 1w

ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input will never translate the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?

Environment: visionOS 26 Beta 2, built from macOS 26 Beta 2 and Xcode 26 Beta 2.

Attached is replicable sample code; I have tried this in other projects with the same results.

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
                ManipulationComponent.configureEntity(
                    immersiveContentEntity,
                    allowedInputTypes: .all,
                    collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
                )
                immersiveContentEntity.position.y = 1
                immersiveContentEntity.position.z = -0.5

                var mc = ManipulationComponent()
                mc.releaseBehavior = .stay
                immersiveContentEntity.components.set(mc)

                content.add(immersiveContentEntity)
            }
        }
    }
Replies: 12 · Boosts: 4 · Views: 1.3k · Activity: 1w

Manipulation stops working when changing rooms
This post documents an issue I reported in feedback FB19610114, to see if anyone knows of a workaround. Here is a copy of the feedback.

Short version: Manipulation (SwiftUI OR RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a human wearing an Apple Vision Pro leaving one room and entering another room. Once this issue occurs, it impacts all apps that use these features. A device restart is the only solution I have found.

Feedback FB19610114: This is an odd one. I'm using the new ManipulationComponent in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset. When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc).

When this bug occurs, it affects every app across the entire operating system that is using manipulation, including the RealityKit component AND the SwiftUI version. This is not limited to one app and is not limited to apps that I am working on. Once this error occurs, it affects literally any application across the operating system that is using this API, including apps from Apple.

I won't speculate on the cause of this, but I do know of one way where I can always get it to happen. Here is how to reproduce it:

1. Make an Xcode project with a single entity that uses the ManipulationComponent (see the sketch at the end of this post). There is no need to customize the configuration of this component; the default implementation will work.
2. Build and run this app on device. You can keep running from device, or quit and launch the app like normal on device.
3. Open the app and manipulate the entity - it should work as expected.
4. Physically walk into another room. It is vital that you leave the current room that you are in and enter a different room entirely.
5. Use the Digital Crown to recenter your view and bring your window or volume to you.
6. Test the manipulation on the entity again - it should still be working as expected at this point.
7. Physically move yourself and your headset back into the original room where you started.
8. Use the Digital Crown to recenter your view and bring your window or volume to you.
9. Test the manipulation on the entity again - you should now see the issue.

When I follow the steps above, manipulation translation stops working at this point 100% of the time. It will impact any application using this API. The only way to fix it is to restart my headset.

A few points to keep in mind:

- It does not matter if an app is actively being run from Xcode.
- When this occurs, it impacts every single app, not just one.
- When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
- This impacts BOTH the SwiftUI version and the RealityKit version.
- When this occurs, the only way to "fix" it is to reboot the device.
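
For concreteness, step 1 might look like the following. A minimal sketch under the assumptions above, assuming the default configureEntity(_:) overload shown with more parameters in another post in this thread:

    import SwiftUI
    import RealityKit

    struct ManipulationReproView: View {
        var body: some View {
            RealityView { content in
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.2),
                    materials: [SimpleMaterial(color: .orange, isMetallic: false)])
                // Default configuration: sets up collision, input targeting,
                // and manipulation behavior in one call.
                ManipulationComponent.configureEntity(box)
                content.add(box)
            }
        }
    }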
Replies: 6 · Boosts: 3 · Views: 939 · Activity: 2w

Extending or disabling the 1.5-meter boundary in ImmersiveSpace
I’m currently developing an app for visionOS and working with an ImmersiveSpace. I’ve noticed that the system automatically enforces a safety boundary at approximately 1.5 meters. If the user moves beyond this limit, the content fades out or the system reverts to Passthrough. Is there any way to disable this boundary or extend its radius? This app is currently in the experimental/verification phase, and it is intended to be run on a Vision Pro in Developer Mode. Since the primary goal is to test large-scale spatial interactions during development, I am looking for any way—including developer-specific settings or configurations—to bypass or expand this limit. If there isn't a direct API to change the boundary size, are there any recommended workarounds for testing movement within large environments? Any insights would be greatly appreciated!
Replies: 1 · Boosts: 0 · Views: 445 · Activity: 2w

Low-latency live streaming using APMP Wide FoV
Is it possible to achieve sub-second end-to-end latency when displaying live streaming video using APMP (Apple Projected Media Profile) with Wide FoV? APMP supports HLS playback, but my understanding is that standard HLS introduces several seconds of latency. I would like to know whether APMP (especially Wide FoV) supports Low-Latency HLS, or if there are inherent limitations that make sub-second latency impractical. If APMP is not suitable for this use case, are there any recommended alternatives within AVFoundation or related frameworks for rendering wide-FoV live video with very low latency? Thank you for any insights.
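
For experimentation, a minimal playback sketch. The URL is hypothetical, the stream itself must be authored as Low-Latency HLS with APMP projection metadata, and whether the projection is honored for a live LL-HLS stream is exactly the open question here:

    import AVFoundation
    import RealityKit

    func makeLiveVideoEntity() -> Entity {
        // Hypothetical LL-HLS stream URL carrying APMP metadata.
        let url = URL(string: "https://example.com/wide-fov/stream.m3u8")!
        let player = AVPlayer(url: url)

        // Favor latency over buffering for live playback.
        player.automaticallyWaitsToMinimizeStalling = false

        // Assumption: VideoPlayerComponent reads the projection (e.g. Wide
        // FoV) from the asset's metadata, as it does for file-based APMP.
        let entity = Entity()
        entity.components.set(VideoPlayerComponent(avPlayer: player))
        player.play()
        return entity
    }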
Replies: 1 · Boosts: 0 · Views: 374 · Activity: 2w

How to Renew visionOS Enterprise API Entitlements?
How can I renew a visionOS Enterprise API license? I've spent a lot of time contacting Apple Developer Support; they said they don't know the renewal process either and are "checking with the internal operations team", but it's been 2 months with no updates.

The official documentation (https://developer.apple.com/documentation/visionOS/building-spatial-experiences-for-business-apps-with-enterprise-apis#Request-the-entitlements) says: "The license file comes with an expiration date, so you need to renew it before then to ensure your entitlements continue to function." But it doesn't explain HOW to renew.

When I asked Apple Support about this, they told me: "After Apple approves your app for one or more entitlements, you receive a license file, along with additional instructions." But I never received any instructions when I was first approved, and I still don't know how to renew. There's also no direct way to contact the Enterprise API team.

Now my visionOS Enterprise API license has been expired for 2 months. I submitted a renewal request, but I still haven't heard anything back. Is it normal for approval to take more than 2 months? Any advice or shared experiences would be really helpful. Thanks!
Replies: 1 · Boosts: 0 · Views: 470 · Activity: 2w

Degraded RoomPlan performance
We have been using RoomPlan in our app for 2+ years. Through a combination of in-app and manual coaching on scanning best practices, most users are able to achieve high-quality scans on a consistent basis. In recent weeks, however, we have observed an increase in reports of degraded scanning performance, even from veteran users who had not previously encountered issues. The RoomCaptureView overlay is jittery and crooked, and the resulting scan file has significant issues, even for simple, well-lit rectangular rooms.

It is difficult to troubleshoot these issues given the number of variables at play, and the overall volume of reports is still relatively low, but we'd appreciate any guidance on known issues or workarounds that could help unblock our users who are being affected by this. I noticed that this post includes an acknowledgement of FB14454922 and FB15035788. Our issues seem slightly different, as the scans are simply inaccurate and jittery without failing outright. I haven't found any other threads on similar issues.
Replies: 3 · Boosts: 2 · Views: 2.2k · Activity: 3w

onWorldRecenter memory leak and duplicate callbacks in ImmersiveSpace
Posting this here in case this information is helpful to other developers. As of visionOS 26.3 beta 1, onWorldRecenter has two significant issues:

1. (FB21557639) Memory leak: when onWorldRecenter is assigned to a RealityView within an ImmersiveSpace, it appears to retain a strong reference to the view's internal SwiftUI context. When the immersive space is dismissed, the view's @State objects will not be deallocated. Also, each time the immersive space view's body is executed, additional state storage will be allocated and leaked.
2. Multiple callbacks: when the user long-presses the Digital Crown, the onWorldRecenter closure will be called multiple times, once for each past view body execution, including those of immersive space views that have been previously dismissed.

Although these issues seem to be most prevalent when onWorldRecenter is used with an ImmersiveSpace, they may also occur in the context of a WindowGroup under certain circumstances.

It's possible to work around this problem by moving onWorldRecenter to an empty overlay view within the app's primary WindowGroup and forwarding the world recenter events to ImmersiveSpace views through a notification system, coupled with a debouncer as an extra precaution. A sketch of this workaround follows.
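
A minimal sketch of the forwarding workaround described above, with hypothetical names and the debouncer omitted:

    import SwiftUI
    import Combine

    extension Notification.Name {
        // Hypothetical notification name.
        static let worldDidRecenter = Notification.Name("worldDidRecenter")
    }

    // Lives in the primary WindowGroup: an empty overlay owns the only
    // onWorldRecenter registration, so no ImmersiveSpace state is retained.
    struct RecenterForwarder: View {
        var body: some View {
            Color.clear
                .onWorldRecenter {
                    NotificationCenter.default.post(name: .worldDidRecenter, object: nil)
                }
        }
    }

    // The ImmersiveSpace view observes the notification instead of using
    // onWorldRecenter directly; a debouncer (not shown) guards against the
    // duplicate callbacks described above.
    struct ImmersiveContent: View {
        var body: some View {
            RealityView { _ in }
                .onReceive(NotificationCenter.default.publisher(for: .worldDidRecenter)) { _ in
                    // Respond to the recenter here.
                }
        }
    }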
Replies: 1 · Boosts: 1 · Views: 985 · Activity: 3w
