Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post · Replies · Boosts · Views · Activity

How can I implement the expand effect when clicking a contact's avatar, like in visionOS's Messages app?
I found that the system Messages app has a click-to-expand effect that smoothly expands a contact's avatar horizontally, which looks great. I wonder if I can add a similar effect to my own app. If possible, are there any implementation ideas or examples I can refer to? Thanks!
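The Messages implementation is not public, but one way to approximate a horizontal click-to-expand in SwiftUI is a matchedGeometryEffect transition. A minimal sketch; the view names, colors, and sizes are all illustrative:

import SwiftUI

struct AvatarExpandView: View {
    @Namespace private var avatarNamespace
    @State private var isExpanded = false

    var body: some View {
        HStack {
            if isExpanded {
                // Expanded card: avatar plus details, sharing the same
                // geometry identity so SwiftUI animates between the two states.
                HStack(spacing: 12) {
                    Circle()
                        .fill(.blue)
                        .matchedGeometryEffect(id: "avatar", in: avatarNamespace)
                        .frame(width: 60, height: 60)
                    Text("Contact Name")
                }
                .padding()
                .glassBackgroundEffect()
            } else {
                // Collapsed avatar.
                Circle()
                    .fill(.blue)
                    .matchedGeometryEffect(id: "avatar", in: avatarNamespace)
                    .frame(width: 44, height: 44)
            }
        }
        .onTapGesture {
            withAnimation(.spring) { isExpanded.toggle() }
        }
    }
}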
0 replies · 0 boosts · 580 views · Oct ’24
Keep Apple Vision Pro Awake
I have two Apple Vision Pros so that I can build and test a multiplayer immersive game. But I am a solo developer, so I need to take one Apple Vision Pro off and put the other one on to see what each device is seeing, and to verify that my game state is correctly sent over the network with Multipeer Connectivity. The problem is that when I take one off, that Apple Vision Pro immediately goes to sleep. With visionOS 1, I could put a piece of paper inside the headset and it would stay awake for hours while I swapped devices and debugged my game. With visionOS 2, even with the paper, the devices soon go to sleep. Is there a setting I can change, or a developer override, to stop this auto sleep / auto lock? I need to check things like: when two devices are on the network, can I see them both so that players can select each other from a menu? Can I send object positions back and forth? Thank you.
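For the peer-discovery check specifically, a minimal Multipeer Connectivity sketch would look like this; the service type "mygame-mpc" is a placeholder, and this only discovers peers (a real game would accept the invitation with an MCSession):

import Foundation
import MultipeerConnectivity

final class PeerFinder: NSObject, MCNearbyServiceAdvertiserDelegate, MCNearbyServiceBrowserDelegate {
    private let peerID = MCPeerID(displayName: "player-\(Int.random(in: 0..<1000))")
    private lazy var advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil, serviceType: "mygame-mpc")
    private lazy var browser = MCNearbyServiceBrowser(peer: peerID, serviceType: "mygame-mpc")

    func start() {
        advertiser.delegate = self
        browser.delegate = self
        advertiser.startAdvertisingPeer()
        browser.startBrowsingForPeers()
    }

    // MCNearbyServiceBrowserDelegate: each headset should log the other.
    func browser(_ browser: MCNearbyServiceBrowser, foundPeer peerID: MCPeerID, withDiscoveryInfo info: [String: String]?) {
        print("Found peer:", peerID.displayName)
    }

    func browser(_ browser: MCNearbyServiceBrowser, lostPeer peerID: MCPeerID) {
        print("Lost peer:", peerID.displayName)
    }

    // MCNearbyServiceAdvertiserDelegate: discovery-only sketch, so decline.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser, didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?, invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(false, nil) // accept with a real MCSession in a game
    }
}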
0 replies · 2 boosts · 571 views · Oct ’24
MagnifyGesture in RealityView does not work on macOS
I have tested the MagnifyGesture code below on multiple devices: Vision Pro (working), iPhone (working), iPad (working), macOS (not working). In Reality Composer Pro, I have also added the Input Target and Collision components to the test model entity. On macOS, I tried the trackpad pinch gesture and the mouse scroll wheel, but neither works. How can I resolve this issue? Thank you.

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
        }
        .gesture(MagnifyGesture()
            .targetedToAnyEntity()
            .onChanged(onMagnifyChanged)
            .onEnded(onMagnifyEnded))
    }

    func onMagnifyChanged(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyChanged")
    }

    func onMagnifyEnded(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyEnded")
    }
}
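One diagnostic worth trying (a sketch, not a confirmed fix): attach a plain MagnifyGesture without .targetedToAnyEntity() on macOS to check whether the gesture fires at all, which would isolate entity targeting as the failing piece:

import SwiftUI
import RealityKit

struct MagnifyDiagnosticView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.1),
                                  materials: [SimpleMaterial(color: .red, isMetallic: true)])
            content.add(box)
        }
        // Plain view-level gesture, no entity targeting involved.
        .gesture(
            MagnifyGesture()
                .onChanged { value in
                    print("plain magnify:", value.magnification)
                }
        )
    }
}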
0 replies · 0 boosts · 32 views · Mar ’25
Apple Vision Pro Enterprise API configuration issue
My name is Tom Shannon, a developer with Omnia (d.b.a. Aequilibrium Inc.). We were recently approved for some of the Enterprise APIs for the Vision Pro; for history, see our Case-ID: 9237594. We are contacting you for assistance: we downloaded the entitlement license provided and added it to our target for an application under the bundle id com.omnia.spatialbrowser. Then, under my project with my developer account, which belongs to the Aequilibrium Inc. team (279PV9XKZ2), we tried to add the Barcode Scanner Enterprise API entitlement, but it does not show up as an option for us. I am on Xcode 16.1 beta (16B5001e) for reference. Any help would be greatly appreciated. Best,
0 replies · 0 boosts · 610 views · Oct ’24
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor is continually interrupted when you look at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
0 replies · 0 boosts · 402 views · Nov ’24
Is dismissWindow actually asynchronous?
On visionOS, I have discovered that if dismissWindow is followed immediately by a call to openWindow, the new window does not open where the user is looking. Instead, it appears at the same location as the dismissed window. However, if I open the new window after a small delay, or after UIScene's willDeactivateNotification, the new window correctly opens in front of the user. (I tested this within an open immersive space.) Does this imply that dismissWindow is actually asynchronous, in the sense that it needs extra time to reset certain internal state before the next openWindow can be called? What is the best practice for closing a window and then opening a new window in front of the user's current head position?
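For what it's worth, a workaround sketch that matches this observation is to defer openWindow slightly after dismissWindow. The 50 ms figure and the window ids are assumptions, not a documented contract:

import SwiftUI

struct SwitchWindowButton: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Switch windows") {
            Task { @MainActor in
                dismissWindow(id: "old")
                // Give the dismissal a beat to settle so the new window
                // is placed relative to the user's current gaze.
                try? await Task.sleep(for: .milliseconds(50))
                openWindow(id: "new")
            }
        }
    }
}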
0 replies · 0 boosts · 422 views · Oct ’24
WebXR Immersive AR Support & DOM Overlays
As mentioned in https://forums.developer.apple.com/forums/thread/756736?answerId=810096022#810096022: is there any update on full support for the WebXR AR Module, which should enable immersive-ar mode? Are features such as DOM overlays and WebGPU bindings on the roadmap? Is it possible to capture stereoscopic video, either internally, externally, or via AirPlay, for debugging purposes? Thanks
0 replies · 5 boosts · 769 views · Oct ’24
Trouble initializing SharePlay and using GroupSessionJournal
I am having trouble initializing SharePlay. It works, but we have to leave the game (click the close button) and rejoin it, sometimes several times, for it to establish the connection. I am also having trouble sharing images over SharePlay with GroupSessionJournal: I cannot get it to transfer any amount of data, or even get the other SharePlay participants to recognize that an image is being sent. We have looked at all the information we can find online and are still unable to establish a connection. I am not sure if I am missing a step or if I am sending the data through the GroupSessionJournal incorrectly. Here are the steps I take to replicate the issue:

1. FaceTime another person who has the app.
2. Open the app and click the SharePlay button to SharePlay it with the other person.
3. Establish the SharePlay and verify that the board states are synchronized across participants. If they are not, click the close button, then open the app again to rejoin the SharePlay. (This is one of the bugs I would like to fix; it is just a workaround we developed to establish the SharePlay. We would like it to work as soon as you click SharePlay and the other person joins the session.)
4. Once the SharePlay has been established, change the image by clicking "change 1 image" and selecting a JPG image. The image that represents 1 should not be set; if you don't see the image, click any of the X's in the squares and it will change to the image.
5. The image should appear for the other participant in the SharePlay. (This does not happen, and it is what we have not been able to figure out.)

Here are the classes for the example project I created: ContentView, Game Model class, Activity Manager, and the main starter class.
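For the journal part, here is a minimal sketch of sending and observing attachments, assuming the add(_:) / load(_:) behavior shown in Apple's GroupActivities samples; MyGameActivity is a placeholder for your GroupActivity type:

import Foundation
import GroupActivities

// Observe attachments from all participants; both sides should run this.
func configureJournal(for session: GroupSession<MyGameActivity>) {
    let journal = GroupSessionJournal(session: session)
    Task {
        for await attachments in journal.attachments {
            for attachment in attachments {
                // Data is Transferable, so raw JPEG bytes round-trip directly.
                if let data = try? await attachment.load(Data.self) {
                    print("Received attachment with \(data.count) bytes")
                }
            }
        }
    }
}

// Sending side: add the image bytes to the shared journal.
func share(imageData: Data, via journal: GroupSessionJournal) async {
    do {
        _ = try await journal.add(imageData)
    } catch {
        print("Journal add failed:", error)
    }
}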
0 replies · 0 boosts · 529 views · Sep ’24
Accessing LiDAR Depth Data and Scene Reconstruction on Apple Vision Pro
Hello, I'm developing a visionOS application for Apple Vision Pro that aims to scan unknown physical objects, capture their 3D data (such as meshes or point clouds), and export them as 3D models. Ideally, I'd also like to visualize these reconstructions in real time within the headset. This functionality is similar to what's available in Reality Composer on iPad and iPhone, but I'm seeking to implement it natively on Vision Pro. I've reviewed the visionOS documentation but haven't found clear guidance on accessing LiDAR depth data or performing scene reconstruction. Specifically, I'm interested in:

1. Accessing LiDAR or depth data from Vision Pro's sensors.
2. Utilizing ARKit's scene reconstruction capabilities on visionOS.
3. Exporting captured 3D data as models (e.g., USDZ or OBJ formats).

Are there APIs or frameworks in visionOS that support these features?
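For reference, a minimal sketch of point 2: raw LiDAR depth frames are not exposed by public visionOS API (sensor/camera access is gated behind the Enterprise APIs), but ARKit's SceneReconstructionProvider delivers mesh anchors that you can visualize or convert for export with your own writer:

import ARKit
import RealityKit

let session = ARKitSession()
let sceneReconstruction = SceneReconstructionProvider()

// Assumes an open immersive space and granted world-sensing permission.
func runReconstruction() async throws {
    guard SceneReconstructionProvider.isSupported else { return }
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        // meshAnchor.geometry holds vertices, normals, and faces that you
        // can turn into a MeshResource for live display, or serialize to
        // OBJ/USDZ with your own exporter.
        print("Mesh anchor \(update.event): \(meshAnchor.id)")
    }
}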
0 replies · 2 boosts · 100 views · May ’25
[Rendering] Always display a window in front of any entity in a mixed immersive view?
Currently I am using a mixed-style immersive space to present both my window view (plain style) and my ImmersiveView content together. The issue is that depth testing during rendering can let the virtual content occlude my normal window view. Is it possible to force the windowed view to always render in front of my virtual content in mixed immersion? (I know about ModelSortGroup, but it doesn't quite fit here.) Alternatively, can I dynamically change the .progressive immersion amount while the immersive space is open (does setting the amount to zero make it equivalent to .mixed)?
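On the second question, a minimal sketch of switching immersion style at runtime via the immersionStyle(selection:in:) binding; whether a progressive amount of zero behaves exactly like .mixed is not documented as far as I know, and the app shell here is hypothetical:

import SwiftUI
import RealityKit

@main
struct ImmersionSwitchApp: App {
    @State private var style: any ImmersionStyle = .mixed

    var body: some Scene {
        WindowGroup {
            // Changing the bound selection while the space is open
            // switches the style at runtime.
            Button("Toggle immersion style") {
                style = style is MixedImmersionStyle ? .progressive : .mixed
            }
        }

        ImmersiveSpace(id: "game") {
            RealityView { _ in /* entities here */ }
        }
        .immersionStyle(selection: $style, in: .mixed, .progressive)
    }
}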
0 replies · 0 boosts · 61 views · Mar ’25
Animation handling on Scene change
I work on a game where I use timeline animations from Reality Composer Pro. The game runs in an immersive space, but it can be paused, at which point I move the whole level root entity from the immersive space into another RealityView in a WindowGroup. When the player continues, I do exactly the reverse and move the level root from the WindowGroup back into my immersive-space RealityView. It seems that all animations are automatically stopped and restarted when the scene changes. The problem is that an animation does not resume where it stopped; it restarts the full loop from the position where it was interrupted and therefore ends up with a wrong y offset, as visible in the picture. For example, the yellow sphere in the picture loops the following animation:

0 to 100
100 to -100
-100 to 0

If I pause the game (and thereby switch scenes) while the sphere is at y = 100, the previous animation is stopped and restarted at y = 100, so it now loops:

100 to 200
200 to 0
0 to 100

I have already tried all kinds of setups, such as:

- Setting the animations relative to root, parent, or local space
- Using behaviors (On Added to Scene, On Notification)
- Accessing availableAnimations directly and saving the animation's playback controller

With the last approach I saw that if I manually trigger the following code before switching the scene, everything works as expected:

Button("Reset") {
    animationPlaybackController.time = 0
    animationPlaybackController.pause()
    animationPlaybackController.stop(blendOutDuration: 0.00001)
}

But if I set time = 0 and call .stop() directly, the time = 0 seems to be ignored and I get the same behavior as before, stopping at a wrong y offset; hence my assumption that animations are stopped and invalidated once they change scenes. I tried calling the code manually in ImmersiveSpace.onDisappear and WindowGroup.onAppear, and from various SceneEvents subscriptions, but unfortunately nothing worked. Am I doing something wrong in general, or is there a way to fix this?
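For reference, a workaround sketch along the lines of the reset button above: capture the controller's playback time before the switch and reapply it after re-adding the entity. The function names are illustrative, and I cannot confirm this survives the re-parenting described here:

import Foundation
import RealityKit

var savedTime: TimeInterval = 0

// Call before moving the level root to the other scene.
func willSwitchScene(_ controller: AnimationPlaybackController) {
    savedTime = controller.time
    controller.stop()
}

// Call after the entity has been re-added to the new RealityView.
func didSwitchScene(entity: Entity) {
    guard let clip = entity.availableAnimations.first else { return }
    let controller = entity.playAnimation(clip.repeat())
    controller.time = savedTime // resume from the captured offset
}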
0 replies · 0 boosts · 55 views · Apr ’25
Cannot reassign worldTracking / planeDetection providers in my PlacementManager when switching environments
Environment: Xcode 16.2, visionOS SDK 2.4, Swift 6.1. Target: Apple Vision Pro (immersive space). Frameworks: ARKit, RealityKit, SwiftUI.

What I'm trying to do: I have a view-model class PlacementManager that holds two AR providers:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).

What's happening: If I declare them as

private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()

I get compile errors when I later do:

self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant

If I change them to uninitialized vars:

private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider

then in my init() I get:

self used in property access 'worldTracking' before all stored properties are initialized

Code snippet:

@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run(
            [newWorldTracking, newPlaneDetection]
        )
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}

What I've tried:

- Giving them default values at declaration (= WorldTrackingProvider())
- Initializing them at the top of init() before any use
- Passing the new providers into arkitSession.run(...)

My question: What is the recommended Swift pattern to declare and reassign these ARKit provider properties so that they are fully initialized before use in init(), and I can swap them out later in setEnvironment(...) without compiler errors? Any pointers (or links to forum threads / docs) would be greatly appreciated!
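One pattern that should satisfy both constraints is declaring the properties as vars with initial values (not lets), so init() can use them immediately and setEnvironment can replace them later. A minimal sketch, assuming the providers need no constructor arguments:

import ARKit
import Observation

@Observable
final class PlacementManagerSketch {
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider()

    @MainActor
    func setEnvironment(session: ARKitSession) async throws {
        // Create fresh providers for the new environment, matching the
        // setEnvironment logic in the question, then reassign the vars.
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await session.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}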
0 replies · 0 boosts · 87 views · May ’25
How to add visual thickness to a glass background view
Hi guys, in visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness. However, I noticed that in an app built by Apple, a glass background view does appear to have thickness when viewed from the side. I tried adding .frame(depth:) to a glass background view, but that renders as two separate layers spaced by the depth value. My question is: Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture? And if not, how should I write a custom view modifier to achieve this effect? Thanks!
0 replies · 0 boosts · 81 views · May ’25
Envision the future: Build great apps for visionOS
I am honored to have been selected to participate in the "Envision the future: Build great apps for visionOS" conference (https://developer.apple.com/events/view/ZCH7ZUY24C/dashboard). Unfortunately, I am in China, and because of visa timing (applying for a visa usually takes at least 2 months, but there were only about 10 days between receiving the notice and the event), I cannot travel to the United States to attend in person. So I hoped Apple could place an iPad on site with a FaceTime link, and I could call in to that iPad. I sent Apple this suggestion, but with only a few days left before the event, they have not replied. Apple has even sent me the ticket to the Developer Center and asked me to add it to the Apple Wallet app, which suggests my request has not been handled. So I hope someone who knows about this process can give me some advice or help. If possible, I hope an Apple engineer can contact the people who manage this event. I would be very grateful. Thank you.
0 replies · 0 boosts · 613 views · Oct ’24
visionOS App Not Receiving Latest Heart Rate Data from HealthKit
Hello everyone, I'm developing an app for visionOS that uses HealthKit to query heart rate data. However, the app doesn't retrieve the latest heart rate values; specifically, it fails to get live heart rate data even after the data has been saved to the Health app. The readings my app displays are outdated and do not match the current values shown in the Health app. Here's what I've tried so far:

- Fetching heart rate samples: Used HKSampleQuery and HKAnchoredObjectQuery to fetch the most recent heart rate samples. Despite this, the data retrieved is still not up to date.
- Checking permissions: Ensured that all necessary HealthKit permissions are granted. The app has authorization to read heart rate data and write workout data.

My questions are:

1. Is there a known issue or limitation with HealthKit on visionOS that prevents apps from accessing the latest heart rate data?
2. Are there additional steps or configurations required to access live heart rate data in visionOS apps?
3. Has anyone successfully implemented live heart rate monitoring on visionOS, and if so, could you share how you achieved it?
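For comparison, here is a minimal long-lived anchored-query sketch; if this still returns stale values, the store itself likely hasn't received newer samples from the paired source yet:

import HealthKit

let healthStore = HKHealthStore()

func startLiveHeartRateQuery() {
    let heartRate = HKQuantityType(.heartRate)
    let query = HKAnchoredObjectQuery(
        type: heartRate,
        predicate: nil,
        anchor: nil, // persist the returned anchor across launches in a real app
        limit: HKObjectQueryNoLimit
    ) { _, samples, _, _, _ in
        report(samples) // initial batch
    }
    // The update handler keeps the query alive and delivers new samples
    // as they arrive in the store.
    query.updateHandler = { _, samples, _, _, _ in
        report(samples)
    }
    healthStore.execute(query)
}

func report(_ samples: [HKSample]?) {
    for case let sample as HKQuantitySample in samples ?? [] {
        let bpm = sample.quantity.doubleValue(for: .count().unitDivided(by: .minute()))
        print("Heart rate: \(bpm) bpm at \(sample.endDate)")
    }
}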
0 replies · 0 boosts · 458 views · Sep ’24
Apple Vision Pro - hand tracking with gloves
I am considering adding finger-pad haptics (the data flow for haptic feedback is directed from the AVP to the fingers, not vice versa): simple piezos wired to a wrist-worn connection holding the driver and battery. But I'm concerned this will impact hand tracking. Is there any guidance regarding gloves and/or the size of peripherals attached to the fingers? Or, if anyone knows of another inexpensive, low-profile option on the market, please let me know. Thanks
0 replies · 0 boosts · 211 views · Feb ’25
How to make .blur(radius:) visually affect RealityView content?
According to the official documentation, the .blur(radius:) modifier applies a Gaussian blur to a view. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears blurred. Here's the test code:

struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}

My question: How can I make .blur(radius:) visually affect the content rendered in a RealityView? Can you provide a working example in which .blur() visually affects any part of a RealityView? Thanks!
0 replies · 0 boosts · 73 views · May ’25
How to access the Scene instance of a visionOS app using immersive space in a unit test?
I have a visionOS app using an immersive space with RealityView. The app adds RealityKit entities to the app's Scene instance and uses raycasting to find CollisionCastHits. I now want to write a unit test that checks whether the app finds the right hits. To do so, I have to access the Scene instance to add entities and to check whether they are hit by scene.raycast. But how can I access the Scene instance? I can access it, for example, after creating the RealityView via its content parameter, or via @Environment(\.realityKitScene), but neither seems possible in a unit test. I tried the following test function:

@MainActor
@Test func test() async throws {
    var scene: RealityKit.Scene?
    await withCheckedContinuation { continuation in
        _ = RealityView(make: { content in
            print("make")
            let entity = Entity()
            content.add(entity)
            scene = entity.scene
            continuation.resume()
        })
    }
    #expect(scene != nil)
}

But this logs:

◇ Test test() started.
SWIFT TASK CONTINUATION MISUSE: test() leaked its continuation!

The reason is apparently that the make closure of RealityView is only called when SwiftUI invokes it within the body of a SwiftUI view. So, is it possible at all to access the app's Scene in a unit test?
0 replies · 0 boosts · 386 views · Dec ’24