Discuss Spatial Computing on Apple Platforms.

Post · Replies · Boosts · Views · Activity

Object tracking on Vision Pro using Vision
I'm wondering if it's possible to implement object tracking on Vision Pro using Apple's Vision framework. I see that the Vision documentation offers a variety of computer vision classes tagged "visionOS", but all the example code in the documentation is only for iOS, iPadOS, or macOS. Can those classes also be used for developing Vision Pro apps? If so, how do they get a data feed from the camera of the Vision Pro?
1 reply · 0 boosts · 211 views · May ’24
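A note on the question above: the Vision framework classes tagged "visionOS" do run on Apple Vision Pro; the harder part is the input, since third-party visionOS apps generally do not get a live passthrough camera feed the way an iOS app gets an AVCaptureSession feed. A minimal sketch, assuming the image comes from something the app already has access to (a file, a photo picker, a download); the function name is illustrative only.

import Vision
import CoreGraphics

/// Runs a rectangle-detection request on an image the app already has.
/// Sketch only: `image` is assumed to come from a picker, a file, or a
/// download, not from the Vision Pro passthrough camera.
func detectRectangles(in image: CGImage) throws -> [VNRectangleObservation] {
    let request = VNDetectRectanglesRequest()
    request.maximumObservations = 5

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}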
Apple Vision Pro no longer a run destination
I am developing an iPhone app, but I've been targeting the AVP as well. In fact, since I got the AVP, I've mainly been building and running my app on it. This morning, I upgraded to Xcode 15.4 (15F31d). Ever since, I have not been able to see my AVP as a run destination. It does show up in the device list, although there are no provisioning files on it for some reason, but I can't target it for building. I've tried unpairing and turning developer mode off and on. Has anyone else seen this problem after upgrading Xcode? Any help is appreciated.
1 reply · 1 boost · 333 views · May ’24
Bluetooth keyboard events in fully immersive Vision Pro app?
I'm writing a Vision Pro app that's fully immersive and rendered using Metal. Occasionally, some users of this app would benefit from being able to use a physical keyboard (or another accessory like a game controller). It seems very straightforward to capture and handle spatial gesture events, but I cannot find an interface that allows the detection, capture, or handling of keyboard events in any of the objects associated with fully immersive Metal rendering: CompositorServices, LayerRenderer, and its associated .frame, .drawable, and .drawable.view don't seem to have any accessory awareness. Can you help me handle a keyboard event?
2 replies · 0 boosts · 419 views · May ’24
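One avenue worth trying for the keyboard question above is the GameController framework, which exposes connected Bluetooth keyboards (and game controllers) independently of any SwiftUI or UIKit view hierarchy. This is a hedged sketch; whether these events keep arriving while a fully immersive CompositorServices scene is frontmost is an assumption to verify.

import GameController

/// Observes keyboard connection and key presses via the GameController framework.
final class KeyboardObserver {
    init() {
        NotificationCenter.default.addObserver(
            forName: .GCKeyboardDidConnect, object: nil, queue: .main
        ) { [weak self] notification in
            guard let keyboard = notification.object as? GCKeyboard else { return }
            self?.attach(to: keyboard)
        }
        // A keyboard may already be connected at launch.
        if let keyboard = GCKeyboard.coalesced {
            attach(to: keyboard)
        }
    }

    private func attach(to keyboard: GCKeyboard) {
        keyboard.keyboardInput?.keyChangedHandler = { _, _, keyCode, pressed in
            // Forward key state into the renderer's own input system here.
            print("key \(keyCode) \(pressed ? "down" : "up")")
        }
    }
}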
Fully immersive content using Metal is not getting the correct gesture locations
We followed this documentation https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal to display a fully immersive map using our Metal rendering engine, which worked great. But this part of the article, https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal#4193614, describes how to use the onSpatialEvent callback to receive gesture events. We are receiving the gesture events, but the location property of the event (https://developer.apple.com/documentation/swiftui/spatialeventcollection/event/location) always comes back as (x: 0, y: 0), which is not helpful. We are unable to get a single valid location for any gesture, and therefore we are unable to hook these gestures up. We tried this both in the simulator and on a Vision Pro device.
2 replies · 0 boosts · 228 views · May ’24
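A note on the report above: in a fully immersive CompositorServices scene the 2D location may simply not be meaningful without a hosting window, so the 3D fields on each event are usually what a Metal engine needs to hit-test its own scene. Below is a sketch of reading them from the same onSpatialEvent callback the post already uses; it assumes selectionRay and inputDevicePose are populated on device, which is worth verifying given the behavior described.

import SwiftUI

/// Reads the 3D fields of spatial events delivered to a fully immersive scene.
func handleSpatialEvents(_ events: SpatialEventCollection) {
    for event in events {
        switch event.phase {
        case .active:
            if let ray = event.selectionRay {
                // Origin and direction are in the immersive scene's space;
                // intersect this ray against the engine's own geometry.
                print("ray origin \(ray.origin), direction \(ray.direction)")
            }
            if let pose = event.inputDevicePose {
                print("input device pose \(pose.pose3D)")
            }
        case .ended, .cancelled:
            print("event \(event.id) finished")
        default:
            break
        }
    }
}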
visionOS - How to detect when the Digital Crown is pressed in an immersive space?
I have a bug I'm trying to resolve for an app review through the store. The basic flow is this: the user presses a button and enters a fully immersive space; while in the fully immersive space, the user presses the Digital Crown button to exit fully immersive mode and return to the Shared Space (note: this is not rotating the Digital Crown to control the immersion level). At this point I need an event, onChange, or similar to know whether the user is in immersive mode, so I can reset a flag I've been manually setting to track whether or not the user is currently viewing an immersive space. I have an onChange watching scenePhase changes and printing the old/new values to the console, but it is never triggered. This seems like it might be an edge case, but I'm curious whether there's another way to detect whether or not a user is in an immersive scene.
1 reply · 0 boosts · 303 views · 4w
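There is no public "Digital Crown was pressed" event for the case above, but since pressing the crown dismisses the ImmersiveSpace, the immersive content's own appear/disappear lifecycle can drive the flag the post describes. A minimal sketch with placeholder names; the tracker would be injected with .environment(_:) at the App level.

import SwiftUI
import Observation

@Observable
final class ImmersionTracker {
    // Hypothetical flag mirroring the one described in the post.
    var isInImmersiveSpace = false
}

struct ImmersiveContentView: View {
    @Environment(ImmersionTracker.self) private var tracker

    var body: some View {
        // Stand-in for the real immersive content (RealityView, Metal, etc.).
        Color.clear
            .onAppear { tracker.isInImmersiveSpace = true }
            .onDisappear {
                // Fires when the ImmersiveSpace is dismissed, including when
                // the user presses the Digital Crown to leave full immersion.
                tracker.isInImmersiveSpace = false
            }
    }
}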
Align a virtual copy of a real object with the real one
Hi, we are currently trying to implement a very simple test application using Vision Pro. We display a virtual copy of an object (based on CAD data) and then try to align the real object with the virtual one. It seems to be impossible! You can align them to a certain degree, but if you walk around the object to check the alignment, reality appears to warp and wobble by almost 2 cm. Is there any way to fix this?
3 replies · 0 boosts · 240 views · 3w
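On the alignment question above: the residual wobble is largely a device-tracking limitation, but pinning the aligned model to an ARKit WorldAnchor at least lets the system keep correcting the pose against its world map instead of leaving the model fixed in the app's own coordinate space. A hedged sketch, assuming a RealityKit entity already placed at the manually aligned pose; the class and method names are placeholders, and this reduces rather than eliminates drift.

import ARKit
import RealityKit

@MainActor
final class AnchorPinner {
    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    /// Pins an already-aligned entity to a world anchor and keeps it in sync.
    func pin(_ entity: Entity) async throws {
        try await session.run([worldTracking])

        // Create a world anchor at the entity's current, manually aligned pose.
        let anchor = WorldAnchor(originFromAnchorTransform: entity.transformMatrix(relativeTo: nil))
        try await worldTracking.addAnchor(anchor)

        // Keep the entity glued to the anchor as the system refines it.
        // (This loop runs for the lifetime of the tracking session.)
        for await update in worldTracking.anchorUpdates where update.anchor.id == anchor.id {
            entity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
        }
    }
}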
Entering full immersion without grab bar and close icon
I'd like to enter a fully immersive scene without the grab bar and close icon. The full-immersion app that comes with Xcode doesn't exit the immersion state when the "x" is hit, but the grab bar and close icon do disappear - if only I could do that programmatically! I've tried conditionally removing the View that launches the ImmersiveSpace, but the WindowGroup seems to be the thing that puts up the UI I'm trying to hide...
WindowGroup {
    if gameState.immersiveSpaceOpened {
        ContentView()
            .environmentObject(gameState)
    }
}
1 reply · 0 boosts · 241 views · 3w
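For the question above, the grab bar and close icon belong to the window itself, so emptying the WindowGroup's content doesn't remove them; a common pattern is to dismiss the launching window once the ImmersiveSpace has opened, via the environment actions. A minimal sketch; the scene identifiers are placeholders and assume a matching WindowGroup(id: "Launcher") and ImmersiveSpace(id: "Immersive") in the App body.

import SwiftUI

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Enter full immersion") {
            Task {
                // Open the immersive scene first, then close this window so
                // its grab bar and close icon are no longer on screen.
                switch await openImmersiveSpace(id: "Immersive") {
                case .opened:
                    dismissWindow(id: "Launcher")
                default:
                    break
                }
            }
        }
    }
}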
DestinationVideo -- MV-HEVC Files
In the code example provided, there is a Bool in the Video object to set a video as 3D:
/// A Boolean value that indicates whether the video contains 3D content.
let is3D: Bool
I have a hosted spatial video that I know works correctly in the AVP player. When I point the Videos.json file to this URL and set is3D=true, my 3D video doesn't show up and I get the following error:
iPVC/1-0 Received playback error: [Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x30227c510 {Error Domain=CoreMediaErrorDomain Code=-12939 "byte range length mismatch - should be length 2 is length 2434" UserInfo={NSDescription=byte range length mismatch - should be length 2 is length 2434, NSURL=https: <omitted for post> }}}]
Can anyone tell me what might be going on? The error is telling me my server is not configured correctly. For context, I'm using Google Drive to deliver dynamic images/videos using:
https://drive.google.com/uc?export=download&id= <file ID>
The above works great for my images and 2D videos. Is there something I need to do specifically when delivering MV-HEVC videos?
1 reply · 0 boosts · 293 views · 3w
Restoring visionOS windows to spatial location upon returning from immersive scene
If I have windows that occupy the Shared Space and are located in various spatial locations for the user, say various rooms in a home, how do I have those windows reappear/open in the same spatial location when the user returns to the Shared Space after being in a fully immersive space? I assume it has to do with somehow setting the state of the windows before heading into the immersive view, but I'm not sure how to begin thinking about this or what type of code I would need. Any help with this question would be appreciated.
2 replies · 0 boosts · 238 views · 2w
Inputs Functionality in visionOS 2.0?
Inputs: "Updates to inputs on Apple Vision Pro let you decide if you want the user's hands to appear in front of or behind the digital content." I'm trying to understand why this is being introduced. Why would one corrupt the spatial experience by forcing the hands to appear in front of or behind digital content? Won't this be confusing to users? It should be a natural mixed-reality experience where occlusion occurs as needed: if your physical hand is in front of a virtual object, it remains visible, and likewise, if you move it behind, it disappears (not a semi-transparent view of your hand through the model).
1 reply · 0 boosts · 129 views · 2w
Multi-scene/window shareplay on visionOS
Hi all, I've been working with visionOS for a bit and am trying to develop a feature that allows users to SharePlay and interact with a 3D model pulled from the cloud (iCloud in this case, but I may use a regular backend service in the future). Ideally, I want to be able to click a custom button in a regular window that starts the group activity/SharePlay with another person in the FaceTime call, opens the volumetric window with the model, and can switch to an immersive space freely. TLDR/questions at the very end for reference.
I was able to get this working when using only a single window group (i.e., a volumetric window group scene and an immersive space scene). However, I am running into trouble getting SharePlay to correctly grab the desired scene (or any scene at all) when I have multiple window group scenes defined. I have been referencing the following documentation in my attempts to implement this:
https://developer.apple.com/documentation/groupactivities/sceneassociationbehavior
https://developer.apple.com/documentation/groupactivities/adding-spatial-persona-support-to-an-activity
https://developer.apple.com/documentation/groupactivities/defining-your-apps-shareplay-activities
https://developer.apple.com/documentation/groupactivities/joining-and-managing-a-shared-activity
No luck so far, however. Here is a quick breakdown of what I've done so far to attempt implementation:
1. Define a group activity that contains static var activityIdentifier: String and var metadata: GroupActivityMetadata, and conforms to GroupActivity.
2. Provide a startShareplay() method that instantiates the above group activity and switch-awaits on activity.prepareForActivation() to activate the activity if case .activationPreferred. I have also provided a separate group activity registration method to start SharePlay via AirDrop, as mentioned in the Building spatial SharePlay experiences developer video (timestamped), which does expose a group activity to the share context menu/ornament but does not indicate being shared afterwards.
3. On app start, trigger a method to configure group sessions and provide listeners for sessions (including subscribers for active participants, session state, messages of the corresponding state type - ModelState.self in my case - journal attachments for potentially providing models that the other user may not have, since we are getting models from the cloud/backend, local participant states, etc.). At the very end, call groupSession.join().
4. Add external activation handlers to the corresponding scenes in the scene declaration (as per this documentation on SceneAssociationBehavior), using the handlesExternalEvents(matching:) scene modifier to open the scene when SharePlay starts. I have also attempted the handlesExternalEvents(preferring:allowing:) view modifier on views, with no luck. Both are being used with the corresponding activityIdentifier from the group activity, and I've also tried passing a specific identifier while using the .content(_) sceneAssociationBehavior, but no luck there either.
I have noted that in this answer regarding SharePlay on visionOS, the VP engineer says that when the app receives the session, it should set up any necessary UI and then join the session. But I would expect that, even if the UI isn't being set up via the other person's session, the person who started SharePlay should still see the sharing ornament turn green on the corresponding window, which doesn't seem to occur. In fact, none of the windows that are open even get the green sharing ornament (they just keep showing "Not shared").
TLDR: I added external event handling and standard group activity setup to a multi-window app. When using SharePlay, no windows are indicated as being shared. My questions are:
1. Am I incorrect in my usage of the scene/view modifiers for handlesExternalEvents to open and associate a specific WindowGroup/scene with the group activity?
2. Regarding opening a specific window when the group activity is activated, how do we pass any values the window group requires, e.g. if it's something like WindowGroup(id: ..., for: URL.self) { url in ... }?
3. Do I still need to provide UI setup in the session listener (for await session in MyActivity.sessions())? Is this just a simple openWindow?
4. Past the SharePlay initialization above, what are the best practices for sharing 3D models that not all users in the session might have? Is it adding them as an attachment to GroupSessionJournal? Or should I pass the remote URL to everyone so they can download the model locally instead?
Thanks for any help, and apologies for the long post. Please let me know if there's any additional information I can provide to help resolve this.
3 replies · 0 boosts · 262 views · 2w
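On question 1 in the post above, here is a hedged sketch of how an activity's sceneAssociationBehavior and the scene-level handlesExternalEvents(matching:) modifier are meant to pair up, per the documentation the post links. The identifiers, view names, and app structure are placeholders, and this has not been verified against the exact multi-scene setup described.

import SwiftUI
import GroupActivities

struct ViewModelActivity: GroupActivity {
    static let activityIdentifier = "com.example.view-model"   // placeholder

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "View 3D Model"
        metadata.type = .generic
        // Ask the system to associate the session with the scene whose
        // handlesExternalEvents set contains this identifier.
        metadata.sceneAssociationBehavior = .content(Self.activityIdentifier)
        return metadata
    }
}

@main
struct ModelViewerApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            MainView()
        }

        WindowGroup(id: "model-volume") {
            ModelVolumeView()
        }
        .windowStyle(.volumetric)
        // Marks this scene as the one to open and associate when the
        // activity with the matching identifier becomes active.
        .handlesExternalEvents(matching: [ViewModelActivity.activityIdentifier])

        ImmersiveSpace(id: "immersive") {
            ImmersiveModelView()
        }
    }
}

// Placeholder views standing in for the app's real content.
struct MainView: View { var body: some View { Text("Main") } }
struct ModelVolumeView: View { var body: some View { Text("Model") } }
struct ImmersiveModelView: View { var body: some View { Text("Immersive") } }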
How to track gaze?
My visionOS app was rejected for not supporting gaze-and-pinch features, but as far as I can tell there is no way to track the user's eye movements in visionOS. (The intended means of interaction for my app is an extended gamepad.) I am able to detect a pinch but can't find any way to detect where the user is looking. ARFaceAnchor and lookAtPoint appear not to be available in visionOS. So how do we go about doing this? For context, I am porting a Metal game from iOS to visionOS.
4 replies · 0 boosts · 230 views · 2w
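Regarding the post above: visionOS never exposes raw eye-tracking data to apps. The system resolves gaze privately and reports it only at interaction time, for example where the user was looking when they pinched; in SwiftUI that arrives through SpatialEventGesture, and a Metal/CompositorServices app would use the layer's spatial event callback instead. A minimal sketch of the SwiftUI side, under the assumption that handling these pinch locations is what the review feedback is asking for.

import SwiftUI

/// Receives system-resolved gaze-and-pinch input: each event begins where
/// the user was looking when they pinched. Raw eye movement is never exposed.
struct PinchTargetView: View {
    @State private var lastLocation: CGPoint = .zero

    var body: some View {
        Color.clear
            .contentShape(Rectangle())
            .gesture(
                SpatialEventGesture()
                    .onChanged { events in
                        for event in events where event.phase == .active {
                            // The location is in this view's coordinate space.
                            lastLocation = event.location
                        }
                    }
            )
            .overlay(
                Text("Last pinch at \(Int(lastLocation.x)), \(Int(lastLocation.y))")
            )
    }
}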