Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Post · Replies · Boosts · Views · Activity

USD/Z schemas and Declarations
How is it possible to add an AR schema to a USD file using the Python tools (or any other way)? Following the instructions at https://developer.apple.com/documentation/arkit/arkit_in_ios/usdz_schemas_for_ar/actions_and_triggers/preliminary_behavior, the steps are to have the following declaration:

    class Preliminary_Behavior "Preliminary_Behavior" (
        inherits = </Typed>
    )

and a USD file:

    #usda 1.0

    def Preliminary_Behavior "TapAndFlip"
    {
        rel triggers = [ <Tap> ]
        rel actions = [ <Entry> ]

        def Preliminary_Trigger "Tap" (
            inherits = </TapGestureTrigger>
        )
        {
            rel affectedObjects = [ </Cube> ]
        }

        def Preliminary_Action "Entry" (
            inherits = </GroupAction>
        )
        {
            uniform token type = "parallel"
            rel actions = [ <Flip> ]
        }

        def Preliminary_Action "Flip" (
            inherits = </EmphasizeAction>
        )
        {
            rel affectedObjects = [ </Cube> ]
            uniform token motionType = "flip"
        }
    }

    def Cube "Cube"
    {
    }

How do these parts fit together? I saved the usda file, but it didn't have any interactions. Obviously I have to add that declaration, but how do I do this? Is this all done in an AR Xcode project, or can I do it with the Python tools? (I would prefer something very lightweight.)
0
0
657
May ’24
Build errors for iOS for my visionOS app
I'm taking my iOS/iPadOS app and converting it so it runs on visionOS. I'm trying to compile and build my app for both visionOS and iOS. When I try to build for an iPhone or iPad simulator, I get the following error:

    Building for 'iphonesimulator', but realitytool only supports [xros, xrsimulator]

I'm thinking I might need an #if conditional-compilation statement so the iOS build doesn't try to compile visionOS-only lines of code, but for this particular error I can't work out which file or code needs the conditional compilation. Does anyone know how to get rid of this error?
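For reference, a minimal sketch of the kind of #if conditional compilation being described, using the RealityKitContent package and view names from the default visionOS template as placeholders (they are assumptions, not the poster's actual project). The realitytool error typically comes from Reality Composer Pro content being compiled for an iOS destination, so the sketch keeps that import and usage visionOS-only:

    import SwiftUI
    #if os(visionOS)
    import RealityKit
    import RealityKitContent   // Reality Composer Pro package; kept out of the iOS build
    #endif

    struct RootView: View {
        var body: some View {
    #if os(visionOS)
            // visionOS build: show content from the Reality Composer Pro package.
            RealityView { content in
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    content.add(scene)
                }
            }
    #else
            // iOS / iPadOS build: plain SwiftUI fallback with no RealityKitContent dependency.
            Text("This experience is available on Apple Vision Pro.")
    #endif
        }
    }

Conditional compilation alone may not be enough here: if the Reality Composer Pro package is listed as a dependency of the iOS target, it may also need to be removed from that target's build phases, since realitytool will still try to process it regardless of what the Swift code references.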
2
0
1.2k
May ’24
SceneReconstructionProvider stops providing updates
I have found that my Vision Pro device can get into a state where my app is no longer receiving fresh SceneReconstructionProvider updates. It reports that the SceneReconstructionProvider goes into the DataProviderState.running state, and .anchorUpdates will report a set of stale mesh anchors when first fired up, but does not produce any further updates. Once the device gets into this state, I can force quit the app, and even uninstall and re-install it, and I get the same few mesh updates, but no fresh updates until I restart the device. Sample async function below. I can confirm that print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates") never gets executed, so it stays inside the sceneReconstruction.anchorUpdates loop.

    let session = ARKitSession()
    var handTracking = HandTrackingProvider()
    let sceneReconstruction = SceneReconstructionProvider()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    let worldTracking = WorldTrackingProvider()

    ...

    func start() async {
        do {
            await requestAuth()
            if dataProvidersAreSupported && isReadyToRun && !isRunning {
                // print("ARKitSession starting.")
                try await session.run([sceneReconstruction, handTracking, planeDetection, worldTracking])
                startCount += 1
                // TODO: Fail gracefully if we have to attempt start too many (# TBD) times
            } else {
                print("dataProvidersAreSupported: \(dataProvidersAreSupported). isReadyToRun: \(isRunning)")
                print("handTracking.state: \(handTracking.state), sceneReconstruction.state: \(sceneReconstruction.state) worldTracking.state: \(worldTracking.state), planeDetection.state; \(planeDetection.state)")
            }
        } catch {
            print("ARKitSession error:", error)
        }
    }

    ...

    func processReconstructionUpdates() async {
        while (true) {
            for await update in sceneReconstruction.anchorUpdates {
                let meshAnchor = update.anchor
                guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
                switch update.event {
                case .added:
                    let entity = try! await generateModelEntity(geometry: meshAnchor.geometry)
                    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                    entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
                    entity.components.set(InputTargetComponent())
                    entity.name = "mesh"
                    entity.physicsBody = PhysicsBodyComponent(mode: .static)
                    let sortComponent = ModelSortGroupComponent(group: modelSortGroup, order: 1)
                    entity.components.set(sortComponent)
                    entity.components.set(OpacityComponent(opacity: 0.5))
                    meshEntities[meshAnchor.id] = entity
                    meshesParent.addChild(entity, preservingWorldTransform: true)
                case .updated:
                    guard let entity = meshEntities[meshAnchor.id],
                          let updatedEntity = try? await generateModelEntity(geometry: meshAnchor.geometry) else { continue }
                    entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
                    entity.collision?.shapes = [shape]
                    if let newMesh = updatedEntity.model?.mesh {
                        entity.model?.mesh = newMesh
                    }
                case .removed:
                    meshEntities[meshAnchor.id]?.removeFromParent()
                    meshEntities.removeValue(forKey: meshAnchor.id)
                }
                print("We now have '\(meshEntities.count)' mesh entities")
            }
            print("WE FELL OFF THE END OF sceneReconstruction.anchorUpdates")
            try? await Task.sleep(nanoseconds: 1_000_000)
        }
    }
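For comparison while debugging, a minimal sketch of watching ARKitSession events alongside the anchor-update loop, in case the provider is being stopped or paused by the system; session is the same ARKitSession declared above, and the logging choices are assumptions rather than a confirmed fix:

    func monitorSessionEvents() async {
        for await event in session.events {
            switch event {
            case .dataProviderStateChanged(let providers, let newState, let error):
                // Logs when sceneReconstruction (or any provider) changes state, with the error if it stopped.
                print("Providers \(providers) changed to \(newState), error: \(String(describing: error))")
            case .authorizationChanged(let type, let status):
                print("Authorization for \(type) changed to \(status)")
            default:
                print("Unhandled ARKitSession event: \(event)")
            }
        }
    }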
5
0
722
May ’24
Object tracking on Vision Pro using Vision
I'm wondering if it's possible to implement object tracking on Vision Pro using Apple's Vision framework. I see that the Vision documentation offers a variety of computer-vision classes tagged "visionOS", but all the example code in the documentation is only for iOS, iPadOS, or macOS. So can those classes also be used for developing Vision Pro apps? If so, how do they get a data feed from the camera of the Vision Pro?
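For what it's worth, a minimal sketch of running a Vision request on visionOS; the request type and the cgImage input are assumptions. The key caveat is that the passthrough camera feed is not exposed to ordinary visionOS apps, so the image has to come from something the app already has (a file, a network stream, a rendered texture):

    import Vision
    import CoreGraphics

    func detectRectangles(in cgImage: CGImage) throws -> [VNRectangleObservation] {
        // Vision requests work the same way as on iOS/macOS once you have an image.
        let request = VNDetectRectanglesRequest()
        request.maximumObservations = 5

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])
        return request.results ?? []
    }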
1
0
523
May ’24
Apple Vision Pro no longer a run destination
I am developing an iPhone app, but I've been targeting the AVP as well. In fact, since I got the AVP, I've mainly been building and running my app on it. This morning, I had an upgrade to Xcode 15.4 (15F31d). Ever since, I have not been able to see my AVP as a run destination. It does show up in the device list, although there are no provisioning profiles on it for some reason. But I can't target it for building. I've tried unpairing and turning developer mode off and on. Has anyone else seen this problem after upgrading Xcode? Any help is appreciated.
1
1
1.1k
May ’24
Bluetooth keyboard events in fully immersive Vision Pro app?
I'm writing a Vision Pro app that's fully immersive and rendered using Metal. Occasionally, some users of this app would benefit from being able to use a physical keyboard (or another accessory like a game controller). It seems very straightforward to capture and handle spatial gesture events, but I cannot find an interface that allows the detection, capture, or handling of keyboard events in any of the objects associated with fully immersive Metal rendering: CompositorServices, LayerRenderer, and its associated .frame, .drawable, and .drawable.view don't seem to have any accessory awareness. Can you help me handle a keyboard event?
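As one avenue to try, a minimal sketch of listening for a hardware keyboard through the GameController framework; whether these handlers fire while a CompositorServices immersive space has focus is exactly the open question, so this is a test harness rather than a confirmed answer:

    import GameController

    func startKeyboardObservation() {
        NotificationCenter.default.addObserver(
            forName: .GCKeyboardDidConnect, object: nil, queue: .main
        ) { notification in
            guard let keyboard = notification.object as? GCKeyboard,
                  let input = keyboard.keyboardInput else { return }

            // Called for every key-down / key-up on the connected keyboard.
            input.keyChangedHandler = { _, _, keyCode, pressed in
                print("Key \(keyCode) \(pressed ? "down" : "up")")
            }
        }
    }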
2
0
1k
May ’24
Fully immersive content using Metal is not getting the correct gesture locations
We followed this documentation, https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal, to display a fully immersive map using our Metal rendering engine, which worked great. But this part of the article, https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal#4193614, mentions how to use the onSpatialEvent callback to receive gesture events. We are receiving the gesture events, but the location property of the event (https://developer.apple.com/documentation/swiftui/spatialeventcollection/event/location) always comes back as (x: 0, y: 0), which is not helpful. We are unable to get a single valid location for any gesture, and therefore we are unable to hook up these gestures. We tried this in the simulator and on a Vision Pro device.
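For debugging, a minimal sketch that dumps the 3D fields of each spatial event, assuming the onSpatialEvent callback from the Compositor Services article; the thought is that in a fully immersive space the 2D location has no window to be measured against, so location3D, selectionRay, and inputDevicePose are the fields worth inspecting instead:

    layerRenderer.onSpatialEvent = { events in
        for event in events {
            print("kind: \(event.kind), phase: \(event.phase)")
            print("location (2D): \(event.location)")          // the value reported as (0, 0)
            print("location3D: \(event.location3D)")
            if let ray = event.selectionRay {
                print("selection ray origin: \(ray.origin), direction: \(ray.direction)")
            }
            if let pose = event.inputDevicePose {
                print("input device pose: \(pose.pose3D)")
            }
        }
    }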
2
0
603
May ’24
ARConfidenceLevel values lower on iPhone 15 Pro and M4 iPad Pro
[tl;dr version: all the point cloud capture apps rushed out an update when the iPhone 15 Pro was released because they were capturing far fewer points on that device. The same is observed with the new M4 iPad Pro. What was the fix for compatibility with these new devices?]

I am running an ARKit replay file through "Displaying a Point Cloud Using Scene Depth" from WWDC20 and recording the ARConfidenceLevel values of the incoming ARDepthData. I am doing this side by side on an iPhone 12 Pro and an iPhone 15 Pro. The ARKit replay file was originally recorded on the 12 Pro. We get a certain percentage of points where the ARConfidenceLevel is not ".high" when running on the iPhone 12 Pro. It varies a lot by frame but averages about 5% and is the same on all devices prior to the iPhone 15 Pro. The same test using the same iPhone 12 Pro replay file on an iPhone 15 Pro gives about twice as many points where the ARConfidenceLevel is not ".high" (about 10% on average on this particular replay file). This corresponds with real-world usage of our app on the iPhone 15 Pro, where far fewer points are captured on that device compared with all previous models. (Our app filters out points where the ARConfidenceLevel is not ".high".) Apple's interpretation of the same LiDAR data is clearly different on the iPhone 15 Pro and M4 iPad Pro when compared to earlier devices. Can you please advise how to maintain equivalent behaviour on the new devices?

Steps to reproduce:
1. Run "Displaying a Point Cloud Using Scene Depth" from WWDC20 session 10611: Explore ARKit 4, following the instructions at https://developer.apple.com/documentation/arkit/arkit_in_ios/environmental_analysis/displaying_a_point_cloud_using_scene_depth
2. Use Xcode's setting to replay data to ARKit while running on an iPhone 12 Pro (using any replay file recorded on that device). An iPhone 13 or 14 Pro will work just as well.
3. Record what % of points have an ARConfidenceLevel that is not .high.
4. Now do it again, running the same replay file on an iPhone 15 Pro.
5. Note that the % of points that have an ARConfidenceLevel that is not .high is much higher.
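For context, a minimal sketch of the measurement being described: the fraction of depth samples in one ARFrame whose confidence is below .high. It assumes sceneDepth (or smoothedSceneDepth) is enabled in the session configuration; the confidence map stores one UInt8 per sample whose value matches ARConfidenceLevel.rawValue:

    import ARKit

    func fractionNotHighConfidence(in frame: ARFrame) -> Double? {
        guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return nil }

        CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return nil }
        let width = CVPixelBufferGetWidth(confidenceMap)
        let height = CVPixelBufferGetHeight(confidenceMap)
        let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)

        var notHigh = 0
        for y in 0..<height {
            let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
            for x in 0..<width where row[x] < UInt8(ARConfidenceLevel.high.rawValue) {
                notHigh += 1
            }
        }
        return Double(notHigh) / Double(width * height)
    }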
0
1
466
May ’24
visionOS - How to detect when the Digital Crown is PRESSED in an immersive space?
Have a bug I'm trying to resolve in an app review through the store. The basic flow is this:

1. The user presses a button and enters a fully immersive space.
2. While in the fully immersive space, the user presses the Digital Crown button to exit fully immersive mode and return to the shared space. (Note: this is not rotating the Digital Crown to control immersion level.)

At this point I need an event or onChange (or similar) to know when the user is in immersive mode, so I can reset a flag I've been manually setting to track whether or not the user is currently viewing an immersive space. I have an onChange watching scenePhase changes and printing the old/new values to the console, but this is never triggered. Seems like it might be an edge case, but I'm curious if there's another way to detect whether or not a user is in an immersive scene.
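As one approach, a minimal sketch of resetting the flag from the ImmersiveSpace scene itself, on the theory that pressing the crown dismisses the immersive space and therefore tears down its root view; the "Immersive" id, ImmersiveView, and the gameState flag name are assumptions standing in for the poster's own types:

    ImmersiveSpace(id: "Immersive") {
        ImmersiveView()
            .environmentObject(gameState)
            .onAppear { gameState.immersiveSpaceOpened = true }
            .onDisappear { gameState.immersiveSpaceOpened = false }   // also fires on a crown press
    }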
1
0
944
May ’24
Align a virtual copy of a real object with the real one
Hi, we are currently trying to implement a very simple test application using Vision Pro. We display a virtual copy of an object (based on CAD data) and then try to align the real object with the virtual one. It seems to be impossible! You can align them to a certain degree, but if you walk around the object to check the alignment, it seems reality is warping and wobbling by almost 2 cm. Is there any way to fix this?
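For reference, a minimal sketch of pinning the virtual copy to a WorldAnchor, assuming a running WorldTrackingProvider and an already-placed modelEntity; anchoring ties the entity to a tracked world point rather than to the app's origin, which is one thing to try when the overlay appears to drift while walking around (it will not remove passthrough distortion by itself):

    import ARKit
    import RealityKit

    func pin(_ modelEntity: Entity, using worldTracking: WorldTrackingProvider) async throws {
        // Create a world anchor at the entity's current world-space pose and register it.
        let anchor = WorldAnchor(originFromAnchorTransform: modelEntity.transformMatrix(relativeTo: nil))
        try await worldTracking.addAnchor(anchor)

        // Keep the entity glued to the anchor as tracking refines its pose.
        for await update in worldTracking.anchorUpdates where update.anchor.id == anchor.id {
            modelEntity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
        }
    }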
1
0
534
Jun ’24
Entering full immersion without grab bar and close icon
I'd like to enter a fully immersive scene without the grab bar and close icon. The full-immersion app template that comes with Xcode doesn't exit the immersive state when the "x" is hit, but all the grab-bar UI disappears; if only I could do that programmatically! I've tried conditionally removing the View that launches the ImmersiveSpace, but the WindowGroup seems to be the thing that puts up the UI I'm trying to hide...

    WindowGroup {
        if gameState.immersiveSpaceOpened {
            ContentView()
                .environmentObject(gameState)
        }
    }
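One commonly suggested pattern, sketched minimally here with assumed window and space ids ("main", "Immersive"): open the immersive space from the launching window, then dismiss that window so its grab bar and close button disappear, and reopen it when the immersive space goes away:

    import SwiftUI

    struct LaunchView: View {
        @Environment(\.openImmersiveSpace) private var openImmersiveSpace
        @Environment(\.dismissWindow) private var dismissWindow

        var body: some View {
            Button("Enter") {
                Task {
                    // Only hide the window once the space has actually opened.
                    if case .opened = await openImmersiveSpace(id: "Immersive") {
                        dismissWindow(id: "main")
                    }
                }
            }
        }
    }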
1
0
640
Jun ’24
DestinationVideo -- MV-HEVC Files
In the code example provided, there is a Bool in the Video object to set a video as 3D:

    /// A Boolean value that indicates whether the video contains 3D content.
    let is3D: Bool

I have a hosted spatial video that I know works correctly in the AVP player. When I point the Videos.json file to this URL and set is3D = true, my 3D video doesn't show up and I get the following error:

    iPVC/1-0 Received playback error: [Error Domain=AVFoundationErrorDomain Code=-11850 "Operation Stopped" UserInfo={NSLocalizedFailureReason=The server is not correctly configured., NSLocalizedDescription=Operation Stopped, NSUnderlyingError=0x30227c510 {Error Domain=CoreMediaErrorDomain Code=-12939 "byte range length mismatch - should be length 2 is length 2434" UserInfo={NSDescription=byte range length mismatch - should be length 2 is length 2434, NSURL=https: <omitted for post> }}}]

Can anyone tell me what might be going on? The error is telling me my server is not configured correctly. For context, I'm using Google Drive to deliver dynamic images/videos using:

    https://drive.google.com/uc?export=download&id=<file ID>

And the above works great for my images and 2D videos. Is there something I need to do specifically when delivering MV-HEVC videos?
1
0
890
Jun ’24
Multi-platform app for visionOS and iOS: How to include 3D models for both?
I created an app for visionOS, using Reality Composer Pro. Now I want to turn this app into a multi-platform app that runs on iOS as well. RCP files are not supported on iOS, however. So I tried to use the "old" Reality Composer instead, but that doesn't seem to work either: Xcode 15 no longer includes it, and I read online that files created with Xcode 14's Reality Composer cannot be included in Xcode 15 projects. Also, Xcode 14 does not run on my M3 Mac with Sonoma. That's a bummer. What is the recommended way to include 3D content in apps that support visionOS AND iOS? (I also read that a solution might be using USDZ for both. But what would that workflow look like? Are there samples out there that support both platforms? Please note that I want to set up the anchors myself, in code. I just need the composing tool to create the 3D content that will be placed on these anchors.)
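For illustration, a minimal sketch of the USDZ-for-both-platforms route: ship a Model.usdz in the app bundle and load it with RealityKit's async Entity initializer, which is available on both iOS and visionOS; the "Model" name and the scale tweak are placeholders, and anchoring stays in the poster's own code as requested:

    import RealityKit

    func loadSharedModel() async throws -> Entity {
        // Loads Model.usdz from the main app bundle on iOS and visionOS alike.
        let entity = try await Entity(named: "Model")
        entity.scale = SIMD3<Float>(repeating: 0.01)   // arbitrary example adjustment
        return entity
    }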
1
0
888
Jun ’24