Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

RealityKit Entities Appearing to Lag in a Full or Progressive Style Immersive Space When Opened with Environment Turned On
PLATFORM AND VERSION: visionOS. Development environment: Xcode 16.2, macOS 15.2. Run-time configuration: visionOS 2.3 (on a real device, not the simulator).

Please can someone confirm I'm not crazy and that this issue is actually out of my control. I spent hours trying to fix my app and running profiles because I thought it was a performance issue in my app. I finally considered the chance that it was an issue with the API itself and made a sample app to isolate the problem, and the issue still exists there.

The issue: when a model entity moves around in a full space that was launched while the system environment immersion was turned up, the entity looks very choppy as it moves. If you take off the headset while still in the space and put it back on, the entity moves smoothly again, as it should. Alternatively, you can leave the space and turn the system environment immersion all the way down before launching the full space again; this also makes the entity move smoothly. If you launch the space with the mixed immersion style instead of full, the issue never arises. It only arises if you launch the space with either the full or the progressive style while the system immersion level is turned up.

STEPS TO REPRODUCE
https://github.com/nathan-707/ChoppyEntitySample
Open my test project; it's a small, modified visionOS project template that shows the issue clearly. Otherwise:
- Create an immersive space with either the full or progressive immersion style.
- Set up an entity in kinematic mode and apply a velocity to it so it passes over your head when the space appears.
- If you opened the space while the Apple Vision Pro's system environment was turned up, the entity will look choppy.
- If you take the headset off while in the space and put it back on, the motion becomes smooth again.
- Alternatively, if you open the space with the system immersion environment all the way down, you will not run into the issue.
Again, the issue also does not happen if the space is launched in the mixed style.
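For reference, a minimal sketch of the kind of setup described in the steps above (not taken from the linked sample project; names and values are illustrative): a kinematic sphere given an initial velocity inside a RealityView that the app presents in an immersive space opened with the .full or .progressive style.

import SwiftUI
import RealityKit

// Hypothetical view placed inside an ImmersiveSpace whose immersionStyle is .full or .progressive.
struct ChoppyEntityRepro: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.2),
                materials: [SimpleMaterial(color: .red, isMetallic: false)])
            sphere.position = [0, 1, -3]
            sphere.generateCollisionShapes(recursive: false)

            // Kinematic body plus an initial velocity so the entity passes over the user's head.
            sphere.components.set(PhysicsBodyComponent(mode: .kinematic))
            sphere.components.set(PhysicsMotionComponent(linearVelocity: [0, 0.6, 1.5]))

            content.add(sphere)
        }
    }
}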
Replies: 1 · Boosts: 0 · Views: 522 · Jan ’25
How to convert a RealityKit SpatialTapGesture value to an entity coordinate in iOS?
I have an app with a visionOS target, and I want to add an iOS target. Both are based on RealityKit. I want to use a SpatialTapGesture to get the tap coordinate local to the entity tapped. In visionOS this is easy:

SpatialTapGesture(coordinateSpace: .local)
    .targetedToAnyEntity()
    .onEnded { tap in
        let entity = tap.entity
        let localPoint3D = tap.convert(tap.location3D, from: .local, to: entity)
        // …
    }

However, according to the docs, the convert function seems to exist only in visionOS, not in iOS. So how can I do this conversion in iOS?

PS: This was already posted on StackOverflow without success. There, I tried to find a workaround, but I failed.
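One possible workaround on iOS, sketched under the assumption of an ARView-based setup with a UIKit tap recognizer rather than SwiftUI gestures (the handler type here is illustrative, and the tapped entity needs a CollisionComponent for the hit test to find it):

import RealityKit
import UIKit

final class TapCoordinateHandler {
    weak var arView: ARView?

    @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
        guard let arView else { return }
        let screenPoint = recognizer.location(in: arView)

        // Cast against collision shapes under the 2D tap location.
        if let hit = arView.hitTest(screenPoint).first {
            // hit.position is in world space; convert it into the tapped entity's local space.
            let localPoint = hit.entity.convert(position: hit.position, from: nil)
            print("Tap in entity-local coordinates: \(localPoint)")
        }
    }
}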
Replies: 8 · Boosts: 0 · Views: 641 · Oct ’24
Trouble initializing SharePlay and using GroupSessionJournal
I am having trouble initializing SharePlay. It works, but we have to leave the game (click the close button) and rejoin it, sometimes several times, for it to establish the connection. I am also having trouble sharing images over SharePlay with GroupSessionJournal: I am not able to get it to transfer any amount of data, or even get any indication on the other participants in the SharePlay that an image is being sent. We have looked at all the information we can find online and are not able to establish a connection. I am not sure if I am missing a step, or if I am incorrectly sending the data through the GroupSessionJournal.

Here are the steps I took to replicate the issue:
1. FaceTime another person who has the app.
2. Open the app and click the SharePlay button to SharePlay it with the other person.
3. Establish the SharePlay and make sure the board states are synchronized across participants. If they are not, click the close button and open the app again to rejoin the SharePlay. (This is one of the bugs I would like to fix. This is just a workaround we developed to establish the SharePlay; we would like it to work as soon as you click SharePlay and they join the session.)
4. Once the SharePlay has been established, change the image by clicking "change 1 image" and select a JPG image. The image that represents 1 should now be set; if you don't see the image, click any of the X's in the squares and it will change to the image.
5. The image should appear for the other participant in the SharePlay. (This does not happen and is what we have not been able to figure out how to get working.)

Here are the classes for the example project I created: Content View, Game Model Class, Activity Manager, Main Starter Class.
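For comparison, a minimal sketch of the GroupSessionJournal flow as described in the documentation (the ImageAttachment type and the function names are illustrative): the sender adds a Transferable item, and every participant, including the sender, iterates the journal's attachments sequence and loads the payload.

import Foundation
import CoreTransferable
import GroupActivities
import UniformTypeIdentifiers

// Illustrative payload type: wraps raw JPEG data so it can travel through the journal.
struct ImageAttachment: Transferable {
    let data: Data

    static var transferRepresentation: some TransferRepresentation {
        DataRepresentation(contentType: .jpeg) { attachment in
            attachment.data
        } importing: { data in
            ImageAttachment(data: data)
        }
    }
}

// Sender side: publish the picked image to all participants.
func publish(imageData: Data, to journal: GroupSessionJournal) async {
    do {
        _ = try await journal.add(ImageAttachment(data: imageData))
    } catch {
        print("Failed to add attachment: \(error)")
    }
}

// Every device (sender included) observes the journal and loads incoming attachments.
func observeImages(in journal: GroupSessionJournal) async {
    for await attachments in journal.attachments {
        for attachment in attachments {
            if let received = try? await attachment.load(ImageAttachment.self) {
                print("Received image of \(received.data.count) bytes")
                // Update the board state with the received image here.
            }
        }
    }
}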
Replies: 0 · Boosts: 0 · Views: 529 · Sep ’24
RealityView Limits on visionOS
I'm asking myself what the limits of RealityView are. For example, is it possible to place an entity at position (x = 800 m, y = 0, z = -900 m)? What happens if I walk from my (0, 0, 0) to this point, will I see the entity then? Does someone know what the limits are?
Replies: 1 · Boosts: 0 · Views: 299 · Oct ’24
DragGesture that pivots with the user in visionOS
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space. When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me. In the examples linked above, there are two versions of how they use drag.

handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag.

handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture.

I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity further or closer. It stays within a similar (though not exact) distance relative to me while I drag.

Is there a way to combine these concepts? Ideally, I would like to use a gesture that behaves the same way that visionOS windows do. When we drag windows, I can move them around relative to myself, pull them closer, push them further, all while avoiding the issues described above.

Example from handleFixedDrag:

mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset

    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}

Example from handlePivotDrag:

mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation), from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()

        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
Replies: 1 · Boosts: 0 · Views: 457 · Feb ’25
How to get the floor plane with Spatial Tracking Session and Anchor Entity
In the WWDC session titled "Deep dive into volumes and immersive spaces", the developers discussed adding a Spatial Tracking Session and an Anchor Entity to detect the floor. They then glossed over some important details. They added a spatial tap gesture to let the user place content relative to the floor anchor, but they left out a lot of information.

.gesture(
    SpatialTapGesture(coordinateSpace: .immersiveSpace)
        .targetedToAnyEntity()
        .onEnded { value in
            handleTapOnFloor(value: value)
        }
)

My understanding is that an entity has to have input and collision components for gestures like this to work. How can we add a collision to an AnchorEntity when we don't know its size or shape? I've been trying for days to understand what is happening here and I just don't get it. It is even more frustrating that the example project Apple released does not contain any of these features. I would like to be able to:
- Detect the floor plane
- Get the position/transform of the floor plane
- Add a collider to the floor plane
- Enable collisions and physics on the floor plane
- Enable gestures on the floor plane

It seems to me that the Anchor Entity is placed at an entirely arbitrary position. It has absolutely no relationship to the rectangle with the floor label that I can see in the Xcode visualization. It is just a point, not a plane or rect that I can use. I've tried manually calculating the collision shape after the anchor is detected, but nothing that I have tried works. I can't tap on the floor with gestures. I can't drop entities onto the floor. I can't seem to do ANYTHING at all with this floor anchor other than place an entity at a totally arbitrary location somewhere on the floor. Is there any way at all with Spatial Tracking Session and Anchor Entity to get the actual plane that was detected?

struct FloorExample: View {
    @State var trackingSession: SpatialTrackingSession = SpatialTrackingSession()
    @State var subject: Entity?
    @State var floor: AnchorEntity?

    var body: some View {
        RealityView { content, attachments in
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await session.run(configuration)
            self.trackingSession = session

            let floorAnchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: SIMD2(x: 0.1, y: 0.1)))
            floorAnchor.anchoring.physicsSimulation = .none
            floorAnchor.name = "FloorAnchorEntity"
            floorAnchor.components.set(InputTargetComponent())
            floorAnchor.components.set(CollisionComponent(shapes: .init()))
            content.add(floorAnchor)
            self.floor = floorAnchor

            // This is just here to let me see where visionOS decided to "place" the floor anchor.
            let floorPlaced = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .black, isMetallic: false)])
            floorAnchor.addChild(floorPlaced)

            if let scene = try? await Entity(named: "AnchorLabsFloor", in: realityKitContentBundle) {
                content.add(scene)

                if let subject = scene.findEntity(named: "StepSphereRed") {
                    self.subject = subject
                }

                // I can see when the anchor is added
                _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                    event.anchor.generateCollisionShapes(recursive: true) // this doesn't seem to work
                    print("**anchor changed** \(event)")
                    print("**anchor** \(event.anchor)")
                }

                // place the reset button near the user
                if let panel = attachments.entity(for: "Panel") {
                    panel.position = [0, 1, -0.5]
                    content.add(panel)
                }
            }
        } update: { content, attachments in
        } attachments: {
            Attachment(id: "Panel", {
                Button(action: {
                    print("**button pressed**")
                    if let subject = self.subject {
                        subject.position = [-0.5, 1.5, -1.5]
                        // Remove the physics body and assign a new one - hack to remove momentum
                        if let physics = subject.components[PhysicsBodyComponent.self] {
                            subject.components.remove(PhysicsBodyComponent.self)
                            subject.components.set(physics)
                        }
                    }
                }, label: {
                    Text("Reset Sphere")
                })
            })
        }
    }
}
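For what it's worth, a sketch of one alternative route using ARKit's PlaneDetectionProvider instead of an AnchorEntity (the function and entity names are illustrative, and the extent is applied without its anchorFromExtentTransform offset for brevity); it exposes the detected plane's transform and extent directly, which makes it possible to size a collider:

import ARKit
import RealityKit

// Runs plane detection and keeps a "Floor" entity under `root` aligned with the detected
// floor plane, with a thin collision box sized to the plane's extent.
func trackFloorPlane(into root: Entity) async throws {
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal])
    try await session.run([planeDetection])

    for await update in planeDetection.anchorUpdates {
        let anchor = update.anchor
        guard anchor.classification == .floor else { continue }

        let floorEntity: Entity
        if let existing = root.findEntity(named: "Floor") {
            floorEntity = existing
        } else {
            floorEntity = Entity()
            floorEntity.name = "Floor"
            root.addChild(floorEntity)
        }

        // Align the entity with the detected plane and size a collider from its extent.
        floorEntity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
        let extent = anchor.geometry.extent
        let shape = ShapeResource.generateBox(width: extent.width, height: 0.01, depth: extent.height)
        floorEntity.components.set(CollisionComponent(shapes: [shape], isStatic: true))
        floorEntity.components.set(InputTargetComponent())
        floorEntity.components.set(PhysicsBodyComponent(mode: .static))
    }
}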
Replies: 2 · Boosts: 0 · Views: 784 · Jan ’25
Reality Composer Pro timeline management, but using code
Is it possible to manage the behavior of a timeline entirely from code? I am exploring the "Compose interactive 3D content in Reality Composer Pro" sample project after seeing the related video, but the example shows only the use of Behaviors from RCP to trigger timeline actions. I was wondering if it is possible to somehow retrieve some kind of timeline controller that gives me access to its information, just like AnimationPlaybackController does for single animations. What I would like to achieve is being able to play/pause/retrieve the timestamp of a timeline in order to allow synchronization between different users over SharePlay.
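A sketch of one possible approach, assuming the Reality Composer Pro timeline appears as an entry in the owning entity's AnimationLibraryComponent, as in the pattern other posts on this topic use (the timeline name is a placeholder): playing that resource returns an AnimationPlaybackController, which exposes pause(), resume(), stop() and the current playback time.

import RealityKit

// Looks up a timeline by name in the entity's animation library and starts it,
// returning the controller so playback can be paused/resumed/inspected later.
func playTimeline(named timelineName: String, on entity: Entity) -> AnimationPlaybackController? {
    var controller: AnimationPlaybackController?
    entity.components[AnimationLibraryComponent.self]?.animations.forEach { key, value in
        if key == timelineName {
            controller = entity.playAnimation(value)
        }
    }
    return controller
}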
Replies: 1 · Boosts: 0 · Views: 585 · Oct ’24
Screenshot using visionOS (Code) on Apple Vision Pro
I want to create a screenshot (static image) of the current view on the Apple Vision Pro from code in visionOS. Unfortunately, I currently can’t find a way to achieve this. The only option I’ve found so far is through Reality Composer Pro. However, since I want to accomplish this directly through code, that approach is not an option for me.
Replies: 1 · Boosts: 0 · Views: 311 · Jan ’25
Capturing External Object Images via Vision Pro Passthrough Camera with Enterprise APIs
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures. Our specific use case involves:
1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.
We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
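A rough sketch of the main-camera path, assuming the Enterprise entitlement and license file are in place and that the API shape matches the published sample code (format selection and error handling are trimmed; names are illustrative):

import ARKit
import CoreVideo

// Runs a CameraFrameProvider and returns the first left-main-camera pixel buffer it receives.
func captureOnePassthroughFrame() async throws -> CVPixelBuffer? {
    let session = ARKitSession()
    let provider = CameraFrameProvider()
    try await session.run([provider])

    // Pick a supported format for the left main camera (assumes at least one exists).
    guard let format = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left]).first,
          let updates = provider.cameraFrameUpdates(for: format) else {
        return nil
    }

    for await frame in updates {
        // Each CameraFrame carries per-camera samples; grab the left sample's pixel buffer.
        if let sample = frame.sample(for: .left) {
            return sample.pixelBuffer
        }
    }
    return nil
}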
Replies: 0 · Boosts: 0 · Views: 598 · Oct ’24
Build not working
[xrsimulator] Component Compatibility: EnvironmentLightingConfiguration not available for 'xros 1.0', please update 'platforms' array in Package.swift
[xrsimulator] Exception thrown during compile: compileFailedBecause(reason: "compatibility faults")
Tool exited with code 1
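For context, the component named in the error requires visionOS 2, so the usual fix is to raise the platform floor in the Reality Composer Pro package's Package.swift. A sketch, assuming the default "RealityKitContent" package name (adjust to your project):

// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)   // was .v1 ("xros 1.0"), which doesn't include EnvironmentLightingConfiguration
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)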
Replies: 1 · Boosts: 0 · Views: 400 · Jan ’25
Proper way of handling opening an ImmersiveSpace?
If you check the code here, https://developer.apple.com/documentation/compositorservices/interacting-with-virtual-content-blended-with-passthrough

var body: some Scene {
    ImmersiveSpace(id: Self.id) {
        CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
            let pathCollection: PathCollection
            do {
                pathCollection = try PathCollection(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create path collection \(error)")
            }

            let tintRenderer: TintRenderer
            do {
                tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
            } catch {
                fatalError("Failed to create tint renderer \(error)")
            }

            Task(priority: .high) { @RendererActor in
                Task { @MainActor in
                    appModel.pathCollection = pathCollection
                    appModel.tintRenderer = tintRenderer
                }

                let renderer = try await Renderer(layerRenderer,
                                                  appModel,
                                                  pathCollection,
                                                  tintRenderer)
                try await renderer.renderLoop()

                Task { @MainActor in
                    appModel.pathCollection = nil
                    appModel.tintRenderer = nil
                }
            }

            layerRenderer.onSpatialEvent = {
                pathCollection.addEvents(eventCollection: $0)
            }
        }
    }
    .immersionStyle(selection: .constant(appModel.immersionStyle), in: .mixed, .full)
    .upperLimbVisibility(appModel.upperLimbVisibility)
}

the only way it deals with errors is fatalError, and I don't think I can throw anything or return anything else. Is there a way I can gracefully handle this and show a message box in the UI? I was hoping I could somehow trigger a failure and have https://developer.apple.com/documentation/swiftui/openimmersivespaceaction return a failure, but I couldn't find a nice way to do so. Let me know if you have ideas.
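One pattern that might work, sketched against the types in the snippet above (appModel.rendererCreationError, showRendererAlert, and ContentView are hypothetical additions): record the failure on the app model instead of calling fatalError, then let a regular window scene observe it, dismiss the immersive space, and present an alert.

// Inside `var body: some Scene`, replacing the fatalError-based setup shown above:
ImmersiveSpace(id: Self.id) {
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        do {
            let pathCollection = try PathCollection(layerRenderer: layerRenderer)
            let tintRenderer = try TintRenderer(layerRenderer: layerRenderer)
            // ... start the render loop exactly as in the original snippet ...
        } catch {
            // Record the failure instead of crashing; the UI below reacts to it.
            Task { @MainActor in
                appModel.rendererCreationError = error
            }
        }
    }
}

// In a regular window scene, a view observes the model and reacts:
struct RendererErrorWatcher: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @State private var showRendererAlert = false

    var body: some View {
        ContentView()   // placeholder for the window's existing root view
            .onChange(of: appModel.rendererCreationError != nil) { _, hasError in
                guard hasError else { return }
                Task {
                    await dismissImmersiveSpace()
                    showRendererAlert = true
                }
            }
            .alert("Couldn't start the renderer", isPresented: $showRendererAlert) {
                Button("OK") { }
            }
    }
}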
Replies: 1 · Boosts: 0 · Views: 543 · Oct ’24
3D display from 3D camera
I want to pursue a project involving 3D VR visualisation. I would like to know if there is a 3D stereoscopic camera setup yet that can connect straight to the Apple Vision Pro display, ideally with no compatibility issues with MV-HEVC. Any recommendation is appreciated.
Replies: 0 · Boosts: 0 · Views: 349 · Oct ’24
How to Play Timeline Animations via code
Hi everyone, I need to synchronize the playback of RealityKit timelines via SharePlay. To do this I am trying to get references to the timelines using AnimationPlaybackController and AnimationResource. In my RealityKit scene I have configured both an animation (made with Blender) and a timeline. The animation starts correctly when the RealityKit scene starts; the timeline does not. Below is the code:

struct ContentView: View {
    @State private var subscriptions = [EventSubscription]()
    @Environment(AppModel.self) private var appModel

    let rootEntity = Entity()
    @State var testEntity: Entity?
    @State var testAnimation: AnimationResource?
    @State var testController: AnimationPlaybackController?

    init() {
        CubeComponent.registerComponent()
    }

    var body: some View {
        RealityView { content in
            content.add(rootEntity)
            if let scene = try? await Entity(named: "Room", in: realityKitContentBundle) {
                rootEntity.addChild(scene)
                playAnimations(from: content)
            }
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity()
            .onEnded({ value in
                _ = value.entity.applyTapForBehaviors()
                if let testEntity, let testAnimation {
                    testController = testEntity.playAnimation(testAnimation.repeat())
                }
            })
        )
    }

    func playAnimations(from content: RealityViewContent) {
        subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: AnimationLibraryComponent.self, { event in
            let entity = event.entity
            entity.components[AnimationLibraryComponent.self]?.animations.forEach({ (key, value) in
                if value.definition is AnimationGroup {
                    if key == "/Room/TestTimeline" {
                        let controller = entity.playAnimation(value.repeat())
                        testEntity = entity
                        testAnimation = value
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                } else {
                    if entity.name == "SphereInteractable" {
                        let controller = entity.playAnimation(value.repeat())
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                }
            })
        }))
    }
}

The variables testEntity, testAnimation and testController are for testing purposes only. If I try to start the animations in the playAnimations function, only the animation created via Blender starts (the one related to the object "SphereInteractable"); the timeline starts only if I save a reference and play it with a tap gesture, or with a delay of 1 second via DispatchQueue.asyncAfter called in onAppear. Is there a better way to handle this? The goal is to have a reference to the AnimationPlaybackController of the timeline, in order to sync the animation via SharePlay. Thanks
Replies: 3 · Boosts: 0 · Views: 846 · Oct ’24
ARView.Environment.SceneUnderstanding.Options.occlusion not working on models that aren't opaque
Is this behaviour expected? For example, if I'm using

let materials = [SimpleMaterial(color: .red, isMetallic: false)]

occlusion works normally, but with

let materials = [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)]

I can see my cube through real-world objects, like tables, columns, etc. I get the same behaviour when using a CustomMaterial built from a shader and applying customMaterial.blending = .opaque and customMaterial.blending = .transparent(opacity:) respectively.
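A minimal repro sketch of the setup being described, assuming an iOS ARView with scene-understanding occlusion enabled on a LiDAR-equipped device (names and sizes are illustrative):

import RealityKit
import UIKit

// The opaque cube should be hidden by real-world geometry; the translucent one reportedly is not.
func makeOcclusionTestView() -> ARView {
    let arView = ARView(frame: .zero)
    arView.environment.sceneUnderstanding.options.insert(.occlusion)

    let opaque = ModelEntity(
        mesh: .generateBox(size: 0.2),
        materials: [SimpleMaterial(color: .red, isMetallic: false)])
    let translucent = ModelEntity(
        mesh: .generateBox(size: 0.2),
        materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
    translucent.position.x = 0.4

    let anchor = AnchorEntity(world: [0, 0, -1])
    anchor.addChild(opaque)
    anchor.addChild(translucent)
    arView.scene.addAnchor(anchor)
    return arView
}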
Replies: 0 · Boosts: 0 · Views: 493 · Dec ’24
[Vision, visionOS] Is it possible to use the Vision framework on visionOS for body tracking?
Hello, I checked the following documentation:
- Vision | Apple Developer Documentation
- Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS, so I want to know whether it's possible to use it on visionOS for tracking human and animal body poses, or whether there are limits to using it on visionOS.
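For what it's worth, a minimal sketch of the body-pose request itself, assuming Vision behaves on visionOS as it does on iOS and macOS; note that it runs on images you supply (for example, frames you already have), not on a live view of the wearer:

import Vision
import CoreGraphics

// Runs the human body-pose request on a single image and returns the observations.
func detectBodyPose(in image: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}

// Usage: each observation exposes named joints, e.g.
// let joints = try observation.recognizedPoints(.all)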
Replies: 1 · Boosts: 0 · Views: 474 · Jan ’25
Synchronize 3D object animation inside a volume with SharePlay
I have been experimenting with some experiences in which I would like to use SharePlay to allow the app to be used by multiple users. Currently I have managed to share a volume containing a Reality Composer Pro scene; the scene contains some entities with an animation. So far I have been able to correctly share the volume and its content, with the animation playing without problems, but once I activate SharePlay, different users see different moments of the animation, if any animation at all. Is there a way to synchronize animations between all the users, no matter when someone entered the SharePlay session, aside from communicating the animation time once someone joins?
Replies: 0 · Boosts: 0 · Views: 562 · Oct ’24