Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic (each entry shows its replies, boosts, views, and latest activity):
RotateGesture3D auto constrained to axis
Hi,

On visionOS we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter we can allow rotation only around the x, y, or z axis, or a combination of them. What I want to know is whether it is possible to constrain the rotation to an axis automatically. Let me explain: the functionality I would like to implement is to constrain the rotation to a single axis only once the user has started their gesture. The initial motion the user makes should tell us which axis they want to rotate around. This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture as one of:

RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)

Is it possible to do this? If so, what would be the best way to do it? A code example would be greatly appreciated.

Regards
Tof
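One possible approach (an untested sketch, not an official API): start with an unconstrained RotateGesture3D, watch the first few degrees of rotation to find the dominant axis, then re-apply only that component. The 10° lock-in threshold, the empty RealityView setup, and the assumption that the gesture value exposes a rotation property are things to verify against the current SDK; sign handling around the locked axis is also simplified.

import SwiftUI
import RealityKit
import Spatial

struct AxisLockedRotation: View {
    @State private var lockedAxis: RotationAxis3D?
    @State private var startOrientation: simd_quatf?

    var body: some View {
        RealityView { content in
            // Add the entity you want to rotate to `content` here.
        }
        .gesture(
            RotateGesture3D() // unconstrained; the constraint is applied manually below
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if startOrientation == nil {
                        startOrientation = entity.orientation(relativeTo: nil)
                    }
                    let rotation = value.gestureValue.rotation
                    // Once the gesture is clearly underway, lock to the dominant axis.
                    if lockedAxis == nil, abs(rotation.angle.degrees) > 10 {
                        let axis = rotation.axis
                        let candidates: [(Double, RotationAxis3D)] = [(abs(axis.x), .x), (abs(axis.y), .y), (abs(axis.z), .z)]
                        lockedAxis = candidates.max(by: { $0.0 < $1.0 })?.1
                    }
                    guard let locked = lockedAxis, let start = startOrientation else { return }
                    // Re-express the gesture's angle as a rotation about the locked axis only
                    // (sign handling is simplified here).
                    let axisVector = SIMD3<Float>(Float(locked.x), Float(locked.y), Float(locked.z))
                    let constrained = simd_quatf(angle: Float(rotation.angle.radians), axis: axisVector)
                    entity.setOrientation(constrained * start, relativeTo: nil)
                }
                .onEnded { _ in
                    lockedAxis = nil
                    startOrientation = nil
                }
        )
    }
}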
3 replies · 0 boosts · 411 views · Feb ’25
Tracking geographic locations in AR - Sample App Code
Hello,

I went back to download the "Tracking geographic locations in AR" sample app from https://developer.apple.com/documentation/arkit/tracking-geographic-locations-in-ar. Unfortunately, the Download button links to the .zip of the DisplayingAPointCloudUsingSceneDepth sample project instead. The exact same issue occurs when trying to download the sample code from https://developer.apple.com/documentation/ARKit/creating-a-fog-effect-using-scene-depth.

I'm wondering whether those links are deliberately broken because of possible deprecations. Thanks to any Apple engineer willing to look into this.
2 replies · 0 boosts · 428 views · Feb ’25
How to move a camera in immersive space and render its output in a 2D window using RealityKit
I'm trying to develop an immersive visionOS app in which you can move an Entity that has a PerspectiveCamera as its child in immersive space, and render the camera's view in a 2D window. According to this thread, this seems achievable using RealityRenderer. But when I added the scene entity loaded from realityKitContentBundle to realityRenderer.entities, I had to clone all entities of the scene; otherwise all entities in the immersive space disappear.

@Observable
@MainActor
final class OffscreenRenderModel {
    private let renderer: RealityRenderer
    private let colorTexture: MTLTexture

    init(scene: Entity) throws {
        renderer = try RealityRenderer()
        // If the scene's entities are not cloned, all entities in the immersive space disappear.
        renderer.entities.append(scene.clone(recursive: true))

        let camera = PerspectiveCamera()
        renderer.activeCamera = camera
        renderer.entities.append(camera)
        ...
    }
}

Is this the expected behavior? Or is there another way to do this (move a camera in immersive space and render its output in a 2D window)?

Here is my sample code: https://github.com/TAATHub/RealityKitPerspectiveCamera
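For reference, the cloning requirement is consistent with an entity only being allowed in one scene at a time: appending the live hierarchy to realityRenderer.entities would pull it out of the immersive space, so cloning avoids that. Below is a minimal, hedged sketch of the offscreen color texture such a renderer could draw into; the 1280x720 size and .bgra8Unorm format are assumptions, not requirements.

import Metal

// Creates a render-target texture for RealityRenderer's single-projection camera output.
func makeOffscreenColorTexture(device: MTLDevice,
                               width: Int = 1280,
                               height: Int = 720) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                              width: width,
                                                              height: height,
                                                              mipmapped: false)
    // RealityRenderer renders into it; display code reads from it afterwards.
    descriptor.usage = [.renderTarget, .shaderRead]
    return device.makeTexture(descriptor: descriptor)
}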
4 replies · 0 boosts · 471 views · Feb ’25
Setting clip shape of a RealityView
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos

I would also like to add a corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):

struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0

        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)

            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}

This doesn't actually clip the RealityView shown in the sample above. I'm guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to those containers.

How can I properly apply a clip shape to RealityViews like this? Thanks!
3 replies · 0 boosts · 462 views · Feb ’25
Entity HoverEffect Fired When inside Another Entity Collider
Hi folks,

I'm new to the Vision Pro stack and still learning the nuances. Here is a problem I can't seem to find an answer to. I placed entity A (a small sphere, radius 0.02) inside entity B (a box, size 0.1). Both entities have a HoverEffectComponent, and both have their InputTargetComponent set to .direct. Entity A is NOT a child of entity B.

When I direct-touch entity B, I notice that entity A's hover effect fires as well. This only happens when entity A's position is inside entity B. A gesture targeted only at entity A doesn't work either. I double-checked entity A's collider, which sits inside entity B's collider; my direct touch shouldn't have triggered its hover effect. Does having one collider inside another produce unpredictable behavior?

Thanks in advance 🙏🙏🙏

Context: I'm trying to create an invisible bound around entity A, so that when my hand approaches the bound to grab entity A, a nice spotlight hover effect fires on the bound before my hand reaches entity A.
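A hedged workaround for the invisible-bound idea described at the end of the post: give only the bound a collider, input target, and hover effect, and make the small sphere a child with no input target of its own, so hit testing never has two overlapping targets to choose between. A rough sketch (sizes follow the post; the default hover style stands in for the spotlight effect):

import RealityKit
import UIKit

func makeBoundedSphere() -> Entity {
    // The invisible bound owns the collider, input target, and hover effect.
    let bound = Entity()
    bound.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))
    bound.components.set(InputTargetComponent(allowedInputTypes: .direct))
    bound.components.set(HoverEffectComponent()) // swap in a spotlight style if desired

    // The visible sphere carries no input target, so it never competes for hover or taps.
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.02),
                             materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
    bound.addChild(sphere)
    return bound
}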
2 replies · 0 boosts · 329 views · Feb ’25
Building a custom render pipeline with RealityKit
Hello experts and question seekers,

I have been trying to get Gaussian splats working with RealityKit, but it hasn't worked out for me. The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter

My idea was to use the renderer provided by RealityKit (RealityRenderer, https://developer.apple.com/documentation/realitykit/realityrenderer) together with the renderer provided by MetalSplatter (SplatRenderer, https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift). With a custom render pipeline I could then compose the outputs of the two renderers. That would make it possible, for example, to build immersive scenery with realistic environment scans as Gaussian splats, while RealityKit provides the features to build extra scenery around them, e.g. dynamic 3D models inside Gaussian splats.

However, I am not able to do that with the current implementation of RealityRenderer. First, RealityRenderer appears to be an API that only renders color information onto a texture, which at first glance might be useful, but it misses important information such as depth and stencil. Second, even with that in mind, I currently cannot execute RealityRenderer.updateAndRender due to the following error messages:

Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37

I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, which lets me build a custom rendering pipeline using some of the Metal developer workflows. Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/

Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:

guard let currentDrawable = view.currentDrawable else {
    return
}

let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(deltaTime: 0.0,
                                     cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
                                     whenScheduled: { realityRenderer in
                                         print("Rendering scheduled")
                                     },
                                     onComplete: { realityRenderer in
                                         print("Rendering completed")
                                     })

Can you please tell me what I am doing wrong? Is there any solution that lets me use RealityKit with, for example, Gaussian splats?

Any help is greatly appreciated.

All the best,
Ethem Kurt
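This does not address the missing-material error, but one detail that stands out is that a new RealityRenderer is created on every draw call. A hedged variant that keeps a single renderer alive and feeds it a real delta time is sketched below; composition with SplatRenderer is left out, and presenting the drawable from onComplete is an assumption rather than documented behavior.

import MetalKit
import RealityKit
import QuartzCore

final class OffscreenRenderCoordinator: NSObject, MTKViewDelegate {
    private var realityRenderer: RealityRenderer?
    private var lastFrameTime = CACurrentMediaTime()

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable else { return }
        do {
            if realityRenderer == nil {
                let renderer = try RealityRenderer()
                // Populate renderer.entities and renderer.activeCamera here, once.
                realityRenderer = renderer
            }
            // Feed the renderer the time elapsed since the previous frame.
            let now = CACurrentMediaTime()
            let deltaTime = now - lastFrameTime
            lastFrameTime = now

            try realityRenderer?.updateAndRender(
                deltaTime: deltaTime,
                cameraOutput: .init(.singleProjection(colorTexture: drawable.texture)),
                whenScheduled: { _ in },
                onComplete: { _ in
                    // Present only after RealityKit finishes writing into the drawable (assumption).
                    drawable.present()
                })
        } catch {
            print("RealityRenderer failed: \(error)")
        }
    }
}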
2 replies · 1 boost · 742 views · Feb ’25
Issue with Behaviors + Timelines in visionOS
Hello,

I'm unable to activate a timeline in my application through an OnTap, OnAddedToScene, or OnNotification trigger. In Reality Composer Pro I can test and play the timelines easily. When running in the simulator or on device, the timelines simply do not run, regardless of the method through which I try to call the API.

I have two questions:

1. How can I check that my timelines are in the RCP project that's loaded into the scene? I don't see timelines in the entity hierarchy when I debug with the RealityKit Debugger.
2. Is Behaviors a component I can set manually at runtime? I can very clearly see the Behaviors component attached to my entity in RCP, but when running this code:

.gesture(
    TapGesture()
        .targetedToAnyEntity()
        .onEnded { value in
            if value.entity.applyTapForBehaviors() {
                print("Success!")
            } else {
                print("Failure.")
            }
        }
)

it prints "Failure." every time, which indicates to me that the entity does not have a Behavior attached to it (whether that's a component or however else the Behavior is associated with the entity).

I also have not had success using the notification system, or even the OnAddedToScene behavior trigger, which should theoretically work if a behavior is attached to the entity; the tap experiment indicates it's not. For context, this is my notification trigger code:

private let notificationTrigger = NotificationCenter.default
    .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))

@Environment(\.realityKitScene) var scene

Attachment(id: "home") {
    Button {
        NotificationCenter.default.post(
            name: NSNotification.Name("RealityKit.NotificationTrigger"),
            object: nil,
            userInfo: [
                "RealityKit.NotificationTrigger.Scene": scene,
                "RealityKit.NotificationTrigger.Identifier": "test"
            ]
        )
    } label: {
        Text("Test")
    }
    .padding(20)
    .glassBackgroundEffect()
}

.onReceive(notificationTrigger) { _ in
    print("test notification received")
}

I am receiving the "test notification received" print statements as well.

I'm using Xcode 16.0 with visionOS 2.0 on macOS 15.3.1.
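One thing worth ruling out (a hedged sketch, not a confirmed diagnosis): the tap may land on a child mesh rather than the entity that actually carries the Behaviors component in RCP, and applyTapForBehaviors() only succeeds on that owning entity. Walking up the hierarchy before giving up shows whether the behavior exists somewhere above the tapped entity.

import RealityKit

// Returns true if the tapped entity, or any ancestor, responds to the tap behavior.
func applyTapToBehaviorOwner(startingAt entity: Entity) -> Bool {
    var current: Entity? = entity
    while let candidate = current {
        if candidate.applyTapForBehaviors() {
            print("Behavior fired on \(candidate.name)")
            return true
        }
        current = candidate.parent
    }
    return false
}

Calling applyTapToBehaviorOwner(startingAt: value.entity) in the TapGesture's onEnded would take the place of the single applyTapForBehaviors() call above.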
1 reply · 0 boosts · 504 views · Feb ’25
WorldTrackingProvider stops working on device
After re-launching the immersive space in my app 5-10 times, the WorldTrackingProvider stops working. Only restarting the app will allow it to start working again. This happens only on device, not in the simulator. I get these errors when it happens:

The device_anchor can only be queried when the world tracking provider is running.
ARPredictorRemoteService <0x107cbb5e0>: Service configured with error: Error Domain=com.apple.arkit.error Code=501 "(null)"
Remote Service was invalidated: <ARPredictorRemoteService: 0x107cbb5e0>, will stop all data_providers.
ARRemoteService: remote object proxy failed with error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process.}
ARRemoteService: weak self released before invalidation

@Observable
class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func transformMatrix() async -> simd_float4x4 {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return .init()
        }
        return deviceAnchor.originFromAnchorTransform
    }

    func runArkitSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }
}

which I call from my RealityView:

.task {
    await visionPro.runArkitSession()
}
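Not a confirmed fix, but running a fresh world-tracking session on every immersive-space launch without stopping the previous one can leave stale sessions behind. Below is a hedged variant of the class above that stops the session when the immersive space goes away (e.g. from the RealityView's .onDisappear) and checks the provider's state before querying the device anchor.

import ARKit
import Observation
import QuartzCore

@Observable
class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func runArkitSession() async {
        do {
            try await session.run([worldTracking])
        } catch {
            print("Failed to run ARKit session: \(error)")
        }
    }

    // Call this when the immersive space is dismissed so the session does not outlive it.
    func stopArkitSession() {
        session.stop()
    }

    func transformMatrix() async -> simd_float4x4 {
        guard worldTracking.state == .running,
              let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return matrix_identity_float4x4
        }
        return deviceAnchor.originFromAnchorTransform
    }
}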
3 replies · 0 boosts · 420 views · Feb ’25
Why can't I load a ModelEntity from RealityKitContentBundle?
If I place the .usdz file in the project directory alongside other .swift files, ModelEntity loads it perfectly. However, if I try to load the same file from Reality Composer Pro under RealityKitContent.rkassets, I get the error: resourceNotFound("heart"). Could someone help me with this? Thank you so much.

Code:

//
//  TestttttttApp.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI

@main
struct TestttttttApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.volumetric)
    }
}

//
//  ContentView.swift
//  Testtttttt
//
//  Created by Zhendong Chen on 2/17/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var enlarge = false

    var body: some View {
        RealityView { content in
            do {
                // MARK: Works
                let scene = try await ModelEntity(named: "heart")
                content.add(scene)

                // MARK: Doesn't work
                // let scene = try await ModelEntity(named: "heart", in: realityKitContentBundle)
                // content.add(scene)
            } catch {
                print(error)
            }
        }
    }
}

#Preview(windowStyle: .volumetric) {
    ContentView()
}
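One hedged suggestion: assets inside RealityKitContent.rkassets are compiled into the package, and loading them through Entity(named:in:) (then digging out the ModelEntity if needed) tends to be more forgiving than ModelEntity(named:in:). A small variant of the RealityView closure above; the asset name "heart" follows the post.

import SwiftUI
import RealityKit
import RealityKitContent

struct HeartView: View {
    var body: some View {
        RealityView { content in
            do {
                // Load the compiled asset from the Reality Composer Pro package.
                let heart = try await Entity(named: "heart", in: realityKitContentBundle)
                content.add(heart)
            } catch {
                print("Failed to load from RealityKitContent: \(error)")
            }
        }
    }
}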
1 reply · 0 boosts · 474 views · Feb ’25
Reading scenePhase from custom Scene
Hi,

I've come across a thread where an Apple engineer points out that there are two possible ways to read scenePhase, from either the App or a View implementation: https://developer.apple.com/forums/thread/757429

That thread also links to documentation which states: "If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene."

This doesn't seem to be the case on visionOS 2. I tried the following code starting from the empty app template:

import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()

        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}

The result was that I didn't get the onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this expected behavior?
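Whether the aggregation behavior is intended on visionOS 2 is unclear, but as a workaround, reading scenePhase from a View inside each WindowGroup reports the phase of the scene containing that view, which sidesteps the custom-Scene aggregation entirely. A small sketch:

import SwiftUI

struct PhaseReporter: View {
    let label: String
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text(label)
            .onChange(of: scenePhase) { _, newValue in
                // Fires per window, e.g. when only the extra window is closed.
                print("\(label) scenePhase -> \(newValue)")
            }
    }
}

// Usage: wrap each window's content, e.g.
// WindowGroup(id: "extra") { PhaseReporter(label: "extra") }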
3 replies · 0 boosts · 369 views · Feb ’25
Body segmentation/occlusion on the Apple Vision Pro
Hello,

I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual objects behind them, similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people

Unfortunately, I could not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools?

Thanks!
Mehdi
1 reply · 0 boosts · 384 views · Feb ’25
How to show only Spatial video using UIDocumentPickerViewController
Is there a suitable UTType to restrict UIDocumentPickerViewController to picking only spatial videos? I already know that PHPickerFilter in PHPickerViewController can do this, but I haven't found an equivalent for UIDocumentPickerViewController. Our app needs to support both of these ways of picking spatial videos, so is there anything I can try in UIDocumentPickerViewController to get this picker functionality?
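There does not appear to be a dedicated UTType for spatial video, so the document picker itself may not be able to filter for it. One hedged workaround is to accept movies in UIDocumentPickerViewController and filter after selection by checking the picked file's video track for the stereo multiview (MV-HEVC) characteristic:

import AVFoundation
import UniformTypeIdentifiers

// Returns true if the movie at `url` carries a stereo multiview (spatial-style) video track.
func isSpatialVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let videoTracks = try await asset.loadTracks(withMediaType: .video)
    for track in videoTracks {
        let characteristics = try await track.load(.mediaCharacteristics)
        if characteristics.contains(.containsStereoMultiviewVideo) {
            return true
        }
    }
    return false
}

// The picker itself would be configured broadly, e.g.
// UIDocumentPickerViewController(forOpeningContentTypes: [.movie])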
1 reply · 0 boosts · 456 views · Feb ’25
visionOS: Issue placing objects on detected planes for animated objects
Hi,

I have used the template code for plane detection and placing models on them from https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes

This source code did not copy the animations in the preview model to the placed model, so I modified it to do a manual copy of animations and textures. There is a function called materialize() that does this, and I was able to modify it so that the placed models are now animating. The issue is when I apply gestures to them, like drag or rotate: for the models that go through this logic I'm unable to add gestures, even though I'm making sure that Collision and Input Target are set on the placed models. Has anyone been able to get this working, or is it even possible?

My materialize function:

func materialize() -> PlacedObject {
    let shapes = previewEntity.components[CollisionComponent.self]!.shapes

    // Clone render content first as we need its materials
    let clonedRenderContent = renderContent.clone(recursive: true)

    print("To be finding main model: \(descriptor.displayName)")

    // Find the main model in preview hierarchy
    func findMainModel(_ entity: Entity) -> Entity? {
        if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
            print("Found main model: \(entity.name)")
            return entity
        }
        for child in entity.children {
            if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
                print("Found main model in children: \(child.name)")
                return child
            }
        }
        return nil
    }

    // Clone hierarchy preserving structure, names, and materials
    func cloneHierarchy(_ entity: Entity) -> Entity {
        print("Cloning: \(entity.name)")

        let cloned: Entity
        if let model = entity as? ModelEntity {
            // Clone with recursive false to handle children manually
            cloned = model.clone(recursive: false)
            if let clonedModel = cloned as? ModelEntity,
               let originalMaterials = model.model?.materials {
                // Preserve the original model's materials
                clonedModel.model?.materials = originalMaterials
            }
        } else {
            cloned = Entity()
        }

        // Preserve name and transform
        cloned.name = entity.name
        cloned.transform = entity.transform

        // Clone children
        for child in entity.children {
            let clonedChild = cloneHierarchy(child)
            cloned.addChild(clonedChild)
        }

        return cloned
    }

    print("=== Cloning Preview Structure ===")

    // Clone the preview hierarchy with proper structure
    let clonedStructure = cloneHierarchy(previewEntity)

    // Find and use the main model
    if let mainModel = findMainModel(clonedStructure) {
        print("Using main model for PlacedObject")

        let modelEntity: ModelEntity
        if let asModel = mainModel as? ModelEntity {
            print("Using asModel")
            modelEntity = asModel
        } else {
            modelEntity = ModelEntity()
            modelEntity.name = mainModel.name
            // Copy children and transforms
            for child in mainModel.children {
                modelEntity.addChild(child)
            }
            modelEntity.transform = mainModel.transform
        }

        // Add collision component here
        let collisionComponent = CollisionComponent(shapes: shapes,
                                                    isStatic: false,
                                                    filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))
        modelEntity.components.set(collisionComponent)

        // Create the placed object
        let placedObject = PlacedObject(descriptor: descriptor,
                                        renderContentToClone: modelEntity,
                                        shapes: shapes)

        // Set input target on the placed object itself
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        return placedObject
    } else {
        print("Fallback to original render content")
        let placedObject = PlacedObject(descriptor: descriptor,
                                        renderContentToClone: clonedRenderContent,
                                        shapes: shapes)
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    }
}

My PlacedObject class, where init has the recursive cloning removed because it is handled in materialize:

class PlacedObject: Entity {
    let fileName: String

    // The 3D model displayed for this object.
    private let renderContent: ModelEntity

    static let collisionGroup = CollisionGroup(rawValue: 1 << 29)

    // The origin of the UI attached to this object.
    // The UI is gravity aligned and oriented towards the user.
    let uiOrigin = Entity()

    var affectedByPhysics = false {
        didSet {
            guard affectedByPhysics != oldValue else { return }
            if affectedByPhysics {
                components[PhysicsBodyComponent.self]!.mode = .static
            } else {
                components[PhysicsBodyComponent.self]!.mode = .static
            }
        }
    }

    var isBeingDragged = false {
        didSet {
            affectedByPhysics = !isBeingDragged
        }
    }

    var positionAtLastReanchoringCheck: SIMD3<Float>?

    var atRest = false

    init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) {
        fileName = descriptor.fileName
        // renderContent = renderContentToClone.clone(recursive: true)
        renderContent = renderContentToClone
        super.init()
        name = renderContent.name

        // Apply the rendered content’s scale to this parent entity to ensure
        // that the scale of the collision shape and physics body are correct.
        scale = renderContent.scale
        renderContent.scale = .one

        // Make the object respond to gravity.
        let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0)
        let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes,
                                                        mass: 1.0,
                                                        material: physicsMaterial,
                                                        mode: .static)
        components.set(physicsBodyComponent)
        components.set(CollisionComponent(shapes: shapes,
                                          isStatic: false,
                                          filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)))
        addChild(renderContent)
        addChild(uiOrigin)
        uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object’s center.

        // Allow direct and indirect manipulation of placed objects.
        components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        // Add a grounding shadow to placed objects.
        renderContent.components.set(GroundingShadowComponent(castsShadow: true))
    }

    required init() {
        fatalError("`init` is unimplemented.")
    }
}

Thanks
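One thing worth checking (a hedged sketch, not a confirmed diagnosis): with a cloned hierarchy the gesture can resolve to a child mesh rather than the PlacedObject that carries the InputTargetComponent, so walking up to the PlacedObject ancestor before applying the drag may get things moving again. The positioning below is deliberately simplified (it snaps the object to the drag location); the sample's offset bookkeeping would still apply on top of it.

import SwiftUI
import RealityKit

struct PlacedObjectDragModifier: ViewModifier {
    func body(content: Content) -> some View {
        content.gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    guard let placed = placedObjectAncestor(of: value.entity) else { return }
                    placed.isBeingDragged = true
                    // Convert the drag location into scene space (same pattern as the sample),
                    // then move the object.
                    let location = value.convert(value.location3D, from: .local, to: .scene)
                    placed.setPosition(SIMD3<Float>(Float(location.x), Float(location.y), Float(location.z)),
                                       relativeTo: nil)
                }
                .onEnded { value in
                    placedObjectAncestor(of: value.entity)?.isBeingDragged = false
                }
        )
    }

    // The hit entity may be a child of the cloned hierarchy; walk up to the PlacedObject.
    private func placedObjectAncestor(of entity: Entity) -> PlacedObject? {
        var current: Entity? = entity
        while let candidate = current {
            if let placed = candidate as? PlacedObject { return placed }
            current = candidate.parent
        }
        return nil
    }
}

// Usage: apply to the RealityView, e.g. realityView.modifier(PlacedObjectDragModifier())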
4 replies · 0 boosts · 435 views · Feb ’25
Automatically Enabling Spatial Personas in App
Is it possible to create a button in my app that will turn on the spatial personas for the user? Currently the only way I know of turning on spatial personas is by selecting the cube icon in the FaceTime window which is quite clunky for people unfamiliar with the Vision Pro's UI. Any help would be appreciated.
1 reply · 0 boosts · 405 views · Feb ’25
How to use `EnvironmentLightEstimationProvider` to capture an environment texture and apply it to a model entity?
I am new to spatial computing. I am learning how to use ARKit to capture the environment texture and apply it to a ModelEntity in RealityKit on Vision Pro, but I have not found a demo of how to use EnvironmentLightEstimationProvider. After checking the documentation, I also have some questions:

1. EnvironmentProbeAnchor.environmentTexture is an MTLTexture, but EnvironmentResource needs a CGImage. How do I convert an MTLTexture to a CGImage? (Forgive me, I do not know much about Metal or other frameworks, so it would be better if there were code I can copy and paste directly.)
2. It seems that EnvironmentProbeAnchor can only get the light information around the device. What should I do if I want the light information around the ModelEntity, so that I can apply the environment texture to it?

It would be great if you could provide a code demo of how to use the new API. Thank you!
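For question 1, below is a hedged sketch using Core Image. It assumes a readable 2D color texture; if the probe texture turns out to be a cube map, each face would need to be extracted separately, which this does not handle.

import CoreImage
import Metal

// Converts a 2D Metal texture into a CGImage via Core Image.
func makeCGImage(from texture: MTLTexture) -> CGImage? {
    guard let ciImage = CIImage(mtlTexture: texture, options: nil) else { return nil }
    // Metal's origin is top-left while Core Image's is bottom-left, so flip vertically.
    let flipped = ciImage.oriented(.downMirrored)
    let context = CIContext()
    return context.createCGImage(flipped, from: flipped.extent)
}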
1 reply · 0 boosts · 565 views · Feb ’25
DragGesture that pivots with the user in visionOS
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space.

When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me.

In the examples linked above, there are two versions of how they use drag:

handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag.
handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture.

I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity further or closer; it stays at a similar (though not exact) distance relative to me while I drag.

Is there a way to combine these concepts? Ideally, I would like to use a gesture that behaves the same way visionOS windows do. When we drag windows, I can move them around relative to myself, pull them closer, and push them further, all while avoiding the issues described above.

Example from handleFixedDrag:

mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)

    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset

    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}

Example from handlePivotDrag:

mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation), from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()

        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
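Not Apple's implementation, but one way to try combining the two: keep handlePivotDrag for direction (so the entity follows the body turn), and additionally scale the entity's stored offset from the pivot by the hand's forward/back motion so it can still be pulled closer or pushed away. The sketch assumes the entity's local offset is captured right after pivotEntity.addChild(entity, preservingWorldTransform: true); using translation3D.z as the radial input and the 5 cm minimum distance are assumptions to tune. Inside handlePivotDrag this would be called in the ongoing-drag branch, alongside the pivot move.

import RealityKit
import Spatial
import simd

// Adjusts how far the entity sits from the pivot (the hand), while the pivot
// itself continues to handle direction as in handlePivotDrag.
func applyRadialOffset(to entity: Entity,
                       initialLocalOffset: SIMD3<Float>,
                       translation3D: Vector3D) {
    let initialDistance = simd_length(initialLocalOffset)
    guard initialDistance > 0 else { return }

    // Assumption: pushing the hand forward (negative z in the gesture's local space)
    // should push the entity away; pulling back should bring it closer.
    let radialDelta = Float(translation3D.z)
    let newDistance = max(0.05, initialDistance - radialDelta)

    entity.position = simd_normalize(initialLocalOffset) * newDistance
}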
1 reply · 0 boosts · 482 views · Feb ’25