Discuss Spatial Computing on Apple Platforms.

How to integrate Apple Immersive Video into the app you are developing.
Hello, I have a few questions about Apple Immersive Video (https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/). I am considering adding a feature to play Apple Immersive Video as a background scene in the app I'm developing, using 3DCG content converted into the Apple Immersive Video format.

First, is it possible to integrate Apple Immersive Video into an app? Could you provide information about the required software and the integration process, and share any helpful resources?

Second, I would also like to create Apple Immersive Video content myself, and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.

Third, as mentioned above, I'm planning to play Apple Immersive Video as a background in the app, and alongside it place some 3D models as RealityKit entities together with spatial audio. I'm planning to build the visionOS app as a Full Space mixed experience. Is an immersive viewing experience with Apple Immersive Video possible in a mixed Full Space? Does Apple Immersive Video support mixed immersion?

That's all for now. Thank you in advance!
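For the "video as a background in a mixed Full Space" part, here is a minimal sketch of how video is commonly placed in a RealityKit scene using VideoPlayerComponent inside an ImmersiveSpace with mixed immersion. This is only an assumption-laden starting point: the file name, entity names, and scene id are placeholders, and whether the Apple Immersive Video format itself (as opposed to standard 2D/3D video) can be played through this path is exactly what would need to be confirmed against Apple's documentation.

import SwiftUI
import RealityKit
import AVFoundation

struct ImmersiveBackgroundView: View {
    var body: some View {
        RealityView { content in
            // Placeholder resource name: replace with your converted content.
            guard let url = Bundle.main.url(forResource: "Background", withExtension: "mov") else { return }
            let player = AVPlayer(url: url)

            // An entity that renders the player's output in the scene.
            let videoEntity = Entity()
            videoEntity.components.set(VideoPlayerComponent(avPlayer: player))
            content.add(videoEntity)

            player.play()

            // Additional RealityKit entities (3D models, spatial audio emitters)
            // can be added to `content` alongside the video entity as usual.
        }
    }
}

// Hypothetical scene declaration for a mixed-immersion Full Space.
struct BackgroundVideoScene: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "background") {
            ImmersiveBackgroundView()
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}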
Replies: 0 · Boosts: 0 · Views: 137 · Activity: 1d
Implementing multi-pass rendering in visionOS
I’m working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output. What’s the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on efficiently handling this in RealityKit or Metal? Thanks!
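Not visionOS-specific guidance, but a general Metal sketch of the "render to a texture, then post-process" structure using two render command encoders in a single command buffer. The pipeline states, draw calls, and output texture are assumed to exist elsewhere; on visionOS the output texture would typically come from the compositor's drawable, which is not shown here.

import Metal

func encodeTwoPasses(device: MTLDevice,
                     commandBuffer: MTLCommandBuffer,
                     scenePipeline: MTLRenderPipelineState,
                     postPipeline: MTLRenderPipelineState,
                     outputTexture: MTLTexture) {
    // Offscreen target for the first pass. In a real renderer this texture
    // would be created once and reused each frame rather than per call.
    let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                        width: outputTexture.width,
                                                        height: outputTexture.height,
                                                        mipmapped: false)
    desc.usage = [.renderTarget, .shaderRead]
    desc.storageMode = .private
    guard let intermediate = device.makeTexture(descriptor: desc) else { return }

    // Pass 1: render the scene into the intermediate texture.
    let pass1 = MTLRenderPassDescriptor()
    pass1.colorAttachments[0].texture = intermediate
    pass1.colorAttachments[0].loadAction = .clear
    pass1.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass1) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... scene draw calls ...
        encoder.endEncoding()
    }

    // Pass 2: post-process by sampling the intermediate texture into the output.
    let pass2 = MTLRenderPassDescriptor()
    pass2.colorAttachments[0].texture = outputTexture
    pass2.colorAttachments[0].loadAction = .dontCare
    pass2.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: pass2) {
        encoder.setRenderPipelineState(postPipeline)
        encoder.setFragmentTexture(intermediate, index: 0)
        // ... full-screen quad draw call ...
        encoder.endEncoding()
    }
}

On the single-versus-separate command buffer question, encoding both passes into one command buffer generally keeps ordering simple, since passes within a command buffer execute in encoding order; separate command buffers tend to matter only when the passes need to be scheduled or completed independently.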
Replies: 0 · Boosts: 0 · Views: 85 · Activity: 1d
Volumetric window anchors
Hi, we would like to create an experience where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when the app is closed and reopened, the windows are in the same positions. We can't use immersive spaces because we also want to keep access to the Shared Space. Is this possible with the current features and capabilities? If so, do you have any advice on how to achieve it? As an alternative: is it possible to open the virtual display inside an immersive space, or could we implement our own virtual display?
Replies: 1 · Boosts: 0 · Views: 140 · Activity: 2d
How to show only Spatial video using UIDocumentPickerViewController
Is there a suitable UTType that restricts UIDocumentPickerViewController to picking only spatial videos? I already know that PHPickerFilter in PHPickerViewController can do this, but I haven't found an equivalent for UIDocumentPickerViewController. Our app needs to support both ways of picking spatial videos. Is there anything I can try in UIDocumentPickerViewController to achieve this kind of filtering?
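I'm not aware of a UTType that narrows the document picker to spatial videos specifically. One fallback sketch, under the assumption that spatial videos expose a stereo multiview video characteristic on their video track, is to accept movies in the picker and then inspect each chosen file before treating it as spatial. The characteristic check below is my assumption of a reasonable signal, not a confirmed definition of "spatial video":

import UIKit
import UniformTypeIdentifiers
import AVFoundation

// Present a generic movie picker, then filter after selection.
func makeMoviePicker() -> UIDocumentPickerViewController {
    UIDocumentPickerViewController(forOpeningContentTypes: [UTType.movie, UTType.quickTimeMovie])
}

// Heuristic check for a spatial (MV-HEVC stereo) video file.
func isSpatialVideo(at url: URL) async throws -> Bool {
    let asset = AVURLAsset(url: url)
    let videoTracks = try await asset.loadTracks(withMediaType: .video)
    for track in videoTracks {
        // Assumption: spatial videos carry the stereo multiview characteristic.
        let characteristics = try await track.load(.mediaCharacteristics)
        if characteristics.contains(.containsStereoMultiviewVideo) {
            return true
        }
    }
    return false
}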
Replies: 1 · Boosts: 0 · Views: 161 · Activity: 3d
DragGesture that pivots with the user in visionOS
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space. When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me.

In the examples linked above, there are two versions of how they handle drag:

handleFixedDrag: This is similar to what I'm doing now. It uses value.gestureValue.translation3D as the basis for the drag.

handlePivotDrag: This version aims to solve the problem described above by using value.inputDevicePose3D as the basis of the gesture.

I've tried the example from handlePivotDrag, but it has one limitation. Using this version, I can move the entity around me as if it were on the inside of an arc or sphere. However, I can no longer move the entity further or closer; it stays at a similar (though not exact) distance from me while I drag.

Is there a way to combine these concepts? Ideally, I would like a gesture that behaves the same way visionOS windows do. When dragging windows, I can move them around relative to myself, pull them closer, and push them further, all while avoiding the issues described above.

Example from handleFixedDrag:

mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset

    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}

Example from handlePivotDrag:

mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation), from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()

        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
Replies: 1 · Boosts: 0 · Views: 204 · Activity: 4d
Body segmentation/occlusion on the Apple Vision Pro
Hello, I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual objects behind them, similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people. Unfortunately, I could not find any documentation about this for visionOS. Is it possible to implement body segmentation or person occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools? Thanks! Mehdi
Replies: 1 · Boosts: 0 · Views: 128 · Activity: 4d
Reading scenePhase from custom Scene
Hi, I've encountered a thread where an Apple engineer points out that there are two possible places to read scenePhase, either at the App or at the View level: https://developer.apple.com/forums/thread/757429. That thread also links to documentation which states: "If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene."

This doesn't seem to be the case on visionOS 2. I tried the following code, starting from an empty app template:

import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()
        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}

The result was that I didn't get the onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this expected behavior?
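For comparison, a minimal sketch of the other approach the linked thread mentions, reading scenePhase from a View, where the value should track the phase of the scene that hosts that particular view rather than an aggregate:

import SwiftUI

struct PhaseLoggingView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Hello")
            .onChange(of: scenePhase) { _, newValue in
                // Fires when the window hosting this view changes phase.
                print("View-level scenePhase changed to \(newValue)")
            }
    }
}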
Replies: 3 · Boosts: 0 · Views: 175 · Activity: 1w
WorldTrackingProvider stops working on device
After re-launching the immersive space in my app 5-10 times, the WorldTrackingProvider stops working. Only restarting the app will allow it to start working again. This happens only on device, not in the simulator. I get these errors when it happens:

The device_anchor can only be queried when the world tracking provider is running.
ARPredictorRemoteService <0x107cbb5e0>: Service configured with error: Error Domain=com.apple.arkit.error Code=501 "(null)"
Remote Service was invalidated: <ARPredictorRemoteService: 0x107cbb5e0>, will stop all data_providers.
ARRemoteService: remote object proxy failed with error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process.}
ARRemoteService: weak self released before invalidation

@Observable
class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func transformMatrix() async -> simd_float4x4 {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else {
            return .init()
        }
        return deviceAnchor.originFromAnchorTransform
    }

    func runArkitSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }
}

which I call from my RealityView:

.task {
    await visionPro.runArkitSession()
}
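A diagnostic sketch, not a fix for the underlying error: it extends the VisionPro class from the post to gate queryDeviceAnchor on the provider's reported state and to log the session's event stream, which may show when and why the provider stops after repeated immersive-space launches.

import ARKit
import QuartzCore

extension VisionPro {
    // Only query the device anchor while the provider reports it is running.
    func safeTransformMatrix() -> simd_float4x4? {
        guard worldTracking.state == .running else { return nil }
        return worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())?
            .originFromAnchorTransform
    }

    // Log provider state changes and authorization events.
    // Call this from a .task alongside runArkitSession().
    func monitorSessionEvents() async {
        for await event in session.events {
            switch event {
            case .dataProviderStateChanged(let providers, let newState, let error):
                print("Providers \(providers) changed to \(newState), error: \(String(describing: error))")
            case .authorizationChanged(let type, let status):
                print("Authorization for \(type) changed to \(status)")
            default:
                print("Session event: \(event)")
            }
        }
    }
}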
Replies: 3 · Boosts: 0 · Views: 158 · Activity: 1w
Issue in TabletopKit Sample scene + shared immersive space
Hello Community, I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game in which the participants can engage with their Persona avatars. I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other Spatial Persona anymore, despite the green SharePlay button indicating the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both placed at the default seat. I tried removing the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but in a completely wrong place. It seems like it isn't just my environment occluding the seats, but a flaw in the seating process as well. When I tried the FaceTime feature in the simulator on the official test scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template." So my questions are: What needs to change so that TabletopKit handles seating correctly? How can I correctly use immersive scenes in combination with TabletopKit? I kept my implementation as close as possible to the TabletopKit example, so I think looking at that codebase will be enough for now. I debugged the positions of the seats and they are placed correctly in front of their equipment; the Personas are just not placed on them.
Replies: 3 · Boosts: 4 · Views: 217 · Activity: 1w
Digital Crown press when both immersive space and additional windows are presented
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and came across behavior that appears inconsistent with the user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos. I tried pressing the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to Home View. In my test, a single press only dismissed the immersive space; I needed another press to "exit" the app and go to Home View. Is this behavior expected? I am assuming that the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
Replies: 4 · Boosts: 0 · Views: 162 · Activity: 1w
The multiview video screen turns blank when returning
I am encountering an issue while using the multiview video demo provided at https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/. Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
Replies: 1 · Boosts: 0 · Views: 146 · Activity: 1w
Setting clip shape of a RealityView
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos

I would also like to add a corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):

struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0

        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)

            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}

This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to those containers. How can I properly apply a clip shape to RealityViews like this? Thanks!
Replies: 3 · Boosts: 0 · Views: 222 · Activity: 1w
visionOS: Issue adding gestures to animated objects placed on detected planes
Hi, I have used the template code for plane detection and placing models on detected planes from https://developer.apple.com/documentation/visionos/placing-content-on-detected-planes

This source code does not copy the animations from the preview model to the placed model, so I modified it to manually copy animations and textures. A function called materialize() does this, and I was able to modify it so that the placed models now animate. The issue is when I apply gestures to them, like drag or rotate: for the models that go through this logic I'm unable to add gestures, even though I'm making sure that CollisionComponent and InputTargetComponent are set on the placed models. Has anyone been able to get this working, or is it even possible?

My materialize function:

func materialize() -> PlacedObject {
    let shapes = previewEntity.components[CollisionComponent.self]!.shapes

    // Clone render content first as we need its materials
    let clonedRenderContent = renderContent.clone(recursive: true)

    print("To be finding main model: \(descriptor.displayName)")

    // Find the main model in preview hierarchy
    func findMainModel(_ entity: Entity) -> Entity? {
        if entity.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
            print("Found main model: \(entity.name)")
            return entity
        }
        for child in entity.children {
            if child.name == descriptor.displayName.replacingOccurrences(of: " ", with: "_") {
                print("Found main model in children: \(child.name)")
                return child
            }
        }
        return nil
    }

    // Clone hierarchy preserving structure, names, and materials
    func cloneHierarchy(_ entity: Entity) -> Entity {
        print("Cloning: \(entity.name)")

        let cloned: Entity
        if let model = entity as? ModelEntity {
            // Clone with recursive false to handle children manually
            cloned = model.clone(recursive: false)
            if let clonedModel = cloned as? ModelEntity,
               let originalMaterials = model.model?.materials {
                // Preserve the original model's materials
                clonedModel.model?.materials = originalMaterials
            }
        } else {
            cloned = Entity()
        }

        // Preserve name and transform
        cloned.name = entity.name
        cloned.transform = entity.transform

        // Clone children
        for child in entity.children {
            let clonedChild = cloneHierarchy(child)
            cloned.addChild(clonedChild)
        }

        return cloned
    }

    print("=== Cloning Preview Structure ===")

    // Clone the preview hierarchy with proper structure
    let clonedStructure = cloneHierarchy(previewEntity)

    // Find and use the main model
    if let mainModel = findMainModel(clonedStructure) {
        print("Using main model for PlacedObject")

        let modelEntity: ModelEntity
        if let asModel = mainModel as? ModelEntity {
            print("Using asModel")
            modelEntity = asModel
        } else {
            modelEntity = ModelEntity()
            modelEntity.name = mainModel.name
            // Copy children and transforms
            for child in mainModel.children {
                modelEntity.addChild(child)
            }
            modelEntity.transform = mainModel.transform
        }

        // Add collision component here
        let collisionComponent = CollisionComponent(shapes: shapes, isStatic: false, filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all))
        modelEntity.components.set(collisionComponent)

        // Create the placed object
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: modelEntity, shapes: shapes)

        // Set input target on the placed object itself
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        return placedObject
    } else {
        print("Fallback to original render content")
        let placedObject = PlacedObject(descriptor: descriptor, renderContentToClone: clonedRenderContent, shapes: shapes)
        placedObject.components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))
        return placedObject
    }
}

My PlacedObject class, where the init has the recursive cloning removed because it is handled in materialize():

class PlacedObject: Entity {
    let fileName: String

    // The 3D model displayed for this object.
    private let renderContent: ModelEntity

    static let collisionGroup = CollisionGroup(rawValue: 1 << 29)

    // The origin of the UI attached to this object.
    // The UI is gravity aligned and oriented towards the user.
    let uiOrigin = Entity()

    var affectedByPhysics = false {
        didSet {
            guard affectedByPhysics != oldValue else { return }
            if affectedByPhysics {
                components[PhysicsBodyComponent.self]!.mode = .static
            } else {
                components[PhysicsBodyComponent.self]!.mode = .static
            }
        }
    }

    var isBeingDragged = false {
        didSet {
            affectedByPhysics = !isBeingDragged
        }
    }

    var positionAtLastReanchoringCheck: SIMD3<Float>?

    var atRest = false

    init(descriptor: ModelDescriptor, renderContentToClone: ModelEntity, shapes: [ShapeResource]) {
        fileName = descriptor.fileName
        // renderContent = renderContentToClone.clone(recursive: true)
        renderContent = renderContentToClone
        super.init()
        name = renderContent.name

        // Apply the rendered content's scale to this parent entity to ensure
        // that the scale of the collision shape and physics body are correct.
        scale = renderContent.scale
        renderContent.scale = .one

        // Make the object respond to gravity.
        let physicsMaterial = PhysicsMaterialResource.generate(restitution: 0.0)
        let physicsBodyComponent = PhysicsBodyComponent(shapes: shapes, mass: 1.0, material: physicsMaterial, mode: .static)
        components.set(physicsBodyComponent)
        components.set(CollisionComponent(shapes: shapes, isStatic: false, filter: CollisionFilter(group: PlacedObject.collisionGroup, mask: .all)))
        addChild(renderContent)
        addChild(uiOrigin)
        uiOrigin.position.y = extents.y / 2 // Position the UI origin in the object's center.

        // Allow direct and indirect manipulation of placed objects.
        components.set(InputTargetComponent(allowedInputTypes: [.direct, .indirect]))

        // Add a grounding shadow to placed objects.
        renderContent.components.set(GroundingShadowComponent(castsShadow: true))
    }

    required init() {
        fatalError("`init` is unimplemented.")
    }
}

Thanks
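A minimal sketch, in case it helps isolate whether the problem is the gesture attachment or hit-testing, of how gestures are commonly attached to RealityKit entities in SwiftUI. The view and the placementRoot entity are placeholders; as I understand the hit-testing requirements, the entity returned in value.entity must carry a CollisionComponent whose shapes actually cover the visible (animated, re-parented) mesh, with an InputTargetComponent on that entity or an ancestor. Since the collision shapes here were captured from previewEntity, it may be worth checking that they still line up with the cloned hierarchy after re-parenting.

import SwiftUI
import RealityKit

struct PlacedObjectsView: View {
    var body: some View {
        RealityView { content in
            // Hypothetical root under which the placement logic adds PlacedObject instances.
            let placementRoot = Entity()
            content.add(placementRoot)
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // If this never prints for the animated placed models, the issue is
                    // hit-testing (collision shapes / input target), not the gesture itself.
                    print("Dragging entity: \(value.entity.name)")
                }
        )
    }
}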
Replies: 4 · Boosts: 0 · Views: 240 · Activity: 1w