Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


For a third year, no screenshot capability for immersive visionOS apps... here's a workaround?
Since only the user can take a screenshot (using the Apple Vision Pro's top buttons), the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is: (1) ask the user to take a screenshot; (2) wait until the user taps a button indicating the screenshot has been taken; (3) open the PhotosPicker and ask the user to select that screenshot; (4) when the user presses Done, the screenshot is handed off to the app. One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as: when called, the API captures the screenshot in Apple-secured memory, displays it to the user with appropriate privacy warnings, and asks whether the user wants to (a) share this screenshot with the app, (b) cancel, or (c) retake the screenshot. If the user approves, the app receives the screenshot.
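A minimal sketch of that PhotosPicker-based workaround in SwiftUI (view and property names are illustrative, and the .screenshots filter is an assumption you may want to loosen):

import SwiftUI
import PhotosUI
import UIKit

// Sketch: ask the user to take a screenshot with the hardware buttons, then
// let them hand it back to the app through the Photos picker.
struct ScreenshotHandoffView: View {
    @State private var selection: PhotosPickerItem?
    @State private var screenshot: UIImage?

    var body: some View {
        VStack(spacing: 16) {
            Text("Press the top button and the Digital Crown together to take a screenshot, then tap below to share it with the app.")
            // The .screenshots filter narrows the picker; drop it to show all images.
            PhotosPicker("Select the screenshot", selection: $selection, matching: .screenshots)
            if let screenshot {
                Image(uiImage: screenshot)
                    .resizable()
                    .scaledToFit()
            }
        }
        .onChange(of: selection) { _, item in
            Task {
                // Load the picked item as image data and convert it for display/processing.
                if let data = try? await item?.loadTransferable(type: Data.self) {
                    screenshot = UIImage(data: data)
                }
            }
        }
    }
}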
3 replies · 0 boosts · 63 views · Jun ’25
RealityKit asset loading
With Xcode 26, loading resources with RealityKit is extremely slow. Here, my project takes almost 50 seconds to load, and I also get multiple "Hang detected" messages in the console. When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds. I'm using RealityKit asynchronous loading:

private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}

Anyone having the same problem?
2 replies · 0 boosts · 55 views · Jun ’25
Cannot extract imagePair from generated Spatial Photos
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue he did, but his answer doesn't help me here. Briefly, I am using CGImageSource to extract the paired leftImage and rightImage from one fetched spatial photo:

let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ....
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below

I can fetch left and right images from a native Spatial Photo (taken by Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos):

// imageCount is 1 when it comes to a generated spatial photo
let imageCount = CGImageSourceGetCount(source)

I searched the net and someone says the generated version has a depth image instead of a left/right pair, but I still cannot extract any depth image from the image source. The full code is below; the image-pair extraction stops at "no groups found":

func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original

    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) { imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let stereoImagePair = stereoImagePair(from: imageSource)
        completion(stereoImagePair)
    }
}

func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))

    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard let stereoGroup = groups.first(where: {
        let groupType = $0[kCGImagePropertyGroupType] as! CFString
        return groupType == kCGImagePropertyGroupTypeStereoPair
    }) else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}

Any suggestion? Thanks.
visionOS 2.4
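For the generated (2D-to-3D) photos, it may be worth checking for auxiliary depth/disparity data instead of a stereo-pair group. A small sketch, assuming the converted asset stores HEIC auxiliary data (the depthData(from:) helper is hypothetical, not an Apple API):

import ImageIO
import AVFoundation

// Sketch: when no stereo-pair group exists, try to pull auxiliary disparity or
// depth data from the same CGImageSource. Returns nil if neither is present.
func depthData(from source: CGImageSource) -> AVDepthData? {
    let auxTypes = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
    for auxType in auxTypes {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType) as? [AnyHashable: Any] {
            // Convert the raw auxiliary dictionary into AVDepthData for further processing.
            return try? AVDepthData(fromDictionaryRepresentation: info)
        }
    }
    return nil
}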
3 replies · 0 boosts · 106 views · Jun ’25
Spatial Gallery App functionality
Similar to the visionOS Spatial Gallery app, I'm developing a visionOS app that will show spatial photos and videos. Is it possible to re-create the horizontal (or vertical) scrolling functionality that shows spatial photo and spatial video previews? Does the Spatial Gallery app use private APIs to create this functionality? I've been looking at the Quick Look documentation and have been able to use PreviewApplication to show a single preview, but I don't see anything for a collection of files like the scrolling view the Spatial Gallery app presents. Any insights or direction on how this may be done are greatly appreciated.
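Not an authoritative answer, but one way to approximate this with public API is a plain SwiftUI horizontal ScrollView of thumbnails that hands the whole set of file URLs to Quick Look when one is tapped. This is only a sketch; it assumes you already have local URLs for the media and that the PreviewApplication.open(urls:selectedURL:) overload is available in your SDK, and it won't reproduce the Spatial Gallery's exact presentation:

import SwiftUI
import QuickLook

// Sketch: a horizontal strip of previews that opens Quick Look on tap.
// Assumes `urls` are file URLs of spatial photos/videos already on device.
struct SpatialMediaStrip: View {
    let urls: [URL]

    var body: some View {
        ScrollView(.horizontal) {
            HStack(spacing: 24) {
                ForEach(urls, id: \.self) { url in
                    Button {
                        // Hand the whole collection to Quick Look, starting at the tapped item.
                        _ = PreviewApplication.open(urls: urls, selectedURL: url)
                    } label: {
                        // Placeholder thumbnail; real thumbnails could come from QuickLookThumbnailing.
                        Label(url.lastPathComponent, systemImage: "photo")
                    }
                }
            }
            .padding()
        }
    }
}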
2 replies · 0 boosts · 117 views · Jun ’25
Looking for a way to implement the video display effect in Apple's 'Spatial Gallery'
Hi guys, I noticed that Apple created a really engaging visual effect for browsing spatial videos in the Spatial Gallery app. The video appears embedded in a glass panel with glowing edges, and it even shows a parallax effect as you move around. When I tried to display a stereo video using RealityView, however, the video entity always floats above the panel. May I ask how visionOS implements this effect? Is there any approach, or example code, I could use to achieve this in my own project? Thanks!
3 replies · 1 boost · 182 views · Jun ’25
Metal (Compositor Services) or RealityKit on visionOS
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a higher degree of control. I am wondering whether using Compositor Services gives you fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).
4 replies · 0 boosts · 159 views · Jun ’25
Creating a voxel mesh and rendering it using Metal within a RealityKit ImmersiveView
Hi everyone, I'm creating an educational app that allows doing computational design in an immersive environment with the Vision Pro. The app is free and can be found here: https://apps.apple.com/us/app/arcade-topology/id6742103633 The problem I have is that the mesh of voxels I currently create is built with ModelEntity, and I recently read that this is horrible for scalability; I already start to see issues when I try to use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop. Thanks! —Alejandro
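Before dropping to Metal, one thing that may be worth trying is collapsing all voxels into a single MeshResource on one ModelEntity, so RealityKit draws one mesh instead of thousands of entities. A rough sketch under that assumption (makeVoxelMesh is a hypothetical helper; per-voxel opacity would then have to come from vertex data, a custom material, or a LowLevelMesh/Metal path rather than per-entity materials):

import RealityKit
import simd

// Sketch: merge many voxels into one MeshResource instead of one ModelEntity per voxel.
// `centers` are voxel centers; `size` is the voxel edge length.
func makeVoxelMesh(centers: [SIMD3<Float>], size: Float) throws -> MeshResource {
    // Unit-cube corners, scaled and offset per voxel.
    let corners: [SIMD3<Float>] = [
        [-0.5, -0.5, -0.5], [0.5, -0.5, -0.5], [0.5, 0.5, -0.5], [-0.5, 0.5, -0.5],
        [-0.5, -0.5,  0.5], [0.5, -0.5,  0.5], [0.5, 0.5,  0.5], [-0.5, 0.5,  0.5]
    ]
    // Two triangles per face, 12 triangles per cube (indices into `corners`).
    let cubeIndices: [UInt32] = [
        0, 2, 1,  0, 3, 2,   // back
        4, 5, 6,  4, 6, 7,   // front
        0, 1, 5,  0, 5, 4,   // bottom
        3, 7, 6,  3, 6, 2,   // top
        0, 4, 7,  0, 7, 3,   // left
        1, 2, 6,  1, 6, 5    // right
    ]

    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    positions.reserveCapacity(centers.count * 8)
    indices.reserveCapacity(centers.count * 36)

    for (voxel, center) in centers.enumerated() {
        let base = UInt32(voxel * 8)
        positions.append(contentsOf: corners.map { center + $0 * size })
        indices.append(contentsOf: cubeIndices.map { base + $0 })
    }

    var descriptor = MeshDescriptor(name: "voxels")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Usage sketch: one entity for the whole grid.
// let entity = ModelEntity(mesh: try makeVoxelMesh(centers: centers, size: 0.02),
//                          materials: [SimpleMaterial(color: .white, isMetallic: false)])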
3 replies · 0 boosts · 104 views · Jun ’25
Issue: Closing Bounded Volume Never Re-Opens
Greetings. I am having this issue with a Unity PolySpatial visionOS app. We have our main Bounded Volume for our app, plus other native UI windows that appear when we interact with objects in the Bounded Volume. If a user closes our main Bounded Volume, sometimes it quits the app and sometimes it doesn't. If we go back to the home screen and reopen the app, our main Bounded Volume doesn't always appear; only the native UI windows we left open are visible, yet we can sometimes still hear sounds playing in our Bounded Volume. What solutions are there to make sure our Bounded Volume always appears when the app is open?
1 reply · 0 boosts · 84 views · Jun ’25
SpatialEventGesture Not Working to Show Hidden Menu in Immersive Panorama View - visionOS
Problem Description
I'm developing a Vision Pro app that displays 360° panoramic photos in a full immersive space. I have a floating menu that auto-hides after 5 seconds, and I want users to be able to show the menu again using spatial gestures (particularly pinch gestures) when it's hidden. However, the SpatialEventGesture implementation is not working as expected: the menu doesn't appear when users perform pinch gestures or other spatial interactions in the immersive space.

Current Implementation
Here's the relevant gesture detection code in my ImmersiveView:

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            // RealityView content setup with panoramic sphere...
            let rootEntity = Entity()
            content.add(rootEntity)
            // Load panoramic content here...
        }
        // Using SpatialEventGesture to handle multiple spatial gestures
        .gesture(
            SpatialEventGesture()
                .onEnded { eventCollection in
                    // Check menu visibility state
                    if !appModel.isPanoramaMenuVisible {
                        // Iterate through event collection to handle various gestures
                        for event in eventCollection {
                            switch event.kind {
                            case .touch:
                                print("Detected spatial touch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .indirectPinch:
                                print("Detected spatial pinch gesture, showing menu")
                                showMenuWithGesture()
                                return
                            case .pointer:
                                print("Detected spatial pointer gesture, showing menu")
                                showMenuWithGesture()
                                return
                            @unknown default:
                                print("Detected unknown spatial gesture: \(event.kind)")
                                showMenuWithGesture()
                                return
                            }
                        }
                    }
                }
        )
        // Keep long press gesture as backup
        .simultaneousGesture(
            LongPressGesture(minimumDuration: 1.5)
                .onEnded { _ in
                    if !appModel.isPanoramaMenuVisible {
                        print("Detected long press gesture, showing menu")
                        showMenuWithGesture()
                    }
                }
        )
    }

    private func showMenuWithGesture() {
        if !appModel.isPanoramaMenuVisible {
            appModel.showPanoramaMenu()
            if !appModel.windowExists(id: "PanoramaMenu") {
                openWindow(id: "PanoramaMenu", value: "menu")
            }
        }
    }
}

What I've Tried
1. Multiple SpatialTapGesture approaches: originally used multiple .gesture() modifiers with SpatialTapGesture(count: 1) and SpatialTapGesture(count: 2), but realized they override each other.
2. SpatialEventGesture implementation: switched to SpatialEventGesture to handle multiple event types (.touch, .indirectPinch, .pointer), but pinch gestures still don't trigger the menu.
3. Added debugging: console logs show that the gesture callbacks are never called when performing pinch gestures in the immersive space.
4. Backup LongPressGesture: added a simultaneous long press gesture as a backup, which also doesn't work consistently.

Expected Behavior
When the panorama menu is hidden (after the 5-second auto-hide), users should be able to:
- perform a pinch gesture (indirect pinch) to show the menu
- tap in space to show the menu
- use other spatial gestures to show the menu

Questions
1. Is SpatialEventGesture the correct approach for detecting gestures in a full immersive RealityView?
2. Are there any special considerations for gesture detection when the RealityView contains a large panoramic sphere that might be intercepting gestures?
3. Should I be using a different gesture approach for visionOS immersive spaces?
4. Is there a way to ensure gestures work even when the RealityView content (panoramic sphere) might be blocking them?

Environment
- Xcode 16.1
- visionOS 2.5
- Testing on a Vision Pro device
- App uses SwiftUI + RealityKit

Any guidance on the proper way to implement spatial gesture detection in visionOS immersive spaces would be greatly appreciated!

Additional Context
The app manages multiple windows, and the gesture detection should work specifically when in the immersive panorama mode with the menu hidden. Thank you for any help or suggestions!
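One thing that often matters here, offered as a suggestion rather than a confirmed fix: SwiftUI spatial gestures only fire when the pinch or tap lands on an entity that has an InputTargetComponent and collision shapes. If the panorama sphere has neither, there is nothing for the gesture to hit. A minimal sketch of making the sphere itself the gesture target (names and sizes are illustrative):

import SwiftUI
import RealityKit

// Sketch: make the panorama sphere itself the gesture target.
// Wire the .onEnded handler to your own menu logic (e.g. showMenuWithGesture()).
struct PanoramaGestureView: View {
    @State private var sphere = ModelEntity()

    var body: some View {
        RealityView { content in
            // Build (or load) the panorama sphere, then make it input-targetable.
            sphere = ModelEntity(mesh: .generateSphere(radius: 10))
            sphere.components.set(InputTargetComponent())
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 10)]))
            content.add(sphere)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToEntity(sphere)
                .onEnded { _ in
                    print("Tap landed on the panorama sphere; show the menu here")
                }
        )
    }
}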
1 reply · 0 boosts · 146 views · Jun ’25
Launching a Unity fully immersive game from SwiftUI
I am trying to launch a fully immersive Unity game from a SwiftUI view. The game uses Metal rendering with Compositor Services. I added the Unity Xcode project to the workspace and added the necessary bridge code. When I click the button that calls ufw?.showUnityWindow(), it does not start, and I get the following in the console: AR session failed to start after 5 seconds. Is the app configured to use an immersive space?
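That console message usually points at scene configuration rather than the Unity bridge itself: fully immersive Metal content needs an ImmersiveSpace hosting a CompositorLayer, and that space has to be opened before rendering starts. A rough illustration of what the native side can look like (identifiers and the startRenderLoop placeholder are assumptions; Unity's generated bridge may wire this up differently):

import SwiftUI
import CompositorServices

@main
struct MyImmersiveApp: App {
    var body: some Scene {
        WindowGroup {
            LaunchView() // 2D window with the button that opens the immersive space.
        }

        // Fully immersive Metal content lives in an ImmersiveSpace hosting a CompositorLayer.
        ImmersiveSpace(id: "UnityImmersiveSpace") {
            CompositorLayer { layerRenderer in
                // Hand the LayerRenderer to the rendering engine (Unity's bridge in this case).
                startRenderLoop(layerRenderer)
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}

// Placeholder for whatever kicks off the Metal render loop on its own thread.
func startRenderLoop(_ layerRenderer: LayerRenderer) { /* ... */ }

struct LaunchView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button("Enter immersive game") {
            Task { _ = await openImmersiveSpace(id: "UnityImmersiveSpace") }
        }
    }
}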
2 replies · 0 boosts · 91 views · Jun ’25
white gap between objects in RealityView
I want to display a huge image in RealityView in 3D space on Vision Pro. Of course, instead of one giant file I'm using a lot of big images. To achieve this, I generate multiple planes exactly beside each other and put one image on each. Although the planes are exactly beside each other, there is still a white gap between them (image below). Does anybody know how to fix this issue?
0 replies · 0 boosts · 109 views · May ’25
Accessing LiDAR Depth Data and Scene Reconstruction on Apple Vision Pro
Hello, I'm developing a visionOS application for Apple Vision Pro that aims to scan unknown physical objects, capture their 3D data (such as meshes or point clouds), and export them as 3D models. Ideally, I'd also like to visualize these reconstructions in real time within the headset. This functionality is similar to what's available in Reality Composer on iPad and iPhone, but I'm seeking to implement it natively on Vision Pro. I've reviewed the visionOS documentation but haven't found clear guidance on accessing LiDAR depth data or performing scene reconstruction. Specifically, I'm interested in:
1. Accessing LiDAR or depth data from Vision Pro's sensors.
2. Utilizing ARKit's scene reconstruction capabilities on visionOS.
3. Exporting captured 3D data as models (e.g., USDZ or OBJ formats).
Are there APIs or frameworks in visionOS that support these features?
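For what it's worth, visionOS exposes scene reconstruction through ARKit data providers rather than raw LiDAR frames (raw depth isn't available to apps). A minimal sketch of receiving reconstruction meshes with ARKitSession and SceneReconstructionProvider, assuming an open immersive space and granted world-sensing permission; exporting to USDZ/OBJ would be a separate step built on the MeshAnchor geometry:

import ARKit

// Sketch: collect scene-reconstruction meshes while the app is in an immersive space.
final class SceneReconstructionModel {
    private let session = ARKitSession()
    private let sceneReconstruction = SceneReconstructionProvider()

    func start() async {
        guard SceneReconstructionProvider.isSupported else { return }
        do {
            try await session.run([sceneReconstruction])
            for await update in sceneReconstruction.anchorUpdates {
                let meshAnchor = update.anchor
                // MeshAnchor.geometry holds vertices/faces you can visualize
                // in RealityKit or convert for export.
                print("Mesh anchor \(meshAnchor.id), event: \(update.event)")
            }
        } catch {
            print("Failed to run scene reconstruction: \(error)")
        }
    }
}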
0 replies · 2 boosts · 100 views · May ’25
How to get the transform of a joint in a skeleton while it is animating
I have an entity that was created using Mixamo, and it has an animation. After the animation completes, the mesh of the robot is not where the entity is positioned. I want to do something like this: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transformations applied to any of the children of this root of the model, which means the transformations are applied to the skeleton by the playing of animations. Is there a way I can apply the final position of the skeleton's root to the root entity, so the entity is positioned where the animation ended, just before the next animation plays?
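One approach worth trying (a sketch, not a verified recipe for Mixamo rigs): when the playback-completed event fires, read the skeleton's root joint transform from the ModelEntity's jointTransforms, convert it to world space, and apply it to the root entity before starting the next clip. The joint-name matching below is a placeholder; inspect jointNames for your rig:

import RealityKit
import simd

// Sketch: after an animation finishes, move the root entity to where the skeleton's
// root joint ended up. Retain the returned subscription (e.g. in @State), or it is cancelled.
func snapEntityToSkeleton(rootEntity: Entity,
                          modelEntity: ModelEntity,
                          in scene: RealityKit.Scene) -> EventSubscription {
    scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: modelEntity) { _ in
        // "root" is a placeholder suffix; print modelEntity.jointNames to find your rig's root/hips joint.
        guard let jointIndex = modelEntity.jointNames.firstIndex(where: { $0.hasSuffix("root") }) else {
            return
        }
        // The root joint's transform is expressed relative to the model entity itself.
        let jointMatrix = modelEntity.jointTransforms[jointIndex].matrix
        let jointWorldMatrix = modelEntity.transformMatrix(relativeTo: nil) * jointMatrix
        rootEntity.setTransformMatrix(jointWorldMatrix, relativeTo: nil)
        // You may also need to clear the corresponding offset from the skeleton pose
        // afterwards so the next clip doesn't apply it twice.
    }
}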
0 replies · 0 boosts · 91 views · May ’25
How to make .blur(radius:) visually affect RealityView content?
According to the official documentation, the .blur(radius:) modifier applies a Gaussian blur to a view. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred. Here's the test code:

struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}

My question: how can I make .blur(radius:) visually affect the content rendered in a RealityView? Can you provide a working example where .blur() visually affects any part of a RealityView? Thanks!
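A partial workaround, in case it helps: modifiers applied inside the Attachment builder itself do take effect, because attachments are regular SwiftUI views; only the RealityKit-rendered 3D content ignores the outer .blur. For example, in the snippet above:

// Partial workaround sketch: blur the attachment's own SwiftUI content.
// The 3D box is unaffected; blurring it would need a material or post-processing approach.
Attachment(id: "2dView") {
    Text("Above the Box")
        .font(.title)
        .blur(radius: 8)
}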
0 replies · 0 boosts · 73 views · May ’25
How to add visual thickness to a glass background view
Hi guys, in visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness. However, I noticed that in an app built by Apple, a glass background view viewed from the side does appear to have thickness. I tried adding .frame(depth:) to a glass background view, but it shows up as two separate layers separated by the depth value. My question: is there a view modifier that adds visual thickness to a glass background view, as shown in the picture? Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
0 replies · 0 boosts · 81 views · May ’25
Collision Detection Fails After Anchoring ModelEntity to Hand in visionOS
I'm starting my journey in developing an immersive app for visionOS. I've been making steady progress, but I've encountered a specific challenge that I haven't been able to resolve. I created two ModelEntity objects — a sphere and a cube — and added a DragGesture to the cube. When I drag the cube over the sphere, the two collide correctly, and the collision is logged in the console. So far, everything works as expected. However, when I try to anchor the cube to my hand, the collision stops working. It's as if the cube loses its ability to detect collisions once it's anchored. Any guidance or clarification on this behavior would be greatly appreciated.

//  ImmersiveView.swift
//  estudos_vision
//
//  Created by Lailan Rogerio Rodrigues Matos on 15/05/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @State private var session: SpatialTrackingSession?
    @State private var box = ModelEntity()
    @State private var subs: [EventSubscription] = []
    @State private var ballEntity: Entity?

    var body: some View {
        RealityView { content in
            // Load initial content from the RealityKit scene.
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Create and run a spatial tracking session.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            _ = await session.run(configuration)
            self.session = session

            // Create a red box.
            let boxMesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .red, isMetallic: false)
            box = ModelEntity(mesh: boxMesh, materials: [material])
            box.position.y += 0.15 // Position the box slightly above the origin.

            // Configure the box for user interaction and physics.
            box.components.set(InputTargetComponent(allowedInputTypes: .indirect)) // Make it interactive.
            box.generateCollisionShapes(recursive: false) // Generate collision shapes for physics.
            box.components.set(PhysicsBodyComponent( // Add physics behavior.
                massProperties: .default,
                material: .default,
                mode: .kinematic // Use kinematic mode so it can be moved by user interaction.
            ))
            box.components.set(GroundingShadowComponent(castsShadow: true)) // Add a shadow.
            //content.add(box) // commented out to add to hand anchor instead

            // Create a left hand anchor and add the box as a child.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
            handAnchor.addChild(box)
            content.add(handAnchor) // Add the hand anchor to the scene.

            // Create a sphere.
            let ball = ModelEntity(mesh: .generateSphere(radius: 0.15))
            ball.position = [0.0, 1.5, -1.0] // Initial position of the ball.
            ball.generateCollisionShapes(recursive: false) // Add collision.
            ball.name = "Sphere"
            content.add(ball)
            ballEntity = ball

            // Subscribe to collision events between the box and other entities.
            let event = content.subscribe(to: CollisionEvents.Began.self, on: box) { ce in
                print("Collision between \(ce.entityA.name) and \(ce.entityB.name) occurred")
                //ce.entityA.removeFromParent() // removes the colliding object
                //ce.entityB.removeFromParent()
            }
            Task { subs.append(event) }
        }
        // Add a drag gesture to the box, allowing the user to move it.
        .gesture(
            DragGesture()
                .targetedToEntity(box) // Target the drag gesture to the box.
                .onChanged({ value in
                    // Update the position of the box based on the drag gesture.
                    box.position = value.convert(value.location3D, from: .local, to: box.parent!)
                })
        )
    }
}

#Preview(immersionStyle: .full) {
    ImmersiveView()
        .environment(AppModel())
}
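A possibility worth checking, though I haven't verified it against this exact project: an AnchorEntity runs its own isolated physics simulation by default, so bodies parented to the hand anchor may stop interacting with bodies outside it. If your SDK has AnchoringComponent's physicsSimulation option (visionOS 2 and later), you can opt out of that isolation where the hand anchor is created in the code above:

// Sketch: keep the hand-anchored box in the same physics/collision simulation
// as the rest of the scene by disabling the anchor's isolated simulation.
let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
handAnchor.anchoring.physicsSimulation = .none // default is .isolated
handAnchor.addChild(box)
content.add(handAnchor)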
1 reply · 0 boosts · 74 views · May ’25