Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Developer Capture and microphone input for audio-based apps
Hi Apple engineers, I'm working on an app that takes the incoming microphone audio and gives the user visual feedback about it. I would like to use Reality Composer Pro's Developer Capture to get a high-quality recording of the app and its use cases for the App Store, but whenever a capture is in progress, my app stops receiving the incoming audio. It seems as if the microphone is 'hijacked' during the screen capture, which prevents me from demonstrating the app's core features. Could you please advise on how to proceed?
Replies: 2 · Boosts: 0 · Views: 551 · Created: Jan ’25
Screenshot using visionOS (Code) on Apple Vision Pro
I want to create a screenshot (static image) of the current view on the Apple Vision Pro from code in visionOS. Unfortunately, I can't find a way to achieve this. The only option I've found so far is Reality Composer Pro's Developer Capture, but since I want to accomplish this directly in code, that approach is not an option for me.
Replies: 1 · Boosts: 0 · Views: 339 · Created: Jan ’25
ShaderGraphMaterial on entity
Hi, I'm trying to make a 360° stereo viewer, and I have made a ShaderGraphMaterial in Reality Composer Pro. I'm trying to use that material on an inverted sphere which is generated in Swift. When I try to attach the material I get this error: "Type of expression is ambiguous without a type annotation". Here is the code (sorry, I'm a noob =) ):

```swift
import SwiftUI
import RealityKit
import RealityKitContent
import PhotosUI

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            guard let skyBoxEntity = await createSkybox() else { return }
            content.add(skyBoxEntity)
        }
    }
}

private func createSkybox() async -> Entity? {
    var matX = try? await ShaderGraphMaterial(named: "/Root/Mat_Stereo360", from: "360Stereo.usda", in: realityKitContentBundle)
    let sphere = await MeshResource.generateSphere(radius: 1000)
    let entity = await Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [matX])). // ERROR HERE: Type of expression is ambiguous without a type annotation
    //entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}
```

I hope someone can help me =) Best regards, Kim
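[Editor's note, a hedged reading rather than a confirmed answer: `try? await ShaderGraphMaterial(...)` produces an optional, so `materials: [matX]` passes `[ShaderGraphMaterial?]` where non-optional materials are expected, and the stray `.` after the `set(...)` call compounds the diagnostic. Unwrapping the material first should compile; a minimal sketch:]

```swift
@MainActor
private func createSkybox() async -> Entity? {
    // Unwrap the material: try? yields ShaderGraphMaterial?, and
    // ModelComponent wants non-optional materials.
    guard let material = try? await ShaderGraphMaterial(
        named: "/Root/Mat_Stereo360",
        from: "360Stereo.usda",
        in: realityKitContentBundle
    ) else { return nil }

    let sphere = MeshResource.generateSphere(radius: 1000)
    let entity = Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [material]))
    entity.scale *= .init(x: -1, y: 1, z: 1) // invert the sphere so the texture faces inward
    return entity
}
```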
Replies: 2 · Boosts: 0 · Views: 588 · Created: Jan ’25
Request for gaze data in fully immersive Metal apps
Hi, We are trying to port our Unity app from other XR devices to Vision Pro, so it is much easier for us to use the fully immersive Metal rendering layer. And to stay true to the platform, we want to keep the gaze/pinch interaction system. But we just noticed that, unlike PolySpatial XR apps, visionOS XR in Metal does not provide gaze info unless the user is actively pinching, which prevents any attempt to give visual feedback on what they are looking at (buttons, etc.). Is this on Apple's roadmap? Thanks
Replies: 2 · Boosts: 0 · Views: 585 · Created: Jan ’25
Automatic Plane Measurements like in the Apple Measure App
I’m working on an iOS app that needs to measure the area of planes or surfaces, like the length and width of objects, just as the Apple Measure app does. I’ve been exploring ARKit, but I’m curious whether there are APIs or techniques that can help automate the process of detecting and measuring planes, specifically from a top-down view: for example, measuring a box’s width and length. I have attached a screenshot and a video of the Apple Measure app doing this. Does Apple provide any tools or APIs for this, or are there best practices I should know about? I’d love to hear from anyone who’s tackled something similar. Video: https://drive.google.com/file/d/1BxM7fIbFxsCsYwY7w8ZxIeq_4WTGkkwA/view?usp=drive_link
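[Editor's note: for the automatic-detection part, a hedged sketch of what ARKit provides out of the box. ARPlaneAnchor reports each detected plane's live extent (planeExtent, iOS 16+), which gives a width/length readout for flat surfaces such as a box top; edge-accurate measurement like the Measure app still needs your own refinement logic.]

```swift
import ARKit

// Sketch: built-in plane detection reports each plane's size in meters.
class PlaneMeasurer: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // planeExtent is the plane's current fitted size (iOS 16+).
            print("Plane \(plane.identifier): \(plane.planeExtent.width) m x \(plane.planeExtent.height) m")
        }
    }
}
```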
Replies: 2 · Boosts: 0 · Views: 725 · Created: Jan ’25
Vision Pro MR app issue: location reset
We use Unity 6 + visionOS 2.2 to develop an MR application (app mode: RealityKit with PolySpatial). In actual testing, we found that when I move more than 80–100 meters away from the position where the application started, my current position is reset to Vector3.zero, which makes the application experience very bad. Is anyone else experiencing the same problem? Is there a solution?
Replies: 0 · Boosts: 0 · Views: 226 · Created: Jan ’25
RotateGesture3D auto constrained to axis
Hi, On visionOS, to manage entity rotation we can rely on RotateGesture3D. With the constrainedToAxis parameter we can even allow rotation only on the x, y, or z axis, or on combinations of them. What I want to know is whether it is possible to constrain the rotation axis automatically. Let me explain: the functionality I would like to implement is to constrain the rotation to one axis only once the user has started the gesture. The initial motion of the gesture should tell us which axis the user wants to rotate around. This would be equivalent to automatically activating a constraint on one of the axes, as if we had defined the gesture on that axis:

```swift
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
```

Is it possible to do this? If so, what would be the best way to do it? A code example would be greatly appreciated. Regards, Tof
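[Editor's note: a minimal sketch of one way this could be approximated, under assumptions; nothing here is a built-in "auto constraint". The idea: run the gesture unconstrained, lock onto the world axis closest to the first meaningful rotation delta, then keep only the twist about that axis (swing-twist decomposition). Type and property names follow the SwiftUI/Spatial APIs as I understand them.]

```swift
import SwiftUI
import RealityKit
import Spatial
import simd

// Sketch: lock rotation to the dominant axis of the user's initial motion.
struct AutoAxisRotate: ViewModifier {
    @State private var lockedAxis: SIMD3<Double>? = nil
    @State private var startOrientation: simd_quatf? = nil

    func body(content: Content) -> some View {
        content.gesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    let rotation = value.gestureValue.rotation
                    if startOrientation == nil {
                        startOrientation = value.entity.orientation
                    }
                    // Wait for a meaningful delta, then lock the nearest world axis.
                    if lockedAxis == nil, rotation.angle.radians > 0.05 {
                        let a = SIMD3(rotation.axis.x, rotation.axis.y, rotation.axis.z)
                        let axes: [SIMD3<Double>] = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
                        lockedAxis = axes.max { abs(simd_dot(a, $0)) < abs(simd_dot(a, $1)) }
                    }
                    guard let u = lockedAxis, let start = startOrientation else { return }
                    // Swing-twist decomposition: keep only the twist about u.
                    let q = rotation.quaternion
                    let twist = 2 * atan2(simd_dot(q.imag, u), q.real)
                    value.entity.orientation = start * simd_quatf(angle: Float(twist), axis: SIMD3<Float>(u))
                }
                .onEnded { _ in
                    lockedAxis = nil
                    startOrientation = nil
                }
        )
    }
}
```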
Replies: 3 · Boosts: 0 · Views: 430 · Created: Jan ’25
RealityKit entities appear to lag in a Full or Progressive immersive space when opened with the environment turned on
PLATFORM AND VERSION: visionOS. Development environment: Xcode 16.2, macOS 15.2. Run-time configuration: visionOS 2.3 (on a real device, not the simulator).

Please, someone confirm I'm not crazy and this issue is actually out of my control. I spent hours trying to fix my app and running profiles because I thought it was a performance issue in my app. I finally considered the chance that it was an issue with the API itself and made a sample app to isolate the problem, and the issue still exists in it.

The issue: when a model entity moves around in a full space that was launched while the system environment immersion was turned up, the entity looks very choppy as it moves. If you take off the headset while still in the space and put it back on, the entity moves smoothly again, as it should. Alternatively, you can leave the space and turn the system environment immersion all the way down before launching the full space again; the entity then also moves smoothly. If you launch with a mixed immersion style instead of a full one, this issue never arises. The issue only arises if you launch the space with either a full or progressive style while the system immersion level is turned up.

STEPS TO REPRODUCE
https://github.com/nathan-707/ChoppyEntitySample
Open my test project; it's a small, modified visionOS project template that shows the issue clearly. Otherwise:
- Create an immersive space with either the full or progressive immersion style.
- Set up an entity in kinematic mode and apply a velocity to it so it passes over your head when the space appears.
- If you opened the space while the Apple Vision Pro's system environment was turned up, the entity will look choppy.
- If you take the headset off while in the space and put it back on, the entity will move smoothly again.
- Alternatively, if you open the space with the system immersion environment all the way down, you will not run into the issue.
- Again, the issue also does not happen if the space is launched in the mixed style.
Replies: 1 · Boosts: 0 · Views: 548 · Created: Jan ’25
Dynamically assigning texture resource to ShaderGraphMaterial on VisionOS
I implemented a ShaderGraphMaterial and tried to load it from my usda scene with ShaderGraphMaterial(named:from:in:). I want to dynamically set a TextureResource on that material, so I wanted to expose the texture as a uniform input of the ShaderGraphMaterial. But RCP's Shader Graph doesn't seem to support a texture input as a parameter (shown in the screenshot I attached). And at the code level, ShaderGraphMaterial doesn't expose a way to set TextureResources either: its parameterNames is an empty array if I haven't set any custom input parameters. The texture I get comes from my backend, so it really can't be saved to a file and loaded again (that would be too awkward). Is there something I am missing?
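[Editor's note, a hedged pointer: if the graph allows promoting an Image node's input to a named material input (the post's screenshot suggests it may not in the poster's RCP version), ShaderGraphMaterial can receive an in-memory texture via setParameter(name:value:) with a .textureResource value, so nothing has to touch disk. A minimal sketch, where "BaseTexture" is an assumed parameter name:]

```swift
import RealityKit
import CoreGraphics

// Sketch: push a backend-delivered CGImage into an assumed promoted
// texture input named "BaseTexture" on a ShaderGraphMaterial.
func applyBackendImage(_ cgImage: CGImage, to material: inout ShaderGraphMaterial) throws {
    // Build the texture from memory; no intermediate file needed.
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    try material.setParameter(name: "BaseTexture", value: .textureResource(texture))
}
```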
Replies: 1 · Boosts: 0 · Views: 557 · Created: Jan ’25
Barcode anchor jitter on Vision Pro due to invalid values from the enterprise barcode scanning API
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.

```swift
import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }

            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])

            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")

                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }

        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z), materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))

        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)

        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
```
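[Editor's note: not a fix for the invalid values themselves, but a hedged mitigation sketch; the thresholds are illustrative, not Apple-recommended. The idea is to reject implausible jumps in the anchor transform and low-pass filter the rest before moving the indicator entity.]

```swift
import simd

// Sketch: outlier rejection plus exponential smoothing for anchor positions.
struct AnchorSmoother {
    private var lastPosition: SIMD3<Float>?
    let maxJump: Float = 0.5 // meters; discard updates that "teleport"
    let alpha: Float = 0.2   // smoothing factor in (0, 1]; lower = smoother

    mutating func smoothed(_ transform: simd_float4x4) -> SIMD3<Float>? {
        let p = SIMD3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
        guard let last = lastPosition else { lastPosition = p; return p }
        // Reject outliers; a production version would also time this out.
        if simd_distance(p, last) > maxJump { return last }
        let s = simd_mix(last, p, SIMD3(repeating: alpha))
        lastPosition = s
        return s
    }
}
```

Usage would be per anchor ID: keep one smoother per barcode, feed it `anchor.originFromAnchorTransform`, and position the plane at the returned value.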
Replies: 3 · Boosts: 0 · Views: 538 · Created: Jan ’25
generateConvex(from mesh: MeshResource) crashes instead of throwing a Swift error
I have a MeshResource and I would like to create a collision component from it.

```swift
let childBounds = child.visualBounds(relativeTo: self)
var childShape: ShapeResource
do {
    // Crashes on the following line instead of throwing a Swift error.
    childShape = try await ShapeResource.generateConvex(from: childModel.mesh)
} catch {
    childShape = ShapeResource.generateBox(size: childBounds.extents)
    childShape = childShape.offsetBy(translation: childBounds.center)
}
```

Based on this document (https://developer.apple.com/documentation/realitykit/shaperesource/generateconvex(from:)-6upj9): "Will throw an error if mesh does not define a nonempty convex volume. For example, will fail if all the vertices in mesh are coplanar." But the method crashes the app instead of throwing a Swift error:

```
Incident Identifier: 35CD58F8-FFE3-48EA-85D3-6D241D8B0B4C
CrashReporter Key:   FE6790CA-6481-BEFD-CB26-F4E27652BEAE
Hardware Model:      Mac15,11
...
Version:             1.0 (1)
Code Type:           ARM-64 (Native)
Role:                Foreground
Parent Process:      launchd_sim [2057]
Coalition:           com.apple.CoreSimulator.SimDevice.85A2B8FA-689F-4237-B4E8-DDB93460F7F6 [1496]
Responsible Process: SimulatorTrampoline [910]

Date/Time:           2025-01-26 16:13:17.5053 +0800
Launch Time:         2025-01-26 16:13:09.5755 +0800
OS Version:          macOS 15.2 (24C101)
Release Type:        User
Report Version:      104

Exception Type:  EXC_BREAKPOINT (SIGTRAP)
Exception Codes: 0x0000000000000001, 0x00000001abf841d0
Termination Reason: SIGNAL 5 Trace/BPT trap: 5
Terminating Process: exc handler [17316]

Triggered by Thread: 0

Thread 0 Crashed:
0  CoreRE             0x1abf841d0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedron + 232
1  CoreRE             0x1abf845f0 REAssetManagerCollisionShapeAssetCreateConvexPolyhedronFromMesh + 868
2  RealityFoundation  0x1d25613bc static ShapeResource.generateConvex(from:) + 148
```

Here is the message on the app console from Xcode:

```
/Library/Caches/com.apple.xbs/Sources/REKit_Sim/ThirdParty/PhysX/physx/source/physxcooking/src/convex/QuickHullConvexHullLib.cpp (935) : internal error : QuickHullConvexHullLib::findSimplex: Simplex input points appers to be coplanar.
Failed to cook convex mesh (0x3)
assertion failure: 'convexPolyhedronShape != nullptr' (REAssetManagerCollisionShapeAssetCreateConvexPolyhedron:line 356)
Bad parameters passed for convex mesh creation.
```

The above crash happened on a visionOS simulator (visionOS 2.2 (22N840)).
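[Editor's note: until the trap is fixed, one hedged workaround is to screen the vertices for coplanarity yourself and take the box fallback preemptively, since the crash happens inside the convex cooking step. A sketch; the epsilon and the rank test below are illustrative heuristics, not Apple's internal check.]

```swift
import RealityKit
import simd

// Sketch: avoid handing (near-)coplanar geometry to generateConvex(from:).
func safeConvexShape(for model: ModelComponent, bounds: BoundingBox) async -> ShapeResource {
    // Collect every vertex position from the mesh.
    var pts: [SIMD3<Float>] = []
    for m in model.mesh.contents.models {
        for part in m.parts { pts.append(contentsOf: part.positions.elements) }
    }
    if spansVolume(pts), let shape = try? await ShapeResource.generateConvex(from: model.mesh) {
        return shape
    }
    // Fallback: a box matched to the visual bounds.
    return ShapeResource.generateBox(size: bounds.extents).offsetBy(translation: bounds.center)
}

// True if the points are not (nearly) coplanar.
func spansVolume(_ pts: [SIMD3<Float>], epsilon: Float = 1e-5) -> Bool {
    guard pts.count >= 4 else { return false }
    let p0 = pts[0]
    // Find two independent edge vectors, then test the plane they define.
    guard let p1 = pts.first(where: { simd_length($0 - p0) > epsilon }) else { return false }
    let u = simd_normalize(p1 - p0)
    guard let p2 = pts.first(where: { simd_length(simd_cross(u, $0 - p0)) > epsilon }) else { return false }
    let n = simd_normalize(simd_cross(u, p2 - p0))
    return pts.contains { abs(simd_dot(n, $0 - p0)) > epsilon }
}
```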
Replies: 1 · Boosts: 0 · Views: 444 · Created: Jan ’25
Is there a way to scale a RealityKit ShapeResource?
I can generate a ShapeResource from a RealityKit entity's extents. Could I apply some scaling to the generated shape? Is there a way to do that?

```swift
// model is a ModelComponent and bounds is a BoundingBox
var shape = try await ShapeResource.generateConvex(from: model.mesh)
shape = shape.offsetBy(translation: bounds.center)
// How can I scale the shape to fit within the bounds?
```

The following API only provides rotation and translation support, and I cannot find scale support:

```swift
offsetBy(rotation: simd_quatf = simd_quatf(ix: 0, iy: 0, iz: 0, r: 1), translation: SIMD3<Float> = SIMD3<Float>())
```

I can put the ShapeResource on an entity and scale the entity, but I would like to know if it is possible to scale the ShapeResource itself without attaching it to an entity.
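[Editor's note: ShapeResource doesn't appear to expose a scale transform, so one hedged workaround is to scale the source vertices yourself and rebuild the convex hull from points, assuming the point-based generateConvex(from:) overload. A sketch:]

```swift
import RealityKit

// Sketch: bake a scale into the collision shape at generation time.
func scaledConvexShape(from mesh: MeshResource, scale: SIMD3<Float>) -> ShapeResource {
    // Gather every vertex position from the mesh and apply the scale.
    var points: [SIMD3<Float>] = []
    for model in mesh.contents.models {
        for part in model.parts {
            points.append(contentsOf: part.positions.elements.map { $0 * scale })
        }
    }
    // Rebuild the convex hull from the scaled point cloud.
    return ShapeResource.generateConvex(from: points)
}
```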
Replies: 3 · Boosts: 0 · Views: 579 · Created: Jan ’25
Releasing a TextureResource from memory
Hi there! I'm trying to make a 360° image carousel in RealityView/SwiftUI with very large textures. I've managed to load one 12K 360° image and show it on an inverted sphere with a ShaderGraphMaterial made in Reality Composer Pro. When I try to load the next image, I get an out-of-memory error. The carousel works fine with smaller textures. My question is: how do I release the memory held by the current texture before loading the next one? In theory the garbage collector should erase it eventually? Hope someone can help =) Thanks in advance! Best regards, Kim
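[Editor's note, a hedged sketch: Swift frees memory by reference counting (ARC), not a tracing garbage collector, so the old texture is released as soon as nothing holds a strong reference to it. Keeping the material's texture parameter as the only reference and overwriting it should be enough. The parameter name "PanoTexture" is illustrative.]

```swift
import RealityKit

// Sketch: swap the texture in place so only one 12K image is retained.
final class PanoramaCarousel {
    private let sphere: ModelEntity
    private var material: ShaderGraphMaterial

    init(sphere: ModelEntity, material: ShaderGraphMaterial) {
        self.sphere = sphere
        self.material = material
    }

    func show(imageAt url: URL) throws {
        let texture = try TextureResource.load(contentsOf: url)
        try material.setParameter(name: "PanoTexture", value: .textureResource(texture))
        // ShaderGraphMaterial is a value type, so reassign it to the entity;
        // the previous TextureResource then has no owners and is deallocated.
        sphere.model?.materials = [material]
    }
}
```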
Replies: 6 · Boosts: 0 · Views: 708 · Created: Jan ’25
WorldAnchors added and removed immediately
I can add a WorldAnchor to a WorldTrackingProvider. The next time I start my app, the WorldAnchor is added back, and then is immediately removed:

```
dataProviderStateChanged(dataProviders: [WorldTrackingProvider(0x0000000300bc8370, state: running)], newState: running, error: nil)
AnchorUpdate(event: added, timestamp: 43025.248134708, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: true, originFromAnchorTransform: <translation=(0.048458 0.000108 -0.317565) rotation=(0.00° 15.44° -0.00°)>))
AnchorUpdate(event: removed, timestamp: 43025.348131208, anchor: WorldAnchor(id: C0A1AE95-F156-45F5-9030-895CAABF16E9, isTracked: false, originFromAnchorTransform: <translation=(0.000000 0.000000 0.000000) rotation=(-0.00° 0.00° 0.00°)>))
```

It always leaves me with zero anchors in .allAnchors; the ARKitSession is still active at this point.
Replies: 8 · Boosts: 0 · Views: 742 · Created: Jan ’25
RoomCaptureSession with ARSCNView crashes when scanning multiple hotspots across different rooms
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance, with 3D dots placed in the scanning environment to guide users through the capture process.

Bug description: the application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
- User successfully scans the first room with multiple hotspots (working correctly).
- User stops scanning and moves to a new room.
- In the new room, the first 1–2 hotspots work correctly.
- The application crashes when attempting to scan additional hotspots.

Technical details:
- Error: SLAM anchor assertion failure in SlamAnchor.cpp:37: HasValidPose()
- Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode.
- The error suggests invalid positioning when placing AR anchors.

Steps to reproduce:
1. Start a room scan.
2. Complete multiple hotspot captures in the first room.
3. Stop scanning.
4. Start a new room scan.
5. Capture 1–2 hotspots successfully.
6. Attempt additional hotspot captures -> crashes.

Attempted solutions:
- Implemented anchor cleanup between sessions.
- Added position validation before anchor placement.
- Implemented ARSession error handling.
- Added proper thread management for AR operations.

Environment:
- Device: iPhone 14 Pro (LiDAR equipped)
- iOS version: 18.1.1 (22B91)
- Testing through TestFlight

Crash log details:

```
Exception Type:  EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note:  EXC_CORPSE_NOTIFY
Triggered by Thread: 27

Thread 27 Crashed:
0  libsystem_kernel.dylib   0x00000001f0cc91d4 __pthread_kill + 8
1  libsystem_pthread.dylib  0x0000000228e12ef8 pthread_kill + 268
2  libsystem_c.dylib        0x00000001a86bbad8 abort + 128
3  AppleCV3D                0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
```

Question: is there a recommended approach for handling multiple room captures with a custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides. Code and full crash logs can be provided if needed.
Replies: 2 · Boosts: 1 · Views: 656 · Created: Jan ’25
Significant deviation of depth map values captured in ARKit framework
I use ARKit to build an app that scans rooms to collect the spatial data of objects and reconstruct the 3D scene. The problem is that the depth map values captured in ARFrame deviate significantly, and nonlinearly, from the real distances: below 1.5 m the values are basically correct, but beyond 1.5 m they are smaller than the real values. For example, I read 1.9 m from the generated depthmap.tiff where the real distance is 3 meters. Below is my code for generating a TIFF file to record the depth map data:

Generated TIFF file (captured from ARKit): as shown above, the maximum distance is around 1.9 m, but the real distance to that wall is more than 3 meters. Also, the depth map picture captured in ARKit is quite blurry, particularly at far distances (> 2.0 m), where it is almost smeared out.

Generated TIFF file (captured from AVFoundation): in comparison, the depth map captured with traditional AVFoundation on the same hardware device is much clearer, though the values don't seem to be in meters.
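[Editor's note: the poster's TIFF-writing code and the attached images were not reproduced above. For orientation, a minimal hedged sketch of the read-out path being described, assuming an ARWorldTrackingConfiguration running with the .sceneDepth frame semantic:]

```swift
import ARKit

// Sketch: read the raw LiDAR depth values, in meters, from ARFrame.sceneDepth.
func depthValues(from frame: ARFrame) -> [Float]? {
    guard let depthMap = frame.sceneDepth?.depthMap else { return nil }
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)   // typically 256
    let height = CVPixelBufferGetHeight(depthMap) // typically 192
    let bytesPerRow = CVPixelBufferGetBytesPerRow(depthMap)
    guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return nil }

    // kCVPixelFormatType_DepthFloat32: one Float32 per pixel, honoring row padding.
    var values: [Float] = []
    values.reserveCapacity(width * height)
    for y in 0..<height {
        let row = (base + y * bytesPerRow).assumingMemoryBound(to: Float32.self)
        for x in 0..<width { values.append(row[x]) }
    }
    return values
}
```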
Replies: 1 · Boosts: 0 · Views: 504 · Created: Jan ’25