Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post · Replies · Boosts · Views · Activity

Barcode anchor jitter in Vision Pro due to invalid values from the enterprise barcode scanning API
We’re using the enterprise API for spatial barcode/QR code scanning in the Vision Pro app, but we often get invalid values for the barcode anchor from the API, leading to jittery barcode positions in the UI. The code we’re using is attached below.

import SwiftUI
import RealityKit
import ARKit
import Combine

struct ImmersiveView: View {
    @State private var arkitSession = ARKitSession()
    @State private var root = Entity()
    @State private var fadeCompleteSubscriptions: Set = []

    var body: some View {
        RealityView { content in
            content.add(root)
        }
        .task {
            // Check if barcode detection is supported; otherwise handle this case.
            guard BarcodeDetectionProvider.isSupported else { return }
            // Specify the symbologies you want to detect.
            let barcodeDetection = BarcodeDetectionProvider(symbologies: [.code128, .qr, .upce, .ean13, .ean8])
            do {
                try await arkitSession.requestAuthorization(for: [.worldSensing])
                try await arkitSession.run([barcodeDetection])
                print("Barcode scanning started")
                for await update in barcodeDetection.anchorUpdates where update.event == .added {
                    let anchor = update.anchor
                    // Play an animation to indicate the system detected a barcode.
                    playAnimation(for: anchor)
                    // Use the anchor's decoded contents and symbology to take action.
                    print(
                        """
                        Payload: \(anchor.payloadString ?? "")
                        Symbology: \(anchor.symbology)
                        """)
                }
            } catch {
                // Handle the error.
                print(error)
            }
        }
    }

    // Define this function in ImmersiveView.
    func playAnimation(for anchor: BarcodeAnchor) {
        guard let scene = root.scene else { return }
        // Create a plane sized to match the barcode.
        let extent = anchor.extent
        let entity = ModelEntity(mesh: .generatePlane(width: extent.x, depth: extent.z),
                                 materials: [UnlitMaterial(color: .green)])
        entity.components.set(OpacityComponent(opacity: 0))
        // Position the plane over the barcode.
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        root.addChild(entity)
        // Fade the plane in and out.
        do {
            let duration = 0.5
            let fadeIn = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 0,
                to: 1.0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeOut = try AnimationResource.generate(with: FromToByAnimation<Float>(
                from: 1.0,
                to: 0,
                duration: duration,
                isAdditive: true,
                bindTarget: .opacity))
            let fadeAnimation = try AnimationResource.sequence(with: [fadeIn, fadeOut])
            _ = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: entity, { _ in
                // Remove the plane after the animation completes.
                entity.removeFromParent()
            }).store(in: &fadeCompleteSubscriptions)
            entity.playAnimation(fadeAnimation)
        } catch {
            print("Error")
        }
    }
}
3 replies · 0 boosts · 501 views · Jan ’25
PlaneDetectionProvider on visionOS seems to limit detection to planes less than 5 m from the world origin.
When using PlaneDetectionProvider in visionOS, I seem to have hit a limitation: regardless of where the headset is in the space, planes are only detected when they are (as far as I can tell) less than 5 m from the world origin. Mapping a room becomes very tricky as a result, because some walls end up outside that radius even if you're standing two feet away from a ten-foot wall; it just won't see them. I've picked my way through the documentation but I cannot see any way to extend this distance. Am I missing something?
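For reference, here is a minimal sketch of the kind of setup described above, using the standard visionOS ARKit APIs; the distance logging is only there to illustrate how far each detected plane sits from the world origin, and the function name is hypothetical.

import ARKit
import simd

// Run plane detection and log each detected plane's distance from the world origin.
func observePlanes() async {
    guard PlaneDetectionProvider.isSupported else { return }
    let session = ARKitSession()
    let planeDetection = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
    _ = await session.requestAuthorization(for: [.worldSensing])
    do {
        try await session.run([planeDetection])
        for await update in planeDetection.anchorUpdates {
            let anchor = update.anchor
            // The translation column of the anchor transform is the plane's position in world space.
            let position = anchor.originFromAnchorTransform.columns.3
            let distance = simd_length(SIMD3(position.x, position.y, position.z))
            print("\(update.event): \(anchor.classification) plane at \(distance) m from the origin")
        }
    } catch {
        print("Plane detection failed: \(error)")
    }
}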
3 replies · 0 boosts · 648 views · Oct ’24
visionOS console warning: Trying to convert coordinates between views that are in different UIWindows
Hello, I have an iOS app that is using SwiftUI but the gesture code is written using UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures I see this warning in the console: Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead. But I don't see any visible problems with the gestures. I see this warning printed out after the gesture takes place but before any of our gesture methods get kicked off. So now I am wondering if this is something we need to deal with or some internal work that needs to happen in UIKit. Does anyone have any thoughts on this?
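For anyone comparing notes, here is a minimal sketch of resolving a gesture location through coordinate spaces, which is what the convertPoint:fromCoordinateSpace: suggestion in the warning maps to in Swift; the view controller and handler names are hypothetical.

import UIKit

final class GestureHostViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        view.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        guard let gestureView = recognizer.view, let window = gestureView.window else { return }
        // location(in: nil) reports the point in the window's coordinates.
        let locationInWindow = recognizer.location(in: nil)
        // Convert through the window's coordinate space rather than between views in different windows.
        let windowSpace: UICoordinateSpace = window
        let locationInView = gestureView.convert(locationInWindow, from: windowSpace)
        print("Tap at \(locationInView)")
    }
}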
3 replies · 0 boosts · 1.1k views · Dec ’24
When placing a TextField within a RealityViewAttachment, the virtual keyboard does not appear in front of the user as expected.
Hello, Thank you for your time. I have a question regarding visionOS app development. When placing a SwiftUI TextField inside RealityView.attachments, we found that focusing on the field does not bring up the virtual keyboard in front of the user. Instead, the keyboard appears around the user’s lower abdomen area. However, when placing the same TextField in a regular SwiftUI layer outside of RealityView, the keyboard appears in the correct position as expected. This suggests that the issue is specific to RealityView.attachments. We are currently exploring ways to have the virtual keyboard appear directly in front of the user when using TextField inside RealityViewAttachments. If there is any method to explicitly control the keyboard position or any known workarounds—including alternative UI approaches—we would greatly appreciate your guidance. Best regards, Sadao Tokuyama
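For context, a reduced sketch of the placement being described, with the TextField hosted inside RealityView.attachments; the attachment identifier and placement values are hypothetical.

import SwiftUI
import RealityKit

// Reduced repro sketch: a TextField hosted in a RealityView attachment.
// Focusing this field is where the keyboard appears at the unexpected position.
struct AttachmentEntryView: View {
    @State private var note = ""

    var body: some View {
        RealityView { content, attachments in
            if let panel = attachments.entity(for: "entryPanel") {
                panel.position = [0, 1.4, -1] // roughly eye level, about 1 m in front
                content.add(panel)
            }
        } update: { _, _ in
            // No per-frame updates needed for this reduced example.
        } attachments: {
            Attachment(id: "entryPanel") {
                TextField("Enter a note", text: $note)
                    .textFieldStyle(.roundedBorder)
                    .frame(width: 400)
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}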
3 replies · 1 boost · 597 views · Jul ’25
Exporting .reality files from Reality Composer Pro
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering Quick Look to open the interactive AR experience. That works really well. I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export to .reality files anymore. It seems that this is only for building content that ships inside native iOS apps and the like, rather than content that can be viewed in Quick Look. Am I missing something, or is it no longer possible to export .reality files? Thanks.
3 replies · 2 boosts · 1.9k views · Jul ’25
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
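For reference, a sketch of the kind of minimal setup that reproduces it for us: loading the asset into Model3D in an otherwise blank view. The asset name here is hypothetical.

import SwiftUI
import RealityKit

// Minimal isolation sketch: load the suspect USDZ into Model3D and show a spinner while it resolves.
struct AssetReproView: View {
    var body: some View {
        Model3D(named: "SuspectAsset") { model in
            model
                .resizable()
                .scaledToFit()
        } placeholder: {
            ProgressView()
        }
    }
}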
3 replies · 1 boost · 539 views · Jul ’25
How to disable the download option from the Quick Look PreviewApplication?
Hi, following the WWDC24 video "What’s new in Quick Look for visionOS", I've managed to open a 3D model using PreviewApplication by calling:

let previewItem = PreviewItem(url: modelURL, displayName: "Easter", editingMode: .disabled)
_ = PreviewApplication.open(items: [previewItem])

However, the "Save to Downloads" option is always there (see attached screenshot). The models are user-generated content, and I don't want the download option to be available to all users. How can I disable it?
3 replies · 1 boost · 621 views · Oct ’24
RealityView Not Refreshing With SwiftData
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView. Also, the RealityView does not close when I move to a different tab. It keeps everything on and tracking, leaving the model in the same location I left it.

import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")
            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the Cube_1 entity to the RealityView
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + "entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if item.enabled {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
3 replies · 0 boosts · 494 views · Jan ’25
USDZ with Blend Shapes Workflow Recommendations
Hi, since RealityKit 4 now supports Blend Shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ. Are Blender or Cinema 4D capable of doing that out of the box? Should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)? So far this topic seems very sparsely documented and I would appreciate any hints. Thank you!
3 replies · 0 boosts · 1.3k views · Jan ’25
Debug Vision Pro application directly on physical device instead of the simulator
I have a Mac mini M4 with 16 GB of memory and Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows the memory under high pressure. How do I run/test/debug the application on a Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I cannot find the UDID; when I hook up USB to the battery, it only shows a Battery device ID.
3 replies · 0 boosts · 450 views · Jan ’25
How to find the camera transform (or view matrix) in world coordinates from a camera frame
I'm trying to implement a prototype that renders virtual objects in a mixed immersive space on the camera frames captured by CameraFrameProvider. Here is what I have done:

- Get the camera's intrinsics from frame.primarySample.parameters.intrinsics
- Get the camera's extrinsics from frame.primarySample.parameters.extrinsics
- Get the device anchor with worldTrackingProvider.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
- Set up a RealityKit.RealityRenderer to render virtual objects on the captured camera frames:

let realityRenderer = try RealityKit.RealityRenderer()
realityRenderer.cameraSettings.colorBackground = .outputTexture()
let cameraEntity = PerspectiveCamera()
// see https://developer.apple.com/forums/thread/770235
let cameraTransform = deviceAnchor.originFromAnchorTransform * extrinsics.inverse
cameraEntity.setTransformMatrix(cameraTransform, relativeTo: nil)
cameraEntity.camera.near = 0.01
cameraEntity.camera.far = 100
cameraEntity.camera.fieldOfViewOrientation = .horizontal
// manually calculated based on camera intrinsics
cameraEntity.camera.fieldOfViewInDegrees = 105
realityRenderer.entities.append(cameraEntity)
realityRenderer.activeCamera = cameraEntity

With this camera transform, the virtual objects that should be visible in the camera frames are clipped out. If I use deviceAnchor.originFromAnchorTransform as the camera transform instead, virtual objects are rendered on the camera frames, but at the wrong positions (I think because the camera extrinsics aren't used to move the camera to the correct pose). My question is how to use the camera extrinsic matrix for this purpose. Do the extrinsics point in roughly the same orientation as the device anchor, with some minor rotation and position offset? Here is an extrinsics matrix from a camera frame. It seems that the directions of the Y-axis and Z-axis are flipped by the extrinsics, so the camera points in the wrong direction.

simd_float4x4([[0.9914258, 0.012555369, -0.13006608, 0.0],    // X-axis
               [-0.0009778949, -0.9946325, -0.10346654, 0.0], // Y-axis
               [-0.13066702, 0.10270659, -0.98609203, 0.0],   // Z-axis
               [0.024519, -0.019568002, -0.058280986, 1.0]])  // translation
3 replies · 0 boosts · 740 views · Jan ’25
Issues setting up the Enterprise API entitlements (Main Camera Access)
Hello, I've recently received the entitlements to access the main camera stream for a project on the Apple Vision Pro.

What happens: when executing code from this WWDC tutorial, I'm getting this error when trying to use a CameraFrameProvider:

ar_camera_frame_provider_t <0x300d58870>: Failed to start camera stream with error: <ar_error_t: 0x303fcc4c0 Error Domain=com.apple.arkit Code=100 "App not authorized." UserInfo={NSLocalizedFailureReason=Using camera frame provider requires an entitlement., NSLocalizedRecoverySuggestion=, NSLocalizedDescription=App not authorized.}

What I've tried: I followed the instructions given by mail:
- adding the .license file at the root of my project,
- adding the .entitlements file by adding capabilities in the project (Main Camera Access & Passthrough in screen capture are there),
- adding NSCameraDescription, NSEnterpriseMCAMUsageDescription and NSWorldSensingUsageDescription (they all have a value assigned).

I've also followed the advice in those posts. When checking the account settings, I do see the capabilities under "additional capabilities". On first launch, I'm also prompted to accept the NSEnterpriseMCAMUsageDescription, so I assume the Info.plist file is valid. What did I miss to get the entitlements working?

Here's the code:

import ARKit
import SwiftUI
import Vision
import RealityKit

class MainCameraAccess {
    var arKitSession = ARKitSession()
    var cameraFrameProvider = CameraFrameProvider()
    var pixelBuffer: CVPixelBuffer?

    func startCameraSession() async {
        let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])

        // Request authorization
        await arKitSession.requestAuthorization(for: [.cameraAccess])

        // Start the session
        do {
            try await arKitSession.run([cameraFrameProvider])
        } catch {
            print("Failed to start ARKit session: \(error)")
            return
        }

        // Get camera frame updates
        guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
            return
        }

        // Process frames
        for await cameraFrame in cameraFrameUpdates {
            guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                continue
            }
            self.pixelBuffer = mainCameraSample.pixelBuffer
        }
    }

    func saveLatestImage() {
        guard let pixelBuffer = self.pixelBuffer else {
            print("No image available to save.")
            return
        }

        // Convert CVPixelBuffer to UIImage
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            print("Failed to create CGImage.")
            return
        }
        let uiImage = UIImage(cgImage: cgImage)

        // Save UIImage to Photos Album
        UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil)
        print("Image saved to photo library.")
    }
}

Thanks in advance for the help,
Jeremy
3 replies · 0 boosts · 636 views · Dec ’24
visionOS 26 - threadsPerThreadgroup limit causing crash on device (but not in simulator)
Hi all, I'm running into an issue with an app that previously worked fine on device using visionOS 2.0. After updating to visionOS 26, the same code runs fine in the simulator but crashes on the device with the following error:

-[MTLDebugComputeCommandEncoder _validateThreadsPerThreadgroup:]:1330: failed assertion `(threadsPerThreadgroup.width(32) * threadsPerThreadgroup.height(32) * threadsPerThreadgroup.depth(1))(1024) must be <= 832. (kernel threadgroup size limit)`

Is there any documented way to check or increase the allowed threadsPerThreadgroup size on Apple Vision Pro? Or any recommended workaround for this regression? Thanks in advance!
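In case it helps, one possible workaround is to size the threadgroup from the pipeline's runtime limits instead of hard-coding 32 x 32. This is only a sketch, assuming the kernel does not require a fixed threadgroup shape; the encoder and pipeline variables are hypothetical.

import Metal

// Derive the threadgroup size from the pipeline's reported limits
// rather than assuming 32 x 32 (1024 threads) is always allowed.
func dispatch(encoder: MTLComputeCommandEncoder,
              pipeline: MTLComputePipelineState,
              gridSize: MTLSize) {
    let maxThreads = pipeline.maxTotalThreadsPerThreadgroup // e.g. 832 on this device
    let width = pipeline.threadExecutionWidth               // e.g. 32
    let height = max(1, maxThreads / width)                 // stay under the device limit
    let threadsPerThreadgroup = MTLSize(width: width, height: height, depth: 1)

    encoder.setComputePipelineState(pipeline)
    encoder.dispatchThreads(gridSize, threadsPerThreadgroup: threadsPerThreadgroup)
}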
3 replies · 0 boosts · 127 views · Jun ’25
Protobuf Error Crash
We are developing a mixed reality app for Vision Pro using Reality Composer Pro, but we're consistently encountering a Protobuf-related crash whenever we enter the immersive space. Our Reality Composer Pro package is quite complex, with numerous objects. Could the complexity of the package be contributing to the issue, or could something else be at play? No errors are being flagged in the code or Reality Composer itself. Here's the error log:

[libprotobuf FATAL /Library/Caches/com.apple.xbs/Sources/REKit/ThirdParty/protobuf/src/google/protobuf/io/zero_copy_stream_impl_lite.cc:276] CHECK failed: (count) >= (0):
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: (count) >= (0):

Any insight on what might be causing this would be appreciated.
3 replies · 1 boost · 745 views · Oct ’24
RealityKit/ARKit Environment Texturing broken on iOS 18
Devices running iOS 18 using RealityKit do not seem to receive lighting supplied via ARKit Environment Texturing (https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration/2977509-environmenttexturing). Instead, just a default IBL is used by RealityKit. This happens with RealityView as well as ARView. It also happens when I explicitly opt in to environment texturing:

let worldTrackingConfig = ARWorldTrackingConfiguration()
worldTrackingConfig.environmentTexturing = .automatic
arView.session.run(worldTrackingConfig)

Even the Xcode AR template has this issue. I'm attaching a screenshot of the sample app running on iOS 18, where it's broken, and on iOS 17, where it works as expected. I hope this can get resolved quickly since I see it as a major regression. Feedback ID: FB15091335

UPDATE: It works on my older iPhone XS (iOS 18 22A5282m). Broken on iPad Pro (11-inch) (3rd generation) (iPadOS 18.0 (22A5350a)). Maybe it's related to LiDAR?

Thank you!

iOS 17 (works):
iOS 18 (broken):
3 replies · 1 boost · 988 views · Jan ’25
RealityKit Trace metric max/range for a visionOS app
Hi Nathaniel, I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that lists some key metrics I'd find in a RealityKit trace, so I know whether a metric is exceeding limits and probably causing a problem? Right now, I just see numbers and have no idea if a metric is high or low :). This is specifically for a visionOS app. Thanks, Bob
3 replies · 0 boosts · 91 views · Jun ’25