Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic

Each post below is followed by its reply count, boost count, view count, and most recent activity date.

Having an issue using a custom component with Reality Composer Pro
We have a project that is currently built as an XCFramework. The framework contains a custom component meant to be used with entities in Reality Composer Pro. I have tried to set the RCP Package.swift file to reference the framework package in its dependencies, but nothing I do with the folder path to reference the code works. Do I need to switch the project to Swift source code instead of an XCFramework? The component needs to stay in the framework because a class in the framework works directly with the custom component. I am able to reference the XCFramework as a Swift package from other projects.
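For reference, here is a minimal sketch of one way a prebuilt XCFramework can be declared as a binary target inside the Reality Composer Pro package's Package.swift. The package, target, and path names are placeholders, and this is not confirmed to make the component discoverable in the RCP component picker:

// swift-tools-version: 5.9
// Hypothetical Package.swift for the RCP content package; names and paths are placeholders.
import PackageDescription

let package = Package(
    name: "RealityComposerContent",
    platforms: [.visionOS(.v1)],
    products: [
        .library(name: "RealityComposerContent", targets: ["RealityComposerContent"])
    ],
    targets: [
        // Wrap the prebuilt framework as a binary target.
        .binaryTarget(
            name: "MyComponentsFramework",                      // placeholder
            path: "../Frameworks/MyComponents.xcframework"      // placeholder relative path
        ),
        // The RCP content target depends on the binary target so its code can use the component.
        .target(
            name: "RealityComposerContent",
            dependencies: ["MyComponentsFramework"]
        )
    ]
)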
2
0
125
Apr ’25
How to configure spatial audio on a VideoMaterial? Compile error
I've tried following Apple's documentation to apply a video material to a ModelEntity, but I hit a compile error when specifying the spatial audio mode. It's a 360° video on a sphere that plays just fine, but the audio is much quieter than when I preview the video in Xcode. So I tried to configure the audio playback mode on the material, and it gives me a compile error: "'audioInputMode' is unavailable in visionOS; 'audioInputMode' has been explicitly marked unavailable here (RealityFoundation.VideoPlaybackController.audioInputMode)". https://developer.apple.com/documentation/realitykit/videomaterial/

Code:
let player = AVPlayer(url: url)
// Instantiate and configure the video material.
let material = VideoMaterial(avPlayer: player)
// Configure audio playback mode.
material.controller.audioInputMode = .spatial // this line won't compile

visionOS 2.4, Xcode 16.4; also tried Xcode 26 beta 2. The videos are HEVC in MPEG-4 containers. Is there any other way to do this, or is a workaround available? Thank you.
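One untested avenue (an assumption on my part, not a documented workaround) would be to attach a SpatialAudioComponent with a higher gain to the entity that hosts the material, in case RealityKit routes the material's audio through the entity's audio components:

import AVFoundation
import RealityKit

// Sketch only: whether VideoMaterial audio respects the entity's SpatialAudioComponent
// is an assumption, and positive gain values may be clamped.
let url = URL(string: "https://example.com/video360.mp4")!   // placeholder URL
let player = AVPlayer(url: url)
let material = VideoMaterial(avPlayer: player)
let sphere = ModelEntity(mesh: .generateSphere(radius: 10))
sphere.model?.materials = [material]
sphere.components.set(SpatialAudioComponent(gain: 6))        // experimental gain boost
player.play()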
2
0
241
Jul ’25
RealityKit fullscreen layer
Hi! I'm currently trying to render another XR scene in front of a RealityKit one. Right now I'm anchoring a plane to the head, with a shader that displays side-by-side images for the left and right eyes. By default the camera has a near plane, so I can't draw directly at z = 0. Is there a way to change the camera's near plane? Or is there a better way to overlay an image/texture for the left and right eyes? Ideally I would layer some kind of CompositorLayer on top of RealityKit, but as far as I know that sadly isn't possible. Thanks in advance and have a good day!
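For context, here is a minimal sketch of the head-anchored plane setup described above (assuming a RealityView on visionOS; the 0.3 m offset is an arbitrary placeholder chosen to sit beyond a typical near plane):

import SwiftUI
import RealityKit

struct HeadLockedPlaneView: View {
    var body: some View {
        RealityView { content in
            let headAnchor = AnchorEntity(.head)
            // A 1 m x 1 m plane; the custom side-by-side shader material would be applied here.
            let plane = ModelEntity(mesh: .generatePlane(width: 1, height: 1))
            plane.position = SIMD3<Float>(0, 0, -0.3)   // in front of the head, beyond the near plane
            headAnchor.addChild(plane)
            content.add(headAnchor)
        }
    }
}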
2
0
318
Jul ’25
UICollectionViewDataSourcePrefetching does not work in a SwiftUI-wrapped UICollectionView on visionOS
The prefetching logic for UICollectionView does not work on visionOS. I have set up a standalone test repo to demonstrate the issue; it is basically a visionOS version of Apple's guide project on implementing prefetching. In the repo you will see a simple view controller with a UICollectionView, wrapped inside a UIViewControllerRepresentable. On scroll, it should print 🕊️ prefetch start to the console to show that func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) is called. However, it never happens on visionOS devices; the same code behaves correctly on iOS devices.
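A condensed sketch of that setup (assumed structure; type names here are illustrative, not the repo's actual code):

import SwiftUI
import UIKit

final class PrefetchTestViewController: UIViewController,
    UICollectionViewDataSource, UICollectionViewDataSourcePrefetching {

    private lazy var collectionView: UICollectionView = {
        let layout = UICollectionViewFlowLayout()
        layout.itemSize = CGSize(width: 200, height: 200)
        let view = UICollectionView(frame: .zero, collectionViewLayout: layout)
        view.dataSource = self
        view.prefetchDataSource = self          // opts in to prefetching callbacks
        view.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
        return view
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        collectionView.frame = view.bounds
        collectionView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(collectionView)
    }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { 1000 }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath)
    }

    func collectionView(_ collectionView: UICollectionView, prefetchItemsAt indexPaths: [IndexPath]) {
        print("🕊️ prefetch start", indexPaths)   // reportedly never fires on visionOS
    }
}

struct PrefetchTestView: UIViewControllerRepresentable {
    func makeUIViewController(context: Context) -> PrefetchTestViewController { PrefetchTestViewController() }
    func updateUIViewController(_ uiViewController: PrefetchTestViewController, context: Context) {}
}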
1
0
166
Jul ’25
USDZ Security
I am working on an app that will allow a user to load and share their model files (usdz, usda, usdc). I'm looking at security options to prevent bad actors. Are there security or validation methods built into ARKit/RealityKit/CloudKit when loading models or saving them on the cloud? I want to ensure no one can inject any sort of exploit through these file types.
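No claim here about built-in validation, but as a sketch of checks an app could perform on its own before handing a file to RealityKit (the helper name and size limit are illustrative placeholders):

import Foundation
import RealityKit

func loadValidatedModel(from url: URL, maxBytes: Int = 50_000_000) async throws -> Entity {
    // Accept only USD file extensions.
    let allowed: Set<String> = ["usdz", "usda", "usdc"]
    guard allowed.contains(url.pathExtension.lowercased()) else {
        throw CocoaError(.fileReadCorruptFile)
    }
    // Enforce an app-defined size limit before parsing.
    let size = try url.resourceValues(forKeys: [.fileSizeKey]).fileSize ?? 0
    guard size > 0, size <= maxBytes else {
        throw CocoaError(.fileReadTooLarge)
    }
    // Let RealityKit's loader do the parsing; a malformed file throws here.
    return try await Entity(contentsOf: url)
}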
0
0
478
Jul ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation describes how to attach a custom TabletopNetworkSessionCoordinator, which could be used in addition to TabletopGame.MultiplayerDelegate. But so far we have been unable to create these custom coordinators or get a delegate that works. Our current setup with a generic GroupActivity works by passing the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we haven't found a way to access and control, for example, the arbiter, seats, player events, and other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
2
0
247
Apr ’25
New algorithm significantly improves PhotogrammetrySession?
I noticed that the latest macOS beta 3 includes this update: "A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn't available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)" I am not noticing any difference from before in the reconstructions I tested, so I assume it's reverting to the old model, but the logs give no way to see whether the new model downloaded successfully or failed. Do you have any more information on what was improved here, with some examples of what we should be looking for? Also, how can I confirm that the download of the new model has not failed?
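For anyone reproducing this, a minimal reconstruction run looks roughly like the following (paths and detail level are placeholders); per the note above, simply using the session at runtime is what should trigger the one-time background download:

import Foundation
import RealityKit

// Minimal sketch of a PhotogrammetrySession run; input/output paths are placeholders.
func runReconstruction() async throws {
    let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
    let outputModel = URL(fileURLWithPath: "/path/to/output/model.usdz")

    let session = try PhotogrammetrySession(input: inputFolder, configuration: .init())
    try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}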
0
0
332
Jul ’25
Immersive environment learning material
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
1
0
84
Jul ’25
visionOS widget dimensions?
Is there any size guidance for the new WidgetKit integration on visionOS? The Widget HIG provides dimensions for all the widget size classes on iOS, iPadOS and watchOS, but has not been updated for visionOS. https://developer.apple.com/design/human-interface-guidelines/widgets My potential widget use case is image based, so I'm looking to better understand the optimal size, resolution etc I would need, particularly for the new visionOS specific extra large widget size.
0
0
572
Jul ’25
How to achieve a "concave in" glass view look?
I have been trying to implement a look where a component appears "pushed in," but I could not find any resources on this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but that makes the view look pushed out instead of pushed in. I was wondering whether this is achievable at the SwiftUI level, or even at the UIKit level.
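For reference, the closest approximation mentioned above as a small sketch (assumed visionOS SwiftUI; the sizes are arbitrary), which still reads as raised rather than inset:

import SwiftUI

struct RaisedGlassAttempt: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 24)
            .fill(.clear)
            .frame(width: 200, height: 120)
            // Produces the raised ("pushed out") glass look, not the desired inset one.
            .glassBackgroundEffect(in: RoundedRectangle(cornerRadius: 24))
    }
}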
1
0
120
Apr ’25
Overlaying SwiftUI content with transparency in front of RealityView
Following up on my previous question here: https://developer.apple.com/forums/thread/774262 Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like content with transparency does not render in front of the RealityView, while opaque views work; placing content with transparency, such as glassBackgroundEffect(), behind the RealityView in a ZStack causes the entire window to flicker. Additionally, my SwiftUI attachments placed in front of the stereoscopic image plane are invisible when the user looks at them straight on, at 90 degrees; as the user looks at them from increasingly oblique angles, the attachments gradually become visible again. Are these behaviors expected? What is the recommended approach to overlay content in front of a RealityView? Thanks!
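For clarity, the arrangement being described is roughly this (a sketch of the layout, not a fix; the overlay content is a placeholder):

import SwiftUI
import RealityKit

struct OverlayInFrontOfRealityView: View {
    var body: some View {
        ZStack {
            RealityView { content in
                // ... entity setup for the stereoscopic image plane goes here ...
            }
            // Transparent overlay content; this is what fails to render in front of the RealityView.
            Text("Overlay")
                .padding()
                .glassBackgroundEffect()
        }
    }
}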
1
0
391
Feb ’25
The participantIdentifier of the Shared Coordinate Space is invalid in the visionOS 26 Enterprise API
The visionOS 26 Enterprise API has a new feature, Shared Coordinate Space: participants exchange their coordinate data via SharedCoordinateSpaceProvider over their own network, and when a shared coordinate space is established with nearby participants, the event connectedParticipantIdentifiers(participants: [UUID]) is received. But Event.participantIdentifier is still an invalid default value (00000000-0000-0000-FFFF-FFFFFFFF) at that point. When or how can I get a valid event.participantIdentifier, or is there some other way to get the local participant identifier? If it's a bug, please fix it in a later beta release. Thank you.
0
0
208
Jul ’25
Summon gesture
Can you help me write code that can pick up an element a bit far from me, bring it near, flick it a bit, and then send it back to its original position when I release it? Thanks a lot, Christophe
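Here's a rough sketch of one way to approach it with a drag gesture targeted at entities (assuming a RealityView whose entities carry collision and input-target components; the type names, the snap-back animation, and the omission of the "flick" are simplifications):

import SwiftUI
import RealityKit

struct SummonGestureView: View {
    @State private var originalPosition: SIMD3<Float>?

    var body: some View {
        RealityView { content in
            // ... add entities with CollisionComponent and InputTargetComponent here ...
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if originalPosition == nil { originalPosition = entity.position }
                    // Pull the entity toward the current drag location.
                    if let parent = entity.parent {
                        entity.position = value.convert(value.location3D, from: .local, to: parent)
                    }
                }
                .onEnded { value in
                    // Animate the entity back to where it started
                    // (rotation and scale reset to identity in this simplified version).
                    if let home = originalPosition, let parent = value.entity.parent {
                        value.entity.move(to: Transform(translation: home),
                                          relativeTo: parent,
                                          duration: 0.4)
                    }
                    originalPosition = nil
                }
        )
    }
}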
1
0
64
Apr ’25
How to obtain video streams from the digital space on Vision Pro after applying for the Enterprise API?
After implementing the method of obtaining video streams discussed at WWDC in my app, I found that the obtained video stream does not include the digital models in the digital space or related content such as the app UI. I would like to ask how to obtain a video stream or frame that contains only the physical world.

let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined
let worldTracking = WorldTrackingProvider()

func requestWorldSensingCameraAccess() async {
    let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}

func queryAuthorizationCameraAccess() async {
    let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}

func monitorSessionEvents() async {
    for await event in arKitSession.events {
        switch event {
        case .dataProviderStateChanged(_, let newState, let error):
            switch newState {
            case .initialized:
                break
            case .running:
                break
            case .paused:
                break
            case .stopped:
                if let error {
                    print("An error occurred: \(error)")
                }
            @unknown default:
                break
            }
        case .authorizationChanged(let type, let status):
            print("Authorization type \(type) changed to \(status)")
        default:
            print("An unknown event occurred \(event)")
        }
    }
}

@MainActor
func processWorldAnchorUpdates() async {
    for await anchorUpdate in worldTracking.anchorUpdates {
        switch anchorUpdate.event {
        case .added:
            // Check whether a persisted object is attached to this added anchor;
            // it may be a world anchor from a previous run of this app.
            // ARKit surfaces all world anchors associated with this app
            // when the world tracking provider starts.
            fallthrough
        case .updated:
            // Keep the placed object's position in sync with its corresponding
            // world anchor, and hide the object if the anchor isn't tracked.
            break
        case .removed:
            // Remove the placed object if the corresponding world anchor was removed.
            break
        }
    }
}

func arkitRun() async {
    do {
        try await arKitSession.run([cameraFrameProvider, worldTracking])
    } catch {
        return
    }
}

@MainActor
func processDeviceAnchorUpdates() async {
    await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90)
}

@MainActor
func cameraFrameUpdatesBuffer() async {
    guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]),
          let cameraFrameUpdates1 = cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else {
        return
    }
    for await cameraFrame in cameraFrameUpdates {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        self.pixelBuffer = mainCameraSample.pixelBuffer
    }
    for await cameraFrame in cameraFrameUpdates1 {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        if self.pixelBuffer != nil {
            self.pixelBuffer = mergeTwoFrames(frame1: self.pixelBuffer!,
                                              frame2: mainCameraSample.pixelBuffer,
                                              outputSize: CGSize(width: 1920, height: 1080))
        }
    }
}
0
0
131
Apr ’25
I need to troubleshoot Transform Drift in ARKit
Hi all, I'm currently developing a real-time object reconstruction app using ARKit. The goal is to scan large objects using ARKit's depth and transform data and generate a point cloud. However, I'm facing a major challenge: transform drift / world alignment issues. The localToWorld transform provided by ARKit frequently seems to drift or become unstable across frames. This results in misaligned point clouds even when the device is moved slowly or kept relatively still. In some cases, a static surface scanned over a few seconds results in clearly misaligned fragments, which makes it difficult to accurately stitch a multi-frame point cloud. I have experimented with various lighting conditions and object textures, but the issue persists in all cases. At times the relative error between frames reaches up to 20 cm, while in other instances the error is minimal; however, the drift gradually accumulates over time, leading to an overall enlargement of the reconstructed object. I have attached images of both cases here. Questions: Are there specific conditions under which ARKit's world transform is expected to drift? Is there a way to detect or recover from this drift during runtime? Any best practices for maintaining consistent tracking during scanning or measurement sessions?
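To make the failure mode concrete, here is the usual lift from a depth sample into world space (a generic sketch with illustrative names, not tied to a particular ARKit API); any per-frame error in the camera-to-world transform shifts that frame's entire set of points, which is exactly what shows up as misaligned fragments and slow growth of the stitched cloud:

import simd

// Back-project one depth sample into world space. Sign conventions for depth
// depend on the depth source; this sketch assumes depth is a positive distance
// along the camera's viewing direction.
func worldPoint(pixel: SIMD2<Float>,
                depth: Float,
                intrinsicsInverse: simd_float3x3,
                cameraToWorld: simd_float4x4) -> SIMD3<Float> {
    let ray = intrinsicsInverse * SIMD3<Float>(pixel.x, pixel.y, 1)   // pixel -> camera ray
    let cameraPoint = ray * depth                                     // scale by measured depth
    let world = cameraToWorld * SIMD4<Float>(cameraPoint.x, cameraPoint.y, cameraPoint.z, 1)
    return SIMD3<Float>(world.x, world.y, world.z)
}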
1
0
117
Jun ’25