Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Understanding the Distance for Physical Object Visibility in Vision Pro Immersive System
I understand that the system helps maintain user comfort by automatically adjusting the opacity of content in certain situations, such as when someone moves too quickly or gets too close to a physical object: the content in front of them dims briefly to allow a clearer view of their surroundings. I'd like to know the specific distance at which the system begins to reveal the physical object, or what criteria are used for this adjustment.
0 replies · 0 boosts · 319 views · Oct ’24
Camera settings at intrinsic calibration time
Hi everyone, I am wondering which settings the camera(s) were using at the time they were calibrated. One aspect that is easy to find is the reference resolution of the images taken during intrinsic calibration, retrievable via intrinsicMatrixReferenceDimensions; this ensures the principal point is referenced to the resolution in use when the calibration took place. However, I recently noticed that there are focus settings that can physically displace the lens, such as autoFocusRangeRestriction (none, near, far) and setFocusModeLocked(lensPosition:), which locks the lens position at the specified value and sets the focus mode to a locked state. My concern lies in the impact these lens displacements can have on the intrinsic matrix parameters: the parameters may no longer describe the camera once the lens position has changed. In simple words, what focus mode/range were the cameras set to when they were calibrated for intrinsics?
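For reference, this is roughly where those values come from; a minimal sketch, assuming a configured AVCaptureSession whose captures deliver AVCameraCalibrationData (for example via AVCapturePhoto with calibration data delivery enabled):

```swift
import AVFoundation

// Lock the physical lens position so captures match a known focus state.
// The lensPosition passed in is an arbitrary example, not a calibrated value.
func lockFocus(on device: AVCaptureDevice, at lensPosition: Float) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }
    if device.isFocusModeSupported(.locked) {
        device.setFocusModeLocked(lensPosition: lensPosition)
    }
}

// Inspect the intrinsics and the resolution they are referenced to.
func logCalibration(_ calibration: AVCameraCalibrationData) {
    let K = calibration.intrinsicMatrix                      // 3x3, column-major
    let ref = calibration.intrinsicMatrixReferenceDimensions // pixels at calibration time
    print("fx=\(K.columns.0.x), fy=\(K.columns.1.y), cx=\(K.columns.2.x), cy=\(K.columns.2.y)")
    print("reference dimensions: \(ref.width) x \(ref.height)")
}
```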
0 replies · 0 boosts · 477 views · Jan ’25
Trouble initializing SharePlay and using GroupSessionJournal
I am having trouble initializing SharePlay. It works, but we have to leave the game (click the close button) and rejoin it, sometimes several times, for it to establish the connection. I am also having trouble sharing images over SharePlay with GroupSessionJournal: I am not able to get it to transfer any amount of data, or even to get recognition on the other participants in the SharePlay that an image is being sent. We have looked at all the information we can find online and are not able to establish a connection. I am not sure if I am missing a step or if I am incorrectly sending the data through the GroupSessionJournal. Here are the steps I take to replicate the issue:
1. FaceTime another person with the app.
2. Open the app and click the SharePlay button to SharePlay it with the other person.
3. Establish the SharePlay by making sure that the board states are synchronized across participants. If they're not, click the close button, then click "open app" again to rejoin the SharePlay. (This is one of the bugs that I would like to fix; it is just a workaround we developed to establish the SharePlay. We would like it to connect as soon as you click SharePlay and the other person joins the session.)
4. Once the SharePlay has been established, change the image by clicking "change 1 image" and selecting a JPG image. The image that represents 1 should now be set. If you don't see the image, click on any of the X's in the squares and it will change to the image.
5. The image should appear for the other participant in the SharePlay. (This does not happen, and it is what we have not been able to figure out how to get working.)
Here are the classes for the example project I created: ContentView, Game Model class, Activity Manager, Main Starter class.
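For what it's worth, the journal flow that is believed to be correct looks roughly like this; a minimal sketch, assuming a GroupSession<MyActivity> already received from MyActivity.sessions() (MyActivity and the JPEG payload are placeholders):

```swift
import GroupActivities
import UIKit

// Sending: attach the image bytes to the shared journal.
// Data conforms to Transferable, so it can be added directly.
func shareImage(_ image: UIImage, over session: GroupSession<MyActivity>) async {
    let journal = GroupSessionJournal(session: session)
    guard let data = image.jpegData(compressionQuality: 0.8) else { return }
    do {
        _ = try await journal.add(data)
    } catch {
        print("Failed to add attachment: \(error)")
    }
}

// Receiving: every participant (including the sender) observes the
// attachments sequence and loads each attachment back into an image.
func observeAttachments(on journal: GroupSessionJournal) async {
    for await attachments in journal.attachments {
        for attachment in attachments {
            if let data = try? await attachment.load(Data.self),
               let image = UIImage(data: data) {
                print("Received image of size \(image.size)")
            }
        }
    }
}
```

Note the sketch creates the journal inside shareImage only for brevity; in practice you would likely create one journal per joined session and share it between the send and observe paths.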
0 replies · 0 boosts · 529 views · Sep ’24
RoomCaptureSession persistence, ARSession pause broken?
Hi all,

Our app allows a user to scan a room and then save that scan on a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.

The only way I have found that satisfies our requirements is to save the RoomCaptureView and re-use it whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to keep a view alive in something like a singleton. We are using captureSession.stop(pauseARSession: false).

Additionally, if we reuse the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs. These instructions also seem to be separate from the instructions we can grab from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant here, RoomCaptureCoachingOverlayView and ARGlyphView, but neither is public, so we can't force them to appear. We also attempted a number of other things to get these subviews to appear, such as layoutIfNeeded().

Saving the ARSession and creating a new view with it via RoomCaptureView(frame: viewBounds, arSession: arSession) seems much more ideal, as that solves the above issues, but we run into another problem: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-running ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described. The sketch below shows the pattern in question.

Is there any way around needing to reuse the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session, without re-localizing via ARWorldMap loading? And is there a way to force the guiding instructions to appear?
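For concreteness, the ARSession-reuse pattern described above looks something like this; a sketch only, with a hypothetical coordinator class, and it is the setup that appears to lose world tracking:

```swift
import UIKit
import ARKit
import RoomPlan

// Hypothetical coordinator holding one long-lived ARSession across scans.
final class ScanCoordinator {
    private let sharedARSession = ARSession()
    private var roomCaptureView: RoomCaptureView?

    func beginScan(in bounds: CGRect) -> RoomCaptureView {
        // A fresh RoomCaptureView on top of the long-lived ARSession, so
        // tracking should, in theory, carry over between scans.
        let view = RoomCaptureView(frame: bounds, arSession: sharedARSession)
        view.captureSession.run(configuration: RoomCaptureSession.Configuration())
        roomCaptureView = view
        return view
    }

    func endScan() {
        // Stop room capture but keep the underlying ARSession alive, so the
        // next scan should not need to re-localize.
        roomCaptureView?.captureSession.stop(pauseARSession: false)
    }
}
```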
0 replies · 3 boosts · 541 views · Nov ’24
Post-processing in visionOS
WWDC21 had a cool demo project with fish that had a watery, misty look ("Dive into RealityKit"). It used post-processing in RealityKit, but the ARView class isn't available on visionOS. Can CompositorLayer be used instead for post-processing in full immersion?
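In principle, CompositorLayer hands the app a LayerRenderer to drive with a custom Metal render loop, so a post-processing pass would be something you write yourself as the final stage of each frame. A heavily hedged sketch of the scene structure only (MetalRenderer is hypothetical; check CompositorServices for the exact configuration hooks):

```swift
import SwiftUI
import CompositorServices

// Configure the layer's render targets; defaults are mostly fine for post FX.
struct PostFXConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.colorFormat = .bgra8Unorm_srgb
    }
}

@main
struct UnderwaterApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            CompositorLayer(configuration: PostFXConfiguration()) { layerRenderer in
                // Hypothetical Metal loop: render the scene, then apply the
                // watery/misty pass to each frame before presenting it.
                MetalRenderer(layerRenderer).startRenderLoop()
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}
```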
0 replies · 0 boosts · 289 views · Nov ’24
New algorithm significantly improves PhotogrammetrySession?
I noticed this update in the latest macOS beta 3: "A new algorithm significantly improves PhotogrammetrySession reconstruction quality of low-texture objects not captured with the ObjectCaptureSession front end. It will be downloaded and cached once in the background when the PhotogrammetrySession is used at runtime. If network isn't available at that time, the old low quality model will be used until the new one can be downloaded. There is no code change needed to get this improved model. (145220451)" I am not noticing any difference from before in the reconstructions I tested, so I am assuming it's reverting to the old model, but the logs give no way to see whether the download of the new model succeeds or fails. Do you have any more information on what was improved here, with some examples of what we should be looking for? Also, how can I confirm that the download of the new model has not failed?
0 replies · 0 boosts · 290 views · Jul ’25
How to get the transform of a skeleton joint while it is animating
I have an entity that was created using Mixamo, and it has an animation. After the animation completes, the mesh of the robot is not where the entity is positioned. I want to do something like this: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transforms applied to any of the children of this model's root, which means the transforms are applied to the skeleton by the playing of the animations. Is there a way to apply the final position of the skeleton's root to the root entity, to make sure the entity is positioned where the animation ended, just before the next animation plays?
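One possible direction, sketched under assumptions: ModelEntity exposes jointNames and jointTransforms, and AnimationEvents.PlaybackCompleted fires when a clip ends. The "Hips" suffix is a Mixamo naming convention, not a guarantee; print jointNames to find the actual root joint.

```swift
import RealityKit

// On animation completion, fold the skeleton root's accumulated motion into
// the entity's own transform, then clear it from the joint.
final class RootMotionFixer {
    private var subscription: EventSubscription?

    func install(on model: ModelEntity, in scene: RealityKit.Scene) {
        subscription = scene.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: model) { _ in
            guard let rootIndex = model.jointNames.firstIndex(where: { $0.hasSuffix("Hips") }) else { return }
            let rootPose = model.jointTransforms[rootIndex]  // pose relative to the joint's parent
            // Carry the root motion over to the entity itself...
            model.transform.translation += rootPose.translation
            // ...then zero the joint offset so the mesh doesn't move twice.
            model.jointTransforms[rootIndex].translation = .zero
        }
    }
}
```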
0 replies · 0 boosts · 91 views · May ’25
RoomPlan - The delegate of ARSession is retaining x ARFrames
Hi, I'm encountering an issue in our app that uses RoomPlan and ARSession for scanning. After prolonged use, especially under heavy load from both the scanning process and other unrelated app operations, the iPhone becomes very hot and the following warning begins to appear more frequently: "ARSession <0x107559680>: The delegate of ARSession is retaining 11 ARFrames. The camera will stop delivering camera images if the delegate keeps holding on to too many ARFrames. This could be a threading or memory management issue in the delegate and should be fixed." I was able to reproduce this behavior using Apple's RoomPlanExampleApp with only one change: I introduced a CPU-intensive workload at the end of the startSession() function:

```swift
DispatchQueue.global().asyncAfter(deadline: .now() + 5) {
    for _ in 0..<4 {
        var value = 10_000
        DispatchQueue.global().async {
            while true {
                value *= 10_000
                value /= 10_000
                value ^= 10_000
                value = 10_000
            }
        }
    }
}
```

I suspect this is a RoomPlan API problem, which is why I filed a feedback: 17441091
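The warning itself usually points at the classic ARKit pitfall of holding on to ARFrames in a session delegate. That may not be the root cause here (the repro above starves the CPU instead), but for reference, a minimal sketch of the frame-dropping pattern that avoids the retention build-up (class name hypothetical):

```swift
import ARKit

// Sketch of the usual fix for "delegate is retaining ARFrames": never store
// or capture the ARFrame itself. Copy the small values you need and drop
// frames while the previous one is still being processed.
final class FrameHandler: NSObject, ARSessionDelegate {
    private let workQueue = DispatchQueue(label: "frame.work")
    private var busy = false  // only touched on the session's delegate queue

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard !busy else { return }  // drop, don't queue: queued closures retain frames
        busy = true
        let cameraTransform = frame.camera.transform  // simd value type, safe to copy
        workQueue.async { [weak self] in
            // ... heavy work using only the copied values ...
            _ = cameraTransform
            (session.delegateQueue ?? .main).async { self?.busy = false }
        }
    }
}
```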
0 replies · 0 boosts · 144 views · May ’25
Apple Vision Pro - hand tracking with gloves
I am considering adding finger-pad haptics (the data flow for haptic feedback is directed from the AVP to the fingers, not vice versa): simple piezos wired to a wrist-worn connection holding the driver/battery. But I'm concerned it will impact the hand tracking. Any guidance regarding gloves and/or the size of any peripherals attached to the fingers? Or, if anyone has another inexpensive, low-profile option on the market, please let me know. Thanks
0 replies · 0 boosts · 211 views · Feb ’25
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window-jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when you look at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
0 replies · 0 boosts · 402 views · Nov ’24
VisionOS Main Camera Enterprise API: Development license into distribution for Business Store
Hello, We've been working for months now on an app for the Vision Pro (it's been great, btw!). We already have an app in the App Store for iOS, and have been migrating our platform from the Microsoft HoloLens 2 to the AVP: https://apps.microsoft.com/detail/9NPPP031VHD1 We require Main Camera access and have already obtained the Enterprise license for development purposes. Unfortunately, we cannot publish our business app (which uses an Enterprise API) under the same name/bundle ID as our iOS app, because it would conflict with our current distribution method. We arrived at the conclusion that we need a new Enterprise license under a different bundle ID to create a new app for the Business Store. Has anyone been in the same boat as us, and tried to publish to the Business Store while already having an app in the public App Store under the same name? We applied for another license for distribution under another name (with "Pro" at the end), but it's been stuck in limbo for over a month now (probably because the new bundle ID doesn't have any track record). Anyhow, thanks for any help; we're open to suggestions as to how to proceed!
0 replies · 0 boosts · 427 views · Feb ’25
Any recommended content-aware compression strategy for .ktx textures in Reality Composer Pro?
In my Reality Composer Pro workflow for Vision Pro development, I'm using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets. I've noticed that regardless of the image content, whether it's a highly detailed photo or a completely black image, once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity. In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
- Low-complexity images are compressed more aggressively
- The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there's no opportunity for runtime optimization or per-image compression adjustment. Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro? Or any best practices to optimize .ktx sizes based on image complexity? Thanks!
0 replies · 0 boosts · 84 views · Jun ’25
How can I implement the expand effect when clicking a contact's avatar, like in the visionOS Messages app?
I found that the system Messages app has a click-to-expand effect, where the avatar expands horizontally and smoothly, and it looks great. I wonder if I can add a similar effect to my own app. If possible, are there any implementation ideas or examples that I can refer to? Thanks!
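One common way to build this kind of transition in SwiftUI is matchedGeometryEffect, which animates a view between a compact and an expanded layout; a small sketch (sizes and styling are arbitrary, and whether Messages uses this exact technique is unknown):

```swift
import SwiftUI

// Tapping toggles between two layouts of the "same" circle; SwiftUI
// interpolates position and size between them.
struct AvatarExpandView: View {
    @Namespace private var avatarNamespace
    @State private var isExpanded = false

    var body: some View {
        VStack {
            if isExpanded {
                Circle()
                    .fill(.blue)
                    .matchedGeometryEffect(id: "avatar", in: avatarNamespace)
                    .frame(width: 240, height: 240)
            } else {
                Circle()
                    .fill(.blue)
                    .matchedGeometryEffect(id: "avatar", in: avatarNamespace)
                    .frame(width: 44, height: 44)
            }
        }
        .onTapGesture {
            withAnimation(.spring()) { isExpanded.toggle() }
        }
    }
}
```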
0 replies · 0 boosts · 580 views · Oct ’24
Export camera positions from the Object Capture API
Hi, I would like to train Gaussian splats from my object captures, so I need a point cloud and camera positions, together with the original photos taken, to train the splats in an app like Postshot. I could do this with RealityCapture, which supports exporting point clouds and camera positions, but it does not do well with turntable photogrammetry, while the Apple Object Capture API produces really solid results with turntable images. So my question is: can I export camera data from my object captures to use in another application? Or is there maybe a plan to add this feature in the future? It would be really helpful in creating ultra-realistic 3D objects in Gaussian splat format. Thanks for any suggestions…
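If memory serves, newer PhotogrammetrySession revisions (macOS 14-era) added pose and point-cloud request types; treat the sketch below as an assumption to verify against your SDK rather than a confirmed answer:

```swift
import Foundation
import RealityKit

// Hedged sketch: request camera poses alongside the model and print them
// as they arrive. The .poses request/result is the assumed API surface.
func reconstructWithPoses(imagesFolder: URL, modelURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)
    try session.process(requests: [
        .poses,                        // estimated camera transforms
        .modelFile(url: modelURL)
    ])
    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, .poses(let poses)):
            print("camera poses: \(poses)")  // transforms to export for Postshot
        case .requestComplete(_, .modelFile(let url)):
            print("model written to \(url)")
        default:
            break
        }
    }
}
```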
0 replies · 1 boost · 342 views · Dec ’24
AR session fails with "Required sensor failed"
The AR-based app I am working on right now is experiencing an issue. Sometimes the AR session fails, with a call to my ARSessionObserver's session(_ session: ARSession, didFailWithError error: Error) carrying the following error: Error Domain=com.apple.arkit.error Code=102 "Required sensor failed." NSLocalizedFailureReason="A sensor failed to deliver the required input." NSLocalizedRecoverySuggestion="Make sure that the application has the required privacy settings." The underlying error seems to point to the CoreMotion framework: Domain=CMErrorDomain Code=102 "(null)". Some people seem to have experienced this issue and solved it by making sure that the Compass Calibration switch is ON in Settings > Privacy > Location Services > System Services. For context, ARWorldTrackingConfiguration.worldAlignment is set to .gravity. The thing is, that switch is already ON when I experience this issue. I also noticed that this issue happens far more often on the iPhone 16e than on any other device. Has anyone had similar experiences? I am looking for a way to prevent this error from happening (ideally), or to handle it in a way that does not affect the user. Any help is appreciated.
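On the handling side, one hedged option is to watch for ARError.Code.sensorFailed and re-run the session once; a sketch only, where the recovery delay and retry policy are arbitrary choices:

```swift
import ARKit

// Hedged recovery sketch: on sensorFailed (code 102), re-run the session
// after a short pause instead of leaving AR in a dead state.
final class SessionRecoveryHandler: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didFailWithError error: Error) {
        guard let arError = error as? ARError, arError.code == .sensorFailed else { return }
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
            let configuration = ARWorldTrackingConfiguration()
            configuration.worldAlignment = .gravity  // matches the setup above
            session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
        }
    }
}
```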
0 replies · 0 boosts · 154 views · Aug ’25
High CPU with ARWorldTrackingConfiguration vs. ARBodyTrackingConfiguration using AREnvironmentTexturing
In a simple test, I'm observing ~30% higher CPU usage with ARWorldTrackingConfiguration compared to ARBodyTrackingConfiguration when both configurations have AREnvironmentTexturing enabled. In Instruments, I observe Recon3D consuming ~5.5 seconds of CPU time with ARWorldTrackingConfiguration vs. <0.3 seconds with ARBodyTrackingConfiguration, in two separate 30-second samples. This is on an iPhone 12 Pro equipped with LiDAR. Is there a reason why two separate configurations, both having the same features enabled, would have such different CPU overhead?
0 replies · 0 boosts · 122 views · Aug ’25
Keep Apple Vision Pro Awake
I have two Apple Vision Pros so that I can make and test a multiplayer immersive-reality game. But I am one developer, so I need to be able to take one Apple Vision Pro off and put the other one on, to see what the other device is seeing and to ensure my game information is correctly being sent over the network with Multipeer Connectivity. But when I take one off, the Apple Vision Pro immediately goes to sleep. With visionOS 1, I could put a piece of paper into the Pro and it would stay on for hours, and I could take the devices on and off and debug my game. But now with visionOS 2, even with the paper they soon go to sleep. Is there a setting I can change or override as a developer to stop this auto sleep or auto lock? I need to check things like:
- When two devices are on the network, can I see them both so that players can select each other from a menu?
- Can I send object positions back and forth?
Thank you.
0 replies · 2 boosts · 571 views · Oct ’24