Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic

Post

Replies

Boosts

Views

Activity

visionOS DockingRegion getting ignored
Hi, I added a DockingRegion to my scene in Reality Composer Pro, and I am able to load the scene, but the DockingRegion is being ignored: the scene renders with no change to the AVPlayerViewController window. As can be seen in the Reality Composer Pro screenshot below, I set the width of the player to 666 and moved it back by 300 cm, but the actual result does not reflect the position I set in Reality Composer Pro. Is there anything else I should do besides loading the Entity and adding it to the RealityView? Specifically, do I have to get the DockingRegion from within the .usda file and somehow enable it?
3
0
478
Dec ’24
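For reference, a minimal sketch of the load-and-add pattern the post above describes, assuming a Reality Composer Pro package exposed as "RealityKitContent" and a scene named "Immersive" (both hypothetical names); it shows only the loading step the post asks about, not a confirmed fix for the docking itself:

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveSceneView: View {
    var body: some View {
        RealityView { content in
            // The DockingRegion must be part of the loaded hierarchy;
            // loading the scene root brings it in with everything else.
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
    }
}

One thing worth ruling out: if the AVPlayerViewController is presented before the immersive scene containing the DockingRegion has finished loading, the region may simply not be active yet at presentation time.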
Animation handling on Scene change
I work on a game where I use timeline animations from Reality Composer Pro. The game runs in an immersive space, but it can be paused, at which point I move the whole level root entity from the immersive space to another RealityView in a WindowGroup. When the player continues, I do exactly the reverse and move the level root from the WindowGroup back to my immersive space RealityView. It seems all animations get automatically stopped and restarted when the scene changes. The problem is that an animation does not resume where it stopped; it restarts from the position where it was interrupted rather than resuming along its original path, and therefore ends up with a wrong y offset, as visible in the picture. For example, in the picture, the yellow sphere loops the following animation: 0 to 100, 100 to -100, -100 to 0. If I now pause the game (and thereby switch scenes) while the sphere is at y = 100, the previous animation gets stopped and restarted at that position, so it now loops: 100 to 200, 200 to 0, 0 to 100.

I already tried all kinds of setups, such as:

- Setting the animations relative to root, parent, or local
- Using behaviors (On Added to Scene, On Notification)
- Accessing the availableAnimations directly and saving the playback controller of the animation

With the last approach I saw that if I manually trigger the following code before switching the scene, everything works as expected:

Button("Reset") {
    animationPlaybackController.time = 0
    animationPlaybackController.pause()
    animationPlaybackController.stop(blendOutDuration: 0.00001)
}

But if I use time = 0 together with .stop() directly, the time = 0 seems to be ignored and I get the same behavior as before, where the animation stops at a wrong y offset; hence my assumption that animations get stopped and invalidated once they change scene. I tried calling the code manually in ImmersiveSpace.onDisappear, WindowGroup.onAppear, and various SceneEvents subscriptions, but unfortunately nothing worked. Am I doing something wrong in general, or is there a way to fix this?
0
0
55
Apr ’25
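A hedged sketch of the capture-and-restore pattern the post above experiments with. The unverified assumption is that a fresh playAnimation call after the scene move accepts a seek to the saved time; "levelRoot" is a hypothetical entity name:

import RealityKit

final class AnimationStateKeeper {
    private var savedTime: TimeInterval = 0

    // Call just before moving the entity to the other scene.
    func capture(_ controller: AnimationPlaybackController) {
        savedTime = controller.time
        controller.pause()
    }

    // Call after the entity has been re-added to the new scene.
    func restore(on levelRoot: Entity) {
        guard let animation = levelRoot.availableAnimations.first else { return }
        // Start paused, seek to the saved time, then resume, so the clip
        // continues along its original path instead of rebasing its offset.
        let controller = levelRoot.playAnimation(animation.repeat(duration: .infinity),
                                                 startsPaused: true)
        controller.time = savedTime
        controller.resume()
    }
}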
Materials and shaders fail to render correctly on visionOS devices when developing with Unity
Specifics: the materials display correctly in the Unity editor, but after deploying to a physical Vision Pro, some materials are lost or shader effects are wrong (for example, the transparency channel fails or lighting calculations are incorrect). This problem is blocking development progress, and I hope to get help from technical support.
0
0
46
Apr ’25
Attenuation map covers over objects
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just covers the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
1
0
69
Apr ’25
Vision Pro camera frame rate
Hi, I'm working with CameraFrameProvider from the Enterprise API. Is it always capped at 30 fps, or is there something I can switch to get more? I assume it is capped at 30, so let me cram in an additional question here :). If I get a developer strap and attach an external camera capable of more than 30 fps, will I get the full stream, or will some other limitation kick in?
2
0
88
Apr ’25
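A hedged way to answer the first question empirically is to dump the video formats the provider reports, as sketched below; whether anything above 30 fps appears is then visible directly. This assumes the Enterprise entitlement is in place and follows the visionOS 2 SDK shape of this API:

import ARKit

func inspectCameraFormats() async throws {
    // List every video format offered for the main camera and print what
    // each one reports (resolution and any rate information it exposes).
    let formats = CameraVideoFormat.supportedVideoFormats(for: .main,
                                                          cameraPositions: [.left])
    for format in formats {
        print(format)
    }

    // Stream frames using one of the reported formats.
    guard let format = formats.first else { return }
    let provider = CameraFrameProvider()
    let session = ARKitSession()
    try await session.run([provider])
    guard let updates = provider.cameraFrameUpdates(for: format) else { return }
    for await frame in updates {
        _ = frame.sample(for: .left)?.pixelBuffer  // one CVPixelBuffer per frame
    }
}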
Cursor display issue on attachment view in immersive space
While using Screen Mirroring in developer mode within my immersive space, I noticed an alignment issue with the computer cursor (the transparent circle). When I move it toward an attachment view, the cursor remains horizontal instead of aligning with the surface of the attachment view. It displays correctly on a 2D window; it is only wrong on an attachment view. Is this behavior a bug, or could it be caused by a missing or incorrect configuration on the attachment view? Any help is appreciated, thanks.
1
0
72
Apr ’25
QuickLook and .heic
I"m trying to create a simple app for my students that will display .heic images taken with a nikon and them converted to .heic in the photos app. My attempts only result in the QuickLook viewer showing the images in 2d. Any guidance? Here is my ContentView: import SwiftUI import QuickLook struct ContentView: View { @State private var showQuickLook = false @State private var previewURL: URL? = nil // State to store the URL for Quick Look var body: some View { VStack { Button("See it in 3D") { // Set the URL for the file from the bundle and toggle Quick Look presentation if let imageURL = Bundle.main.url(forResource: "Michelia_fuego", withExtension: "heic") { previewURL = imageURL // Set the preview URL if the image is found showQuickLook.toggle() // Toggle to trigger Quick Look presentation } else { print("File not found") // Print error if the file is missing } } .quickLookPreview($previewURL) // Binding to the URL } } } #Preview { ContentView() }
2
0
712
Oct ’24
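One likely explanation for the post above, offered as an assumption rather than a confirmed diagnosis: Quick Look shows a photo in 3D only when the HEIC actually contains a stereo pair, and a flat Nikon photo converted in the Photos app remains a single-image HEIC. A quick sanity check with ImageIO:

import Foundation
import ImageIO

// A spatial photo HEIC carries two images (left/right eye); a flat photo
// carries one. If this prints 1, Quick Look has nothing 3D to display.
func imageCount(at url: URL) -> Int {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return 0 }
    return CGImageSourceGetCount(source)
}

if let url = Bundle.main.url(forResource: "Michelia_fuego", withExtension: "heic") {
    print("images in file: \(imageCount(at: url))")
}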
Apply mesh to real world people.
As far as I know, Apple hasn’t opened access to the Vision Pro camera for developers yet, so I’m trying to find possible workarounds within the current capabilities. I’m wondering if there’s any way to apply a mesh to a person in the scene in Vision Pro, or if there’s an alternative approach to roughly detect a human shape in front of the user?
2
0
83
Apr ’25
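A hedged sketch of one workaround under current APIs for the post above: SceneReconstructionProvider produces a coarse mesh of the surroundings that can be given collision shapes and raycast against. It is explicitly not person-aware and updates slowly for moving subjects, so treat this as occupied-space detection at best:

import ARKit
import RealityKit

@MainActor
final class ReconstructionMap {
    private let session = ARKitSession()
    private let provider = SceneReconstructionProvider()
    private var meshEntities: [UUID: ModelEntity] = [:]
    let root = Entity()  // Add this to your RealityView content.

    func run() async throws {
        try await session.run([provider])
        for await update in provider.anchorUpdates {
            let anchor = update.anchor
            switch update.event {
            case .added, .updated:
                // Build a collision shape from the reconstruction mesh so
                // rays can report whether something occupies that space.
                let shape = try await ShapeResource.generateStaticMesh(from: anchor)
                let entity = meshEntities[anchor.id] ?? ModelEntity()
                entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
                entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
                if meshEntities[anchor.id] == nil {
                    meshEntities[anchor.id] = entity
                    root.addChild(entity)
                }
            case .removed:
                meshEntities[anchor.id]?.removeFromParent()
                meshEntities[anchor.id] = nil
            }
        }
    }
}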
How to insert modified video frames into the system camera to achieve an AI erase effect?
By applying for the Enterprise API, we can obtain the video frames captured by the Vision Pro cameras, and we then process those frames to remove a certain object. But we have not found a way to insert the processed frames back into the data source captured by the system camera. Is there an API that can insert processed video frames into the original stream and present them to the user? The effect we are after is similar to turning the crown on the right side of Vision Pro, which lets the physical world and digital space blend seamlessly.

STEPS TO REPRODUCE
1. Obtain video frames.
2. Process the obtained video frames.
3. Insert the processed video frames into the visionOS system camera feed.

System: visionOS 2.0
APIs used: Enterprise APIs, main camera access permissions
1
0
554
Nov ’24
Convert entity to image
Vision Pro development with Xcode: I want to display an image of a model in a window. The model can change its material, so it does not have a single fixed appearance; it keeps changing. How can I convert this model to an image?
1
0
55
Apr ’25
USDZ file not loading on Apple Vision Pro
I'm working on a school project that lets users open a .USDZ file (using Quick Look) on a webpage while using Apple Vision Pro to place the object in their physical environment; the project is deployed on Vercel. I'm testing the page with my Apple Vision Pro: when I tap to open the .USDZ file, I see a triangle with an exclamation mark while it tries to load, but it never loads. Does anybody know how to troubleshoot this issue?
4
0
928
Oct ’24
visionOS App Not Receiving Latest Heart Rate Data from HealthKit
Hello everyone, I'm developing an app for visionOS that uses HealthKit to query heart rate data. However, I'm encountering an issue where the app doesn't retrieve the latest heart rate values. Specifically, it fails to get live heart rate data even after the data has been saved to the Health app. The readings my app displays are outdated and do not match the current values shown in the Health app.

Here's what I've tried so far:

- Fetching heart rate samples: used HKSampleQuery and HKAnchoredObjectQuery to fetch the most recent heart rate samples. Despite this, the data retrieved is still not up to date.
- Checking permissions: ensured that all necessary HealthKit permissions are granted. The app has authorization to read heart rate data and write workout data.

My questions are:

- Is there a known issue or limitation with HealthKit on visionOS that prevents apps from accessing the latest heart rate data?
- Are there additional steps or configurations required to access live heart rate data in visionOS apps?
- Has anyone successfully implemented live heart rate monitoring on visionOS, and if so, could you share how you achieved it?
0
0
458
Sep ’24
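For the post above, a hedged sketch of the standard live-observation pattern: a long-running HKAnchoredObjectQuery whose updateHandler fires as new samples arrive. One caveat worth noting: heart-rate samples usually originate on a paired Apple Watch and are committed to HealthKit in batches, so without a workout session driving frequent delivery, the newest readings can legitimately lag the Health app.

import HealthKit

final class HeartRateObserver {
    private let store = HKHealthStore()
    private var anchor: HKQueryAnchor?

    // Assumes read authorization for heart rate was already requested.
    func start() {
        guard let type = HKObjectType.quantityType(forIdentifier: .heartRate) else { return }
        let query = HKAnchoredObjectQuery(type: type,
                                          predicate: nil,
                                          anchor: anchor,
                                          limit: HKObjectQueryNoLimit) { [weak self] _, samples, _, newAnchor, _ in
            self?.handle(samples, newAnchor: newAnchor)  // initial batch
        }
        // Keep receiving updates after the initial results are delivered.
        query.updateHandler = { [weak self] _, samples, _, newAnchor, _ in
            self?.handle(samples, newAnchor: newAnchor)
        }
        store.execute(query)
    }

    private func handle(_ samples: [HKSample]?, newAnchor: HKQueryAnchor?) {
        anchor = newAnchor
        let bpm = HKUnit.count().unitDivided(by: .minute())
        for case let sample as HKQuantitySample in samples ?? [] {
            print("HR \(sample.quantity.doubleValue(for: bpm)) bpm at \(sample.endDate)")
        }
    }
}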
Eye Difference in Object Tracking
Hi all, I am having trouble debugging an error where the wireframe object entity representation for the Object Tracking Demo ("Explore object tracking for visionOS") appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing world coordinates, but this moves the object in both the left and the right eye. Could this be due to the new visionOS beta update (2.0 → 2.2)? I am currently using visionOS 2.2. Thanks!
1
0
375
Nov ’24
How to obtain a video stream from Vision Pro via the Enterprise API that includes both the physical world and digital space
We have successfully obtained the "Main Camera Access" and "Passthrough in Screen Capture" permissions from Apple. Currently, the video streams we receive show only the physical world and do not include the digital world. How can we obtain video streams that cover both the physical and digital worlds? Thank you!
0
0
53
Apr ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer TabletopKit game, the documentation describes attaching a custom TabletopNetworkSessionCoordinator, which could be used in addition to TabletopGame.MultiplayerDelegate. But so far we have been unable to create one of these custom coordinators or get a delegate that works. Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we have not found a way to access and control, for example, the arbiter, seats, or player events, among other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
2
0
199
Apr ’25
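For context, a hedged sketch of the wiring the post above describes: handing each GroupSession to TabletopKit via the coordinateWithSession method it mentions, while keeping a GroupSessionMessenger of your own for traffic TabletopKit does not surface. "TabletopActivity" is a hypothetical activity type, and whether a parallel messenger is the intended pattern is exactly the open question:

import GroupActivities
import TabletopKit

struct TabletopActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "My Tabletop Game"
        meta.type = .generic
        return meta
    }
}

func observeSessions(for game: TabletopGame) async {
    for await session in TabletopActivity.sessions() {
        // Keep a messenger alongside TabletopKit's own coordination for
        // custom events (seat changes, arbiter decisions, and so on).
        let messenger = GroupSessionMessenger(session: session)
        _ = messenger
        game.coordinateWithSession(session)
        session.join()
    }
}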
Distance calculation: how to implement the same distance-calculation functionality from my Unity code in SwiftUI on visionOS
I use this piece of code in Unity to get the length by which my model penetrates another model. I have set collision markers at the tip and end of the model and performed raycasting, but Unity currently does not support object tracking on visionOS, so I plan to do native development with SwiftUI. In Reality Composer Pro I haven't seen a collision-editing feature like Unity's; I can only set the size of the collision body, but I cannot manually adjust or visualize its shape. I want to achieve similar functionality in SwiftUI: to calculate and display the distance that my model A, like a needle or ruler, penetrates into another model or into the interior of a physical object. Is similar functionality available, or is there another way to code this?

void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);

    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
1
0
641
Nov ’24
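A hedged RealityKit equivalent of the Unity snippet above: cast a ray from the probe base toward the tip and measure how far past the first hit the tip reaches. "probeBase" and "probeTip" are assumed entity names, and the target model needs a CollisionComponent; since Reality Composer Pro only offers simple collision shapes, an accurate surface may require generating the shape in code (for example with ShapeResource.generateStaticMesh):

import RealityKit
import simd

func lengthInsideOrgan(scene: RealityKit.Scene,
                       probeBase: Entity,
                       probeTip: Entity) -> Float {
    // World-space positions of the probe's two markers.
    let base = probeBase.position(relativeTo: nil)
    let tip = probeTip.position(relativeTo: nil)
    let probeLength = simd_distance(base, tip)

    // Nearest collision along the probe's axis.
    let hits = scene.raycast(from: base, to: tip, query: .nearest, relativeTo: nil)
    guard let first = hits.first else { return 0 }
    return probeLength - first.distance
}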
How to achieve a "concave" (pushed-in) glass view look?
I have been trying to implement a look where a component appears "pushed in", but I could not find any resources on this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but that makes the view look pushed out instead of pushed in. Is this achievable at the SwiftUI level, or even at the UIKit level?
1
0
87
Apr ’25
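A hedged sketch of one plain-SwiftUI approximation for the post above: an inner shadow on the shape's fill, which reads as concave, the inverse of the raised look that .glassBackgroundEffect() produces. The colors, radii, and offsets here are arbitrary starting values to tune:

import SwiftUI

struct ConcaveWell: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 24)
            .fill(
                // A dark inner shadow on one edge plus a faint light one on
                // the opposite edge suggests a recessed surface.
                Color.black.opacity(0.25)
                    .shadow(.inner(color: .black.opacity(0.6), radius: 8, x: 0, y: 4))
                    .shadow(.inner(color: .white.opacity(0.2), radius: 2, x: 0, y: -2))
            )
            .frame(width: 240, height: 120)
    }
}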