Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic


Reality Composer Pro Performance on iOS
I need help wrapping my head around this. If I import the Reality Composer Pro package and load it into an ARView, I see about 1.3 GB of memory usage and 180–220% CPU usage. The frame rate starts at around 60 FPS, then eventually drops to around 30 FPS.

If I export the USDZ from Reality Composer Pro and load that into the same ARView, I see about 1 GB of memory usage and around 150% CPU usage; the frame rate holds at 60 FPS longer, but eventually drops.

If I load that same USDZ into a QuickLook view, I see about 55 MB of memory usage, 9–11% CPU, and the frame rate stays locked at 116 FPS. The only difference I notice is that my button is slightly less responsive, but everything still works fine.

I don't understand the gap. How can I make the ARView work as efficiently as QuickLook?
Replies: 0 · Boosts: 0 · Views: 203 · Activity: Mar ’25
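
For context on the comparison above, a minimal sketch of the two loading paths, assuming a local `usdzURL`; QuickLook runs its own out-of-process, heavily optimized renderer, which is part of why its in-app cost looks so different from an ARView hosting the same asset:

```swift
import RealityKit
import QuickLook

// Path 1: load the exported USDZ into an ARView's scene. The RealityKit
// render loop (and the ARKit session the view runs) are billed to the app.
func load(into arView: ARView, usdzURL: URL) throws {
    let entity = try Entity.load(contentsOf: usdzURL)
    let anchor = AnchorEntity(world: SIMD3<Float>(0, 0, 0))
    anchor.addChild(entity)
    arView.scene.addAnchor(anchor)
}

// Path 2: hand the same USDZ to QuickLook, which renders it out of process.
final class PreviewDataSource: NSObject, QLPreviewControllerDataSource {
    let usdzURL: URL
    init(usdzURL: URL) { self.usdzURL = usdzURL }
    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }
    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        usdzURL as NSURL
    }
}
```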
Hidden window/volume system overlays in Full Space
When I show a window while a sky sphere is displayed, the handles to drag, close, and resize the window are hidden. The colliders still work, so the handles are there; only their visuals are hidden. I already know from another project that the same thing happens to volumes. The handles only appear once you get closer to the window, or once the sky sphere is removed. Is this a known issue, or is there a fix for it? .persistentSystemOverlays(.visible) does not fix it. Xcode 16.3.0 beta, visionOS 2.4.
Replies: 5 · Boosts: 0 · Views: 257 · Activity: Mar ’25
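
For reproduction purposes, a minimal sketch of the modifier the post above already tried, applied at the view level; the view name is a placeholder:

```swift
import SwiftUI

struct FullSpaceWindow: View {
    var body: some View {
        Text("Window shown while the sky sphere is up")
            // Reported above as not restoring the drag/close/resize chrome
            // while a sky sphere is visible in the full space:
            .persistentSystemOverlays(.visible)
    }
}
```

If the modifier genuinely has no effect here, a bug report with the Xcode 16.3 beta / visionOS 2.4 details is the likely next step.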
Custom component question
I created a custom component for Composer Pro in which I have several variables I need an entity to have. The idea is to add this component to some 3D models, save them as USDZs, then load those USDZs in code and do specific things depending on the variables. The component shows up fine in Composer Pro and I can set the variables there. The problem is that the values I set in Composer Pro differ from what I read in code: say I set canMove = true in Composer Pro; when I read it in code, it is false. I don't know if I'm missing something.

```swift
public struct MyObjectComponent: Component, Codable {
    public var affectAll: Bool = false
    public var affectFloor: Bool = false
    public var canMove: Bool = false
    public var moveX: Bool = false
    public var moveY: Bool = false
    public var moveZ: Bool = false
    public var canRotate: Bool = false
    public var rotateX: Bool = false
    public var rotateY: Bool = false
    public var rotateZ: Bool = false

    public init() { }
}
```

Any help appreciated. Guillermo
Replies: 4 · Boosts: 0 · Views: 665 · Activity: Nov ’24
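
One commonly reported cause of the symptom above is loading the scene before the custom component type has been registered; a minimal sketch, assuming registration is the missing step (`usdzURL` is a placeholder):

```swift
import Foundation
import RealityKit

func loadConfiguredModel(from usdzURL: URL) throws {
    // Register the custom component once, early (e.g. at app launch),
    // before loading any scene that uses it; otherwise the authored
    // values may not be decoded and the struct's defaults (false) win.
    MyObjectComponent.registerComponent()

    let root = try Entity.load(contentsOf: usdzURL)
    // Note: the component may sit on a child entity rather than the root.
    if let settings = root.components[MyObjectComponent.self] {
        print(settings.canMove) // expected to match the value set in Composer Pro
    }
}
```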
Potential bug in Anchor updates on visionOS using the ARKit C API
I have an application running on visionOS 2.0 that uses the ARKit C API to create anchors and listen for updates. I am running an ARKit session with a WorldTrackingProvider (and a CameraFrameProvider, if that is relevant), and registering a callback using ar_world_tracking_provider_set_anchor_update_handler_f. When updates arrive, I iterate over the updated anchors using ar_world_anchors_enumerate_anchors_f.

Then, as described in the https://developer.apple.com/documentation/visionos/tracking-points-in-world-space documentation, I walk around and hold down the Digital Crown to reposition the current space. This resets the world origin to my current position, and anchor updates arrive. In most cases the updates carry the new transform (via ar_world_anchor_get_origin_from_anchor_transform), but sometimes an update reports the anchor's transform from before the world origin was repositioned, meaning the world anchor moves relative to me instead of staying in place in the physical world.

I can work around this by calling ar_world_tracking_provider_copy_all_world_anchors_f, which gives me the correct transform, but this async call adds noticeable delay to the anchor updates. Is this a known issue?
Replies: 0 · Boosts: 0 · Views: 378 · Activity: Sep ’24
Post processing on visionOS
WWDC21 had a cool demo project with fish and a watery, misty look ("Dive into RealityKit"). It used post processing in RealityKit, but the ARView class isn't available on visionOS. Can CompositorLayer be used instead for post processing in full immersion?
Replies: 0 · Boosts: 0 · Views: 289 · Activity: Nov ’24
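
A structural sketch of the CompositorLayer route, with the Metal render loop elided; this is an assumption about the approach, not a confirmed equivalent of ARView's post-process hooks:

```swift
import SwiftUI
import CompositorServices

@main
struct PostProcessApp: App {
    var body: some Scene {
        ImmersiveSpace(id: "MetalSpace") {
            CompositorLayer { layerRenderer in
                // Spawn a thread that queries frames from layerRenderer and
                // encodes Metal commands. Because the app owns the whole
                // pipeline here, a post-process pass is simply: render the
                // scene to an offscreen texture, then draw a fullscreen
                // watery/misty shader pass into each frame's drawable.
                let thread = Thread {
                    // render loop goes here
                }
                thread.name = "RenderThread"
                thread.start()
            }
        }
    }
}
```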
How to Move and Rotate WindowGroup with Code in Xcode
When I enter the mixed space, a model appears, but there is a WindowGroup behind the model that cannot be fully seen. When I tap to enter the mixed space, I need to move the WindowGroup to another position in code rather than dragging it manually. How can this be done? (Attachment: WechatIMG31.jpg — https://developer.apple.com/forums/content/attachment/0471ead0-4c74-43a7-9ecc-12e67e81cec6)
Replies: 0 · Boosts: 0 · Views: 42 · Activity: Mar ’25
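
As far as I know, visionOS does not let an app freely move or rotate a window that is already open; what visionOS 2 does offer is choosing the placement when the window opens. A hedged sketch using defaultWindowPlacement (ContentView and the window id are placeholders):

```swift
import SwiftUI

@main
struct MixedSpaceApp: App {
    var body: some Scene {
        WindowGroup(id: "main") {
            ContentView()
        }
        // Position the window when it opens, e.g. as a small utility
        // panel near the user instead of the default spot behind the model.
        .defaultWindowPlacement { content, context in
            WindowPlacement(.utilityPanel)
        }
    }
}
```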
Control Player Camera with PS5 Controller on Vision Pro
I recently completed a freelance project where I was tasked with creating room-scale environments that could be used as AR elements. As a bonus, I suggested these could be done to scale and repurposed for eventual viewing on Vision Pro. To illustrate, I was able to quickly create a simple Immersive project in Xcode, add the USDZ file (authored in Maya, with baked lighting from Arnold) to Reality Composer Pro, and compile for quick sending to the headset. I would then do screen recordings inside the immersive space, which the client loved to see. However, I am unable to walk around due to the boundary limitations.

My next obvious thought is: how can I set up the "player" camera so that I can control it with a PS5 controller inside AVP? In addition to Maya, I'm an Unreal Engine artist and have been waiting patiently to get projects compiled for AVP. With the 5.5 release, I was able to get a VR Template test over to AVP, where I have rudimentary navigation control via the PS5 controller. Ideally, I'd also love to learn how to set this up natively, so I can take simple USDZ scenes created in Maya, import them into RCP, set up a simple camera controller, and then use it to navigate my VR immersive spaces on Vision Pro. How can we go about doing this?

Part two of this question/suggestion: how would I go about controlling a rigged, animated character in AR/passthrough mode in a similar fashion? Thx!
Replies: 0 · Boosts: 0 · Views: 523 · Activity: Nov ’24
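
On the native side, GameController input is straightforward to read; the non-obvious part is that visionOS has no app-drivable camera in a RealityKit immersive space, so "moving the player" is typically simulated by moving an entity that parents the scene content. A sketch, with names and sign conventions as assumptions:

```swift
import GameController
import RealityKit

// Call once per frame, e.g. from a RealityKit System's update.
func updatePlayerRig(_ rig: Entity, deltaTime: Float) {
    guard let pad = GCController.controllers().first?.extendedGamepad else { return }
    let x = pad.leftThumbstick.xAxis.value   // strafe
    let z = pad.leftThumbstick.yAxis.value   // forward / back
    let speed: Float = 1.5                   // metres per second
    // Moving the content rig opposite to the stick reads as the player
    // walking through the scene; flip signs to taste.
    rig.position -= SIMD3<Float>(x, 0, -z) * speed * deltaTime
}
```

A fully custom camera (as in the Unreal VR template) would instead mean rendering with Metal via Compositor Services rather than RealityKit.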
Start Metal3 and visionOS in Compositor Services
I am seeking a comprehensive pathway for learning Metal programming on visionOS. The official documentation's Metal pathway is insufficient in this regard, so I kindly request that someone lay out a detailed pathway covering the following key areas:

- Knowledge base: the fundamental principles of Metal and related frameworks, plus the basic concepts needed to prepare for further learning.
- Metal 3 (very important): a deep understanding of Metal itself, the API used to communicate with the device's GPU to render graphics; this underpins all Metal-related work.
- Compositor Services and ARKit (important): how to display Metal scenes within the Vision device's space and enable augmented reality and hand interaction, essential for interactive, immersive experiences.
- Metal Performance Shaders: optimizing material rendering to improve performance.
- MetalKit: simplifying the tasks that display Metal content onscreen.
- MetalFX: improving rendering efficiency and achieving visually striking effects.

I would appreciate a detailed and comprehensive pathway, including URLs of the relevant documents, to guide my learning. Thank you for your assistance.
Replies: 0 · Boosts: 0 · Views: 518 · Activity: Nov ’24
Adding reference image failed on Vision Pro
I am using ARKit to detect images on Vision Pro. However, I have run into a problem when adding reference images: some of my images sometimes fail to be added. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) At other times they are all added without any problem. I don't know why this happens, and I want them all to be added reliably.
Replies: 0 · Boosts: 0 · Views: 214 · Activity: Mar ’25
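
For reference, a minimal sketch of the loading path involved, assuming an asset catalog resource group named "AR Resources"; images that fail intermittently are often ones with low texture detail, strong symmetry, or an imprecise physical size:

```swift
import ARKit

func runImageTracking() async throws {
    let images = ReferenceImage.loadReferenceImages(inGroupNamed: "AR Resources")
    let provider = ImageTrackingProvider(referenceImages: images)

    let session = ARKitSession()
    try await session.run([provider])
    for await update in provider.anchorUpdates {
        print(update.event, update.anchor.referenceImage.name ?? "unnamed")
    }
}
```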
System issues encountered when developing visionOS programs with Unity
I am using Unity to develop a visionOS program. Pressing the Digital Crown on Vision Pro during use exits the Unity space and destroys the models in that space; the models become disconnected from the SwiftUI interface. After pressing the crown you return to the system home view, and pressing it again returns you to the app, but the Unity space cannot be restored. Calling the dismissWindow method from the SwiftUI interface has no effect, so the interface cannot be dismissed either. Is there any solution?
Replies: 1 · Boosts: 0 · Views: 325 · Activity: Feb ’25
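
One note on the post above: the SwiftUI environment action is spelled dismissWindow, and it must be called from a view inside an active scene. A minimal sketch with an assumed window id:

```swift
import SwiftUI

struct UnityBridgeControls: View {
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Close window") {
            dismissWindow(id: "main") // "main" is a placeholder WindowGroup id
        }
    }
}
```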
xform called "Scene" breaks animations in QuickLook starting with iOS 15
Hello, we discovered that a number of our old animated models were no longer animated on iOS 15 and onwards. After a few days of playing spot-the-difference between .usda files, I noticed that all the broken models had an xform called "Scene". Lo and behold, changing the name of that xform fixed the issue on all the models; even lowercase "scene" makes the animations work again.

Is "Scene" a reserved keyword or something? What other keywords do we need to avoid so we can create more robust USDZ files? I'm surprised this issue isn't more widespread, considering Blender wraps models in a "Scene" node.

At the drive link below you can find two animated cube USDZs whose only difference is the name of one xform. The one with a "Scene" xform is not animated in QuickLook (replicated on an iPhone 13 with iOS 15.2, an iPhone 13 with iOS 18.3, and various devices on BrowserStack, including an iPhone 16 with iOS 18.3).

https://drive.google.com/drive/folders/1dch1WaM9O6mbHy29S6NGWgnSHkZkPiBf?usp=sharing
Replies: 1 · Boosts: 0 · Views: 342 · Activity: Mar ’25
How to programmatically update Model Position Offset of GeometryModifier?
Is it possible to dynamically update the ModelPositionOffset of a GeometryModifier with a depth-map image? In my code I set up the parameter for the "DepthMapTexture" universal input node and tried setting the depth map on a depth TextureResource. I have two DrawableQueues: one for setting InputTexture and one for setting DepthMapTexture. (The code shown only covers setting DepthMapTexture; this is where I define the plane entity, and this is the shader graph.)

What I noticed with the GeometryModifier is that the depth-map image has to have the same dimensions as the input image. When I applied this material to a USDZ file with the image and depth map pre-assigned in RCP, and loaded that entity from code, the depth map was applied correctly. What I am unsure about is whether it is possible to define a model entity in code, apply the ShaderGraphMaterial from RCP, and dynamically update the image used by the GeometryModifier. Maybe I'm missing something when defining the entity, something that allows geometric modifications?
Replies: 1 · Boosts: 0 · Views: 272 · Activity: Mar ’25
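
For the dynamic-update half of the question, ShaderGraphMaterial does expose setParameter for promoted inputs, so updating a code-built ModelEntity should look roughly like the sketch below (the parameter names are taken from the post; everything else is an assumption):

```swift
import RealityKit

func updateTextures(on model: ModelEntity,
                    color: TextureResource,
                    depth: TextureResource) throws {
    guard var material = model.model?.materials.first as? ShaderGraphMaterial else { return }
    // Names must match the promoted inputs in Reality Composer Pro exactly,
    // and the depth map must match the color input's dimensions.
    try material.setParameter(name: "InputTexture", value: .textureResource(color))
    try material.setParameter(name: "DepthMapTexture", value: .textureResource(depth))
    model.model?.materials = [material]   // reassign so the change takes effect
}
```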
Overlaying SwiftUI content with transparency in front of RealityView
Following up on my previous question here: https://developer.apple.com/forums/thread/774262

Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, any content with transparency does not render in front of the RealityView, while opaque views work; placing translucent content such as glassBackgroundEffect() behind the RealityView in a ZStack causes the entire window to flicker.

Additionally, a SwiftUI attachment placed in front of the stereoscopic image plane is invisible when the user looks at it straight on, at 90 degrees; as the viewing angle from the sides increases, the attachment gradually becomes visible again.

Are these behaviors expected? What is a recommended approach to overlaying content in front of a RealityView? Thanks!
Replies: 1 · Boosts: 0 · Views: 374 · Activity: Feb ’25
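
For comparison with the ZStack arrangement described above, one alternative worth testing is attaching the translucent content as an overlay on the RealityView itself; a sketch with placeholder views:

```swift
import SwiftUI
import RealityKit

struct StereoContainer: View {
    var body: some View {
        RealityView { content in
            // stereoscopic image plane set up here
        }
        .overlay(alignment: .bottom) {
            Text("Controls")
                .padding()
                .glassBackgroundEffect()
        }
    }
}
```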
Vision - Time travel door
Hello all, we're building a scene that works like a time-travel door: when the user selects the scene, they pass through the door into the destination scene. The transition in the middle needs to feel natural, and it would be even better if the user could walk through it in an immersive space. There is very little information on this. How can I get started, and is there any material I can refer to? Thanks.
Replies: 2 · Boosts: 0 · Views: 539 · Activity: Dec ’24
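
A "door you pass through" maps naturally onto RealityKit's portal feature on visionOS: a world entity holds the destination scene and renders only through a portal surface. A sketch under that assumption (visionOS 2 additionally lets portals be configured for crossing, which would cover the walk-through case):

```swift
import RealityKit

func makeDoorPortal(destination: Entity) -> (world: Entity, door: Entity) {
    // The destination scene lives in its own "world"; its contents are
    // only visible through portal surfaces that target it.
    let world = Entity()
    world.components.set(WorldComponent())
    world.addChild(destination)

    // A door-sized plane acting as the window into that world.
    let door = ModelEntity(mesh: .generatePlane(width: 1, height: 2),
                           materials: [PortalMaterial()])
    door.components.set(PortalComponent(target: world))
    return (world, door)
}
```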
how to convert mlmodel to reference object?
Hello, I have downloaded and run the sample object-tracking app for visionOS, and now I'm working on tracking my own objects. I have made a model with Create ML using images of my object. However, I cannot see how to convert the Create ML output file (***.mlmodel) into a reference object like the files in the sample project. Is there a tool for converting them? TIA
Replies: 2 · Boosts: 0 · Views: 324 · Activity: Feb ’25
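
One clarification that may resolve the post above: for visionOS object tracking, Create ML's Object Tracking template is trained on a 3D model of the object (not 2D photos) and exports a .referenceobject file directly, so there is no .mlmodel conversion step. Loading one would look roughly like this sketch, with `url` as a placeholder:

```swift
import Foundation
import ARKit

func runObjectTracking(url: URL) async throws {
    // url points at a .referenceobject exported from Create ML.
    let referenceObject = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])

    let session = ARKitSession()
    try await session.run([provider])
    for await update in provider.anchorUpdates {
        print(update.event, update.anchor.id)
    }
}
```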