Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

RealityKit Rotation Origin
I am trying to rotate topEntity around the origin point of shapeEntity, but have not found a way to do so. topEntity is an entity group that also contains shapeEntity, so I cannot make topEntity a child of shapeEntity. In Blender I set the correct origin for topEntity, but when I import the USD model into Reality Composer Pro the origin point is not preserved, and there is no way to set the origin in Reality Composer Pro.

DragGesture()
    .targetedToEntity(where: .has(CustomComponent.self))
    .onChanged({ value in
        let rotation = -Float(value.translation.height)
        let clampedRotation = min(max(rotation, 0), 45)
        if value.entity.name == "grab" {
            if let topEntity = selectedEntity.findEntity(named: "top"),
               let shapeEntity = selectedEntity.findEntity(named: "Shape_1") {
                topEntity.transform.rotation = simd_quatf(
                    angle: clampedRotation * .pi / 180,
                    axis: SIMD3(x: 0, y: 0, z: 1)
                )
            }
        }
    })
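One way around the missing origin, sketched below, is to rotate topEntity about an explicit pivot point instead of relying on the imported origin: capture the entity's transform when the gesture begins, then adjust the translation so the chosen pivot stays fixed while the rotation is applied. This is only a sketch; it assumes the pivot is expressed in topEntity's parent space, e.g. shapeEntity.position(relativeTo: topEntity.parent), and that baseTransform was stored when the drag started so repeated .onChanged calls don't accumulate.

import RealityKit
import simd

// Sketch: rotate `entity` about `pivot`, where both the pivot and the
// entity's transform are expressed in the entity's parent space.
// `baseTransform` is the transform captured when the gesture started.
func rotate(_ entity: Entity,
            around pivot: SIMD3<Float>,
            by rotation: simd_quatf,
            from baseTransform: Transform) {
    // Rotate the offset from the pivot to the entity, then re-apply it.
    let rotatedOffset = rotation.act(baseTransform.translation - pivot)
    entity.transform.translation = pivot + rotatedOffset
    entity.transform.rotation = rotation * baseTransform.rotation
}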
Replies: 1 · Boosts: 0 · Views: 85 · Activity: 1w
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window-jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
Replies: 2 · Boosts: 0 · Views: 180 · Activity: 1w
Game controller issues in visionOS
Hi, I'm building a windowed game for visionOS that requires a gamepad, and I'm using a PS5 controller for this. However, I found a few problems: When the user looks at the window's close button and presses X on the gamepad, the window closes. This can happen accidentally while the user is playing. When the PS button (the home button) is pressed, I want my app to handle it, e.g. take the user to the title screen, but visionOS always captures this input and opens the Home View (App Library). How can I avoid these two issues? Or, in general, how can I make gamepad input available only to my app and not to the visionOS system?
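For what it's worth, here is a minimal sketch of observing the same buttons inside the app with the GameController framework. It assumes shouldMonitorBackgroundEvents and buttonHome behave on visionOS as they do on iOS/tvOS, which may not hold; the system can still reserve the home button and the look-and-press close gesture for itself, so this shows where the app can see the presses, not how to suppress the system behavior.

import GameController

// Sketch: wire up gamepad buttons inside the app.
func configureGamepadHandling() {
    // Opt in to receiving controller events even when not frontmost
    // (iOS/macOS semantics; relevance on visionOS is an assumption).
    GCController.shouldMonitorBackgroundEvents = true

    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { notification in
        guard let controller = notification.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }

        // buttonHome is optional; on some platforms the system consumes
        // the press before the app ever sees it.
        gamepad.buttonHome?.pressedChangedHandler = { _, _, pressed in
            if pressed {
                // e.g. return to the title screen
            }
        }

        gamepad.buttonX.pressedChangedHandler = { _, _, pressed in
            if pressed {
                // handle X inside the game
            }
        }
    }
}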
Replies: 1 · Boosts: 0 · Views: 186 · Activity: 1w
Parent is changed during gesture interaction, producing incorrect relative translation values.
I'm having trouble re-setting the position of a child entity on app reload, even though it appears that I am correctly obtaining and persisting the translation values after a drag gesture. The problem occurs when I drag a child element to a new location (and persist those new values), then reload the app to force re-positioning from the persisted translation values. I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering if this is related to the problem, or if the parent change is normal during re-rendering and unrelated to my problem. My thought process is: since we care about relative translation values when persisting, if the parent relationship changes just before persistence, are we then persisting and applying the wrong values? (See the sketch after this post.)

Project Link: Private

STEPS TO REPRODUCE
1. Run the app.
2. Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (in order to better visualize the problem).
3. Tap the button in the timeline to create a new project.
4. Drag the only visible element from the left panel onto the timeline (the element is labeled f_works_entity_1). A green 3D model should now be added to the stage.
5. Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
6. Re-run the app to see that the green element is offset to a new location, not the last dragged location.

To reset and try again, delete the project canvas next to the project name (trash button), then restart the app.

Areas of concern: RealityKitView is the only file you may need. Line 119 is where new child entities are created; lines 185-219 are where persisted values are saved and applied. You can also search for FIXME in the file to see areas of concern. Tip: I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent, including IDs.
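A sketch of one way to sidestep the parent question: persist the position relative to a frame that is known to be stable across reloads (a long-lived ancestor, or nil for scene space) rather than relative to whatever the current parent happens to be. stageRoot here is a hypothetical long-lived ancestor entity, not something from the project.

import RealityKit

// Sketch: read and restore positions in a stable reference frame so the
// persisted value does not depend on any parent reshuffling that happens
// during interaction or re-rendering.
func savedPosition(of entity: Entity, relativeTo stageRoot: Entity?) -> SIMD3<Float> {
    // Passing nil reads the position in scene (world) space.
    entity.position(relativeTo: stageRoot)
}

func restore(_ entity: Entity, to saved: SIMD3<Float>, relativeTo stageRoot: Entity?) {
    entity.setPosition(saved, relativeTo: stageRoot)
}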
Replies: 1 · Boosts: 0 · Views: 239 · Activity: 1w
Access to raw LiDAR point cloud
Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines the LiDAR measurements with visual data? In low-light environments the LiDAR scanner should still work and provide depth info, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using:

guard let frame = arView.session.currentFrame,
      let depthData = frame.sceneDepth?.depthMap else {
    print("Depth data is unavailable.")
    return
}

but this is the depth data after sensor fusion occurs, and it fails in low-light conditions.
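As far as I know, the pre-fusion LiDAR returns are not exposed by the public API, so the closest workaround within sceneDepth is to inspect the per-pixel confidence map and discard low-confidence samples. A sketch of that, assuming the usual one-component 8-bit layout of ARDepthData.confidenceMap:

import ARKit

// Sketch: count high-confidence depth pixels in the current frame's
// sceneDepth. This does not recover raw LiDAR returns; it only filters the
// fused depth by the confidence ARKit reports alongside it.
func logDepthConfidence(for frame: ARFrame) {
    guard let sceneDepth = frame.sceneDepth,
          let confidenceMap = sceneDepth.confidenceMap else { return }

    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(confidenceMap) else { return }
    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(confidenceMap)

    var highCount = 0
    for y in 0..<height {
        // Each confidence value is a UInt8 matching ARConfidenceLevel raw values.
        let row = base.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width where row[x] == UInt8(ARConfidenceLevel.high.rawValue) {
            highCount += 1
        }
    }
    print("High-confidence depth pixels: \(highCount) of \(width * height)")
}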
Replies: 4 · Boosts: 0 · Views: 233 · Activity: 1w
Are there other ways to train the model? Training the visionOS tracking model using Create ML on the Mac takes too much time each time.
I am developing for Apple Vision Pro to implement object-tracking functionality, but each model needs to go through Create ML for training, and the training time is very long. Are there other ways to shorten the training time while still producing reference files in the same format? Additionally, can the delay in object tracking be further optimized? Although the refresh rate has been improved, there is still a noticeable delay.
Replies: 1 · Boosts: 0 · Views: 264 · Activity: 1w
Rendering bug when layering transparent textures front and back
If I put an alpha image texture on a model created in Blender and run it in Reality Composer Pro or on visionOS, the front-to-back rendering caused by the alpha produces unintended results. Details are below. I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro. When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The draw order of the images becomes incorrect
Best regards.
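Not a confirmed fix, but one thing worth checking is the blend mode RealityKit assigns to the imported materials: alpha-blended surfaces are drawn in a sort-dependent order, so overlapping transparent geometry can misorder, while cut-out style transparency (texels either fully opaque or fully transparent) generally sorts more reliably. A sketch of explicitly setting the blend mode on a loaded model's materials, as an experiment rather than a guaranteed solution:

import RealityKit

// Sketch: force an explicit transparent blend mode on every
// PhysicallyBasedMaterial of an imported model.
func applyTransparentBlending(to model: ModelEntity, opacity: Float = 1.0) {
    guard var materials = model.model?.materials else { return }
    for index in materials.indices {
        if var pbr = materials[index] as? PhysicallyBasedMaterial {
            pbr.blending = .transparent(opacity: .init(floatLiteral: opacity))
            materials[index] = pbr
        }
    }
    model.model?.materials = materials
}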
Replies: 1 · Boosts: 0 · Views: 201 · Activity: 1w
Reality Composer Pro timeline does not seem to work on iPhone 12 and 11
I created a project using Reality Composer Pro. When I export it to a .usdz file, it works well on iPhone 13, 14, 15, and 16, but not on iPhone 12 and 11. In the timeline, I use an "on added to scene" behavior to activate the intro animation and loop the background audio, but it does not work on older devices like iPhone 12 and 11. Also, none of the interactive taps/touch points/audio seem to work either. iPhone 13, 14, 15, and 16 are on iOS 18.1; iPhone 11 and 12 are on iOS 17.6.1. Here is a sample .usdz file exported from Reality Composer Pro 2.0 that shows the problem described above: https://drive.google.com/file/d/1sHZn9JABTswLq2flYjToTbWDuE5T7eNw/view?usp=sharing
Replies: 0 · Boosts: 0 · Views: 90 · Activity: 1w
Distance calculation: I want to implement the same distance-calculation functionality from Unity using SwiftUI on visionOS
I use this piece of code in Unity to measure how far my model penetrates into another model. I have set collision markers at the tip and the end of the model and performed raycasting, but Unity currently does not support object tracking on visionOS, so I plan to use SwiftUI for native development. In Reality Composer Pro, I haven't seen a collision-editing feature like Unity's; I can only set the size of the collision body but cannot manually adjust or visualize its shape and size. I want to achieve similar functionality in SwiftUI: to calculate and display the distance that my model A (like a needle or ruler) penetrates into another model or into the interior of a physical object. Is there similar functionality available, or another way to code this?

void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);

    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
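RealityKit's scene-level raycast can express the same idea. The sketch below assumes probeBase and probeTip are marker entities on the needle, that the organ model carries a CollisionComponent, and that it has been assigned to a dedicated CollisionGroup (the organMask parameter is an assumption of this sketch, not an existing value).

import RealityKit
import simd

// Sketch: cast from the probe base to the probe tip and measure how much of
// the probe lies past the first hit on the organ's collision shape.
func lengthInsideOrgan(probeBase: Entity,
                       probeTip: Entity,
                       scene: RealityKit.Scene,
                       organMask: CollisionGroup) -> Float {
    // World-space positions of the two markers.
    let start = probeBase.position(relativeTo: nil)
    let end = probeTip.position(relativeTo: nil)
    let probeLength = simd_length(end - start)

    let hits = scene.raycast(from: start,
                             to: end,
                             query: .nearest,
                             mask: organMask,
                             relativeTo: nil)

    guard let first = hits.first else { return 0 }
    // Length of the probe beyond the entry point into the organ.
    return probeLength - first.distance
}

If Reality Composer Pro's collision editing is too coarse, the collision shape can also be built in code, e.g. from the organ's mesh with ShapeResource.generateConvex(from:).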
Replies: 1 · Boosts: 0 · Views: 254 · Activity: 1w
Regarding real-time object tracking and real-time image recognition
We are using real-time object tracking, and with enterprise permissions we can raise the rate to 30 Hz, but there are still noticeable delays. On one hand, we want to know why this delay occurs; is it due to performance considerations? We found that the delay in hand tracking is actually very low. On the other hand, we considered that it may be due to the complexity of 3D objects, so we looked at image tracking instead. However, we found that the delays in image tracking and QR-code tracking are even worse, and we hope this can be optimized. Currently, the frame rate for recognizing images for tracking seems to be about one frame per second, and we hope to increase it, because object recognition and tracking can be very smooth on other Apple platforms, such as iOS. Additionally, could interfaces for depth sensing be considered so that we can obtain depth data? We want to know what accuracy Vision Pro can achieve in measuring the physical world, as well as the accuracy of rendering on the screen, and whether this is related to hardware sensors such as the LiDAR scanner. Also, what accuracy can we achieve when tracking the distance an object has moved?
Replies: 1 · Boosts: 0 · Views: 164 · Activity: 1w
ARKit AnchorUpdate<ImageAnchor>.event Behavior Changes in visionOS 2.1
In the visionOS beta, when using ARKit for image detection, the initial detection of an image delivers an AnchorUpdate with the event .added, and subsequent detections of the same image are delivered as .updated. However, after toggling the immersive space, the same image was detected with an .added event again. After updating to visionOS 2.1, the first detection is still .added, and subsequent detections of the same image remain .updated, even after toggling the immersive space. Could this be due to a change in the processing flow?
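A sketch of consuming the updates in a way that tolerates either behavior: track anchor IDs that have already been seen, so a re-delivered .added after reopening the immersive space doesn't create duplicate content. ImageTrackingProvider and the .added/.updated/.removed events are the visionOS ARKit API; the de-duplication set is just an illustration.

import ARKit

// Sketch: treat .added and .updated identically for anchors already seen.
func monitor(_ provider: ImageTrackingProvider) async {
    var known: Set<UUID> = []
    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        switch update.event {
        case .added, .updated:
            if known.insert(anchor.id).inserted {
                // First sighting: create the entity/content for this image anchor.
            } else {
                // Already known (even if re-reported as .added): just refresh its transform.
            }
        case .removed:
            known.remove(anchor.id)
        }
    }
}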
Replies: 2 · Boosts: 0 · Views: 176 · Activity: 1w
Attachment always user facing
Hello, is there a way to have the attachments of a RealityView always face the user? For example, in a visionOS app, in an immersive space, we have an attachment. When the user walks around the attachment, or rotates the parent entity, we would like the attachment to automatically rotate to face the user. How do we do this? I expected this to be a trivial feature to implement, since I thought I remembered seeing it as a built-in/opt-in option for attachments, but I cannot find that feature. Any and all recommendations are appreciated, thanks.
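If the app can target visionOS 2, RealityKit's BillboardComponent is the built-in way to keep an entity oriented toward the viewer (assuming it is available to your deployment target); attaching it to the attachment entity should be enough. A minimal sketch:

import SwiftUI
import RealityKit

// Sketch: a RealityView whose attachment entity carries a BillboardComponent
// so it keeps turning to face the user as they move around it.
struct BillboardAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let label = attachments.entity(for: "label") {
                label.components.set(BillboardComponent())
                content.add(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Always facing you")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}

On visionOS 1, the usual fallback is a custom System that re-orients the entity toward the device pose obtained from ARKit's WorldTrackingProvider each update.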
Replies: 2 · Boosts: 0 · Views: 168 · Activity: 1w
Access main camera on Apple Vision Pro
Since visionOS 2.0 we can access the Apple Vision Pro's main camera, but only with an Enterprise account, as it is an enterprise-only API. I have a normal developer account, and I want to use the main camera to build a video-call feature in my app using the AVP's main camera. Is it possible to do this with a developer account only? Currently, with that account I am not able to create the required entitlement, because the option is not available.
Replies: 2 · Boosts: 0 · Views: 199 · Activity: 1w
Eye Difference in Object Tracking
Hi all, I am having trouble debugging an error where the wireframe object-entity representation from the object-tracking demo "Explore object tracking for visionOS" appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing world coordinates, but this moves the object in both the left and right eyes. Could this be due to the new visionOS beta update (2.0 → 2.2)? I am currently using visionOS 2.2. Thanks!
Replies: 1 · Boosts: 0 · Views: 162 · Activity: 2w
How to insert modified video frames into the system camera feed to achieve an AI erase effect?
By applying for the enterprise API, we can obtain the video frames captured by the Vision Pro's camera, and we then process those frames to remove a certain object. But we have not found a way to insert the processed video frames back into the data source captured by the system camera. So I would like to ask: is there an API that can insert processed video frames into the original data and present them to the user? The desired effect is similar to turning the dial on the right side of the Vision Pro, which lets the physical world and digital space blend seamlessly.

STEPS TO REPRODUCE
1. Obtain video frames.
2. Process the obtained video frames.
3. Insert the processed video frames into the visionOS system camera feed.

System: visionOS 2.0
APIs used: Enterprise APIs, main camera access permissions
Replies: 1 · Boosts: 0 · Views: 214 · Activity: 2w
Custom 3D Window Using RealityView
I have a RealityView displaying a Reality Composer Pro scene in a window. Things are generally working fine, but the content appears in front of, and blocks, the visionOS window rather than being contained inside it. Do I need to switch to a volumetric window for this to work? My scene simply contains a flat display that renders 3D content (it has a material that sends different imagery to each eye).
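A volumetric window is the usual way to give RealityView content real depth bounds so it reads as contained rather than floating in front of the 2D window. A sketch, where ViewerScene is a hypothetical view standing in for the existing RealityView:

import SwiftUI
import RealityKit

// Sketch: host the RealityView inside a volumetric window so the 3D content
// is bounded by the window's depth instead of poking out of a flat plane.
struct ViewerScene: View {
    var body: some View {
        RealityView { content in
            // Load and add the Reality Composer Pro scene here.
        }
    }
}

@main
struct ViewerApp: App {
    var body: some SwiftUI.Scene {
        WindowGroup(id: "viewer") {
            ViewerScene()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.4, depth: 0.4, in: .meters)
    }
}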
Replies: 3 · Boosts: 0 · Views: 210 · Activity: 2w