Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic


Access main camera on Apple Vision Pro
Since visionOS 2.0 we can access the Apple Vision Pro main camera, but only with an Enterprise account, as it is an enterprise-only API. I have a regular Developer account and want to use the main camera to build a video-call feature in my app. Is it possible to do this with a Developer account only? Currently, with that account, I am not able to create the entitlement certificate because the option is not available.
2 replies · 0 boosts · 571 views · Nov ’24
Eye Difference in Object Tracking
Hi all, I am having trouble debugging an error where the wireframe object entity representation from the object tracking demo "Explore object tracking for visionOS" appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing world coordinates, but this moves the object in both the left and the right eye. Could this be due to the new visionOS beta update (2.0 --> 2.2)? I am currently using visionOS 2.2. Thanks!
1 reply · 0 boosts · 375 views · Nov ’24
Attachment always user facing
Hello, is there a way to have the attachments of a RealityView always face the user? For example, in a visionOS app, in an immersive space, we have an attachment. When the user either walks around the attachment or rotates the parent entity, we would like the attachment to automatically rotate to face the user. How do we do this? I anticipated this to be a trivial feature to implement, since I thought I remembered seeing it as a built-in/opt-in option for attachments, but I cannot find that feature. Any and all recommendations are appreciated, thanks.
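For reference, a minimal sketch of one way to approach this: a custom RealityKit system that turns tagged entities toward the user's head position each frame. It assumes a running ARKitSession with a WorldTrackingProvider; the static worldTracking reference and the component/system names are placeholders, not a built-in API.

import ARKit
import QuartzCore
import RealityKit

// Hypothetical billboard behavior: tag an attachment's entity with
// BillboardComponent and register the component and system at app launch.
struct BillboardComponent: Component {}

struct BillboardSystem: System {
    static let query = EntityQuery(where: .has(BillboardComponent.self))

    // Placeholder: share the running WorldTrackingProvider however your app prefers.
    static var worldTracking: WorldTrackingProvider?

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        guard let anchor = Self.worldTracking?
            .queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
        let m = anchor.originFromAnchorTransform
        let headPosition = SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)

        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            // Orient the entity toward the head position; depending on which
            // axis your attachment treats as "front", an extra 180° rotation
            // around Y may be needed afterwards.
            entity.look(at: headPosition,
                        from: entity.position(relativeTo: nil),
                        relativeTo: nil)
        }
    }
}

The component and system would need to be registered (BillboardComponent.registerComponent(), BillboardSystem.registerSystem()) before the scene loads.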
2 replies · 0 boosts · 509 views · Nov ’24
Reality Composer Pro Timeline does not seem to work on iPhone 12 and 11
I created a project using Reality Composer Pro. When I export it to a .usdz file, it works well on iPhone 13, 14, 15, and 16, but not on iPhone 12 and 11. In the timeline, I use an "on added to scene" behavior to start the intro animation and loop the background audio, but it does not work on older devices like the iPhone 12 and 11. All interactive taps/touch points/audio don't seem to work either. The iPhone 13, 14, 15, and 16 are on iOS 18.1; the iPhone 11 and 12 are on iOS 17.6.1. Here is a sample .usdz file exported from Reality Composer Pro 2.0 that shows the problem above: https://drive.google.com/file/d/1sHZn9JABTswLq2flYjToTbWDuE5T7eNw/view?usp=sharing
0 replies · 1 boost · 523 views · Nov ’24
Distance calculation: I want to implement the same functionality as this Unity distance-calculation code using SwiftUI on visionOS
I use this piece of code in Unity to get the distance by which my model penetrates another model. I have set collision markers at the tip and base of the model and performed raycasting, but Unity currently does not support object tracking on visionOS, so I plan to use SwiftUI for native development. In Reality Composer Pro, I haven't seen a collision-editing feature like Unity's; I can only set the size of the collision body but cannot manually adjust or visualize its shape and size. I want to achieve similar functionality with SwiftUI: to calculate and display how far my model A (such as a needle or ruler) penetrates into another model or into a physical object's interior. Is there similar functionality available, or another coding approach to achieve this?

void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);
    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
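A minimal Swift/RealityKit sketch of the equivalent idea, assuming probeBase and probeTip are entities at the two ends of the probe and the organ model carries a CollisionComponent; the function name and setup are illustrative, not an established API.

import RealityKit
import simd

// Cast a ray from the probe base toward the tip and report how much of the
// probe lies beyond the first hit, i.e. inside the organ.
func lengthInsideOrgan(probeBase: Entity, probeTip: Entity, scene: RealityKit.Scene) -> Float {
    let basePosition = probeBase.position(relativeTo: nil)
    let tipPosition = probeTip.position(relativeTo: nil)
    let probeVector = tipPosition - basePosition
    let probeLength = simd_length(probeVector)
    guard probeLength > 0 else { return 0 }

    // Scene.raycast hits entities that have collision shapes.
    let hits = scene.raycast(origin: basePosition,
                             direction: probeVector / probeLength,
                             length: probeLength,
                             query: .nearest,
                             relativeTo: nil)
    guard let firstHit = hits.first else { return 0 }
    return probeLength - firstHit.distance
}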
1 reply · 0 boosts · 641 views · Nov ’24
Are there other ways to train the model? Training the visionOS tracking model using Create ML on the Mac takes a long time each time.
I am developing for Apple Vision Pro and implementing object tracking functionality, but each model needs to go through Create ML for training, and the training time is very long. Are there other ways to shorten the training time while producing reference files in the same format? Additionally, can the delay in object tracking be further optimized? Although the refresh rate has been optimized, there is still a noticeable delay.
1 reply · 0 boosts · 757 views · Nov ’24
AVAudioSession gets interrupted when closing a window
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again. This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
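For context, a minimal sketch of the kind of restart logic being attempted here (this only reproduces the attempt described above; it is not presented as a fix):

import AVFoundation

// Try to reactivate the shared session and restart the engine after the interruption.
func restartAudio(engine: AVAudioEngine) {
    do {
        try AVAudioSession.sharedInstance().setActive(true)
        if !engine.isRunning {
            try engine.start()
        }
    } catch {
        print("Failed to restart audio after interruption: \(error)")
    }
}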
3 replies · 1 boost · 825 views · Nov ’24
Creating and Viewing Immersive Video Locally on Vision Pro
We would like to create an Immersive video and store the video file locally on the Vision Pro for viewing. By Immersive video, I mean the kind of video that is played at the end of the Vision Pro demo at the Apple Store (LeBron's dunk, Curry's 3-point shot, the tightrope walk, etc.). It is unclear whether a way is currently provided to view Immersive video locally. I can find some information about Spatial video on the developer site, but I can't find any information about Immersive video. My understanding is:
Spatial video: a video window appears in space and plays video with depth. Up to 4K side-by-side video can be converted to MV-HEVC format using Xcode and played back in the Photos app.
Immersive video: 180° VR video, but I'm not sure how it is created. Similar to Spatial video, I converted a side-by-side 180° VR video to MV-HEVC format using Xcode, but it could not be played back in the Photos app as expected.
The Vision Pro's Photos app features an Immersive button during video playback, but this appears to zoom Spatial video out to the full field of view, which seems different from Immersive video. The demo video provided by Apple is streamed from Apple TV, and there are no local files available. We are currently considering creating an app that displays different videos to each eye, but we would prefer not to go this route due to licensing and distribution issues.
4 replies · 0 boosts · 2.2k views · Nov ’24
ARKit AnchorUpdate<ImageAnchor>.event Behavior Changes in visionOS 2.1
In an earlier visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate event is .added, and subsequent detections of the same image are reported as .updated. However, after toggling the immersive space, the same image is detected with the .added event again. After updating to visionOS 2.1, the first detection is still .added, but subsequent detections of the same image remain .updated even after toggling the immersive space. Could this be due to a change in the processing flow?
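For reference, a small sketch of how these events are typically observed, assuming an already-running ARKitSession with an ImageTrackingProvider:

import ARKit

// Log each image-anchor event so the .added/.updated behavior across
// immersive-space toggles can be compared between visionOS versions.
func logImageAnchorEvents(_ imageTracking: ImageTrackingProvider) async {
    for await update in imageTracking.anchorUpdates {
        switch update.event {
        case .added:
            print("added:", update.anchor.id)
        case .updated:
            print("updated:", update.anchor.id)
        case .removed:
            print("removed:", update.anchor.id)
        }
    }
}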
2 replies · 0 boosts · 430 views · Nov ’24
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
0 replies · 0 boosts · 402 views · Nov ’24
Parent is changed during gesture interaction producing incorrect relative translation values.
I'm having trouble re-setting the position of a child entity during app reload, even though it appears that I am correctly obtaining and persisting the correct translation values after a drag gesture. The problem occurs when I drag a child element to a new location (and persist those new values), then reload the app to force re-positioning from the persisted translation values. I notice that the parent relationship changes during interaction (tap or drag), which can be seen in the debug statements. I'm wondering if this is related to the problem, or if the parent change is normal during re-rendering and unrelated to my problem. My thought process: since we care about relative translation values when persisting, if the parent relationship changes just before persistence, are we persisting and setting the wrong values? Project link: private.
STEPS TO REPRODUCE
1. Run the app.
2. Drag the pre-loaded stage down the Y axis so that the floor of the stage is more visible to your eye (to better visualize the problem).
3. Tap the button in the timeline to create a new project.
4. Drag the only visible element from the left panel onto the timeline (the element is labeled f_works_entity_1). There should now be a green 3D model added to the stage.
5. Drag this green element to a new location (be careful to hover over the green element so that you don't inadvertently drag the stage).
6. Re-run the app to see that the green element is offset to a new location, not the last dragged location.
7. To reset and try again, delete the project canvas next to the project name (trash button), then restart the app.
Areas of concern: RealityKitView is the only file you may need. Line 119 is where we create new child entities. Lines 185-219 are where we persist and apply persisted values. You can also search FIXME in the file to see areas of concern. Tip: I have a tap gesture on each entity that produces a debug statement with info about the entity and its parent, including IDs.
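Regarding the relative-translation concern above, a small sketch of one way to sidestep a changing parent: persist and restore the position relative to a stable ancestor entity rather than the current parent. The stageEntity name is hypothetical.

import RealityKit

// position(relativeTo:) / setPosition(_:relativeTo:) convert through the whole
// scene graph, so the persisted values stay meaningful even if the entity's
// immediate parent changes during gesture handling.
func persistablePosition(of child: Entity, relativeTo stageEntity: Entity) -> SIMD3<Float> {
    child.position(relativeTo: stageEntity)
}

func restorePosition(of child: Entity, to position: SIMD3<Float>, relativeTo stageEntity: Entity) {
    child.setPosition(position, relativeTo: stageEntity)
}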
1 reply · 0 boosts · 501 views · Nov ’24
Rendering bug when layering transparent textures front and back
If I apply an image texture with alpha to a model created in Blender and view it in Reality Composer Pro or on visionOS, the front-to-back rendering of the transparent areas is not what I intended. Details are below. I exported a USDC file of a Blender-created cylindrical object with a PNG texture (with alpha) applied to the inside, and then imported it into Reality Composer Pro. When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent.
・The transparent areas become transparent together with the image behind them.
・The order of the images becomes incorrect.
Best regards.
1 reply · 0 boosts · 704 views · Nov ’24
ObjectCapture from ARKit
We are currently using ObjectCapture from ARKit, and we would like to fix the exposure time, white balance, and ISO. How can we do this? Additionally, we'd like to obtain the following information from ARKit: the white balance parameters (in case we cannot fix them) and the color correction matrices.
0 replies · 0 boosts · 513 views · Nov ’24
Regarding real-time object tracking and real-time image recognition
We are using real-time object tracking, and with enterprise permissions we can raise the update rate to 30 Hz, but there are still noticeable delays. On one hand, we want to know why this delay occurs; is it due to performance considerations? We found that the delay in hand tracking is actually very low. On the other hand, we thought it might be due to the complexity of 3D objects, so we considered using image tracking instead. However, we found that the delays in image tracking and QR code tracking are even worse, and we hope to optimize this. Currently, the frame rate for recognizing images for tracking seems to be about one frame per second, and we would like to increase it, since object recognition and tracking are very smooth on other Apple platforms such as iOS. Additionally, could appropriate interfaces for depth sensing be considered so we can obtain depth data? We want to know what accuracy Vision Pro can achieve when measuring the physical world, what accuracy it achieves when rendering to the display, and whether this is related to hardware such as the LiDAR sensor. Also, what accuracy can we achieve when tracking how far an object has moved?
1 reply · 0 boosts · 450 views · Nov ’24
Why does rendering to a higher-resolution RenderTarget and then downsampling to the Drawable cause image distortion?
Rendering the scene onto a RenderTarget with twice the resolution of the Drawable, and then downsampling to the Drawable, causes the image to appear distorted. Modifications were made to the Xcode visionOS template. Foveation should be enabled by default:

struct ContentStageConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities, configuration: inout LayerRenderer.Configuration) {
        configuration.depthFormat = .depth32Float
        configuration.colorFormat = .bgra8Unorm_srgb

        let foveationEnabled = capabilities.supportsFoveation
        configuration.isFoveationEnabled = foveationEnabled

        let options: LayerRenderer.Capabilities.SupportedLayoutsOptions = foveationEnabled ? [.foveationEnabled] : []
        let supportedLayouts = capabilities.supportedLayouts(options: options)
        configuration.layout = supportedLayouts.contains(.layered) ? .layered : .dedicated
    }
}

To avoid errors, rasterizationRateMap is not set:

var renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = self.renderTarget.currentFrameColor
renderPassDescriptor.renderTargetWidth = self.renderTarget.currentFrameColor.width
renderPassDescriptor.renderTargetHeight = self.renderTarget.currentFrameColor.height
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].storeAction = .store
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 0.0)
renderPassDescriptor.depthAttachment.texture = self.renderTarget.currentFrameDepth
renderPassDescriptor.depthAttachment.loadAction = .clear
renderPassDescriptor.depthAttachment.storeAction = .store
renderPassDescriptor.depthAttachment.clearDepth = 0.0
//renderPassDescriptor.rasterizationRateMap = drawable.rasterizationRateMaps.first
if layerRenderer.configuration.layout == .layered {
    renderPassDescriptor.renderTargetArrayLength = drawable.views.count
}

The rendering process is as follows:
2 replies · 0 boosts · 581 views · Nov ’24
Limited Window Manipulation in Mixed Immersive Style in visionOS
In visionOS 2.1 and 2.2, I'm encountering a significant limitation when using the .immersionStyle(selection: .constant(.mixed), in: .mixed) mode, specifically in the mixed immersive style. Here's a breakdown of the behavior: In full immersion mode (.immersionStyle(selection: .constant(.full), in: .full)), users can interact with and manipulate system windows while inside a 3D model, allowing typical interactions like moving windows, pinching, or activating UI switches. However, in mixed immersive mode, using the exact same layout "inside" a 3D model (which doesn't visually obstruct the window), users are unable to interact with window content or move the window. Basic interactions like pinching or toggling switches require users to physically touch these elements in AR space, which is inconsistent with the behavior in full immersion. From a usability perspective, this restriction seems unnecessary, as the software should ideally allow similar interaction capabilities across both immersive styles. The expected behavior is to enable window manipulation within a 3D model in mixed mode, matching the functionality observed in full immersion. The scene in question is a house in which the user is placed during the immersion, which is why I refer to the user being "inside" the scene. Has anyone else experienced this or found a workaround?
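For context, a minimal sketch of the scene setup being described, with mixed immersion only; the window contents and scene loading are placeholders:

import RealityKit
import SwiftUI

@main
struct HouseApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Controls window")
        }

        ImmersiveSpace(id: "House") {
            RealityView { content in
                // Load and add the house scene here so the user stands inside it.
            }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}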
2 replies · 1 boost · 432 views · Nov ’24
RealityKit Rotation Origin
I am trying to rotate topEntity around the origin point of shapeEntity, but have not found a way to do so. topEntity is an entity group that also contains shapeEntity, so I cannot set topEntity as a child of shapeEntity. In Blender I set the correct origin for topEntity, but when I import the USD model into Reality Composer Pro it does not keep that origin point, and there is no way to set the origin in Reality Composer Pro.

DragGesture()
    .targetedToEntity(where: .has(CustomComponent.self))
    .onChanged({ value in
        let rotation = -Float(value.translation.height)
        let clampedRotation = min(max(rotation, 0), 45)
        if value.entity.name == "grab" {
            if let topEntity = selectedEntity.findEntity(named: "top"),
               let shapeEntity = selectedEntity.findEntity(named: "Shape_1") {
                topEntity.transform.rotation = simd_quatf(
                    angle: clampedRotation * .pi / 180,
                    axis: SIMD3(x: 0, y: 0, z: 1)
                )
            }
        }
    })
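A small sketch of one way to rotate an entity about another entity's origin without re-parenting: rotate its world position about the pivot and apply the same rotation to its orientation. Names are illustrative.

import RealityKit
import simd

// Rotate `entity` about the world-space origin of `pivotEntity` by `rotation`.
func rotate(_ entity: Entity, around pivotEntity: Entity, by rotation: simd_quatf) {
    let pivot = pivotEntity.position(relativeTo: nil)
    let worldPosition = entity.position(relativeTo: nil)

    // Rotate the offset from the pivot, then apply the same rotation to the orientation.
    let rotatedOffset = rotation.act(worldPosition - pivot)
    entity.setPosition(pivot + rotatedOffset, relativeTo: nil)
    entity.setOrientation(rotation * entity.orientation(relativeTo: nil), relativeTo: nil)
}

In a drag gesture like the one above, this would be driven by the per-change delta rotation rather than the absolute clamped angle.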
1 reply · 0 boosts · 399 views · Nov ’24
Custom 3D Window Using RealityView
I have a RealityView displaying a Reality Composer Pro scene in a window. Things are generally working fine, but the content appears in front of and blocks the visionOS window, rather than being contained inside it. Do I need to switch to a volumetric window for this to work? My scene simply contains a flat display that renders 3D content (it has a material that sends different imagery to each eye).
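For reference, a minimal sketch of the volumetric-window alternative, which keeps 3D content bounded by the window; the identifier and size below are illustrative:

import RealityKit
import SwiftUI

@main
struct ViewerApp: App {
    var body: some Scene {
        WindowGroup(id: "Viewer") {
            RealityView { content in
                // Load the Reality Composer Pro scene here.
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.5, height: 0.5, depth: 0.5, in: .meters)
    }
}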
3 replies · 0 boosts · 543 views · Nov ’24