Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

How to add visual thickness to a glass background view
Hi guys! In visionOS, when using a ZStack decorated with .glassBackgroundEffect(), you can see the 3D glass background from the front, but when viewed from the side, the view appears to have no thickness. However, I noticed that in an app built by Apple, a glass background view does appear to have thickness when viewed from the side. I tried adding .frame(depth:) to a glass background view, but it renders as two separate layers separated by the depth value. My question is: Is there a view modifier that adds visual thickness to a glass background view, as shown in the picture? Or, if not, how should I write a custom view modifier to achieve this effect? Thanks!
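A minimal sketch of one possible approximation, assuming no built-in modifier exists for this: stack several thin glass layers along the z-axis so the side view reads as a solid edge. The modifier name and layer counts are my own invention, not a system API, and this is untested as a match for Apple's effect.

```swift
import SwiftUI

// Hypothetical modifier that fakes visual thickness by stacking a few
// glass slices along the z-axis. Purely illustrative, not a system API.
struct FakeGlassThickness: ViewModifier {
    var depth: CGFloat = 8   // total visual thickness in points
    var layers: Int = 4      // number of glass slices

    func body(content: Content) -> some View {
        ZStack {
            ForEach(0..<layers, id: \.self) { i in
                RoundedRectangle(cornerRadius: 24)
                    .fill(.clear)
                    .glassBackgroundEffect()
                    // Spread the slices evenly across the desired depth.
                    .offset(z: depth * CGFloat(i) / CGFloat(max(layers - 1, 1)))
            }
            content
                .offset(z: depth) // keep the content on the front face
        }
    }
}
```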
0
0
81
May ’25
ImmersiveSpace with system environments
Hi, when opening an ImmersiveSpace with the .mixed style, is it possible to keep the user's currently selected system immersive environment? Currently, the system immersive environment is dismissed.

```swift
ImmersiveSpace(id: "some id") {
    SomeRealityView()
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
```
1
0
445
Oct ’24
Game controller issues in visionOS
Hi, I'm building a windowed game in visionOS that requires a gamepad, and I'm using a PS5 controller for this. However, I've found two problems: When the user looks at the window's close button and presses X on the gamepad, the window closes. This can happen accidentally while the user is playing. When the PS button (the home button) is pressed, I want my app to handle it, e.g. to take the user to the title screen. But visionOS always captures this input and opens the home screen (App Library). How can I avoid these two issues? Or, more generally, how can I make gamepad input available only to my app and not to the visionOS system?
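A minimal sketch of reading the controller directly with the GameController framework; the home-button handler is hedged, since the system may consume that press before the app ever sees it, and I'm not aware of a supported way to override either behavior:

```swift
import GameController

// Observe controller connections and attach button handlers.
func observeControllers() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { note in
        guard let controller = note.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }

        // Cross (X) button: handle it in-app. This alone does not stop the
        // system from also treating it as a "select" for the gaze target.
        gamepad.buttonA.pressedChangedHandler = { _, _, pressed in
            if pressed { print("Cross pressed") }
        }

        // buttonHome is optional and may never reach the app on visionOS.
        gamepad.buttonHome?.pressedChangedHandler = { _, _, pressed in
            if pressed { print("Home pressed") }
        }
    }
}
```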
1
0
632
Nov ’24
App lost audio spatialization after the visionOS 2 update
Hi, I have a video player app that lost its audio spatialization since the visionOS 2 update. I am using VideoPlayerComponent (https://developer.apple.com/documentation/realitykit/videoplayercomponent) to implement my videos as entities, as I want a custom look and custom controls for my player. In visionOS 1, audio spatialization was automatic: depending on where my video entity was, the app automatically enabled head-tracked audio spatialization. Since visionOS 2, however, I cannot get my video entities to play Spatial Audio. I've looked into DestinationVideo and even set up AVAudioSessionSpatialExperience, but Spatial Audio is still not working. Appreciate any help. Thanks.
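For context, a sketch of what the AVAudioSessionSpatialExperience setup mentioned above might look like; shown only to make the attempted configuration concrete, since whether this restores VideoPlayerComponent spatialization on visionOS 2 is exactly the open question:

```swift
import AVFAudio

// Request a head-tracked spatial experience for the app's audio.
func configureSpatialAudio() {
    do {
        try AVAudioSession.sharedInstance().setIntendedSpatialExperience(
            .headTracked(soundStageSize: .automatic, anchoringStrategy: .automatic)
        )
    } catch {
        print("Failed to set spatial experience: \(error)")
    }
}
```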
1
0
379
Oct ’24
Body segmentation/occlusion on the Apple Vision Pro
Hello, I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of my virtual objects to occlude the virtual objects behind them, similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people Unfortunately, I could not find any documentation about this for visionOS. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools? Thanks! Mehdi
1
0
361
Feb ’25
Animation handling on Scene change
I work on a game where I use timeline animations in Reality Composer Pro. The game runs in an immersive space, but it can be paused, at which point I move the whole level root entity from the immersive space to another RealityView in a WindowGroup. When the player continues, I do exactly the reverse and move the level root from the window group back to my immersive space RealityView. It seems all animations get automatically stopped and restarted when the scene changes. The problem is that an animation does not resume where it stopped; instead it starts again from the entity's current position, and therefore ends up with, for example, a wrong y offset, as visible in the picture. For example, the yellow sphere in the picture loops the following animation: 0 to 100, 100 to -100, -100 to 0. If I pause the game (and thereby switch scenes) while the sphere is at y = 100, the previous animation gets stopped and restarted at y = 100, so it now loops: 100 to 200, 200 to 0, 0 to 100. I already tried all kinds of setups, like: setting the animations relative to root, parent, or local; using behaviors (On Added to Scene, On Notification); and finally even accessing the availableAnimations directly and saving the playback controller of the animation. There I saw that if I manually trigger the following code before switching the scene, everything works as expected:

```swift
Button("Reset") {
    animationPlaybackController.time = 0
    animationPlaybackController.pause()
    animationPlaybackController.stop(blendOutDuration: 0.00001)
}
```

But if I use time = 0 together with .stop() directly, the time = 0 seems to be ignored and I get the same behavior as before, where it stops at a wrong y offset; hence my assumption that animations get stopped and invalidated once they change scenes. I tried to call the code manually from ImmersiveSpace.onDisappear and WindowGroup.onAppear, and via different kinds of SceneEvents subscriptions, but unfortunately nothing worked. So am I doing something wrong in general, or is there a way to fix this?
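A sketch of a workaround under the assumption above (that the playback controller is invalidated by the scene change): capture the playback time before moving the entity, then replay the animation in the new scene and seek back. The function names and the single-animation assumption are illustrative only:

```swift
import Foundation
import RealityKit

var savedTime: TimeInterval = 0

// Before reparenting the level root into the other scene.
func beforeSceneSwitch(_ controller: AnimationPlaybackController) {
    savedTime = controller.time  // remember where the loop was
    controller.stop()
}

// After the entity has been added to the new RealityView.
func afterSceneSwitch(_ entity: Entity) {
    guard let animation = entity.availableAnimations.first else { return }
    let controller = entity.playAnimation(animation.repeat())
    controller.time = savedTime  // resume from the captured offset
}
```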
0
0
55
Apr ’25
Depth matrix accuracy with the iPhone 14 Pro and Lidar
Hello Community, I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro. From this depth map, I generate a point cloud using the intrinsic camera parameters. I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud. For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°. My question is: Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code? To obtain the depth map, I’m using: AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) Thanks in advance for your help!
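Worth checking alongside the sensor-accuracy question: the intrinsics must be scaled from the calibration resolution to the depth map's resolution before unprojecting, which is a common cause of exactly this kind of angular distortion. Below is a sketch of the standard pinhole unprojection, assuming fx, fy, cx, cy come from AVCameraCalibrationData.intrinsicMatrix and have already been rescaled to the depth map size:

```swift
import simd

// Back-project one pixel (u, v) with metric depth into camera space.
func unproject(u: Float, v: Float, depth: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> SIMD3<Float> {
    let x = (u - cx) * depth / fx
    let y = (v - cy) * depth / fy
    return SIMD3<Float>(x, y, depth)
}
```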
0
0
65
Apr ’25
Summon gesture
Can you help me write code that picks an element a bit far from me, brings it near to me, lets me flick it a bit, and then sends it back to its original position when I release it? Thanks a lot, Christophe
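A minimal sketch of such a summon interaction, assuming a RealityView scene where the entity already has InputTargetComponent and CollisionComponent so it can receive gestures; the positions and durations are illustrative, and the "flick" part is left out:

```swift
import SwiftUI
import RealityKit

struct SummonGestureView: View {
    @State private var homeTransform: Transform?

    var body: some View {
        RealityView { content in
            // Scene setup omitted: add an entity with InputTargetComponent
            // and CollisionComponent so the drag gesture can target it.
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if homeTransform == nil {
                        // Remember the world-space starting transform.
                        homeTransform = Transform(matrix: entity.transformMatrix(relativeTo: nil))
                        // Glide the entity to a point near the user (illustrative position).
                        entity.move(to: Transform(translation: [0, 1.2, -0.5]),
                                    relativeTo: nil, duration: 0.3)
                    }
                }
                .onEnded { value in
                    // Send it back home on release.
                    if let home = homeTransform {
                        value.entity.move(to: home, relativeTo: nil, duration: 0.3)
                        homeTransform = nil
                    }
                }
        )
    }
}
```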
1
0
46
Apr ’25
DragGesture that pivots with the user in visionOS
Apple published a set of examples for using system gestures to interact with RealityKit entities. I've been using DragGesture a lot in my apps and noticed an issue when using it in an immersive space. When dragging an entity, if I turn my body to face another direction, the dragged entity does not stay relative to my hand. This can lead to situations where the entity is pulled very close to me, pushed far away, or even ends up behind me. In the examples linked above, there are two versions of how they use drag. handleFixedDrag: This is similar to what I'm doing now. It uses the value from value.gestureValue.translation3D as the basis for the drag. handlePivotDrag: This version aims to solve the problem I described above by using value.inputDevicePose3D as the basis of the gesture. I've tried the example from handlePivotDrag, but it has one limitation: I can move the entity around me as if it were on the inside of an arc or sphere, but I can no longer move the entity closer or further away. It stays within a similar (though not exact) distance relative to me while I drag. Is there a way to combine these concepts? Ideally, I would like to use a gesture that behaves the same way visionOS windows do. When we drag windows, we can move them around relative to ourselves, pull them closer, and push them further away, all while avoiding the issues described above.

Example from handleFixedDrag:

```swift
mutating private func handleFixedDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    if !state.isDragging {
        state.isDragging = true
        state.dragStartPosition = entity.scenePosition
    }

    let translation3D = value.convert(value.gestureValue.translation3D, from: .local, to: .scene)
    let offset = SIMD3<Float>(x: Float(translation3D.x),
                              y: Float(translation3D.y),
                              z: Float(translation3D.z))

    entity.scenePosition = state.dragStartPosition + offset
    if let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
```

Example from handlePivotDrag:

```swift
mutating private func handlePivotDrag(value: EntityTargetValue<DragGesture.Value>) {
    let state = EntityGestureState.shared
    guard let entity = state.targetedEntity else { fatalError("Gesture contained no entity") }

    // The transform that the pivot will be moved to.
    var targetPivotTransform = Transform()

    // Set the target pivot transform depending on the input source.
    if let inputDevicePose = value.inputDevicePose3D {
        // If there is an input device pose, use it for positioning and rotating the pivot.
        targetPivotTransform.scale = .one
        targetPivotTransform.translation = value.convert(inputDevicePose.position, from: .local, to: .scene)
        targetPivotTransform.rotation = value.convert(AffineTransform3D(rotation: inputDevicePose.rotation),
                                                      from: .local, to: .scene).rotation
    } else {
        // If there is not an input device pose, use the location of the drag for positioning the pivot.
        targetPivotTransform.translation = value.convert(value.location3D, from: .local, to: .scene)
    }

    if !state.isDragging {
        // If this drag just started, create the pivot entity.
        let pivotEntity = Entity()
        guard let parent = entity.parent else { fatalError("Non-root entity is missing a parent.") }

        // Add the pivot entity into the scene.
        parent.addChild(pivotEntity)

        // Move the pivot entity to the target transform.
        pivotEntity.move(to: targetPivotTransform, relativeTo: nil)

        // Add the targeted entity as a child of the pivot without changing
        // the targeted entity's world transform.
        pivotEntity.addChild(entity, preservingWorldTransform: true)

        // Store the pivot entity.
        state.pivotEntity = pivotEntity

        // Indicate that a drag has started.
        state.isDragging = true
    } else {
        // If this drag is ongoing, move the pivot entity to the target transform.
        // The animation duration smooths the noise in the target transform across frames.
        state.pivotEntity?.move(to: targetPivotTransform, relativeTo: nil, duration: 0.2)
    }

    if preserveOrientationOnPivotDrag, let initialOrientation = state.initialOrientation {
        state.targetedEntity?.setOrientation(initialOrientation, relativeTo: nil)
    }
}
```
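One possible way to combine the two approaches (my own sketch, not from the Apple sample): keep the pivot entity from handlePivotDrag for direction, and additionally scale the entity's local offset from the pivot so the forward component of the drag pulls the object closer or pushes it further away:

```swift
import RealityKit

// Adjust the entity's distance from the pivot it is parented to.
// `forwardDelta` would come from the change in the drag's z translation
// between updates; positive pushes away, negative pulls closer.
func applyPushPull(entity: Entity, forwardDelta: Float) {
    // While pivot-dragging, the entity is a child of the pivot, so its
    // local position is its offset from the hand.
    let offset = entity.position
    let distance = simd_length(offset)
    guard distance > 0.001 else { return }
    // Clamp so the entity can't be pulled into the user's head.
    let newDistance = max(0.1, distance + forwardDelta)
    entity.position = offset / distance * newDistance
}
```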
1
0
457
Feb ’25
Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration. The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen. In the Unity Editor, this setup works correctly: the RenderTexture updates in real time, and the quad displays the camera's view as expected. However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera; no scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.

Steps I have taken:
- Created a new layer (CameraFeed) and assigned the relevant objects to it.
- Set the secondary camera's culling mask to render only the CameraFeed layer.
- Assigned the RenderTexture as the camera's target texture.
- Applied the RenderTexture to an Unlit/Texture material on a quad.
- Confirmed the camera is active and correctly positioned relative to the object.

From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects; secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.

My questions are:
- Is there any official way to enable multiple-camera rendering of RealityKit-managed objects through PolySpatial?
- Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
- Has anyone found alternative design patterns or methods for this kind of interaction?

Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4. Any insight or suggestions would be greatly appreciated. Thank you.
0
0
71
Apr ’25
How to create a MultiplayerDelegate and use TabletopNetworkSession features?
When building a multiplayer Tabletop game, the documentation shows how to attach a custom TabletopNetworkSessionCoordinator, which could be used in addition to TabletopGame.MultiplayerDelegate. But so far, we have been unable to create these custom coordinators or get a delegate to work. Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we haven't found a way to access and control, for example, the arbiter, seats, and player events, among other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession. Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator? Possibly we are missing something here; any suggestions?
2
0
199
Apr ’25
[Rendering] Always display a Window in front of any Entity in a mixed ImmersiveView?
Currently I am using a mixed-style immersive space to place both my WindowView (plain style) and ImmersiveView content together. The issue is that rendering depth testing may always let the virtual content block my normal WindowView. Is it possible to manually make the windowed view always display in front of my virtual content in mixed-style immersion? (I know about ModelSortGroup, but it doesn't quite fit here.) Or can I dynamically change the .progressive value while the immersive space is open (setting the value to zero means .mixed itself, right)?
0
0
61
Mar ’25
how to achieve "concave in" glass view look?
I have been trying to implement a look where a component appears "pushed in", but I could not find any resources on this effect. The closest I got was a combination of a RoundedRectangle and .glassBackgroundEffect(), but this makes the view look pushed out instead of pushed in. I was wondering if this is achievable at the SwiftUI level, or even at the UIKit level.
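A sketch of one generic SwiftUI technique for a recessed look (not a system-provided effect, and not specific to visionOS glass): inner shadows on the fill, dark on the top edge and light on the bottom, read as a concave surface:

```swift
import SwiftUI

struct ConcaveWell: View {
    var body: some View {
        RoundedRectangle(cornerRadius: 16)
            .fill(
                // Dark inner shadow from the top, light from the bottom,
                // mimicking light falling into a recessed area.
                Color(white: 0.25)
                    .shadow(.inner(color: .black.opacity(0.6), radius: 4, x: 0, y: 3))
                    .shadow(.inner(color: .white.opacity(0.2), radius: 4, x: 0, y: -3))
            )
            .frame(width: 220, height: 80)
    }
}
```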
1
0
87
Apr ’25
Start Metal 3 and visionOS in Compositor Services
I am seeking a comprehensive pathway to learning Metal programming on visionOS. The official documentation's pathway on Metal is insufficient in this regard. I kindly request that someone create a detailed pathway to assist me in this endeavor. The pathway should encompass the following key areas:
- Knowledge base: understand the fundamental principles of Metal and other frameworks, as well as basic concepts, to prepare for future learning.
- Metal 3 (very important): gain a deep understanding of Metal itself, the language and framework used to communicate with the GPU on the device to render graphics. This knowledge forms the foundation for all Metal-related tasks.
- Compositor Services and ARKit (important): learn how to display Metal scenes within the Vision device's space and enable augmented reality (AR) and hand interaction. This knowledge is essential for creating interactive and immersive experiences.
- Metal Performance Shaders: acquire expertise in optimizing material rendering to enhance performance.
- MetalKit: simplifies the tasks that display your Metal content onscreen.
- MetalFX: develop proficiency in using MetalFX to improve rendering efficiency and achieve visually stunning effects.

I would appreciate it if you could provide me with a detailed and comprehensive pathway, including the URLs of relevant documents, to guide my learning journey. Thank you for your assistance.
0
0
518
Nov ’24
Asset flickering when switching ImmersiveSpaces
There is a flickering occurring on 3D assets when switching immersive spaces, which is not the nicest user experience. The flickering occurs both when loading the scenes directly from the RealityKitContent package and when loading them from memory (pre-loaded assets). Since we cannot upload a video illustrating the undesirable behavior, I have to describe how to set up the project for you to observe it. To replicate the issue, follow these steps:
1. Create a new visionOS app using the Xcode template, see image.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
3. Replace all Swift files with those you will find in the attached texts.
4. In the RealityKitContent package, create a scene named YellowSpheres as illustrated below.
5. In the RealityKitContent package, create a scene named RedSpheres as illustrated below.
6. Launch the app in debug mode via Xcode onto the AVP device or simulator.
7. Continuously switch immersive spaces by pressing the Show RedSpheres and Show YellowSpheres buttons.
8. Observe the 3D assets flicker upon opening of the immersive spaces.

Attached files: AppModel, RedSpheresImmersiveView, YellowSpheresImmersiveView, TestFlickeringBetweenImmersiveSpacesApp
1
0
443
Oct ’24
Skysphere flickering with attachment at initial display of scene
There is a flickering and slight dimming occurring specifically on the skysphere, at initial load of the scene, when using an Attachment. This is observed in the simulator and on the real device. Since we cannot upload a video illustrating the undesirable behavior, I have to describe how to set up the project for you to observe it. To replicate the issue, follow these steps:
1. Create a new visionOS app using the Xcode template, see image.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist), see image.
3. Replace all Swift files with those you will find in the attached texts.
4. Add the skysphere image asset Skydome_8k found in the Apple sample app "Presenting an artist's scene".
5. Launch the app in debug mode via Xcode onto the AVP device or simulator.
6. Continuously open and dismiss the skysphere by pressing the Open Skysphere and Close buttons.
7. Observe the skysphere flicker and dim upon display of the skysphere.

The current workaround is commented out in the file ThreeSixtySkysphereRealityView at lines 65, 70, 71, and 72. Uncomment these lines, and the flickering and dimming do not occur. Are we using attachments wrongly? Is this behavior known and documented? Or is there really a bug in visionOS?

Attached files: AppModel, InitialImmersiveView, MainImmersiveView, TestSkysphereAttachmentFlickerApp, ThreeSixtySkysphereRealityView
1
0
404
Oct ’24