Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post

Replies

Boosts

Views

Activity

RealityView limits on visionOS
I'm asking myself what the limits of RealityView are. For example, is it possible to place an entity at position (x = 800 m, y = 0, z = -900 m)? What happens if I walk from my (0, 0, 0) to this point, will I see the entity then? Does someone know what the limits are? (A minimal placement sketch follows this post.)
1
0
299
Oct ’24
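A minimal sketch of what the question above describes, assuming a plain RealityView inside an immersive space; the view name and sphere size are hypothetical. RealityKit positions are in meters relative to the scene origin, so this places the entity roughly 1.2 km away. Whether it remains visible at that distance depends on far-plane clipping and rendering behavior, which this sketch does not answer.

import SwiftUI
import RealityKit

// Hypothetical demo view: places a large sphere far from the origin.
struct FarEntityView: View {
    var body: some View {
        RealityView { content in
            // A 10 m sphere so it has any chance of being visible from far away.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 10),
                materials: [SimpleMaterial(color: .red, isMetallic: false)]
            )
            // Position in meters relative to the scene origin.
            sphere.position = SIMD3<Float>(800, 0, -900)
            content.add(sphere)
        }
    }
}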
Immersive environment learning material
I really love the immersive environments, but I don’t have experience with creating them. Do you have resources or tutorials you can recommend for creating these from scratch? I’ve seen the sample projects and videos, but they usually start in the middle, assuming you already have the assets created.
1
0
64
Jul ’25
Prevent Window (or Volume) Mouse Focus
When using a trackpad (or a screen-shared Mac) with the Vision Pro, moving your attention to a new window or app immediately refocuses the mouse cursor, which in many circumstances is really useful. But when there is a viewer-only window, that window jumping gets in the way. Imagine a 3D object editor of some sort, with a live viewer in a second window, maybe a browser. Manipulating the 3D object with the mouse in the editor gets continually interrupted when looking at the live viewer, because the cursor jumps to the viewer window. Is there any way to reject that focus?
0
0
402
Nov ’24
Is it possible to live render CMTaggedBuffer / MV-HEVC frames in visionOS?
Hey all, I'm working on a visionOS app that captures live frames from the left and right cameras of Apple Vision Pro using cameraFrame.sample(for: .left/.right). Apple provides documentation on encoding side-by-side frames into MV-HEVC spatial video using CMTaggedBuffer: Converting Side-by-Side 3D Video to MV-HEVC My question: Is there any way to render tagged frames (e.g. CMTaggedBuffer with .stereoView(.leftEye/.rightEye)) live, directly to a surface in RealityKit or Metal, without saving them to a file? I’d like to create a true stereoscopic (spatial) live video preview, not just render two images side-by-side. Any advice or insights would be greatly appreciated!
2
0
215
Aug ’25
Using AVAsynchronousKeyValueLoading.load() on an AVAssetTrack gives an error
I'm seeing this error while attempting to compile my visionOS app under Xcode 26. My existing code looks like:

let (naturalSize, formatDescriptions, mediaCharacteristics) = try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

This now gives a compiler error: Type of expression is ambiguous without a type annotation. I don't see anything that was changed or deprecated in the latest version. Also, loading the properties individually seems to work fine, i.e.:

let naturalSize = try? await videoTrack.load(.naturalSize)
let formatDescriptions = try? await videoTrack.load(.formatDescriptions)
let mediaCharacteristics = try? await videoTrack.load(.mediaCharacteristics)

(A type-annotation workaround sketch follows this post.)
1
0
69
Jun ’25
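A hedged workaround sketch for the post above: if the compiler can no longer infer the tuple type of the variadic load, spelling it out explicitly may resolve the ambiguity (loading the properties individually, as the poster already does, is the other option). The element types shown are the documented result types of these property keys; whether the annotation alone satisfies the Xcode 26 compiler is an assumption.

import AVFoundation
import CoreMedia
import CoreGraphics

func loadTrackProperties(_ videoTrack: AVAssetTrack) async {
    // Explicit tuple annotation so the compiler does not have to infer
    // the result type of the multi-property load overload.
    let properties: (CGSize, [CMFormatDescription], [AVMediaCharacteristic])? =
        try? await videoTrack.load(.naturalSize, .formatDescriptions, .mediaCharacteristics)

    if let (naturalSize, formatDescriptions, mediaCharacteristics) = properties {
        print(naturalSize, formatDescriptions.count, mediaCharacteristics.count)
    }
}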
Volumetric Windows anchors
Hi, we would like to create something where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when I close and reopen the app the windows are in the same position. We can't use immersive spaces because we also want the possibility to access the Shared Space. Is it possible with the current features and capabilities to do that? If yes, do you have some advice on how we can achieve this? The alternative would be to open the virtual display in an immersive space, if that is possible, or to implement our own virtual display.
1
0
361
Feb ’25
Raw point cloud access
Hi, I currently have Enterprise API access and have observed that the main camera API only provides RGB data. I am trying to access point cloud information from LiDAR, but it seems ARKit doesn't offer this directly via the standard APIs that iPad uses. I wanted to ask if there are any options to access depth data or enhanced camera capabilities using the Enterprise API. Specifically: does having Enterprise API access unlock any additional camera-related APIs in AVFoundation that could provide depth information or more advanced control over the camera? Are there any workarounds or alternative methods to obtain depth data from the camera? (A scene-reconstruction sketch follows this post.)
1
0
384
Oct ’24
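Not raw LiDAR, but the closest generally available route on visionOS for depth-derived geometry appears to be ARKit's SceneReconstructionProvider, which delivers reconstructed mesh anchors rather than a point cloud; the enterprise camera entitlement does not seem to change that. A minimal sketch follows, assuming an app with an open immersive space and world-sensing authorization; verify the details against the current ARKit documentation.

import ARKit

// Sketch: receive scene-reconstruction mesh anchors on visionOS.
// Requires an open immersive space and world-sensing authorization.
func observeSceneMesh() async throws {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()

    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        // Each MeshAnchor carries reconstructed geometry (vertices, faces)
        // that can be inspected or turned into collision shapes.
        print("Mesh anchor \(update.anchor.id) was \(update.event)")
    }
}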
visionOS RealityKit's physics simulation stops for certain entity.scale values
The setup:
My Vision Pro app loads usdz models created by a 3rd-party app. These models have to be scaled to the right dimension for RealityKit. I want to use physics for realistic movement of these entities, but this does not work as expected. I therefore wrote a demo test app based on Apple's immersive space app template (code below).
This app has two entities: a board, and above it a box that is a child of the board.
Both have a collision component and a physics body.
The board, and thus also its child the box, is scaled.

The problem:
If the scale factor is greater than or equal to 0.91, the box falls under gravity towards the board, where it is stopped after some movement. This looks realistic.
However, if the scale factor is below 0.91, even 0.9, the movement of the box is immediately stopped on the board without any physics simulation.
Unfortunately, I am not able to upload screen recordings, but if the demo app is executed, one sees the effect immediately.

The question:
I cannot imagine that this behavior is correct. Can somebody confirm that this is a bug? If so, I will write a bug report.
My demo app uses Xcode Version 16.0 (16A242d), simulator Version 16.0 (1038) and visionOS 2.0.

The code:

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            let boardEntity = makeBoard()
            content.add(boardEntity)

            let boxEntity = makeBox()
            boardEntity.addChild(boxEntity)
            moveBox(boxEntity, parentEntity: boardEntity)
        }
    }

    func makeBoard() -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 1.0, height: 0.05, depth: 1.0)
        var material = UnlitMaterial()
        material.color.tint = .red
        let boardEntity = ModelEntity(mesh: mesh, materials: [material])
        let scale: Float = 0.91 // Physics does not run if scale < 0.91
        boardEntity.scale = [scale, scale, scale]
        boardEntity.transform.translation = [0, 0, -3]
        boardEntity.generateCollisionShapes(recursive: false)
        boardEntity.physicsBody = PhysicsBodyComponent(
            massProperties: .default,
            material: PhysicsMaterialResource.generate(friction: .infinity, restitution: 0.8),
            mode: .static)
        return boardEntity
    }

    func makeBox() -> ModelEntity {
        let mesh = MeshResource.generateBox(size: 0.2)
        var material = UnlitMaterial()
        material.color.tint = .green
        let boxEntity = ModelEntity(mesh: mesh, materials: [material])
        boxEntity.generateCollisionShapes(recursive: false)
        boxEntity.physicsBody = PhysicsBodyComponent(
            massProperties: .default,
            material: PhysicsMaterialResource.generate(friction: .infinity, restitution: 0.8),
            mode: .dynamic)
        return boxEntity
    }

    func moveBox(_ boxEntity: Entity, parentEntity: Entity) {
        // Set position and orientation of the box
        let translation = SIMD3<Float>(0, 0.5, 0)
        // Turn the box by 45 degrees around the y axis
        let rotationY = simd_quatf(angle: Float(45.0 * .pi / 180.0), axis: SIMD3(x: 0, y: 1, z: 0))
        let transform = Transform(rotation: rotationY, translation: translation)
        boxEntity.transform = transform
    }
}
2
0
493
Oct ’24
PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
Hello, I keep running into the warning below when pushing a window of type volumetric. Although pushing the window succeeds, we always get the warning, regardless of whether the window is pushed via the Attachment button or via the buttons in the ToolbarItemGroup. Illustrated is all the code: app file, first volume and second volume. You can see in my app file that all volumetric windows are indeed in a WindowGroup. What is wrong? How can I get rid of that warning? (A minimal sketch of an equivalent setup follows this post.)

Warning: PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
3
0
586
Sep ’24
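A minimal sketch of the kind of setup the post above describes, assuming two volumetric WindowGroup scenes and the pushWindow environment action; the scene ids and views are hypothetical. If a reduction structurally equivalent to this still triggers the warning, attaching it to a Feedback Assistant report would be worthwhile.

import SwiftUI

@main
struct VolumesApp: App {
    var body: some Scene {
        // Both volumes are declared as WindowGroup scenes, as PushWindowAction requires.
        WindowGroup(id: "firstVolume") {
            FirstVolumeView()
        }
        .windowStyle(.volumetric)

        WindowGroup(id: "secondVolume") {
            SecondVolumeView()
        }
        .windowStyle(.volumetric)
    }
}

struct FirstVolumeView: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        // Replaces the current volume with the second one.
        Button("Push second volume") {
            pushWindow(id: "secondVolume")
        }
    }
}

struct SecondVolumeView: View {
    var body: some View {
        Text("Second volume")
    }
}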
A Summary of the WWDC25 Group Lab - visionOS
At WWDC25 we launched a new type of Lab event for the developer community: Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers, and a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for visionOS.

Q: I saw that there is a new way to add SwiftUI view attachments in my RealityView; what advantages does this have over the old way?
A: Attachments can now be added directly to your entities with ViewAttachmentComponent. This removes the need to declare your attachments upfront in your RealityView initializer and then add those attachments as child entities. The new approach provides greater flexibility. Canyon Crosser and Petite Asteroids both utilize the new approach.

Q: ManipulationComponent looks really cool! Right now my app has a series of complicated custom gestures. What gestures does it handle for me exactly, and are there any situations where I should prefer my own custom gestures?
A: ManipulationComponent provides natural interaction with virtual objects. It seamlessly handles translation and rotation. You can easily add manipulation to a SwiftUI view like Model3D with the manipulable view modifier. The new Object Manipulation API is great for most apps and is a breeze to implement, but sometimes you might want a more custom feel, and that's OK! Custom gestures are still fully supported for that scenario.

Q: I saw that there is a new API to also access the right main camera. What can I do with this?
A: Correct, in visionOS 26 you can access the left and right main cameras, and you can even access them simultaneously as a stereo pair. Camera access still requires a managed entitlement and an enterprise license; see Accessing the main camera for more details about those requirements. More computer vision and machine learning use cases are unlocked with access to both cameras, and we are excited to see what you will do!

Q: What do I need to do to add spatial accessory input for my app?
A: First, use the GameController framework to establish a connection with the spatial accessory and listen for events from the controller. Then you can use RealityKit, ARKit, or a combination of both to track the accessory, anchor virtual content to it, and fine-tune the accessory's interaction with the content in your app. For more details, check out Discovering and tracking spatial game controllers and styli.

Q: By far, the most difficulty with implementing visionOS apps is SwiftUI window management: placing, opening, closing, etc. Are there any improvements to window management in visionOS 26?
A: Yes! We recommend watching Set the scene with SwiftUI in visionOS. You can use defaultLaunchBehavior to choose whether a particular window is presented (or suppressed) at launch, and you can prevent a window such as a secondary toolbar from launching as the initial window using .restorationBehavior(.disabled) (a minimal sketch of these modifiers follows this summary). Adopting best practices for persistent UI provides a great overview of SwiftUI window management on visionOS. As for placing windows, there is still no API for an app to specify the placement of its windows other than relative placement. If that is a feature you are interested in, please file an enhancement request using Feedback Assistant!

Q: How do I get access to the Enterprise API?
A: First, request the entitlement and license through your Apple Developer or enterprise account. Once these have been granted, include the license and entitlement in your project. Then you can build, test, and distribute as an in-house app.
4
0
269
Jul ’25
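A minimal sketch of the window-management scene modifiers mentioned in the answer above, assuming they behave as the summary describes; the scene ids and views are hypothetical, not from the session.

import SwiftUI

@main
struct LaunchBehaviorApp: App {
    var body: some Scene {
        // The main window is presented at launch as usual.
        WindowGroup(id: "main") {
            ContentView()
        }

        // A secondary tool palette: suppressed at launch and excluded
        // from restoration, so it never becomes the initial window.
        WindowGroup(id: "toolPalette") {
            ToolPaletteView()
        }
        .defaultLaunchBehavior(.suppressed)
        .restorationBehavior(.disabled)
    }
}

struct ContentView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Show tools") {
            openWindow(id: "toolPalette")
        }
    }
}

struct ToolPaletteView: View {
    var body: some View {
        Text("Tools")
    }
}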
How to Play Timeline Animations via code
Hi everyone, I need to synchronize the playback of RealityKit Timelines via SharePlay. To do this I am trying to get references to the timelines using AnimationPlaybackController and AnimationResource. In my RealityKit scene I have configured both an animation (created with Blender) and a timeline. The animation starts correctly when the RealityKit scene starts; the timeline does not. Below is the code:

import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State private var subscriptions = [EventSubscription]()
    @Environment(AppModel.self) private var appModel
    let rootEntity = Entity()
    @State var testEntity: Entity?
    @State var testAnimation: AnimationResource?
    @State var testController: AnimationPlaybackController?

    init() {
        CubeComponent.registerComponent()
    }

    var body: some View {
        RealityView { content in
            content.add(rootEntity)
            if let scene = try? await Entity(named: "Room", in: realityKitContentBundle) {
                rootEntity.addChild(scene)
                playAnimations(from: content)
            }
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity()
            .onEnded({ value in
                _ = value.entity.applyTapForBehaviors()
                if let testEntity, let testAnimation {
                    testController = testEntity.playAnimation(testAnimation.repeat())
                }
            })
        )
    }

    func playAnimations(from content: RealityViewContent) {
        subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: AnimationLibraryComponent.self, { event in
            let entity = event.entity
            entity.components[AnimationLibraryComponent.self]?.animations.forEach({ (key, value) in
                if value.definition is AnimationGroup {
                    if key == "/Room/TestTimeline" {
                        let controller = entity.playAnimation(value.repeat())
                        testEntity = entity
                        testAnimation = value
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                } else {
                    if entity.name == "SphereInteractable" {
                        let controller = entity.playAnimation(value.repeat())
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                }
            })
        }))
    }
}

The variables testEntity, testAnimation and testController are for testing purposes only. If I try to start the animations in the playAnimations function, only the animation created via Blender starts (the one related to the object "SphereInteractable"); the timeline starts only if I save a reference and play it with a tap gesture, or with a delay via DispatchQueue.asyncAfter called in onAppear. Is there a better way to handle this? The goal is to have a reference to the AnimationPlaybackController of the timeline, in order to sync the animation via SharePlay. Thanks
3
0
846
Oct ’24
Volumetric window not sharing in SharePlay session for VisionOS
I've been struggling with this for far too long, so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something simple, but I just can't figure it out. I can SharePlay our test app with my brother (device to device), but when I open a volumetric window, it says "not shared" under it. I assume fixing this will likely fix the video-sharing problem we have as well. Everything else works so smoothly, but SharePlay has just been such a struggle for me. It's the last piece of the puzzle before we can put it on the App Store.
2
0
99
2w
Metal Compositor Service & Persona (VisionOS)
Hello, I'm currently trying to make a collaborative app. It works only with RealityView; when I tried to use a CompositorLayer like below, the Personas disappeared.

ImmersiveSpace(id: "ImmersiveSpace-Metal") {
    CompositorLayer(configuration: MetalLayerConfiguration()) { layerRenderer in
        SpatialRenderer_InitAndRun(layerRenderer)
    }
}

Is there any potential solution to see Personas in a Metal view? Thanks in advance!
2
0
711
2d
The multiview video screen turns blank when returning
I am encountering an issue while using the multiview video demo provided at this link "https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/". Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
1
0
346
Feb ’25