Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post · Replies · Boosts · Views · Activity

I want to understand the update mechanism of `RealityView`.
Here are the code snippets.

struct RealityViewTestView: View {
    @State private var texts: [String] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            for text in texts {
                if let textEntity = attachments.entity(for: text) {
                    textEntity.position.x = Float.random(in: -0.1...0.1)
                    content.add(textEntity)
                }
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    texts.append(String(UUID().uuidString.prefix(6)))
                }
            }
            ToolbarItem {
                Button("Remove") {
                    texts.remove(at: Int.random(in: 0..<texts.count))
                }
            }
        }
    }
}

struct RealityViewTestView: View {
    @State private var texts: [String] = []
    @State private var entities: [Entity] = []

    var body: some View {
        RealityView { content, attachments in
        } update: { content, attachments in
            // for text in texts {
            //     if let textEntity = attachments.entity(for: text) {
            //         textEntity.position.x = Float.random(in: -0.1...0.1)
            //         content.add(textEntity)
            //     }
            // }
            for entity in entities {
                content.add(entity)
            }
        } attachments: {
            ForEach(texts, id: \.self) { text in
                Attachment(id: text) {
                    Text(text)
                        .padding()
                        .glassBackgroundEffect()
                }
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    // texts.append(String(UUID().uuidString.prefix(6)))
                    let m = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    m.position.x = Float.random(in: -0.2...0.2)
                    entities.append(m)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    // texts.remove(at: Int.random(in: 0..<texts.count))
                    entities.removeLast()
                }
            }
        }
    }
}

About the first snippet: when I remove an element from texts, why does content automatically remove the corresponding attachment entity? In the second snippet, content does not automatically remove the corresponding entity. I am very curious.
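A plausible reading is ownership: RealityView creates and tears down attachment entities itself from the attachments: builder as texts changes, whereas entities allocated in your own code stay parented to the content until you detach them. Below is a minimal sketch of the second approach with the removal done by hand; the view name and reconciliation logic are illustrative, not documented RealityView behavior.

import SwiftUI
import RealityKit

// Sketch (illustrative): attachment entities follow SwiftUI state, but entities
// you create yourself remain in the scene until you remove them explicitly.
struct ManualEntityView: View {
    @State private var entities: [Entity] = []

    var body: some View {
        RealityView { _ in
        } update: { content in
            // Add entities that aren't in the scene yet; already-added ones are left alone.
            for entity in entities where entity.parent == nil {
                content.add(entity)
            }
        }
        .toolbar {
            ToolbarItem {
                Button("Add") {
                    let sphere = ModelEntity(
                        mesh: .generateSphere(radius: 0.1),
                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
                    sphere.position.x = Float.random(in: -0.2...0.2)
                    entities.append(sphere)
                }
            }
            ToolbarItem {
                Button("Remove") {
                    guard let last = entities.popLast() else { return }
                    // Manually created entities must be detached by hand;
                    // dropping them from the state array is not enough.
                    last.removeFromParent()
                }
            }
        }
    }
}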
Replies: 1 · Boosts: 0 · Views: 412 · Jan ’25
USDZ with Blend Shapes Workflow Recommendations
Hi, since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ. Are Blender or Cinema 4D capable of doing that out of the box? Should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)? So far this topic seems very sparsely documented, and I would appreciate any hints. Thank you!
Replies: 3 · Boosts: 0 · Views: 1.3k · Jan ’25
RealityKit Trace Metric Max/Range for VisionOS app
Hi Nathaniel, I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that has some key metrics I'd find in a RealityKit trace so I know if that metric is exceeding limits and probably causing a problem? Right now, I just see numbers and have no idea if a metric is high or low :). This is specifically for a VisionOS app. Thanks, Bob
Replies: 3 · Boosts: 0 · Views: 91 · Jun ’25
Debug Vision Pro application directly on physical device instead of the simulator
I have a Mac mini M4 with 16GB of memory and Xcode 16.1. When I test my Vision Pro app in the Simulator, it is very slow and the system shows that memory is under high pressure. How do I run/test/debug the application on the Vision Pro directly? I tried to add my Vision Pro to my developer account, but it didn't work because I couldn't find the UDID; when I hook up USB to the battery, it only shows a Battery device ID.
Replies: 3 · Boosts: 0 · Views: 450 · Jan ’25
App crashes after requesting PhotoLibrary limited access
My visionOS app requires access to the user's personal photos. The trigger mechanism is: when the user first opens a FooView, a task attached to that FooView calls let status = PHPhotoLibrary.authorizationStatus(for: .readWrite); if the status is .notDetermined, it then calls PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler) so visionOS presents a window to request photo access. However, the app crashes every time the user selects Limited Access and the system tries to present the photo library picker. By the way, I have set "Prevent limited photos access alert" to Yes, but I guess that shouldn't affect the behavior here.

The debugger message is:

*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Presentations are not permitted within volumetric window scenes.'

However, the window this view belongs to is a .plain style window (though 3D objects appear in another view of the same WindowGroup).

This is my code snippet if it helps. checkAndUpdatePhotoAuthorization is just a wrapper around PHPhotoLibrary.authorizationStatus(for: .readWrite):

private func checkAndUpdatePhotoAuthorization() -> PHAuthorizationStatus {
    let currentStatus = PHPhotoLibrary.authorizationStatus(for: .readWrite)
    switch currentStatus {
    case .authorized:
        print("Photo library access authorized.")
        isPhotoGalleryAuthorized = true
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .limited:
        print("Photo library access limited.")
        isPhotoGalleryLimited = true
        isPhotoGalleryAuthorized = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    case .notDetermined:
        isPhotoGalleryDetermined = false
        print("Photo library access not determined.")
    case .denied:
        print("Photo library access denied.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        showSettingsAlert = true
        isPhotoGalleryDetermined = true
    case .restricted:
        print("Photo library access restricted.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = true
        showPhotoAuthExplainationAlert = true
        isPhotoGalleryDetermined = true
    @unknown default:
        print("Photo library Unknown authorization status.")
        isPhotoGalleryAuthorized = false
        isPhotoGalleryLimited = false
        isPhotoGalleryAccessRestricted = false
        isPhotoGalleryDetermined = true
    }
    return currentStatus
}

And then FooView attaches a task to fire checkAndUpdatePhotoAuthorization():

var body: some View {
    EmptyView()
        .task {
            try? await Task.sleep(for: .seconds(1.0))
            let status = self.checkAndUpdatePhotoAuthorization()
            if status == .notDetermined {
                DispatchQueue.main.async {
                    PHPhotoLibrary.requestAuthorization(for: .readWrite, handler: authCompletionHandler)
                }
            }
        }
}

Another thing worth mentioning is that SOMETIMES it won't crash when running a debug build, but it does crash on TestFlight builds. Any other ideas? Big thanks in advance.

Xcode version: 16.2 beta 3
visionOS version: 2.2
Replies: 1 · Boosts: 0 · Views: 602 · Jan ’25
How To Move Camera with PS5 Game Controller
Hello again! So, I am trying my best to figure out how to get a simple, fully immersive scene that is navigable via a PS5 controller to work on Vision Pro. I am even using ChatGPT (both externally AND in Xcode when it makes fix recommendations); however, I seem to be going around in circles, having to constantly send error messages back to ChatGPT, then try the new "confirmed to work" fixed code, only to get more errors. I have also explicitly let the AI know that I'm using all the current betas, only to learn that lots of stuff has been deprecated (of course!).

This is my initial query to ChatGPT: "How can I create a fully immersive VR experience for Apple Vision Pro, using a PS5 game controller to navigate the user via first-person view through a 3D scene?" And then down the rabbit hole for the past few days. Seriously, I thought AI was supposed to be helpful! LOL

Per Xcode logs, I'm seeing thumbstick movement, but I simply cannot get the camera to move. Is this even possible?! Is it a camera defined in Reality Composer Pro, or does it need to be defined outside? I'm trying to follow along with the AI, but it seems to go back and forth with different solutions. Here is the latest it's spitting out:

__
Thanks for this super clear log — and you're absolutely right: the gamepad input is working, CameraRoot is moving as expected… but your actual camera view is not changing.

This strongly suggests the issue is that RealityKit is not using your custom camera (PerspectiveCameraComponent) for rendering the view.
⸻
🔍 What's happening
Even though CameraRoot is moving (with correct logs), RealityKit will default to its own internal camera unless:
1. You explicitly assign your camera entity to be the one used, or
2. Your camera is the only one and RealityKit picks it up automatically.

But in visionOS 2.0 / Xcode 26, there seems to be a bug or behavior change where RealityKit no longer auto-uses your custom PerspectiveCameraComponent in RealityView unless it's:
• properly set with content.camera, or
• part of a .usda/Reality Composer camera hierarchy that's already active.

Unfortunately:
🟥 RealityView.camera = .entity(cameraRoot)
❌ Is no longer allowed (we confirmed this breaks with a type error)
⸻
✅ The Reliable Workaround: Attach your camera to the Immersive scene itself
Since we're loading a .usda scene (Reality Composer Pro), here's the most reliable path forward:

✔️ Option A: Embed your camera into the Reality Composer Pro scene
1. Open your Reality Composer Pro scene
2. Add a Camera entity directly in your Immersive.usda scene
3. Name it something like "PlayerCamera"
4. Position it at [0, 1.6, 0]
5. Leave it selected as the default camera

Then in your Swift code:
• Find the camera entity with .findEntity(named:)
• Move that entity instead of creating a new one.
⸻
✔️ Option B: Force RealityKit to use your custom camera (via transform parenting)
Since direct camera assignment is unavailable, we can use a trick: move the whole RealityView scene via a parent entity with a camera.

But let's go with Option A, since you've already got a Reality Composer Pro scene.
__

Can any developer over there make a suggestion on how to move forward? I JUST want to create a sandbox template so I can start exploring 3D scenes created in Maya and sent over to the headset. I'll deal with animation next, but come on, why is this so difficult to get working? I'm not a programmer, but I have been trying to wrap my head around Xcode and SwiftUI. This needs to be much simpler.
Or, you need to provide us creatives with better sample templates and non-programmer-speak instructions on how to set this up properly. Ideally, you HIRE us 3D professionals to work side by side with the programmers to help make these tools usable - especially Reality Composer Pro. Seriously, I am making a concerted effort to use the native tools, even though I would love to be porting Unreal Engine scenes over. If anyone can help point me in the right direction, coming from a 3D Creator/Animator/Modeler perspective, I, and my fellow peers in the XR/AR/VR community, would greatly appreciate it. Thank you.
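For context, in a visionOS immersive space the render camera follows the wearer's head, so the usual pattern is to move the scene content (a "world root" entity) in response to controller input rather than a camera. Below is a minimal sketch under that assumption; the "Immersive" scene name, the worldRoot entity, and the movement math are illustrative, not Apple sample code.

import SwiftUI
import RealityKit
import GameController
import Combine

// Sketch: translate a root entity with the left thumbstick of a connected controller.
// Moving the world opposite to the stick reads as the user "walking" through it.
struct ImmersiveControllerView: View {
    @State private var worldRoot = Entity()          // parent all scene content under this
    @State private var stick: SIMD2<Float> = .zero   // latest left-thumbstick value

    var body: some View {
        RealityView { content in
            // Hypothetical scene load; swap in your Reality Composer Pro bundle/scene name.
            if let scene = try? await Entity(named: "Immersive") {
                worldRoot.addChild(scene)
            }
            content.add(worldRoot)
        }
        .onReceive(NotificationCenter.default.publisher(for: .GCControllerDidConnect)) { note in
            // Controllers already paired before this view appears can also be
            // found via GCController.controllers().
            guard let controller = note.object as? GCController,
                  let gamepad = controller.extendedGamepad else { return }
            gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
                Task { @MainActor in stick = SIMD2<Float>(x, y) }
            }
        }
        .task {
            // Simple fixed-rate movement loop; a real app would likely use a RealityKit System.
            while !Task.isCancelled {
                try? await Task.sleep(for: .milliseconds(16))
                let speed: Float = 0.02
                // Move the world opposite to the stick so the user appears to move forward.
                worldRoot.position += SIMD3<Float>(-stick.x, 0, stick.y) * speed
            }
        }
    }
}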
Replies: 8 · Boosts: 0 · Views: 686 · Jul ’25
Alternatives to SceneView
Hey there, since SceneView has been marked as "deprecated" for SwiftUI, I'm wondering which alternatives should be considered for the following situation: I have a SwiftUI app (for iOS and iPadOS) where users can view 3D models (USDZ) in a scene, with rotate, scale, and move gestures. The models are downloaded from a web backend and loaded via local URL paths.

What I tested: I tried ARView in .nonAR mode and RealityView, but I didn't get the expected result (the user being able to rotate and scale the 3D models in a virtual space). ARView in .nonAR mode still shows the object like in normal AR mode, just without the camera stream. I tried adding gestures to the RealityView on iOS: loading USDZ 3D models worked, but the gestures didn't. Model3D is only available for visionOS (it would be amazing to have it on iOS). I also checked Quick Look preview, however it works pretty strangely via a file picker etc., which is not the way users should load 3D models in my app.

Maybe I missed something; I couldn't find anything that helps. I'm pretty much stuck adopting the latest and greatest frameworks/APIs in my app and taking the next steps to port my app to visionOS. Long story short 😃: does someone have an idea what the alternative to SceneView is for USDZ 3D models? I appreciate your support!! Thanks in advance!
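One common gotcha worth noting: for RealityKit entity gestures to fire in an iOS 18 RealityView, the entity needs an InputTargetComponent plus collision shapes, and the SwiftUI gestures have to be entity-targeted. A minimal sketch, assuming iOS 18 and a locally downloaded USDZ at modelURL (a hypothetical file URL); the rotation/scale math is deliberately simplistic.

import SwiftUI
import RealityKit

// Sketch: non-AR model viewer with drag-to-rotate and pinch-to-scale.
struct ModelViewer: View {
    let modelURL: URL                       // hypothetical local file URL of the USDZ
    @State private var model: Entity?

    var body: some View {
        RealityView { content in
            // Load the USDZ and make it gesture-targetable.
            if let entity = try? await ModelEntity(contentsOf: modelURL) {
                entity.generateCollisionShapes(recursive: true)
                entity.components.set(InputTargetComponent())
                entity.position = [0, 0, -0.5]
                content.add(entity)
                model = entity
            }
        }
        // Drag horizontally to spin the model around its y-axis (absolute, not cumulative).
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let angle = Float(value.translation.width) * 0.005
                    value.entity.transform.rotation = simd_quatf(angle: angle, axis: [0, 1, 0])
                }
        )
        // Pinch to scale.
        .gesture(
            MagnifyGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    value.entity.scale = SIMD3<Float>(repeating: Float(value.magnification))
                }
        )
    }
}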
Replies: 4 · Boosts: 0 · Views: 145 · Jul ’25
Run to Device( on WLAN)?
Is it possible to use a local Wi-Fi router to connect Vision Pro and Mac for development? I tried from Unity and Xcode. From Unity, the host app wouldn't open without Wi-Fi (internet connection). From Xcode, I can see the Vision Pro paired, but when I try to run, there's no device listed. Any suggestions? Thanks a lot, /Ruiying
Replies: 2 · Boosts: 0 · Views: 251 · Jan ’25
Run to Device (on WLAN)?
I was trying to use a local Wi-Fi router to connect Vision Pro and Mac for development. From Unity, the host app on Vision Pro wouldn't open; from Xcode I can see the Vision Pro paired, but next to the Run button there's no device listed... Any ideas? Thanks, /Ruiying
Replies: 1 · Boosts: 0 · Views: 301 · Jan ’25
How to set the AttractionCenter for a ParticleEmitterComponent in a System with real time updates
I am trying to achieve an effect such that the particles of a particle system are attracted to my hand entity. The hand entity is essentially an AnchorEntity that is tracking my right hand.

let particleEmitterEntities = context.entities(matching: particleEmitterQuery, updatingSystemWhen: .rendering)
for particleEmitterEntity in particleEmitterEntities {
    if var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] {
        // Trying to get the world-space position of the hand.
        // I also tried relative to particleEmitterEntity.
        particleEmitter.mainEmitter.attractionCenter = rightHandEntity.position(relativeTo: nil)
        particleEmitterEntity.components[ParticleEmitterComponent.self] = particleEmitter
    } else {
        fatalError("Cannot find particle emitter")
    }
}

The particle attraction center doesn't seem to update. Another issue I'm noticing: my particle system often doesn't show the particle image and just renders a placeholder square when I do this; when I comment this code out, I get the right particle image. I believe this is due to the number of times this loop runs to update the position of the attraction center. What is the right way to do an effect where the particles are attracted to my hand?
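For reference, here is one way this could be structured as a registered System, writing the component back only when the value actually changes. It is a sketch, not a confirmed fix: the "RightHand" entity name is an assumption, and whether attractionCenter expects emitter-local or world-space coordinates is something to verify against the ParticleEmitterComponent documentation.

import RealityKit

// Sketch of a System that pulls every particle emitter toward a hand entity.
struct HandAttractionSystem: System {
    static let emitterQuery = EntityQuery(where: .has(ParticleEmitterComponent.self))

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        // Assumption: the hand AnchorEntity is named "RightHand" in the scene.
        guard let hand = context.scene.findEntity(named: "RightHand") else { return }

        for emitterEntity in context.entities(matching: Self.emitterQuery,
                                              updatingSystemWhen: .rendering) {
            guard var emitter = emitterEntity.components[ParticleEmitterComponent.self] else { continue }

            // Express the hand position in the emitter's own space (assumption to verify).
            let target = hand.position(relativeTo: emitterEntity)

            // Only write the component back when the value changed, so the emitter
            // isn't replaced on every single frame.
            if emitter.mainEmitter.attractionCenter != target {
                emitter.mainEmitter.attractionCenter = target
                emitterEntity.components[ParticleEmitterComponent.self] = emitter
            }
        }
    }
}

// Register once, e.g. at app launch:
// HandAttractionSystem.registerSystem()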
Replies: 3 · Boosts: 0 · Views: 533 · Oct ’24
'Segmentation fault: 11' error after upgrading to the latest macOS version
Today I updated my MacBook Pro to macOS Sequoia. With this, I also downloaded the latest Xcode and visionOS 2 packages. I had a working project, which did work with my Vision Pro, which is updated to the latest visionOS 2. But now, whenever I click on Preview in Xcode while editing a Swift file, I receive the following error: (lots of lines here) Library/Developer/Xcode/DerivedData/test2-fznbrpphddkqdaddrzamkayoajjm/Build/Intermediates.noindex/RealityKitContent.build/Debug-xrsimulator/RealityKitContent_RealityKitContent.build/DerivedSources/RealityAssetsGenerated/CustomComponentUSDInitializers.usda error: Tool terminated by signal 'Segmentation fault: 11' I tried quitting and restarting my Mac, but the problem is not going away. Can someone help me with this? Thank you!
Replies: 8 · Boosts: 0 · Views: 1.8k · Nov ’24
RealityView Gestures for iOS
I started a new project using RealityKit and RealityView, intended as an AR app on iPhone and iPad, and eventually visionOS as well. I'm challenged because I find much of the recent documentation, WWDC videos, etc. includes features that are visionOS-only. Right now, I would simply like to create gesture functionality similar to the AR Quick Look defaults: drag to reposition, two fingers to rotate or zoom. In the past, this would be implemented with something like: arView.installGestures([.all], for: entity) However, with RealityView I don't know how (or whether it's possible) to access an ARView. In RealityKit, I have found this doc: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures However, many of the features in that posting are visionOS-only, and I've found no good documentation on the topic that is specific to, or at least compatible with, iOS. I know reverting to an ARView is an option, but I want to use RealityView if at all possible, as I see it as more forward-looking.
Replies: 1 · Boosts: 0 · Views: 373 · Jan ’25
Realitykit asset loading
With Xcode 26, loading resources with RealityKit is extremely slow. Here my project takes almost 50 seconds to load. I also get multiple "Hang detected" messages in the console. When I uncheck "Debug executable" in the scheme, the same project loads in 2 seconds. I'm using RealityKit asynchronous loading:

private static func loadFromRealityComposerPro(
    named entityName: String,
    fromSceneNamed sceneName: String
) async -> Entity? {
    var entity: Entity?
    do {
        let scene = try await Entity(
            named: sceneName,
            in: visionPetsContentBundle
        )
        entity = scene.findEntity(named: entityName)
    } catch {
        print(
            "Error loading \(entityName) from scene \(sceneName): \(error.localizedDescription)"
        )
    }
    return entity
}

Anyone having the same problem?
Replies: 2 · Boosts: 0 · Views: 55 · Jun ’25
Palm Menu Button Issue
Hi, we have an immersive space in our app, and we thought the palm menu button was not available in immersive spaces, but when I look at my hand and tap, the menu button appears. Is it possible to keep it hidden? We have a hand-tracking feature on the palm, and when we try to press a button overlapping the palm, it triggers the menu button; then, when the user presses again by mistake, it sends the application to the background. This is very important for us because we would like to release this hand-tracking feature as soon as possible. Here is a link to a video showing the problem: https://drive.google.com/file/d/1cfOcdzF19h_mbmpvkVNCJjXEBJecVeJL/view?usp=sharing
Replies: 1 · Boosts: 0 · Views: 604 · Sep ’24
Cast virtual light on real-world environments in VisionOS/RealityKit?
Hi everyone, I've been exploring an idea that involves using virtual light sources in VisionOS/RealityKit to interact with real-world objects. Specifically, I'd like to simulate a scenario where a virtual spotlight or other light source casts light or shadows onto real-world environments, creating the effect of virtual lighting interacting with physical surroundings. Is this currently feasible within VisionOS/RealityKit? Thank you!
Replies: 1 · Boosts: 0 · Views: 395 · Jan ’25
Collision Detection Fails After Anchoring ModelEntity to Hand in VisionOS
I'm starting my journey in developing an immersive app for visionOS. I've been making steady progress, but I've encountered a specific challenge that I haven't been able to resolve. I created two ModelEntity objects — a sphere and a cube — and added a DragGesture to the cube. When I drag the cube over the sphere, the two collide correctly, and the collision is logged in the console. So far, everything works as expected. However, when I try to anchor the cube to my hand, the collision stops working. It's as if the cube loses its ability to detect collisions once it's anchored. Any guidance or clarification on this behavior would be greatly appreciated.

//  ImmersiveView.swift
//  estudos_vision
//
//  Created by Lailan Rogerio Rodrigues Matos on 15/05/25.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel
    @State private var session: SpatialTrackingSession?
    @State private var box = ModelEntity()
    @State private var subs: [EventSubscription] = []
    @State private var ballEntity: Entity?

    var body: some View {
        RealityView { content in
            // Load initial content from the RealityKit scene.
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Create and run a spatial tracking session.
            let session = SpatialTrackingSession()
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            _ = await session.run(configuration)
            self.session = session

            // Create a red box.
            let boxMesh = MeshResource.generateBox(size: 0.2)
            let material = SimpleMaterial(color: .red, isMetallic: false)
            box = ModelEntity(mesh: boxMesh, materials: [material])
            box.position.y += 0.15 // Position the box slightly above the origin.

            // Configure the box for user interaction and physics.
            box.components.set(InputTargetComponent(allowedInputTypes: .indirect)) // Make it interactive.
            box.generateCollisionShapes(recursive: false) // Generate collision shapes for physics.
            box.components.set(PhysicsBodyComponent( // Add physics behavior.
                massProperties: .default,
                material: .default,
                mode: .kinematic // Use kinematic mode so it can be moved by user interaction.
            ))
            box.components.set(GroundingShadowComponent(castsShadow: true)) // Add a shadow.
            // content.add(box) // Commented out in order to add the box to the hand anchor instead.

            // Create a left hand anchor and add the box as a child.
            let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)
            handAnchor.addChild(box)
            content.add(handAnchor) // Add the hand anchor to the scene.

            // Create a sphere.
            let ball = ModelEntity(mesh: .generateSphere(radius: 0.15))
            ball.position = [0.0, 1.5, -1.0] // Initial position of the ball.
            ball.generateCollisionShapes(recursive: false) // Add collision.
            ball.name = "Sphere"
            content.add(ball)
            ballEntity = ball

            // Subscribe to collision events between the box and other entities.
            let event = content.subscribe(to: CollisionEvents.Began.self, on: box) { ce in
                print("Collision between \(ce.entityA.name) and \(ce.entityB.name) occurred")
                // ce.entityA.removeFromParent() // removes the colliding object
                // ce.entityB.removeFromParent()
            }
            Task { subs.append(event) }
        }
        // Add a drag gesture to the box, allowing the user to move it.
        .gesture(
            DragGesture()
                .targetedToEntity(box) // Target the drag gesture to the box.
                .onChanged({ value in
                    // Update the position of the box based on the drag gesture.
                    box.position = value.convert(value.location3D, from: .local, to: box.parent!)
                })
        )
    }
}

#Preview(immersionStyle: .full) {
    ImmersiveView()
        .environment(AppModel())
}
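One thing that may be worth checking (an assumption, not a confirmed fix for this exact project): on visionOS 2, content under an AnchorEntity runs in a separate, isolated physics simulation by default, so it stops colliding with entities outside that anchor. The AnchoringComponent.physicsSimulation setting exists to opt out of that isolation; a minimal sketch:

import RealityKit

// Sketch (assumes visionOS 2's AnchoringComponent.physicsSimulation API):
// keep a hand-anchored entity in the same physics/collision simulation as the
// rest of the scene instead of the anchor's isolated one.
func attachBoxToHand(box: ModelEntity) -> AnchorEntity {
    let handAnchor = AnchorEntity(.hand(.left, location: .palm), trackingMode: .continuous)

    // Default is .isolated, which separates anchored content from scene physics.
    handAnchor.anchoring.physicsSimulation = .none

    handAnchor.addChild(box)
    return handAnchor
}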
Replies: 1 · Boosts: 0 · Views: 74 · May ’25