Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic


RealityKit / visionOS – Memory not released after dismissing ImmersiveSpace with USDZ models
Hi everyone, I’m encountering a memory overflow issue in my visionOS app and I’d like to confirm if this is expected behavior or if I’m missing something in cleanup.

App Context
The app showcases apartments in real scale using AR. Apartments are heavy USDZ models (hundreds of thousands of triangles, high-resolution textures). Users can walk inside the apartments, and performance is good even close to hardware limits.

Flow
- The app starts in a full immersive space (RealityView) for selecting the apartment.
- When an apartment is selected, a new ImmersiveSpace opens and the apartment scene loads. The scene includes multiple USDZ models, EnvironmentResources, and dynamic textures for skyboxes.
- When the user dismisses the experience, we attempt cleanup:
  - Nulling out all entity references.
  - Removing ModelComponents.
  - Clearing cached textures and skyboxes.
  - Forcing dictionaries/collections to empty.

Despite this cleanup, memory usage remains very high.

Problem
After dismissing the ImmersiveSpace, memory does not return to baseline. Check the attached screenshot of the profiling made using Instruments:
- Initial state: ~30 MB (main menu).
- After loading models sequentially: ~3.3 GB.
- Skybox textures bring it near ~4 GB.
- After dismissing the experience (at the ~01:00 mark): memory only drops slightly (to ~2.66 GB).
- When loading the second apartment, memory continues to increase until ~5 GB, at which point the app crashes due to memory pressure.

The issue is consistently visible under VM: IOSurface in Instruments. No leaks are detected. So it looks like RealityKit (or lower-level frameworks) keeps caching meshes and textures, and does not free them when the RealityView is ended. But for my use case, these resources should be fully released once the ImmersiveSpace is dismissed, since new apartments will load entirely different models and textures.

Cleanup Code Example
Here’s a simplified version of the cleanup I’m doing:

```swift
func clearAllRoomEntities() {
    for (entityName, entity) in entityFromMarker {
        entity.removeFromParent()
        if let modelEntity = entity as? ModelEntity {
            modelEntity.components.removeAll()
            modelEntity.children.forEach { $0.removeFromParent() }
            modelEntity.clearTexturesAndMaterials()
        }
        entityFromMarker[entityName] = nil
        removeSkyboxPortals(from: entityName)
    }
    entityFromMarker.removeAll()
}

extension ModelEntity {
    func clearTexturesAndMaterials() {
        guard var modelComponent = self.model else { return }
        for index in modelComponent.materials.indices {
            removeTextures(from: &modelComponent.materials[index])
        }
        modelComponent.materials.removeAll()
        self.model = modelComponent
        self.model = nil
    }

    private func removeTextures(from material: inout any Material) {
        if var pbr = material as? PhysicallyBasedMaterial {
            pbr.baseColor.texture = nil
            pbr.emissiveColor.texture = nil
            pbr.metallic.texture = nil
            pbr.roughness.texture = nil
            pbr.normal.texture = nil
            pbr.ambientOcclusion.texture = nil
            pbr.clearcoat.texture = nil
            material = pbr
        } else if var simple = material as? SimpleMaterial {
            simple.color.texture = nil
            material = simple
        }
    }
}
```

Questions
- Is this expected RealityKit behavior (textures/meshes cached internally)?
- Is there a way to force RealityKit to release GPU resources tied to USDZ models when they’re no longer used?
- Should dismissing the ImmersiveSpace automatically free those IOSurfaces, or do I need to handle this differently?

Any guidance, best practices, or confirmation would be hugely appreciated. Thanks in advance!
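For what it's worth, one way to tell framework-level caching apart from an app-side retain cycle is to check whether the Entity objects themselves deallocate after cleanup. A minimal sketch of that check (the helper type is hypothetical, not a RealityKit API):

```swift
import RealityKit

/// Hypothetical helper: hold only weak references to loaded entities
/// so we can see whether anything in the app still retains them.
final class EntityLifetimeTracker {
    private struct WeakBox { weak var entity: Entity? }
    private var tracked: [String: WeakBox] = [:]

    func track(_ entity: Entity, name: String) {
        tracked[name] = WeakBox(entity: entity)
    }

    /// Call after cleanup; any non-nil entry is still retained by the app.
    func reportLiveEntities() {
        for (name, box) in tracked where box.entity != nil {
            print("Still alive after cleanup: \(name)")
        }
    }
}
```

If every tracked entity reports nil after clearAllRoomEntities() runs, the remaining IOSurface footprint is being held below the app level rather than by a retain cycle.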
Replies: 3 · Boosts: 0 · Views: 1.1k · Activity: 3w
WorldTrackingProvider stops working on device
After re-launching the immersive space in my app 5-10 times, the WorldTrackingProvider stops working. Only restarting the app will allow it to start working again. Only on device, not the simulator. I get these errors when it happens:

```
The device_anchor can only be queried when the world tracking provider is running.
ARPredictorRemoteService <0x107cbb5e0>: Service configured with error: Error Domain=com.apple.arkit.error Code=501 "(null)"
Remote Service was invalidated: <ARPredictorRemoteService: 0x107cbb5e0>, will stop all data_providers.
ARRemoteService: remote object proxy failed with error: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process." UserInfo={NSDebugDescription=The connection to service with pid 81 named com.apple.arkit.service.session was invalidated from this process.}
ARRemoteService: weak self released before invalidation
```

```swift
@Observable
class VisionPro {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func transformMatrix() async -> simd_float4x4 {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        else { return .init() }
        return deviceAnchor.originFromAnchorTransform
    }

    func runArkitSession() async {
        Task {
            try? await session.run([worldTracking])
        }
    }
}
```

which I call from my RealityView:

```swift
.task {
    await visionPro.runArkitSession()
}
```
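A guarded variant for reference — a sketch assuming the provider's state leaves .running once the remote service is invalidated, which the error text suggests:

```swift
import ARKit
import QuartzCore
import simd

extension VisionPro {
    func transformMatrixGuarded() async -> simd_float4x4 {
        // Only query while the provider is actually running; otherwise the
        // "device_anchor can only be queried..." error fires.
        guard worldTracking.state == .running,
              let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
        else { return matrix_identity_float4x4 }
        return deviceAnchor.originFromAnchorTransform
    }

    func runSessionSurfacingErrors() async {
        do {
            // No inner Task, no try? — let run() failures be seen in the log.
            try await session.run([worldTracking])
        } catch {
            print("ARKitSession.run failed: \(error)")
        }
    }
}
```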
Replies: 3 · Boosts: 0 · Views: 405 · Activity: Feb ’25
Loading USDZ asset into Model3D causes visionOS 2.0 beta 5 to crash
We've recently discovered that our app crashes on startup on the latest visionOS 2.0 beta 5 (22N5297g) build. In fact, the entire field of view would dim down and visionOS would then restart, showing the Apple logo. Interestingly, no app crash is reported by Xcode during debug. After investigation, we have isolated the issue to a specific USDZ asset in our app. Loading it in a sample, blank project also causes visionOS to reliably crash, or become extremely unresponsive with rendering artifacts everywhere. This looks like a potentially serious issue. Even if the asset is problematic, loading it should not crash the entire OS. We have filed feedback FB14756285, along with a demo project. Hopefully someone can take a look. Thanks!
Replies: 3 · Boosts: 1 · Views: 539 · Activity: Jul ’25
PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
Hello, I keep running into the warning below when pushing a window of type volumetric. Although the push itself succeeds, we always get the warning, whether we push the window via the Attachment button or via the buttons in the ToolbarItemGroup. Illustrated is all the code: app file, first volume, and second volume. You can see in my app file that all volumetric windows are indeed in a WindowGroup. What is wrong? How can I get rid of that warning?

Warning:
PushWindowAction requires the replaced window to be a WindowGroup or DocumentGroup
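For comparison, here is a minimal sketch of the setup pushWindow appears to expect — both windows declared as WindowGroups with IDs (the IDs here are illustrative):

```swift
import SwiftUI

@main
struct VolumesApp: App {
    var body: some Scene {
        WindowGroup(id: "firstVolume") {
            FirstVolume()
        }
        .windowStyle(.volumetric)

        WindowGroup(id: "secondVolume") {
            Text("Second volume")
        }
        .windowStyle(.volumetric)
    }
}

struct FirstVolume: View {
    @Environment(\.pushWindow) private var pushWindow

    var body: some View {
        Button("Push second volume") {
            // Replaces this window with the second volume.
            pushWindow(id: "secondVolume")
        }
    }
}
```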
Replies: 3 · Boosts: 0 · Views: 586 · Activity: Sep ’24
Reading scenePhase from custom Scene
Hi, I've encountered a thread where an Apple engineer points out that there are 2 possible ways to anchor scenePhase, either App or View implementation: https://developer.apple.com/forums/thread/757429

This thread also links to documentation which states:
"If you read the phase from within a custom Scene instance, the value similarly reflects an aggregation of all the scenes that make up the custom scene"

This doesn't seem to be the case on visionOS 2. I tried the following code, starting from an empty app template:

```swift
import SwiftUI

@main
struct SceneTestApp: App {
    var body: some Scene {
        MyScene()

        WindowGroup(id: "extra") {
            Text("Extra window")
        }
    }
}

struct MyScene: Scene {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        WindowGroup {
            ContentView()
                .onAppear {
                    openWindow(id: "extra")
                }
        }
        .onChange(of: scenePhase) { oldValue, newValue in
            print("scenePhase changed")
        }
    }
}
```

The result was that I didn't get the onChange callback if I only closed the extra window; the callback only came after I closed both windows and the whole app was suspended. Is this expected behavior?
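A view-level cross-check might help isolate whether the aggregation or the propagation is at fault; this sketch reads the phase from inside the window instead of from the custom Scene:

```swift
import SwiftUI

struct ContentView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Main window")
            .onChange(of: scenePhase) { _, newPhase in
                // Fires for this window's phase only, not the aggregate.
                print("View-level scenePhase:", newPhase)
            }
    }
}
```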
Replies: 3 · Boosts: 0 · Views: 358 · Activity: Feb ’25
Template Project Entity Overlapping and Sticking Issues
Hello, There are three issues I am running into with a default template project plus minimal additional code changes:

- the Sphere_Left entity always overlaps the Sphere_Right entity
- when I release the Sphere_Left entity, it does not remain sticking to the Sphere_Right entity
- when I release the Sphere_Left entity, it distances itself from the Sphere_Right entity

When I manipulate the Sphere_Right entity, these three issues do not occur: I get the correct and expected behavior.

These issues are simple to replicate:

1. Create a new project in Xcode
2. Choose visionOS -> App, then click Next
3. Name your project, and leave all other options as defaults: Initial Scene: Window, Immersive Space Renderer: RealityKit, Immersive Space: Mixed, then click Next
4. Save your project anywhere
5. Replace the entire ImmersiveView.swift file with the code below. Run.
6. Try to manipulate the left sphere; you should get the same issues I mentioned above

If you restart the project and manipulate only the right sphere, you should get the correct expected behaviors, and no issues. I am running this on macOS 26, Xcode 26, and visionOS 26, all released lately.

ImmersiveView code:

```swift
//
//  ImmersiveView.swift
//

import OSLog
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    private let logger = Logger(subsystem: "com.testentitiessticktogether", category: "ImmersiveView")
    @State var collisionBeganUnfiltered: EventSubscription?

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add manipulation components
                setupManipulationComponents(in: immersiveContentEntity)

                collisionBeganUnfiltered = content.subscribe(to: CollisionEvents.Began.self) { collisionEvent in
                    Task { @MainActor in
                        handleCollision(entityA: collisionEvent.entityA, entityB: collisionEvent.entityB)
                    }
                }
            }
        }
    }

    private func setupManipulationComponents(in rootEntity: Entity) {
        logger.info("\(#function) \(#line) ")
        let sphereNames = ["Sphere_Left", "Sphere_Right"]
        for name in sphereNames {
            guard let sphere = rootEntity.findEntity(named: name) else {
                logger.error("\(#function) \(#line) Failed to find \(name) entity")
                assertionFailure("Failed to find \(name) entity")
                continue
            }
            ManipulationComponent.configureEntity(sphere)
            var manipulationComponent = ManipulationComponent()
            manipulationComponent.releaseBehavior = .stay
            sphere.components.set(manipulationComponent)
        }
        logger.info("\(#function) \(#line) Successfully set up manipulation components")
    }

    private func handleCollision(entityA: Entity, entityB: Entity) {
        logger.info("\(#function) \(#line) Collision between \(entityA.name) and \(entityB.name)")
        guard entityA !== entityB else { return }
        if entityB.isAncestor(of: entityA) {
            logger.debug("\(#function) \(#line) \(entityA.name) already under \(entityB.name); skipping reparent")
            return
        }
        if entityA.isAncestor(of: entityB) {
            logger.info("\(#function) \(#line) Skip reparent: \(entityA.name) is an ancestor of \(entityB.name)")
            return
        }
        reparentEntities(child: entityA, parent: entityB)
        entityA.components[ParticleEmitterComponent.self]?.burst()
    }

    private func reparentEntities(child: Entity, parent: Entity) {
        let childBounds = child.visualBounds(relativeTo: nil)
        let parentBounds = parent.visualBounds(relativeTo: nil)
        let maxEntityWidth = max(childBounds.extents.x, parentBounds.extents.x)
        let childPosition = child.position(relativeTo: nil)
        let parentPosition = parent.position(relativeTo: nil)
        let currentDistance = distance(childPosition, parentPosition)

        child.setParent(parent, preservingWorldTransform: true)
        logger.info("\(#function) \(#line) Set \(child.name) parent to \(parent.name)")
        child.components.remove(ManipulationComponent.self)
        logger.info("\(#function) \(#line) Removed ManipulationComponent from child \(child.name)")

        if currentDistance > maxEntityWidth {
            let direction = normalize(childPosition - parentPosition)
            let newPosition = parentPosition + direction * maxEntityWidth
            child.setPosition(newPosition - parentPosition, relativeTo: parent)
            logger.info("\(#function) \(#line) Adjusted position: distance was \(currentDistance), now \(maxEntityWidth)")
        }
    }
}

fileprivate extension Entity {
    func isAncestor(of other: Entity) -> Bool {
        var current: Entity? = other.parent
        while let node = current {
            if node === self { return true }
            current = node.parent
        }
        return false
    }
}

#Preview(immersionStyle: .mixed) {
    ImmersiveView()
        .environment(AppModel())
}
```
Replies: 3 · Boosts: 0 · Views: 96 · Activity: 6d
Video Memory Leak when Backgrounding
While trying to control the following two scenes in one ImmersiveSpace, we found the following memory leak when we background the app while a stereoscopic video is playing.

ImmersiveView's two scenes:
- Scene 1 has one toggle button
- Scene 2 has the same toggle button, plus a 180-degree skysphere playing a stereoscopic video

Attached are the files and images of the memory leak as captured in Xcode. To replicate this memory leak, follow these steps:

1. Create a new visionOS app using the Xcode template as illustrated below.
2. Configure the project to launch directly into an immersive space (set Preferred Default Scene Session Role to Immersive Space Application Session Role in Info.plist).
3. Replace all Swift files with those you will find in the attached texts.
4. In ImmersiveView, replace the stereoscopic video to play with a large 3D 180-degree video of your own, bundled in your project.
5. Launch the app in debug mode via Xcode onto the AVP device or simulator.
6. Display the memory use by pressing Command+7 and selecting Memory in order to view the live memory graph.
7. Press the first immersive space's button "Open ImmersiveView".
8. Press the second immersive space's button "Show Immersive Video".
9. Background the app.
10. When the app tray appears, foreground the app by selecting it.
11. The first immersive space should appear.
12. Repeat steps 7, 8, 9, and 10 multiple times.
13. Observe the memory use going up; the graph should look similar to the illustration below.

In ImmersiveView, upon backgrounding the app, I do:
- a reset method to clear the video's memory
- a dismiss of the Immersive Space containing the video (even though upon execution, visionOS raises the purple warning "Unable to dismiss an Immersive Space since none is opened". It appears visionOS dismisses any ImmersiveSpace upon backgrounding, which makes sense.)

Am I not releasing the memory correctly? Or is there really a memory leak issue in either SwiftUI's ImmersiveSpace or in AVFoundation's AVPlayer upon backgrounding of an app?

Attached files:
- App file: TestVideoLeakOneImmersiveView
- First ImmersiveSpace file: InitialImmersiveView
- Second ImmersiveSpace file: ImmersiveView
- Skysphere Model file: Immersive180VideoViewModel
- File: AppModel
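For clarity, the reset method amounts to something like this sketch (names are illustrative, not the exact attached code):

```swift
import AVFoundation
import RealityKit

final class Immersive180VideoViewModel {
    let player = AVPlayer()
    var skySphereEntity: Entity?

    // Illustrative teardown on backgrounding: detach the player item so
    // AVPlayer can drop its decoded buffers, and release the skysphere.
    func resetVideoResources() {
        player.pause()
        player.replaceCurrentItem(with: nil)
        skySphereEntity?.removeFromParent()
        skySphereEntity = nil
    }
}
```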
Replies: 3 · Boosts: 0 · Views: 693 · Activity: Oct ’24
How to set the AttractionCenter for a ParticleEmitterComponent in a System with real time updates
I am trying to achieve an effect such that the particles of a particle system are attracted to my hand entity. The hand entity is essentially an AnchorEntity that is tracking my right hand.

```swift
let particleEmitterEntities = context.entities(matching: particleEmitterQuery, updatingSystemWhen: .rendering)
for particleEmitterEntity in particleEmitterEntities {
    if var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] {
        // trying to get the world-space position of the hand
        // I also tried relative to particleEmitterEntity
        particleEmitter.mainEmitter.attractionCenter = rightHandEntity.position(relativeTo: nil)
        particleEmitterEntity.components[ParticleEmitterComponent.self] = particleEmitter
    } else {
        fatalError("Cannot find particle emitter")
    }
}
```

The particle attraction center doesn't seem to update.

Another issue I am noticing: my particle system often doesn't show the particle image and just renders a placeholder square when I do this; when I comment this code out, I get the right particle image. I believe this is due to the number of times this loop runs to update the position of the attraction center.

What is the right way to do an effect where the particles are attracted to my hand?
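One unverified variant — a sketch assuming attractionCenter is interpreted in the emitter entity's local space rather than world space, so the hand position needs converting first:

```swift
let handWorldPosition = rightHandEntity.position(relativeTo: nil)
for particleEmitterEntity in particleEmitterEntities {
    guard var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] else { continue }
    // Convert the world-space hand position into the emitter's local space
    // before assigning it as the attraction center.
    particleEmitter.mainEmitter.attractionCenter =
        particleEmitterEntity.convert(position: handWorldPosition, from: nil)
    particleEmitterEntity.components[ParticleEmitterComponent.self] = particleEmitter
}
```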
Replies: 3 · Boosts: 0 · Views: 533 · Activity: Oct ’24
MainActor attribute on RealityKit APIs is causing problems
Hello, A lot of the RealityKit APIs (e.g. LowLevelMesh, LowLevelTexture, etc.) are marked with MainActor, so they need to be accessed on the main thread. This creates issues when we need to perform expensive GPU-related operations, since now we need to perform those on the main thread. This results in bottlenecks and hangs in our application. We would like to use a multi-threaded approach to solve these problems, which is difficult to do here. We are constantly streaming data, whether the app is just appearing or the user is interacting with our application, so we need to be able to perform these operations on a separate thread. Any advice on how to achieve this using RealityKit? Thank you.
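A minimal sketch of one commonly suggested pattern, for reference: prepare the data off the main actor, then hop on only for the brief MainActor-isolated RealityKit calls (the triangle here is placeholder data):

```swift
import RealityKit

// Heavy CPU-side preparation runs off the main actor; only the short
// MainActor-isolated RealityKit mutation runs on the main thread.
func regenerateMesh(for entity: ModelEntity) async throws {
    // Off-main: compute raw geometry using plain Sendable value types.
    let (positions, indices) = await Task.detached(priority: .userInitiated) { () -> ([SIMD3<Float>], [UInt32]) in
        // Placeholder for the expensive per-frame generation work.
        ([[0, 0, 0], [0, 1, 0], [1, 0, 0]], [0, 1, 2])
    }.value

    // On-main: only the brief RealityKit calls.
    try await MainActor.run {
        var descriptor = MeshDescriptor(name: "streamed")
        descriptor.positions = MeshBuffers.Positions(positions)
        descriptor.primitives = .triangles(indices)
        entity.model?.mesh = try MeshResource.generate(from: [descriptor])
    }
}
```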
Replies: 3 · Boosts: 8 · Views: 192 · Activity: Mar ’25
Volumetric window not sharing in SharePlay session for VisionOS
I've been struggling with this for far too long so I've decided to finally come here and see if anyone can point me to the documentation that I'm missing. I'm sure it's something so simple but I just can't figure it out. I can SharePlay our test app with my brother (device to device) but when I open a volumetric window, it says "not shared" under it. I assume this will likely fix the video sharing problem we have as well. Everything else works so smooth but SharePlay has just been such a struggle for me. It's the last piece to the puzzle before we can put it on the App Store.
Replies: 2 · Boosts: 0 · Views: 100 · Activity: 2w
Inserted image not showing up on tab bar on visionOS
Images are not appearing in the tab bar on visionOS, even though they show up perfectly on iOS. I tried the rendering mode API to make the original image visible, and it works fine on iOS. But on visionOS the image stays white, as if masked by the tab bar's default content color. Has anyone solved this problem? I might be able to create a custom ornament to make it look like a tab bar, but I think it's too much coding to do so.
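For reference, a sketch of that custom-ornament fallback (image names and tab content are illustrative):

```swift
import SwiftUI

// A leading-edge ornament standing in for a tab bar, so the images
// render with their original colors instead of the tab bar tint.
struct RootView: View {
    @State private var selection = 0

    var body: some View {
        Group {
            if selection == 0 { Text("Home") } else { Text("Search") }
        }
        .ornament(attachmentAnchor: .scene(.bottom)) {
            HStack(spacing: 12) {
                Button { selection = 0 } label: {
                    Image("homeIcon").renderingMode(.original)
                }
                Button { selection = 1 } label: {
                    Image("searchIcon").renderingMode(.original)
                }
            }
            .padding()
            .glassBackgroundEffect()
        }
    }
}
```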
Replies: 2 · Boosts: 0 · Views: 486 · Activity: Sep ’24
Can visionOS windows be smarter about following users?
I haven't found a way to programmatically position the main view in visionOS apps, which seems intentional. While this aligns with user-controlled window placement, it creates a challenge: as users move, they must constantly reposition the main window manually. A potential solution could be a feature that quickly brings the window to the user, perhaps via a custom gesture. This might improve user experience significantly. Given my current understanding of visionOS, I may be missing something. I'd appreciate any insights or alternative perspectives on this issue. Thoughts?
Replies: 2 · Boosts: 0 · Views: 556 · Activity: Nov ’24
visionOS RealityKit's physics simulation stops for certain entity.scale values
The setup:
My Vision Pro app loads USDZ models created by a 3rd-party app. These models have to be scaled to the right dimension for RealityKit. I want to use physics for realistic movement of these entities, but this does not work as expected. I thus wrote a demo test app based on Apple's immersive space app template (code below). This app has two entities: a board, and above it a box that is a child of the board. Both have a collision component and a physics body. The board, and thus also its child the box, are scaled.

The problem:
If the scale factor is greater than or equal to 0.91, the box falls under gravity towards the board, where it is stopped after some movement. This looks realistic. However, if the scale factor is below 0.91, even 0.9, the movement of the box is immediately stopped on the board without any physics simulation. Unfortunately, I am not able to upload screen recordings, but if the demo app is executed, one sees the effect immediately.

The question:
I cannot imagine that this behavior is correct. Can somebody confirm that this is a bug? If so, I will write a bug report. My demo app uses Xcode Version 16.0 (16A242d), simulator Version 16.0 (1038), and visionOS 2.0.

The code:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            let boardEntity = makeBoard()
            content.add(boardEntity)

            let boxEntity = makeBox()
            boardEntity.addChild(boxEntity)
            moveBox(boxEntity, parentEntity: boardEntity)
        }
    }

    func makeBoard() -> ModelEntity {
        let mesh = MeshResource.generateBox(width: 1.0, height: 0.05, depth: 1.0)
        var material = UnlitMaterial()
        material.color.tint = .red
        let boardEntity = ModelEntity(mesh: mesh, materials: [material])
        let scale: Float = 0.91 // Physics does not run if scale < 0.91
        boardEntity.scale = [scale, scale, scale]
        boardEntity.transform.translation = [0, 0, -3]
        boardEntity.generateCollisionShapes(recursive: false)
        boardEntity.physicsBody = PhysicsBodyComponent(
            massProperties: .default,
            material: PhysicsMaterialResource.generate(friction: .infinity, restitution: 0.8),
            mode: .static)
        return boardEntity
    }

    func makeBox() -> ModelEntity {
        let mesh = MeshResource.generateBox(size: 0.2)
        var material = UnlitMaterial()
        material.color.tint = .green
        let boxEntity = ModelEntity(mesh: mesh, materials: [material])
        boxEntity.generateCollisionShapes(recursive: false)
        boxEntity.physicsBody = PhysicsBodyComponent(
            massProperties: .default,
            material: PhysicsMaterialResource.generate(friction: .infinity, restitution: 0.8),
            mode: .dynamic)
        return boxEntity
    }

    func moveBox(_ boxEntity: Entity, parentEntity: Entity) {
        // Set position and orientation of the box
        let translation = SIMD3<Float>(0, 0.5, 0)
        // Turn the box by 45 degrees around the y axis
        let rotationY = simd_quatf(angle: Float(45.0 * .pi/180.0), axis: SIMD3(x: 0, y: 1, z: 0))
        let transform = Transform(rotation: rotationY, translation: translation)
        boxEntity.transform = transform
    }
}
```
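A possible workaround sketch, assuming the cutoff is tied to scaling the physics hierarchy: bake the scale into the generated geometry so the physics entities keep an identity scale (this does not help with pre-built USDZ models, though):

```swift
import SwiftUI
import RealityKit

// Workaround sketch: generate the board at its final size instead of
// scaling the entity tree, so physics never sees a scaled hierarchy.
func makeBoard(scale: Float) -> ModelEntity {
    let mesh = MeshResource.generateBox(width: 1.0 * scale, height: 0.05 * scale, depth: 1.0 * scale)
    var material = UnlitMaterial()
    material.color.tint = .red
    let boardEntity = ModelEntity(mesh: mesh, materials: [material])
    boardEntity.transform.translation = [0, 0, -3]
    boardEntity.generateCollisionShapes(recursive: false)
    boardEntity.physicsBody = PhysicsBodyComponent(
        massProperties: .default,
        material: PhysicsMaterialResource.generate(friction: .infinity, restitution: 0.8),
        mode: .static)
    return boardEntity
}
```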
Replies: 2 · Boosts: 0 · Views: 493 · Activity: Oct ’24
QuickLook and .heic
I'm trying to create a simple app for my students that will display .heic images taken with a Nikon and then converted to .heic in the Photos app. My attempts only result in the QuickLook viewer showing the images in 2D. Any guidance? Here is my ContentView:

```swift
import SwiftUI
import QuickLook

struct ContentView: View {
    @State private var showQuickLook = false
    @State private var previewURL: URL? = nil // State to store the URL for Quick Look

    var body: some View {
        VStack {
            Button("See it in 3D") {
                // Set the URL for the file from the bundle and toggle Quick Look presentation
                if let imageURL = Bundle.main.url(forResource: "Michelia_fuego", withExtension: "heic") {
                    previewURL = imageURL // Set the preview URL if the image is found
                    showQuickLook.toggle() // Toggle to trigger Quick Look presentation
                } else {
                    print("File not found") // Print error if the file is missing
                }
            }
            .quickLookPreview($previewURL) // Binding to the URL
        }
    }
}

#Preview {
    ContentView()
}
```
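One diagnostic worth running first — a sketch based on my assumption that Quick Look falls back to 2D when the HEIC contains a single image rather than a stereo pair:

```swift
import Foundation
import ImageIO

// Counts the images inside the HEIC container. A spatial (stereo)
// photo should report more than one; a flat photo reports one.
func imageCount(at url: URL) -> Int {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else { return 0 }
    return CGImageSourceGetCount(source)
}

// Usage: print(imageCount(at: imageURL))
```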
Replies: 2 · Boosts: 0 · Views: 711 · Activity: Oct ’24
Vision pro not pairing Macbook with pro
I'm having trouble pairing my Apple Vision Pro to my MacBook Pro M3. My MacBook Pro is on Sonoma 14.6, and I have tested pairing a visionOS 1.2 and a 2.0 Vision Pro, but it still doesn't work. I have a Mac mini that pairs and connects fine to the headsets. These are the steps I have tried so far on the Vision Pro and MacBook Pro to pair them, with no success:

- On the same Windows Wi-Fi hotspot
- On the same iPhone hotspot
- On another Wi-Fi hotspot
- Tried to clear remote devices; still not recognized
- Tried to turn developer mode off and on; still nothing
- Tried to reset network parameters
- Tried to restart the headset
- Tried to restart Xcode
- Tried to restart the Mac (just after restart, the headset showed up and I clicked pair and typed in the code, but the headset remained "disconnected" and couldn't connect to the Mac)
- Tried to restart the Mac and the headset
- Tried to rename the headset
- Tried to switch Macs
- Tried one headset on at a time
- Tried to clean the build folder
- Deleted the contents of ~/Library/Developer/Xcode/DerivedData
- Tried sudo defaults write "/Library/Preferences/com.apple.mDNSResponder.plist NoMulticastAdvertisements" -bool true
- Tried to deactivate the firewall
Replies: 2 · Boosts: 0 · Views: 635 · Activity: Oct ’24
Ground Shadows pass through objects
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below. This is a screen capture in Reality Composer Pro, but the same thing happens on the Vision Pro. Is there any way to stop this behavior and just have shadows on the first object below the object that is casting the shadows?
Replies: 2 · Boosts: 0 · Views: 504 · Activity: Dec ’24
How to search location in global rather than in local?
I'm building a weather app where users can search locations to get the weather, but the problem is that the results only show locations in my country, not globally. For example, I'm in China and I can't search for New York; it just shows nothing. Here's my code:

```swift
@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func seachLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }
}
```

Also, I've tried to set

```swift
searchCompleter.region = MKCoordinateRegion(
    center: CLLocationCoordinate2D(latitude: 0, longitude: 0),
    span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360)
)
```

but it doesn't work.
Replies: 2 · Boosts: 0 · Views: 753 · Activity: Dec ’24
CompositorServices Or RealityKit
I have been concentrating on developing a visionOS application. While I am currently quite familiar with RealityKit, CompositorServices has also captured my attention, though I have not yet learned it. Could you please clarify whether it is essential for me to learn CompositorServices? Additionally, I would appreciate insights into the respective advantages of RealityKit and CompositorServices.
Replies: 2 · Boosts: 0 · Views: 672 · Activity: Mar ’25
visionOS Simulator Rotate and Scale gestures difficult to register (capture)
We were having an issue where the system rotate and scale gestures (two-handed gestures / RotateGesture3D and MagnifyGesture) were extremely difficult to register (make work) in the visionOS simulator. The solution we found was to:

1. Launch your app in the simulator.
2. Move the pointer on top of the 3D object for which you are testing rotation and scaling gestures.
3. Press and hold the Option key to display touch points (i.e., the two-handed gesture points).
4. While keeping the Option key pressed, release the pointer and re-enable it again. I am using a trackpad with tap-to-click enabled and three-finger drag enabled in Accessibility, so "release the pointer and re-enable it again" translates simply to lifting the three fingers and placing them again on the trackpad.

If you have kept the Option key pressed, you should now be able to rotate and scale the 3D object.

Context, if you are interested: Our issue was also occurring in Apple's own sample project relating to gestures, "Transforming RealityKit entities using gestures", at the link below. In Apple's article "Interacting with your app in the visionOS simulator" (also linked below), for two-handed gestures it states: "Press and hold the Option key to display touch points. Move the pointer while pressing the Option key to change the distance between the touch points. Move the pointer and hold the Shift and Option keys to reposition the touch points." This simply did not work anymore for rotation and scaling gestures. These gestures used to be a lot more responsive in Sonoma. Either the article should be updated to what I described above, or there is an issue.

Our colleague who is using macOS Sonoma 14.6.1 with the latest release of Xcode is not having these issues. Here is the list of configurations (troubleshooting we tried!) where it is difficult to achieve rotation and scaling gestures in the visionOS simulator:

- macOS Sequoia 15.1 Beta, Xcode 16.1 RC w/ visionOS 2.1
- macOS Sequoia 15.1 Beta, Xcode 16.1 RC w/ visionOS 2.0
- macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 w/ visionOS 2.1
- macOS Sequoia 15.1 Beta, Xcode 16.2 Beta 1 w/ visionOS 2.0
- macOS Sequoia 15.1 Beta, removed all Xcodes and installed the build from the App Store (Xcode 16.1)
- macOS Sequoia 15.1 Beta, Xcode 16.0 w/ visionOS 2.0
- Completely wiped and reset the entire development machine, re-installed the latest releases of Sequoia (15.1) and Xcode (16.1)

Throughout this troubleshooting I often:

- restarted both Xcode and the simulator
- erased all derived data
- erased all contents and settings from simulators
- performed fresh git clones

None of the above worked; only the workaround described above works at the moment. As you can maybe deduce, it was very time-consuming to find the workaround, and we also wasted some development effort thinking our gesture code was no good. Hopefully this will help other devs.

Article link: https://developer.apple.com/documentation/xcode/interacting-with-your-app-in-the-visionos-simulator
Gesture sample project link: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures
Replies: 2 · Boosts: 0 · Views: 826 · Activity: Nov ’24