Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

All subtopics

Post

Replies

Boosts

Views

Activity

visionOS: How to detect if the user closes a window vs. the window going into the background
For context, we have a fully immersive application running on visionOS. The application starts with a standard view/menu, and when the user clicks an option, it takes them into fully immersive mode with a small floating toolbar window that can be moved and interacted with while moving around the virtual space. When the user clicks the small x button below the window, we intercept the scenePhase .background event and handle exiting immersive mode and displaying the main menu. This all works fine. The problem happens if the user turns around and doesn't look at the floating window for a couple of minutes. The system decides that the window should go into the background, and the same scenePhase .background event fires, causing the app to exit immersive mode without warning. There seems to be no way of preventing this, and no distinction between this and the user clicking the close button. Is there a reliable way to detect whether the user intentionally clicked the close button versus the window going into the background through lack of use? onDisappear doesn't trigger. Thanks in advance.
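For reference, a minimal sketch of the setup described above, assuming a SwiftUI scenePhase observer on the toolbar window; the view name and the dismissal call are placeholders, and the sketch only illustrates why both paths currently look identical:

import SwiftUI

struct ToolbarWindow: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        Text("Floating toolbar")  // placeholder content
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    // Reached both when the user taps the close button and when the
                    // system backgrounds the unviewed window; there is no built-in
                    // flag here to tell the two apart.
                    Task { await dismissImmersiveSpace() }
                }
            }
    }
}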
1
0
59
15h
Loading a USDZ file takes almost 30s
I'm having an issue loading a 1.2 GB USDZ file on visionOS. Here are the details:

The file is downloaded via a backend API into the document directory:

FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)

Loading the asset takes almost 30 seconds:

Loaded usd((extension in RealityFoundation):RealityKit.Entity.LoadStatistics.USDLoader.rio) in 29.24642503261566 seconds

Asset-loading code:

let model = try await Entity(contentsOf: assetUrl)

The USDZ file was exported from Reality Composer Pro. Did I make a mistake in this flow, or is there another approach to decrease the loading time?
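A minimal sketch of one mitigation, assuming the same Entity(contentsOf:) API used above: start the load as soon as the download finishes and keep the result, so the cost is paid once in the background rather than at presentation time. The class and property names are illustrative:

import RealityKit

final class PreloadedModel {
    private(set) var entity: Entity?

    // Call right after the download completes, before the user navigates to the scene.
    func preload(from assetUrl: URL) async {
        do {
            let start = Date()
            entity = try await Entity(contentsOf: assetUrl)   // same API as in the post
            print("Preloaded in \(Date().timeIntervalSince(start)) s")
        } catch {
            print("Preload failed: \(error)")
        }
    }
}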
1
0
61
17h
visionOS 2 dev beta: Control Center gesture
Hello, I've just updated my Vision Pro to the newest and greatest 2.0, and I see that the way to call up Control Center has changed to a hand gesture, which I assume is powered by computer vision using the cameras. My use case is watching Apple TV shows at night, when my girlfriend would like the lights turned off, and that is exactly when it fails. The new method is great and the responsiveness is impressively good, but I would like this to be a toggle so that we can choose our own method: for example, during the day when there is enough light we use the new gesture-recognition method, and in low light we can switch back to the look-up gesture. Or, as a fellow programmer who is just learning, I think it would be possible to make the toggle automatic, switching whenever the lighting conditions prevent the hand gesture from working. Hope to see that fixed :) Cheers
0
0
66
20h
Attachment-like functionality in a Metal immersive space
I have an immersive space that is rendered using Metal. Is there a way to position SwiftUI views at coordinates relative to positions in my immersive space?

I know that I can display a volume with RealityKit content alongside my Metal content, but the volume's coordinate system (specifically, its bounds) does not coincide with my entire Metal scene.

One approach I thought of would be to open two views in my immersive space. That way, I could simply add Attachments to invisible RealityKit entities in one view at positions where I have content in my Metal scene. Unfortunately, while I can declare an ImmersiveSpace composed of multiple RealityViews,

ImmersiveSpace {
    RealityView { content in
        // load first view
    } update: { content in
        // update
    }
    RealityView { content in
        // load second view
    } update: { content in
        // update
    }
}

which results in two coinciding RealityKit views in the immersive space, I cannot do something like this:

ImmersiveSpace {
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        // set up my metal renderer and stuff
    }
    RealityView { content in
        // set up a view where I could use attachments
    } update: { content in
    }
}
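For comparison, a minimal sketch of the single-RealityView attachments pattern being referenced, assuming the visionOS RealityView attachments API; the entity position and attachment id are arbitrary placeholders:

import SwiftUI
import RealityKit

struct AttachmentDemo: View {
    var body: some View {
        RealityView { content, attachments in
            let holder = Entity()              // invisible placeholder entity
            holder.position = [0, 1.5, -2]     // assumed position, in meters
            content.add(holder)

            if let label = attachments.entity(for: "label") {
                holder.addChild(label)         // the SwiftUI view rides on the entity
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Hello from SwiftUI")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}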
2
0
96
5d
visionOS 2 - Passthrough in screen capture
Hello, I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro. I have applied for and obtained the Enterprise API, and I can actually stream via the "Main camera access" API, as shown in https://developer.apple.com/videos/play/wwdc2024/10139/. My problem is that I have not found any reference for how to integrate the "Passthrough in screen capture" API into my project. Have any of you been able to do this? Thank you
6
0
207
4d
Apply a SpriteKit scene as a material in Reality Composer Pro
Hello, I'm trying to move my app to visionOS. My app is used by pilots to study airplane systems; it is a 3D airplane cockpit built with SceneKit, and I use SpriteKit scenes to animate the cockpit instruments. SceneKit allows applying a SpriteKit scene as a material, so I could easily animate all the different instruments and indications there, but I can't find this option in Reality Composer Pro. Is this possible? Any suggestions I could look into for animating and simulating instruments?
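For context, a small sketch of the SceneKit behavior described above, where an animated SpriteKit scene is assigned directly as a material's contents; the scene size and node are placeholders, and this is the capability being asked about for Reality Composer Pro:

import SceneKit
import SpriteKit

func applyInstrumentScene(to node: SCNNode) {
    // Hypothetical animated instrument scene drawn with SpriteKit.
    let gaugeScene = SKScene(size: CGSize(width: 512, height: 512))
    gaugeScene.backgroundColor = .black

    let material = SCNMaterial()
    material.diffuse.contents = gaugeScene     // SceneKit accepts an SKScene as texture content
    node.geometry?.materials = [material]
}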
0
0
76
1d
Xbox controller and visionOS 2
I am having problems getting button input from an Xbox game controller. I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView, following the instructions from the WWDC session "Explore game input in visionOS". The game controller notification picks up the controller and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement:

buttonA.valueChangedHandler = { button, value, pressed in
    os_log("Got valueChangedHandler")
}

On the RealityView, I have the modifier:

RealityView { content in
    // stuff
}
.handlesGameControllerEvents(matching: .gamepad)

But I never see the log message appear in the console when I press the 'A' button (or any other button). Any ideas what I might be doing wrong? The Xbox controller is pretty old; Settings reports it as version 9.0.3.
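A minimal sketch of how the handler wiring described above might look, assuming the GameController framework's connect notification and the extended gamepad profile; this is only a reconstruction of the described setup, not a confirmed fix:

import GameController
import os

func observeControllers() {
    NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { notification in
        guard let controller = notification.object as? GCController,
              let gamepad = controller.extendedGamepad else { return }

        // Same handler as in the post, attached via the extended gamepad profile.
        gamepad.buttonA.valueChangedHandler = { _, _, _ in
            os_log("Got valueChangedHandler")
        }
    }
}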
1
0
89
2d
Trouble with Core ML Object Tracking for Spherical Objects Using WWDC Sample Code and Object Capture
Hi everyone, I'm working with Core ML for object tracking and have successfully trained a couple of models. However, when I try to use the reference object file in the object tracking sample code from the WWDC video, it doesn't work. I'm training the model on a single-color plastic spherical object. Could this be the reason for the issue? I also attempted to use USDZ 3D assets that resemble the real object. Do these need to be trained with the Object Capture app to work properly? Speaking of the Object Capture app, my experience hasn't been great. When I scanned my spherical object, the result was far from a sphere—it looked more like a mushy dough. Any insights or suggestions would be greatly appreciated!
0
0
88
2d
Add a modifier to a single model in the Reality Composer Pro scene
We can add many models to a Reality Composer Pro scene, but when I use RealityView to display the scene and add modifiers in SwiftUI, the modifiers take effect on the whole scene, which is not what I want. I would like a modifier to apply to only a single model in the Reality Composer Pro scene. How can I add modifiers to a single model in a Reality Composer Pro scene?
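One possible direction, sketched under the assumption that gesture-style modifiers are the goal: gestures can be targeted to entities rather than to the whole RealityView, and .targetedToEntity(_:) restricts a gesture to one specific model. The view below is illustrative only:

import SwiftUI
import RealityKit

struct SingleModelGestureView: View {
    var body: some View {
        RealityView { content in
            // Load the Reality Composer Pro scene here (omitted). The model that should
            // react to the gesture needs an InputTargetComponent and a CollisionComponent.
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()   // use .targetedToEntity(someEntity) to limit to one model
                .onEnded { value in
                    print("Tapped \(value.entity.name)")
                }
        )
    }
}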
0
0
72
2d
Potential problem when projecting massive 3D entity
Hi fellows, I am developing a conceptual prototype to explore how Apple Vision Pro can be applied to real-estate selling, but I have encountered a problem and would like to consult the developer community. When rendering a massive 3D entity, sometimes it works fine, but sometimes my Vision Pro automatically reboots (all running apps shut down and the Apple logo appears) while running the app. I have a 1:1 3D entity imported into Reality Composer Pro, and I managed to develop a simple Vision Pro app to render it. It works great once loaded, and I can see the 1:1 building in front of me. You can see the full demo video for details: https://www.icloud.com/iclouddrive/0d5QA-sYSehmLF9rEMXsCUyDg#REPtt02_Demo_v1.1 But sometimes when rendering the 1:1 building, my Vision Pro suddenly reboots. So far I have no clues about the root cause. Could it be caused by running out of memory, or by a safety concern (since the 3D entity is big and might block a large area of the view)? Can anyone, or even someone official, provide some guidance here and help identify the root cause? Thanks much!
0
0
100
3d
Error Loading USDZ File in Vision Pro Application
Hi everyone, I'm working on a Vision Pro application and encountering an issue while trying to load a USDZ file. Here are the details:

File path: /Users/siddharthpatel/Library/Developer/CoreSimulator/Devices/31F10013-50B6-4CEF-9388-9094087FAEBF/data/Containers/Data/Application/EB260F0A-A84F-4E95-876D-08199D2A4998/Documents/hive1.usdz

Code:

do {
    modelEntityForCollider = try await ModelEntity(contentsOf: fileURL!)
} catch {
    print("Error loading model: \(error)")
}

Error:

Thread 1: Fatal error: Failed to import entity from "/Users/siddharthpatel/Library/Developer/CoreSimulator/Devices/31F10013-50B6-4CEF-9388-9094087FAEBF/data/Containers/Data/ ... ve1.usdz"

I've verified that the file path is correct and the USDZ file exists at the specified location. What could be causing this error and how can I resolve it? Thanks in advance for your help! Siddharth
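A small defensive sketch around the same ModelEntity(contentsOf:) call, assuming the goal is to distinguish a missing file from an import failure; the function name is illustrative:

import Foundation
import RealityKit

func loadCollider(from fileURL: URL?) async -> ModelEntity? {
    // Verify the file is actually present before attempting the import,
    // and avoid force-unwrapping so the failure mode is visible.
    guard let fileURL, FileManager.default.fileExists(atPath: fileURL.path) else {
        print("USDZ file not found at \(fileURL?.path ?? "nil URL")")
        return nil
    }
    do {
        return try await ModelEntity(contentsOf: fileURL)
    } catch {
        print("Error loading model: \(error)")   // import error, not a path problem
        return nil
    }
}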
1
0
88
1w
How to ship SDK RealityKit entity components that can be used and applied within a customer's application?
We deliver an SDK that enables rich spatial computing experiences. We want to enable our customers to develop apps using Swift or Reality Composer Pro. Reality Composer Pro allows the creation of custom components from the Add Component button in the inspector panel; those source files are dropped into the Reality Composer Pro package and directory. We would like our customers to be able to import our SDK components into their application's Reality Composer Pro package and have our components be visible and applicable by the customer in their scene compositions. How can we achieve this? We believe this would lead to a rich ecosystem of component extensions for Reality Composer Pro.
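A sketch of the code side only, under the assumption that the SDK ships as a Swift package: a custom component is declared as a Component and registered at launch. Whether Reality Composer Pro can surface a component shipped this way in its Add Component UI is the open question; the component and SDK names are hypothetical:

import RealityKit

// Hypothetical SDK-provided component.
public struct SpatialAudioZoneComponent: Component, Codable {
    public var radius: Float = 1.0
    public init() {}
}

public enum MySDK {
    /// Call once early in the host app, e.g. in the App initializer,
    /// so RealityKit can decode the component from scene files.
    public static func registerComponents() {
        SpatialAudioZoneComponent.registerComponent()
    }
}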
4
0
213
2w
SharePlay Button
I learned SharePlay from the WWDC video. I understand the creation of seats, but I can't follow some of the subsequent content, so I hope you can help me. The content is as follows: I have set up the seats:

struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}

In one of my previous posts I asked for a SharePlay button that, after pressing it, assigns all users in the FaceTime call to a seat defined in TeamSelectionTemplate. Someone replied and asked me to try

systemCoordinator.configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())

However, Xcode reports the error: Cannot find 'systemCoordinator' in scope. How can I solve this? Thank you!
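A sketch of where systemCoordinator typically comes from, assuming the GroupActivities flow shown in the WWDC session: it is a property of the active GroupSession rather than a global, which is why the bare name is not found in scope. MyActivity is a hypothetical GroupActivity type:

import GroupActivities

struct MyActivity: GroupActivity {                 // hypothetical activity type
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Team Selection"
        return meta
    }
}

func configureSpatialTemplate(for session: GroupSession<MyActivity>) async {
    // The coordinator is obtained from the session.
    guard let systemCoordinator = await session.systemCoordinator else { return }

    var configuration = SystemCoordinator.Configuration()
    configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
    systemCoordinator.configuration = configuration

    session.join()
}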
1
0
125
5d
RealityKit's ARView raycast returns nothing
Hello, I have rendered a USDZ file using SceneKit's .write() method on the displayed scene. When I load it into another RealityKit ARView using the camera's .nonAR mode, I try to use the view's raycast(from:allowing:alignment:) method to get coordinates on the model. I applied collision components when loading the model, using .generateCollisionShapes(), so that I can interact with the model entity. However, the raycast result returns nothing. Is there something I am missing for it to work? Thanks!
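A sketch under one assumption: raycast(from:allowing:alignment:) queries real-world surfaces through the AR session, so in .nonAR mode there may be nothing for it to hit, whereas collision shapes on loaded entities are queried with entity-level hit testing. The helper below is illustrative:

import RealityKit
import UIKit

func entityHit(in arView: ARView, at screenPoint: CGPoint) -> Entity? {
    // Requires the model to carry CollisionComponents,
    // e.g. via generateCollisionShapes(recursive: true).
    let hits = arView.hitTest(screenPoint)   // returns CollisionCastHit results
    return hits.first?.entity
}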
1
0
92
4d