Discuss Spatial Computing on Apple Platforms.

Post · Replies · Boosts · Views · Activity

visionOS 2 dev beta: invoking Control Center
Hello, I've just updated my Vision Pro to the newest and greatest 2.0, and I see that the way to bring up Control Center has changed to the hand gesture, which I assume is powered by computer vision using the cameras. My use case is watching Apple TV shows at night, when my girlfriend likes to have the lights turned off, and that is exactly when it fails. The new method is great and the responsiveness is impressively good, but I would like this to be a toggle so we can pick our own method: for example, during the day when there is enough light we use the new gesture-recognition way, and in low light we switch back to the look-up gesture. Or, as a fellow programmer who is just learning, I think it could even toggle automatically whenever the lighting conditions keep the hand gesture from working. Hope to see that addressed :) cheers
Replies: 0 · Boosts: 0 · Views: 23 · Activity: 3h
Xbox controller and visionOS 2
I am having problems getting button input from an Xbox game controller. I have the visionOS 2 beta on my Apple Vision Pro, and I am trying to use an Xbox game controller with a RealityView, following the instructions from the WWDC session "Explore game input in visionOS". The game controller connection notification is picking up the controller and finds GCInputButtonA, and I am setting closures for touchedChangedHandler, pressedChangedHandler, and valueChangedHandler that just print an os_log statement:

buttonA.valueChangedHandler = { button, value, pressed in
    os_log("Got valueChangedHandler")
}

At the end of the RealityView, I have the modifier:

RealityView { content in
    // stuff
}
.handlesGameControllerEvents(matching: .gamepad)

But I never see the log message appear in the console when I press the 'A' button (or any other button). Any ideas what I might be doing wrong? The Xbox controller is pretty old; Settings reports it as version 9.0.3.
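For reference, a minimal sketch of the notification-based wiring described above, assuming the scene contents don't matter; ControllerTestView, the Logger subsystem string, and the exact placement of the modifier are placeholders rather than a confirmed fix:

import SwiftUI
import RealityKit
import GameController
import Combine
import os

struct ControllerTestView: View {
    private let log = Logger(subsystem: "com.example.ControllerTest", category: "input")

    var body: some View {
        RealityView { content in
            // Scene setup goes here.
        }
        // Deliver gamepad input to the app instead of the system (visionOS 2).
        .handlesGameControllerEvents(matching: .gamepad)
        .onReceive(NotificationCenter.default.publisher(for: .GCControllerDidConnect)) { note in
            guard let controller = note.object as? GCController,
                  let gamepad = controller.extendedGamepad else { return }
            gamepad.buttonA.valueChangedHandler = { _, value, pressed in
                log.log("buttonA value: \(value), pressed: \(pressed)")
            }
        }
    }
}

If the controller pairs before the view ever appears, the connect notification never fires, so it can also be worth iterating GCController.controllers() once at startup and attaching the same handlers there.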
Replies: 1 · Boosts: 0 · Views: 81 · Activity: 1d
Attachment-like functionality in a Metal immersive space
I have an immersive space that is rendered using Metal. Is there a way to position SwiftUI views at coordinates relative to positions in my immersive space? I know that I can display a volume with RealityKit content alongside my Metal content, but the volume's coordinate system, specifically its bounds, does not coincide with my entire Metal scene. One approach I thought of would be to open two views in my immersive space; that way, I could simply add Attachments to invisible RealityKit entities in one view at the positions where I have content in my Metal scene. Unfortunately it seems that, while I can declare an ImmersiveSpace composed of multiple RealityViews:

ImmersiveSpace {
    RealityView { content in
        // load first view
    } update: { content in
        // update
    }
    RealityView { content in
        // load second view
    } update: { content in
        // update
    }
}

which results in two coinciding RealityKit views in the immersive space, I cannot do something like this:

ImmersiveSpace {
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        // set up my Metal renderer and stuff
    }
    RealityView { content in
        // set up a view where I could use attachments
    } update: { content in
    }
}
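For comparison, the attachments variant of RealityView that the "invisible entities" idea relies on looks roughly like this; the position and the attachment contents are placeholders, and the open question above is that this only exists inside a single RealityView, not alongside a CompositorLayer:

import SwiftUI
import RealityKit

struct AttachmentOverlayView: View {
    var body: some View {
        RealityView { content, attachments in
            if let label = attachments.entity(for: "label") {
                // Placeholder position, chosen to line up with a known point in the Metal scene.
                label.position = SIMD3<Float>(0, 1.2, -2)
                content.add(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Marker")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}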
Replies: 1 · Boosts: 0 · Views: 85 · Activity: 4d
SharePlay Button
I learned about SharePlay from the WWDC video. I understand creating the seats, but I can't quite work out the next steps, so I hope you can help me. Here is where I am: I have set up the seats.

struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}

In one of my previous posts I asked: "I hope you can give me a SharePlay button. After pressing it, it will assign all users on the FaceTime call to a seat using the elements defined in TeamSelectionTemplate." Someone replied and suggested I try

systemCoordinator.configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())

but Xcode reports the error "Cannot find 'systemCoordinator' in scope". How do I solve this? Thank you!
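For anyone hitting the same error: systemCoordinator is not a global, it is an async property of the GroupSession delivered by the activity's sessions() sequence. A minimal sketch, where TeamActivity stands in for whatever GroupActivity type the app defines:

import GroupActivities

// TeamActivity is a placeholder for your own GroupActivity type.
func observeTeamSessions() async {
    for await session in TeamActivity.sessions() {
        // The system coordinator is only available where spatial Personas are supported.
        guard let systemCoordinator = await session.systemCoordinator else { continue }

        var configuration = SystemCoordinator.Configuration()
        configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
        systemCoordinator.configuration = configuration

        session.join()
    }
}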
Replies: 1 · Boosts: 0 · Views: 115 · Activity: 4d
Size of AVPlayerViewController in immersive space
I am trying to make the immersive version of AVPlayerViewController bigger, but I can't find any information on how to go about it. It seems that if I want to change the immersive video viewing experience, the only thing I can do is use a VideoMaterial on a ModelEntity created with .generatePlane. Is there a way to change the video size in immersive mode for AVPlayerViewController?
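For anyone following along, a minimal sketch of the VideoMaterial route mentioned above; the URL, plane size, and placement are placeholders, and this replaces AVPlayerViewController rather than resizing it:

import AVFoundation
import RealityKit
import SwiftUI

struct BigScreenView: View {
    // Placeholder URL; substitute your own asset.
    private let player = AVPlayer(url: URL(string: "https://example.com/video.mp4")!)

    var body: some View {
        RealityView { content in
            // A 4 m x 2.25 m (16:9) "screen"; adjust the dimensions to taste.
            let screen = ModelEntity(
                mesh: .generatePlane(width: 4.0, height: 2.25),
                materials: [VideoMaterial(avPlayer: player)]
            )
            screen.position = SIMD3<Float>(0, 1.5, -3)
            content.add(screen)
            player.play()
        }
    }
}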
Replies: 0 · Boosts: 0 · Views: 82 · Activity: 5d
Capturing Perspective Camera View in RealityKit and Integrating SceneKit with RealityKit
Hello, I am currently developing an application using RealityKit and I've encountered a couple of challenges that I need assistance with.

Capturing Perspective Camera View: I am trying to render or capture the view from a PerspectiveCamera in RealityKit/RealityView. My goal is to save this view of a 3D model as an image or video using a virtual camera. However, I'm unsure how to access or redirect the rendered output from a PerspectiveCamera within RealityKit. Is there an existing API or a recommended approach to achieve this?

Integrating SceneKit with RealityKit: I've also experimented with using SCNNode and SCNCamera to capture the camera's view, but I'm wondering if SceneKit is directly compatible within a RealityKit scene, specifically within a RealityView. I would like to leverage the advanced features of RealityKit for managing 3D models. Is saving the virtual view of a camera supported, and if so, what are the best practices?

Any guidance, sample code, or references to documentation would be greatly appreciated. Thank you in advance for your help!
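On the first question: in a visionOS RealityView there is no obvious public API for reading back a PerspectiveCamera's render, but on iOS (and macOS) ARView can snapshot its current frame off screen. A hedged sketch of that approach, with a placeholder camera placement:

import RealityKit
import UIKit   // iOS; on macOS the snapshot arrives as an NSImage

// Renders the ARView's current frame, as seen from a virtual PerspectiveCamera, into an image.
func snapshotScene(in arView: ARView, completion: @escaping (UIImage?) -> Void) {
    // Use the virtual camera rather than the device camera (iOS).
    arView.cameraMode = .nonAR

    let camera = PerspectiveCamera()
    camera.look(at: .zero, from: SIMD3<Float>(0, 1, 3), relativeTo: nil)

    let anchor = AnchorEntity(world: .zero)
    anchor.addChild(camera)
    arView.scene.addAnchor(anchor)

    // Asynchronous off-screen render of what the view currently shows.
    arView.snapshot(saveToHDR: false, completion: completion)
}

On the second question, as far as I know SCNNode/SCNCamera cannot be embedded directly in a RealityKit scene; the two frameworks do not share a scene graph, so capture has to happen on one side or the other.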
Replies: 3 · Boosts: 0 · Views: 152 · Activity: 6d
AVCam modified for spatial video capturing in WWDC24
I just followed the video and added the code, but when I switch to spatial video capturing, the videoPreviewLayer shows black.

<<<< FigCaptureSessionRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSessionRemote.m:405) - (err=0)
<<<< FigCaptureSessionRemote >>>> captureSessionRemote_getObjectID signalled err=-16405 (kFigCaptureSessionError_ServerConnectionDied) (Server connection was lost) at FigCaptureSessionRemote.m:405
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:421) - (err=-16405)
<<<< FigCaptureSessionRemote >>>> Fig assert: "msg" at bail (FigCaptureSessionRemote.m:744) - (err=0)

Did I miss something?
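The log alone doesn't say much, but for comparison, the spatial-video switch from that session boils down to roughly the sketch below. The property names here are written from memory of the WWDC24 material and should be verified against the iOS 18 SDK; session and movieOutput are assumed to be the existing capture session and movie file output from AVCam:

import AVFoundation

// Hedged sketch: opt an existing session into spatial video capture, if supported.
func enableSpatialVideo(session: AVCaptureSession, movieOutput: AVCaptureMovieFileOutput) {
    guard let device = AVCaptureDevice.default(.builtInDualWideCamera, for: .video, position: .back),
          device.activeFormat.isSpatialVideoCaptureSupported,
          movieOutput.isSpatialVideoCaptureSupported else {
        print("Spatial video capture not supported on this device/format")
        return
    }
    session.beginConfiguration()
    movieOutput.isSpatialVideoCaptureEnabled = true
    session.commitConfiguration()
}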
Replies: 1 · Boosts: 0 · Views: 161 · Activity: 6d
ScenePhase issue with visionOS 2.0
Hello, I am new to SwiftUI and visionOS, but I developed an app with a window and an ImmersiveSpace. I want the immersive space to be dismissed when the window/app is closed. I have the code below using ScenePhase, and it was working fine in visionOS 1.1 but stopped working with visionOS 2.0. Any idea what I am doing wrong? Is there another way to handle dismissing the ImmersiveSpace when my main window is closed?

@main
struct MyApp: App {
    @State private var viewModel = ViewModel()

    var body: some Scene {
        @Environment(\.scenePhase) var scenePhase
        @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

        WindowGroup {
            SideBarView()
                .environment(viewModel)
                .frame(width: 1150, height: 700)
                .onChange(of: scenePhase, { oldValue, newValue in
                    if newValue == .inactive || newValue == .background {
                        Task {
                            await dismissImmersiveSpace()
                            viewModel.immersiveSpaceIsShown = false
                        }
                    }
                })
        }.windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView(area: viewModel.currentModel)
                .environment(viewModel)
        }
    }
}
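One thing that stands out is that the @Environment properties are declared inside the Scene body rather than in a View. A commonly suggested rearrangement (a sketch, not a confirmed visionOS 2 fix) is to observe scenePhase from the window's root view; SideBarContainer is a made-up wrapper name and it assumes ViewModel is @Observable:

import SwiftUI

struct SideBarContainer: View {
    @Environment(\.scenePhase) private var scenePhase
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        SideBarView()
            .onChange(of: scenePhase) { _, newValue in
                if newValue == .inactive || newValue == .background {
                    Task {
                        await dismissImmersiveSpace()
                        viewModel.immersiveSpaceIsShown = false
                    }
                }
            }
    }
}

The WindowGroup would then show SideBarContainer() instead of SideBarView(), keeping the .environment(viewModel) and .frame modifiers where they are.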
Replies: 2 · Boosts: 0 · Views: 99 · Activity: 6d
RealityKit for macOS example
I would like to code some RealityViews to run on my Mac first (and then incorporate them in a visionOS project) so that my code/test loop is faster, but I have not been able to find a simple example that supports Mac. Is it possible to have volumes on a Mac? Is there support for using a game controller to move around the RealityView, like in the visionOS simulator?
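RealityView itself is available on macOS 15 and iOS 18, so a small cross-platform scene is possible. A minimal sketch, assuming macOS 15 / Xcode 16; the camera property and the camera-controls modifier are as presented at WWDC24 and worth double-checking against the current SDK:

import SwiftUI
import RealityKit

// A tiny RealityView for quick iteration on the Mac before moving the content
// into a visionOS volume or immersive space.
struct PreviewSceneView: View {
    var body: some View {
        RealityView { content in
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            content.add(sphere)
            // On macOS/iOS the view renders through a virtual camera rather than a volume.
            content.camera = .virtual
        }
        // Orbit the virtual camera with the mouse/trackpad; no game controller required.
        .realityViewCameraControls(.orbit)
    }
}

As far as I can tell, volumes (the .volumetric window style) remain visionOS-only, so on the Mac the same RealityView simply lives in a regular window.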
Replies: 1 · Boosts: 0 · Views: 120 · Activity: 1w
SharePlay Button
I followed the WWDC video to learn SharePlay. I understood the initial creation of the seats, but I couldn't follow some of the later content very well, so I hope you can give me some example code. Here is where I am: I have already set up the seats.

struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}

I would like a SharePlay button that, when pressed, assigns all users on the FaceTime call to a seat using the elements defined in TeamSelectionTemplate. Thank you very much.
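Not an authoritative answer, but the "button" part usually comes down to activating a GroupActivity; the seat assignment then comes from setting the spatial template preference on the session's system coordinator (see the other SharePlay Button thread above). A sketch in which TeamSelectionActivity and its metadata are placeholders:

import SwiftUI
import GroupActivities

// Placeholder activity describing the shared experience.
struct TeamSelectionActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Team Selection"
        meta.type = .generic
        return meta
    }
}

struct SharePlayButton: View {
    var body: some View {
        Button("Start SharePlay") {
            Task {
                // Prompts everyone on the FaceTime call to join the activity.
                _ = try? await TeamSelectionActivity().activate()
            }
        }
    }
}

Once the session starts, listen with TeamSelectionActivity.sessions(), join it, and set spatialTemplatePreference = .custom(TeamSelectionTemplate()) on the system coordinator's configuration so participants are placed into the seats.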
Replies: 2 · Boosts: 0 · Views: 196 · Activity: 1w