Discuss Spatial Computing on Apple Platforms.

Posts under General subtopic

Post | Replies | Boosts | Views | Activity

Building a custom render pipeline with RealityKit
Hello experts and question seekers, I have been trying to get Gaussian splats working with RealityKit, but it has not worked out for me so far. The library I use for Gaussian splatting: https://github.com/scier/MetalSplatter

My idea was to use the renderer provided by RealityKit (RealityRenderer, https://developer.apple.com/documentation/realitykit/realityrenderer) together with the renderer provided by MetalSplatter (SplatRenderer, https://github.com/scier/MetalSplatter/blob/main/MetalSplatter/Sources/SplatRenderer.swift). With a custom render pipeline I could then compose the outputs of both renderers, which would make it possible, for example, to build immersive scenery from realistic environment scans rendered as Gaussian splats, while RealityKit provides the features to build extra scenery around the splats, e.g. dynamic 3D models placed inside them.

However, I am not able to do this with the current implementation of RealityRenderer. First, RealityRenderer appears to be an API that only renders color information onto a texture, which might look useful at first glance but is missing important information such as depth and stencil. Second, even with that in mind, I am currently not able to execute RealityRenderer.updateAndRender because of the following error messages:

Could not resolve material name 'engine:BuiltinRenderGraphResources/Common/realityRendererBackground.rematerial' in bundle at '/Users//Library/Developer/CoreSimulator/Devices//data/Containers/Bundle/Application//.app'. Loading via asset path.
exiting spatial tracking service update thread because wait returned 37

I was able to build a custom Metal view with UIViewRepresentable, MTKView, and MTKViewDelegate, which lets me build a custom rendering pipeline using some of the Metal developer workflows. Reference: https://developer.apple.com/documentation/xcode/metal-developer-workflows/

Inside draw(in view: MTKView), in a class conforming to MTKViewDelegate:

guard let currentDrawable = view.currentDrawable else { return }
let realityRenderer = try! RealityRenderer()
try! realityRenderer.updateAndRender(
    deltaTime: 0.0,
    cameraOutput: .init(.singleProjection(colorTexture: currentDrawable.texture)),
    whenScheduled: { realityRenderer in
        print("Rendering scheduled")
    },
    onComplete: { realityRenderer in
        print("Rendering completed")
    }
)

Can you please tell me what I am doing wrong? Is there a solution that enables me to use RealityKit together with, for example, Gaussian splats? Any help is greatly appreciated.

All the best,
Ethem Kurt
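For reference, a minimal sketch of the UIViewRepresentable/MTKView wrapper described above (the type names MetalView and Coordinator are illustrative, not from the original post); the composition of the splat pass and the RealityKit pass would live inside draw(in:):

import SwiftUI
import Metal
import MetalKit

// Minimal UIViewRepresentable wrapper around MTKView; names are placeholders.
struct MetalView: UIViewRepresentable {
    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> MTKView {
        let view = MTKView()
        view.device = MTLCreateSystemDefaultDevice()
        view.colorPixelFormat = .bgra8Unorm
        view.depthStencilPixelFormat = .depth32Float_stencil8
        view.delegate = context.coordinator
        return view
    }

    func updateUIView(_ uiView: MTKView, context: Context) {}

    final class Coordinator: NSObject, MTKViewDelegate {
        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let drawable = view.currentDrawable else { return }
            // Encode the splat pass and the RealityKit pass here, compose them,
            // then commit a command buffer that presents `drawable`.
            _ = drawable
        }
    }
}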
2 replies · 1 boost · 682 views · Aug ’24
Spatial Video captured quality using the API differs from that of the iPhone's built-in camera app
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video. But I found that the video captured by my app differs from the iPhone's built-in camera app in the following ways:
Videos captured with the iPhone's built-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature.
In videos recorded using the iPhone's built-in camera app, the left-eye image is typically sharper than the right-eye image. However, in my app this is reversed: the right-eye image is clearer than the left-eye image.
I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app.
Is there any API or parameter that would make my app behave more like the iPhone's built-in camera app? I have tried "whiteBalanceMode" and "exposureMode" but had no luck.
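Not an answer from the thread, but for context, a minimal sketch of how whiteBalanceMode and exposureMode are typically configured on an AVCaptureDevice; the device parameter and the chosen continuous-auto modes are assumptions for illustration:

import AVFoundation

// Sketch: enable continuous auto white balance / exposure on a capture device.
// `device` is assumed to be the AVCaptureDevice feeding the spatial video session.
func configureAutoAdjustments(for device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
        device.whiteBalanceMode = .continuousAutoWhiteBalance
    }
    if device.isExposureModeSupported(.continuousAutoExposure) {
        device.exposureMode = .continuousAutoExposure
    }
}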
3 replies · 0 boosts · 825 views · Aug ’24
Window to Window container displacement
In Xcode 16 beta 6, we want to start the app with an alert advising the user that they are about to enter an immersive space. To achieve this, I use an empty VStack (let's call it View1) with an alert modifier. Then, in the alert's OK button action, we call openWindow(id: "ContentView"). View1 is in the first WindowGroup in the App file. When pressing OK, the alert and View1 dismiss themselves, and then ContentView displays itself shifted vertically towards the top. ContentView is in a secondary WindowGroup. We would expect ContentView to display itself front and center to the user like every other window. What is wrong with my code? Or is this a bug in visionOS? Attached are images of my code and a video illustrating the bad behavior.
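The attachments are not reproduced here; as a point of reference, a minimal sketch of the two-WindowGroup setup described above (the app name and view bodies are guesses based on the post, not the poster's actual code):

import SwiftUI

@main
struct AlertFirstApp: App {   // hypothetical app name
    var body: some Scene {
        WindowGroup {
            View1()
        }
        WindowGroup(id: "ContentView") {
            ContentView()
        }
    }
}

struct View1: View {
    @Environment(\.openWindow) private var openWindow
    @State private var showAlert = true

    var body: some View {
        VStack {}
            .alert("You are about to enter an immersive space.", isPresented: $showAlert) {
                Button("OK") {
                    openWindow(id: "ContentView")
                }
            }
    }
}

struct ContentView: View {
    var body: some View {
        Text("Main content")
    }
}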
5 replies · 0 boosts · 655 views · Aug ’24
How can I completely clear the asset memory?
Background: This is a simple visionOS empty application. After the app launches, the user can enter an ImmersiveSpace by clicking a button. Another button loads a 33.9 MB USDZ model, and a final button exits the ImmersiveSpace. Below is the memory usage for this application:
After the app initializes, the memory usage is 56.8 MB.
After entering the empty ImmersiveSpace, the memory usage increases to 64.1 MB.
After loading a 33.9 MB USDZ model, the memory usage reaches 92.2 MB.
After exiting the ImmersiveSpace, the memory usage only slightly decreases, to 90.4 MB.
Question: While using a memory analysis tool, I noticed that the model's resources are not released after exiting the ImmersiveSpace. How should I address this issue?

struct EmptDemoApp: App {
    @State private var appModel = AppModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(appModel)
        }

        ImmersiveSpace(id: appModel.immersiveSpaceID) {
            ImmersiveView()
                .environment(appModel)
                .onAppear { appModel.immersiveSpaceState = .open }
                .onDisappear { appModel.immersiveSpaceState = .closed }
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed)
    }
}

struct ContentView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        HStack {
            VStack {
                ToggleImmersiveSpaceButton()
            }
            if appVM.immersiveSpaceState == .open {
                Button {
                    Task {
                        if let url = Bundle.main.url(forResource: "Robot", withExtension: "usdz") {
                            if let model = try? await ModelEntity(contentsOf: url, withName: "Robot") {
                                model.setPosition(.init(x: .random(in: 0...1.0), y: .random(in: 1.0...1.6), z: -1), relativeTo: nil)
                                appVM.root?.add(model)
                                print("Robot: \(Unmanaged.passUnretained(model).toOpaque())")
                            }
                        }
                    }
                } label: {
                    Text("Add A Robot")
                }
            }
        }
        .padding()
    }
}

struct ImmersiveView: View {
    @Environment(AppModel.self) private var appVM

    var body: some View {
        RealityView { content in
            appVM.root = content
        }
    }
}

struct ToggleImmersiveSpaceButton: View {
    @Environment(AppModel.self) private var appModel
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        Button {
            Task { @MainActor in
                switch appModel.immersiveSpaceState {
                case .open:
                    appModel.immersiveSpaceState = .inTransition
                    appModel.root = nil
                    await dismissImmersiveSpace()
                case .closed:
                    appModel.immersiveSpaceState = .inTransition
                    switch await openImmersiveSpace(id: appModel.immersiveSpaceID) {
                    case .opened:
                        break
                    case .userCancelled, .error:
                        fallthrough
                    @unknown default:
                        appModel.immersiveSpaceState = .closed
                    }
                case .inTransition:
                    break
                }
            }
        } label: {
            Text(appModel.immersiveSpaceState == .open ? "Hide Immersive Space" : "Show Immersive Space")
        }
        .disabled(appModel.immersiveSpaceState == .inTransition)
        .animation(.none, value: 0)
        .fontWeight(.semibold)
    }
}
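Not from the original thread, but one hedged idea: make sure nothing keeps a strong reference to the loaded entity before the space is dismissed. A rough sketch, assuming appVM.root holds the RealityViewContent from the post's ImmersiveView (RealityKit may still hold internal caches that are outside the app's control):

// Sketch: detach loaded entities and drop references before dismissing the space,
// so ARC can release the ModelEntity and its resources.
@MainActor
func unloadImmersiveContent(_ appVM: AppModel) {
    if let content = appVM.root {
        // Copy first so we are not mutating the collection while iterating it.
        let loadedEntities = Array(content.entities)
        for entity in loadedEntities {
            content.remove(entity)
        }
    }
    appVM.root = nil
}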
2 replies · 0 boosts · 498 views · Aug ’24
Unity/PolySpatial GameController framework failing to load
I have a simple example of a motion-matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor, the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input. I'm wondering if this error I'm seeing in the logs is what's causing it, and if so, how do I fix it?

error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed

I'm using Xcode 16 beta 6, Unity 6000.0.17f1, visionOS 2.0 beta 9.
2 replies · 0 boosts · 941 views · Sep ’24
Vision Pro App Stuck on Loading Screen – Works Fine on Simulator
Hi everyone, I'm currently developing an app for Vision Pro using SwiftUI, and I've encountered an issue when testing on the Vision Pro device. The app works perfectly fine on the Vision Pro simulator in Xcode, but when I run it on the actual device, it gets stuck on the loading screen. The logo appears and pulsates as it loads, as expected, but it never progresses beyond that point.
Issue details: The app doesn't crash, and I don't see any major errors in the console. However, in the debug logs, I encounter an exception:
Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"
I've searched through my project, but there's no direct reference to a selector named plane. I suspect it may be related to a framework or system call failing on the device. There's also this warning:
NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed.
What I've tried:
Verified that all assets and resources are properly bundled and loading (since simulators tend to be more forgiving with file paths).
Tested the app with minimal UI to isolate potential causes, but the issue persists.
Checked the app's Info.plist configuration to ensure it's properly set up for Vision Pro.
No crashes, just a loading-screen hang on the device, while the app works fine in the Vision Pro simulator.
Additional info:
The app's UI consists of a loading animation (pulsating logo) before transitioning to the main content.
Using Xcode 16.1 beta, visionOS SDK.
The app is based on SwiftUI, with Vision Pro optimizations for an immersive experience.
Has anyone experienced something similar when moving from the simulator to the Vision Pro hardware? Any help or guidance would be appreciated, especially with regard to the exception or potential resource-loading issues specific to the device. Thanks in advance!
1 reply · 0 boosts · 605 views · Sep ’24
Safari tab overview tilted windows/views - visionOS
I have searched everywhere for examples to replicate this awesome feature, specifically the tab overview in Safari: how they achieve the tilt/angle of the windows/views towards the user and how the windows are placed. Is this a volumetric window? Some sort of spatial lazy grid? Does anyone know how they achieved this? Thanks
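No confirmed answer here, but one way to approximate the effect in plain SwiftUI is to lay thumbnails out in a grid and tilt each cell with rotation3DEffect; a rough sketch (the tilt angle and grid layout are guesses for illustration, not how Safari actually implements it):

import SwiftUI

// Sketch: a grid of "tab" cards, each tilted slightly so it faces the viewer.
struct TabOverviewSketch: View {
    let tabs = Array(1...9)

    var body: some View {
        LazyVGrid(columns: Array(repeating: GridItem(.flexible(), spacing: 24), count: 3), spacing: 24) {
            ForEach(tabs, id: \.self) { index in
                RoundedRectangle(cornerRadius: 16)
                    .fill(.thinMaterial)
                    .frame(height: 180)
                    .overlay(Text("Tab \(index)"))
                    // Tilt each card back around the x-axis, anchored at its bottom edge.
                    .rotation3DEffect(.degrees(15), axis: (x: 1, y: 0, z: 0), anchor: .bottom)
            }
        }
        .padding()
    }
}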
1 reply · 0 boosts · 500 views · Sep ’24
Casting shadows on the ground
In the visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor. I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor. Do I need to enable something to have my character cast a shadow on the real-world floor?
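For reference, a common suggestion (not a confirmed fix from this thread) is to set GroundingShadowComponent on every model entity in the character's hierarchy, not just the root; a minimal sketch:

import RealityKit

// Sketch: apply GroundingShadowComponent(castsShadow: true) to the entity and
// all of its descendants that carry a ModelComponent.
func enableGroundingShadows(on root: Entity) {
    if root.components.has(ModelComponent.self) {
        root.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in root.children {
        enableGroundingShadows(on: child)
    }
}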
1 reply · 0 boosts · 646 views · Sep ’24
Opening other apps while Immersive space is open?
I am working on a small side project for the Apple Vision Pro. One thing I'm trying to figure out is whether I can open another app while the immersive space from my original app is open. As an example, I want to present a fully immersive view displaying a 360-degree photo, then allow the user to open Safari or any other app of their choice and use the immersive environment as a background. Is this possible? Everything I've read so far seems to say no, but I wasn't sure if someone has found a way to make this possible.
1 reply · 0 boosts · 308 views · Sep ’24
Open the vision pro camera using Enterprise API and view it in application window
I want to see the Vision Pro camera view in my application window. I wrote some code based on Apple's samples, but I'm stuck on CVPixelBuffer: how do I convert the pixel buffer into a video frame?

Button("Camera Feed") {
    Task {
        if #available(visionOS 2.0, *) {
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            var arKitSession = ARKitSession()
            var pixelBuffer: CVPixelBuffer?
            await arKitSession.queryAuthorization(for: [.cameraAccess])
            do {
                try await arKitSession.run([cameraFrameProvider])
            } catch {
                return
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                return
            }
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                //====
                print("=========================")
                print(mainCameraSample.pixelBuffer)
                print("=========================")
                // self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        } else {
            // Fallback on earlier versions
        }
    }
}

I want to convert mainCameraSample.pixelBuffer into video. Could you please guide me?
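Not an answer from the thread, but for illustration, one common way to turn a CVPixelBuffer into a displayable image is to go through Core Image; a minimal sketch (whether this fits the poster's streaming use case is an assumption):

import CoreImage
import CoreGraphics
import CoreVideo

// Sketch: convert a CVPixelBuffer from a camera frame into a CGImage for display.
let sharedCIContext = CIContext()

func makeCGImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    return sharedCIContext.createCGImage(ciImage, from: ciImage.extent)
}

The resulting CGImage can then be wrapped per frame for SwiftUI (e.g. via UIImage(cgImage:)); for writing an actual video file, frames would typically be appended through an AVAssetWriterInputPixelBufferAdaptor instead.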
2 replies · 0 boosts · 633 views · Sep ’24
OcclusionMaterial renders plain black when custom skybox environment is set
Using OcclusionMaterial on macOS and iOS works fine in non-AR mode when I set the background to just a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color), but when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)) the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and let the skybox behind it show through. This happens with both ARView and RealityView, on the current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma. Feedback ID: FB15081053
0 replies · 0 boosts · 434 views · Sep ’24
Xcode 16 – Symbol not found: ShapeResource generateConvex
Hi, I have a RealityKit app that I am building with Xcode 16. The app has a minimum deployment target of iOS 17. If I run it on an iOS 17 device the app crashes:

dyld[15716]: Symbol not found: _$s10RealityKit13ShapeResourceC14generateConvex4fromAcA04MeshD0C_tYaKFZ
Referenced from: …
Expected in: …/System/Library/Frameworks/RealityFoundation.framework/RealityFoundation

My code looks something like this:

@available(iOS, introduced: 13.0, obsoleted: 18.0)
@MainActor @preconcurrency
func generateNonAsyncConvexShapeResource(from meshResource: MeshResource) throws -> ShapeResource {
    ShapeResource.generateConvex(from: meshResource)
}

@available(iOS 18.0, *)
func generateConvexShapeAsync(from meshResource: MeshResource) async throws -> ShapeResource {
    // This will only be available for iOS 18 and above
    return try await ShapeResource.generateConvex(from: meshResource)
}

if let meshResource = try? modelEntity.model?.mesh.applying(transform: transform.matrix) {
    if #available(visionOS 1.0, iOS 18.0, *) {
        try? await generateConvexShapeAsync(from: meshResource) // await shapeResources.append(.generateConvex(from: meshResource))
    } else {
        try? generateNonAsyncConvexShapeResource(from: meshResource)
    }
}

So I do check for the OS version and only call the async variant on iOS 18. Any hints on how to fix that? Thanks!
2 replies · 0 boosts · 1.4k views · Sep ’24
Not getting camera frame using enterprise API in Vision Pro
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why am I not getting it, and where am I going wrong in the code? Please guide me.

for await cameraFrame in cameraFrameUpdates {
    print("cameraFrame:: \(cameraFrame)")
}

var body: some View {
    VStack {
        if self.finalImage != nil {
            self.finalImage!
                .resizable()
                .scaledToFit()
        } else {
            image
                .resizable()
                .scaledToFit()
        }
    }
    .task {
        if #available(visionOS 2.0, *) {
            guard CameraFrameProvider.isSupported else {
                print("CameraFrameProvider not supported.")
                return
            }
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
            let cameraFrameProvider = CameraFrameProvider()
            do {
                try await arkitSession.run([cameraFrameProvider])
            } catch {
                guard let sessionError = error as? ARKitSession.Error else {
                    preconditionFailure("ARKitSession.run() returned a non-session error: \(error)")
                }
                print("ARKitSession.run() returned a session error: \(sessionError)")
            }
            guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                preconditionFailure("Failed to get an async sequence for the first format.")
            }
            print("cameraFrameUpdates:: \(cameraFrameUpdates)")
            for await cameraFrame in cameraFrameUpdates {
                print("cameraFrame:: \(cameraFrame)")
                print("Camera Frame ::: LEFT :: \(cameraFrame.sample(for: .left))")
                guard let leftSample = cameraFrame.sample(for: .left) else {
                    print("CameraFrameProviderSample - Nil camera frame left sample")
                    continue
                }
                self.pixelBuffer = leftSample.pixelBuffer
                print(" ======== PIXEL BUFFER ::: \(self.pixelBuffer) ========")
                self.finalImage = self.setImage()
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
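One detail that is easy to miss (an observation, not a confirmed fix from the thread): camera access must be authorized, and the Enterprise API entitlement present, before ARKitSession.run delivers frames. A small sketch of requesting authorization up front, assuming a visionOS 2 target:

import ARKit

// Sketch: ask for main-camera access before running the CameraFrameProvider.
func ensureCameraAccess(on session: ARKitSession) async -> Bool {
    let results = await session.requestAuthorization(for: [.cameraAccess])
    return results[.cameraAccess] == .allowed
}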
3 replies · 0 boosts · 899 views · Sep ’24
Convert SpatialEventGesture location3D to Entity
In my visionOS app I am attempting to get the location of a finger press (not a tap, but the moment the user first presses their fingers together). As far as I can tell, the only way to get this event is to use a SpatialEventGesture. I currently have a DragGesture, and I am able to use the convert functions on the EntityTargetValue that gets passed in to convert the location3D from the drag event into my hit-tested entity's space. But as far as I can tell, SpatialEventGesture doesn't use an EntityTargetValue. I've tried using the convert functions on my targeted entity (i.e., myEntity.convert(position:from:)), but these do not return valid values. My questions are: Is SpatialEventGesture the correct way to get notified of finger presses? How do I convert the location3D in the SpatialEventGesture to my entity's space?
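No confirmed answer here, but one approach worth sketching: capture the RealityViewContent and use its coordinate-space conversion to map the gesture's location3D (which is in the SwiftUI view's space) into the scene, then into the entity's local space. A rough sketch, with the assumption that content.convert(_:from:to:) behaves as in RealityKit's coordinate-conversion APIs:

import Spatial
import SwiftUI
import RealityKit

// Sketch: given the RealityViewContent captured from the RealityView closures and a
// press location from a SpatialEventGesture, convert it into an entity's local space.
// The exact convert overloads are assumed, not verified against this poster's setup.
func entityPosition(for location3D: Point3D,
                    content: RealityViewContent,
                    target: Entity) -> SIMD3<Float> {
    // SwiftUI view space -> RealityKit scene space.
    let scenePosition = content.convert(location3D, from: .local, to: .scene)
    // Scene (world) space -> the target entity's local space.
    return target.convert(position: scenePosition, from: nil)
}

Whether SpatialEventGesture's .active phase fires at the initial pinch, as opposed to a DragGesture's onChanged, is something to verify on device.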
2 replies · 0 boosts · 539 views · Sep ’24
1 meter size limit on object visual presentation?
I’m encountering a 1-meter size limit on the visual presentation of objects presented in an immersive environment in visionOS, both in the simulator and on the device. For example, if I load a USDZ object that’s 1.0 x 0.5 x 0.05 meters, all of the 1.0 x 0.5 meter side is visible. If I scale it by a factor of 2.0, only a 1.0 x 1.0 viewport onto the object is shown, even though the object size reads out as scaled when queried by usdz.visualBounds(relativeTo: nil).extents, and if the USDZ is animated, the animation reflects the motion of the entire object. I haven’t been able to determine why this is the case, nor any way to adjust or mitigate it. Is this a hard constraint of the system, or is there a workaround? The target environment is visionOS 1.2.
1 reply · 0 boosts · 598 views · Sep ’24
How to display spatial images or videos on swiftUI view
Hi! I am writing a visionOS program. I have an idea to embed spatial videos or pictures into my UI, but I have run into problems and have no way to implement the idea. I have tried the following:
Using AVPlayerViewController to play a spatial video, but it only displays the video spatially when modalPresentationStyle = .fullScreen. Once embedded in a SwiftUI view, it shows as a normal 2D image.
I also tried the method from https://developer.apple.com/forums/thread/733813, using a shader graph to display the spatial images, but the material can only be attached to an entity, and I don't know how to make it show up in a view.
I also tried to use CAMetalLayer and write a custom shader to display spatial images, but I couldn't find a function like unity_StereoEyeIndex in Unity to handle per-eye rendering.
Does anyone have a good solution to my problem? Thank you!
0 replies · 0 boosts · 573 views · Sep ’24
Adapt distance/depth of view relative to user
Hi, I'm currently working on some messages that should appear in front of the user depending on the state of my visionOS app. How can I change the distance of the appearing message relative to the user if the message is displayed as a View? Or is this only possible if I create an entity for that message and then use setPosition(_:relativeTo:), e.g. relative to the head anchor? Currently I can change the x and y coordinates of the view, since it works within a 2D space, but as I intend to display that view in my immersive space, it would be great if I could push the message a little further away from the user, as it is currently a bit too close in the user's view. If there is a solution without the use of entities, I would prefer that one. Thank you for your help! Below is an example:

Feedback.swift

import SwiftUI

struct Feedback: View {
    let message: String

    var body: some View {
        VStack {
            Text(message)
        }
        .position(x: 0, y: -850) // how to adapt distance/depth relative to the user in the UI?
    }
}

ImmersiveView.swift

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State private var feedbackMessage = "Hello World"

    public var body: some View {
        VStack {}
            .overlay(
                Feedback(message: feedbackMessage)
            )
        RealityView { content in
            let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
            let spatialTrackingSession = SpatialTrackingSession.init()
            _ = await spatialTrackingSession.run(configuration)

            // Head
            let headEntity = AnchorEntity(.head)
            content.add(headEntity)
        }
    }
}
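Not a confirmed answer, but a small sketch of one thing to try on visionOS: the offset(z:) modifier, which moves SwiftUI content along the z-axis. Whether it applies in this exact overlay-over-RealityView setup, and the sign convention used here, are assumptions to verify:

import SwiftUI

// Sketch: push a feedback message along z; the assumption here is that a negative
// z offset moves the view farther away from the user.
struct DepthAdjustedFeedback: View {
    let message: String

    var body: some View {
        Text(message)
            .padding()
            .glassBackgroundEffect()
            .offset(z: -300) // move the message roughly 300 points farther back
    }
}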
2 replies · 0 boosts · 589 views · Sep ’24