Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under the Spatial Computing topic. Each post below shows its replies, boosts, views, and last activity.

Reality Composer Pro Audio "On Tap" Behaviors Help
Looking for help getting "On Tap" to work inside Reality Composer Pro for my Apple Vision Pro project. I can get audio to play when using the "On Added to Scene" behavior, but if I switch to "On Tap", the audio will not play when it is attached to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the working "On Added to Scene" setup, to help troubleshoot the non-working "On Tap":
- Behaviors: "On Added to Scene", action: timeline
- Input Target: check mark enabled, allowed all
- Collision: set to default
- Audio Library: source MP3 file
- Channel Audio: resource MP3 file above
- Timeline: Play Audio with the MP3 file added
This setup in RCP allows my Apple Vision Pro project to launch correctly with audio "On Added to Scene". But when I switch the behavior to "On Tap", the audio no longer plays and I cannot figure out why. I've tried several different options and nothing works. Please help!
Replies: 1 · Boosts: 0 · Views: 621 · Last activity: Jan ’25
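For comparison, here is a minimal sketch of the code-side wiring an RCP "On Tap" behavior generally needs: unlike "On Added to Scene", a tap has to be delivered to the entity by a gesture in the app. The scene name is a placeholder, and applyTapForBehaviors() is used on the assumption that the project targets visionOS 2 or later; treat this as a starting point, not a confirmed fix.

import SwiftUI
import RealityKit
import RealityKitContent

struct TapBehaviorView: View {
    var body: some View {
        RealityView { content in
            // "Scene" is a placeholder for the Reality Composer Pro scene name.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        // Forward spatial taps to the tapped entity so its RCP "On Tap"
        // behavior (and the timeline it triggers) can fire. The entity still
        // needs its Input Target and Collision components, as set up in RCP.
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    value.entity.applyTapForBehaviors()
                }
        )
    }
}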
Scene's origin relative to portal's window?
I am experimenting with RealityKit to set up a portal. Everything works, but I was wondering where the scene's origin is with respect to the front of the portal window. From experiments, the origin's X and Y appear to be at the center of the portal window, while the origin's Z appears to be about a meter behind the portal window. Is this (at least roughly) correct? Is it documented anywhere? PS: I began with the standard visionOS app template and edited the Reality Composer Pro file to create the scene.
Replies: 5 · Boosts: 0 · Views: 612 · Last activity: Mar ’25
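A minimal sketch of a portal built in code (rather than in RCP), with a marker entity that makes it easy to probe where the world's origin sits relative to the portal plane. The sizes and marker are illustrative, and probeOrigin() only gives a meaningful answer after both entities have been added to a RealityView's content.

import RealityKit

func makePortal() -> (portal: ModelEntity, world: Entity) {
    // The hidden world that is visible only through the portal.
    let world = Entity()
    world.components.set(WorldComponent())

    // A small marker at the world's origin so it can be seen through the portal.
    let marker = ModelEntity(mesh: .generateSphere(radius: 0.05),
                             materials: [SimpleMaterial(color: .red, isMetallic: false)])
    world.addChild(marker)

    // The portal surface itself.
    let portal = ModelEntity(mesh: .generatePlane(width: 1, height: 1, cornerRadius: 0.1),
                             materials: [PortalMaterial()])
    portal.components.set(PortalComponent(target: world))
    return (portal, world)
}

func probeOrigin(portal: Entity, world: Entity) {
    // Expresses the world's origin in the portal entity's coordinate space.
    let originInPortalSpace = portal.convert(position: .zero, from: world)
    print("World origin relative to portal:", originInPortalSpace)
}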
visionOS hand replacement inside an Immersive Space
I still don't understand why no one has clarified this Apple video: https://developer.apple.com/videos/play/wwdc2023/10111 At the end of the video there is an incomplete tutorial about connecting a USDZ with a mesh and skeleton structure to the hand-tracking system. No example project is linked, and no one is giving the community any clarification. Can you please help us understand how to proceed?
Replies: 4 · Boosts: 0 · Views: 518 · Last activity: Mar ’25
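While waiting for an official sample, here is a rough sketch of the usual starting point: run a HandTrackingProvider and copy joint transforms onto your own entity each update. The handModel placeholder stands in for the root of a rigged USDZ; a full hand replacement would walk every joint in the skeleton and drive the matching bones, which this sketch does not attempt.

import ARKit
import RealityKit

@MainActor
final class HandRig {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    let handModel = Entity()   // placeholder for the loaded USDZ root

    func run() async throws {
        guard HandTrackingProvider.isSupported else { return }
        try await session.run([handTracking])

        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.chirality == .left, anchor.isTracked,
                  let skeleton = anchor.handSkeleton else { continue }

            // Place the model at the wrist. A full solution would copy
            // anchorFromJointTransform for every joint onto the model's bones.
            let wrist = skeleton.joint(.wrist)
            let wristWorld = anchor.originFromAnchorTransform * wrist.anchorFromJointTransform
            handModel.setTransformMatrix(wristWorld, relativeTo: nil)
        }
    }
}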
Control mirroring of Apple Vision Pro on other devices?
Hi! I'm new to this forum, so if I need to update this post with more info, or anything else, please let me know. I'm using the Apple Vision Pro to develop an app (with Unity). To demonstrate what the user sees in the headset, I would like to mirror the view to a device (an iPad in this case). I managed to do this without any issue. My problem is that, in the Vision Pro, I have an interface that the user can interact with, but I would like to manage the interface myself from the iPad. What I mean is that the user can (or can't, it doesn't matter) see the interface in the headset, while the interface is controlled by me on the iPad. Is there any way to do this? Is this a question I should ask on Unity's forum? (I don't think so, because it should be related to the mirroring function, no?)
Replies: 2 · Boosts: 0 · Views: 348 · Last activity: Mar ’25
Digital Crown press when both immersive space and additional windows are presented
I have been experimenting with the Hello World sample app from https://developer.apple.com/documentation/visionos/world and I came across behavior that appears inconsistent with the user-facing documentation describing the device controls at https://support.apple.com/en-gb/guide/apple-vision-pro/tan1e2a29e00/visionos I tried pressing the simulator's "Home" button while the "Objects in Orbit" immersive space was presented alongside the main application window. According to the user documentation, pressing the Digital Crown should take the user directly to the Home View. In my test, a single press only dismissed the immersive space; I needed another press to "exit" the app and go to the Home View. Is this behavior expected? I am assuming that the "Home" button in the simulator behaves as if the user pressed the Digital Crown on the device; I don't have access to the actual hardware.
Replies: 5 · Boosts: 0 · Views: 385 · Last activity: Apr ’25
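For what it's worth, an app can at least observe the first part of this sequence: when the Digital Crown (or the simulator's Home button) dismisses the immersive space, the space's content disappears while the window scene stays open. A hypothetical skeleton, with placeholder names rather than Hello World's actual types:

import SwiftUI
import RealityKit

@main
struct OrbitApp: App {
    var body: some Scene {
        WindowGroup {
            Text("Main window")   // placeholder for the app's 2D content
        }

        ImmersiveSpace(id: "ObjectsInOrbit") {
            RealityView { _ in }
                // Fires when the system dismisses the space, e.g. on the first
                // Digital Crown press; a second press is needed to reach Home View.
                .onDisappear { print("Immersive space dismissed by the system") }
        }
    }
}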
ARKit Planes do not appear where expected on visionOS
I'm using ARKitSession and PlaneDetectionProvider to detect planes. I have a basic process to create an entity for each detected plane. Each one gets a random color for its material. Each plane is sized based on the bounds of the anchor provided by ARKit.

let mesh = MeshResource.generatePlane(
    width: anchor.geometry.extent.width,
    depth: anchor.geometry.extent.height
)

Then I'm using this to position each entity.

entity.transform = Transform(matrix: anchor.originFromAnchorTransform)

This seems to be the right method, but many (not all) planes are not where they should be. The sizes look OK, but the X and Y positions are off. Take this large green plane on the wall: it should span the entire wall, but it is offset along the X axis so that it is pushed to the left of where the center of the anchor is. When I visualize surfaces using the Xcode debugging tools, that tool reports the planes where I'd expect them to be. Can you see what I'm getting wrong here? Full code below.

struct Example068: View {
    @State var session = ARKitSession()
    @State private var planeAnchors: [UUID: Entity] = [:]
    @State private var planeColors: [UUID: Color] = [:]

    var body: some View {
        RealityView { content in
        } update: { content in
            for (_, entity) in planeAnchors {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .task {
            try! await setupAndRunPlaneDetection()
        }
    }

    func setupAndRunPlaneDetection() async throws {
        let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical, .slanted])

        if PlaneDetectionProvider.isSupported {
            do {
                try await session.run([planeData])
                for await update in planeData.anchorUpdates {
                    switch update.event {
                    case .added, .updated:
                        let anchor = update.anchor
                        if planeColors[anchor.id] == nil {
                            planeColors[anchor.id] = generatePastelColor()
                        }
                        let planeEntity = createPlaneEntity(for: anchor, color: planeColors[anchor.id]!)
                        planeAnchors[anchor.id] = planeEntity
                    case .removed:
                        let anchor = update.anchor
                        planeAnchors.removeValue(forKey: anchor.id)
                        planeColors.removeValue(forKey: anchor.id)
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }

    private func generatePastelColor() -> Color {
        let hue = Double.random(in: 0...1)
        let saturation = Double.random(in: 0.2...0.4)
        let brightness = Double.random(in: 0.8...1.0)
        return Color(hue: hue, saturation: saturation, brightness: brightness)
    }

    private func createPlaneEntity(for anchor: PlaneAnchor, color: Color) -> Entity {
        let mesh = MeshResource.generatePlane(
            width: anchor.geometry.extent.width,
            depth: anchor.geometry.extent.height
        )
        var material = PhysicallyBasedMaterial()
        material.baseColor.tint = UIColor(color)
        let entity = ModelEntity(mesh: mesh, materials: [material])
        entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
        return entity
    }
}
Replies: 3 · Boosts: 0 · Views: 161 · Last activity: Apr ’25
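One common explanation for this kind of offset, offered here as an assumption rather than a confirmed diagnosis, is that a plane's extent has its own transform relative to the anchor's origin, so positioning the mesh with originFromAnchorTransform alone centers it on the anchor rather than on the extent. A sketch of what composing the extent transform as well might look like (check the current PlaneAnchor.Geometry.Extent documentation for the exact property name):

import ARKit
import RealityKit

func positionedPlaneEntity(for anchor: PlaneAnchor) -> ModelEntity {
    let extent = anchor.geometry.extent
    let mesh = MeshResource.generatePlane(width: extent.width, depth: extent.height)
    let entity = ModelEntity(mesh: mesh,
                             materials: [SimpleMaterial(color: .green, isMetallic: false)])

    // Compose the extent's offset/rotation relative to the anchor with the
    // anchor's transform in world space before applying it to the entity.
    let originFromExtent = anchor.originFromAnchorTransform * extent.anchorFromExtentTransform
    entity.transform = Transform(matrix: originFromExtent)
    return entity
}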
Values for SIMD3 and SIMD2 not showing up in Reality Composer Pro
Reality Composer Pro question related to custom components. My custom component defines some properties to edit in RCP. Simple ones work fine, but SIMD3 and SIMD2 do not. I'd expect to see default values, but instead I get all 0s. If I try to run this, the scene doesn't load. Once I enter some values, build, and run again, it works fine. More generally, does Apple have documentation on creating properties for components? The only examples I've seen show simple strings and floats. There are no details about vectors, conditional options, grouping properties, etc.

public struct EntitySpawnerComponent: Component, Codable {
    public enum SpawnShape: String, Codable {
        case domeUpper
        case domeLower
        case sphere
        case box
        case plane
        case circle
    }

    // These properties get their default values in RCP
    /// The number of clones to create
    public var Copies: Int = 12
    /// The shape to spawn entities in
    public var SpawnShape: SpawnShape = .domeUpper
    /// Radius for spherical shapes (dome, sphere, circle)
    public var Radius: Float = 5.0

    // These properties DO NOT get their default values in RCP. They all show 0
    /// Dimensions for box spawning (width, height, depth)
    public var BoxDimensions: SIMD3<Float> = SIMD3(2.0, 2.0, 2.0)
    /// Dimensions for plane spawning (width, depth)
    public var PlaneDimensions: SIMD2<Float> = SIMD2(2.0, 2.0)
    /// Track if we've already spawned copies
    public var HasSpawned: Bool = false

    public init() {
    }
}
Replies: 1 · Boosts: 0 · Views: 517 · Last activity: Dec ’24
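Until vector properties behave in the inspector, one possible workaround (an assumption, not an official fix) is to expose scalar fields, which RCP edits reliably, and derive the SIMD values in code. The component name here is hypothetical:

import RealityKit

public struct SpawnVolumeComponent: Component, Codable {
    // Scalars show their default values in the RCP inspector.
    public var boxWidth: Float = 2.0
    public var boxHeight: Float = 2.0
    public var boxDepth: Float = 2.0

    // Derived value used by the spawning system; never edited in RCP and
    // ignored by Codable because it is computed.
    public var boxDimensions: SIMD3<Float> {
        SIMD3(boxWidth, boxHeight, boxDepth)
    }

    public init() {}
}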
RealityKit and macOS - How to load a Reality Composer scene? Documentation Incomplete/Inaccurate
"Although Xcode generates loading methods for all Reality Composer files in your Xcode project" I do not find this to be true, sadly. Has anyone had any luck or insight into how to build a simple macOS app that imports a scene from a .reality file? The documentation suggests that the simple act of bringing a .reality file in (what about .realitycomposerpro?) will generate code, but that doesn't seem to happen. The sample code (Spaceship) does not compile for macOS. I'd really love just the most generic template of an Xcode project that compiles, with a button that pops open a scene, like the visionOS default immersive project.
Replies: 2 · Boosts: 0 · Views: 811 · Last activity: Dec ’24
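In the absence of generated loading methods, a minimal sketch of what a macOS app might do instead: load the file by URL at runtime. This assumes macOS 15 or later for RealityView and a bundled file named Scene.reality (a placeholder name); for a Reality Composer Pro package, the equivalent call would be Entity(named:in:) with the package's bundle.

import SwiftUI
import RealityKit

struct RealityFileView: View {
    @State private var showScene = false

    var body: some View {
        VStack {
            Button("Open Scene") { showScene = true }

            if showScene {
                RealityView { content in
                    // Load the .reality file from the app bundle at runtime.
                    if let url = Bundle.main.url(forResource: "Scene", withExtension: "reality"),
                       let entity = try? Entity.load(contentsOf: url) {
                        content.add(entity)
                    }
                }
            }
        }
    }
}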
When to use an AnchorEntity or HandTrackingProvider in visionOS
As I understand it, there are two ways I can track a hand, or a joint, in RealityKit: either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)), or set up an ARKitSession with a HandTrackingProvider (a lot more code, which I haven't repeated here). Assuming this is correct, when would I want to use one over the other?
Replies: 2 · Boosts: 0 · Views: 401 · Last activity: Mar ’25
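As a rough rule of thumb, the sketch below contrasts the two: AnchorEntity is the simplest way to parent content to the hand (RealityKit keeps it updated, but the app never sees the transforms), while HandTrackingProvider is more code but hands the app the actual joint data for gesture logic, distance checks, and so on.

import RealityKit
import ARKit

// Option 1: AnchorEntity — parent content to the palm with one line.
func palmAttachedEntity() -> AnchorEntity {
    let palmAnchor = AnchorEntity(.hand(.left, location: .palm))
    let sphere = ModelEntity(mesh: .generateSphere(radius: 0.02),
                             materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
    palmAnchor.addChild(sphere)
    return palmAnchor
}

// Option 2: HandTrackingProvider — more setup, but the app receives transforms.
func logHandPositions() async throws {
    let session = ARKitSession()
    let provider = HandTrackingProvider()
    try await session.run([provider])
    for await update in provider.anchorUpdates where update.anchor.isTracked {
        // Fourth column of the matrix is the anchor's position in world space.
        print(update.anchor.originFromAnchorTransform.columns.3)
    }
}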
ShaderGraphMaterial on entity
Hi, I'm trying to make a 360 stereo viewer, and I have made a ShaderGraphMaterial in Reality Composer Pro. I'm trying to use that material on an inverted sphere which is generated in Swift. When I try to attach the material I get this error: "Type of expression is ambiguous without a type annotation". Here is the code (sorry, I'm a noob =) ):

import SwiftUI
import RealityKit
import RealityKitContent
import PhotosUI

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            guard let skyBoxEntity = await createSkybox() else { return }
            content.add(skyBoxEntity)
        }
    }
}

private func createSkybox() async -> Entity? {
    var matX = try? await ShaderGraphMaterial(named: "/Root/Mat_Stereo360", from: "360Stereo.usda", in: realityKitContentBundle)
    let sphere = await MeshResource.generateSphere(radius: 1000)
    let entity = await Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [matX])). // ERROR HERE: Type of expression is ambiguous without a type annotation
    //entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}

I hope someone can help me =) Best regards, Kim
Replies: 2 · Boosts: 0 · Views: 567 · Last activity: Jan ’25
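A guess at what resolves the ambiguity error, offered as an assumption rather than a confirmed diagnosis: matX is an optional ShaderGraphMaterial? because of try?, and there is a stray period after set(...). Unwrapping the material and dropping the trailing period gives the compiler a concrete type:

import RealityKit
import RealityKitContent

@MainActor
private func createSkybox() async -> Entity? {
    // Unwrap the optional material instead of passing ShaderGraphMaterial? to ModelComponent.
    guard let material = try? await ShaderGraphMaterial(
        named: "/Root/Mat_Stereo360",
        from: "360Stereo.usda",
        in: realityKitContentBundle
    ) else { return nil }

    let sphere = MeshResource.generateSphere(radius: 1000)
    let entity = Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [material]))
    entity.scale *= .init(x: -1, y: 1, z: 1)   // invert the sphere so the texture faces inward
    return entity
}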
How to handle tasks when the Vision Pro is taken off?
I have a gRPC server running inside a task. When the user takes the headset off, the gRPC server will no longer work when they put the headset back on. I would like to detect this so that I can cancel the task (which will effectively close the gRPC server). I am also using a visual indicator to let the user know whether the server is running, but it does not accurately reflect the state of the server after removing and putting back on the headset.
Replies: 1 · Boosts: 0 · Views: 287 · Last activity: Mar ’25
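A minimal sketch of one way to approach this, assuming that taking the headset off moves the app's scenes out of the .active scene phase (worth verifying on device). startServer() is a hypothetical stand-in for the gRPC server entry point:

import SwiftUI

struct ServerHostView: View {
    @Environment(\.scenePhase) private var scenePhase
    @State private var serverTask: Task<Void, Never>?
    @State private var isRunning = false

    var body: some View {
        Label(isRunning ? "Server running" : "Server stopped",
              systemImage: isRunning ? "checkmark.circle" : "xmark.circle")
            .onChange(of: scenePhase) { _, newPhase in
                switch newPhase {
                case .active:
                    // Restart the server when the scene becomes active again.
                    serverTask = Task {
                        isRunning = true
                        await startServer()
                        isRunning = false
                    }
                default:
                    // Cancel the task (and with it the server) when inactive/background.
                    serverTask?.cancel()
                    serverTask = nil
                    isRunning = false
                }
            }
    }

    private func startServer() async {
        // Hypothetical: run the gRPC server until the surrounding task is cancelled.
    }
}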
App Window Closure Sequence Impacts Main Interface Reload Behavior
My visionOS app (Travel Immersive) has two interface windows: a main 2D interface window and a 3D Earth window. If the user first closes the main interface window and then the Earth window, clicking the app icon again will only launch the Earth window while failing to display the main interface window. However, if the user closes the Earth window first and then the main interface window, the app restarts normally. Below is the code of the App struct:

import SwiftUI

@main
struct Travel_ImmersiveApp: App {
    @StateObject private var appModel = AppModel()

    var body: some Scene {
        WindowGroup(id: "MainWindow") {
            ContentView()
                .environmentObject(appModel)
                .onDisappear {
                    appModel.closeEarthWindow = true
                }
        }
        .windowStyle(.automatic)
        .defaultSize(width: 1280, height: 825)

        WindowGroup(id: "Earth") {
            if !appModel.closeEarthWindow {
                Globe3DView()
                    .environmentObject(appModel)
                    .onDisappear {
                        appModel.isGlobeWindowOpen = false
                    }
            } else {
                EmptyView() // Render an empty view when closed
            }
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)

        ImmersiveSpace(id: "ImmersiveView") {
            ImmersiveView()
                .environmentObject(appModel)
        }
    }
}
Replies: 6 · Boosts: 0 · Views: 249 · Last activity: Apr ’25
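One possible workaround, offered as an assumption rather than a confirmed fix: when the Earth window is relaunched on its own, ask the system to open the main window as well. appModel.isMainWindowOpen is a hypothetical flag the app would need to maintain from the main window's onAppear/onDisappear:

import SwiftUI

struct Globe3DContainerView: View {
    @EnvironmentObject var appModel: AppModel
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Globe3DView()
            .environmentObject(appModel)
            .onAppear {
                // If the system restored only this window, reopen the main one.
                if !appModel.isMainWindowOpen {
                    openWindow(id: "MainWindow")
                }
            }
    }
}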
Gaze, Eye Tracking responsive Stuff
I am wondering: is it possible to configure a 3D object to respond to a person's gaze, for example changing the colors of some parts of the 3D model where the person is looking, i.e. where their gaze lands on the surface of the model? For example, say there is a 3D model of a cool dragon 🐉 in a person's physical space, seen through the mixed-reality view of a Vision Pro. It would be really cool to change the color, or make some specific parts of the dragon's skin shimmer, but only in the areas where the person is looking. Is this possible? Is it doable with the eye tracking of the Vision Pro? Any advice would be appreciated. 🤝 🤓 I am new to iOS and visionOS development.
Replies: 1 · Boosts: 0 · Views: 220 · Last activity: Mar ’25
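Apps on visionOS do not receive raw gaze data, but the system can apply a hover effect to whatever entity the user is looking at. A minimal sketch of that mechanism is below; per-region color changes across a single mesh would need the shader-graph hover state in Reality Composer Pro, which this does not cover, and the dragon part here is just a placeholder sphere.

import RealityKit

func makeGazeHighlightedEntity() -> ModelEntity {
    let dragonPart = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                 materials: [SimpleMaterial(color: .purple, isMetallic: false)])
    // Input target and collision components are required for the entity to
    // receive gaze/hover; HoverEffectComponent adds the system highlight.
    dragonPart.components.set(InputTargetComponent())
    dragonPart.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.1)]))
    dragonPart.components.set(HoverEffectComponent())
    return dragonPart
}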