Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Request for gaze data in fully immersive Metal apps
Hi, we are trying to port our Unity app from other XR devices to Vision Pro, so it is much easier for us to use the fully immersive Metal rendering layer. To stay true to the platform, we want to keep the gaze/pinch interaction system. But we just noticed that, unlike PolySpatial XR apps, visionOS XR with Metal does not provide gaze info unless the user is actively pinching, which prevents any attempt to give visual feedback on what they are looking at (buttons, etc.). Is this on Apple's roadmap? Thanks
3 replies · 0 boosts · 295 views · Jan ’25

ShaderGraphMaterial on entity
Hi, I'm trying to make a 360° stereo viewer, and I have made a ShaderGraphMaterial in Reality Composer Pro. I'm trying to use that material on an inverted sphere which is generated in Swift. When I try to attach the material I get the error "Type of expression is ambiguous without a type annotation". Here is the code (sorry, I'm a noob =)):

import SwiftUI
import RealityKit
import RealityKitContent
import PhotosUI

struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            guard let skyBoxEntity = await createSkybox() else { return }
            content.add(skyBoxEntity)
        }
    }
}

private func createSkybox() async -> Entity? {
    var matX = try? await ShaderGraphMaterial(named: "/Root/Mat_Stereo360", from: "360Stereo.usda", in: realityKitContentBundle)
    let sphere = await MeshResource.generateSphere(radius: 1000)
    let entity = await Entity()
    entity.components.set(ModelComponent(mesh: sphere, materials: [matX])). // ERROR HERE: Type of expression is ambiguous without a type annotation
    // entity.scale *= .init(x: -1, y: 1, z: 1)
    return entity
}

I hope someone can help me =) Best regards, Kim
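A hedged note for readers hitting the same error: the ambiguity most likely comes from the stray period after components.set(...) and from passing the optional ShaderGraphMaterial? straight into the materials array. A minimal sketch of the adjusted function under those assumptions, keeping the same material path and bundle (marking it @MainActor also removes the scattered awaits):

import RealityKit
import RealityKitContent

@MainActor
private func createSkybox() async -> Entity? {
    // Unwrap the throwing, async material load before putting it in the array.
    guard let material = try? await ShaderGraphMaterial(
        named: "/Root/Mat_Stereo360",
        from: "360Stereo.usda",
        in: realityKitContentBundle
    ) else { return nil }

    let sphere = MeshResource.generateSphere(radius: 1000)
    let entity = Entity()
    // No trailing period after set(...), and the array now holds a non-optional material.
    entity.components.set(ModelComponent(mesh: sphere, materials: [material]))
    entity.scale *= .init(x: -1, y: 1, z: 1)   // invert the sphere so the texture faces inward
    return entity
}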
2 replies · 0 boosts · 293 views · Jan ’25

Screenshot using visionOS (Code) on Apple Vision Pro
I want to create a screenshot (static image) of the current view on Apple Vision Pro programmatically in visionOS. Unfortunately, I currently can't find a way to achieve this. The only option I've found so far is through Reality Composer Pro; however, since I want to accomplish this directly in code, that approach is not an option for me.
1 reply · 0 boosts · 236 views · Jan ’25

Developer Capture and microphone input for audio-based apps
Hi Apple engineers, I'm currently working on an app that uses the incoming microphone audio and gives visual feedback to the user about the incoming audio. I would like to use Reality Composer Pro's Developer Capture to get a high quality recording of the app and its use cases for the App Store — but any time I have an in-progress capture, my app stops receiving the incoming audio. It almost seems as if the microphone audio is getting 'hijacked' during the screen capture, which prevents me from demonstrating the app's core features. Could you please advise on how to proceed?
2 replies · 0 boosts · 316 views · Jan ’25

Reality Composer Pro Audio "On Tap" Behaviors Help
Looking for help on getting "On Tap" to work inside Reality Composer Pro for my Apple Vision Pro project. I can get it to work when using "on added to scene", but if I switch to "on tap", the audio will not play when attaching the audio to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the "on added to scene" setup that works correctly, to help troubleshoot my non-working "on tap":

Behaviors: "on added to scene", action - timeline
Input Target: checkmark enabled, allowed all
Collision: set to default
Audio Library: source mp3 file
Channel Audio: resource mp3 file above
Timeline: Play Audio with mp3 file added

This setup in RCP allows my AVP project to launch correctly with audio "on added to scene". But when switching the behavior to "on tap", the audio will no longer play and I cannot figure out why. I've tried several different options and nothing works. Please help!
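For anyone comparing setups: as far as I understand, an RCP "on tap" trigger also needs the app to deliver the tap to the entity at runtime, typically via a SwiftUI spatial tap gesture. A rough sketch under that assumption follows; applyTapForBehaviors() is the call I believe forwards the tap to the behavior system (verify against the current RealityKit documentation), and the scene name "Scene" is just a placeholder.

import SwiftUI
import RealityKit
import RealityKitContent

struct TapAudioView: View {
    var body: some View {
        RealityView { content in
            // Load the RCP scene that contains the entity with the "on tap" behavior.
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Assumption: forwards the tap to the entity's RCP "on tap" trigger.
                    value.entity.applyTapForBehaviors()
                }
        )
    }
}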
1 reply · 0 boosts · 306 views · Jan ’25

Slow Auto Focus on iPhone 16 Pro with ARKit camera
I have recently started testing ARKit on an iPhone 16 Pro and I have noticed that the autofocus reaction on this device is much slower than on other devices. For example, if I point the camera at a close object, autofocus takes 4-5 seconds to stabilize; the focal length is adjusted very slowly. In some cases (although this is rare) autofocus seems almost stuck and requires a bit of device movement to trigger. This is quite problematic when using some ARKit features like Image and Object detection, as the detection algorithms struggle with out-of-focus images. The problem is limited to ARKit; autofocus is significantly more responsive when the standard AVFoundation camera API is used. This behavior is easy to reproduce with any of the ARKit samples like https://developer.apple.com/documentation/arkit/arkit_in_ios/content_anchors/tracking_and_visualizing_planes Is anybody else experiencing this problem?
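One workaround worth testing (a sketch, not a confirmed fix): since ARKit 6 on iOS 16, the underlying capture device for the primary camera is exposed, so focus can be driven manually instead of waiting for ARKit's continuous autofocus. The function name and the lens position value below are illustrative placeholders for a close subject.

import ARKit
import AVFoundation

func lockFocusForCloseSubject(lensPosition: Float = 0.8) {
    // Exposed by ARKit 6 for world-tracking sessions; nil on unsupported configurations.
    guard let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera else { return }
    do {
        try device.lockForConfiguration()
        if device.isFocusModeSupported(.locked) {
            device.setFocusModeLocked(lensPosition: lensPosition, completionHandler: nil)
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not configure the ARKit capture device: \(error)")
    }
}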
3 replies · 0 boosts · 331 views · Jan ’25

Cast virtual light on real-world environments in visionOS/RealityKit?
Hi everyone, I've been exploring an idea that involves using virtual light sources in visionOS/RealityKit to interact with real-world objects. Specifically, I'd like to simulate a scenario where a virtual spotlight or other light source casts light or shadows onto real-world environments, creating the effect of virtual lighting interacting with physical surroundings. Is this currently feasible within visionOS/RealityKit? Thank you!
1 reply · 0 boosts · 288 views · Jan ’25

RealityView argument type does not conform to protocol View
I'm working on creating a panorama view in AVP. When I get to this line of code, Xcode says "Type 'Entity' does not conform to protocol 'View'":

private var realityView: RealityView!

It also flags this line, with the same error message:

private func setupPanoramaScene(for content: RealityView.Content)

What should I put as an argument for RealityView? It doesn't work without arguments either.
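For reference, a sketch of the idiomatic structure on visionOS: RealityView is itself a SwiftUI view, so it is declared inside body rather than stored in a property, and helpers can simply build entities that the make closure adds. The view name is illustrative and the UnlitMaterial is a placeholder where the panorama texture would go.

import SwiftUI
import RealityKit

struct PanoramaView: View {
    var body: some View {
        RealityView { content in
            content.add(makePanoramaSphere())
        }
    }

    @MainActor
    private func makePanoramaSphere() -> ModelEntity {
        let sphere = MeshResource.generateSphere(radius: 1000)
        let material = UnlitMaterial(color: .white)   // placeholder for the panorama texture
        let entity = ModelEntity(mesh: sphere, materials: [material])
        entity.scale *= .init(x: -1, y: 1, z: 1)      // invert the sphere so it is viewed from inside
        return entity
    }
}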
3 replies · 0 boosts · 311 views · Jan ’25

ARKit: Prevent Asset Clipping
Hello Apple Team, I am working on a RealityKit project for iOS, where I need to place a 3D asset far away from the camera (approximately 15 to 30 meters). When enabling people occlusion, the 3D asset gets clipped when moved far away. Is it possible to enable people occlusion for assets at close range (less than 10 meters) while disabling it for assets farther away to prevent clipping? I understand that it is possible to switch configurations at runtime. However, I would like to place assets both close to and far from the camera simultaneously. Thank you for your help! Kind regards
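I can't confirm that per-entity occlusion is supported; for completeness, here is a sketch of what the post's "switch configurations at runtime" idea could look like in code, toggling person segmentation for the whole session based on the nearest asset's distance. The 10 m threshold and the helper name are illustrative only.

import ARKit
import RealityKit

// Toggles people occlusion for the whole session depending on how close the
// nearest occludable asset currently is. This is session-wide, not per-entity.
func updatePeopleOcclusion(for arView: ARView, nearestAssetDistance: Float) {
    guard let configuration = arView.session.configuration as? ARWorldTrackingConfiguration else { return }
    let wantsOcclusion = nearestAssetDistance < 10
    let hasOcclusion = configuration.frameSemantics.contains(.personSegmentationWithDepth)
    guard wantsOcclusion != hasOcclusion else { return }

    if wantsOcclusion {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    } else {
        configuration.frameSemantics.remove(.personSegmentationWithDepth)
    }
    // Re-running with the same configuration and no reset options keeps existing anchors.
    arView.session.run(configuration)
}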
1 reply · 0 boosts · 285 views · Jan ’25

Combining ARKit Face Tracking with High-Resolution AVCapture and Perspective Rendering on Front Camera
Hello Apple Developer Community,

We're developing an application using the front camera that requires both real-time ARKit face tracking/guidance and the capture of high-resolution still images via AVCaptureSession. Our goal is to leverage ARKit's depth and face data to render a captured image from another perspective post-capture, maintaining high image quality.

Our approach:

1. Real-time ARKit guidance: Utilize ARKit (e.g., ARFaceTrackingConfiguration) for continuous face tracking, depth, and scene understanding to guide the user in real time.
2. High-resolution capture transition: At the moment of capture, we plan to pause the ARKit session and switch to an AVCaptureSession to take a high-resolution image. We assume that for a front-facing image the subject's face is directly front-on and that the relative pose between the face and camera remains the same during the transition; the only variation we expect is a change in distance. Our intention is to minimize the delay between the last ARKit frame and the high-res capture to maintain temporal consistency.
3. Post-processing perspective rendering: Using the last ARKit face data (depth, pose, and landmarks) along with the high-resolution 2D image, we aim to render the scene from another perspective. We want to correct the perspective of the 2D image using SceneKit or RealityKit, leveraging the collected ARKit scene information to achieve a natural, high-quality rendering from a different viewpoint. The rendering should match the quality of a normally captured high-resolution image, adjusting for the difference in distance while using the stored ARKit data to correct perspective.

Our questions:

1. Session transition best practices: What are the recommended best practices to seamlessly pause ARKit and switch to a high-resolution AVCaptureSession on the front camera? How can we minimize user movement or other issues during this brief transition, given our assumption that the face-camera pose remains largely consistent except for distance changes?
2. Data integration for perspective rendering: How can we effectively integrate stored ARKit face, depth, and pose data with the high-res image to perform accurate perspective correction or rendering from another viewpoint? Given that we assume the relative pose is constant except for distance, are there strategies or APIs that leverage this assumption to simplify the perspective transformation?
3. Perspective correction with SceneKit/RealityKit: What techniques or workflows using SceneKit or RealityKit are recommended for correcting the perspective of a captured 2D image based on ARKit scene data? How can we use these frameworks to render the high-resolution image from an alternative perspective while maintaining image quality and fidelity?
4. Pitfalls and guidelines: What common pitfalls should we be aware of when combining ARKit tracking data with high-res capture and post-processing for perspective rendering? Are there performance considerations, recommended thresholds for acceptable temporal consistency, or validation techniques to ensure the ARKit data remains applicable at the moment of high-res capture?

We appreciate any advice, sample code references, or documentation pointers that could assist us in implementing this workflow effectively. Thank you!
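One alternative that might avoid the ARKit-to-AVCaptureSession handoff entirely (a sketch only; whether the face-tracking configuration on a given device offers a high-resolution-capable video format needs to be verified): ARKit on iOS 16+ can deliver a single high-resolution still from the running session.

import ARKit

func runAndCaptureHighResFrame(with session: ARSession) {
    let configuration = ARFaceTrackingConfiguration()
    // Prefer a video format that supports high-resolution frame capturing, if one exists.
    if let format = ARFaceTrackingConfiguration.recommendedVideoFormatForHighResolutionFrameCapturing {
        configuration.videoFormat = format
    }
    session.run(configuration)

    // Later, at the moment of capture:
    session.captureHighResolutionFrame { frame, error in
        guard let frame else {
            print("High-resolution capture failed: \(String(describing: error))")
            return
        }
        // frame.capturedImage is the high-resolution pixel buffer; frame.camera carries
        // the intrinsics and pose needed for perspective re-projection afterwards.
        _ = frame.capturedImage
    }
}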
2 replies · 0 boosts · 352 views · Jan ’25

RealityView Not Refreshing With SwiftData
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView. Also, the RealityView does not close when I move to a different tab; it keeps everything on and tracking, leaving the model in the same location I left it.

import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")

            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the named entity to the anchor
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + " entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if item.enabled {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
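A hedged sketch of the pattern that usually addresses the refresh half of this: RealityView takes a second update: closure that re-runs when observed SwiftUI state changes, so visibility can be reconciled there instead of only in the one-time make closure. The example uses a plain @State flag in place of the post's @Query items, but the same reconciliation would loop over the query results.

import SwiftUI
import RealityKit

struct ToggleEntityView: View {
    @State private var showSphere = true
    @State private var sphere: ModelEntity?

    var body: some View {
        VStack {
            RealityView { content in
                // One-time setup.
                let model = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                        materials: [SimpleMaterial()])
                content.add(model)
                sphere = model                    // keep a handle for the update closure
            } update: { _ in
                // Re-runs whenever observed SwiftUI state changes (here, showSphere).
                sphere?.isEnabled = showSphere
            }

            Toggle("Show sphere", isOn: $showSphere)
                .padding()
        }
    }
}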
3 replies · 0 boosts · 321 views · Jan ’25

RealityView Gestures for iOS
I started a new project using RealityKit and RealityView, intended as an AR app on iPhone and iPad, but eventually visionOS as well. I'm challenged because much of the recent documentation, WWDC videos, etc., covers features that are visionOS only. Right now, I would simply like to create some gesture functionality similar to the AR Quick Look defaults, meaning drag to reposition and two fingers to rotate or zoom. In the past, this would be implemented with something like:

arView.installGestures([.all], for: entity)

However, with RealityView I don't know how (or whether it is possible) to access an ARView. In RealityKit, I have found this doc: https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures However, many of the features in that posting are visionOS only, and I've found no good documentation on the topic that is specific to, or at least compatible with, iOS. I know reverting to an ARView is an option, but I want to use RealityView if at all possible, as I see it as more forward-looking.
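A minimal sketch of what appears to be the RealityView-native replacement on iOS 18 (hedged; availability should be checked against the linked article): SwiftUI gestures with targetedToAnyEntity(), applied to an entity that has collision shapes and an InputTargetComponent. The drag here only spins the box; repositioning and two-finger zoom/rotate would follow the same pattern.

import SwiftUI
import RealityKit

struct DraggableModelView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(mesh: .generateBox(size: 0.2),
                                  materials: [SimpleMaterial()])
            // Gestures only target entities that can receive input and have collision.
            box.generateCollisionShapes(recursive: true)
            box.components.set(InputTargetComponent())
            content.add(box)
            content.camera = .spatialTracking      // AR passthrough camera on iOS
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // Map horizontal drag distance to a rotation around the y-axis.
                    let angle = Float(value.gestureValue.translation.width) * 0.01
                    value.entity.transform.rotation = simd_quatf(angle: angle, axis: [0, 1, 0])
                }
        )
    }
}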
1 reply · 0 boosts · 251 views · Jan ’25

Custom Component causing EXC_BAD_ACCESS
Hello, after watching the Work with Reality Composer Pro content in Xcode, I created the following custom component:

public struct TestComponent: Component, Codable {
    public var text: String = "helloWorld"
    public init() {}
}

I registered the custom component as suggested, in the App.init function:

init() {
    RealityKitContent.TestComponent.registerComponent()
}

The custom component is decoded and the RealityView shows the sphere when I load the "Scene" from the RealityKitContent bundle. But if I export the scene to a separate file named "test_scene.usdz" on disk, share it to the simulator, and then try to load it in the RealityView, it causes EXC_BAD_ACCESS:

#0 0x0000000194c8d508 in Swift._StringObject.getSharedUTF8Start() -> Swift.UnsafePointer<Swift.UInt8> ()

Printing the loaded entity shows the custom component, but trying to display it in the RealityView crashes the app immediately. Is there a way to fix it?
4 replies · 0 boosts · 436 views · Jan ’25

What's the relation between SwiftUI frame sizes and RealityKit entity sizes?
Currently I want to recreate a window similar to a system window in an ImmersiveSpace, but RealityKit only works in meters. I created a plane entity, and I don't know how to set its size in meters so that it exactly matches the size of a system window. I also want to know the z and y position of the system window in the immersive space.
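A hedged sketch of one way to bridge the units on visionOS: SwiftUI's physicalMetrics environment converter maps window points to physical meters, which is the unit RealityKit expects. The 1280x720-point size below is an assumption standing in for "a system-window-like size", not a documented value, and I don't know of a public API for reading the system window's actual position in the immersive space.

import SwiftUI
import RealityKit

struct WindowSizedPlane: View {
    @Environment(\.physicalMetrics) private var physicalMetrics

    var body: some View {
        RealityView { content in
            // Convert a point-based size to meters before building the mesh.
            let width = Float(physicalMetrics.convert(CGFloat(1280), to: .meters))
            let height = Float(physicalMetrics.convert(CGFloat(720), to: .meters))
            let plane = ModelEntity(mesh: .generatePlane(width: width, height: height),
                                    materials: [SimpleMaterial()])
            plane.position = [0, 1.5, -2]          // arbitrary placement for the sketch
            content.add(plane)
        }
    }
}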
1 reply · 0 boosts · 211 views · Jan ’25

AudioPlaybackController stops playing when .plain window is closed
Suppose there is an immersive space, and an Entity() added to the space as a child entity of the content. This entity is responsible for playing background music by calling prepareAudio, getting a controller, and playing the music (see the basic code below). While it is playing, a .plain window and an immersive space are both presented. I believe the immersive space is holding the handle of the controller, so as long as the immersive space is open, the music shouldn't stop.

However, if I close the .plain window (using the system-level close button), the music just stops, even though the immersive space is still open. If I then check the value of controller.isPlaying, it is still true, but you cannot hear the music anymore.

To reproduce, open a visionOS template app project, selecting Volume and Full immersive, and replace some code in ImmersiveView.swift with the code below. Also drag in any .mp3 file and replace the AudioFileResource's name. Then you can reproduce this bug.

RealityView { content in
    // Add the initial RealityKit content
    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)

        // Put skybox here. See example in World project available at
        // https://developer.apple.com/

        if let audioResource = try? await AudioFileResource(named: "anyMP3file.mp3") {
            let ent = Entity()
            immersiveContentEntity.addChild(ent)
            let controller = ent.prepareAudio(audioResource)
            controller.play()
        }
    }
}

I wonder why this happens? How should I keep the music playing when I close the .plain window? Thanks!
1 reply · 1 boost · 273 views · Jan ’25