Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic


Understanding the Distance for Physical Object Visibility in Vision Pro Immersive System
I understand that the system helps maintain user comfort by automatically adjusting the opacity of content in certain situations, like when someone moves too quickly or gets too close to a physical object. The content in front of them dims briefly to allow a clearer view of their surroundings. I'd like to know the specific distance at which the system begins to show the physical object, or what criteria it uses for this adjustment.
0 replies · 0 boosts · 319 views · last activity Oct ’24
Collisions are not detected if the entity is a child of a hand AnchorEntity
I have created an AnchorEntity for my index fingertip and then created a model entity (a sphere) as a child of it. This model entity has a collision component and a physics body component. I tried using dynamic and kinematic modes for the physics body component.

I have created a plane from a cube that has a collision component and a static physics body. I have subscribed to CollisionEvents.Began on this plane and stored the subscription in an EventSubscription state variable:

@State private var collisionSubscription: EventSubscription?

Then I subscribed as follows:

collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: self.boxTopCollision) { collisionEvent in
    print("something collided with the box top")
}

The collision event fires when I place the sphere directly above the plane and let gravity cause the collision, but when the sphere is a child of the anchor entity, the collision events don't fire. I also tried adding the collision and physics body components directly to the anchor entity, and that doesn't work either.

I created another sphere with a physics body, a collision component, and an input target component, and manipulated it with a drag gesture. While the manipulation is happening and the sphere touches the plane, no event fires; but when the gesture ends with the sphere still in contact with the plane, the event does fire.

I am confused as to why this is happening. All I want is a collider on my fingertip so I can detect collisions with this plane. How can I make this work? Is there some unstated rule that a physics body being manipulated manually cannot trigger collision events?

For more context: I am using SpatialTrackingSession with the .hand tracking configuration, and I am successfully tracking the fingertip.
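For reference, a condensed sketch of the setup described above (a fingertip-anchored sphere plus a static plane subscribing to CollisionEvents.Began); the entity sizes, the anchor location, and the view name are illustrative assumptions, not a fix for the reported behavior:

import SwiftUI
import RealityKit

struct FingertipCollisionView: View {
    @State private var collisionSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            // Fingertip anchor with a small sphere collider as a child.
            let fingertipAnchor = AnchorEntity(.hand(.right, location: .indexFingerTip))
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.01))
            sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.01)]))
            sphere.components.set(PhysicsBodyComponent(mode: .kinematic))
            fingertipAnchor.addChild(sphere)
            content.add(fingertipAnchor)

            // Static plane that listens for collisions.
            let plane = ModelEntity(mesh: .generateBox(width: 0.3, height: 0.01, depth: 0.3))
            plane.components.set(CollisionComponent(shapes: [.generateBox(width: 0.3, height: 0.01, depth: 0.3)]))
            plane.components.set(PhysicsBodyComponent(mode: .static))
            content.add(plane)

            collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: plane) { event in
                print("something collided with the plane")
            }
        }
    }
}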
3 replies · 0 boosts · 683 views · last activity Sep ’24
Eye Difference in Object Tracking
Hi all, I am having trouble debugging an error where the wireframe entity representation in the object tracking demo ("Explore object tracking for visionOS") appears incorrect in the right eye of the Vision Pro but correct in the left eye. Would anyone happen to know what is going on? I have attempted to offset the object by changing world coordinates, but this moves the object in both the left and the right eye. Could this be due to the visionOS beta update (2.0 to 2.2)? I am currently using visionOS 2.2. Thanks!
1 reply · 0 boosts · 375 views · last activity Nov ’24
visionOS: passthrough through broadcast shows a black background
Hey, I have Enterprise Access on the account and have added the passthrough capability and the entitlement to both the main project and the "Broadcast Upload" extension. The broadcast works, except it returns a black screen. I am attaching some screenshots of the entitlements file below. I have tried searching online to no avail, so any help would be greatly appreciated. I am also attaching the code.

import Foundation
import AVFoundation
import ReplayKit

class VideoAssetWriter {
    private var isRecording = false
    private var outputStream: OutputStream?

    private func setupConnection() {
        guard outputStream == nil else { return }
        print("setting up connection.")
        let serverIP = macIP
        let port = 12345
        var readStream: Unmanaged<CFReadStream>?
        var writeStream: Unmanaged<CFWriteStream>?
        CFStreamCreatePairWithSocketToHost(kCFAllocatorDefault,
                                           serverIP as CFString,
                                           UInt32(port),
                                           &readStream,
                                           &writeStream)
        guard let writeStream = writeStream?.takeRetainedValue() else {
            print("Failed to create write stream")
            return
        }
        self.outputStream = writeStream as OutputStream
        self.outputStream?.open()
    }

    func startRecording() {
        isRecording = true
    }

    func processVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        print("Processing Sample 1")
        guard isRecording else { return }
        print("Processing Sample 2")
        sendVideoChunkToServer(sampleBuffer)
    }

    private func sendVideoChunkToServer(_ sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        print("Processing Sample 3")
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        let context = CIContext()
        guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return }
        print("Processing Sample 4")
        let image = UIImage(cgImage: cgImage)
        if let imageData = image.jpegData(compressionQuality: 0.5) {
            guard imageData.count <= 10_000_000 else {
                print("Frame too large: \(imageData.count) bytes")
                return
            }
            if outputStream == nil {
                setupConnection()
            }
            print("sending frame size up connection.")
            // Convert to network byte order (big-endian)
            var frameSize = UInt32(imageData.count).bigEndian
            let sizeData = Data(bytes: &frameSize, count: MemoryLayout<UInt32>.size)
            _ = sizeData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: sizeData.count)
            }
            print("sending image data up connection.")
            // Send frame data
            _ = imageData.withUnsafeBytes {
                outputStream?.write($0.baseAddress!.assumingMemoryBound(to: UInt8.self), maxLength: imageData.count)
            }
        }
    }

    func stopRecording() {
        isRecording = false
        outputStream?.close()
        outputStream = nil
    }
}

This is the broadcast picker view wrapper:

// Broadcast Picker View wrapper
struct BroadcastButtonView: UIViewRepresentable {
    func makeUIView(context: Context) -> RPSystemBroadcastPickerView {
        let broadcastPickerView = RPSystemBroadcastPickerView(
            frame: CGRect(x: 0, y: 0, width: 200, height: 200)
        )
        // Make sure this matches your broadcast extension bundle identifier
        broadcastPickerView.preferredExtension = "my-extension-bundle-identifier"
        broadcastPickerView.showsMicrophoneButton = false
        return broadcastPickerView
    }

    func updateUIView(_ uiView: RPSystemBroadcastPickerView, context: Context) { }
}

The extension SampleHandler:

override func broadcastPaused() {
    print("paused broadcast")
    // User has requested to pause the broadcast. Samples will stop being delivered.
}

override func broadcastResumed() {
    print("resumed broadcast")
    // User has requested to resume the broadcast. Samples delivery will resume.
}

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    print("broadcast received")
    assetWriter?.processVideoSampleBuffer(sampleBuffer)
}

Looking forward to any and all help.

(Attached screenshots: the Information Property List, the Information Property List for the extension, and the capabilities.)
1 reply · 0 boosts · 482 views · last activity Dec ’24
Post processing in visionOS
WWDC21 had a cool demo project with fish and a watery, misty look ("Dive into RealityKit"). It used post processing in RealityKit, but the ARView class isn’t available in visionOS. Can CompositorLayer be used instead for post processing in full immersion?
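This doesn't confirm whether CompositorLayer can replicate RealityKit's post-process callbacks; as a point of reference, here is only a minimal sketch of how a CompositorLayer hosts a custom Metal render loop in full immersion, where a post-process pass could be added. The configuration type and identifiers are assumptions:

import SwiftUI
import CompositorServices

// Hypothetical configuration type; names are illustrative.
struct FullImmersionConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        // Pick a drawable pixel format; a post-process pass would render into this.
        configuration.colorFormat = .bgra8Unorm_srgb
    }
}

struct ImmersiveScene: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "MetalImmersiveSpace") {
            CompositorLayer(configuration: FullImmersionConfiguration()) { layerRenderer in
                // Drive your own Metal render loop from the LayerRenderer here.
                // Any "post processing" would be an extra render or compute pass
                // over each frame before presenting the drawable.
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}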
0 replies · 0 boosts · 289 views · last activity Nov ’24
The multiview video screen turns blank when returning
I am encountering an issue while using the multiview video demo provided at this link "https://developer.apple.com/documentation/avkit/creating-a-multiview-video-playback-experience-in-visionos/". Specifically, when running on versions of visionOS prior to 2.2, navigating back results in a blank screen. Has anyone else experienced this problem and found a solution? Any advice or workaround would be greatly appreciated.
1 reply · 0 boosts · 346 views · last activity Feb ’25
Spatial streaming from iPhone
Hi, I am trying to stream spatial video in real time from my iPhone 16. I am able to record spatial video to a file using:

let videoDeviceOutput = AVCaptureMovieFileOutput()

However, when I try to grab the raw sample buffers, they don't include any spatial information:

let captureOutput = AVCaptureVideoDataOutput()

// when initializing the camera
session.addOutput(captureOutput)
captureOutput.setSampleBufferDelegate(self, queue: sessionQueue)

// finally
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // use sample buffer (but no spatial data available here)
}

Is this how it's supposed to work, or am I missing something? This video: https://developer.apple.com/videos/play/wwdc2023/10071 gives us a clue towards setting up spatial streaming, and I've got the backend ready for 3D HLS streaming. Now I am only stuck on how to send the video stream to my server.
1 reply · 0 boosts · 768 views · last activity Oct ’24
Vision Pro app crashes when scene loads in ImmersiveSpace
Hello, I am getting the following errors on the console and my app crashes. The screen goes dark, then the Apple logo appears, and the app crashes:

apply fence tx failed (client=0x61dbbfd7) [0xfffffecc (ipc/mig) server died]
[C:3] Error received: Connection interrupted.
Failed to commit transaction (client=0x94097449) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xe9684b50) [0x10000003 (ipc/send) invalid destination port]
[C:3-1] Error received: Connection interrupted.
Failed to commit transaction (client=0xbcac17e9) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x52392119) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xff841d17) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xdef5c915) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xefdc8bf3) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xd50c1eff) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x15690a46) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xf296f56b) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
apply fence tx failed (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
nw_read_request_report [C1] Receive failed with error "No message available on STREAM" (repeated 9 times)
nw_protocol_socket_reset_linger [C1:2] setsockopt SO_LINGER failed [22: Invalid argument]
apply fence tx failed (client=0x61dbbfd7) [0x10000003 (ipc/send) invalid destination port]
Failed to set override status for bind point component member. (repeated 3 times)
Message from debugger: Terminated due to signal 9

Could you please tell me what the reason is and how I can resolve it? After launching the app two or three times, it works fine from that point onwards, but this happens from time to time while debugging.
0 replies · 0 boosts · 549 views · last activity Sep ’24
HandTracking
Hi guys, I'm currently developing a game for the Vision Pro and I'm trying to figure out how hand tracking works so I can make a superpower appear when the user looks at their hand and opens it wide. But I'm really struggling to wrap my head around the whole concept and how to implement it in my code. Is there anything out there (other than the Apple docs) or anyone who could help shed some light on the idea and how I could actually implement it? It would be much appreciated. Thanks
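For orientation, a minimal sketch of the visionOS hand-tracking flow using ARKit's HandTrackingProvider; the class name is illustrative and the "hand wide open" gesture logic is a hypothetical placeholder:

import ARKit

// Minimal hand-tracking loop for visionOS. Requires an open immersive space
// and hand-tracking authorization.
final class HandTrackingModel {
    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()

    func run() async throws {
        try await session.run([handTracking])
        for await update in handTracking.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }
            // Example: read one joint's transform (relative to the hand anchor).
            let indexTip = skeleton.joint(.indexFingerTip)
            _ = indexTip.anchorFromJointTransform
            // A real "hand is open" check would compare several joint positions;
            // that gesture logic is left as a placeholder here.
        }
    }
}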
1 reply · 0 boosts · 422 views · last activity Oct ’24
Issue with Vertex Animation Scaling Not Working in Reality Composer Pro
Hi everyone, I’m working on a project in Reality Composer Pro, and I’ve encountered an issue with vertex animations. Here’s what I’ve done:

I created a model in Blender, which includes two animations:
- One that scales the vertices of the model.
- Another that moves the model's position.

I imported both animations into Reality Composer Pro, and the position animation works fine, but the vertex scaling animation does not seem to work.

What I’m trying to achieve: I want the vertex scaling animation to play correctly in Reality Composer Pro alongside the movement animation.

Problem: The position animation works as expected, but the vertex scaling animation does not work when applied in Reality Composer Pro. I have checked the vertex scaling animation in other software, and it works fine there. The issue seems to be specific to Reality Composer Pro.

Is vertex animation scaling supported in Reality Composer Pro? If so, what might be causing this issue? Any advice or solutions would be greatly appreciated!
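As a point of reference, a tiny sketch of how imported clips are typically played back from RealityKit once the Reality Composer Pro package is loaded; the entity and loading step are assumed, and this only illustrates the playback path, not a fix for the scaling issue:

import RealityKit

// Assumes `entity` was loaded from the Reality Composer Pro package and
// carries the imported animations.
func playFirstAnimation(on entity: Entity) {
    if let animation = entity.availableAnimations.first {
        entity.playAnimation(animation.repeat())
    }
}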
0 replies · 0 boosts · 430 views · last activity Dec ’24
ARKit AnchorUpdate<ImageAnchor>.event Behavior Changes in visionOS 2.1
In the visionOS beta, when using ARKit for image detection, the initially detected AnchorUpdate status is .add, and subsequent detections of the same image are marked as .update. However, after toggling the immersiveSpace, the same image was detected with the status .add again. After updating to visionOS 2.1, the first detection status remains .add, and subsequent detections of the same image remain .update, even after toggling the immersiveSpace. Could this be due to a change in the processing flow?
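For context, a sketch of the kind of anchor-update loop being described, using ImageTrackingProvider; the class name is illustrative, the reference images are assumed to be loaded elsewhere, and the event cases are written as the .added/.updated/.removed names used by AnchorUpdate.Event:

import ARKit

final class ImageTrackingModel {
    private let session = ARKitSession()
    private let imageTracking: ImageTrackingProvider

    // Reference images are assumed to be loaded elsewhere
    // (e.g. from an asset catalog group).
    init(referenceImages: [ReferenceImage]) {
        imageTracking = ImageTrackingProvider(referenceImages: referenceImages)
    }

    func observe() async throws {
        try await session.run([imageTracking])
        for await update in imageTracking.anchorUpdates {
            switch update.event {
            case .added:
                print("first detection of \(update.anchor.id)")
            case .updated:
                print("subsequent detection / pose update")
            case .removed:
                print("anchor removed")
            }
        }
    }
}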
2 replies · 0 boosts · 430 views · last activity Nov ’24
I am developing an immersive video app for visionOS but have an issue with the app and video player windows
In a visionOS app, we have two types of windows:

Main App Window – the default window that launches when the app starts. It displays the video listings and other primary content.
Immersive Space – opens only when a user starts streaming or playing a video.

Issue: When entering the immersive space, the main app window remains visible in front of it unless manually closed. To avoid this, I currently close the main window when transitioning to the immersive space and reopen it when exiting. However, this causes the app to restart instead of resuming from its previous state.

Desired behavior: I want the main app window to retain its state and seamlessly resume from where it was before entering immersive mode, rather than restarting.

Attempts and challenges:
- Tried managing opacity, visibility, and state preservation, but none worked as expected.
- Couldn't find a way to push the main window to the background while bringing the immersive space to the foreground.

I'm looking for a solution that keeps the main window's state intact while transitioning between immersive and normal modes.
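For reference, a minimal sketch of the open/dismiss pattern described above, using SwiftUI's environment actions; the "main" and "player" identifiers and the view name are hypothetical:

import SwiftUI

// Sketch of the described flow: open the immersive player, then hide the
// main window; reopen it when leaving the immersive space.
struct PlayButton: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace
    @Environment(\.openWindow) private var openWindow
    @Environment(\.dismissWindow) private var dismissWindow

    var body: some View {
        Button("Play video") {
            Task {
                _ = await openImmersiveSpace(id: "player")
                dismissWindow(id: "main")
            }
        }
    }

    // Called when playback ends; reopening the window group is the step
    // where the reported state loss occurs.
    func exitPlayback() async {
        await dismissImmersiveSpace()
        openWindow(id: "main")
    }
}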
1 reply · 0 boosts · 81 views · last activity Mar ’25
Access persona on VisionOS
I am planning to build a VisionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for VisionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
3 replies · 0 boosts · 1.2k views · last activity Nov ’24
How to access the Scene instance of a visionOS app using immersive space in a unit test?
I have a visionOS app using immersive space with RealityView. The app adds RealityKit entities to the app's Scene instance, and uses raycast to find CollisionCastHits. I want now to write a unit test to check if the app finds the right hits.
To do so, I have to access the Scene instance to add entities and to check whether they are hit by scene.raycast. But how can I access the Scene instance? I can access it, e.g., after creating the RealityView via its content parameter, or via @Environment(\.realityKitScene). But this seems not to be possible in a unit test.

I tried the following test function:

@MainActor
@Test
func test() async throws {
    var scene: RealityKit.Scene?
    await withCheckedContinuation { continuation in
        _ = RealityView(make: { content in
            print("make")
            let entity = Entity()
            content.add(entity)
            scene = entity.scene
            continuation.resume()
        })
    }
    #expect(scene != nil)
}

But this logs:

◇ Test test() started.
SWIFT TASK CONTINUATION MISUSE: test() leaked its continuation!

The reason is apparently that the make closure of RealityView is only called when SwiftUI calls it within the body of a SwiftUI View. So, is it possible at all to access the app's scene in a unit test?
0 replies · 0 boosts · 386 views · last activity Dec ’24
ImmersiveSpace with system environments
Hi, when opening an ImmersiveSpace with the .mixed style, is it possible to keep the user's currently selected system immersive environment? Currently, the system immersive environment is dismissed.

ImmersiveSpace(id: "some id") {
    SomeRealityView()
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
1 reply · 0 boosts · 445 views · last activity Oct ’24
VisionOS Unity - VR Mode - Control Render Resolution?
I'm using Unity 2022.3.56f, with the Apple visionOS App Mode set to 'Virtual Reality - Fully Immersive Space'. It seems that the render resolution of my game on the Apple Vision Pro when I build is well below the native resolution of the AVP displays. I can't see a setting in the XR Plug-in Management Apple visionOS options, or in Quality settings, to increase the render resolution. Is this possible?

I tried setting, for example:

UnityEngine.XR.XRSettings.eyeTextureResolutionScale = 2.0f

but this doesn't seem to do anything to the render resolution in the build.
0 replies · 0 boosts · 327 views · last activity Feb ’25