RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

Post

Replies

Boosts

Views

Activity

Sticky Horizontal AnchorEntity
Hi, I'm trying to use AnchorEntity for horizontal surfaces. It works when the entity is created, but I'm looking for a way to snap the entity to the nearest surface after translating it, for example with a DragGesture. What would be the best way to achieve this? Raycasting, creating a new anchor, setting trackingMode to continuous, etc.? Do I need to use ARKitSession, since I want continuous tracking?
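One possible direction, sketched below under assumptions not stated in the post: run PlaneDetectionProvider, keep the latest horizontal plane anchors, and when the drag ends snap the entity's height to the closest detected plane. The PlaneSnapper type and snapToNearestPlane name are illustrative only, and plane detection requires world-sensing authorization on device.

import ARKit
import RealityKit
import simd

// Minimal sketch: track horizontal planes and snap a dragged entity to the nearest one.
@MainActor
final class PlaneSnapper {
    private let session = ARKitSession()
    private let planeData = PlaneDetectionProvider(alignments: [.horizontal])
    private var planes: [UUID: PlaneAnchor] = [:]

    // Start plane detection and keep the latest horizontal planes up to date.
    func start() async {
        do {
            try await session.run([planeData])
            for await update in planeData.anchorUpdates {
                switch update.event {
                case .added, .updated: planes[update.anchor.id] = update.anchor
                case .removed: planes.removeValue(forKey: update.anchor.id)
                }
            }
        } catch {
            print("Plane detection unavailable: \(error)")
        }
    }

    // Call from DragGesture.onEnded with the dragged entity.
    func snapToNearestPlane(_ entity: Entity) {
        let position = entity.position(relativeTo: nil)
        let nearest = planes.values.min { a, b in
            distance(position, center(of: a)) < distance(position, center(of: b))
        }
        guard let plane = nearest else { return }
        var snapped = position
        snapped.y = center(of: plane).y   // snap only the height in this sketch
        entity.setPosition(snapped, relativeTo: nil)
    }

    private func center(of anchor: PlaneAnchor) -> SIMD3<Float> {
        let t = anchor.originFromAnchorTransform.columns.3
        return SIMD3(t.x, t.y, t.z)
    }
}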
1
0
58
8h
Nested RealityKit entity collisions priority
Hello, I'm struggling to interact with a RealityKit entity that is nested (or at least visually nested) inside a second one. I think I have the same issue as the one described in this StackOverflow post, but I can't manage to reproduce the solution: https://stackoverflow.com/questions/79244424/how-to-prioritize-a-specific-entity-when-collision-boxes-overlap-in-realitykit What I'd like to achieve is to translate the red box using a DragGesture while still being able to interact with the sphere using a TapGesture. Currently, I can only do one at a time. Does anyone know the solution?

extension CollisionGroup {
    static let parent: CollisionGroup = CollisionGroup(rawValue: 1 << 0)
    static let child: CollisionGroup = CollisionGroup(rawValue: 1 << 1)
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let boxMesh = MeshResource.generateBox(size: 0.35)
            let boxMaterial = SimpleMaterial(color: .red.withAlphaComponent(0.25), isMetallic: false)
            let boxEntity = ModelEntity(mesh: boxMesh, materials: [boxMaterial])

            let sphereMesh = MeshResource.generateSphere(radius: 0.05)
            let sphereMaterial = SimpleMaterial(color: .blue, isMetallic: false)
            let sphereEntity = ModelEntity(mesh: sphereMesh, materials: [sphereMaterial])

            content.add(sphereEntity)
            content.add(boxEntity)

            boxEntity.components.set(InputTargetComponent())
            boxEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateBox(size: SIMD3<Float>(repeating: 0.35))],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .parent,
                        mask: .parent.subtracting(.child)
                    )
                )
            )

            sphereEntity.components.set(InputTargetComponent())
            sphereEntity.components.set(HoverEffectComponent())
            sphereEntity.components.set(
                CollisionComponent(
                    shapes: [ShapeResource.generateSphere(radius: 0.05)],
                    isStatic: true,
                    filter: CollisionFilter(
                        group: .child,
                        mask: .child.subtracting(.parent)
                    )
                )
            )
        }
    }
}
3
0
167
2d
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)

App Overview
App Name: Extn Browser
Bundle ID: ai.extn.browser
Purpose: A visionOS web browser that plays 360°/180° VR videos in an immersive sphere environment

Development Environment & SDK Versions
Xcode: 26.2
Swift: 6.2
visionOS Deployment Target: 26.2
Swift Concurrency: MainActor isolation enabled
The app is released on TestFlight.

Frameworks Used
SwiftUI - UI framework
RealityKit - 3D rendering, MeshResource, ModelEntity, VideoMaterial
AVFoundation - AVPlayer, AVAudioSession
WebKit - WKWebView for browser functionality
Network - NWListener for local proxy server

Sphere Video Mechanism
The app creates an immersive 360° video experience using the following approach:

// 1. Create sphere mesh (10 meter radius for immersive viewing)
let mesh = MeshResource.generateSphere(radius: 10.0)

// 2. Create initial transparent material
var material = UnlitMaterial()
material.color = .init(tint: .clear)

// 3. Create entity and invert sphere (negative X scale)
let sphere = ModelEntity(mesh: mesh, materials: [material])
sphere.scale = SIMD3<Float>(-1, 1, 1) // Inverts normals for inside-out viewing
sphere.position = SIMD3<Float>(0, 1.5, 0) // Eye level

// 4. Create AVPlayer with video URL
let player = AVPlayer(url: videoURL)

// 5. Configure audio session for visionOS
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playback, mode: .moviePlayback, options: [.mixWithOthers])
try audioSession.setActive(true)

// 6. Create VideoMaterial and apply to sphere
let videoMaterial = VideoMaterial(avPlayer: player)
if var modelComponent = sphere.components[ModelComponent.self] {
    modelComponent.materials = [videoMaterial]
    sphere.components.set(modelComponent)
}

// 7. Start playback
player.play()

ImmersiveSpace Configuration

// browserApp.swift
ImmersiveSpace(id: appModel.immersiveSpaceID) {
    ImmersiveView()
        .environment(appModel)
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)

Entitlements

<!-- browser.entitlements -->
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>

Info.plist Network Configuration

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>

The Issue
Behavior in Simulator: Video plays correctly on the inverted sphere surface - the 360° video is visible and wraps around the user as expected.
Behavior on Physical Vision Pro: The sphere displays a black screen. No video content is visible, though the sphere entity itself is present.

Important: Not a DRM/Licensing Issue
This issue is NOT related to Digital Rights Management (DRM) or FairPlay. I have tested with:
- Unlicensed raw MP4 video files (no DRM protection)
- Self-hosted video content with no copy protection
- Direct MP4 URLs from a CDN without any licensing requirements
The same black-screen behavior occurs with all unprotected video sources (plain H.264 MP4, no DRM), ruling out DRM as the cause.

Screen Recording: Working in Simulator
The following screen recording demonstrates playing a 360° YouTube video in the immersive sphere on the visionOS Simulator: https://cdn.commenda.kr/screen-001.mov
This confirms that the VideoMaterial and sphere rendering work correctly in the simulator, but the same setup shows a black screen on the physical Vision Pro device.

Observations
- AVPlayer status reports .readyToPlay - the video appears to load successfully
- VideoMaterial is created without errors - no exceptions thrown
- The sphere entity renders - the geometry is visible (black surface)
- The audio session is configured - no errors during audio session setup
- Network requests succeed - the video URL is accessible from the device
- Same result with local/unprotected content - DRM is not a factor

Console Logs (Device)
The logging shows:
- Sphere created and added to scene
- AVPlayer created with correct URL
- VideoMaterial created and applied
- Player status transitions to .readyToPlay
- player.play() called successfully
- Rate shows 1.0 (playing)
Despite all success indicators, the rendered output is black.

Questions for Apple
- Are there known differences in VideoMaterial behavior between the visionOS Simulator and physical Vision Pro hardware?
- Does VideoMaterial(avPlayer:) have specific video codec/format requirements that differ on device? (The test video is a standard H.264 MP4.)
- Is there a required Metal capability or GPU feature for VideoMaterial that may not be available in certain contexts on device?
- Does the immersion style (.mixed) affect VideoMaterial rendering on hardware?
- Are there additional entitlements required for video texture rendering in RealityKit on physical hardware?

Attempted Solutions
- Configured AVAudioSession with the .playback category
- Added a delay before player.play() to ensure the material is applied
- Verified sphere scale inversion (-1, 1, 1)
- Tested multiple video URLs (including raw, unlicensed MP4 files)
- Confirmed network connectivity on device
- Ruled out DRM/FairPlay issues by testing unprotected content

Environment Details
Device: Apple Vision Pro
visionOS Version: 26.2
Xcode Version: 26.2
macOS Version: Darwin 25.2.0
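One small diagnostic that may help narrow this down (a sketch, not from the post; videoURL is assumed to be the same URL the app already hands to AVPlayer): inspect the asset's video track on the device itself to confirm the codec, frame size, and format descriptions before attributing the black frame to VideoMaterial, since the device's hardware decoder can have limits the simulator does not share.

import AVFoundation

// Diagnostic sketch: log what the device actually sees in the video track.
func inspectVideoTrack(at videoURL: URL) async {
    let asset = AVURLAsset(url: videoURL)
    do {
        let tracks = try await asset.loadTracks(withMediaType: .video)
        if tracks.isEmpty {
            print("No decodable video track found on device for \(videoURL)")
        }
        for track in tracks {
            let size = try await track.load(.naturalSize)
            let formats = try await track.load(.formatDescriptions)
            print("video track size: \(size), formats: \(formats)")
        }
    } catch {
        print("Could not inspect asset: \(error)")
    }
}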
0
0
85
2d
How to visualize AR-dependent app if not supported on Simulator for Swift Student Challenge?
Recently, applications for the Swift Student Challenge opened up. I noticed that when selecting "where to run your app" (mine was developed in Xcode 26) and you choose Xcode 26, there is a note underneath it that basically says all Xcode projects will be run on the simulator. What if my project depends on AR? How would I let the judges test my submission?
2
0
160
2d
Lighting disabled inside PortalComponent in Progressive/Full Immersion (visionOS 26.2)
Hello, I am experiencing an issue where lighting is disabled within a PortalComponent specifically when using the Progressive or Full immersion styles in visionOS 26.2.

[Steps to Reproduce]
1. Create a scene in Reality Composer Pro with custom lighting (Directional Light, IBL, etc.) and load it as an Entity.
2. Add a PortalComponent to this Entity.
3. Observe the Entity in an ImmersiveSpace under different immersion styles.

[Observed Behavior]
Mixed Immersion: Lighting works as expected.
Progressive / Full Immersion: Lighting is disabled, causing the content to render incorrectly.

[Additional Context]
Without PortalComponent: The same Reality Composer Pro scene renders lighting correctly across all immersion styles (Mixed, Progressive, and Full).
Regression: In applications built with Xcode 16.4 / visionOS 2.5, lighting inside the Portal functioned correctly. This issue appears to have emerged with Xcode 26.2 and visionOS 26.2.

[Questions]
Is the disabling of lighting inside Portals during Full/Progressive Immersion an intended architectural change or optimization in visionOS 26.2?
Are there any workarounds, such as specific PortalComponent configurations or new API flags, to re-enable lighting in these immersion modes?

Thank you.
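A workaround worth trying (a sketch under assumptions, not a confirmed fix for the 26.2 behavior): light the portal's world explicitly with an image-based light so its contents no longer depend on whatever lighting the immersion style provides. "Environment" is a hypothetical EnvironmentResource name in the app bundle, and portalWorld is the entity holding the WorldComponent behind the portal.

import RealityKit

// Hedged sketch: attach an explicit IBL to the portal's world and mark every model inside it as a receiver.
@MainActor
func applyExplicitIBL(to portalWorld: Entity) async throws {
    let environment = try await EnvironmentResource(named: "Environment") // placeholder asset name

    let iblEntity = Entity()
    iblEntity.components.set(
        ImageBasedLightComponent(source: .single(environment), intensityExponent: 1)
    )
    portalWorld.addChild(iblEntity)

    // Recursively opt every model in the portal's world into the explicit light.
    func addReceiver(_ entity: Entity) {
        if entity.components.has(ModelComponent.self) {
            entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))
        }
        for child in entity.children { addReceiver(child) }
    }
    addReceiver(portalWorld)
}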
1
0
420
4d
ModelEntity position values are stale after world recenter (Crown long press)
Hi everyone, I’m working with RealityKit on visionOS and I’m seeing unexpected behavior when the user long-presses the Digital Crown, which recenters the world.

Observed behavior:
When the world is recentered via long-pressing the Crown, the models remain visually in the correct place (as expected). However, if I query the model’s position or transform immediately after recentering (e.g. entity.position or similar), I still get the old values from before the recenter. As soon as I interact with the model using a gesture (drag/rotate/scale), the position updates and then querying it returns the correct, updated values.

So effectively:
1. Recenter happens
2. Visual position is correct
3. Programmatic position remains stale
4. First gesture causes the position to “snap” to the correct updated value

Questions:
- Is there any event, notification, or callback that fires when the world is recentered due to a long press of the Crown button?
- Is there a recommended way to get the updated world-space transform immediately after recenter, without waiting for a gesture?
- Is this expected behavior due to deferred/lazy transform updates in RealityKit?

Right now it feels like recentering updates the coordinate system but doesn’t immediately commit new transform values to entities until some interaction occurs. Any guidance or best-practice patterns for handling this would be appreciated. Thanks!
2
0
593
1w
RealityKit .kinematic + collisions (visionOS)
Hi everyone, I'm new to visionOS development. I'm trying to create a physics-based scene (with gravity) where users can pick up and move objects on a workbench. I am struggling with physics interactions during the drag gesture:

Kinematic mode: If I switch to .kinematic during the drag, the object moves smoothly but clips through other objects (no collisions).
Dynamic mode: I tried keeping it .dynamic and applying a linear velocity toward the hand position, but the movement feels laggy and unresponsive.
Hybrid approach: I tried switching to .kinematic during DragGesture.onChanged and back to .dynamic on collision, but this causes the entity to jitter/shake violently when touching other objects.

Has anyone found a clean way to drag objects while maintaining solid collisions? Thanks for your help!
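For what it's worth, one common pattern (a sketch under assumptions, not an official answer) is a refinement of the dynamic approach: keep the body .dynamic the whole time and, every frame while dragging, set a linear velocity proportional to the remaining distance to the hand target instead of teleporting the entity. dragTarget below is assumed to be the world-space position you update from DragGesture.onChanged; tuning the gain trades responsiveness against jitter.

import RealityKit

// Hedged sketch: velocity-driven dragging that keeps collisions solid.
@MainActor
func updateDraggedBody(_ entity: ModelEntity, toward dragTarget: SIMD3<Float>) {
    guard var body = entity.components[PhysicsBodyComponent.self] else { return }
    body.mode = .dynamic
    body.isAffectedByGravity = false   // restore on release
    entity.components.set(body)

    // Follow the hand: velocity proportional to the remaining distance.
    let current = entity.position(relativeTo: nil)
    let gain: Float = 15               // higher = snappier, but more prone to overshoot
    let velocity = (dragTarget - current) * gain
    entity.components.set(PhysicsMotionComponent(linearVelocity: velocity))
}

// On release: clear the driving velocity and let gravity take over again.
@MainActor
func endDrag(_ entity: ModelEntity) {
    entity.components.set(PhysicsMotionComponent(linearVelocity: .zero))
    if var body = entity.components[PhysicsBodyComponent.self] {
        body.isAffectedByGravity = true
        entity.components.set(body)
    }
}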
2
0
692
1w
Showing 3D/AR content on multiple pages in iOS with RealityView
Hi. I'm trying to show 3D or AR content on multiple pages of an iOS app, but I have found that if a RealityView is used earlier in a flow, then later uses will never display the camera feed, even if those earlier uses do not use any spatial tracking. For example, in the following simplified view, the second page's RealityView will not display the camera feed even though the first view does not use it nor start a tracking session.

struct ContentView: View {
    var body: some View {
        NavigationStack {
            VStack {
                RealityView { content in
                    content.camera = .virtual
                    // showing model skipped for brevity
                }
                NavigationLink("Second Page") {
                    RealityView { content in
                        content.camera = .spatialTracking
                    }
                }
            }
        }
    }
}

What is the way around this, so that 3D content can be displayed in multiple places in the app without preventing AR content from working? I have also found the same problem when wrapping an ARView for use in SwiftUI.
0
0
201
1w
In SwiftUI on macOS, using instancing in RealityKit, how can I set individual colours per instance?
I have written this function:

@available(macOS 26.0, *)
func instancing() async -> Entity {
    let entity = Entity()
    do {
        // 1. Create a CustomMaterial
        let library = offscreenRenderer.pointRenderer!.device.makeDefaultLibrary()!
        let surfaceShader = CustomMaterial.SurfaceShader(
            named: "surfaceShaderWithCustomUniforms", // This must match the function name in Metal
            in: library
        )

        let instanceCount = 10

        // No idea how to actually use this...
        // let bufferSize = instanceCount * MemoryLayout<UInt32>.stride
        //
        // // Create the descriptor
        // var descriptor = LowLevelBuffer.Descriptor(capacity: bufferSize, sizeMultiple: MemoryLayout<UInt32>.stride)
        //
        // // Initialize the buffer
        // let lowLevelBuffer = try LowLevelBuffer(descriptor: descriptor)
        // lowLevelBuffer.withUnsafeMutableBytes { rawBytes in
        //     // Bind the raw memory to the UInt32 type
        //     let pointer = rawBytes.bindMemory(to: UInt32.self)
        //     pointer[1] = 0xff_0000
        //     pointer[0] = 0x00_ff00
        //     pointer[2] = 0x00_00ff
        //     pointer[3] = 0xff_ff00
        //     pointer[4] = 0xff_00ff
        //     pointer[5] = 0x00_ffff
        //     pointer[6] = 0xff_ffff
        //     pointer[7] = 0x7f_0000
        //     pointer[8] = 0x00_7f00
        //     pointer[9] = 0x00_007f
        // }

        var material = try CustomMaterial(surfaceShader: surfaceShader, lightingModel: .lit)
        material.withMutableUniforms(ofType: SurfaceCustomUniforms.self, stage: .surfaceShader) { params, resources in
            params.argb = 0xff_0000
        }

        // 2. Create the ModelComponent (provides the MESH and MATERIAL)
        let mesh = MeshResource.generateSphere(radius: 0.5)
        let modelComponent = ModelComponent(mesh: mesh, materials: [material])

        // 3. Create the MeshInstancesComponent (provides the INSTANCE TRANSFORMS)
        let instanceData = try LowLevelInstanceData(instanceCount: instanceCount)
        instanceData.withMutableTransforms { transforms in
            for i in 0..<instanceCount {
                let instanceAngle = 2 * .pi * Float(i) / Float(instanceCount)
                let radialTranslation: SIMD3<Float> = [-sin(instanceAngle), cos(instanceAngle), 0] * 4 // Position each sphere around a circle.
                let transform = Transform(
                    scale: .one,
                    rotation: simd_quatf(angle: instanceAngle, axis: [0, 0, 1]),
                    translation: radialTranslation
                )
                transforms[i] = transform.matrix
            }
        }
        let instancesComponent = try MeshInstancesComponent(mesh: mesh, instances: instanceData)

        // 4. Attach BOTH to the same entity
        entity.components.set(modelComponent)
        entity.components.set(instancesComponent)
    } catch {
        print("Failed to create mesh instances: \(error)")
    }
    return entity
}

and this is the corresponding Metal shader:

typedef struct {
    uint32_t argb;
} SurfaceCustomUniforms;

[[stitchable]] void surfaceShaderWithCustomUniforms(realitykit::surface_parameters params,
                                                    constant SurfaceCustomUniforms &customParams)
{
    half3 color = {
        static_cast<half>((customParams.argb >> 16) & 0xff),
        static_cast<half>((customParams.argb >> 8) & 0xff),
        static_cast<half>(customParams.argb & 0xff)
    };
    params.surface().set_base_color(color);
}

This works well and generates 10 red spheres. While listening to the WWDC25 presentation on what's new in RealityKit, I am positive I heard the presenter say that it is possible to customise each instance using a LowLevelBuffer, but so far all my attempts have failed. Is it possible, and if so, how? Thanks for reading and for your help. Kind regards, Christian
1
1
229
1w
With manipulation component, once you let go, how to prevent the entity from disappearing while animating it back into the volume
So with the new ManipulationComponent, we can choose "stay", and then if you drag the entity out of your volume, once you let go it will instantly disappear. We can "animate" it back to inside the volume, e.g.:

content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
    Entity.animate(
        .easeInOut(duration: 1),
        body: { event.entity.position = [0, 0.2, 0] },
        completion: {}
    )
}

However, for the duration that it travels outside of the volume, it's invisible the whole time. In this Apple video, it seems to be visible when dragging and when letting go, but perhaps that's not a volume they're dragging it out of? https://youtu.be/VtenPKrvPOU?si=y1zoZOs2IMyDzOm6&t=1748 Does anyone know how to keep the entity visible even after letting it go, while you animate it back toward the inside of your volume?
1
1
910
2w
The AccessoryAnchor transform does not match any of the Accessory.LocationName options.
I am using AccessoryTrackingProvider from ARKit to get the transform of the PSVR2 controller via originFromAnchorTransform of the AccessoryAnchor. I am also trying to use an AnchorEntity on the controller using RealityKit. However, none of the three options for Accessory.LocationName, which should be used to define the AnchorEntity target, seems to match the position on the controller that is being sent from ARKit.

The picture attached shows two transforms:
RealityKit - using .gripSurface to define the AnchoringComponent.Target.accessory location.
ARKit - using originFromAnchorTransform from AccessoryTrackingProvider.

They are not aligned at the same point. As for the other options of Accessory.LocationName, .aim is located at the tip of the controller and .grip is the same position as .gripSurface but with a different orientation. I am wondering why there is no option for Accessory.LocationName that actually matches the transform captured by ARKit.
3
0
1.3k
2w
ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input will never translate the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?

visionOS 26 Beta 2, built from macOS 26 Beta 2 and Xcode 26 Beta 2.

Attached is reproducible sample code; I have tried this in other projects with the same results.

var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(immersiveContentEntity,
                                                  allowedInputTypes: .all,
                                                  collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)])
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5

            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)

            content.add(immersiveContentEntity)
        }
    }
}
12
4
1.3k
2w
Manipulation stops working when changing rooms
This post documents an issue I reported in feedback FB19610114, to see if anyone knows of a workaround. Here is a copy of the feedback.

Short version
Manipulation (SwiftUI OR RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a human wearing an Apple Vision Pro leaving one room and entering another room. Once this issue occurs, it impacts all apps that use these features. A device restart is the only solution I have to fix it.

Feedback FB19610114
This is an odd one. I'm using the new Manipulation Component in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset. When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc.). When this bug occurs, it affects every app across the entire operating system that is using manipulation, including the RealityKit component AND the SwiftUI version. This is not limited to one app and is not limited to apps that I am working on. Once this error occurs, it affects literally any application across the operating system that is using this API, including apps from Apple. I won't speculate on the cause of this, but I do know of one way where I can always get it to happen. Here is how to reproduce it:

1. Make an Xcode project with a single entity that uses the Manipulation Component. There is no need to customize the configuration of this component. The default implementation will work.
2. Build and run this app on device. You can keep running from device, or quit and launch the app like normal on device.
3. Open the app and manipulate the entity - it should work as expected.
4. Physically walk into another room. It is vital that you leave the current room that you are in and enter a different room entirely.
5. Use the digital crown to recenter your view and bring your window or volume to you.
6. Test the manipulation on the entity again - it should still be working as expected at this point.
7. Physically move yourself and your headset into the original room where you started.
8. Use the digital crown to recenter your view and bring your window or volume to you.
9. Test the manipulation on the entity again - you should now see the issue.

When I follow the steps above, 100% of the time manipulation translation stops working at this point. It will impact any application using this API. The only way to fix it is to restart my headset.

A few points to keep in mind
- It does not matter if an app is actively being run from Xcode.
- When this occurs, it impacts every single app, not just one.
- When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
- This impacts BOTH the SwiftUI version and the RealityKit version.
- When this occurs, the only way to "fix" it is to reboot the device.
6
3
943
2w
How to cast shadow on OcclusionMaterial in visionOS
I have a ModelEntity with a GroundingShadowComponent:

entity.enumerateHierarchy { child, stop in
    child.components.set(GroundingShadowComponent(castsShadow: true))
}

When I set it on the table, I can see the shadow on the table, even if I disable plane detection. However, when I enable plane detection and the plane's material is OcclusionMaterial, I can no longer see the shadow on the table. As far as I know, receivesDynamicLighting is not usable on visionOS. So how can I cast a shadow on OcclusionMaterial in visionOS? Or rather, is it possible to have the shadow properly displayed on the tabletop while ensuring that I cannot see objects beneath the table through it?
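One thing that may be worth trying (a sketch under assumptions, not a verified answer): newer SDKs add a receivesShadow flag to GroundingShadowComponent, so the occlusion plane itself can opt into receiving grounding shadows instead of relying on receivesDynamicLighting. Check that the initializer below is available in your SDK before depending on it.

// Hedged sketch: mark the occlusion plane as a shadow receiver while the model casts.
let tablePlane = ModelEntity(mesh: .generatePlane(width: 1, depth: 1),
                             materials: [OcclusionMaterial()])
tablePlane.components.set(GroundingShadowComponent(castsShadow: false, receivesShadow: true))

let model = ModelEntity(mesh: .generateBox(size: 0.2),
                        materials: [SimpleMaterial(color: .white, isMetallic: false)])
model.position.y = 0.1
model.components.set(GroundingShadowComponent(castsShadow: true))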
1
0
482
2w
Multiply exr lightmap in Reality Composer Pro Shader Graph
I’m trying to use EXR lightmaps to overlay baked lighting on top of a base texture in the RCP Shader Graph. When I multiply an EXR image set to Image(float) with an 8-bit base texture, the output becomes Image(float). I can’t connect that to the BaseColor input on the UnlitSurface node, since it only accepts Color3f. I expected to be able to use a Convert node between the Multiply node and the BaseColor input, but when I do that, the result becomes black and white instead of the expected outcome: the EXR multiplied with the base texture using a baseline value of 1, where values below 1 in the EXR would darken the base texture and values above 1 would brighten it. Is there any documentation on how to properly overlay a 32-bit EXR lightmap in the RCP Shader Graph, or is the black-and-white output from the Convert node a bug?
7
0
1k
3w
TapGesture stops responding on ViewAttachmentComponent after disabling or removing and re-adding the Entity (visionOS 26)
Issue
When an Entity with a ViewAttachmentComponent is:
- disabled using isEnabled = false
- removed using removeFromParent()
and then enabled or added back again, the attached SwiftUI view is rendered correctly, but tap interactions stop working. Specifically:
- Button actions inside the attached view do not fire
- TapGesture closures on child views do not respond

Expected Behavior
Tap interactions inside the attached view should continue to work after the Entity is re-enabled or re-added.

Actual Behavior
After being disabled or removed once, all tap interactions stop responding.

Comparison
When displaying the same SwiftUI view using RealityViewAttachments, this issue does not occur. Removing and re-displaying the attachment still allows taps to work correctly.

Reproduction
Attached sample code reproduces the issue:
- A RealityView with an Entity that has a ViewAttachmentComponent
- The attached SwiftUI view contains a Toggle
- The toggle updates isEnabled on the Entity
- After toggling off and on, tap interactions stop responding

Environment
Xcode 26
visionOS 26

Question
Is this expected behavior of ViewAttachmentComponent, or a bug? Is there a recommended way to temporarily hide or disable an Entity with ViewAttachmentComponent without breaking tap interactions?

import SwiftUI
import RealityKit

struct GestureTestView: View {
    @State var sampleEnabled = true
    @State var sampleEntity: Entity?

    var body: some View {
        RealityView { contents, attachments in
            // After deleting and re-displaying it, taps no longer respond.
            let sample = Entity(components: ViewAttachmentComponent(rootView: SampleView()))
            // Executed successfully
            //let sample = attachments.entity(for: "SampleView")!
            contents.add(sample)
            sample.position = [0, 1.2, -1]
            sampleEntity = sample

            let toggleButton = Entity(components: ViewAttachmentComponent(rootView: ToggleButtonView(isOn: $sampleEnabled)))
            contents.add(toggleButton)
            toggleButton.position = [0, 1, -1]
        } update: { _, _ in
            // run update closure
            print(sampleEnabled)
            // update sample entity enable
            sampleEntity?.isEnabled = sampleEnabled
        } attachments: {
            Attachment(id: "SampleView") {
                SampleView()
            }
        }
    }
}

struct ToggleButtonView: View {
    @Binding var isOn: Bool

    var body: some View {
        VStack {
            Toggle(isOn: $isOn) {
                Text("Toggle")
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

struct SampleView: View {
    var body: some View {
        VStack {
            Button {
                print("Hello, World!")
            } label: {
                Text("Hello, World!")
                    .padding()
            }
        }
        .padding()
        .glassBackgroundEffect()
    }
}

#Preview(immersionStyle: .mixed) {
    GestureTestView()
}
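Until this is clarified, one workaround to try (a sketch built only on the APIs already used in the post, not a confirmed fix): rebuild the ViewAttachmentComponent whenever the entity is shown again, instead of only flipping isEnabled, so the attachment's input handling is recreated fresh. setSampleVisible is an illustrative helper name.

// Hedged workaround sketch: swap the component out and back in rather than toggling isEnabled alone.
@MainActor
func setSampleVisible(_ visible: Bool, on entity: Entity) {
    if visible {
        entity.components.set(ViewAttachmentComponent(rootView: SampleView()))
        entity.isEnabled = true
    } else {
        entity.isEnabled = false
        entity.components.remove(ViewAttachmentComponent.self)
    }
}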
2
0
420
Jan ’26
'__abort_with_payload' from CompositorNonUI on visionOS 26.2 (device + simulator, Omniverse streaming)
I am developing a custom app for Apple Vision Pro using Compositor Services to stream content from NVIDIA Omniverse. The app is based on: https://github.com/NVIDIA-Omniverse/apple-configurator-sample

Environment:
Device: Apple Vision Pro
OS Version: visionOS 26.2
Xcode Version: 26.2

The Issue:
The application crashes hard (__abort_with_payload) in "libsystem_kernel.dylib" on Task 6 immediately after initialization. This appears to be a deliberate abort triggered by the compositor, not a typical crash. The issue occurs on both physical device and simulator.

Important detail: The console output shows a specific CLIENT BUG assertion. By checking the metadata of the warning, I found that it is related to "Library: CompositorNonUI".

Relevant console output before abort:

Missed 'FrameLimiter' target of 90.0 Hz
running compositor services to get IPD, FOV, etc
fence tx observer 14f27 timed out after 0.600000
fence tx observer bc1b timed out after 0.600000
BUG IN CLIENT: For mixed reality experiences please use cp_drawable_compute_projection API
0
0
97
Jan ’26