Discuss spatial computing on Apple platforms and how to design and build an entirely new universe of apps and games for Apple Vision Pro.

Posts under Spatial Computing topic


Xcode 26 - extremely long time to open immersive space
The issue is reproducible with an empty project: when you run it and tap "Open immersive space", it takes a couple of minutes to respond. It only reproduces on a real device with the debugger attached, and other developers can reproduce it too (it is not specific to my environment). The issue doesn't exist in Xcode 16. After the initial long delay, subsequent opens work fine. Console logs:

nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Failed to set dependencies on asset 9303749952624825765 because NetworkAssetManager does not have an asset entity for that id.
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
PSO compilation completed for driver shader copyFromBufferToTexture so=0 sbpr=256 sbpi=16384 ss=(64, 64, 1) p=70 sc=1 ds=0 dl=0 do=(0, 0, 0) in 1997
XPC connection interrupted
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
[b30780-MRUIFeedbackTypeButtonWithBackgroundTouchDown] Playback timed out before completion (after 3111 ms)
Failed to set dependencies on asset 7089614247973236977 because NetworkAssetManager does not have an asset entity for that id.
Replies: 4 · Boosts: 2 · Views: 306 · Activity: 7h
visionOS – Starting GroupActivity FaceTime Call dismisses Immersive Space
Hello, I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual; however, when I then start a new FaceTime call, my ImmersiveSpace gets dismissed. This only happens when the app is distributed via TestFlight; when I run it locally, the ImmersiveSpace stays active as expected. Looking at the console on my Mac, I found this log:

Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050

Does anyone have an idea how I can fix this? It's driving me nuts; I've spent over a day looking for a workaround but so far have been unsuccessful. Thanks!
Replies: 0 · Boosts: 0 · Views: 71 · Activity: 12h
ARSkeleton3D modelTransform always returns nil
I use ARKit for motion tracking: I get the skeleton joint coordinates and use them for animation. I didn't make any changes to the code, but after updating from iOS 18 to iOS 26, modelTransform now always returns nil. https://developer.apple.com/documentation/arkit/arskeleton3d/modeltransform(for:)

For example:

bodyAnchor.skeleton.modelTransform(for: .init(rawValue: "head_joint"))

where bodyAnchor is an ARBodyAnchor. I see the default skeleton on the screen, but I can no longer get the coordinates out of it. I'm using the example from Apple's WWDC presentation. https://developer.apple.com/documentation/arkit/capturing-body-motion-in-3d

Are there any changes in the API, or is this just a bug?
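For reference, this is the minimal check I'm running to narrow it down (a sketch; it assumes a standard ARBodyTrackingConfiguration session, and the helper function is mine, not part of the sample):

import ARKit

// Prefer the predefined joint name over .init(rawValue:) and print the names
// the current skeleton definition actually exposes on iOS 26.
func headTransform(for bodyAnchor: ARBodyAnchor) -> simd_float4x4? {
    let skeleton = bodyAnchor.skeleton
    if !skeleton.definition.jointNames.contains("head_joint") {
        print("head_joint missing from definition:", skeleton.definition.jointNames)
    }
    return skeleton.modelTransform(for: .head)
}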
Replies: 3 · Boosts: 0 · Views: 310 · Activity: 15h
Request File Access from Unity for Apple Vision Pro
Hi, I am trying to load files from the Apple Vision Pro's storage into a Unity app (using the Apple visionOS XR Plugin, not the PolySpatial package). So far I've tried UnitySimpleFileBrowser and UnityStandaloneFileBrowser (neither is made for the Vision Pro and neither works there), and then implemented my own naive file browser that at least allows me to view the directories visible from the app sandbox. This is of course very limited: gray folders can't be accessed, and the only three accessible ones don't contain anything where a user would put files through the Files app. I know that an app can request access to these "Files & Folders" permissions. So my question is: is there a way to request this access for a Unity-built app at the moment? If yes, what do I need to do? I've looked into the generated Xcode project's "Capabilities", but did not find anything related to file access. Any help is appreciated!
Replies: 3 · Boosts: 0 · Views: 158 · Activity: 16h
Request: Option to Disable PSVR2 Sense Controller Low-Power Mode on visionOS (ARKit + Vision Pro Development)
Hi everyone, we're developing a Unity project for Apple Vision Pro that connects PSVR2 Sense controllers for advanced interaction and input. We've encountered a major limitation: when the controller is not held close to the designated hand (e.g., resting on a table or held by the non-designated hand), the Sense controller enters a low-power or reduced-update mode. This results in noticeably reduced tracking update frequency and responsiveness until the controller is held again. For certain use cases, this behavior is undesirable; in our case, it prevents continuous real-time tracking of the controller even when it is stationary or being tracked externally. Request: please consider exposing an API flag or developer option in ARKit to disable, or optionally delay, the low-power mode when the app requires full-rate updates regardless of proximity or hand-pose detection.
Replies: 1 · Boosts: 0 · Views: 60 · Activity: 16h
Getting the world position of a QR code
Hi, I would love your help with this. I am trying to get the positions in space of two QR codes so that I can align content to them. The detection shows that the QR codes' position is always (0, 0, 0) and I don't understand why. Here's my code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @ObservedObject var qrCoordinator: QRCoordinator
    @ObservedObject var coordinator: ImmersiveCoordinator
    let qrName: String
    @Binding var startQRDetection: Bool
    @State private var anchor: AnchorEntity? = nil
    @State private var detectionTask: Task<Void, Never>? = nil

    var body: some View {
        RealityView { content in
            // Add the QR anchor once (must exist before detection starts)
            if anchor == nil {
                let imageAnchor = AnchorEntity(.image(group: "QRs", name: qrName))
                content.add(imageAnchor)
                anchor = imageAnchor
                print("📌 Created anchor for \(qrName)")
            }
        }
        .onChange(of: startQRDetection) { enabled in
            if enabled {
                startDetection()
            } else {
                stopDetection()
            }
        }
        .onDisappear {
            stopDetection()
        }
    }

    private func startDetection() {
        guard detectionTask == nil, let anchor = anchor else { return }
        detectionTask = Task {
            var detected = false
            while !Task.isCancelled && !detected {
                print("🔎 Checking \(qrName)... isAnchored=\(anchor.isAnchored)")
                if anchor.isAnchored {
                    // wait a short moment to let transform update
                    try? await Task.sleep(nanoseconds: 100_000_000)
                    let worldPos = anchor.position(relativeTo: nil)
                    if worldPos != .zero {
                        // relative to modelRootEntity if available
                        var posToSave = worldPos
                        if let modelEntity = coordinator.modelRootEntity {
                            posToSave = anchor.position(relativeTo: modelEntity)
                            print("converted to model position")
                        } else {
                            print("⚠️ modelRootEntity not available, using world position")
                        }
                        print("✅ \(qrName) detected at position: world=\(worldPos) saved=\(posToSave)")
                        if qrName == "reanchor1" {
                            qrCoordinator.qr1Position = posToSave
                            let marker = createMarker(color: [0, 1, 0])
                            marker.position = .zero // sits directly on QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker1 added")
                        } else if qrName == "reanchor2" {
                            qrCoordinator.qr2Position = posToSave
                            let marker = createMarker(color: [0, 0, 1])
                            marker.position = posToSave // sits directly on QR
                            marker.position = SIMD3<Float>(0, 0.02, 0)
                            anchor.addChild(marker)
                            print("marker2 added")
                        }
                        detected = true
                    } else {
                        print("⚠️ \(qrName) anchored but still at origin, retrying...")
                    }
                }
                try? await Task.sleep(nanoseconds: 500_000_000) // throttle loop
            }
            print("🛑 QR detection loop ended for \(qrName)")
            detectionTask = nil
        }
    }

    private func stopDetection() {
        detectionTask?.cancel()
        detectionTask = nil
    }

    private func createMarker(color: SIMD3<Float>) -> ModelEntity {
        let sphere = MeshResource.generateSphere(radius: 0.05)
        let material = SimpleMaterial(color: UIColor(
            red: CGFloat(color.x),
            green: CGFloat(color.y),
            blue: CGFloat(color.z),
            alpha: 1.0
        ), isMetallic: false)
        let marker = ModelEntity(mesh: sphere, materials: [material])
        marker.name = "marker"
        return marker
    }
}
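One thing I'm also checking (a sketch, based on my understanding that on visionOS an AnchorEntity's transform is not surfaced to the app unless a SpatialTrackingSession with the matching tracking type has been authorized, which would look exactly like a constant (0, 0, 0); the function name is mine): running an image-tracking session before starting detection.

import RealityKit

// Sketch: request image-tracking authorization so AnchorEntity(.image(...))
// transforms become readable via position(relativeTo:). Keep a strong
// reference to the session for as long as tracking is needed.
let trackingSession = SpatialTrackingSession()

func startImageTracking() async {
    let configuration = SpatialTrackingSession.Configuration(tracking: [.image])
    if await trackingSession.run(configuration) != nil {
        print("Some tracking capabilities were not authorized")
    }
}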
Replies: 2 · Boosts: 0 · Views: 443 · Activity: 21h
ManipulationComponent create parent/child crash
Hello, if you add a ManipulationComponent to a RealityKit entity and then continue to add instructions, sooner or later you will encounter a crash with the following error message:

Attempting to move entity “%s” (%p) under “%s” (%p), but the new parent entity is currently being removed. Changing the parent/child entities of an entity in an event handler while that entity is already being reassigned is not supported.

CoreSimulator 1048 – Device: Apple Vision Pro 4K (B87DD32A-E862-4791-8B71-92E50CE6EC06) – Runtime: visionOS 26.0 (23M336) – Device Type: Apple Vision Pro

The problem occurs precisely with this code:

ManipulationComponent.configureEntity(object)

I adapted Apple's ObjectPlacementExample and made the changes available via GitHub. The desired behavior is that I can add entities to ManipulationComponent and RealityKit then runs stably and does not crash randomly. GitHub Repo

Thanks, Andre
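A speculative workaround sketch, assuming the crash comes from the reparenting that configureEntity performs while an entity hierarchy change is still in flight (object is the entity from the snippet above): defer the call so it does not run inside the event handler.

// Defer configuration to the next main-actor turn instead of calling it
// directly from an event handler that is already mutating the hierarchy.
Task { @MainActor in
    ManipulationComponent.configureEntity(object)
}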
Replies: 2 · Boosts: 0 · Views: 248 · Activity: 1d
How to update TextureResource with MTLTexture?
Hi, I have a monitoring app that takes input video from UVC, processes it with Metal, and ends up with an MTLTexture. The problem I'm facing is that I have to convert the MTLTexture to a CGImage and then call TextureResource.replace, which is super slow. The Metal processing keeps up with the input frame rate (50 fps), but the MTLTexture -> CGImage -> TextureResource path only reaches about 7 fps. Is there any way I can make it faster?
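One approach that avoids the CGImage hop is to back the TextureResource with a drawable queue and blit into it from Metal; a minimal sketch (the dimensions, pixel format, and function names are placeholders, and the blit assumes the processed texture matches the drawable's format and size):

import RealityKit
import Metal

// One-time setup: back an existing TextureResource with a drawable queue so it
// can be updated straight from Metal. Match width/height/format to your pipeline.
func makeDrawableQueue(for textureResource: TextureResource) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: 1920, height: 1080,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)
    textureResource.replace(withDrawables: queue)
    return queue
}

// Per frame: copy the processed MTLTexture into the next drawable and present it.
func present(_ processed: MTLTexture,
             on queue: TextureResource.DrawableQueue,
             commandQueue: MTLCommandQueue) {
    guard let drawable = try? queue.tryNextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }
    blit.copy(from: processed, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()
}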
Replies: 2 · Boosts: 0 · Views: 350 · Activity: 1d
VisionPro Enterprise.license file
I have read in the Apple documentation and on the forums that in order to access the camera and capture images on Vision Pro, both an entitlement and an Enterprise.license file are required. I already have the entitlement, but I don't yet have the Enterprise.license. I would like to ask: is the Enterprise.license strictly required to gain camera access for capturing images? How can I obtain this file, and does it require an enterprise account? Currently, my developer account is a regular $99 developer account, not an enterprise account.
Replies: 1 · Boosts: 0 · Views: 319 · Activity: 2d
AVPlayer stutters when using AVPlayerItemVideoOutput
We’re trying to build a custom player for Unity. For this, we’re using AVPlayer with AVPlayerItemVideoOutput to get textures. However, we noticed that playback is not smooth and the stream often freezes. For testing, we used this 8K video: https://deovr.com/nwfnq1

The video was played using the following code:

@objc public func playVideo(urlString: String) {
    guard let url = URL(string: urlString) else { return }

    let pItem = AVPlayerItem(url: url)
    playerItem = pItem
    pItem.preferredForwardBufferDuration = 10.0

    let pixelBufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
        kCVPixelBufferMetalCompatibilityKey as String: true,
    ]
    let output = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)
    pItem.add(output)

    playerItemObserver = pItem.observe(\.status) { [weak self] pItem, _ in
        guard pItem.status == .readyToPlay else { return }
        self?.playerItemObserver = nil
        self?.player.play()
    }

    player = AVPlayer(playerItem: pItem)
    player.currentItem?.preferredPeakBitRate = 35_000_000
}

When AVPlayerItemVideoOutput is attached, the video stutters and the log looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: 4.08s | buffer: 4.08s
🟡 Buffer ahead: -0.07s | buffer: 0.00s
🟡 Buffer ahead: 2.94s | buffer: 3.49s
🟡 Buffer ahead: 2.50s | buffer: 4.06s
🟡 Buffer ahead: 1.74s | buffer: 4.30s
🟡 Buffer ahead: 0.74s | buffer: 4.30s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.09s | buffer: 4.30s
🟠 Playback may stall
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.41s | buffer: 1.43s
🟡 Buffer ahead: 1.07s | buffer: 1.43s
🟣 Buffer full
🟡 Buffer ahead: 0.47s | buffer: 1.65s
🟠 Playback may stall
🛑 Buffer empty
🟡 Buffer ahead: 0.10s | buffer: 1.65s
🟠 Playback may stall
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟡 Buffer ahead: 1.99s | buffer: 2.03s
🟣 Buffer full
🟣 Buffer full
🟡 Buffer ahead: 1.41s | buffer: 2.00s
🟡 Buffer ahead: 0.68s | buffer: 2.27s
🟡 Buffer ahead: 0.09s | buffer: 2.27s
🟠 Playback may stall
🛑 Buffer empty
🟠 Playback may stall

When we remove AVPlayerItemVideoOutput from the player, the video plays smoothly, and the output looks like this:

🟢 Playback likely to keep up
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.94s | buffer: 1.94s
🟡 Buffer ahead: 1.22s | buffer: 2.22s
🟡 Buffer ahead: 1.05s | buffer: 3.05s
🟡 Buffer ahead: 1.12s | buffer: 4.12s
🟡 Buffer ahead: 1.18s | buffer: 5.18s
🟡 Buffer ahead: 0.72s | buffer: 5.72s
🟡 Buffer ahead: 1.27s | buffer: 7.28s
🟡 Buffer ahead: 2.09s | buffer: 3.03s
🟡 Buffer ahead: 4.16s | buffer: 6.10s
🟡 Buffer ahead: 6.66s | buffer: 7.09s
🟡 Buffer ahead: 5.66s | buffer: 7.09s
🟡 Buffer ahead: 4.66s | buffer: 7.09s
🟡 Buffer ahead: 4.02s | buffer: 7.45s
🟡 Buffer ahead: 3.62s | buffer: 8.05s
🟡 Buffer ahead: 2.62s | buffer: 8.05s
🟡 Buffer ahead: 2.49s | buffer: 3.53s
🟡 Buffer ahead: 2.43s | buffer: 3.38s
🟡 Buffer ahead: 1.90s | buffer: 3.85s

We’ve tried different attribute settings for AVPlayerItemVideoOutput. We also removed all logic related to reading frame data, but the choppy playback still remained. Can you advise whether this is a player issue or if we’re doing something wrong?
Replies: 1 · Boosts: 0 · Views: 346 · Activity: 2d
Reoccurring World Tracking / Scene Exceeded Limit Error
Hi, we’ve been successfully using the RoomPlan API in our application for over two years. Recently, however, users have reported encountering persistent capture errors during their sessions. Specifically, the errors observed are:

CaptureError.worldTrackingFailure
CaptureError.exceedSceneSizeLimit

What we have observed:

Persistent errors: the errors continue to occur even after initiating new capture sessions.
Normal usage: our implementation adheres to typical usage patterns of the RoomPlan API without exceeding any documented room size limits.
Limited feature usage: we are not utilizing the WorldTracking feature for the StructureBuilder functionality to stitch rooms together.
Potential state caching: given that these errors persist across sessions, we suspect that there might be memory or state cached between sessions that is not being cleared, particularly since we are not taking advantage of StructureBuilder.

Request: could you please advise if there is any internal caching or memory retention between capture sessions that might lead to these errors? Additionally, we would appreciate guidance on how to clear or manage this state when the StructureBuilder feature is not in use.

Here is a generalised version of our capture session initialization code to help diagnose the issue.

struct RoomARCaptureView: UIViewRepresentable {
    typealias Handler = (CapturedRoom, Error?) -> Void

    @Binding var stop: Bool
    @Binding var done: Bool
    let completion: Handler?

    func makeUIView(context: Self.Context) -> RoomCaptureView {
        let view = RoomCaptureView(frame: .zero)
        view.delegate = context.coordinator
        view.captureSession.run(configuration: .init())
        return view
    }

    func updateUIView(_ uiView: RoomCaptureView, context: Self.Context) {
        if stop {
            // Stop the session only once, multiple times causes issues with the final presentation
            uiView.captureSession.stop()
            stop = false
            done = true
        }
    }

    static func dismantleUIView(_ uiView: RoomCaptureView, coordinator: Self.Coordinator) {
        uiView.captureSession.stop()
    }

    func makeCoordinator() -> ARViewCoordinator {
        ARViewCoordinator(completion)
    }

    @objc(ARViewCoordinator)
    class ARViewCoordinator: NSObject, RoomCaptureViewDelegate {
        var completion: Handler?

        public required init?(coder: NSCoder) {}
        public func encode(with coder: NSCoder) {}

        public init(_ completion: Handler?) {
            super.init()
            self.completion = completion
        }

        public func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: (Error)?) -> Bool {
            return true
        }

        public func captureView(didPresent processedResult: CapturedRoom, error: (Error)?) {
            completion?(processedResult, error)
        }
    }
}

Thank you for your assistance.
Replies: 5 · Boosts: 2 · Views: 575 · Activity: 3d
Roomplan exceeded scene size limit error. (RoomCaptureSession.CaptureError.exceedSceneSizeLimit)
Error: RoomCaptureSession.CaptureError.exceedSceneSizeLimit

Explanation (Apple Documentation): an error that indicates when the scene size grows past the framework's limitations.

Issue: this error pops up on my iPhone 14 Pro (128 GB) after a few RoomPlan scans are done, even if the room size is small. It occurs immediately after I start the RoomCaptureSession following relocalization to the previous AR session (world-tracking configuration). I am having trouble understanding exactly why this error shows up and how to debug or solve it. Does anyone have any idea how to approach this issue?
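A speculative workaround sketch, assuming the limit is being hit because the relocalized world-tracking session still carries the map accumulated by earlier scans (ScanHost is a hypothetical container, not an API type): give each scan a brand-new RoomCaptureView, and therefore a fresh capture session, instead of relocalizing.

import RoomPlan
import UIKit

// Each scan gets a fresh RoomCaptureView and capture session, so no accumulated
// world map from previous scans counts against the scene-size limit.
final class ScanHost: UIViewController {
    private var captureView: RoomCaptureView?

    func beginFreshScan() {
        // Tear down any previous view and session completely.
        captureView?.captureSession.stop()
        captureView?.removeFromSuperview()

        // Start over with a new view, which owns a new capture session.
        let newView = RoomCaptureView(frame: view.bounds)
        view.addSubview(newView)
        newView.captureSession.run(configuration: RoomCaptureSession.Configuration())
        captureView = newView
    }

    func endScan() {
        captureView?.captureSession.stop()
    }
}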
Replies: 1 · Boosts: 1 · Views: 803 · Activity: 3d
Setting immersionStyle while in an immersive space breaks all entities
I have my immersive space set up like this:

ImmersiveSpace(id: "Theater") {
    ImmersiveTeleopView()
        .environment(appModel)
        .onAppear() {
            appModel.immersiveSpaceState = .open
        }
        .onDisappear {
            appModel.immersiveSpaceState = .closed
        }
}
.immersionStyle(selection: .constant(appModel.immersionStyle.style), in: .mixed, .full)

This allows me to set the immersion style while in the space (from a Picker in a SwiftUI window). The scene responds correctly, but a lot of the functionality of my immersive space is gone after the change in style, in that I am no longer able to enable/disable entities (which I also have toggles for in the SwiftUI window). I have to exit and re-enter the immersive space to regain the ability to change the enabled state of my entities.

My appModel.immersionStyle is inspired by the Compositor Services demo (although I am using a RealityView) listed in https://developer.apple.com/documentation/CompositorServices/interacting-with-virtual-content-blended-with-passthrough and looks like this:

public enum IStyle: String, CaseIterable, Identifiable {
    case mixedStyle, fullStyle
    public var id: Self { self }

    var style: ImmersionStyle {
        switch self {
        case .mixedStyle:
            return .mixed
        case .fullStyle:
            return .full
        }
    }
}

/// Maintains app-wide state
@MainActor
@Observable
class AppModel {
    // Immersion Style
    public var immersionStyle: IStyle = .mixedStyle
Replies: 1 · Boosts: 0 · Views: 191 · Activity: 6d
Safari-like toolbar in visionOS
I like the toolbar that visionOS's Safari uses for back and forward navigation, share, etc. It floats above the window. My attempt to do this with ornaments isn't as satisfying, as they partially cover the window, and my attempts with toolbar haven't produced visible results. Is this Safari-style toolbar exposed by Apple in the visionOS APIs? If so, could someone point me to documentation or sample code? Thanks!
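For reference, this is the kind of ornament placement I've been experimenting with (a minimal sketch; the anchor, alignment, buttons, and placeholder content are my assumptions, and Safari's bar may not be built this way): anchoring the bar at the window's top edge and aligning its bottom to that anchor so it floats clear of the content instead of covering it.

import SwiftUI

struct BrowserWindow: View {
    var body: some View {
        Text("Window content") // placeholder for the window's main content
            .ornament(
                visibility: .visible,
                attachmentAnchor: .scene(.top), // anchor on the window's top edge
                contentAlignment: .bottom       // hang the bar fully above that edge
            ) {
                HStack {
                    Button("Back", systemImage: "chevron.backward") { }
                    Button("Forward", systemImage: "chevron.forward") { }
                    Button("Share", systemImage: "square.and.arrow.up") { }
                }
                .padding()
                .glassBackgroundEffect() // floating glass look similar to Safari's bar
            }
    }
}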
Replies: 1 · Boosts: 0 · Views: 152 · Activity: 6d
HoverEffectStyle in visionOS 26.0
This is no longer highlighting my entity when I look at it:

RealityView { content in
    let hoverComponent = HoverEffectComponent(.spotlight(
        HoverEffectComponent.SpotlightHoverEffectStyle(
            color: .white,
            strength: 2.0
        )
    ))
    entity.components.set(hoverComponent)
}

The entity is in a window; the same code works in an immersive view. The collision component and input type are set in RCP. It has also stopped working in my published app (built under visionOS 2.x) on my visionOS 26 device; if I use a 2.x simulator, it works. Is this a bug, or is there something I'm missing? Thanks.
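For what it's worth, this is the all-in-code setup I would expect to be equivalent to the RCP configuration, useful for ruling out the collision/input setup (a sketch; the box size is a placeholder):

// Set the input/collision/hover components explicitly instead of relying on the
// ones configured in Reality Composer Pro. Box size is a placeholder.
entity.components.set(InputTargetComponent())
entity.components.set(CollisionComponent(
    shapes: [.generateBox(size: [0.2, 0.2, 0.2])]
))
entity.components.set(HoverEffectComponent(.spotlight(
    HoverEffectComponent.SpotlightHoverEffectStyle(color: .white, strength: 2.0)
)))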
Replies: 2 · Boosts: 0 · Views: 798 · Activity: 1w
ManipulationComponent in both parent and child entities
Hello, in my project I have attached a ManipulationComponent to entity A and, as expected, I'm able to interact with it using the built-in gestures. I have another entity B, a child of A, that I would like to interact with as well, so I attempted to add a ManipulationComponent to B. However, no gestures seem to be registered on B; I can still interact with A, but B cannot be interacted with despite both entities having ManipulationComponents. So I'm wondering if I'm doing something wrong, if this is an issue with ManipulationComponent, or if this is a limitation of the API. Below is the code used to add the ManipulationComponent to an entity; it was run on both A and B:

let mc = ManipulationComponent()
model.components.set(mc)

var boxShape = ShapeResource.generateBox(width: 0.25, height: 0.05, depth: 0.25)
boxShape = boxShape.offsetBy(translation: simd_float3(0, -0.05, -0.25))
ManipulationComponent.configureEntity(model, collisionShapes: [boxShape])

if var mc = model.components[ManipulationComponent.self] {
    mc.releaseBehavior = .stay
    mc.dynamics.inertia = .low
    model.components.set(mc)
}

I am using visionOS 26.0; let me know if there's any additional information needed.
Replies: 1 · Boosts: 0 · Views: 321 · Activity: 1w
Manipulation stops working when changing rooms
This post documents an issue I reported in feedback FB19610114, in the hope that someone knows of a workaround. Here is a copy of the feedback.

Short version: manipulation (SwiftUI or RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a human wearing an Apple Vision Pro leaving one room and entering another room. Once this issue occurs, it impacts all apps that use these features. A device restart is the only solution I have found.

Feedback FB19610114

This is an odd one. I'm using the new Manipulation Component in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset. When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc.). When this bug occurs, it affects every app across the entire operating system that is using manipulation, including the RealityKit component AND the SwiftUI version. This is not limited to one app and is not limited to apps that I am working on. Once this error occurs, it affects literally any application across the operating system that is using this API, including apps from Apple. I won't speculate on the cause of this, but I do know of one way where I can always get it to happen. Here is how to reproduce it:

1. Make an Xcode project with a single entity that uses the Manipulation Component. There is no need to customize the configuration of this component; the default implementation will work.
2. Build and run this app on device. You can keep running from device, or quit and launch the app like normal on device.
3. Open the app and manipulate the entity - it should work as expected.
4. Physically walk into another room. It is vital that you leave the room you are currently in and enter a different room entirely.
5. Use the Digital Crown to recenter your view and bring your window or volume to you.
6. Test the manipulation on the entity again - it should still be working as expected at this point.
7. Physically move yourself and your headset back into the original room where you started.
8. Use the Digital Crown to recenter your view and bring your window or volume to you.
9. Test the manipulation on the entity again - you should now see the issue.

When I follow the steps above, manipulation translation stops working at this point 100% of the time. It will impact any application using this API. The only way to fix it is to restart my headset.

A few points to keep in mind:

- It does not matter if an app is actively being run from Xcode.
- When this occurs, it impacts every single app, not just one.
- When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
- This impacts BOTH the SwiftUI version and the RealityKit version.
- When this occurs, the only way to "fix" it is to reboot the device.
Replies: 5 · Boosts: 3 · Views: 581 · Activity: 1w
Multiple-frames BlendShape (failed) Animation in Reality Composer Pro
Goal: to render, in an Apple Vision Pro app, the solid-mechanics 3D simulation results coming from an FEA code.

Starting point: I have surface VTKs with deformations on each node. Each time step has a mesh with the nodal coordinates. This is straightforwardly translatable to a USD MeshSequence. Unfortunately, the results cannot be simplified to a scaling or linear transformation as you would do with other game-oriented animations.

Tools: right now, I am using Xcode and Reality Composer Pro (RCP) to build the scenes.

Technical limitations: I am aware that RCP can do animations with BlendMesh and skeletons, and that MeshSequence is not a problem.

Progress: converting the sequence of VTK meshes to a USD MeshSequence is straightforward. This animates correctly in Preview and Blender (see screenshot). I managed to convert from MeshSequence to multiple keys and BlendMesh; this also animates correctly in Blender and Preview. Unfortunately, the BlendMesh of multiple blended meshes shows a zero animation time in RCP (see screenshot below). Also, see the usda file scheme for the animation below; of course, I am not showing full vectors such as faceVertexCounts, faceVertexIndices, and normals.

Question: what is the right setup to create a BlendMesh animation that RCP will correctly import and animate, from a set of meshes or multiple key shapes?

[Screenshot: Blender animation]
[Screenshot: time zero in RCP "animations"]

#usda 1.0
(
    defaultPrim = "BlendMeshRoot"
    doc = "Blender v4.5.3 LTS"
    endTimeCode = 48
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Z"
)

def Xform "BlendMeshRoot" (
    customData = {
        dictionary Blender = {
            bool generated = 1
        }
    }
)
{
    def SkelRoot "Mesh"
    {
        custom string userProperties:blender:object_name = "Mesh"
        float3 xformOp:rotateXYZ = (89.99999, -0, 0)
        float3 xformOp:scale = (0.009999999, 0.01, 0.01)
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]

        def Mesh "Mesh" (
            active = true
            prepend apiSchemas = ["MaterialBindingAPI", "SkelBindingAPI"]
        )
        {
            uniform bool doubleSided = 1
            float3[] extent = [(25.091871, -34.121277, -13.298501), (299.94482, 245.10088, 202.35126)]
            int[] faceVertexCounts = [3, 3, ...
            int[] faceVertexIndices = [0, 10293, ...
            rel material:binding = </BlendMeshRoot/_materials/MeshSequence_Default>
            normal3f[] normals = [(-0.3632836, -0.9102419, -0.19870725), ...
            point3f[] points = [(244.41148, 155.42062, 70.454926), ...
            float3[] primvars:node_displacement = [(93.54703, 110.9341, 48.37992), ...
            float3[] primvars:Normals = [(-0.0050530406, -0.9910114, -0.13368203), ...
            int[] primvars:skel:jointIndices = [0, 0, 0, 0, 0, ...
            float[] primvars:skel:jointWeights = [1, 1, 1, 1, 1, ...
            uniform token[] skel:blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
            rel skel:blendShapeTargets = [
                </BlendMeshRoot/Mesh/Mesh/frame_0000>,
                ...
                </BlendMeshRoot/Mesh/Mesh/frame_0005>,
            ]
            prepend rel skel:skeleton = </BlendMeshRoot/Mesh/Skel>
            uniform token subdivisionScheme = "none"
            custom string userProperties:blender:data_name = "Mesh"
            custom float userProperties:originalTime
            float userProperties:originalTime.timeSamples = {
                0: 0,
            }

            def BlendShape "frame_0000"
            {
                uniform vector3f[] offsets = [(0, 0, 0), (0, 0, 0), ...
                uniform int[] pointIndices = [0, 1, 2, ...
            }
            .....
            ..... #### BlendShape frames up to frame_0005
            .....
def Skeleton "Skel" ( prepend apiSchemas = ["SkelBindingAPI"] ) { uniform matrix4d[] bindTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )] uniform token[] joints = ["joint1"] uniform matrix4d[] restTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )] prepend rel skel:animationSource = </BlendMeshRoot/Mesh/Skel/Anim> def SkelAnimation "Anim" { uniform token[] blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"] float[] blendShapeWeights.timeSamples = { 0: [1, 0, 0, 0, 0, 0], 1: [0.9697085, 0.03029152, 0, 0, 0, 0], 2: [0.88787615, 0.11212383, 0, 0, 0, 0], ..... 46: [0, 0, 0, 0, 0.11212379, 0.8878762], 47: [0, 0, 0, 0, 0.030291557, 0.96970844], 48: [0, 0, 0, 0, 0, 1], } } } } def Scope "_materials" { def Material "MeshSequence_Default" { token outputs:surface.connect = </BlendMeshRoot/_materials/MeshSequence_Default/Principled_BSDF.outputs:surface> custom string userProperties:blender:data_name = "MeshSequence_Default" def Shader "Principled_BSDF" { uniform token info:id = "UsdPreviewSurface" float inputs:clearcoat = 0 float inputs:clearcoatRoughness = 0.03 color3f inputs:diffuseColor = (0.8, 0.4, 0.3) float inputs:ior = 1.5 float inputs:metallic = 0 float inputs:opacity = 1 float inputs:roughness = 0.5 float inputs:specular = 0.2 token outputs:surface } } } def Scope "AnimationClips" { custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim> } def RealityKitComponent "AnimationLibrary" { custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim> custom token info:id = "RealityKit.AnimationLibrary" custom double realitykit:approximateDuration = 2 custom double[] realitykit:clipDurations = [2] custom string[] realitykit:clipNames = ["Anim"] custom rel realitykit:clipTargets = </BlendMeshRoot/Mesh/Skel/Anim> custom double realitykit:frameRate = 24 custom bool realitykit:isAnimationLibrary = 1 } }
Replies: 2 · Boosts: 1 · Views: 322 · Activity: 1w