Hi, I'm developing a prototype VisionOS game.
How can I access bone/joint information when exporting a USD file from Blender to Reality Composer Pro (RCP)?
The animation plays fine in RCP, and the joint information is correctly embedded in the USDA file (verified with usdchecker). However, RCP does not read it, whether the file is USDA, USDC, or USDZ. It should be possible based on Apple's WWDC24 session "Compose interactive 3D content in Reality Composer Pro".
I want to attach and detach an entity to a particular bone at certain moments, so I need the bone data. These are standard Mixamo animations, and my mesh is a single unified mesh. I'm using Blender 4.4.
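At runtime, RealityKit exposes a loaded skeleton through ModelEntity's jointNames and jointTransforms. Below is a rough, untested sketch of reading a bone's model-space transform from those, assuming joint names are reported as slash-delimited paths (which is how skeletons imported from USD are usually exposed); an attachment entity added as a child of the model could then be repositioned from this matrix whenever it should follow the bone.

import RealityKit
import simd

// Sketch (untested): compose the local jointTransforms of all ancestors on the
// joint's path to get its transform in the model's space.
func modelSpaceTransform(ofJointNamed jointName: String, in model: ModelEntity) -> simd_float4x4? {
    // Find the full slash-delimited path that ends with the requested bone name.
    guard let fullPath = model.jointNames.first(where: { $0.hasSuffix(jointName) }) else { return nil }
    var matrix = matrix_identity_float4x4
    var path = ""
    for component in fullPath.split(separator: "/") {
        path = path.isEmpty ? String(component) : path + "/" + component
        guard let index = model.jointNames.firstIndex(of: path) else { return nil }
        matrix *= model.jointTransforms[index].matrix
    }
    return matrix
}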
That title would have made a great WWDC session. Unfortunately, it seems like nothing is new in Reality Composer Pro this year. I've noticed that all versions of the Xcode beta this summer have shipped with Reality Composer Pro version 2.0. There have been slight bumps in the build number, but I haven't found any new features or seen any documentation to indicate that anything has changed.
So the question is, what is the state of Reality Composer Pro? Should we continue to use this tool or start doing everything in code? A huge number of sample projects use Reality Composer Pro, so it seems like Apple is still using it even if they didn't update it this year.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Most models are only available as glb or fbx, so I usually reexport them into usdz using Blender.
When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library subsection all I can see is one default subtree animation.
In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation.
In fact when I open the nonlinear animation view in Blender and select a different animation as the default animation, the exported usdz shows the newly selected animation as default subtree animation.
I can see in the Apple sample apps models can have multiple animations in their Animation Library.
I'm using the latest Blender 4.5; shouldn't the USDZ exporter handle multiple animations correctly?
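One quick sanity check, independent of RCP, is to load the exported USDZ with RealityKit and count its animations; here usdzURL is assumed to point at the exported file. If only one shows up, the extra actions were likely dropped during export rather than by Reality Composer Pro.

let entity = try await Entity(contentsOf: usdzURL)   // usdzURL: URL of the exported .usdz (assumption)
print("Animations found: \(entity.availableAnimations.count)")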
I have a simple visionOS app that creates an Entity, writes it to the device, and then attempts to load it. However, once the entity file gets overwritten, the app can no longer load it correctly.
Here is my code for saving the entity.
import SwiftUI
import RealityKit
import UniformTypeIdentifiers
struct ContentView: View {
    var body: some View {
        VStack {
            ToggleImmersiveSpaceButton()
            Button("Save Entity") {
                Task {
                    // if let entity = await buildEntityHierarchy(from: urdfPath) {
                    let type = UTType.realityFile
                    let filename = "testing.\(type.preferredFilenameExtension ?? "bin")"
                    let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                    let fileURL = documentsURL.appendingPathComponent(filename)
                    do {
                        let mesh = MeshResource.generateBox(size: 1, cornerRadius: 0.05)
                        let material = SimpleMaterial(color: .blue, isMetallic: true)
                        let modelComponent = ModelComponent(mesh: mesh, materials: [material])
                        let entity = Entity()
                        entity.components.set(modelComponent)
                        print("Writing \(fileURL)")
                        try await entity.write(to: fileURL)
                    } catch {
                        print("Failed writing")
                    }
                }
            }
        }
        .padding()
    }
}
Every time I press "Save Entity", I see a warning similar to:
Writing file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality
Failed to set dependencies on asset 1941054755064863441 because NetworkAssetManager does not have an asset entity for that id.
When I open the immersive space, I attempt to load the same file:
import SwiftUI
import RealityKit
import UniformTypeIdentifiers
struct ImmersiveView: View {
    @Environment(AppModel.self) private var appModel
    var body: some View {
        RealityView { content in
            guard
                let type = UTType.realityFile.preferredFilenameExtension
            else {
                return
            }
            let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
            let fileURL = documentsURL.appendingPathComponent("testing.\(type)")
            guard FileManager.default.fileExists(atPath: fileURL.path) else {
                print("❌ File does not exist at path: \(fileURL.path)")
                return
            }
            if let entity = try? await Entity(contentsOf: fileURL) {
                content.add(entity)
            }
        }
    }
}
I also get errors after I overwrite the entity (by pressing "Save Entity" after I have successfully loaded it once). The warnings that appear when the Immersive space attempts to load the new entity are:
Asset 13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.
Asset 8308977590385781534 Scene (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Scene_0.compiledscene failure: Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to read archive entry.
AssetLoadRequest failed because asset failed to load '13277375032756336327 Mesh (RealityFileAsset)URL/file:///var/mobile/Containers/Data/Application/1140E7D6-D365-48A4-8BED-17BEA34E3F1E/Documents/testing.reality/Mesh_0.compiledmesh' (Asset provider load failed: type 'RealityFileAsset' -- RERealityArchive: Failed to open load stream for entry 'assets/Mesh_0.compiledmesh'.)
The order of operations to make this happen:
Launch app
Press "Save Entity" to save the entity
"Open Immersive Space" to view entity
Press "Save Entity" to overwrite the entity
"Open Immersive Space" to view entity, failed asset load request
Also:
Launch the app; the entity should still be saved from the last time the app ran
"Open Immersive Space" to view entity
Press "Save Entity" to overwrite the entity
"Open Immersive Space" to view entity, failed asset load request
NOTE: It appears I can get it to work slightly better by pressing the "Save Entity" button twice before attempting to view it again in the immersive space.
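For what it's worth, one pattern that might avoid reading a partially overwritten archive (a sketch, not a confirmed fix) is to write the entity to a temporary URL and then atomically swap it into place; this reuses the type, entity, and fileURL values from the save code above.

let tempURL = FileManager.default.temporaryDirectory
    .appendingPathComponent(UUID().uuidString)
    .appendingPathExtension(type.preferredFilenameExtension ?? "reality")
try await entity.write(to: tempURL)
// Atomically replace the previous file so a reader never sees a torn write.
_ = try FileManager.default.replaceItemAt(fileURL, withItemAt: tempURL)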
I have a scene that has been assembled in RCP but I'm losing the correct hierarchy and transforms when running the scene in the headset or the simulator.
This is in RCP:
This is at runtime with the debugger:
As you can see, the "MAIN_WAGON" entity is gone and parts of the hierarchy are now children of "TRAIN_ROOT" instead.
Another issue: not only does part of the hierarchy disappear, the transforms also revert to their default values instead of what is set in RCP:
This is in RCP:
This is in the simulator/headset:
I'm filing a feedback ticket too and will post the number here.
Has anyone had a similar issue and found a fix or workaround?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer
RealityKit
Reality Composer Pro
Hi there,
I'm developing a visionOS app that uses the anchor points and mesh from SceneReconstructionProvider anchor updates. I load an ImmersiveSpace using a RealityView, apply a ShaderGraphMaterial (from a Shader Graph in Reality Composer Pro) to the mesh, and use calls to setParameter to dynamically update the material at a very rapid frequency. The mesh is locked (no further updates) before the calls to setParameter. This process works for a few minutes, but eventually I get the following error in the console:
assertion failure: Index out of range (operator[]:line 789) index = 13662, max = 1
With the following stack trace:
Thread 1 Queue : com.apple.main-thread (serial)
#0 0x00000002880f90d0 in __abort_with_payload ()
#1 0x000000028812a6dc in abort_with_payload_wrapper_internal ()
#2 0x000000028812a710 in abort_with_payload ()
#3 0x0000000288003f40 in _os_crash_msg ()
#4 0x00000001dc9ff624 in re::ecs2::ComponentBucketsBase::addComponent ()
#5 0x00000001dc9ffadc in re::ecs2::ComponentBucketsBase::moveComponent ()
#6 0x00000001dc8b0278 in re::ecs2::MaterialParameterBlockArrayComponentStateImpl::processPreparingComponents ()
#7 0x00000001dc8b05e4 in re::ecs2::MaterialParameterBlockArraySystem::update ()
#8 0x00000001dd008744 in re::Scheduler::executePhase ()
#9 0x00000001dc032ec4 in re::Engine::executePhase ()
#10 0x0000000248121898 in RCPSharedSimulationExecuteUpdate ()
#11 0x00000002264e488c in __59-[MRUISharedSimulation _doJoinWithConnectionContext:error:]_block_invoke.44 ()
#12 0x0000000268c5fe9c in _UIUpdateSequenceRunNext ()
#13 0x00000002696ea540 in schedulerStepScheduledMainSectionContinue ()
#14 0x000000026af8d284 in UC::DriverCore::continueProcessing ()
#15 0x00000001a1bd4e6c in CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION ()
#16 0x00000001a1bd4db0 in __CFRunLoopDoSource0 ()
#17 0x00000001a1bd44f0 in __CFRunLoopDoSources0 ()
#18 0x00000001a1bd3640 in __CFRunLoopRun ()
#19 0x00000001a1bce284 in _CFRunLoopRunSpecificWithOptions ()
#20 0x00000001eff12d2c in GSEventRunModal ()
#21 0x00000002697de878 in -[UIApplication _run] ()
#22 0x00000002697e33c0 in UIApplicationMain ()
#23 0x00000001b56651e4 in closure #1 (Swift.UnsafeMutablePointer<Swift.Optional<Swift.UnsafeMutablePointer<Swift.Int8>>>) -> Swift.Never in SwiftUI.KitRendererCommon(Swift.AnyObject.Type) -> Swift.Never ()
#24 0x00000001b5664f08 in SwiftUI.runApp<τ_0_0 where τ_0_0: SwiftUI.App>(τ_0_0) -> Swift.Never ()
#25 0x00000001b53ad570 in static SwiftUI.App.main() -> () ()
#26 0x0000000101bc7b9c in static MetalRendererApp.$main() ()
#27 0x0000000101bc7bdc in main ()
#28 0x0000000197fd0284 in start ()
Any advice on how to solve this or prevent the error?
Thanks!
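For reference, a minimal sketch of the update pattern described in this post; the material path, scene file, entity, and parameter name below are placeholders.

// Load the Shader Graph material from the Reality Composer Pro package.
var material = try await ShaderGraphMaterial(
    named: "/Root/MyMaterial",        // hypothetical material path
    from: "MyScene.usda",             // hypothetical scene file
    in: realityKitContentBundle
)
meshEntity.components.set(ModelComponent(mesh: reconstructedMesh, materials: [material]))

// On each update: mutate the parameter, then reassign the material so the change takes effect.
try material.setParameter(name: "Intensity", value: .float(newValue))
if var model = meshEntity.components[ModelComponent.self] {
    model.materials = [material]
    meshEntity.components.set(model)
}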
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
Is there any way to extend the video recording time in Reality Composer Pro beyond 3:00, for example by editing preferences in Terminal or through some other workaround?
Is there any way to use the strap and a USB-C cable as a live video stream input source that would mirror to QuickTime or some other video capture tool?
I am assuming there is no online documentation or user manual for the strap, but please correct me if I'm wrong. Thank you.
Hello,
I'm working on a visionOS project that uses Reality Composer Pro, and we are managing our project files with Git.
We've noticed that simply opening and closing the Reality Composer Pro application consistently generates changes in the following files, even when no explicit modifications have been made by the developer:
{ProjectName}/Packages/RealityKitContent/Package.realitycomposerpro/PluginData/*******/ShaderGraphEditorPluginID/ShaderGraphEditorPluginID
{ProjectName}/Packages/RealityKitContent/Package.realitycomposerpro/WorkspaceData/SceneMetadataList.json
Could you please clarify the purpose of these files? Why do they appear as modified when no direct changes are made from our end?
More importantly, is it safe to add these files to our .gitignore to prevent them from being tracked by Git? We are concerned that ignoring these files might lead to unexpected issues or inconsistencies when other team members pull the latest changes, especially if these files contain critical project metadata or state that needs to be synchronized.
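For concreteness, the ignore entries under consideration would look roughly like this (with the caveat, which is exactly the question, that these files hold only editor state):

# Candidate .gitignore entries (unverified assumption that these are editor-only state)
**/Package.realitycomposerpro/PluginData/
**/Package.realitycomposerpro/WorkspaceData/SceneMetadataList.json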
Any insights or recommended best practices for managing Reality Composer Pro projects with Git would be greatly appreciated.
Thank you for your time and assistance.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer
RealityKit
Reality Composer Pro
Hi, I’m currently using Reality Composer Pro on macOS, and I’m trying to work with particle emitters using only the built-in tools — no Xcode, no custom Swift code.
Here’s what I’m trying to achieve:
1. Is it possible to trigger a particle emitter with a tap gesture, directly inside Reality Composer Pro?
2. Can I add a particle emitter to a timeline, so that it emits at a specific time during an animation or behavior sequence?
3. Most importantly, is there a way to precisely control emission behavior — such as when particles start or stop — using visual and intuitive controls, without scripting?
My goal is to set up particle effects that are interactive and timeline-controlled, but managed entirely through Reality Composer Pro’s GUI, not through code.
Any suggestions, tips, or examples would be very helpful. Thanks in advance!
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode.
However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro.
Am I missing something obvious here, because this seems like a huge oversight.
If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro.
Thanks
Stan
I am trying to loop my VideoMaterial. I have researched AVQueuePlayer and AVPlayerLooper and tried to implement them in my code.
Please see attached.
There are no errors showing up, but the VideoMaterial is no longer working.
Please see the attached for the working code that plays the VideoMaterial.
I am stumped. Can anyone help me solve this?
Thank you.
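For reference, a minimal sketch of how AVQueuePlayer, AVPlayerLooper, and VideoMaterial typically fit together; the file name and target entity are assumptions, and the looper must be retained (for example, stored in a property), or looping silently stops.

import AVFoundation
import RealityKit

let url = Bundle.main.url(forResource: "clip", withExtension: "mp4")!       // hypothetical bundled video
let queuePlayer = AVQueuePlayer()
let looper = AVPlayerLooper(player: queuePlayer, templateItem: AVPlayerItem(url: url))  // keep a strong reference

let videoMaterial = VideoMaterial(avPlayer: queuePlayer)
planeEntity.model?.materials = [videoMaterial]    // planeEntity: an existing ModelEntity (assumption)
queuePlayer.play()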
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
I am still not finding resources on how to replace hands in a fully immersive space. I reached the goal by creating an ARKit session that detects the hand-tracked joints and connects them to the USDZ hand mesh joints, but I feel that's not the best solution. I really want to use RealityKit's potential to track and replace hands (with skinned USDZ ones) in an immersive environment, but the only resources I have found are from November 2023.
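A simplified sketch of the ARKit-session approach described above; leftHandEntity and rightHandEntity stand in for the loaded skinned USDZ hand models already added to the scene.

import ARKit
import RealityKit

func trackHands(leftHandEntity: Entity, rightHandEntity: Entity) async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.isTracked else { continue }
        // Keep the whole model glued to the tracked wrist; per-joint retargeting
        // onto the model's jointTransforms would go here.
        let entity = (anchor.chirality == .left) ? leftHandEntity : rightHandEntity
        entity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
    }
}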
Can someone help me?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
In Reality Composer Pro 2.0 (448.120.2), if I use the "Create new scene in the project" button more than once, this feature doesn't seem to work.
The second time I use "Create new scene in the project", Reality Composer Pro displays a new empty USDA file in the project browser with the wrong icon (a yellow document icon instead of a 3D box icon).
Also, it doesn't create the new scene's USDA file on disk, display the scene in the entity hierarchy browser, or create a new tab.
Consequently, whenever I want to add more than one new scene to my project, I have to repeatedly quit and restart Reality Composer Pro.
In my use of RCP just now, I had to quit and restart RCP six times to create seven new scenes.
Is this a known issue?
Repro steps:
In a Reality Composer Pro project, create a new folder using the "add folder" icon button in the project browser.
Inside the new folder, click the "Create new scene in the project" icon button.
Click the "Create new scene in the project" icon button a second time.
Expected behavior:
A new USDA file is created on disk. The new USDA file's root entity appears in the entity hierarchy browser and a corresponding new tab is created.
Observed behavior:
The USDA file for this scene is not created on disk and it does not appear in the entity hierarchy browser in a new tab. In the project browser view, a yellow document icon appears and it does not appear to correspond to an actual USDA file.
Thank you for any insight you can provide about this issue.
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
I see no way to scale an entity with a hover effect.
The closest I can find is by using HoverEffectComponent with a shader hover effect. Maybe I can change the scale with a ShaderGraph, but I cannot figure out how.
Could anyone share ideas or a node setup to implement a Gaussian blur in a Shader Graph material, with a blur size parameter? Thanks!
Help me please, I can't remove the little messed-up pixels and they are getting more and more. Please help!
2025 MacBook Pro, no screen protector; this is the actual screen.
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code.
However, I have several instances of the same model in my scene.
Is it possible to make only one specific model respond to a notification?
For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
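For reference, this is the posting pattern from the "Sending messages to the scene" approach as it appears in Apple's sample code; whether the userInfo dictionary can also carry something that narrows it to a single instance is exactly the open question.

// scene is the RealityKit scene containing the entity (e.g. entity.scene);
// "MyTrigger" is a hypothetical identifier set on the Notification trigger in RCP.
NotificationCenter.default.post(
    name: NSNotification.Name("RealityKit.NotificationTrigger"),
    object: nil,
    userInfo: [
        "RealityKit.NotificationTrigger.Scene": scene,
        "RealityKit.NotificationTrigger.Identifier": "MyTrigger"
    ]
)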
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
USDZ
Reality Composer
RealityKit
visionOS
I believe I have created a VideoMaterial and assigned it to a mesh with code I found in the developer documentation, but I'm getting this error.
"Trailing closure passed to parameter of type 'String' that does not accept a closure"
I have attached a photo of the code and where the error happens.
Any help will be greatly appreciated.