Reality Composer Pro


Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.

Posts under Reality Composer Pro tag

87 Posts

Post

Replies

Boosts

Views

Activity

Reality Composer to Reality Composer Pro?
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode. However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro. Am I missing something obvious here? Because this seems like a huge oversight. If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro. Thanks, Stan
2
1
254
Jun ’25
How to trigger and timeline-control particle emitters in Reality Composer Pro (without Xcode or code)?
Hi, I’m currently using Reality Composer Pro on macOS, and I’m trying to work with particle emitters using only the built-in tools — no Xcode, no custom Swift code. Here’s what I’m trying to achieve:

1. Is it possible to trigger a particle emitter with a tap gesture, directly inside Reality Composer Pro?
2. Can I add a particle emitter to a timeline, so that it emits at a specific time during an animation or behavior sequence?
3. Most importantly, is there a way to precisely control emission behavior — such as when particles start or stop — using visual and intuitive controls, without scripting?

My goal is to set up particle effects that are interactive and timeline-controlled, but managed entirely through Reality Composer Pro’s GUI, not through code. Any suggestions, tips, or examples would be very helpful. Thanks in advance!
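If the tap trigger turns out not to be possible purely in the Reality Composer Pro GUI, here is a minimal code sketch (outside the post's no-code constraint) that toggles ParticleEmitterComponent.isEmitting from a tap gesture in a RealityView. The scene name "ParticleScene" and entity name "Emitter" are placeholders, not names from the post:

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ParticleTapView: View {
        var body: some View {
            RealityView { content in
                // "ParticleScene" and "Emitter" are placeholder names for your own RCP assets.
                if let scene = try? await Entity(named: "ParticleScene", in: realityKitContentBundle) {
                    content.add(scene)
                    // The emitter entity needs input-target and collision components to receive taps.
                    if let emitter = scene.findEntity(named: "Emitter") {
                        emitter.components.set(InputTargetComponent())
                        emitter.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.2)]))
                    }
                }
            }
            .gesture(
                TapGesture()
                    .targetedToAnyEntity()
                    .onEnded { value in
                        // Toggle emission on the tapped entity, if it carries a particle emitter.
                        if var particles = value.entity.components[ParticleEmitterComponent.self] {
                            particles.isEmitting.toggle()
                            value.entity.components.set(particles)
                        }
                    }
            )
        }
    }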
1
0
124
Jun ’25
Reality Composer Pro "Create new scene in the project" doesn't work more than once
In Reality Composer Pro 2.0 (448.120.2), if I use the "Create new scene in the project" button more than once, this feature doesn't seem to work. The second time I use "Create new scene in the project", Reality Composer Pro displays a new empty USDA file in the project browser with the wrong icon (a yellow document icon instead of a 3D box icon). Also, it doesn't create the new scene's USDA file on disk, display the scene in the entity hierarchy browser, or create a new tab. Consequently, whenever I want to add more than one new scene to my project, I have to repeatedly quit and restart Reality Composer Pro. In my use of RCP just now, I had to quit and restart RCP six times to create seven new scenes. Is this a known issue?

Repro steps:
1. In a Reality Composer Pro project, create a new folder using the "add folder" icon button in the project browser.
2. Inside the new folder, click the "Create new scene in the project" icon button.
3. Click the "Create new scene in the project" icon button a second time.

Expected behavior: A new USDA file is created on disk. The new USDA file's root entity appears in the entity hierarchy browser and a corresponding new tab is created.

Observed behavior: The USDA file for this scene is not created on disk and it does not appear in the entity hierarchy browser in a new tab. In the project browser view, a yellow document icon appears and it does not appear to correspond to an actual USDA file.

Thank you for any insight you can provide about this issue.
1
0
147
Jun ’25
white gap between objects in RealityView
I want to display a huge image in RealityView in 3D space on Vision Pro. Of course, instead of one giant file I'm using a lot of big images. To achieve this, I'm generating multiple planes exactly beside each other and putting one image on each of them. Although the planes are exactly beside each other, there is still a white gap between them (image below). Does anybody know how to fix this issue?
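A sketch of one possible layout approach (an assumption about the cause — seams between separately placed tiles — not a confirmed fix): generate the neighboring planes with a tiny overlap so no background shows through. The texture names passed to the function are placeholders:

    import RealityKit

    // Lays out a row of unlit, textured planes with a small overlap so no
    // background shows through seams between tiles. Texture names are placeholders.
    @MainActor
    func makeTiledWall(tileNames: [String], tileWidth: Float, tileHeight: Float) async throws -> Entity {
        let root = Entity()
        let overlap: Float = 0.001  // 1 mm of extra width per tile

        for (index, name) in tileNames.enumerated() {
            let texture = try await TextureResource(named: name)
            var material = UnlitMaterial()
            material.color = .init(texture: .init(texture))

            let mesh = MeshResource.generatePlane(width: tileWidth + overlap, height: tileHeight)
            let plane = ModelEntity(mesh: mesh, materials: [material])
            // Positions come from integer indices, so neighboring tiles never drift apart.
            plane.position = [Float(index) * tileWidth, 0, 0]
            root.addChild(plane)
        }
        return root
    }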
0
0
152
May ’25
How to use CharacterControllerComponent.
I am trying to implement a CharacterControllerComponent using the following URL. https://developer.apple.com/documentation/realitykit/charactercontrollercomponent I have written sample code, but PhysicsSimulationEvents.WillSimulate is not executed and nothing happens.

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ImmersiveView: View {
        let gravity: SIMD3<Float> = [0, -50, 0]
        let jumpSpeed: Float = 10

        enum PlayerInput {
            case none, jump
        }

        @State private var testCharacter: Entity = Entity()
        @State private var myPlayerInput = PlayerInput.none

        var body: some View {
            RealityView { content in
                // Add the initial RealityKit content
                if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                    testCharacter = immersiveContentEntity.findEntity(named: "Capsule")!
                    testCharacter.components.set(CharacterControllerComponent())

                    let _ = content.subscribe(to: PhysicsSimulationEvents.WillSimulate.self, on: testCharacter) { event in
                        print("subscribe run")
                        let deltaTime: Float = Float(event.deltaTime)
                        var velocity: SIMD3<Float> = .zero
                        var isOnGround: Bool = false

                        // RealityKit automatically adds `CharacterControllerStateComponent` after moving the character for the first time.
                        if let ccState = testCharacter.components[CharacterControllerStateComponent.self] {
                            velocity = ccState.velocity
                            isOnGround = ccState.isOnGround
                        }

                        if !isOnGround {
                            // Gravity is a force, so you need to accumulate it for each frame.
                            velocity += gravity * deltaTime
                        } else if myPlayerInput == .jump {
                            // Set the character's velocity directly to launch it in the air when the player jumps.
                            velocity.y = jumpSpeed
                        }

                        testCharacter.moveCharacter(by: velocity * deltaTime, deltaTime: deltaTime, relativeTo: nil) { event in
                            print("playerEntity collided with \(event.hitEntity.name)")
                        }
                    }
                }
            }
        }
    }

The scene is loaded from RCP. It is simple, just a capsule on a pedestal. Do I need separate code to run testCharacter from this state?
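A diagnostic sketch (not from the linked documentation): subscribing to SceneEvents.Update on the content fires once per rendered frame regardless of whether a physics simulation is running, so it can confirm the update loop works and drive moveCharacter directly while the WillSimulate issue is investigated:

    // Inside the RealityView make closure, after testCharacter is set up.
    let _ = content.subscribe(to: SceneEvents.Update.self) { event in
        let deltaTime = Float(event.deltaTime)
        // Constant downward velocity, just to confirm the character moves at all.
        let velocity: SIMD3<Float> = [0, -1, 0]
        testCharacter.moveCharacter(by: velocity * deltaTime,
                                    deltaTime: deltaTime,
                                    relativeTo: nil) { collision in
            print("collided with \(collision.hitEntity.name)")
        }
    }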
0
0
160
May ’25
How to get the transform of the joint in a skeleton when it is Animating
I have an entity that was created using Mixamo, and it has an animation. After the animation completes, the mesh of the robot is not where the entity is positioned. I want to do something like this: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transformations applied to any of the children of this root of the model, which means that the transformations are applied to the skeleton due to the playing of animations. Is there a way I can apply the final position of the root of the skeleton to the root entity, to make sure the entity is positioned where the animation ended just before the next animation plays?
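A hedged sketch of one way to react when the clip finishes: subscribe to AnimationEvents.PlaybackCompleted and copy the skeleton root joint's final offset onto the root entity. The names characterRoot and skinnedModel are placeholders, joint index 0 is assumed to be the skeleton root, and whether jointTransforms reflects the animated pose for a Mixamo asset would need to be verified:

    // `characterRoot` is the entity whose transform should be updated and
    // `skinnedModel` is the ModelEntity that owns the skeleton — both placeholders.
    let _ = content.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: skinnedModel) { _ in
        // jointTransforms holds the current pose of each joint relative to its parent;
        // index 0 is assumed to be the skeleton's root joint here.
        if let rootJoint = skinnedModel.jointTransforms.first {
            // Shift the root entity to where the skeleton root visually ended up,
            // so the next clip starts from the final pose of this one.
            characterRoot.position += rootJoint.translation
        }
    }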
0
0
124
May ’25
How to add and remove child entities to a rigged entity in RealityKit?
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app. The character has customization such as clothing items and hair, and all objects are properly weighted to the rig. The way the model is set up in Blender is like so: groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting hierarchy: Before exporting for the animation (armature modifier applied), I simply had to store the Model entities and swap them in, but now that I export with the Armature Modifier applied, so that animations get exported, the ModelComponent gets flattened to the armature, and swapping entities and applying new materials to them is no longer as simple. Here's a demo blend file and usdc export with a setup like mine, having an animated bone to swing a cube and a sphere, to be swapped so that only one is visible: https://www.dropbox.com/scl/fo/be2q6qcztc83z7c4gj1w0/AMapxWc_ip2KZ8oTOYDUMv8?rlkey=rcdaggcxq06dyen09mw5mqmem&st=bnc0d7j0&dl=0

This is how I'm loading the entity and removing a part, with the demo files:

    import SwiftUI
    import RealityKit

    struct SwapDemoView: View {
        var body: some View {
            RealityView { content in
                let camera = PerspectiveCamera()
                camera.transform.translation = SIMD3(x: 0, y: 0.1, z: 3)

                guard let root = try? await Entity(named: "simpleSwapDemo") else {
                    fatalError("simpleSwapDemo.usdc is not present")
                }
                print(root) // Get initial hierarchy

                guard let cube = root.findEntity(named: "Cube") else {
                    fatalError("Entity cube doesn't exist")
                }
                cube.removeFromParent() // <-- Cube is still visible after removal
                print(root) // Get hierarchy to confirm removal of cube

                let resource = root.availableAnimations[0]
                root.playAnimation(resource.repeat())

                content.add(root)
                content.add(camera)
            }
            .background(.white)
        }
    }

And this is what the entity hierarchy looks like in RealityKit before cube removal:

    ▿ 'root' : Entity, children: 1
      ⟐ SynchronizationComponent
      ⟐ AnimationLibraryComponent
      ⟐ Transform
      ▿ 'Armature' : ModelEntity, children: 2
        ⟐ SynchronizationComponent
        ⟐ ModelComponent
        ⟐ SkeletalPosesComponent
        ⟐ AnimationLibraryComponent
        ⟐ Transform
        ▿ 'Armature' : Entity
          ⟐ SynchronizationComponent
          ⟐ Transform
        ▿ 'Primitives' : Entity, children: 2
          ⟐ SynchronizationComponent
          ⟐ Transform
          ▿ 'Sphere' : Entity, children: 1
            ⟐ SynchronizationComponent
            ⟐ Transform
            ▿ 'Sphere' : Entity
              ⟐ SynchronizationComponent
              ⟐ Transform
          ▿ 'Cube' : Entity, children: 1
            ⟐ SynchronizationComponent
            ⟐ Transform
            ▿ 'Cube' : Entity
              ⟐ SynchronizationComponent
              ⟐ Transform

And here's the hierarchy after removal:

    ▿ 'root' : Entity, children: 1
      ⟐ SynchronizationComponent
      ⟐ AnimationLibraryComponent
      ⟐ Transform
      ▿ 'Armature' : ModelEntity, children: 2
        ⟐ SynchronizationComponent
        ⟐ ModelComponent
        ⟐ SkeletalPosesComponent
        ⟐ AnimationLibraryComponent
        ⟐ Transform
        ▿ 'Armature' : Entity
          ⟐ SynchronizationComponent
          ⟐ Transform
        ▿ 'Primitives' : Entity, children: 1
          ⟐ SynchronizationComponent
          ⟐ Transform
          ▿ 'Sphere' : Entity, children: 1
            ⟐ SynchronizationComponent
            ⟐ Transform
            ▿ 'Sphere' : Entity
              ⟐ SynchronizationComponent
              ⟐ Transform

And this is the result: What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
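A note on diagnosis (not from the thread): the hierarchy dump above shows ModelComponent and SkeletalPosesComponent only on the 'Armature' ModelEntity, so the skinned geometry was flattened into it on export, and the removed 'Cube' entity is just an empty transform node — which would explain why the cube stays visible. A small sketch that prints which entities actually own mesh data, to confirm this on any asset:

    import RealityKit

    // Walks an entity hierarchy and reports which nodes actually own mesh data.
    // If only the Armature ModelEntity reports a ModelComponent, the skinned
    // geometry was flattened on export, and removing child entities like "Cube"
    // cannot hide anything.
    func dumpModelOwnership(_ entity: Entity, indent: String = "") {
        let hasModel = entity.components[ModelComponent.self] != nil
        let hasSkeleton = entity.components[SkeletalPosesComponent.self] != nil
        print("\(indent)\(entity.name): model=\(hasModel), skeleton=\(hasSkeleton)")
        for child in entity.children {
            dumpModelOwnership(child, indent: indent + "  ")
        }
    }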
1
0
138
May ’25
How to configure RealityKit entities for animations on a modular character?
I am currently using RealityKit (perspective camera) to render a character in my SwiftUI app. The character has customization such as clothing items and hair, and all objects are properly weighted to the rig. The way the model is set up in Blender is like so: groups of objects that will be swapped (ex: Shoes -> Shoes objects) and an armature. I then export it to usdc with all objects active. This is the resulting entity hierarchy, viewed in Reality Composer Pro: My problem is that when I export with the Armature Modifier applied to the objects, so that animations get exported, the ModelComponent gets flattened to the armature and swapping entities is no longer as simple as removing the entity with the corresponding name. What's the best practice here? Should animation be exported separately and then applied to the skeleton? If so, how is that achieved? I'm not really sure how to proceed here.
1
0
87
May ’25
Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface: On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, either in Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console: If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes above, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2. Is it a known issue that Reality Composer 2 can't be used with visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]." Thank you.
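No workaround appears above; one hedged option while targeting visionOS 1 is to attempt the shader graph load and fall back to a plain textured UnlitMaterial when it fails. The material path "/Root/MyMaterial", scene file "Scene.usda", and texture name "BaseColor" are placeholders, not names from the post:

    import RealityKit
    import RealityKitContent

    // Tries the Reality Composer Pro shader graph first; if loading fails
    // (as reported on visionOS 1.x), falls back to a textured UnlitMaterial.
    @MainActor
    func loadUnlitTextureMaterial() async -> any RealityKit.Material {
        if let material = try? await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                                         from: "Scene.usda",
                                                         in: realityKitContentBundle) {
            return material
        }
        // Fallback: same texture, plain unlit material (texture assumed to be in the app bundle).
        var fallback = UnlitMaterial()
        if let texture = try? await TextureResource(named: "BaseColor") {
            fallback.color = .init(texture: .init(texture))
        }
        return fallback
    }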
7
0
1.6k
May ’25
Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation and I am trying to embrace it as a feature.) Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro? I'm trying to get the USDZ to play before the virtual content disappears (due to the reference object not being located), so that it smooths out the vanishing of the content. Nearly everything is set up in Reality Composer Pro, with my Immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene, so my immersive view just does this:

    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
    }

and this:

    .onAppear {
        appModel.immersiveSpaceState = .open
    }
    .onDisappear {
        appModel.immersiveSpaceState = .closed
    }

I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and/or if this is the right way to go about it. Also, I have implemented this at the beginning of object tracking: all I had to do was add an onAppear behavior to the object to play a USDZ, and that works. Doing it for disappearing (due to loss of the reference object) seems to be a lot harder.
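A sketch of one possible approach (not from the thread), assuming visionOS 2 object tracking via ARKit rather than the Reality Composer Pro anchor: watch ObjectTrackingProvider anchor updates, remember the last tracked pose, and when tracking drops, show a stand-in entity at that pose and play its animation. The file name "myObject.referenceobject" and the ghostEntity parameter are placeholders:

    import SwiftUI
    import ARKit
    import RealityKit

    // Watches object-anchor updates; when the object stops being tracked, a
    // stand-in entity is placed at the last known pose and its animation played.
    @MainActor
    func watchReferenceObject(ghostEntity: Entity, content: RealityViewContent) async throws {
        guard let url = Bundle.main.url(forResource: "myObject", withExtension: "referenceobject") else { return }
        let referenceObject = try await ReferenceObject(from: url)
        let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
        let session = ARKitSession()
        try await session.run([provider])

        for await update in provider.anchorUpdates {
            let anchor = update.anchor
            if anchor.isTracked {
                // Remember the latest pose while tracking is good.
                ghostEntity.setTransformMatrix(anchor.originFromAnchorTransform, relativeTo: nil)
            } else {
                // Tracking lost: show the stand-in at the last known pose and animate it.
                content.add(ghostEntity)
                if let animation = ghostEntity.availableAnimations.first {
                    ghostEntity.playAnimation(animation)
                }
            }
        }
    }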
0
0
110
Apr ’25
Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation, so I'm trying to make it a feature.) Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer being tracked? Is it possible to set this up in Reality Composer Pro? Nearly everything is set up in Reality Composer Pro, with my Immersive scene just anchoring virtual content to the reference object in the RCP scene, so my immersive view just does this:

    if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
        content.add(immersiveContentEntity)
    }

and this:

    .onAppear {
        appModel.immersiveSpaceState = .open
    }
    .onDisappear {
        appModel.immersiveSpaceState = .closed
    }

I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and if this is actually the right way to do it. Apologies for my lack of knowledge.
0
0
86
Apr ’25
How to obtain a video stream from Vision Pro via the Enterprise API that includes both the physical world and digital content
We have successfully obtained the permissions for "Main Camera access" and "Passthrough in screen capture" from Apple. Currently, the video streams we receive show only the physical world and do not include the digital world. How can we obtain video streams that include both the physical and digital worlds? Thank you!
0
0
102
Apr ’25
attenuation map covers over object
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just covers the objects. I used the Apple-provided attenuation map (I did not specify the attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
1
0
103
Apr ’25
visionOS custom hover effect and click sound
I have created a custom hover effect per this WWDC video and many other examples on the net: https://developer.apple.com/videos/play/wwdc2024/10152/ I can get the button to expand when looked at within a visionOS device, and it will invoke a tap event when tapped, but there is no click sound like a normal SwiftUI button makes in visionOS! I can't for the life of me figure out why. Any help would be appreciated!
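If the system does not supply the click for a fully custom control, a workaround sketch (an assumption, not documented behavior) is to play a short bundled sound on tap with AVFoundation; "click.wav" is a placeholder asset name:

    import AVFoundation
    import SwiftUI

    // Wraps a button so a bundled click sound plays alongside the action.
    struct ClickSoundButton<Label: View>: View {
        let action: () -> Void
        @ViewBuilder let label: () -> Label
        @State private var player: AVAudioPlayer?

        var body: some View {
            Button {
                if player == nil,
                   let url = Bundle.main.url(forResource: "click", withExtension: "wav") {
                    player = try? AVAudioPlayer(contentsOf: url)
                }
                player?.play()
                action()
            } label: {
                label()
            }
        }
    }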
2
0
293
Apr ’25
Animation handling on Scene change
I work on a game where I use timeline animations in Reality Composer Pro. The game runs in an immersive space, but it can be paused, and when that happens I move the whole level root entity from the immersive space to another RealityView in a WindowGroup. When the player continues, I do it exactly the other way around and move the level root from the window group back to my immersive space RealityView. It seems like all animations get automatically stopped and restarted when the scene changes. The problem is that the animation does not resume where it stopped; it restarts the whole loop relative to the position where it was stopped and therefore has, for example, a wrong y offset, as visible in the picture.

For example, in the picture the yellow sphere loops the following animation:

0 to 100
100 to -100
-100 to 0

If I now pause the game (and basically switch scenes), the previous animation gets stopped and restarted at position y = 100. So now it loops:

100 to 200
200 to 0
0 to 100

I already tried all kinds of setups, like:

Setting the animations relative to root, parent, local
Using behaviors (on Added to Scene, on Notification)
And finally even accessing the availableAnimations directly and saving the playback controller of the animation

There I saw that if I manually trigger the following code before switching the scene, everything works as expected:

    Button("Reset") {
        animationPlaybackController.time = 0
        animationPlaybackController.pause()
        animationPlaybackController.stop(blendOutDuration: 0.00001)
    }

But if I use time = 0 with .stop() directly, the time = 0 seems to be ignored and I get the same behavior as before, where it stops at a wrong y offset; hence my assumption that animations get stopped and invalidated once they change the scene. I tried to call the code manually on ImmersiveSpace.onDisappear, WindowGroup.onAppear and different kinds of SceneEvents subscriptions, but unfortunately nothing worked. So am I doing something wrong in general, or is there a way to fix this?
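One hedged variation on the Reset workaround described above: save the controller's playback time before moving the level root, then restart the clip paused in the new scene and seek back before resuming. The names levelRoot and savedTime are placeholders, and whether the seek survives the scene change is an assumption:

    // Before switching scenes (in the pausing code path):
    savedTime = animationPlaybackController.time
    animationPlaybackController.stop(blendOutDuration: 0)

    // After the level root has been added to the other RealityView:
    if let clip = levelRoot.availableAnimations.first {
        // startsPaused lets us seek before anything renders at the wrong pose.
        let controller = levelRoot.playAnimation(clip.repeat(), startsPaused: true)
        controller.time = savedTime
        controller.resume()
        animationPlaybackController = controller
    }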
0
0
94
Apr ’25
Activate hoverEffect on separate entity attachment view
Hi, I'm working with RealityView and I have two entities in RCP. In order to set views for both entities, I have to create two separate attachments, one for each entity. What I want to achieve is that when I hover (by eye) on one entity's attachment, it triggers the hover effect of the other entity's attachment. I tried to use hoverEffectGroup, but it only activates the hover effect in a subview, not in a completely separate view. I referred to the following WWDC session for the hover effect: https://developer.apple.com/videos/play/wwdc2024/10152/
0
0
71
Apr ’25