In Reality Composer Pro 2.0 (448.120.2), if I use the "Create new scene in the project" button more than once, this feature doesn't seem to work.
The second time I use "Create new scene in the project", Reality Composer Pro displays a new empty USDA file in the project browser with the wrong icon (a yellow document icon instead of a 3D box icon).
Also, it doesn't create the new scene's USDA file on disk, display the scene in the entity hierarchy browser, or create a new tab.
Consequently, whenever I want to add more than one new scene to my project, I have to repeatedly quit and restart Reality Composer Pro.
In my use of RCP just now, I had to quit and restart RCP six times to create seven new scenes.
Is this a known issue?
Repro steps:
1. In a Reality Composer Pro project, create a new folder using the "add folder" icon button in the project browser.
2. Inside the new folder, click the "Create new scene in the project" icon button.
3. Click the "Create new scene in the project" icon button a second time.
Expected behavior:
A new USDA file is created on disk. The new USDA file's root entity appears in the entity hierarchy browser and a corresponding new tab is created.
Observed behavior:
The USDA file for this scene is not created on disk, the scene does not appear in the entity hierarchy browser, and no new tab is created. In the project browser view, a yellow document icon appears, and it does not appear to correspond to an actual USDA file.
Thank you for any insight you can provide about this issue.
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
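For what it's worth, ASTC is a fixed-rate format, so at a given resolution and block size every image compresses to the same number of bytes; any content-aware savings have to come from choosing a coarser block size (or a lower resolution) per image before running realitytool. Below is a minimal sketch of that idea, assuming you drive xcrun realitytool image from your own script: it estimates complexity as the luminance variance of a downsampled grayscale copy and suggests a block size. The threshold and block-size choices are placeholders to tune.

import CoreGraphics
import ImageIO
import Foundation

// Rough content-complexity estimate: variance of a 64x64 grayscale copy.
// Low variance (e.g. a black cubemap face) suggests a coarser ASTC block size is acceptable.
func luminanceVariance(of url: URL) -> Double? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return nil }

    let size = 64
    var pixels = [UInt8](repeating: 0, count: size * size)
    let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress, width: size, height: size,
                                      bitsPerComponent: 8, bytesPerRow: size,
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return false }
        context.draw(image, in: CGRect(x: 0, y: 0, width: size, height: size))
        return true
    }
    guard drawn else { return nil }

    let mean = pixels.reduce(0.0) { $0 + Double($1) } / Double(pixels.count)
    return pixels.reduce(0.0) { $0 + pow(Double($1) - mean, 2) } / Double(pixels.count)
}

// Placeholder policy: flat images get ASTC_12x12, detailed ones ASTC_8x8.
// Pass the result to your xcrun realitytool image invocation.
func suggestedBlockSize(for url: URL) -> String {
    (luminanceVariance(of: url) ?? .greatestFiniteMagnitude) < 10 ? "ASTC_12x12" : "ASTC_8x8"
}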
I see no way to scale an entity with a hover effect.
The closest I can find is using HoverEffectComponent with a shader hover effect. Maybe I can change the scale through a Shader Graph material, but I cannot figure out how.
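As far as I can tell, HoverEffectComponent can't scale the entity itself; a shader-style hover only exposes the hover state to the material, so any scale-like motion would have to come from a geometry modifier in the Shader Graph (driven by the Hover State node's intensity, as I understand recent RCP versions). A minimal sketch of the Swift side, with a placeholder entity name:

import RealityKit

// Sketch: attach a shader-style hover effect so the entity's ShaderGraphMaterial
// can read the hover state. "GlassCube" is a placeholder entity name from an RCP scene.
func enableHoverDrivenScale(on root: Entity) {
    guard let target = root.findEntity(named: "GlassCube") else { return }
    // The entity needs input and collision components to receive hover at all.
    target.components.set(InputTargetComponent())
    target.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
    // Shader-style hover: no built-in highlight; the scale animation itself would
    // live in the Shader Graph's geometry modifier, fed by the Hover State node.
    target.components.set(HoverEffectComponent(.shader(.default)))
}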
Could anyone share ideas or a node setup to implement a Gaussian blur in a Shader Graph material, with a blur size parameter? Thanks!
Help me please, I can't remove some small messed-up pixels, and more of them keep appearing. Please help!
2025 MacBook Pro, no screen protector; the defect is on the actual screen.
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code.
However, I have several instances of the same model in my scene.
Is it possible to make only one specific model respond to a notification?
For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
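As far as I know there is no documented RealityKit.NotificationTrigger.SourceEntity key. One workaround (sketched below, with placeholder identifier names) is to give each instance its own Notification action in its Behaviors component, each with a unique identifier, and then post only the identifier that matches the instance you want:

import RealityKit
import Foundation

// Sketch: each instance's behavior listens for its own notification identifier
// (placeholder names "PlayEffect_1", "PlayEffect_2", ...).
func notify(_ entity: Entity, identifier: String) {
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Usage: trigger only the behavior on the second instance.
// notify(secondInstanceEntity, identifier: "PlayEffect_2")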
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
USDZ
Reality Composer
RealityKit
visionOS
I believe I have created a VideoMaterial and assigned it to a mesh with code I found in the developer documentation, but I'm getting this error.
"Trailing closure passed to parameter of type 'String' that does not accept a closure"
I have attached a photo of the code and where the error happens.
Any help will be greatly appreciated.
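For reference, that diagnostic usually appears when a trailing closure ends up attached to a call whose nearest parameter is a String. A minimal VideoMaterial setup with no trailing closures, using a placeholder asset name, looks roughly like this:

import RealityKit
import AVFoundation

// Sketch: create a video material and assign it to a plane mesh.
// "myVideo.mp4" is a placeholder asset name.
func makeVideoEntity() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "myVideo", withExtension: "mp4") else { return nil }

    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    let mesh = MeshResource.generatePlane(width: 1.6, height: 0.9)
    let entity = ModelEntity(mesh: mesh, materials: [material])

    player.play()
    return entity
}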
Hi there
I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation and I am trying to embrace it as a feature.)
Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro?
I'm trying to get the USDZ animation to play before the virtual content disappears (due to the reference object not being located), so that it smooths out the vanishing of the content.
Nearly everything is set up in Reality Composer Pro, with my immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene, so my immersive view just does this -
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}
& this
.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}
I have tried using SpatialTracking & WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and/or whether this is the right way to go about it.
Also, I have implemented this at the beginning of object tracking:
all I had to do was add an onAppear behavior to the object to play a USDZ, and that works.
Doing it for disappearing (due to loss of the reference object) seems to be a lot harder.
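One direction that might work (a sketch only; I haven't verified it against object tracking specifically): subscribe to SceneEvents.AnchoredStateChanged from the RealityView content and, when the anchoring entity reports it is no longer anchored, play an animation on the virtual content at its last known transform. The "TrackedContent" entity name and the assumption that losing the reference object flips isAnchored are mine, not from the docs:

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            guard let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) else { return }
            content.add(immersiveContentEntity)

            // Fires whenever an anchoring entity gains or loses its anchor.
            // If the handler never fires, keep the returned EventSubscription in @State instead of discarding it.
            _ = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                guard !event.isAnchored else { return }
                // "TrackedContent" is a placeholder for the entity anchored to the reference object;
                // play its first available animation (e.g. a vanish clip baked into the USDZ)
                // at the last known location before the content disappears.
                if let tracked = immersiveContentEntity.findEntity(named: "TrackedContent"),
                   let animation = tracked.availableAnimations.first {
                    tracked.playAnimation(animation, transitionDuration: 0.25)
                }
            }
        }
    }
}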
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
ARKit
AR / VR
RealityKit
Reality Composer Pro
We have successfully obtained the permissions for "Main Camera access" and "Passthrough in screen capture" from Apple. Currently, the video streams we have received are from the physical world and do not include the digital world. How can we obtain video streams from both the physical and digital worlds?
thank you!
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Enterprise
Swift
Reality Composer Pro
visionOS
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed the light falloff effect just gets painted over the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would use the vertices to create shadows for, say, the backs of the chairs. Am I understanding this wrong?
Specifically: materials display correctly in the Unity editor, but after deployment to a physical Vision Pro, some materials are missing or their shader effects are wrong (for example, the transparency channel fails or lighting calculations are incorrect). This problem is holding up development progress, and I hope to get help from technical support.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
I have two cubes in my Blender project, but one gets lost after importing the USDZ file exported from that project.
It seems that Apple's frameworks don't support USDZ files that use non-English names.
Hi, I'm trying to place an object in front of an AVPlayer that is docked in the VideoDockingRegion, but when launched in immersive space, the video shows through the objects placed in front of it. How do I make sure these objects remain visible?
image for reference
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
ARKit
RealityKit
Reality Composer Pro
Shader Graph Editor
I want to make a model with added bones move by dragging it with gestures. The model was exported from Blender.
My understanding is that I should use IKComponent, but I don't know how to use it specifically.
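Not an authoritative answer, but here is a rough sketch of the IKComponent flow as I understand the visionOS 2 API; the joint name ("hand_joint") and constraint name ("dragTarget") are placeholders for the joints in your Blender rig, and the exact property names are worth checking against the current documentation:

import RealityKit

// Sketch: build an IK rig once, then move a constraint's target as the drag gesture updates.
func setUpIK(on model: ModelEntity) throws {
    // The skeleton comes from the skinned mesh exported from Blender.
    guard let skeleton = model.model?.mesh.contents.skeletons.first else { return }

    var rig = try IKRig(for: skeleton)
    rig.maxIterations = 30
    rig.constraints = [
        // "dragTarget" is the handle the gesture moves; "hand_joint" is a placeholder joint name.
        .point(named: "dragTarget", on: "hand_joint")
    ]

    let resource = try IKResource(rig: rig)
    model.components.set(IKComponent(resource: resource))
}

// Called from a DragGesture's onChanged with the new position in the model's local space.
func updateDrag(on model: ModelEntity, localPosition: SIMD3<Float>) {
    guard var ik = model.components[IKComponent.self], !ik.solvers.isEmpty else { return }
    ik.solvers[0].constraints["dragTarget"]?.target.translation = localPosition
    // If an animation is also playing, you may additionally need to raise the constraint's
    // override weight so the IK target wins (see the IKComponent documentation).
    model.components.set(ik)  // write the component back so RealityKit re-solves the pose
}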
I'm wondering if this is even possible without using CVImageBuffer and passing each frame as an image, which I imagine would be very expensive.
I have a PoC of a shader graph that applies a radial zoom effect to an image. In RealityKit I'm passing the image as a resource:
if let textureResource = try? await TextureResource(named: "fuji") {
    let value = MaterialParameters.Value.textureResource(textureResource)
    try? material.setParameter(name: "MyImage", value: value)
    model.model?.materials = [material]
}
Thanks in advance
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
visionOS
I want to select a sub-model under a large model in a mixed immersive space, and when I select that sub-model, add an outline (stroke) to it, similar to the highlight you see when selecting a model in Reality Composer Pro. How can I create an entity outline with this effect?
Hi, I am trying to create a simple effect to create feather edges on the image using Reality Composer Pro. Something like this:
As you can see it has softer edges on all sides that dissolves into transparency with the background.
This is what I have been able to achieve on my own.
I want to use the "feather" input node value (float) from 0.0 to 1.0 to increase or decrease the strength of the feather edges.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer
Shader Graph Editor
visionOS
We have a project that is currently being built as an XCFramework.
The framework contains a custom component to be used with entities in Reality Composer Pro.
I have tried to set up the RCP Package.swift file to reference the framework package in its dependencies.
Nothing that I do with the folder path to reference the code is working.
Do I need to change the project to use Swift source code instead of an XCFramework?
The component needs to be in the framework because there is a class in the framework that works directly with the custom component.
I am able to reference the XCFramework as a Swift Package with other projects.
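In case it helps while you wait for an official answer, here is a sketch of how a prebuilt framework would normally be declared in the RealityKitContent package's Package.swift via a binaryTarget; "MyKit" and its location are placeholders, and here the .xcframework is assumed to be copied into the package folder. Note that, as far as I can tell, Reality Composer Pro only discovers custom components whose Swift source lives in the package itself, so even with the dependency resolved you may still need the component type defined as source there:

// swift-tools-version: 6.0
// Package.swift of the RealityKitContent package (a sketch; "MyKit" is a placeholder).
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2)
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        // Prebuilt framework pulled in as a binary target, copied into the package folder.
        .binaryTarget(name: "MyKit", path: "MyKit.xcframework"),
        .target(
            name: "RealityKitContent",
            dependencies: ["MyKit"]
        )
    ]
)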
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro