Reality Composer Pro

Prototype and produce content for AR experiences using Reality Composer Pro.

Posts under Reality Composer Pro subtopic

How to programmatically update Model Position Offset of GeometryModifier?
Is it possible to dynamically update the Model Position Offset of a GeometryModifier with a depth map image? In my code I set up the parameter for a "DepthMapTexture" universal input node and tried setting the depth map on depthTextureResource. I have two DrawableQueues: one for setting InputTexture, and one for setting DepthMapTexture. This only shows the part that concerns setting DepthMapTexture; this is where I define the plane entity, and this is the shader graph. What I noticed with the GeometryModifier is that the depth map image has to have the same dimensions as the input image. And when I applied this material to a .usdz file, with the image and depth map pre-assigned in RCP, and loaded that entity from code, the depth map was applied correctly. What I am unsure about is whether it is impossible to define a model entity from code, apply a ShaderGraphMaterial from RCP, and dynamically update the image used by the GeometryModifier. Maybe I'm missing something when defining the entity, something that allows geometric modifications?
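
For context, a rough sketch of the general shape being described here: a plane ModelEntity created in code, a ShaderGraphMaterial loaded from the RCP package, and the "DepthMapTexture" input set at runtime. The material path, file names, and texture name are placeholders, and this sketch does not by itself settle whether the geometry modifier picks up the change:

import RealityKit
import RealityKitContent

// Build a plane entity in code and apply the shader-graph material from the RCP package.
// "/Root/DepthMaterial", "Scene.usda", and "depthMap" are placeholder names.
func makeDepthPlane() async throws -> ModelEntity {
    var material = try await ShaderGraphMaterial(
        named: "/Root/DepthMaterial",
        from: "Scene.usda",
        in: realityKitContentBundle
    )

    // The depth map must have the same dimensions as the input image, as noted in the post.
    let depthTexture = try await TextureResource(named: "depthMap")
    try material.setParameter(name: "DepthMapTexture", value: .textureResource(depthTexture))

    return ModelEntity(mesh: .generatePlane(width: 1, height: 1), materials: [material])
}

Later updates can either re-assign a modified copy of the material or, as the post does, go through a DrawableQueue-backed TextureResource so the bound texture's contents change without touching the material itself.
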
Replies: 1 · Boosts: 0 · Views: 268 · Activity: Mar ’25

Reality Composer Pro Performance on iOS
I need help wrapping my head around this... If I import the Reality Composer Pro package and load it into an ARView, I see about 1.3 GB of memory usage and 180-220% CPU usage. The frame rate starts at around 60 fps and eventually drops to around 30 fps. If I export the .usdz from Reality Composer Pro and load that into the same ARView, I see about 1 GB of memory usage and around 150% CPU usage; the frame rate holds at 60 fps longer but eventually drops. If I load that same .usdz into a QuickLook view, I see about 55 MB of memory usage, 9-11% CPU, and the frame rate stays locked at 116 fps. The only difference I notice is that the button I have is slightly less responsive, but everything still works fine. I don't understand it. How can I make the ARView work as efficiently as QuickLook?
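
For reference, a minimal sketch of the AR Quick Look path being compared against, assuming the exported file is bundled as "Scene.usdz" (the file name and the presenting view controller are placeholders):

import ARKit
import QuickLook
import UIKit

// Presents the exported usdz with AR Quick Look, the baseline in the comparison above.
final class QuickLookPresenter: NSObject, QLPreviewControllerDataSource {
    func present(from viewController: UIViewController) {
        let preview = QLPreviewController()
        preview.dataSource = self
        viewController.present(preview, animated: true)
    }

    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController,
                           previewItemAt index: Int) -> QLPreviewItem {
        // "Scene.usdz" is a placeholder for the file exported from Reality Composer Pro.
        let url = Bundle.main.url(forResource: "Scene", withExtension: "usdz")!
        return ARQuickLookPreviewItem(fileAt: url)
    }
}
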
Replies: 0 · Boosts: 0 · Views: 196 · Activity: Mar ’25

RC Pro Timeline Notification Not Received in Xcode
I'm having a heck of a time getting this to work. I'm trying to add an event notification at the end of a timeline animation to trigger something in code, but I'm not receiving the notification from RC Pro. I've watched that Compose Interactive 3D Content video quite a few times now and have tried many different ways. RC Pro has the correct ID names on the notifications. I'm not a programmer at all. Just a lowly 3D artist. Here is my code...

import SwiftUI
import RealityKit
import RealityKitContent

extension Notification.Name {
    static let button1Pressed = Notification.Name("button1pressed")
    static let button2Pressed = Notification.Name("button2pressed")
    static let button3Pressed = Notification.Name("button3pressed")
}

struct MainButtons: View {
    @State private var transitionToNextSceneForButton1 = false
    @State private var transitionToNextSceneForButton2 = false
    @State private var transitionToNextSceneForButton3 = false
    @Environment(AppModel.self) var appModel
    @Environment(\.dismissWindow) var dismissWindow

    // Notification publishers for each button
    private let button1PressedReceived = NotificationCenter.default.publisher(for: .button1Pressed)
    private let button2PressedReceived = NotificationCenter.default.publisher(for: .button2Pressed)
    private let button3PressedReceived = NotificationCenter.default.publisher(for: .button3Pressed)

    var body: some View {
        ZStack {
            RealityView { content in
                // Load your RC Pro scene that contains the 3D buttons.
                if let immersiveContentEntity = try? await Entity(named: "MainButtons", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                }
            }
            // Optionally attach a gesture if you want to debug a generic tap:
            .gesture(
                TapGesture().targetedToAnyEntity().onEnded { value in
                    print("3D Object tapped")
                    _ = value.entity.applyTapForBehaviors()
                    // Do not post a test notification here; rely on RC Pro timeline events.
                }
            )
        }
        .onAppear {
            dismissWindow(id: "main")
            // Remove any test notification posting code.
        }
        // Listen for distinct button notifications.
        .onReceive(button1PressedReceived) { (output) in
            print("Button 1 pressed notification received")
            transitionToNextSceneForButton1 = true
        }
        .onReceive(button2PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 2 pressed notification received")
            transitionToNextSceneForButton2 = true
        }
        .onReceive(button3PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 3 pressed notification received")
            transitionToNextSceneForButton3 = true
        }
        // Present next scenes for each button as needed. For example, for button 1:
        .fullScreenCover(isPresented: $transitionToNextSceneForButton1) {
            FacilityTour()
                .environment(appModel)
        }
        // You can add additional fullScreenCover modifiers for button 2 and 3 transitions.
    }
}
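
For comparison, here is a minimal sketch of the pattern the "Compose interactive 3D content" session uses for timeline Notification actions, where everything arrives under a single RealityKit notification name and the ID set in RC Pro is read out of userInfo. The exact name and key strings below are recalled from that sample and should be verified; if they are right, subscribing to custom names such as "button1pressed" directly would never fire.

import SwiftUI

struct TimelineNotificationExample: View {
    // RC Pro timeline Notification actions appear to arrive under this single name,
    // with the ID typed in RC Pro carried in userInfo rather than in the name itself.
    private let timelineNotifications = NotificationCenter.default.publisher(
        for: Notification.Name("RealityKit.NotificationTrigger")
    )

    @State private var lastIdentifier = "waiting..."

    var body: some View {
        Text(lastIdentifier)
            .onReceive(timelineNotifications) { notification in
                // Key string recalled from the session's sample code; verify the exact spelling.
                if let identifier = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String {
                    lastIdentifier = identifier   // e.g. "button1pressed"
                }
            }
    }
}
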
Replies: 3 · Boosts: 0 · Views: 87 · Activity: Mar ’25

Having an issue with using a custom component with Reality Composer Pro
We have a project that is currently being built as an XCFramework. The framework contains a custom component to be used with entities in Reality Composer Pro. I have tried to set the RCP package's Package.swift file to reference the framework package in its dependencies, but nothing I do with the folder path to reference the code is working. Do I need to change the project to use Swift source code instead of an XCFramework? The component needs to be in the framework because there is a class in the framework that works directly with the custom component. I am able to reference the XCFramework as a Swift package from other projects.
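
For illustration, a minimal sketch of one way a prebuilt XCFramework can be referenced from a RealityKitContent-style package as a binary target. The package, target, and path names are placeholders, and this sketch does not settle whether Reality Composer Pro will load custom components from a binary target, which may be the underlying limitation here:

// swift-tools-version:5.9
// Package.swift (sketch; names and paths are placeholders)
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [.visionOS(.v1)],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        // The prebuilt framework that defines the custom component,
        // copied into the package directory so the relative path resolves.
        .binaryTarget(
            name: "MyComponentsKit",
            path: "Frameworks/MyComponentsKit.xcframework"
        ),
        .target(
            name: "RealityKitContent",
            dependencies: ["MyComponentsKit"]
        )
    ]
)
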
Replies: 2 · Boosts: 0 · Views: 82 · Activity: Mar ’25

Shader Graph Material: Feather Edges Image Effect
Hi, I am trying to create a simple effect that feathers the edges of an image using Reality Composer Pro. Something like this: as you can see, it has softer edges on all sides that dissolve into transparency with the background. And this is what I have been able to achieve on my own so far. I want to use a "feather" input node value (a float from 0.0 to 1.0) to increase or decrease the strength of the feathered edges.
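
As a side note, once the graph exposes a "feather" float input it can also be driven from RealityKit at runtime; a small sketch, assuming the ShaderGraphMaterial has already been loaded from the RCP bundle (only the "feather" parameter name comes from the post, everything else is a placeholder):

import RealityKit

// Adjust the exposed "feather" input (0.0 ... 1.0) on a ShaderGraphMaterial at runtime.
func setFeather(_ amount: Float, on material: inout ShaderGraphMaterial, for model: ModelEntity) {
    try? material.setParameter(name: "feather", value: .float(min(max(amount, 0), 1)))
    model.model?.materials = [material]
}
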
Replies: 1 · Boosts: 0 · Views: 82 · Activity: Mar ’25

Pass Video/ Frames to a Shader Graph?
Wondering if this is even possible without using CVImageBuffer and passing each frame as an image, which I imagine would be very expensive. I have a proof of concept of a shader graph that applies a radial zoom effect to an image. In RealityKit I'm passing the image as a resource:

if let textureResource = try? await TextureResource(named: "fuji") {
    let value = MaterialParameters.Value.textureResource(textureResource)
    try? material.setParameter(name: "MyImage", value: value)
    model.model?.materials = [material]
}

Thanks in advance
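
One route that avoids recreating a TextureResource per frame is TextureResource.DrawableQueue, where each video frame is blitted into a queue-backed texture that the shader-graph input samples. A rough sketch under that assumption; the pixel format, dimensions, texture name, and the source MTLTexture are placeholders, and both textures must match in size and format for the blit:

import RealityKit
import Metal

// Create a texture whose contents come from a drawable queue, to bind to the graph's image input.
func makeVideoBackedTexture(width: Int, height: Int) async throws -> (TextureResource, TextureResource.DrawableQueue) {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .renderTarget],
        mipmapsMode: .none
    )
    let queue = try TextureResource.DrawableQueue(descriptor)

    // Start from any placeholder texture, then replace its contents with the queue.
    let texture = try await TextureResource(named: "fuji")
    texture.replace(withDrawables: queue)
    return (texture, queue)
}

// Call once per video frame with a Metal texture built from the frame's pixel buffer.
func present(frameTexture: MTLTexture,
             on queue: TextureResource.DrawableQueue,
             commandQueue: MTLCommandQueue) {
    guard let drawable = try? queue.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    blit.copy(from: frameTexture, to: drawable.texture)
    blit.endEncoding()
    commandBuffer.commit()
    drawable.present()
}

The returned TextureResource is bound once with setParameter(name: "MyImage", value: .textureResource(...)), exactly as in the snippet above; after that, only the queue is touched each frame.
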
Replies: 0 · Boosts: 0 · Views: 66 · Activity: Apr ’25

Materials and shaders do not render correctly on visionOS devices when developing with Unity
The materials display normally in the Unity editor, but after deploying to a physical Vision Pro some materials are lost or the shader effects are wrong (for example, the transparency channel fails or lighting calculations are incorrect). This problem is holding up development progress, and we hope to get help from technical support.
Replies: 0 · Boosts: 0 · Views: 45 · Activity: Apr ’25

attenuation map covers over object
Hi, I'm trying to create a virtual movie theater, but after running computeDiffuseReflectionUVs.py and applying the attenuation map, I noticed that the light falloff effect just covers the objects. I used the Apple-provided attenuation map (I did not specify an attenuation map name in the Python script) with a sample size of 6000. I thought the Python script would calculate vertices and create shadows for, say, the backs of the chairs. Am I understanding this wrong?
Replies: 1 · Boosts: 0 · Views: 64 · Activity: Apr ’25

How to obtain a video stream that includes both the physical world and digital content via the Enterprise API on Vision Pro
We have successfully obtained the "Main Camera Access" and "Passthrough in Screen Capture" permissions from Apple. Currently, the video streams we receive show only the physical world and do not include the digital content. How can we obtain video streams that contain both the physical and digital worlds? Thank you!
Replies: 0 · Boosts: 0 · Views: 50 · Activity: Apr ’25

Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation, so I'm trying to make it a feature.) Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer being tracked? Is it possible to set this up in Reality Composer Pro? Nearly everything is set up in Reality Composer Pro, with my Immersive scene just anchoring virtual content to the reference object in the RCP scene, so my immersive view just does this:

if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}

and this:

.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}

I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene, or if this is actually the right way to do it. Apologies for my lack of knowledge.
Replies: 0 · Boosts: 0 · Views: 47 · Activity: Apr ’25

Playing USDZ animation at last known location of reference object
Hi there, I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation and I am trying to embrace it as a feature.) Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro? I'm trying to get the USDZ to play before the virtual content disappears (due to the reference object not being located), so that it smooths out the vanishing of the content. Nearly everything is set up in Reality Composer Pro, with my Immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene, so my immersive view just does this:

if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}

and this:

.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}

I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and/or whether this is the right way to go about it. Also, I have already implemented this at the beginning of object tracking: all I had to do was add an onAppear behavior to the object to play a USDZ, and that works. Doing it for disappearing (due to loss of the reference object) seems to be a lot harder.
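
For what it's worth, a rough sketch of one possible direction: running ARKit's ObjectTrackingProvider alongside the RCP scene, noticing when the anchor stops being tracked, and then playing the entity's animation at the last known transform. The reference-object file name and the entity passed in are placeholders, and this is only a sketch of the idea, not a drop-in solution:

import ARKit
import Foundation
import RealityKit
import simd

// Watches a tracked reference object and, when tracking is lost, pins the given
// entity at the last known pose and plays its animation. Names are placeholders.
func watchReferenceObject(animatedEntity: Entity) async throws {
    guard let url = Bundle.main.url(forResource: "MyObject", withExtension: "referenceobject") else { return }
    let referenceObject = try await ReferenceObject(from: url)

    let session = ARKitSession()
    let provider = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([provider])

    var lastKnownTransform = matrix_identity_float4x4

    for await update in provider.anchorUpdates {
        if update.anchor.isTracked {
            // Remember where the object was last seen.
            lastKnownTransform = update.anchor.originFromAnchorTransform
        } else {
            // Tracking lost: place the content at the last known pose and play its animation.
            animatedEntity.setTransformMatrix(lastKnownTransform, relativeTo: nil)
            if let animation = animatedEntity.availableAnimations.first {
                animatedEntity.playAnimation(animation, transitionDuration: 0.2)
            }
        }
    }
}
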
Replies: 0 · Boosts: 0 · Views: 71 · Activity: Apr ’25

Launching a timeline on a specific model via notification
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code. However, I have several instances of the same model in my scene. Is it possible to make only one specific model respond to a notification? For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
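
For reference, a minimal sketch of the scene-level posting pattern from the "Sending messages to the scene" discussion that the post builds on. The key strings are recalled from sample code and worth double-checking, and whether a per-instance key (such as a source entity) can be added is exactly the open question here:

import Foundation
import RealityKit

// Post the notification that RC Pro behaviors listen for.
// The identifier must match the one configured on the RCP Notification trigger.
func postTimelineNotification(identifier: String, scene: RealityKit.Scene) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

Inside a RealityView, the scene can be taken from any entity already added to the content via its scene property.
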
Replies: 1 · Boosts: 1 · Views: 79 · Activity: May ’25