How to create a fire effect in Reality Composer Pro? Should I use a particle emitter?
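A particle emitter is the usual tool for this: Reality Composer Pro lets you add a Particle Emitter component in the inspector, and the same thing can be set up in code. Below is a minimal sketch, assuming RealityKit's ParticleEmitterComponent on visionOS; every value is a starting point to tune, not a canonical fire recipe.

import RealityKit
import UIKit

// Hedged sketch: a fire-like emitter built in code. Roughly the same settings
// appear in RCP's Particle Emitter inspector.
let fire = Entity()
var emitter = ParticleEmitterComponent()
emitter.emitterShape = .cone                  // emit upward in a narrow cone
emitter.mainEmitter.birthRate = 500           // particles per second
emitter.mainEmitter.lifeSpan = 0.8            // seconds each particle lives
emitter.mainEmitter.size = 0.03               // meters
emitter.mainEmitter.color = .evolving(start: .single(.orange),
                                      end: .single(.red))
fire.components.set(emitter)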
Hello! I am working on some cool project reconstructions for a client. They design lobby-sized installations with LED walls. I am being tasked with converting these over to AR at scale. I've got my first test in headset and it looks great! However, the desire to walk around beyond the 10' x 10' safe-area zone totally takes one out of the immersive VR experience, which is pretty counterintuitive. Is there any way for us developers to bypass this hard limit, so that clients who are requesting more room-scale options can actually enjoy this in VR?
Alternatively, is there a way to hook up a PS5 controller to a "player start" so I can navigate inside the VR volume? I'm really trying to embrace Reality Composer Pro, but it seems extremely limiting as I wait for Unreal Engine to get its act together. sigh. Thanks for any help or suggestions.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Given my limited knowledge of physics, I would appreciate it if individuals with a solid understanding of the subject could provide insights into this matter. I have added a physics component to an entity in Reality Composer Pro, but I am seeking guidance on how to achieve the following:
Make an object float in the air (with a slight downward motion reminiscent of the moon’s surface)
Enable the object to move at a slow pace
Implement a strong rebound force
I would be grateful if you could provide appropriate values for these parameters. Thank you for your assistance.
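For reference, a hedged sketch of the equivalent setup in code; the entity name "FloatingRock" is a placeholder, all numbers are starting values to tune, and the reduced-gravity part assumes visionOS 2's PhysicsSimulationComponent and its gravity vector.

import RealityKit

func configureMoonPhysics(sceneRoot: Entity) {
    // Weak, moon-like gravity for the whole simulation (~1/6 of Earth's).
    var simulation = PhysicsSimulationComponent()
    simulation.gravity = [0, -1.62, 0]
    sceneRoot.components.set(simulation)

    guard let rock = sceneRoot.findEntity(named: "FloatingRock") else { return }

    // High restitution gives the strong rebound; high damping keeps motion slow.
    let material = PhysicsMaterialResource.generate(staticFriction: 0.4,
                                                    dynamicFriction: 0.4,
                                                    restitution: 0.9)
    var body = PhysicsBodyComponent(massProperties: .default,
                                    material: material,
                                    mode: .dynamic)
    body.linearDamping = 2.0
    body.angularDamping = 1.0
    rock.components.set(body)
}

These should correspond roughly to the friction, restitution, and damping fields on the Physics Body component in the Reality Composer Pro inspector.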
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Design
RealityKit
Reality Composer Pro
visionOS
I saw the OnNotification trigger in the Behaviors configuration of Reality Composer Pro, which asks me to enter a notification name. That means Swift code in Xcode has to send a notification containing that name, so I hope you can show me how to send such a notification from Swift in Xcode. (There is an answer in https://developer.apple.com/forums/thread/756978, but it doesn't work.)
Also, in the timeline in Reality Composer Pro there is a Notification action, which is used to send messages to Swift. How can I make Swift detect whether the Notification action has sent a message? (There is an answer in https://developer.apple.com/videos/play/wwdc2024/10102/, but it doesn't work.)
I have asked this question before (https://developer.apple.com/forums/thread/756978). Those answers used to work, but they are all invalid on the latest system. I hope you can help me. Thank you.
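For what it's worth, the pattern from that thread and session looks like the sketch below; the identifier string and the root-entity variable are assumptions, and whether this still fires on the latest OS is exactly the open question here.

import Combine
import RealityKit

// Sending: trigger an OnNotification behavior whose notification name matches `identifier`.
// `rootEntity` is the RCP scene's root entity after it has been added to the RealityView.
func notifyBehavior(named identifier: String, in rootEntity: Entity) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": rootEntity.scene as Any,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Receiving: listen for the Notification action fired from a timeline.
var subscriptions = Set<AnyCancellable>()
func listenForTimelineNotifications() {
    NotificationCenter.default
        .publisher(for: Notification.Name("RealityKit.NotificationTrigger"))
        .sink { notification in
            let identifier = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
            print("Timeline notification received:", identifier ?? "unknown")
        }
        .store(in: &subscriptions)
}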
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
SwiftUI
RealityKit
Reality Composer Pro
visionOS
If I create a visionOS app project, it automatically creates a RealityKitContent package. However, if I create a Multiplatform project (for visionOS, macOS, and iOS), the package is not added. Is it possible to add it manually? How? (using Xcode 16.1 beta 2)
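It seems possible to recreate the package by hand: add a local Swift package to the workspace, reference it from each app target, and open its .rkassets folder with Reality Composer Pro. A sketch of the Package.swift the visionOS template generates, with extra platforms added; the tools version and platform minimums here are assumptions to adjust for your project.

// swift-tools-version:6.0
import PackageDescription

let package = Package(
    name: "RealityKitContent",
    platforms: [
        .visionOS(.v2),
        .macOS(.v15),
        .iOS(.v18)
    ],
    products: [
        .library(name: "RealityKitContent", targets: ["RealityKitContent"])
    ],
    targets: [
        .target(name: "RealityKitContent")
    ]
)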
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Hello!
I would like to do exactly this:
https://youtu.be/Cun8K7ctKp0?si=TgWvtdw-VdlBVL0R
I can't seem to find any documentation on getting a PS5 controller hooked up properly in Reality Composer Pro and Xcode, driving a character with animation (or moving an object around freely), and then getting that onto the Vision Pro.
Additionally, I would also like to learn how to use a controller to move a VR camera around a scene, so that we can navigate in custom built spaces - similar to Meta virtual environments, or Steam VR home environments.
Typically, I would do this with Unreal Engine, but unfortunately, AVP support is still in its infancy there. So I figured, why not try to do it natively?
Any help with concrete tutorials or documentation would be greatly appreciated.
Thx!
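For the controller-input side, the GameController framework is the likely starting point. A minimal sketch, assuming an extended gamepad (a DualSense reports as one) and a hypothetical "player rig" entity to translate; the movement speed is arbitrary.

import GameController
import RealityKit

var controllerObserver: NSObjectProtocol?

// Hedged sketch: move an entity with the left thumbstick of a connected controller.
func observeController(moving playerRig: Entity) {
    controllerObserver = NotificationCenter.default.addObserver(
        forName: .GCControllerDidConnect, object: nil, queue: .main
    ) { _ in
        guard let gamepad = GCController.controllers().first?.extendedGamepad else { return }
        gamepad.leftThumbstick.valueChangedHandler = { _, x, y in
            // Slide the rig in the XZ plane; 0.02 m per update is a guess to tune.
            playerRig.position += SIMD3<Float>(x, 0, -y) * 0.02
        }
    }
}

On Vision Pro there is no movable camera in an immersive space, so the usual trick is to translate the content root (the "rig") under the controller's input rather than a camera.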
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
I have an rkassets package from which I load my scene.
In the scene, I'm using entity.findEntity(named:"..") to find entities to activate/deactivate.
When I have entities deactivated in the *.usda, they are not found with this method. Further inspection shows that the deactivated entities seem not to be compiled into the build.
Is there anything I can set that prevents skipping the build for these deactivated entities?
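A common workaround is to leave the entity enabled in the .usda (so the rkassets build keeps it) and disable it from code right after loading. A minimal sketch, with "Scene" and "Gate" as placeholder names:

import RealityKit
import RealityKitContent

func loadScene() async throws -> Entity {
    let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
    // The entity stays enabled in the .usda; hide it here instead,
    // so findEntity(named:) can still locate it later.
    if let gate = scene.findEntity(named: "Gate") {
        gate.isEnabled = false
    }
    return scene
}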
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
I am new to the graph editor and was able to achieve some results. However, I am noticing that my graphs are getting very tangled, confusing, and hard to debug. I was wondering:
Is it possible to define variables that store the value of computations and refer to them in other parts of the graph, without having to link them graphically? This would help in tidying the tangled mess I created. In the "Explore materials in Reality Composer Pro" video, I saw that it is possible to create "instances", but I am not sure if that is what I need. For example: does the shader compiler optimize them so that there is no need to recompute each instance?
Is there any functionality to debug the graph, trying inputs and seeing what the numeric outputs would be?
As you can see, it is a transparent spherical shell model with a ball inside. Everything is normal on the front side, but there are strange mesh triangles on the side and back view. I don't know if this is as expected and what I need to do to remove these strange effects.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer
RealityKit
Reality Composer Pro
visionOS
I am able to create a custom node graph by selecting nodes and then choosing the "Compose Node Graph" option in the context menu. After that, when I select my custom node graph, I see in the top-right panel that it is possible to define inputs and outputs. However, I was not able to figure out how to link those to the inputs and outputs in the underlying nodes.
I am new here, my name is Axel - Hi! 👋 I am both a visionOS beginner and an expert in things 3D, now finding my way into Reality Composer Pro. Most things are intuitive and easy to understand, yet there is one thing I cannot seem to figure out, and I feel really stupid because it must be there, and that's keyframing: I would simply like to make a new Timeline and animate published parameters of a Shader Graph over time.
I know how to do this via Xcode and custom components, but that can't be the intended way: it breaks down for previewing and fine-tuning even simple to medium animations, and it's complete overkill for a basic value animation. Since keyframing is the most fundamental capability of a 3D app next to moving things, I am sure it is in RCP somewhere.
Anyone got a pointer for me?
Hi!
I'm making a project with Xcode and Reality Composer Pro. I'm trying to play a timeline in Reality Composer Pro from code, without setting Behaviors on the entities. I also tried to send a notification from Xcode to entities in Reality Composer Pro to play the timeline (I already set "OnNotification" in a Behaviors component). But it's not working, and I couldn't figure out what the problem is. Are there solutions for this?
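A hedged sketch of the code-only route, assuming the timeline is published into the scene root's AnimationLibraryComponent under its timeline name; "MyTimeline" and the root-entity parameter are placeholders.

import RealityKit

func playTimeline(on rootEntity: Entity) {
    // Assumption: RCP timelines appear as named animations in AnimationLibraryComponent.
    if let library = rootEntity.components[AnimationLibraryComponent.self],
       let timeline = library.animations["MyTimeline"] {
        rootEntity.playAnimation(timeline)
    }
}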
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Xcode
RealityKit
Reality Composer Pro
visionOS
Hi!
I'm using a timeline in Reality Composer Pro. I tried to use the 'Enable Entities' action to enable, partway through timeline playback, entities that are disabled at the start of the scene. But it didn't work as I imagined: the entities keep appearing before the timeline even starts.
How do I solve this problem? Are there good solutions for it?
I use the piece of code below in Unity to get the distance by which my model penetrates another model. I have set collision markers at the tip and end of the model and performed raycasting, but Unity currently does not support object tracking on visionOS. Therefore, I plan to use SwiftUI for native development. In Reality Composer Pro, I haven't seen a collision-editing feature like the one in Unity; I can only set the size of the collision body but cannot manually adjust or visualize its shape and size.
I want to achieve similar functionality in SwiftUI: to calculate and display the distance that my model A, like a needle or ruler, penetrates into another model or the interior of a physical object. Is similar functionality available, or are there other coding approaches to achieve this?
void CalculateLengthInsideOrgan()
{
    // Direction from the base of the probe to the tip
    Vector3 direction = probeTip.position - probeBase.position;
    float probeLength = direction.magnitude;

    // Raycasting
    RaycastHit[] hits = Physics.RaycastAll(probeBase.position, direction, probeLength, organLayerMask);
    if (hits.Length > 0)
    {
        // Calculate the length entering the organ
        float distanceToFirstHit = hits[0].distance;
        lengthInsideOrgan = probeLength - distanceToFirstHit;
    }
    else
    {
        lengthInsideOrgan = 0f;
    }
}
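A rough RealityKit translation of the snippet above, assuming the probe entities and the organ (with a CollisionComponent) live in the same scene; the entity parameters and the collision mask are placeholders for whatever grouping you set up.

import RealityKit
import simd

// Cast a ray from the probe base toward the tip and measure how far past the
// first hit (the organ surface) the tip reaches.
func lengthInsideOrgan(probeBase: Entity, probeTip: Entity, scene: RealityKit.Scene) -> Float {
    let start = probeBase.position(relativeTo: nil)
    let end = probeTip.position(relativeTo: nil)
    let probeLength = simd_distance(start, end)

    let hits = scene.raycast(from: start, to: end, query: .nearest, mask: .all, relativeTo: nil)
    guard let firstHit = hits.first else { return 0 }

    // Everything between the entry point and the tip is inside the organ.
    return probeLength - firstHit.distance
}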
If I put an alpha image texture on a model created in Blender and run it in RCP or on visionOS, the front-to-back rendering of the alpha areas comes out wrong. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect
Best regards.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
USDZ
RealityKit
Reality Composer Pro
visionOS
I am currently developing a game that runs on visionOS using RealityKit and Swift.
I have a question regarding particle emitters.
It seems that there is a sorting order (render queue) between particle emitters themselves, but there doesn’t appear to be a render queue between particle emitters and regular model entities.
If such a feature exists, could you please provide a simple example?
Thank you!
Hi!
I want to ask whether it's possible to make a mirror material with the Shader Graph in Reality Composer Pro. The mirror should reflect entities in the Reality Composer Pro scene.
I found that this works with SceneKit, but I'm using RealityKitContent in my project. Are there ways to solve this?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
visionOS
Hello!
https://forums.developer.apple.com/forums/thread/762763
I read this thread, and this is similar to what I'm trying to do.
I have two entities in the scene, "HandTrackingEntity", "HandScanner".
"HandTrackingEntity": I put Anchoring Component, Collision Component (Trigger) here.
"HandScanner": I put Behaviors Component(OnCollision), and Collision Component here.
Here are pictures of how I set the components.
I also set the physicsSimulation property to .none.
I was expecting the timeline to play when I put my hand (with HandTrackingEntity) on the "HandScanner" entity, but it didn't work.
Am I missing some steps? I also need sample code to understand how to apply the 'physicsSimulation' property. I'd appreciate it if you could let me know.
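On the physicsSimulation point, a minimal sketch of what setting it from code looks like (visionOS 2+); the hand, location, and entity parameter are assumptions for this setup.

import RealityKit

func configureHandAnchor(for handTrackingEntity: Entity) {
    var anchoring = AnchoringComponent(.hand(.right, location: .palm))
    // .none opts the anchored entity out of its own isolated simulation,
    // so its collision shapes can interact with the rest of the scene.
    anchoring.physicsSimulation = .none
    handTrackingEntity.components.set(anchoring)
}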
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
visionOS
Reality Composer Pro question related to custom components
My custom component defines some properties to edit in RCP. Simple ones work fine, but SIMD3 and SIMD2 do not. I'd expect to see the default values, but instead I get zeros. If I try to run it like this, the scene doesn't load. Once I enter some values in RCP it does load, and building and running again works fine.
More generally, does Apple have documentation on creating properties for components? The only examples I've seen show simple strings and floats. There are no details about vectors, conditional options, grouping properties, etc.
public struct EntitySpawnerComponent: Component, Codable {
    public enum SpawnShape: String, Codable {
        case domeUpper
        case domeLower
        case sphere
        case box
        case plane
        case circle
    }

    // These properties get their default values in RCP
    /// The number of clones to create
    public var Copies: Int = 12
    /// The shape to spawn entities in
    public var SpawnShape: SpawnShape = .domeUpper
    /// Radius for spherical shapes (dome, sphere, circle)
    public var Radius: Float = 5.0

    // These properties DO NOT get their default values in RCP. They all show 0
    /// Dimensions for box spawning (width, height, depth)
    public var BoxDimensions: SIMD3<Float> = SIMD3(2.0, 2.0, 2.0)
    /// Dimensions for plane spawning (width, depth)
    public var PlaneDimensions: SIMD2<Float> = SIMD2(2.0, 2.0)
    /// Track if we've already spawned copies
    public var HasSpawned: Bool = false

    public init() {
    }
}
I want to render a 3D/stereoscopic video in an Apple Vision Pro window using RealityKit/RealityView. The video is left-right stereo. The straightforward approach would be to spawn a quad and give it a custom Shader Graph material with a Camera Index Switch node that chooses between the left and right textures.
https://i.sstatic.net/XawqjNcg.png
The issue I have here is that I have to extract the video frames from my AVSampleBufferVideoRenderer. This should work ok, but not if I'm playing FairPlay content.
So, my question is, how to render stereo FairPlay videos in a SwiftUI RealityView?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Metal
MetalKit
RealityKit
AVFoundation