In RealityKit's particle effects there is a type:
ParticleEmitterComponent.Presets
It can invoke certain built-in particle effects, but I am interested in learning how to modify these presets (such as adjusting the color, the number of particles, the emission range, and so on). How can I do that?
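For reference, a minimal sketch of one way to do this, assuming the visionOS/iOS 18 particle API: start from a preset, then override its properties before attaching the component. The preset choice and every value below are only examples.

import RealityKit
import UIKit  // the emitter color type is a platform color on iOS/visionOS

// Builds a particle entity from a preset and then overrides a few properties.
func makeTunedParticles() -> Entity {
    let entity = Entity()

    // Start from a built-in preset...
    var particles = ParticleEmitterComponent.Presets.impact

    // ...then adjust individual properties.
    particles.mainEmitter.birthRate = 500                       // particles spawned per second
    particles.emitterShape = .sphere                            // emission volume shape
    particles.emitterShapeSize = [0.25, 0.25, 0.25]             // emission volume size in meters
    particles.mainEmitter.color = .evolving(start: .single(.white),
                                            end: .single(.red)) // color over a particle's lifetime

    entity.components.set(particles)
    return entity
}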
Shader Graph Editor
An easy-to-use visual editor for your materials, all within Reality Composer Pro.
Posts under Shader Graph Editor tag
39 Posts
Hey, I am having issues getting the MaterialX shaders I've authored in Houdini to work properly in Reality Composer Pro.
The shader is very simple. It starts with a tiled image node that is written to the diffuse color of the preview surface node. This node is called mtxltileimage2.
When I create a tiled image node in RCP and configure it to have the same parameter values I get the texture to show up correctly. This node is called TiledImage.
One difference I can identify is that the second node has a grey icon whereas the first node has a blue icon. Could this be related to this issue?
Here is the USD viewer output for the two variants of the tiled image node.
Any pointers, corrections to my misconceptions, and help would be greatly appreciated. My goal is to be able to author these shaders in Houdini and import them into RCP. I'm trying to figure out the right pipeline for this workflow.
I'm developing an app in which I need to render images of some models contained in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save it as an image.
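One hedged sketch of this, on platforms where ARView is available (iOS/macOS) rather than the SwiftUI RealityView: the snapshot API renders the current virtual content to an image. The function name and the output path are illustrative only.

import RealityKit
import UIKit

// Illustrative: capture the virtual content of an ARView and write it to disk.
func saveRenderedImage(from arView: ARView) {
    arView.snapshot(saveToHDR: false) { image in
        guard let image, let data = image.pngData() else { return }
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("capture.png")
        try? data.write(to: url)
    }
}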
Hi!
I want to ask whether it's possible to make a mirror material with Shader Graph in Reality Composer Pro. The mirror should reflect the entities in the Reality Composer Pro scene.
I found that this works with SceneKit, but I'm using RealityKitContent in my project. Is there a way to solve this?
I am new here, my name is Axel - Hi! I am both a visionOS beginner and an expert in things 3D, now finding my way into Reality Composer Pro. Most things are intuitive and easy to understand, yet there is one thing I cannot seem to figure out, and I feel really stupid because it must be there: keyframing. I would simply like to make a new Timeline and animate published parameters of a Shader Graph over time.
I know how to do this via Xcode and custom components, but that can't be the intended way: it breaks down for previewing and fine-tuning even simple-to-medium animations, and it is complete overkill for a simple value animation. Since keyframing is among the most basic functionality of 3D apps, next to moving things, I am sure it is in RCP somewhere.
Anyone got a pointer for me?
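For context, the code-driven route mentioned above (not an RCP timeline) could look like the sketch below: a custom System that writes a published Shader Graph parameter every frame. "Progress" is an assumed parameter name, and the easing is just a placeholder.

import Foundation
import RealityKit

struct ShaderParameterAnimationSystem: System {
    static let query = EntityQuery(where: .has(ModelComponent.self))
    private let start = Date()

    init(scene: RealityKit.Scene) {}

    func update(context: SceneUpdateContext) {
        // Simple 0...1 ping-pong driven by elapsed time.
        let phase = Float(abs(sin(Date().timeIntervalSince(start))))
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            guard var model = entity.components[ModelComponent.self] else { continue }
            model.materials = model.materials.map { material -> any Material in
                guard var shaderMaterial = material as? ShaderGraphMaterial else { return material }
                try? shaderMaterial.setParameter(name: "Progress", value: .float(phase))
                return shaderMaterial
            }
            entity.components.set(model) // write the mutated copy back
        }
    }
}

Register it once with ShaderParameterAnimationSystem.registerSystem() before the scene appears.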
I've tried importing a 3D KTX file into the Reality Composer Pro Shader Graph, but got no output. Does RealityKit not support Image3D yet?
First, I use pyktx to create a 3D KTX texture file and it seems correct in Finder preview.
import numpy as np
from pyktx import KtxTexture2, KtxTextureCreateInfo, KtxTextureCreateStorage, VkFormat

size = 32

# Build a size x size x size RGB volume with a gradient along each axis.
image = np.zeros((size, size, size, 3), dtype=np.uint8)
for i in range(size):
    image[i, :, :, 0] = np.interp(i, [0, size], [50, 255])
    image[:, i, :, 1] = np.interp(i, [0, size], [100, 255])
    image[:, :, i, 2] = np.interp(i, [0, size], [150, 255])

# Describe a single-level, non-array 3D texture.
info = KtxTextureCreateInfo(
    gl_internal_format=None,
    vk_format=VkFormat.VK_FORMAT_R8G8B8_SRGB,
    base_width=image.shape[2],
    base_height=image.shape[1],
    base_depth=image.shape[0],
    num_dimensions=3,
    num_levels=1,
    num_layers=1,
    num_faces=1,
    generate_mipmaps=False,
    is_array=False,
)

# Allocate storage, upload the image data, and write the .ktx file.
texture = KtxTexture2.create(info, KtxTextureCreateStorage.ALLOC)
for face_slice in range(1):
    texture.set_image_from_memory(0, 0, face_slice, image[face_slice].tobytes())
texture.write_to_named_file(f'{size}.ktx')
Then I import it into the Shader Graph like this, but it seems nothing can be read.
In addition, I tested a 2D KTX file and it worked.
I also tested different vk_format values and KTX1/KTX2, but neither worked.
I also tested Image3DPixel/Image2DArray/Image3DRead, and none of them work as long as the texture is 3D.
I am able to create a custom node graph by selecting nodes and then choosing the "Compose Node Graph" option in the context menu. After that, when I select my custom node graph, I see in the top-right panel that it is possible to define inputs and outputs. However, I was not able to figure out how to link those to the inputs and outputs in the underlying nodes.
I am new to the graph editor and was able to achieve some results. However, I am noticing that my graphs are getting very tangled, confusing, and hard to debug. I was wondering:
Is it possible to define variables to store the value of computations and refer to them in other parts of the graph, without having to link them graphically? This would help tidy the tangled mess I created. In the "Explore materials in Reality Composer Pro" video, I saw that it is possible to create "instances", but I am not sure if that is what I need. For example, does the shader compiler optimize them so that there is no need to recompute each instance?
Is there any functionality to debug the graph, trying out inputs and seeing what the numeric outputs would be?
Hello,
I am looking to create a shader to update an entity's rendering. As a basic example, say I want to recolour an entity but leave its original textures showing through:
I understand that with visionOS I need to use Reality Composer Pro to create the shader, but I'm lost as to how to reference, in the node graph, the original colour that I'm trying to update. All my attempts appear to completely override the textures in the entity (and its sub-entities) that I want to affect. Also, the tutorials and examples I've looked at appear to create new materials rather than add an effect on top of existing materials.
Any hints or pointers?
Assuming this is possible, I've been trying to load the material in code and apply it to an entity. But do I need to do this for all child entities, or just the topmost one?
do {
    let entity = MyAssets.createModelEntity(.plane) // Loads from bundle and performs config
    let material = try await ShaderGraphMaterial(named: "/Root/TestMaterial",
                                                 from: "Test",
                                                 in: realityKitContentBundle)
    entity.applyToChildren {
        $0.components[ModelComponent.self]?.materials = [material]
    }
    root.addChild(entity)
} catch {
    fatalError(error.localizedDescription)
}
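For what it's worth, a material only affects the ModelComponent it is assigned to, so restyling a whole hierarchy means visiting every descendant. A minimal sketch of a helper like the applyToChildren used above (the function name is illustrative, not a RealityKit API):

import RealityKit

// Replace every material slot on this entity and all of its descendants.
func applyMaterialRecursively(_ material: ShaderGraphMaterial, to entity: Entity) {
    if var model = entity.components[ModelComponent.self] {
        model.materials = model.materials.map { _ in material as any Material }
        entity.components.set(model)
    }
    for child in entity.children {
        applyMaterialRecursively(material, to: child)
    }
}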
I just followed the documentation at https://developer.apple.com/documentation/shadergraph/realitykit/cube-image-(realitykit)
I have a .ktx file and use the CubeImage node to load it, then a Convert node, but it shows black.
I checked it on my Vision Pro and it's still black; I don't know why. Is something wrong?
P.S. I also used the Image node to load the .ktx file, and it shows one image, so I believe the .ktx file is right. I also checked this on the Vision Pro.
Hi, I'm trying to understand how I can get 3- or 4-channel per-vertex data into the Graph Editor.
From my tests, it seems that:
the "Geometric Property" node does not give access to 4-channel data,
"Geometric Property (vector3)" does not give me access to custom properties besides the ones defined in MaterialX core
the "Texture Coordinates" node has a vector4f mode (yay!), but according to MaterialX spec, texcoords must have 2 or 3 channels, and I can't get 4-channel data to show up there either.
My assumption so far is that I must be missing some "magic": for example, do the primvars in a file have to be in a specific order, independent of their names? Or do their names matter? (E.g. the convention would be primvars:st, primvars:st1, and so on.)
Unfortunately the forum doesn't allow me to attach any USDZ or ZIP files or GDrive links; if there's a way to share a test file I'm happy to do so!
The 3D furniture model I built uses some smooth, specular reflective materials. I want them to reflect only the HDR image of the ImageBasedLight component I set myself, without reflecting the real AR environment's lighting. How can I achieve this in the following scenarios?
How can PBR materials avoid being affected by the real AR environment's lighting?
When using Shader Graph, how can Environment Radiance not be affected by the real AR environment's lighting?
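One hedged approach to both questions, assuming visionOS 2-era RealityKit: attach your own image-based light and receiver, and reduce the real surroundings' contribution with the environment-lighting configuration component. The resource name "studio_hdr" and the weight value are assumptions.

import RealityKit

// Light an entity with a custom IBL and suppress real-environment lighting.
func applyCustomIBL(to entity: Entity) async throws {
    // Load the authored HDR environment (assumed to live in the app bundle).
    let environment = try await EnvironmentResource(named: "studio_hdr")

    // Attach the environment as an image-based light and make the model receive it.
    let iblEntity = Entity()
    iblEntity.components.set(ImageBasedLightComponent(source: .single(environment)))
    entity.addChild(iblEntity)
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblEntity))

    // visionOS 2: a weight of 0 removes the passthrough environment's lighting contribution.
    entity.components.set(EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0))
}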
A ShaderGraphMaterial with an Occlusion Surface output generated with Reality Composer Pro 2 fails to load on iOS 18 and macOS 15 with the following error:
RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)
This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)
RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1),
                                   materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial",
                                                              from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4),
                                     materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)
Feedback ID: FB15081296
The Shader Graph node Blurred Background (RealityKit), https://developer.apple.com/documentation/shadergraph/realitykit/blurred-background-(realitykit), works fine within the Reality Composer Pro 2 editor but isn't working on iOS 18 or macOS 15. Instead of the blurred content, it just renders as opaque in a single color (Screenshot 2).
Interestingly, it also fails to render within Reality Composer Pro when no other entities are in the scene, e.g. when only a background skybox is set.
Expected behavior: it would be great if this node worked the same way as it does on visionOS, since this would allow for really interesting and nice effects in scenes.
Feedback ID: FB15081190
I am trying to make a shader that resembles a laser like this:
I've been experimenting with a basic Fresnel shader to start, but the Fresnel shader has a problem at high viewing angles where the top has a very different color than the rest of the capsule.
This might work for a laser shader once inverted and fine-tuned:
However, when viewed from the top, it doesn't look so good anymore:
Ideally, the purple edge is always ONLY on the edge, and the rest of the surface is the light pink color, no matter the viewing angle. How can I accomplish this to create something that looks like a laser?
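For reference, here is the math a Fresnel-style rim mask computes, written out as plain code rather than nodes. The symbols and the mix step are illustrative of a typical node setup; a pure Fresnel term is inherently view-dependent, which is why caps whose normals face the camera lose the edge color.

import Foundation
import simd

// Rim factor: 0 where the surface faces the viewer, approaching 1 at grazing angles.
// N is the surface normal, V the direction from the surface toward the camera.
func rimFactor(normal N: SIMD3<Float>, view V: SIMD3<Float>, exponent: Float = 4) -> Float {
    let facing = abs(simd_dot(simd_normalize(N), simd_normalize(V)))
    return powf(1 - facing, exponent)
}

// Blend the base laser color toward the edge color by the rim factor,
// mirroring a Mix node driven by the Fresnel output.
func laserColor(base: SIMD3<Float>, edge: SIMD3<Float>, rim: Float) -> SIMD3<Float> {
    simd_mix(base, edge, SIMD3<Float>(repeating: rim))
}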
I have a custom material using Shader Graph in Reality Composer Pro, and I am trying to rig up sliders to control the shader's values. I am able to read the values from the Shader Graph without a problem, and I can even update them from the LLDB command line and read the new values back. But the changes are not reflected in the graphics. Is there some sort of update() method, or something else, that is required for the changed parameter values to take effect?
On a related note, I am trying to understand what the MaterialParameters.Handle property is and why one would access a MaterialParameter via the handle vs just the name.
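One hedged note on the first question: ShaderGraphMaterial is a value type, so setting a parameter on a local copy changes nothing on screen until that copy is written back into the entity's ModelComponent. A minimal sketch, with "Intensity" as an assumed parameter name:

import RealityKit

// Update a published parameter and write the mutated material and component back.
func setIntensity(_ value: Float, on entity: Entity) {
    guard var model = entity.components[ModelComponent.self],
          var material = model.materials.first as? ShaderGraphMaterial else { return }

    try? material.setParameter(name: "Intensity", value: .float(value))

    model.materials[0] = material   // store the mutated material back
    entity.components.set(model)    // store the mutated component back
}

As for handles: a MaterialParameters.Handle is essentially a pre-resolved reference to a parameter, which avoids repeated name lookups when a parameter is updated frequently (e.g. every frame).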
In my Reality Composer Pro environment, I intend for one of the entities to display a blue material when the user looks at it. To achieve this, I have added the following Shader Graph to the materials associated with this entity:
Additionally, I have added the HoverEffectComponent in the RealityView code:
RealityView { content in
    if let model = try? await Entity(named: "WorldScene", in: realityKitContentBundle) {
        let hoverEffect = HoverEffectComponent(.shader(.default))
        model.components.set(hoverEffect)
        content.add(model)
    }
}
However, when hovering over this entity, I am unable to observe any visual reaction. Could you please provide guidance on how to resolve this issue?
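A hedged pointer that often applies here: hover effects only fire on entities that can actually receive input, i.e. ones that also carry a CollisionComponent and an InputTargetComponent. A minimal sketch (the collision shape is illustrative and should match your content):

import RealityKit

// Make an entity eligible for gaze hover feedback.
func makeHoverable(_ model: Entity) {
    model.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.5)])) // hit-testing shape
    model.components.set(InputTargetComponent())                                     // can receive input
    model.components.set(HoverEffectComponent(.shader(.default)))                    // shader hover effect
}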
Hey there,
I'm wondering if anyone knows how to make the shader graph that is shown in the WWDC24 video. I tried a couple of things but couldn't get the same result. I could make it in Unity and Blender, but not in RCP.
Thank you.
https://developer.apple.com/wwdc24/10106
Unity 2022.3.33f1
For some reason, modifying a MeshRenderer material's shader via SetVectorArray doesn't work on iOS, but it works on Android and Windows builds!
I was working on a fog of war (FOW) effect using SimpleFOW by Revision3. It's a very simple FOW shader that manipulates the alpha based on UV coordinates.
This is the FogOfWarShaderControl.cs script
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

namespace SimpleFOW
{
    [RequireComponent(typeof(MeshRenderer))]
    public class FogOfWarShaderControl : MonoBehaviour
    {
        public static FogOfWarShaderControl Instance { get; private set; }

        [Header("Maximum amount of revealing points")]
        [SerializeField] private uint maximumPoints = 512;

        [Header("Game Camera")]
        [SerializeField] private Camera mainCamera;

        private List<Vector4> points = new List<Vector4>();
        private Vector2 meshSize, meshExtents;
        private Vector4[] sendBuffer;
        private MeshRenderer meshRenderer;

        private void Awake()
        {
            Instance = this;
            Init();
        }

        // Initialize required variables
        public void Init()
        {
            meshRenderer = GetComponent<MeshRenderer>();
            meshExtents = meshRenderer.bounds.extents;
            meshSize = meshRenderer.bounds.size;
            points = new List<Vector4>();
            sendBuffer = new Vector4[maximumPoints];
        }

        // Transform world point to UV coordinate of FOW mesh
        public Vector2 WorldPointToMeshUV(Vector2 wp)
        {
            Vector2 toRet = Vector2.zero;
            toRet.x = (transform.position.x - wp.x + meshExtents.x) / meshSize.x;
            toRet.y = (transform.position.y - wp.y + meshExtents.y) / meshSize.y;
            return toRet;
        }

        // Show or hide FOW
        public void SetEnabled(bool on)
        {
            meshRenderer.enabled = on;
        }

        // Add revealing point to FOW renderer if amount of points is lower than MAX_POINTS
        public void AddPoint(Vector2 worldPoint)
        {
            if (points.Count < maximumPoints)
            {
                points.Add(WorldPointToMeshUV(worldPoint));
            }
        }

        // Remove FOW revealing point
        public void RemovePoint(Vector2 worldPoint)
        {
            if (worldPoint == new Vector2(0, 0))
            {
                return;
            }
            if (points.Contains(WorldPointToMeshUV(worldPoint)))
            {
                points.Remove(WorldPointToMeshUV(worldPoint));
            }
        }

        // Send any change to revealing point list to shader for rendering
        public void SendPoints()
        {
            points.ToArray().CopyTo(sendBuffer, 0);
            meshRenderer.material.SetVectorArray("_PointArray", sendBuffer);
            meshRenderer.material.SetInt("_PointCount", points.Count);
        }

        // Send new range value to shader
        public void SendRange(float range)
        {
            meshRenderer.material.SetFloat("_RadarRange", range);
        }

        // Send new scale value to shader
        public void SendScale(float scale)
        {
            meshRenderer.material.SetFloat("_Scale", scale);
        }
    }
}
And this is the FogOfWar.shader
Shader "Revision3/FogOfWar"
{
Properties
{
_MainTex ("Texture", 2D) = "black" {}
_PointCount("Point count", Range(0,512)) = 0
_Scale("Scale", Float) = 1.0
_RadarRange("Range", Float) = .5
_MaxAlpha("Maximum Alpha", Float) = 1.0
}
SubShader
{
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
LOD 100
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
// make fog work
#pragma multi_compile_fog
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
UNITY_FOG_COORDS(1)
float4 vertex : SV_POSITION;
};
sampler2D _MainTex;
float4 _MainTex_ST;
float _RadarRange;
uint _PointCount;
float _Scale;
float _MaxAlpha;
float2 _PointArray[512];
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}
float getDistance(float2 pa[512], float2 uv) {
float cdist = 99999.0;
for (uint i = 0; i < _PointCount; i++) {
cdist = min(cdist, distance(pa[i]*_Scale, uv));
}
return cdist;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = tex2D(_MainTex, i.uv);
i.uv *= _Scale;
if (_PointCount > 0)
col.w = min(_MaxAlpha, max(0.0f, getDistance(_PointArray, i.uv) - _RadarRange));
else
col.w = _MaxAlpha;
return col;
}
ENDCG
}
}
}
Now I create a GameObject called FogOfWar as follows.
Then, in the Unit.cs and Building.cs scripts, I add the following logic:
private Vector3 lastPos;

private void Update()
{
    if (lastPos != transform.position)
    {
        FogOfWarShaderControl.Instance.RemovePoint(lastPos);
        FogOfWarShaderControl.Instance.AddPoint(transform.position);
        lastPos = transform.position;
        FogOfWarShaderControl.Instance.SendPoints();
    }
}
Now this gives me the following FOW effect on iOS:
whereas the result should be as follows on other devices:
I don't know what causes this to happen only on iOS devices.
The logic works fine on Android/Windows/Linux/editor, but not on iOS devices.
So why doesn't the Metal API support setting a shader vector array?!