Most models are only available as .glb or .fbx, so I usually re-export them to USDZ using Blender.
When I import them into Reality Composer Pro, the mesh, textures, etc. look great, but in the Animation Library section all I can see is a single default subtree animation.
In Blender I can see all available animations and play them individually. The default subtree animation just plays the default idle animation.
In fact, when I open the Nonlinear Animation view in Blender and select a different animation as the default, the exported USDZ shows the newly selected animation as the default subtree animation.
I can see in the Apple sample apps that models can have multiple animations in their Animation Library.
I'm using the latest Blender 4.5, so shouldn't the USDZ exporter be working properly?
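For reference, a quick way to check which animations actually made it into the exported file is to load it in RealityKit and print what it exposes. This is only a diagnostic sketch (the file name is a placeholder, and it needs to run in an async throwing context):

import RealityKit

// Placeholder file name for the re-exported USDZ.
let model = try await Entity(named: "ExportedModel")

// Prints whatever animations RealityKit exposes for this file.
for animation in model.availableAnimations {
    print(animation.name ?? "<unnamed animation>")
}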
That title would have made a great WWDC session. Unfortunately, it seems like nothing is new in Reality Composer Pro this year. I've noticed that all versions of the Xcode beta this summer have shipped with Reality Composer Pro version 2.0. There have been slight bumps in the build number, but I haven't found any new features or seen any documentation indicating that anything has changed.
So the question is: what is the state of Reality Composer Pro? Should we continue to use this tool, or start doing everything in code? A huge number of sample projects use Reality Composer Pro, so it seems like Apple is still using it even if they didn't update it this year.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering Quick Look to open the interactive AR experience.
That works really well.
I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export to .reality files anymore. It seems this is only for building content that ships inside native iOS etc. apps, rather than experiences that can be viewed in Quick Look.
Am I missing something, or is it no longer possible to export .reality files?
Thanks.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
QuickLook
Reality Composer
Reality Composer Pro
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0.
I saw WWDC24 feature Blender in multiple spatial computing videos, and I have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later).
I'm wondering if there are “Best Practices” for this workflow, from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I’ve cobbled together the following, which has some obvious holes:
I’ve been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from SketchFab - a simple rigged skeleton with 6 built in animations:
https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6
When exporting to USD from Blender, I haven’t been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible?
As a temporary workaround, here is a python script I’ve been using to loop through all Blender animations, and export a model for each animation:
import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")
I have also been able to successfully export rigging armatures as a single skeleton, with each “bone” getting imported into Reality Composer Pro 2.0 when exporting/importing manually.
I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS - so I have placed all animation models in a RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations.
I apply materials/textures to each so they appear the same, using Shader Graph.
Then in SwiftUI I use notifications (as shown here - https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.
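For reference, the notification call looks roughly like this (a sketch: "PlaySkeletonWave" is an example identifier that has to match the Notification action in the Timeline, and rootEntity stands in for the entity loaded from the RCP package):

// Post the notification that a Timeline "Notification" action listens for.
if let scene = rootEntity.scene {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": "PlaySkeletonWave" // example identifier
        ]
    )
}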
Two questions:
Is there a better way than to have multiple models of the same skeleton - each with a different animation - in a scene to be able to trigger multiple animations? Or would this require recreating Blender animations using skeleton rigging and keyframes from within RCP Timelines?
If I want to programmatically create custom animations and move parts of the skeleton/armatures - do I need to do this by defining custom components in RCP, using IKRig and define movement of each of the “bones” in Xcode?
I’m looking for any tips/tricks/workflow from experienced engineers or 3D artists that can create a more efficient/optimized workflow using Blender, USD, RCP 2 and visionOS 2 with SwiftUI.
Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
visionOS
It's all about using notifications to trigger actions from RCP's new Timeline system. After watching Compose interactive 3D content in Reality Composer Pro, I'm actually starting to wonder why there was a need to call Entity.applyTapForBehaviors in code to trigger content in a Behaviors Component, given that in the Behaviors Component we have already chosen OnTap to allow a "Tap Notification" to trigger our action (on a selected target object).
Then I guess that by selecting the OnCollision trigger, I should write something like CollisionEvent.entityA.applyCollisionForBehaviors, which doesn't exist. And of course the collision on my object won't trigger the action (because I only set things up in RCP, not in code).
I'm ignoring for now that this post has pointed out we could use the Behaviors Component's OnNotification trigger.
I found that I can still use the OnTap trigger and call Entity.applyTapForBehaviors from inside my subscribed collision Began event handler. That actually works better than OnCollision.
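Here is roughly what that workaround looks like, as a sketch with placeholder scene/entity names and assuming the standard RealityKitContent package:

import SwiftUI
import RealityKit
import RealityKitContent

struct CollisionBehaviorView: View {
    @State private var collisionSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            // Placeholder scene and entity names from the RCP package.
            guard let scene = try? await Entity(named: "Scene", in: realityKitContentBundle),
                  let target = scene.findEntity(named: "TargetObject") else { return }
            content.add(scene)

            // When a collision involving the target begins, reuse the
            // Behaviors Component's OnTap trigger to run its action.
            collisionSubscription = content.subscribe(to: CollisionEvents.Began.self, on: target) { _ in
                target.applyTapForBehaviors()
            }
        }
    }
}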
So what are the design principles here? And how can I trigger a collision notification so that my Behaviors Component's OnCollision trigger actually works?
Goal: To render, in an Apple Vision Pro app, the solid-mechanics 3D simulation results coming from an FEA code.
Starting point: I have surface VTKs with deformations on each node. Each time step has a mesh with the nodal coordinates. This is straightforwardly translatable to a USD MeshSequence. Unfortunately, the results cannot be simplified to a scaling or linear transformation as you would do with other game-oriented animations.
Tools: Right now, I am using Xcode and Reality Composer Pro (RCP) to build the scenes.
Technical limitations: I am aware that RCP can do animations with BlendMesh and skeletons, and that MeshSequence is not a problem.
Progress:
Converting the sequence of VTK meshes to a USD MeshSequence is straightforward. This animates correctly in Preview and Blender (see screenshot).
I managed to convert from MeshSequence to multiple key shapes and a BlendMesh. This also animates correctly in Blender and Preview. Unfortunately, the BlendMesh of multiple blended meshes shows a zero animation time in RCP (see screenshot below).
Also, see below the usda file scheme for the animation. Of course, I am not showing full vectors such as faceVertexCounts, faceVertexIndices, or normals.
Question: What is the right setup to create a BlendMesh animation that RCP will correctly import and animate, from a set of meshes or multiple key shapes?
Blender animation
Time zero RCP "animations"
#usda 1.0
(
    defaultPrim = "BlendMeshRoot"
    doc = "Blender v4.5.3 LTS"
    endTimeCode = 48
    framesPerSecond = 24
    metersPerUnit = 1
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Z"
)

def Xform "BlendMeshRoot" (
    customData = {
        dictionary Blender = {
            bool generated = 1
        }
    }
)
{
    def SkelRoot "Mesh"
    {
        custom string userProperties:blender:object_name = "Mesh"
        float3 xformOp:rotateXYZ = (89.99999, -0, 0)
        float3 xformOp:scale = (0.009999999, 0.01, 0.01)
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]

        def Mesh "Mesh" (
            active = true
            prepend apiSchemas = ["MaterialBindingAPI", "SkelBindingAPI"]
        )
        {
            uniform bool doubleSided = 1
            float3[] extent = [(25.091871, -34.121277, -13.298501), (299.94482, 245.10088, 202.35126)]
            int[] faceVertexCounts = [3, 3, ...
            int[] faceVertexIndices = [0, 10293, ...
            rel material:binding = </BlendMeshRoot/_materials/MeshSequence_Default>
            normal3f[] normals = [(-0.3632836, -0.9102419, -0.19870725), ....
            point3f[] points = [(244.41148, 155.42062, 70.454926),.....
            float3[] primvars:node_displacement = [(93.54703, 110.9341, 48.37992)....
            float3[] primvars:Normals = [(-0.0050530406, -0.9910114, -0.13368203),...
            int[] primvars:skel:jointIndices = [0, 0, 0, 0, 0 ...
            float[] primvars:skel:jointWeights = [1, 1, 1, 1, 1...
            uniform token[] skel:blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
            rel skel:blendShapeTargets = [
                </BlendMeshRoot/Mesh/Mesh/frame_0000>,
                .......
                </BlendMeshRoot/Mesh/Mesh/frame_0005>,
            ]
            prepend rel skel:skeleton = </BlendMeshRoot/Mesh/Skel>
            uniform token subdivisionScheme = "none"
            custom string userProperties:blender:data_name = "Mesh"
            custom float userProperties:originalTime
            float userProperties:originalTime.timeSamples = {
                0: 0,
            }

            def BlendShape "frame_0000"
            {
                uniform vector3f[] offsets = [(0, 0, 0), (0, 0, 0),.....
                uniform int[] pointIndices = [0, 1, 2, .....
            }
            .....
            .....
            #### BlendShape frame to 0005
            .....

        def Skeleton "Skel" (
            prepend apiSchemas = ["SkelBindingAPI"]
        )
        {
            uniform matrix4d[] bindTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            uniform token[] joints = ["joint1"]
            uniform matrix4d[] restTransforms = [( (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1) )]
            prepend rel skel:animationSource = </BlendMeshRoot/Mesh/Skel/Anim>

            def SkelAnimation "Anim"
            {
                uniform token[] blendShapes = ["frame_0000", "frame_0001", "frame_0002", "frame_0003", "frame_0004", "frame_0005"]
                float[] blendShapeWeights.timeSamples = {
                    0: [1, 0, 0, 0, 0, 0],
                    1: [0.9697085, 0.03029152, 0, 0, 0, 0],
                    2: [0.88787615, 0.11212383, 0, 0, 0, 0],
                    .....
                    46: [0, 0, 0, 0, 0.11212379, 0.8878762],
                    47: [0, 0, 0, 0, 0.030291557, 0.96970844],
                    48: [0, 0, 0, 0, 0, 1],
                }
            }
        }
    }

    def Scope "_materials"
    {
        def Material "MeshSequence_Default"
        {
            token outputs:surface.connect = </BlendMeshRoot/_materials/MeshSequence_Default/Principled_BSDF.outputs:surface>
            custom string userProperties:blender:data_name = "MeshSequence_Default"

            def Shader "Principled_BSDF"
            {
                uniform token info:id = "UsdPreviewSurface"
                float inputs:clearcoat = 0
                float inputs:clearcoatRoughness = 0.03
                color3f inputs:diffuseColor = (0.8, 0.4, 0.3)
                float inputs:ior = 1.5
                float inputs:metallic = 0
                float inputs:opacity = 1
                float inputs:roughness = 0.5
                float inputs:specular = 0.2
                token outputs:surface
            }
        }
    }

    def Scope "AnimationClips"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
    }

    def RealityKitComponent "AnimationLibrary"
    {
        custom rel animations = </BlendMeshRoot/Mesh/Skel/Anim>
        custom token info:id = "RealityKit.AnimationLibrary"
        custom double realitykit:approximateDuration = 2
        custom double[] realitykit:clipDurations = [2]
        custom string[] realitykit:clipNames = ["Anim"]
        custom rel realitykit:clipTargets = </BlendMeshRoot/Mesh/Skel/Anim>
        custom double realitykit:frameRate = 24
        custom bool realitykit:isAnimationLibrary = 1
    }
}
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Swift Packages
Developer Tools
Reality Converter
Reality Composer
The ShaderGraph node Blurred Background (RealityKit) – https://developer.apple.com/documentation/shadergraph/realitykit/blurred-background-(realitykit) – works fine within the Reality Composer Pro 2 editor but isn't working on iOS 18 or macOS 15. Instead of the blurred content, it just renders as opaque in a single color (Screenshot 2).
Interestingly, it also fails to render within Reality Composer Pro when no other entities are in the scene, e.g. with only a background skybox set.
Expected Behavior: It would be great if this node worked the same way as it does on visionOS since this would allow for really interesting and nice effects for scenes.
Feedback ID: FB15081190
A ShaderGraphMaterial with an Occlusion Surface Output generated with Reality Composer Pro 2 fails to load on iOS 18 and macOS 15 with the following error:
RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound (https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)
This happens with both https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit) and https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)
RealityView { content in
    do {
        let bgEntity = ModelEntity(mesh: .generateCone(height: 0.5, radius: 0.1), materials: [SimpleMaterial(color: .red, isMetallic: true)])
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(named: "/Root/OcclusionMaterial", from: "OcclusionMaterial")
        let testEntity = ModelEntity(mesh: .generateSphere(radius: 0.4), materials: [occlusionMaterial])
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)
Feedback ID: FB15081296
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
My friend cannot build my visionOS project in the simulator. He gets the following error.
Error:
[xrsimulator] Exception thrown during compile: cannotGetRkassetsContents(path: "/Users/path/to/Packages/RealityKitContent/Sources/RealityKitContent/RealityKitContent.rkassets")
In Xcode, he is able to open the RealityKitContent package in Reality Composer Pro by clicking on the Package.realitycomposerpro file. No warnings related to this error show up in RCP either, and all scenes appear to be usable/navigable in RCP. The error only comes up when he builds the project in Xcode (Command-B). There is no other information about this error in the Report Navigator's build logs, and it is always followed by this next error.
Error:
Tool exited with code 1
Yikes, please help!
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code.
However, I have several instances of the same model in my scene.
Is it possible to make only one specific model respond to a notification?
For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
USDZ
Reality Composer
RealityKit
visionOS
I sketched an idea for a project in Reality Composer on my iPad, thinking that when I had a chance to sit down I would work it up in Xcode.
However, when I got back to my computer, I discovered I cannot open a file created in Reality Composer (or the exported Reality file) in Reality Composer Pro.
Am I missing something obvious here? Because this seems like a huge oversight.
If anyone can let me know how to open a file created in Reality Composer in Reality Composer Pro, I would greatly appreciate it, partly because there seem to be objects available in Reality Composer that are not in Reality Composer Pro.
Thanks
Stan
Was the original per-app memory limit on Vision Pro extended? Has the memory limit been relaxed?
In sessions like Compose interactive 3D content in Reality Composer Pro, RealityKit engineers recommended working with Reality Composer Pro to create RealityKit packages to embed in our RealityKit Xcode projects.
And, comparing the workflow to Unity/Unreal, I can see the reasoning, since it is nice to prepare scenes/materials/assets visually.
But when we also want to run an Xcode Cloud CI/CD pipeline, this seems to come into conflict:
When adding a basic *.usdz to the RealityKitContent.rkassets folder, every build we run on Xcode Cloud fails with:
Compile Reality Asset RealityKitContent.rkassets
❌realitytool requires Metal for this operation and it is not available in this build environment
I have also found this related forum post here but it was specifically about compiling a *.skybox.
Reality Composer is no longer available in the Xcode 15 release. Is this intended, or will it be available in later releases? Do I need to revert to Xcode 15 beta 8 to get Reality Composer?
I made an animation in Blender using geometry nodes and exported it to a USDC file (then I used Reality Converter to convert it to USDZ). I can see the animation when viewing the file in the Finder, but it does not play after importing it into RCP. Any idea how I can play the animation? Or can the animation be accessed through Xcode?
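If the animation can be accessed through Xcode, I assume it would be via something like the sketch below (the file name is a placeholder), though I haven't confirmed that the geometry-nodes animation survives the conversion:

// Load the converted USDZ and play the first animation RealityKit finds, if any.
let model = try await Entity(named: "GeometryNodesModel") // placeholder name
if let animation = model.availableAnimations.first {
    model.playAnimation(animation.repeat())
} else {
    print("RealityKit found no animations in this file")
}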
Thanks!
Hi!
I'm trying to play a video on a monitor 3D model, as if it were a material applied to the screen.
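From what I've read so far, RealityKit's VideoMaterial might be one way to do this. The sketch below is what I'm imagining (the video URL and the screen entity are placeholders), but I'm not sure it's the right approach:

import RealityKit
import AVFoundation

// Placeholder URL and entity; screenEntity would be the monitor's display mesh.
let player = AVPlayer(url: videoURL)
let videoMaterial = VideoMaterial(avPlayer: player)

if var model = screenEntity.components[ModelComponent.self] {
    model.materials = [videoMaterial] // swap the screen's material for the video
    screenEntity.components.set(model)
}
player.play()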
I want to know if this approach is possible. I searched about it, but I couldn't find enough information. Thank you in advance.
Hi,
In the downloadable WWDC sample project "CreatingASpaceshipGame" there is an audio file named "WorkMusic.aiff", which is also mentioned in the video. The file info says it's PCM, 4-channel quadraphonic.
Where can I find further information on how this file was authored? Was it simply exported from Logic Pro with quadraphonic surround settings, or did it have some other specific treatment?
Thanks,
Axel
Hi!
I want to ask whether it's possible to make a mirror material with Shader Graph in Reality Composer Pro. The mirror should reflect entities in the Reality Composer Pro scene.
I found that this works with SceneKit, but I'm using RealityKitContent in my project. Are there ways to solve this?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
Reality Composer Pro
Shader Graph Editor
visionOS
Hello!
https://forums.developer.apple.com/forums/thread/762763
I read this thread, and this is similar to what I'm trying to do.
I have two entities in the scene: "HandTrackingEntity" and "HandScanner".
"HandTrackingEntity": I put an Anchoring component and a Collision component (Trigger) here.
"HandScanner": I put a Behaviors component (OnCollision) and a Collision component here.
Here are pictures of how I set up the components.
I also set the physicsSimulation property to .none.
I was expecting the Timeline to play when I put my hand (with HandTrackingEntity) on the "HandScanner" entity, but it didn't work.
Am I missing some steps? I would also like sample code to understand how to apply the physicsSimulation property. I'd appreciate it if you could let me know.
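For context, this is how I currently understand the physicsSimulation setting would look if done from code rather than in RCP (this is only my assumption, which is part of why I'm asking for sample code):

// Assumption: anchor the entity to the hand, then opt it out of its own
// physics simulation so its collision shape interacts with the rest of the scene.
var anchoring = AnchoringComponent(.hand(.left, location: .palm))
anchoring.physicsSimulation = .none
handTrackingEntity.components.set(anchoring)
handTrackingEntity.components.set(
    CollisionComponent(shapes: [.generateSphere(radius: 0.05)], isStatic: false)
)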
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Reality Composer Pro
visionOS
Reality Composer Pro question related to custom components
My custom component defines some properties to edit in RCP. Simple ones work fine, but SIMD3 and SIMD2 do not. I'd expect to see the default values, but instead I get 0s. If I try to run with those, the scene doesn't load. Once I enter some values it does, and building and running again works fine.
More generally, does Apple have documentation on creating properties for components? The only examples I've seen show simple strings and floats. There are no details about vectors, conditional options, grouping properties, etc.
public struct EntitySpawnerComponent: Component, Codable {
    public enum SpawnShape: String, Codable {
        case domeUpper
        case domeLower
        case sphere
        case box
        case plane
        case circle
    }

    // These properties get their default values in RCP
    /// The number of clones to create
    public var Copies: Int = 12
    /// The shape to spawn entities in
    public var SpawnShape: SpawnShape = .domeUpper
    /// Radius for spherical shapes (dome, sphere, circle)
    public var Radius: Float = 5.0

    // These properties DO NOT get their default values in RCP. They all show 0
    /// Dimensions for box spawning (width, height, depth)
    public var BoxDimensions: SIMD3<Float> = SIMD3(2.0, 2.0, 2.0)
    /// Dimensions for plane spawning (width, depth)
    public var PlaneDimensions: SIMD2<Float> = SIMD2(2.0, 2.0)
    /// Track if we've already spawned copies
    public var HasSpawned: Bool = false

    public init() {
    }
}
}