Blender to Reality Composer Pro 2.0 to SwiftUI + RealityKit visionOS Best Practices

Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0.

I saw Blender featured in multiple WWDC24 spatial-computing sessions, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging; more on that later).

I'm wondering if there are “best practices” for this workflow, from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I've cobbled together the following, which has some obvious holes:

I’ve been able to find some pre-rigged, pre-animated models online that serve as a great starting point. As a reference, here is a free model from Sketchfab, a simple rigged skeleton with six built-in animations:

https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6

  1. When exporting to USD from Blender, I haven’t been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this simply not possible?

  2. As a temporary workaround, here is a Python script I’ve been using to loop through all Blender actions and export a separate model for each animation:

import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame
    
    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a copy of the original scene for this action.
    # Note: Scene.copy() links (shares) the objects, so assigning an
    # action below also affects the original scene's objects.
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action
    
    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range
    
    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Switch back to the original scene, then delete the temporary copy
    bpy.context.window.scene = bpy.data.scenes[original_scene]
    bpy.data.scenes.remove(new_scene)

print("Export completed.")
  3. I have also been able to export the rigging armature as a single skeleton; each “bone” comes through correctly when I export/import manually into Reality Composer Pro 2.0.

  4. I would like all of these animations available in a single scene for use in a RealityView in visionOS, so I have placed every animation model in one RCP scene and created a named Timeline action for each, showing the relevant model and hiding the rest when a specific animation is triggered.

  5. I apply the same materials/textures to each model using Shader Graph so they all appear identical.

  6. Then, in SwiftUI, I use notifications (as shown here: https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline action from code; a sketch of that pattern follows this list.
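For reference, here is that notification pattern as I currently understand it from the linked thread and the WWDC24 Reality Composer Pro session. This is a minimal sketch: triggerTimeline is my own helper name, and the identifier string is whatever you set on the Timeline’s Notification trigger in RCP.

import RealityKit

/// Fire a named RCP 2.0 Timeline from Swift by posting the notification
/// that its Notification trigger listens for. The entity must already be
/// attached to the RealityView's content so that its scene is non-nil.
func triggerTimeline(named identifier: String, on rootEntity: Entity) {
    guard let scene = rootEntity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// For example, from a SwiftUI Button action:
// triggerTimeline(named: "WalkAnimation", on: skeletonEntity)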

Two questions:

  1. Is there a better way than keeping multiple copies of the same skeleton in the scene, each with a different animation, just to be able to trigger multiple animations? Or would this require recreating the Blender animations with skeleton rigging and keyframes inside RCP Timelines? (One possible alternative is sketched after this list.)
  2. If I want to programmatically create custom animations and move parts of the skeleton/armature, do I need to define custom components in RCP and use IKRig, defining the movement of each “bone” in Xcode? (A joint-transform sketch also follows below.)
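On question 1, one pattern I’ve seen suggested (I can’t say it’s the official best practice) is to keep a single rigged model in the scene and treat the other per-animation USDC files purely as animation donors, pulling their availableAnimations and playing them on the base entity. A rough sketch, assuming hypothetical names ("Skeleton", "Walk", "Run", matching the Blender batch export above) and that the joint names line up across exports:

import RealityKit
import RealityKitContent

/// Load the rigged model once, then load each per-animation USDC purely
/// as an animation donor and collect its clips by name.
func loadSkeletonWithClips() async throws -> (Entity, [String: AnimationResource]) {
    let base = try await Entity(named: "Skeleton", in: realityKitContentBundle)
    var clips: [String: AnimationResource] = [:]
    for clipName in ["Walk", "Run"] {
        let donor = try await Entity(named: clipName, in: realityKitContentBundle)
        if let animation = donor.availableAnimations.first {
            clips[clipName] = animation
        }
    }
    return (base, clips)
}

// Later, play a clip on the single base model. As I understand it, the
// binding succeeds when the donor's joint/entity names match the base's.
// let controller = base.playAnimation(clips["Walk"]!, transitionDuration: 0.3)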
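On question 2, you don’t necessarily need custom RCP components: visionOS 2 adds IKComponent/IKRig for goal-driven posing, but RealityKit also lets you set joint transforms on the skinned ModelEntity directly. A minimal sketch, assuming the skinned mesh is a ModelEntity and using a hypothetical bone name from the Blender armature:

import RealityKit
import simd

/// Directly pose a single joint on a rigged ModelEntity, with no IK.
/// "UpperArm_L" is a hypothetical joint name; check model.jointNames
/// for the actual paths your armature produces.
func rotateArm(of model: ModelEntity, by angle: Float) {
    guard let index = model.jointNames.firstIndex(where: { $0.hasSuffix("UpperArm_L") })
    else { return }
    var transforms = model.jointTransforms
    transforms[index].rotation = simd_quatf(angle: angle, axis: [0, 0, 1])
    model.jointTransforms = transforms
}

// Calling this from a RealityView update closure (or a custom System)
// every frame gives fully programmatic control over the armature.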

I’m looking for any tips, tricks, or workflow advice from experienced engineers or 3D artists who can suggest a more efficient, optimized pipeline using Blender, USD, RCP 2, and visionOS 2 with SwiftUI.

Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!

Hi there!

I'm actually running into the same issue.

I don't know how to get several animations onto one model.

Did you find a solution to this?

Thanks a lot
