I'm porting over some code that uses ARKit to Swift 6 (with Complete Strict Concurrency Checking enabled).
Some methods on ARSCNViewDelegate, namely Coordinator.renderer(_:didAdd:for:) along with at least one other, are causing a consistent crash. On Swift 5 this code works absolutely fine.
The above method consistently crashes with _dispatch_assert_queue_fail. My assumption is that in Swift 6 a trap has been inserted by the compiler to validate that my downstream code is running on the main thread.
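(To sanity-check that assumption: in Swift 5 mode, a throwaway delegate like the one below logs false on device, i.e. SceneKit delivers the callback on its render thread rather than the main thread.)

import ARKit
import Foundation

// Diagnostic only (Swift 5 mode): report which thread renderer(_:didAdd:for:) arrives on.
final class ThreadCheckDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        print("didAdd on main thread? \(Thread.isMainThread)")   // logs "false"
    }
}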
In Implementing a Main Actor Protocol That’s Not @MainActor, Quinn “The Eskimo!” seems to address scenarios of this nature with three proposed workarounds, yet none of them seems feasible here.
For #1, marking ContentView.addPlane(renderer:node:anchor:) nonisolated and using @preconcurrency import ARKit compiles but still crashes :(
For #2, applying @preconcurrency to the ARSCNViewDelegate conformance declaration site just yields this warning: @preconcurrency attribute on conformance to 'ARSCNViewDelegate' has no effect
For #3, as Quinn recognizes, this is a non-starter as ARSCNViewDelegate is out of our control.
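For reference, this is roughly what my attempt at #1 looks like (a simplified sketch; PlaneView stands in for the real ContentView below):

@preconcurrency import ARKit
import SwiftUI

// Sketch of workaround #1: a nonisolated callback target plus @preconcurrency import.
// This compiles under Swift 6, but the dispatch assertion still fires at runtime,
// presumably because whatever invokes this path is still treated as main-actor-isolated.
struct PlaneView: View {
    var body: some View { Color.clear }

    nonisolated func addPlane(renderer: SCNSceneRenderer, node: SCNNode, anchor: ARAnchor) {
        // Same body as ContentView.addPlane in the repro below.
    }
}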
A minimal reproducible example is below. Simply run the app, scan your camera back and forth across a well-lit environment, and the app should crash within a few seconds. Switch over to Swift Language Version 5 in build settings, retry, and you'll see the same code works fine.
import ARKit
import SwiftUI

struct ContentView: View {
    @State private var arViewProxy = ARSceneProxy()
    private let configuration: ARWorldTrackingConfiguration
    @State private var planeFound = false

    init() {
        configuration = ARWorldTrackingConfiguration()
        configuration.worldAlignment = .gravityAndHeading
        configuration.planeDetection = [.horizontal]
    }

    var body: some View {
        ARScene(proxy: arViewProxy)
            .onAddNode { renderer, node, anchor in
                addPlane(renderer: renderer, node: node, anchor: anchor)
            }
            .onAppear {
                arViewProxy.session.run(configuration)
            }
            .onDisappear {
                arViewProxy.session.pause()
            }
            .overlay(alignment: .top) {
                if !planeFound {
                    Text("Slowly move device horizontally side to side to calibrate")
                } else {
                    Text("Plane found!")
                        .bold()
                        .foregroundStyle(.green)
                }
            }
    }

    private func addPlane(renderer: SCNSceneRenderer, node: SCNNode, anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let device = renderer.device,
              let planeGeometry = ARSCNPlaneGeometry(device: device)
        else { return }
        planeFound = true
        planeGeometry.update(from: planeAnchor.geometry)
        let material = SCNMaterial()
        material.isDoubleSided = true
        material.diffuse.contents = UIColor.white.withAlphaComponent(0.65)
        planeGeometry.materials = [material]
        let planeNode = SCNNode(geometry: planeGeometry)
        node.addChildNode(planeNode)
    }
}

struct ARScene {
    private(set) var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?
    private let proxy: ARSceneProxy

    init(proxy: ARSceneProxy) {
        self.proxy = proxy
    }

    func onAddNode(
        perform action: @escaping (SCNSceneRenderer, SCNNode, ARAnchor) -> Void
    ) -> Self {
        var view = self
        view.onAddNodeAction = action
        return view
    }
}

extension ARScene: UIViewRepresentable {
    func makeUIView(context: Context) -> ARSCNView {
        let arView = ARSCNView()
        arView.delegate = context.coordinator
        arView.session.delegate = context.coordinator
        proxy.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARSCNView, context: Context) {
        context.coordinator.onAddNodeAction = onAddNodeAction
    }

    func makeCoordinator() -> Coordinator {
        Coordinator()
    }
}

extension ARScene {
    class Coordinator: NSObject, ARSCNViewDelegate, ARSessionDelegate {
        var onAddNodeAction: ((SCNSceneRenderer, SCNNode, ARAnchor) -> Void)?

        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            onAddNodeAction?(renderer, node, anchor)
        }
    }
}

@MainActor
class ARSceneProxy: NSObject, @preconcurrency ARSessionProviding {
    fileprivate var arView: ARSCNView!

    @objc dynamic var session: ARSession {
        arView.session
    }
}
Any help is greatly appreciated!
Thank you again for pushing the web forward in visionOS 2, super exciting!
The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences.
Samples such as this one are not supported:
https://immersive-web.github.io/webxr-samples/immersive-ar-session.html
In Settings > Safari, there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything.
Is this the expected behavior at this time? Any developer preview(s) we could try?
After upgrading to Xcode 16 my app, which utilizes imported project files from my iPad's Reality Composer app, now has two issues that I have found so far. I am using an ARView as a UIViewRepresentable with SwiftUI. (Prior to upgrading to Xcode 16 everything worked well.)
First, there are now several duplicate rcp_export.usdz resources in the "Copy Bundle Resources" build phase section. Even though each file is in a separate folder with a unique UUID, it was causing a compile error saying there are duplicate files. I was able to open the RC project folder and delete the older rcp_project versions which now allows the app to compile. I mention it as it may or may not be related to the second issue.
Second, Xcode isn't generating the project code for rcproject, so when I call the RCProject.loadSceneAsync function I am getting an error that says "Cannot find 'RCProject' in scope"
We got very excited when we saw support for the PSVR2 on WWDC!
Particularly interesting is WebXR to us, so we got the controllers to give it a try.
Unfortunately they only register as gamepads in the navigator, but not as XRInputDevices.
We went through the experimental flags and didn't find anything that is directly related. Is there a flag we missed? If not, when do you have PSVR2 support planned for WebXR?
As mentioned in https://forums.developer.apple.com/forums/thread/756736?answerId=810096022#810096022
Is there any update on full support for the WebXR AR Module, which should enable immersive-ar mode?
Are the features such as DOM overlays and WebGPU bindings on the roadmap?
Is it possible to capture stereoscopic video either internally or externally or via AirPlay for debugging purposes?
Thanks
I am working on adding synchronized physical properties to EntityEquipment in TabletopKit, allowing seamless coordination between players during GroupActivities sessions.
Current Approach and Limitations
I have tried setting EntityEquipment's state to DieState and treating it as a TossableRepresentation object. This approach achieves basic physical properties synchronized across players. However, it has several limitations:
No Collision Detection Between Dice: Multiple dice do not collide with each other.
Shape Limitations: Custom shapes, like parallelepipeds, cannot be configured.
Below is my existing code for a basic EntityEquipment without physical properties:
struct CubeWithPhysics: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id
        self.entity = entity
        initialState = .init(parentID: .tableID, pose: .init(position: .zero, rotation: .zero), entity: self.entity)
    }
}
I’d appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
Ever since updating to Xcode 16, my AR app doesn't compile because Xcode doesn't recognize the .rcproject files used to load the AR experiences in the iOS app. The .rcproject files were authored in Reality Composer on iPadOS.
The expected behavior is described in this official Apple documentation article: https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
How do I submit a ticket to Apple?
In Beta 1,2, and 3, we could pick up and inspect entities, bringing them closer while moving them outside of the bounds of a volume.
As of Beta 4, these entities are now clipped by the bounds of the volume. I'm not sure if this is a bug or an intended change, but I filed a Feedback report (FB19005083). The release notes don't mention a change in behavior, at least not that I can find.
Is this an intentional change or a bug?
Here is a video that shows the issue.
https://youtu.be/ajBAaSxLL2Y
In the earlier visionOS 26 betas, I could move these entities out of the volume and inspect them close up. Releasing them would return them to the volume. Now they are clipped as soon as they reach the edge of the volume.
I haven't had a chance to test with windows or with the SwiftUI modifier version of manipulation.
That title would have made a great WWDC session. Unfortunately, it seems like nothing is new in Reality Composer Pro this year. I've noticed that all versions of the Xcode beta this summer have shipped with Reality Composer Pro version 2.0. There have been slight bumps in the build number. I haven't found any new features or seen any documentation to indicate that anything has changed.
So the question is, what is the state of Reality Composer Pro? Should we continue to use this tool or start doing everything in code? A huge number of Sample Projects use Reality Composer Pro, so it seems like Apple is still using it even if they didn't update it this year.
I downloaded Xcode 16 and updated my macOS to 15, but I keep getting this error when trying to build the game in the simulator or on a device:
[xrsimulator] Exception thrown: The operation couldn’t be completed. (realitytool.RKAssetsCompiler.RKAssetsCompilerError error 3.)
Hi all,
Our app allows a user to scan a room and then save that scan on a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.
The only way I have found that meets our requirements is to save the RoomCaptureView and to re-use that RoomCaptureView whenever we need to start a session again. This creates a number of other issues, and ideally, we wouldn't need to save a View in something like a singleton. We are using captureSession.stop(pauseARSession: false). Additionally, if we use the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again when we reuse this view (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs.
These instructions also seem to be separate from the instructions we can grab from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant to this, RoomCaptureCoachingOverlayView and ARGlyphView, but neither is public, so we can't force them to appear. We also attempted a number of other things to try to get these subviews to appear, such as layoutIfNeeded().
Saving the ARSession and using it in let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession) where we're creating a new view with the same ARSession seems much more ideal as that solves the above issues, but we run into another issue: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already started ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.
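For concreteness, here's roughly how we're wiring that up (simplified; ScanCoordinator is just a stand-in for our own type):

import ARKit
import RoomPlan
import UIKit

// Simplified sketch of the ARSession-sharing approach described above.
@MainActor
final class ScanCoordinator {
    // One ARSession kept alive across scans so they can share a world map.
    let sharedARSession = ARSession()
    private var roomCaptureView: RoomCaptureView?

    func beginScan(in bounds: CGRect) {
        // A fresh RoomCaptureView (and RoomCaptureSession) backed by the existing ARSession.
        let view = RoomCaptureView(frame: bounds, arSession: sharedARSession)
        roomCaptureView = view
        view.captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func endScan() {
        // Stop room capture but keep the underlying ARSession running, which we
        // expected to preserve world tracking for the next scan.
        roomCaptureView?.captureSession.stop(pauseARSession: false)
    }
}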
Is there any way around needing to use the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
This post documents an issue I reported in feedback FB19610114 and asks if anyone knows of a workaround. Here is a copy of the feedback.
Short version
Manipulation (SwiftUI OR RealityKit) fails to translate entities after changing rooms. By changing rooms, I mean a human wearing an Apple Vision Pro leaving one room and entering another room. Once this issue occurs, it impacts all apps that use these features. A device restart is the only solution I have to fix it.
Feedback FB19610114
This is an odd one. I'm using the new Manipulation Component in visionOS 26. Most of the time this works well. Sometimes it stops working, and when it does, the only way to get it working again is to reboot the headset.
When this happens, I can continue to rotate and scale items, but translation no longer works. It is as if the item is stuck to a fixed point in the parent scene (window, volume, etc). When this bug occurs, it affects every app across the entire operating system that is using manipulation, including the RealityKit component AND the SwiftUI version. This is not limited to one app and is not limited to apps that I am working on. Once this error occurs, it affects literally any application across the operating system that is using this API, including apps from Apple.
I won't speculate on the cause of this, but I do know of one way where I can always get it to happen.
Here is how to reproduce it:
Make an Xcode project with a single entity that uses the Manipulation Component. There is no need to customize the configuration of this component; the default implementation will work (a minimal setup sketch follows these steps).
Build and run this app on device. You can keep running from device or quit and launch the app like normal on device.
Open the app and manipulate the entity - it should work as expected.
Physically walk into another room. It is vital that you leave the current room that you are in and enter a different room entirely.
Use the digital crown to recenter your view and bring your window or volume to you.
Test the manipulation on the entity again - it should still be working as expected at this point.
Physically, move yourself and your headset into the original room where you started.
Use the digital crown to recenter your view and bring your window or volume to you.
Test the manipulation on the entity again - you should now see the issue.
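For step 1, the setup I use is essentially the stock one-liner. I'm writing this from memory, so treat the exact ManipulationComponent call as approximate:

import SwiftUI
import RealityKit

// Minimal step-1 setup: one entity with the default Manipulation Component configuration.
struct ManipulationReproView: View {
    var body: some View {
        RealityView { content in
            let box = ModelEntity(
                mesh: .generateBox(size: 0.2),
                materials: [SimpleMaterial(color: .white, isMetallic: false)])
            // Adds the Manipulation Component (plus the collision/input components
            // it needs) with its default configuration.
            ManipulationComponent.configureEntity(box)
            content.add(box)
        }
    }
}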
When I follow the steps above, manipulation translation stops working 100% of the time. It impacts any application using this API, and the only way to fix it is to restart my headset.
A few points to keep in mind
It does not matter if an app is actively being run from Xcode.
When this occurs, it impacts every single app, not just one.
When this occurs, rotation and scaling continue to work, but the entity/view cannot be translated.
This impacts BOTH the SwiftUI version and the RealityKit version.
When this occurs, the only way to "fix" it is to reboot the device.
I am attempting to use the Barcode Detection enterprise API. I have the necessary entitlements and license file. I'm following the sample code online, and whenever I attempt to run the barcode detection using arKitSession.run I get the following error message:
ar_barcode_detection_provider_t <0x300d82130>: Failed to run provider with transient error code: 1
It obviously isn't running the barcode detection, even though it's running in an immersive space in mixed mode. Any idea what might be going on?
Starting in visionOS 26, users can snap windows to surfaces. These windows are locked in place and are later restored by visionOS. We can access the snapping data via surfaceSnappingInfo (see the docs).
Users can also lock a free-floating (unsnapped) window from a context menu in the window controls.
Is there a way to tell when a window has been locked without being snapped to a surface?
Has RoomPlan been abandoned? Two years have gone by without comments from Apple on improvements. Are the improvements behind the scenes? Are there going to be any major updates?
This is all about using notifications to trigger actions from RCP's new Timeline system. After watching Compose interactive 3D content in Reality Composer Pro, I am starting to get confused about why there was a need to call Entity.applyTapForBehaviors in code to trigger content in the Behaviors Component, given that in the Behaviors Component we have already chosen OnTap to allow a "Tap Notification" to trigger our action (on a selected target object).
Then I guess that by selecting the OnCollision trigger, I should write something like CollisionEvent.entityA.applyCollisionForBehaviors, which we don't have. And of course the collision on my object won't trigger this action (because I only did things in RCP, not in code).
Setting aside that this post has pointed out we could use the Behaviors Component's OnNotification trigger for now:
I found that I could still use the OnTap trigger, but put my call to Entity.applyTapForBehaviors inside my subscribed collision begin event. That actually works better than OnCollision.
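Concretely, the workaround I landed on looks roughly like this ("Scene" and "MyObject" are placeholder names for my RCP scene and target entity):

import SwiftUI
import RealityKit
import RealityKitContent

// Keep an OnTap trigger in the Behaviors Component, but fire it from code
// whenever a collision involving the target entity begins.
struct CollisionFiresTapBehaviorView: View {
    @State private var subscription: EventSubscription?

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "Scene", in: realityKitContentBundle),
                  let target = scene.findEntity(named: "MyObject") else { return }
            content.add(scene)

            // On collision begin, replay the target's OnTap behavior (and its Timeline).
            subscription = content.subscribe(to: CollisionEvents.Began.self, on: target) { _ in
                target.applyTapForBehaviors()
            }
        }
    }
}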
So what are the design principles here? And how could I trigger a collision notification so that my Behaviors Component's OnCollision trigger actually works?
The issue is reproducible with an empty project. When you run it and tap "Open immersive space", it takes a couple of minutes to respond. The issue is only reproducible on a real device with the debugger attached. It is reproducible by other developers too (not specific to my environment). The issue doesn't exist in Xcode 16.
After the initial long delay, subsequent opens work fine.
Console logs:
nw_socket_copy_info [C1:2] getsockopt TCP_INFO failed [102: Operation not supported on socket]
nw_socket_copy_info getsockopt TCP_INFO failed [102: Operation not supported on socket]
Failed to set dependencies on asset 9303749952624825765 because NetworkAssetManager does not have an asset entity for that id.
void * _Nullable NSMapGet(NSMapTable * _Nonnull, const void * _Nullable): map table argument is NULL
PSO compilation completed for driver shader copyFromBufferToTexture so=0 sbpr=256 sbpi=16384 ss=(64, 64, 1) p=70 sc=1 ds=0 dl=0 do=(0, 0, 0) in 1997
XPC connection interrupted
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
Failed to load item AXCodeItem<0x14706f250> [Rank:6000] SpringBoardUIServices [AXBundle name:/System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle/SpringBoardUIServices] [Platforms and Targets:{ iOS = SpringBoardUIServices; } Framework] [Excluded: (null)]. error: Error Domain=AXLoading Code=0 "URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle" UserInfo={NSLocalizedDescription=URL does not exist: file:///System/Library/AccessibilityBundles/SpringBoardUIServices.axbundle}
[b30780-MRUIFeedbackTypeButtonWithBackgroundTouchDown] Playback timed out before completion (after 3111 ms)
Failed to set dependencies on asset 7089614247973236977 because NetworkAssetManager does not have an asset entity for that id.
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering QuickLook to open the interactive AR experience.
That works really well.
I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export to .reality files anymore. It seems that this is only for building apps that ship as native iOS (etc.) apps, rather than content that can be viewed in QuickLook.
Am I missing something, or is it no longer possible to export .reality files?
Thanks.
I’ve been having some issues removing anchors. I can add anchors with no issue. They will be there the next time I run the scene. I can also get updates when ARKit sends them. I can remove anchors, but not all the time. The method I’m using is to call removeAnchor() on the data provider.
worldTracking.removeAnchor(forID: uuid)
// Yes, I have also tried `removeAnchor(_ worldAnchor: WorldAnchor)`
This works if there is more than one anchor in a scene. When I'm down to one remaining anchor, I can still remove it, and it seems to succeed (it does not raise an error), but the next time I run the scene the removed anchor is back. This only happens when there is only one remaining anchor.
do {
    // This always runs, but it doesn't seem to "save" the removal when there is only one anchor left.
    try await worldTracking.removeAnchor(forID: uuid)
} catch {
    // I have never seen this block fire!
    print("Failed to remove world anchor \(uuid) with error: \(error).")
}
I posted a video on my website if you want to see it happening.
https://stepinto.vision/labs/lab-051-issues-with-world-tracking/
Here is the full code. Can you see if I’m doing something wrong? Is this a bug?
struct Lab051: View {
    @State var session = ARKitSession()
    @State var worldTracking = WorldTrackingProvider()
    @State var worldAnchorEntities: [UUID: Entity] = [:]
    @State var placement = Entity()
    @State var subject: ModelEntity = {
        let subject = ModelEntity(
            mesh: .generateSphere(radius: 0.06),
            materials: [SimpleMaterial(color: .stepRed, isMetallic: false)])
        subject.setPosition([0, 0, 0], relativeTo: nil)

        let collision = CollisionComponent(shapes: [.generateSphere(radius: 0.06)])
        let input = InputTargetComponent()
        subject.components.set([collision, input])

        return subject
    }()

    var body: some View {
        RealityView { content in
            guard let scene = try? await Entity(named: "WorldTracking", in: realityKitContentBundle) else { return }
            content.add(scene)

            if let placementEntity = scene.findEntity(named: "PlacementPreview") {
                placement = placementEntity
            }
        } update: { content in
            for (_, entity) in worldAnchorEntities {
                if !content.entities.contains(entity) {
                    content.add(entity)
                }
            }
        }
        .modifier(DragGestureImproved())
        .gesture(tapGesture)
        .task {
            try! await setupAndRunWorldTracking()
        }
    }

    var tapGesture: some Gesture {
        TapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                if value.entity.name == "PlacementPreview" {
                    // If we tapped the placement preview cube, create an anchor
                    Task {
                        let anchor = WorldAnchor(originFromAnchorTransform: value.entity.transformMatrix(relativeTo: nil))
                        try await worldTracking.addAnchor(anchor)
                    }
                } else {
                    Task {
                        // Get the UUID we stored on the entity
                        let uuid = UUID(uuidString: value.entity.name) ?? UUID()
                        do {
                            try await worldTracking.removeAnchor(forID: uuid)
                        } catch {
                            print("Failed to remove world anchor \(uuid) with error: \(error).")
                        }
                    }
                }
            }
    }

    func setupAndRunWorldTracking() async throws {
        if WorldTrackingProvider.isSupported {
            do {
                try await session.run([worldTracking])
                for await update in worldTracking.anchorUpdates {
                    switch update.event {
                    case .added:
                        let subjectClone = subject.clone(recursive: true)
                        subjectClone.isEnabled = true
                        subjectClone.name = update.anchor.id.uuidString
                        subjectClone.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        worldAnchorEntities[update.anchor.id] = subjectClone
                        print("🟢 Anchor added \(update.anchor.id)")
                    case .updated:
                        guard let entity = worldAnchorEntities[update.anchor.id] else {
                            print("No entity found to update for anchor \(update.anchor.id)")
                            return
                        }
                        entity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
                        print("🔵 Anchor updated \(update.anchor.id)")
                    case .removed:
                        worldAnchorEntities[update.anchor.id]?.removeFromParent()
                        worldAnchorEntities.removeValue(forKey: update.anchor.id)
                        print("🔴 Anchor removed \(update.anchor.id)")
                        if let remainingAnchors = await worldTracking.allAnchors {
                            print("Remaining Anchors: \(remainingAnchors.count)")
                        }
                    }
                }
            } catch {
                print("ARKit session error \(error)")
            }
        }
    }
}
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0.
I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later).
I'm wondering if there are “Best Practices” for this workflow - from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I’ve cobbled together the following, which has some obvious holes:
I’ve been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from SketchFab - a simple rigged skeleton with 6 built in animations:
https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6
When exporting to USD from Blender, I haven’t been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible?
As a temporary workaround, here is a python script I’ve been using to loop through all Blender animations, and export a model for each animation:
import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export the current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")
I have also been able to successfully export rigging armatures as a single Skeleton, with each “bone” getting imported into Reality Composer Pro 2.0 when exporting/importing manually.
I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS - so I have placed all animation models in a RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations.
I apply materials/textures to each so they appear the same, using Shader Graph.
Then in SwiftUI I use notifications (as shown here - https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.
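For reference, the notification post I use to fire a given Timeline looks roughly like this (the identifier matches the Notification trigger's name in RCP; "WalkAnimation" below is just a placeholder):

import Foundation
import RealityKit

// Fire an RCP Behaviors "Notification" trigger, which in turn plays its Timeline.
// The entity just needs to be part of the loaded RCP scene.
func triggerTimeline(named identifier: String, from entity: Entity) {
    // In the sample code I'm copying from, the "Scene" entry wants the RealityKit
    // scene the entity lives in; adjust if your setup differs.
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Usage: triggerTimeline(named: "WalkAnimation", from: loadedSceneEntity)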
Two questions:
Is there a better way than to have multiple models of the same skeleton - each with a different animation - in a scene to be able to trigger multiple animations? Or would this require recreating Blender animations using skeleton rigging and keyframes from within RCP Timelines?
If I want to programmatically create custom animations and move parts of the skeleton/armatures - do I need to do this by defining custom components in RCP, using IKRig and define movement of each of the “bones” in Xcode?
I’m looking for any tips/tricks/workflow from experienced engineers or 3D artists that can create a more efficient/optimized workflow using Blender, USD, RCP 2 and visionOS 2 with SwiftUI.
Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!