Reality Composer Pro


Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.

Posts under Reality Composer Pro tag

119 Posts

Post · Replies · Boosts · Views · Activity

Apple's Choice: USDZ over Other 3D File Formats like GLTF
Hello Dev Community, I've been thinking about Apple's preference for USDZ for AR and 3D content, especially given the widely used GLTF. I'm keen to discuss and hear your insights on this choice. USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, GLTF is also a popular format with its own merits, such as being an open standard and offering flexibility. Here are some of my questions regarding the use of USDZ:

Why did Apple choose USDZ over other 3D file formats like GLTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or ease of integration have influenced Apple's decision?

I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
Replies: 4 · Boosts: 1 · Views: 6k · Activity: Oct ’24
Exporting .reality files from Reality Composer Pro
I've been using the macOS Xcode Reality Composer to export interactive .reality files that can be hosted on the web and linked to, triggering QuickLook to open the interactive AR experience. That works really well. I've just downloaded the Xcode 15 beta, which ships with the new Reality Composer Pro, and I can't see a way to export .reality files anymore. It seems this is only for building content that ships in native iOS (etc.) apps, rather than content that can be viewed in QuickLook. Am I missing something, or is it no longer possible to export .reality files? Thanks.
Replies: 3 · Boosts: 2 · Views: 1.9k · Activity: Jul ’25
How can I simultaneously apply the drag gesture to multiple entities?
I wanted to drag EntityA while also dragging EntityB independently. I've tried to separate them by entity, but only the latest drag gesture is recognized:

RealityView { content, attachments in
    ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in ... }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in ... }
)

I also tried using .simultaneously(with:), but that didn't work either; maybe I'm missing something:

.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in ... }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in ... }
        )
)
Replies: 1 · Boosts: 1 · Views: 634 · Activity: Mar ’25
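For the drag question above, here is a minimal sketch (not from the original post; the view name, sphere setup, and positions are placeholders): one DragGesture targeted to any entity, branching on value.entity, so each entity can be dragged independently without stacking .gesture modifiers.

import SwiftUI
import RealityKit

struct MultiDragView: View {
    var body: some View {
        RealityView { content in
            // Two draggable spheres standing in for EntityA and EntityB.
            for x: Float in [-0.2, 0.2] {
                let sphere = ModelEntity(mesh: .generateSphere(radius: 0.05),
                                         materials: [SimpleMaterial(color: .blue, isMetallic: false)])
                sphere.position = [x, 1.2, -1]
                // Both of these are required for an entity to receive gestures.
                sphere.components.set(InputTargetComponent())
                sphere.generateCollisionShapes(recursive: true)
                content.add(sphere)
            }
        }
        // A single gesture targeted to *any* entity; the hit entity arrives in value.entity.
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}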
Anchor a Reality scene on an image anchor
I'm developing a prototype Vision Pro app and would like to render a 3D scene made in Reality Composer Pro on an image anchor in a RealityView, but I've had no luck so far making it work and need some guidance to move on. The image file is stored in the assets, and here is the source code:

import SwiftUI
import RealityKit
import RealityKitContent

struct AnchorView: View {
    @State var imageEntity: Entity = {
        let anchorEntity = AnchorEntity(.image(group: "AR Resources", name: "reanchor"))
        return anchorEntity
    }()

    var body: some View {
        RealityView { content in
            do {
                // Add the initial RealityKit content
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    imageEntity.addChild(scene)
                    content.add(imageEntity)
                }
            } catch {
                print("Error occurs when adding reality view content: \(error)")
            }
        }
    }
}
Replies: 3 · Boosts: 0 · Views: 1.2k · Activity: 3w
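One thing worth checking in the code above (a suggestion, not a confirmed fix): the catch block never runs because try? swallows the error. A variant of the same RealityView closure that surfaces load failures:

RealityView { content in
    do {
        // try (not try?) so a failed load actually lands in the catch block.
        let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
        imageEntity.addChild(scene)
        content.add(imageEntity)
    } catch {
        print("Error occurred when adding RealityView content: \(error)")
    }
}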
Animations exported from Blender do not show in Reality Composer Pro
I made an animation in Blender using geometry nodes and exported it to a USDC file (then used Reality Converter to convert it to USDZ). I can see the animation when previewing the file in the Finder, but it does not play after importing into RCP. Any idea how I can play the animation? Or can the animation be accessed through Xcode? Thanks!
Replies: 4 · Boosts: 0 · Views: 1k · Activity: Apr ’25
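As a starting point for the playback side of the question above, here is a minimal sketch (assuming the file ships in the RealityKitContent bundle; the function and entity names are placeholders) that lists and plays whatever animations RealityKit parsed from the asset:

import RealityKit
import RealityKitContent

func playImportedAnimations(entityNamed name: String) async throws -> Entity {
    let entity = try await Entity(named: name, in: realityKitContentBundle)
    // If this array is empty, the animation was lost during export/conversion
    // rather than at playback time.
    print("Found \(entity.availableAnimations.count) animation(s)")
    for animation in entity.availableAnimations {
        entity.playAnimation(animation.repeat(), transitionDuration: 0.3, startsPaused: false)
    }
    return entity
}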
Blender to Reality Composer Pro 2.0 to SwiftUI + RealityKit visionOS Best Practices
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0. I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later). I'm wondering if there are "Best Practices" for this workflow, from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I've cobbled together the following, which has some obvious holes:

I've been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from Sketchfab, a simple rigged skeleton with 6 built-in animations: https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6

When exporting to USD from Blender, I haven't been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible? As a temporary workaround, here is a Python script I've been using to loop through all Blender animations and export a model for each animation:

import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")

I have also been able to successfully export rigging armatures as a single skeleton, with each "bone" getting imported into Reality Composer Pro 2.0 when exporting/importing manually.

I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS, so I have placed all animation models in an RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations. I apply materials/textures to each with Shader Graph so they appear the same. Then, in SwiftUI, I use notifications (as shown here: https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.

Two questions:

Is there a better way than having multiple models of the same skeleton, each with a different animation, in a scene in order to trigger multiple animations? Or would this require recreating the Blender animations using skeleton rigging and keyframes from within RCP Timelines?

If I want to programmatically create custom animations and move parts of the skeleton/armatures, do I need to do this by defining custom components in RCP, using IKRig, and defining the movement of each of the “bones” in Xcode?

I'm looking for any tips/tricks/workflow from experienced engineers or 3D artists that can create a more efficient/optimized workflow using Blender, USD, RCP 2, and visionOS 2 with SwiftUI. Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
Replies: 4 · Boosts: 2 · Views: 1.1k · Activity: Apr ’25
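For reference, the notification approach mentioned in the post above generally looks like the following sketch (the identifier and entity are placeholders; the Behaviors component in RCP is assumed to have an OnNotification trigger with a matching identifier):

import Foundation
import RealityKit

func triggerTimeline(identifier: String, from entity: Entity) {
    // The entity must already be in a scene for the notification to reach RCP behaviors.
    guard let scene = entity.scene else { return }
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}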
Reality Composer Pro 2.0 shader graphs can't be loaded on visionOS 1
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface. On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity. However, on visionOS 1.2 and earlier, either in the Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and an error is logged to the console.

If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.

Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1? Is this intentional behavior? I've submitted feedback as FB14828873, including a sample project and repro steps. If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]," or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]," or "We recommend [workaround]." Thank you.
Replies: 7 · Boosts: 0 · Views: 1.4k · Activity: May ’25
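While waiting for guidance, one defensive-loading sketch (not an Apple-recommended workaround; the material path and file name are placeholders) is to fall back to a plain material when the graph fails to load on older OS versions:

import RealityKit
import RealityKitContent

func loadGraphOrFallback() async -> any RealityKit.Material {
    do {
        return try await ShaderGraphMaterial(named: "/Root/UnlitTextureMaterial",
                                              from: "Scene.usda",
                                              in: realityKitContentBundle)
    } catch {
        print("ShaderGraphMaterial failed to load: \(error)")
        // Keeps the entity visible even when the graph can't be parsed.
        return UnlitMaterial(color: .white)
    }
}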
How to trigger actions by OnCollision in Behaviors Component
It's all about using notifications to trigger actions from RCP's new Timeline system, as covered in Compose interactive 3D content in Reality Composer Pro. I'm starting to get confused about why we need to call Entity.applyTapForBehaviors in code to trigger content in the Behaviors component, given that in the Behaviors component we have chosen OnTap to allow a "Tap Notification" to trigger our action (on a selected target object). By that logic, after selecting the OnCollision trigger, I should write something like CollisionEvent.entityA.applyCollisionForBehaviors, which doesn't exist. And of course the collision on my object doesn't trigger the action (because I only set things up in RCP, not in code). Setting aside that this post has pointed out we could use the Behaviors component's OnNotification trigger for now, I found that I could still use the OnTap trigger but call Entity.applyTapForBehaviors from my subscribed collision begin event. That actually works better than OnCollision. So what are the design principles here? And how can I trigger a collision notification so that my Behaviors component's OnCollision trigger actually works?
Replies: 1 · Boosts: 2 · Views: 583 · Activity: Mar ’25
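The workaround described above roughly corresponds to this sketch (the scene name is a placeholder; the entities are assumed to have collision shapes and a Behaviors component with an OnTap trigger set up in RCP):

import SwiftUI
import RealityKit
import RealityKitContent

struct CollisionTriggerView: View {
    @State private var subscription: EventSubscription?

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
            // Keep the subscription alive; on each collision begin event, fire the
            // Behaviors component's OnTap trigger in place of the missing collision API.
            subscription = content.subscribe(to: CollisionEvents.Began.self) { event in
                event.entityA.applyTapForBehaviors()
            }
        }
    }
}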
ShaderGraphMaterial with Occlusion Surface Output fails to load on iOS and macOS
A ShaderGraphMaterial with an Occlusion Surface Output generated with Reality Composer Pro 2 fails to load on iOS 18 and macOS 15 with the following error:

RealityFoundation.ShaderGraphMaterial.LoadError.invalidTypeFound
(https://developer.apple.com/documentation/realitykit/shadergraphmaterial/loaderror/invalidtypefound)

This happens with both
https://developer.apple.com/documentation/shadergraph/realitykit/occlusion-surface-(realitykit)
and
https://developer.apple.com/documentation/shadergraph/realitykit/shadow-receiving-occlusion-surface-(realitykit)

RealityView { content in
    do {
        let bgEntity = ModelEntity(
            mesh: .generateCone(height: 0.5, radius: 0.1),
            materials: [SimpleMaterial(color: .red, isMetallic: true)]
        )
        bgEntity.position.z = -0.2
        content.add(bgEntity)

        let occlusionMaterial = try await ShaderGraphMaterial(
            named: "/Root/OcclusionMaterial",
            from: "OcclusionMaterial"
        )
        let testEntity = ModelEntity(
            mesh: .generateSphere(radius: 0.4),
            materials: [occlusionMaterial]
        )
        content.add(testEntity)
        content.cameraTarget = testEntity
    } catch {
        print("Shader Graph Load Error:")
        dump(error)
    }
}
.realityViewCameraControls(.orbit)
.edgesIgnoringSafeArea(.all)

Feedback ID: FB15081296
Replies: 1 · Boosts: 1 · Views: 712 · Activity: Jan ’25
Xcode 16 / Reality Composer Pro 2 segmentation fault issue
After installing macOS Sequoia, Xcode 16, and Reality Composer Pro 2, my Apple Vision Pro projects (which worked perfectly with Xcode 15) started giving me a Tool terminated by signal 'Segmentation fault: 11' error while compiling RealityKitContent assets. This happens only when I try to build the project with .usdz models exported from Blender; when I try with sample models from the Apple website, it works fine with no errors. Is there any solution?
Replies: 2 · Boosts: 0 · Views: 610 · Activity: Dec ’24
Managing Reality Composer Pro timelines from code
Is it possible to manage the behavior of a timeline entirely from code? I am exploring the Compose interactive 3D content in Reality Composer Pro sample project after seeing the related video, but the example only shows the use of Behaviors in RCP to activate timeline actions. I was wondering if it is possible to somehow retrieve some kind of timeline controller that gives me access to its state, just like AnimationPlaybackController does for single animations. What I would like to achieve is being able to play/pause a timeline and retrieve its timestamp in order to synchronize different users over SharePlay.
Replies: 1 · Boosts: 0 · Views: 593 · Activity: Oct ’24
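For comparison, this is the kind of control AnimationPlaybackController already gives for a single animation; the post above is asking for an equivalent handle for RCP timelines (the function name is a placeholder):

import RealityKit

func playAndInspect(_ entity: Entity) {
    guard let animation = entity.availableAnimations.first else { return }
    let controller = entity.playAnimation(animation)
    controller.pause()                 // pause/resume on demand
    controller.resume()
    let timestamp = controller.time    // current playback time, usable for SharePlay sync
    print("Playback position: \(timestamp)")
}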
Protobuf Error Crash
We are developing a mixed reality app for Vision Pro using Reality Composer Pro, but we're consistently encountering a Protobuf-related crash whenever we enter the immersive space. Our Reality Composer Pro package is quite complex, with numerous objects. Could the complexity of the package be contributing to the issue, or could something else be at play? No errors are being flagged in the code or Reality Composer itself. Here’s the error log:

[libprotobuf FATAL /Library/Caches/com.apple.xbs/Sources/REKit/ThirdParty/protobuf/src/google/protobuf/io/zero_copy_stream_impl_lite.cc:276] CHECK failed: (count) >= (0):
libc++abi: terminating due to uncaught exception of type google::protobuf::FatalException: CHECK failed: (count) >= (0):

Any insight on what might be causing this would be appreciated.
Replies: 3 · Boosts: 1 · Views: 753 · Activity: Oct ’24
[Unity & Xcode(ARKit, RealityKit) & visionOS] Is it possible to combine a project made with Unity and a project made with Xcode into one app?
Hi, I’m working on a portfolio project for Vision Pro these days. I have two projects: one made with Unity and one made with Xcode (using ARKit and RealityKit tracking features). Is it possible to combine these two projects into one app? For example, using buttons made with SwiftUI in a Reality Composer Pro scene to jump to a scene in Unity, and then going back from the Unity scene to a Reality Composer Pro scene, all within a single app.
Replies: 1 · Boosts: 0 · Views: 860 · Activity: Oct ’24
Attaching VideoMaterial to DockingRegion
I have a VideoMaterial inside a RealityView and want to attach it to a DockingRegion inside an immersive environment. Adding the VideoMaterial entity as a child of the docking region somewhat works, but there are no lighting effects (specular, diffuse) from the playing video. So essentially: how can you add a VideoMaterial to a DockingRegion and achieve the same reflections/behavior as when using AVPlayerViewController? The latter is not an option, as I need custom controls.
Replies: 2 · Boosts: 1 · Views: 556 · Activity: Dec ’24
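The DockingRegion part aside, the basic VideoMaterial setup referenced above looks roughly like this sketch (the URL and dimensions are placeholders; it does not answer the lighting question):

import AVFoundation
import RealityKit

func makeVideoPlane(url: URL) -> ModelEntity {
    let player = AVPlayer(url: url)
    let material = VideoMaterial(avPlayer: player)
    // A vertical 16:9 plane carrying the playing video.
    let plane = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                            materials: [material])
    player.play()
    return plane
}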
[Reality Composer Pro] Is it possible to play a timeline via Xcode without adding a Behaviors component to entities?
Hi! I'm making a project with Xcode and Reality Composer Pro. I'm trying to play a Reality Composer Pro timeline from code without setting Behaviors on the entities. I also tried sending a notification from Xcode to entities in Reality Composer Pro to play the timeline (I already set up "OnNotification" with the Behaviors component), but it's not working, and I couldn't figure out what the problem is. Are there any solutions?
Replies: 1 · Boosts: 1 · Views: 920 · Activity: Oct ’24
ktx compression method
I've been working on generating a KTX-format cubemap using the xcrun realitytool image --generate-cube-map command, but I'm encountering an issue with the file size. The cubemap file ends up being around 128 MB (2K), which is too large for my needs. I'm hoping to get some advice on a few points:

Is there any way to reduce the file size of the KTX cubemap generated with this command? I’d appreciate any tips on compression settings or alternative approaches that could help shrink the file size while retaining good quality.

Is there any documentation available for this command? I've been exploring on my own, but a comprehensive guide would be helpful.

Are there alternative methods for converting textures to the KTX format? If anyone has experience with other tools or workflows that work well for this, please share!
Replies: 0 · Boosts: 0 · Views: 270 · Activity: Oct ’24
[ARKit] Is it possible to remember a certain room using Room Tracking?
Hi! I'm making content using Room Tracking for Vision Pro these days, so I searched for information about it. Here are the links I visited, but I could not find the info I wanted:

Apple ARKit
Create enhanced spatial computing experiences with ARKit
RoomTrackingProvider

I want to know whether it's possible to remember a room structure that was recognized before and add content at a certain world anchor in the room space when the user enters the room again. For example, a developer could save the room structure, room info (with a room ID), and a world anchor for the room with the Room Tracking feature. After this, the developer could add entities via Xcode and Reality Composer Pro at certain positions in the room to show content to users when they enter the room, so users can see the content whenever they visit the room. Is this possible? If there are example codes or projects about it, please let me know.
Replies: 1 · Boosts: 0 · Views: 719 · Activity: Nov ’24
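The world-anchor half of this idea can be sketched as follows (not a full room-persistence solution; function names are placeholders): world anchors added to a WorldTrackingProvider persist across sessions, so their UUIDs can be stored alongside your own room identifier and used to re-place content on a later visit.

import ARKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Run once, e.g. when the immersive space opens.
func startTracking() async throws {
    try await session.run([worldTracking])
}

// Place an anchor and return its persistent identifier to store with your room ID.
func placePersistentAnchor(at transform: simd_float4x4) async throws -> UUID {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    return anchor.id
}

// On a later visit, previously added anchors are delivered again as .added updates.
func observeAnchors(place: @escaping (UUID, simd_float4x4) -> Void) async {
    for await update in worldTracking.anchorUpdates where update.event == .added {
        place(update.anchor.id, update.anchor.originFromAnchorTransform)
    }
}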