Reality Composer Pro

Prototype and produce content for AR experiences using Reality Composer Pro.

Multi-platform app for visionOS and iOS: How to include 3D models for both?
I created an app for visionOS using Reality Composer Pro. Now I want to turn it into a multi-platform app that also runs on iOS. RCP files are not supported on iOS, however. So I tried the "old" Reality Composer instead, but that doesn't seem to work either: Xcode 15 no longer includes it, and I read online that files created with Xcode 14's Reality Composer cannot be included in Xcode 15 projects. Also, Xcode 14 does not run on my M3 Mac with Sonoma. That's a bummer. What is the recommended way to include 3D content in apps that support both visionOS AND iOS? (I also read that a solution might be to use USDZ for both, but what would that workflow look like? Are there samples out there that support both platforms? Please note that I want to set up the anchors myself, in code. I just need the composing tool to create the 3D content that will be placed on those anchors.)
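For the USDZ-for-both-platforms route, here is a minimal hedged sketch of how the shared code could look, assuming iOS 18's RealityView camera support and a placeholder "Robot.usdz" in the app bundle (the file name and anchor parameters are illustrative, not from the post):

import SwiftUI
import RealityKit

struct SharedModelView: View {
    var body: some View {
        RealityView { content in
            #if os(iOS)
            // On iOS 18+, RealityView can drive the AR camera directly.
            content.camera = .spatialTracking
            #endif

            // Anchor created in code, as described above; the USDZ only supplies geometry.
            let anchor = AnchorEntity(.plane(.horizontal,
                                             classification: .any,
                                             minimumBounds: [0.2, 0.2]))
            if let model = try? await Entity(named: "Robot") {
                anchor.addChild(model)
            }
            content.add(anchor)
        }
    }
}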
Replies: 1 · Boosts: 0 · Views: 412 · Activity: Jun ’24
Live updates of spatial image metadata changes
Session 10166 goes into wonderful detail about the construction of spatial photos, and the various parameters that define the relationship between left and right images. The session provides everything I need to know to combine a left and right frame, and create a spatial image output. But I'd like to do a live preview on the Vision Pro. Change the baseline and see what it looks like. See the horizontal disparity/convergence adjustment and move the image back and forth. Cropping, and vertical alignment, would be easy to implement live. Horizontal disparity, and baseline length? I'm baffled. How would I create a Shader Graph to let me make these adjustments using sliders or similar affordances, and pipe the results to a Camera Index Switch? I already have a working stereography app, but the stereo parameters are not interactive at all. I could regenerate the spatial image after each change and refresh the display, but that is awfully clunky. What's a better way?
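One way to make such parameters interactive without regenerating the spatial image each time is to promote them as inputs on the Shader Graph material (feeding the Camera Index Switch) and drive them from SwiftUI. A hedged sketch, assuming an RCP material at "/Root/SpatialPreviewMaterial" with a promoted float input named "HorizontalDisparity" in an "Immersive.usda" scene (all of these names are hypothetical):

import SwiftUI
import RealityKit
import RealityKitContent

struct DisparityPreview: View {
    @State private var disparity: Float = 0
    @State private var previewEntity: ModelEntity?

    var body: some View {
        VStack {
            RealityView { content in
                // Load the Shader Graph material authored in Reality Composer Pro.
                var material = try? await ShaderGraphMaterial(
                    named: "/Root/SpatialPreviewMaterial",   // hypothetical material path
                    from: "Immersive.usda",                  // hypothetical scene file
                    in: realityKitContentBundle)
                try? material?.setParameter(name: "HorizontalDisparity",
                                            value: .float(disparity))

                let plane = ModelEntity(mesh: .generatePlane(width: 1, height: 0.5))
                if let material { plane.model?.materials = [material] }
                previewEntity = plane
                content.add(plane)
            }

            Slider(value: $disparity, in: -0.05 ... 0.05)
                .onChange(of: disparity) { _, newValue in
                    // Push the new value into the promoted shader input; no image regeneration needed.
                    guard var material = previewEntity?.model?.materials.first
                            as? ShaderGraphMaterial else { return }
                    try? material.setParameter(name: "HorizontalDisparity",
                                               value: .float(newValue))
                    previewEntity?.model?.materials = [material]
                }
        }
    }
}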
Replies: 2 · Boosts: 0 · Views: 284 · Activity: Jun ’24
How to ship SDK RealityKit entity components that can be used and applied within a customer's application?
We deliver an SDK that enables rich spatial computing experiences. We want to enable our customers to develop apps using Swift or Reality Composer Pro. Reality Composer Pro allows the creation of custom components via the Add Component button in the inspector panel, and these source files are dropped into the Reality Composer Pro package directory. We would like our customers to be able to import our SDK's components into their application's Reality Composer Pro package, so that our components are visible and can be applied by the customer in their scene compositions. How can we achieve this? We believe this would lead to a rich ecosystem of component extensions for Reality Composer Pro.
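For context, this is roughly what such a vendable component looks like in code. A hedged sketch (all names are hypothetical); note that Reality Composer Pro today typically only lists components whose source files live inside the project's own RealityKitContent package, which is exactly the limitation this post is asking about:

import RealityKit

// A hypothetical component an SDK could vend.
public struct SpatialHotspotComponent: Component, Codable {
    public var radius: Float = 0.25
    public var isActive: Bool = true

    public init() {}
}

public enum MySDK {
    /// Register custom components once at app launch so RealityKit can
    /// decode them from the USD scene at runtime.
    public static func registerComponents() {
        SpatialHotspotComponent.registerComponent()
    }
}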
Replies: 4 · Boosts: 0 · Views: 329 · Activity: Jun ’24
Object anchor not working with ARKit in iOS
With WWDC 24, I was excited to see that Apple is bringing their APIs from visionOS to iOS. I tried using the Object Anchoring component in Reality Composer Pro. While this works with a Vision Pro, it looks like the entity spawns at the origin if we run the same thing on iOS, and the object anchoring doesn't seem to work. Is this intended? Below is how I'm doing this. I added an Anchoring component and added the .referenceObject file I trained using CreateML. This is the code I'm using to load this scene in:

// GrootView.swift
// ARTest-New
//
// Created by Sravan Karuturi on 6/10/24.
//

import SwiftUI
import RealityKit
import Box

struct GrootView: View {
    @StateObject private var grootVM = GrootViewModel()
    @State private var ent: Entity? = nil
    @State var anchor: Entity? = nil
    @State var wallAnchor: Entity? = nil
    @State var floorAnchor: Entity? = nil

    var body: some View {
        RealityView { content in
            #if os(iOS)
            await content.setupWorldTracking()
            content.camera = .worldTracking
            #endif

            ent = try? await Entity(named: "Box", in: boxBundle)
            print(ent?.children)

            anchor = ent?.findEntity(named: "ObjectAnchor")
            wallAnchor = ent?.findEntity(named: "WallAnchor")
            floorAnchor = ent?.findEntity(named: "FloorAnchor")

            let updateSum = content.subscribe(to: SceneEvents.Update.self) { event in
                if let anc = anchor, anc.isAnchored {
                    print("Found Item")
                }
                if let anc = floorAnchor, anc.isAnchored {
                    print("Found Floor")
                }
                if let anc = wallAnchor, anc.isAnchored {
                    print("Wall Anchor")
                }
            }

            content.add(ent!)
        }
    }
}

#Preview {
    GrootView()
}

While something similar seems to work on visionOS, the same doesn't seem to work on iOS. When I run this app, we see all the children, and "Found Item" is printed constantly even when the item is not in the scene. I'm not really sure if this is just not supported yet on iOS (I really hope that's not the case) or if I messed something up somehow.
Replies: 2 · Boosts: 1 · Views: 287 · Activity: Jun ’24
How to write a USDZ with a Quick Look configuration?
This week I was watching https://developer.apple.com/videos/play/wwdc2024/10105/ with the amazing "configuration" feature to change the color or mesh straight in Quick Look. I tried a lot of workarounds, but nothing brought me success. How do I write in the USDA files? Any time I overwrite the USDA, even with just "{}" inside, Reality Composer Pro rejects the file and won't open it again. Where is the developer in the tutorial writing the USDA? How is the USDA compressed into the USDZ? (None of the compressors I tried accepted the modified USDA file.) This is the code suggested in the video:

#usda 1.0
(
    defaultPrim = "iPhone"
)

def Xform "iPhone" (
    variants = {
        string Color = "Black_Titanium"
    }
    prepend variantSets = ["Color"]
)
{
    variantSet "Color" = {
        "Black_Titanium" {
        }
        "Blue_Titanium" {
        }
        "Natural_Titanium" {
        }
        "White_Titanium" {
        }
    }
}

But I don't understand how to do this with my own files.
Replies: 3 · Boosts: 0 · Views: 316 · Activity: Jun ’24
Custom USDA question
Hello. I want to play an MP4 file with a VideoMaterial and AVPlayer. First I used Reality Composer Pro: I created a material using the sphere provided by default in Reality Composer Pro and exported it to USDZ. When I play the MP4 file on that sphere material, it plays fine. But a custom material I created (for example, a 3D model made in shaper3d) does not play well. I made a custom curved mesh in shaper3d and exported it to USDZ, then put the curved mesh into a Reality Composer Pro scene and exported that to USDZ as well. When I play the MP4 file on the curved material, it does not play correctly: the video does not fit the surface. How can I adjust and display the video on a custom USDA file?
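A hedged sketch of applying a VideoMaterial to the custom mesh in code ("CurvedScreen", "Screen", and "video.mp4" are placeholder names, not from the post). If the video plays but looks stretched or misplaced on the curved surface, the usual culprit is the mesh's UV mapping, which has to be laid out in the modeling tool before export:

import SwiftUI
import RealityKit
import AVFoundation

struct VideoOnMeshView: View {
    var body: some View {
        RealityView { content in
            guard let url = Bundle.main.url(forResource: "video", withExtension: "mp4"),  // placeholder asset
                  let scene = try? await Entity(named: "CurvedScreen"),                   // placeholder USDZ name
                  let screen = scene.findEntity(named: "Screen") as? ModelEntity          // placeholder entity name
            else { return }

            let player = AVPlayer(url: url)
            // VideoMaterial samples the video using the mesh's UV coordinates.
            screen.model?.materials = [VideoMaterial(avPlayer: player)]
            player.play()
            content.add(scene)
        }
    }
}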
Replies: 0 · Boosts: 0 · Views: 175 · Activity: 3w
Potential problem when projecting a massive 3D entity
Hi fellows, I am developing a conceptual prototype to explore how Apple Vision Pro can be applied to real-estate sales, but I have run into a problem and would like to consult the developer community. When rendering a massive 3D entity, sometimes it works fine, but sometimes my Vision Pro automatically reboots while running the app (all running apps shut down and then the Apple logo appears). I have a 1:1 scale 3D entity imported into Reality Composer Pro, and I managed to develop a simple Vision Pro app that renders it. Once I finally got it working, it was great to see the 1:1 building in front of me. You can see the full demo video for details: https://www.icloud.com/iclouddrive/0d5QA-sYSehmLF9rEMXsCUyDg#REPtt02_Demo_v1.1 But sometimes when rendering the 1:1 building, my Vision Pro suddenly reboots. So far I have no clues about the root cause. Might it be caused by running out of memory, or by a safety mechanism (since the 3D entity is big and might block a large area of the view)? Can anyone, or someone official, provide some guidance and help identify the root cause? Thanks very much!
Replies: 0 · Boosts: 0 · Views: 201 · Activity: 3w
Add a modifier to a single model in the Reality Composer Pro scene
We can add many models to a Reality Composer Pro scene, but when I use RealityView to display the scene and add modifiers in SwiftUI, the modifiers take effect on the whole view rather than on one model, which is not what I want. I would like the modifier to apply only to a single model in the Reality Composer Pro scene. How can I add modifiers to a single model in the Reality Composer Pro scene?
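If the goal is, for example, a gesture that should affect only one entity rather than the whole RealityView, one option is to target the gesture at that entity. A hedged sketch, assuming an RCP scene named "Scene" containing a model named "Robot" in the usual RealityKitContent bundle (all names and sizes are placeholders):

import SwiftUI
import RealityKit
import RealityKitContent

struct SingleModelGestureView: View {
    @State private var robot: Entity?

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
                robot = scene.findEntity(named: "Robot")
                // The targeted entity needs input-target and collision components;
                // both can also be added directly in Reality Composer Pro.
                robot?.components.set(InputTargetComponent())
                robot?.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.3, 0.3])]))  // placeholder size
            }
        }
        .gesture(
            TapGesture()
                .targetedToEntity(robot ?? Entity())   // only this model reacts
                .onEnded { _ in
                    robot?.scale *= 1.2
                }
        )
    }
}

Scene-level SwiftUI view modifiers generally apply to the whole RealityView; anything per-model usually has to go through the entity itself (components, transforms, or targeted gestures like the one above).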
Replies: 2 · Boosts: 0 · Views: 262 · Activity: 3w
Apply a SpriteKit scene as a material in Reality Composer Pro
Hello, I’m trying to move my app to visionOS. My app is used by pilots to study airplane systems; it is a 3D airplane cockpit built with SceneKit, and I use SpriteKit scenes to animate the cockpit instruments. SceneKit allows applying a SpriteKit scene as a material, so I could easily animate all the different instruments and indications there, but I can’t find this option in Reality Composer Pro. Is this possible? Any suggestions I can look into to animate and simulate the instruments?
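Reality Composer Pro / RealityKit has no direct equivalent of SceneKit's SKScene-as-material. One hedged workaround sketch is to snapshot the SpriteKit instrument scene into a texture and reapply it to the instrument's material at some refresh rate; whether SKView snapshotting is the best vehicle on visionOS is an assumption to verify (TextureResource.DrawableQueue is the lower-level alternative). All names below are illustrative:

import RealityKit
import SpriteKit

final class InstrumentTextureUpdater {
    private let skView = SKView(frame: CGRect(x: 0, y: 0, width: 512, height: 512))
    private let scene: SKScene

    init(scene: SKScene) {
        self.scene = scene
        skView.presentScene(scene)
    }

    /// Re-renders the SpriteKit scene and pushes the result onto the entity's material.
    /// Call this at whatever refresh rate the instrument needs.
    func update(on instrumentEntity: ModelEntity) {
        guard let skTexture = skView.texture(from: scene),
              let texture = try? TextureResource.generate(from: skTexture.cgImage(),
                                                          options: .init(semantic: .color))
        else { return }

        var material = UnlitMaterial()
        material.color = .init(texture: .init(texture))
        instrumentEntity.model?.materials = [material]
    }
}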
Replies: 2 · Boosts: 0 · Views: 247 · Activity: 3w
Blender to Reality Composer Pro 2.0 to SwiftUI + RealityKit visionOS Best Practices
Hi, I'm very new to 3D and am currently porting a SwiftUI iOS app to visionOS 2.0. I saw WWDC24 feature Blender in multiple spatial videos, and have begun integrating Blender models and animations into my visionOS app (I would also like to integrate skeletons and programmatic rigging, more on that later). I'm wondering if there are “Best Practices” for this workflow - from Blender to USD to RCP 2.0 to visionOS 2 in Xcode. I’ve cobbled together the following, which has some obvious holes:

I’ve been able to find some pre-rigged and pre-animated models online that can serve as a great starting point. As a reference, here is a free model from SketchFab - a simple rigged skeleton with 6 built-in animations: https://sketchfab.com/3d-models/skeleton-character-low-poly-8856e0138f424d68a8e0b40e185951f6

When exporting to USD from Blender, I haven’t been able to export more than one animation per USD file. Is there a workflow to export multiple animations in a single USDC file, or is this just not possible? As a temporary workaround, here is a Python script I’ve been using to loop through all Blender animations and export a model for each animation:

import bpy
import os

# Set the directory where you want to save the USD files
output_directory = "/path/to/export"

# Ensure the directory exists
if not os.path.exists(output_directory):
    os.makedirs(output_directory)

# Function to export the current scene as USD
def export_scene_as_usd(output_path, start_frame, end_frame):
    bpy.context.scene.frame_start = start_frame
    bpy.context.scene.frame_end = end_frame

    # Export the scene as a USD file
    bpy.ops.wm.usd_export(
        filepath=output_path,
        export_animation=True
    )

# Save the current scene name
original_scene = bpy.context.scene.name

# Iterate through each action and export it as a USD file
for action in bpy.data.actions:
    # Create a new scene for each action
    bpy.context.window.scene = bpy.data.scenes[original_scene].copy()
    new_scene = bpy.context.scene

    # Link the action to all relevant objects
    for obj in new_scene.objects:
        if obj.animation_data is not None:
            obj.animation_data.action = action

    # Determine the frame range for the action
    start_frame, end_frame = action.frame_range

    # Export the scene as a USD file
    output_path = os.path.join(output_directory, f"{action.name}.usdc")
    export_scene_as_usd(output_path, int(start_frame), int(end_frame))

    # Delete the temporary scene to free memory
    bpy.data.scenes.remove(new_scene)

print("Export completed.")

I have also been able to successfully export rigging armatures as a single skeleton - each “bone” gets imported into Reality Composer Pro 2.0 when exporting/importing manually.

I would like to have all of these animations available in a single scene to be used in a RealityView in visionOS - so I have placed all animation models in an RCP scene and created named Timeline Action animations for each, showing the correct model and hiding the rest when triggering specific animations. I apply materials/textures to each using Shader Graph so they appear the same. Then in SwiftUI I use notifications (as shown here - https://forums.developer.apple.com/forums/thread/756978) to trigger each RCP Timeline Action animation from code.

Two questions:

Is there a better way than having multiple models of the same skeleton - each with a different animation - in a scene in order to trigger multiple animations? Or would this require recreating the Blender animations using skeleton rigging and keyframes from within RCP Timelines?
If I want to programmatically create custom animations and move parts of the skeleton/armature, do I need to do this by defining custom components in RCP, using IKRig, and defining the movement of each of the “bones” in Xcode? I’m looking for any tips/tricks/workflow from experienced engineers or 3D artists that can lead to a more efficient/optimized workflow using Blender, USD, RCP 2, and visionOS 2 with SwiftUI. Thanks so much, I appreciate any help! I am very excited about all the new tools that keep evolving to make spatial apps really fun to build!
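On the first question: if the export problem can be solved so that one USD (or one RCP entity) carries several clips, RealityKit can play them by name from a single model instead of swapping between per-animation copies. A small hedged sketch, assuming the entity actually exposes multiple clips via availableAnimations (clip names are whatever Blender exported):

import RealityKit

func play(animationNamed name: String, on skeleton: Entity) {
    // Look up a clip by name among the animations baked into the entity.
    if let clip = skeleton.availableAnimations.first(where: { $0.name == name }) {
        skeleton.playAnimation(clip, transitionDuration: 0.3, startsPaused: false)
    }
}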
Replies: 1 · Boosts: 2 · Views: 244 · Activity: 2w
visionOS Object Capture: get entity position
Hello :) As the title says, I have used RCP with reference objects to capture items in the real world. My next step is to detect how close the user's finger is to that object. I tried to get the entity's position relative to the root, but found that the position is, somehow, always the same regardless of how and where I move the camera or the object. The entity has a child transform with a collision component, which is used to detect a collision when the finger is close enough to calculate the distance, but that fails as well... Any help will be appreciated, thank you.
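One thing worth checking, sketched below: compare world-space positions rather than each entity's local transform, since an anchored entity's transform relative to its own parent can legitimately stay constant while the anchor itself moves. The finger-tip entity is a placeholder for whatever hand-tracking source is used (for example an AnchorEntity(.hand(.right, location: .indexFingerTip))):

import RealityKit
import simd

func distanceToTrackedObject(from fingerTip: Entity, to trackedObject: Entity) -> Float {
    // position(relativeTo: nil) returns the position in world/scene space.
    let fingerWorld = fingerTip.position(relativeTo: nil)
    let objectWorld = trackedObject.position(relativeTo: nil)
    return simd_distance(fingerWorld, objectWorld)
}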
Replies: 1 · Boosts: 0 · Views: 183 · Activity: 1w