Reality Composer Pro

Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.

Posts under Reality Composer Pro tag

200 Posts

[visionOS] Entity spawned at runtime won't respond to my gesture
I'm creating an immersive experience with RealityView (think Fruit Ninja). Say I have fruits that are randomly generated, according to certain criteria, inside a System's update function, and I want to interact with these generated fruits using hand gestures. It simply doesn't work: the gesture's onChanged closure never fires as I expected. I added both an InputTargetComponent and a CollisionComponent so the entities can be detected in the immersive view. Everything works fine if I set the fruits up in the scene with Reality Composer Pro before the app runs.

Here is what I did. First I load the fruit template:

    let tempScene = try await Entity(named: "fruitPrefab.usda", in: realityKitContentBundle)
    fruitTemplate = tempScene.findEntity(named: "fruitPrefab")

Then I clone it during the System.update(context) call. parent is an invisible entity placed at .zero in my loaded immersive scene:

    let fruitClone = fruitTemplate!.clone(recursive: true)
    fruitClone.position = pos
    fruitClone.scale = scale
    parent.addChild(fruitClone)

I attached my gesture to the RealityView:

    .gesture(
        DragGesture(minimumDistance: 0.0)
            .targetedToAnyEntity()
            .onChanged { value in
                print("dragging")
            }
            .onEnded { tapEnd in
                print("dragging ends")
            }
    )

I wondered whether the runtime-generated entities simply aren't tracked by the RealityView, but since I add each one as a child of a placeholder entity that is already in the scene, that should be fine... right? Or do I just need to add a new AnchorEntity? Thanks for any advice in advance; I've been trying to figure this out all day.
2 Replies · 1 Boost · 718 Views · Jan ’24
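A minimal sketch of one thing worth checking for the post above, not a confirmed fix: .targetedToAnyEntity() gestures only resolve to entities that carry both an InputTargetComponent and a CollisionComponent, and in a prefab those components may sit on a child rather than on the entity returned by findEntity(named:). The helper below is hypothetical; the shape radius and the choice to put the components on the clone's root are assumptions.

    import RealityKit

    // Hypothetical helper: clone the prefab and make sure the clone's root is hit-testable.
    // `template`, `position`, and `parent` correspond to fruitTemplate, pos, and parent
    // in the question above.
    @MainActor
    func spawnFruit(from template: Entity, at position: SIMD3<Float>, under parent: Entity) {
        let clone = template.clone(recursive: true)
        clone.position = position

        // Gestures with .targetedToAnyEntity() only hit entities that have both
        // an InputTargetComponent and a CollisionComponent with collision shapes.
        clone.components.set(InputTargetComponent())
        clone.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.05)])) // assumed size

        parent.addChild(clone)
    }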
How can I make a toggle button to hide or display specific entities within an immersive space?
How would I make some simple toggle buttons to hide or show specific entities within a scene created in Reality Composer Pro? I'd imagine that within Reality Composer Pro all the entities would already be in place, and then from Xcode I would be turning them on or off. Additionally, I'm curious how I would go about swapping out colors / materials for specific entities.
1 Reply · 0 Boosts · 418 Views · Oct ’23
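A minimal sketch of one common approach to the question above, assuming a default Reality Composer Pro template (a scene named "Scene" in realityKitContentBundle) and a hypothetical entity named "Lamp": toggling isEnabled hides or shows an entity authored in RCP, and replacing the materials array of its ModelComponent swaps the material.

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ToggleEntitiesView: View {
        @State private var showLamp = true
        @State private var scene: Entity?

        var body: some View {
            VStack {
                RealityView { content in
                    // "Scene" and realityKitContentBundle follow the default RCP template.
                    if let loaded = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        scene = loaded
                        content.add(loaded)
                    }
                }

                Toggle("Show Lamp", isOn: $showLamp)
                    .toggleStyle(.button)
                    .onChange(of: showLamp) { _, isOn in
                        // Hiding/showing: flip isEnabled on the entity authored in RCP.
                        scene?.findEntity(named: "Lamp")?.isEnabled = isOn
                    }

                Button("Make the lamp red") {
                    // Swapping materials: rewrite the ModelComponent's materials array.
                    if let lamp = scene?.findEntity(named: "Lamp"),
                       var model = lamp.components[ModelComponent.self] {
                        model.materials = [SimpleMaterial(color: .red, isMetallic: false)]
                        lamp.components.set(model)
                    }
                }
            }
        }
    }

One caveat: if the named entity is a group, the ModelComponent may live on one of its children rather than on the entity itself, so the material swap may need to target a descendant.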
Convert Blender to .usdz
I have a Blender project, for simplicity a black hole. It is modeled as a sphere on top of a round plane, with a bunch of effects on top of that. I have tried multiple ways: converting to USD from the File menu, and converting to OBJ and then importing. But all of them have resulted in just the body, without any of the effects. Does anybody know how to do this properly? I seem to have no clue, except for going through Reality Converter Pro (which I planned on going through already, but then I'd be modeling it there).
0 Replies · 1 Boost · 442 Views · Oct ’23
Cross-post with AOUSD forum: Autoplay stage metadata not being acknowledged by RealityView (RCP bundle)
The Apple documentation seems to say RealityKit should obey the autoplay metadata, but it doesn't seem to work. Is the problem with my (hand-coded) USDA files, the Swift, or something else? Thanks in advance. I can make the animations run with an explicit call, but what have I done wrong that keeps the one cube from autoplaying? https://github.com/carlynorama/ExploreVisionPro_AnimationTests

    import SwiftUI
    import RealityKit
    import RealityKitContent

    struct ContentView: View {
        @State var enlarge = false

        var body: some View {
            VStack {
                // A ModelEntity, not expected to autoplay
                Model3D(named: "cube_purple_autoplay", bundle: realityKitContentBundle)

                // An Entity, actually expected this to autoplay
                RealityView { content in
                    if let cube = try? await Entity(named: "cube_purple_autoplay", in: realityKitContentBundle) {
                        print(cube.components)
                        content.add(cube)
                    }
                }

                // Scene has one cube that should autoplay, one that should not.
                // Neither does, but both will start (as expected) with a click.
                RealityView { content in
                    // Add the initial RealityKit content
                    if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                        content.add(scene)
                    }
                } update: { content in
                    // Update the RealityKit content when SwiftUI state changes
                    if let scene = content.entities.first {
                        if enlarge {
                            for animation in scene.availableAnimations {
                                scene.playAnimation(animation.repeat())
                            }
                        } else {
                            scene.stopAllAnimations()
                        }
                        let uniformScale: Float = enlarge ? 1.4 : 1.0
                        scene.transform.scale = [uniformScale, uniformScale, uniformScale]
                    }
                }
                .gesture(TapGesture().targetedToAnyEntity().onEnded { _ in
                    enlarge.toggle()
                })

                VStack {
                    Toggle("Enlarge RealityView Content", isOn: $enlarge)
                        .toggleStyle(.button)
                }.padding().glassBackgroundEffect()
            }
        }
    }

No autoplay metadata:

    #usda 1.0
    (
        defaultPrim = "transformAnimation"
        endTimeCode = 89
        startTimeCode = 0
        timeCodesPerSecond = 24
        upAxis = "Y"
    )

    def Xform "transformAnimation" ()
    {
        def Scope "Geom"
        {
            def Xform "xform1"
            {
                float xformOp:rotateY.timeSamples = { ... }
                double3 xformOp:translate = (0, 0, 0)
                uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]

                over "cube_1" (
                    prepend references = @./cube_base_with_purple_linked.usd@
                )
                {
                    double3 xformOp:translate = (0, 0, 0)
                    uniform token[] xformOpOrder = ["xformOp:translate"]
                }
            }
        }
    }

With autoplay metadata:

    #usda 1.0
    (
        defaultPrim = "autoAnimation"
        endTimeCode = 89
        startTimeCode = 0
        timeCodesPerSecond = 24
        autoPlay = true
        playbackMode = "loop"
        upAxis = "Y"
    )

    def Xform "autoAnimation"
    {
        def Scope "Geom"
        {
            def Xform "xform1"
            {
                float xformOp:rotateY.timeSamples = { ... }
                double3 xformOp:translate = (0, 0, 0)
                uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateY"]

                over "cube_1" (
                    prepend references = @./cube_base_with_purple_linked.usd@
                )
                {
                    quatf xformOp:orient = (1, 0, 0, 0)
                    float3 xformOp:scale = (2, 2, 2)
                    double3 xformOp:translate = (0, 0, 0)
                    uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:orient", "xformOp:scale"]
                }
            }
        }
    }
0 Replies · 0 Boosts · 380 Views · Oct ’23
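A possible workaround sketch for the post above, not a fix for the autoplay metadata itself: explicitly start every available animation right after the entity loads, which is what autoPlay would otherwise do. The helper name is hypothetical, and .repeat() stands in for playbackMode = "loop".

    import RealityKit

    // Hypothetical helper: call right after Entity(named:in:) succeeds.
    @MainActor
    func startAllAnimations(on entity: Entity) {
        for animation in entity.availableAnimations {
            // .repeat() mirrors playbackMode = "loop" in the USDA layer metadata.
            entity.playAnimation(animation.repeat(), transitionDuration: 0, startsPaused: false)
        }
    }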
Can I "experience" my AR project from Reality Composer Pro on an iPad, instead of the Vision Pro?
Hi all, I don't have a Vision Pro (yet), and I'm wondering if it's possible to preview my Reality Composer Pro project in AR using an iPad Pro or one of the latest iPhones? I'm also interested in teaching others: I'm a college professor, and I don't believe that my students have Vision Pros either. I could always use the iOS version, as I have done in the past, but the Pro version is much more capable and it would be great to be able to use it. Thanks for any comments on this!
2 Replies · 0 Boosts · 388 Views · Oct ’23
Change in behaviour on Sonoma with IPv6 traffic blocked by network extension
On Ventura, we have a network extension (transparent proxy) which blocks IPv6 traffic as below:

    override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
        // IPv6 gets blocked by the code below
        let error = NSError(domain: "", code: 0, userInfo: [NSLocalizedDescriptionKey: "Connection Refused"])
        flow.closeReadWithError(error)
        flow.closeWriteWithError(error)

On an IPv6-enabled client machine, when a client application (browser, curl, Teams, etc.) tries to send HTTP/S requests, it first tries the request over IPv6 and, if that fails, tries IPv4 (the Happy Eyeballs algorithm). In our case, because the network extension blocks IPv6 traffic, client applications fail to establish a connection over IPv6 and fall back to IPv4, as per Happy Eyeballs.

The above scenario works fine up through macOS Ventura. On Sonoma, this behaviour seems to have changed: when our network extension blocks IPv6 traffic, client applications do not fall back to IPv4. They simply fail without trying IPv4. We tested with curl, Google Chrome, and Microsoft Teams. All of these fail to load pages on Sonoma, and they work fine on Ventura.

Note: there is no change in our network extension code, or in the curl and browser versions. The only change is the macOS version.

Please find attached screenshots of running curl on Ventura and on Sonoma. One other difference is the error code received by the client applications. On Ventura, when IPv6 is blocked, the error is "Network is down" and the client application establishes the connection over IPv4. On Sonoma, the error code is 22: "Invalid argument", and the client application does not retry with IPv4.

Curl_Ventura.jpg Curl_Sonoma.png
3 Replies · 0 Boosts · 1k Views · Oct ’23
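A hedged sketch of one variable worth isolating for the post above: the errno the flow is closed with. This is an assumption, not a confirmed explanation of the Sonoma change; the idea is that a client's Happy Eyeballs fallback may depend on seeing a network-level POSIX error such as ENETDOWN rather than a generic error, so closing the flow with an explicit POSIX error is one thing to test. The provider subclass name is assumed.

    import NetworkExtension

    class TransparentProxyProvider: NETransparentProxyProvider {
        override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
            // Hypothetical variant: close the flow with an explicit POSIX errno
            // (ENETDOWN, "Network is down") instead of a generic NSError, to see
            // whether the surfaced errno is what drives the IPv4 fallback on Sonoma.
            let error = NSError(domain: NSPOSIXErrorDomain,
                                code: Int(ENETDOWN),
                                userInfo: [NSLocalizedDescriptionKey: "Connection refused"])
            flow.closeReadWithError(error)
            flow.closeWriteWithError(error)
            return true // claim the flow so the system does not handle it
        }
    }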
Transitioning from SceneKit to RealityKit - shadows and custom shaders
We have a content creation application that uses SceneKit for rendering. In our application, we have a 3D view (non-AR) and an AR "mode" the user can go into. Currently we use an SCNView and an ARSCNView to achieve this. Our application targets iOS and macOS (with AR only on iOS).

With visionOS on the horizon, we're trying to bring the tech stack up to date, as SceneKit no longer seems to be actively supported and isn't supported at all on visionOS. We'd like to use RealityKit for 3D rendering on all platforms: macOS, iOS, and visionOS, in non-AR and AR mode where appropriate. So far this hasn't been too difficult. The greatest challenge has been adding gesture support to replace the allowsCameraControl option on the SCNView, as there is no such option on ARView.

However, now that we get to controlling shading, we're hitting a bit of a roadblock. When viewing the scene in non-AR mode, we would like to add a ground plane underneath the object that only displays a shadow; in other words, its opacity would be determined by the light contribution. I've had a dig through the CustomMaterial API and it seems extremely primitive: there doesn't seem to be any way to get light information for a particular fragment, unless I'm missing something?

Additionally, we support a custom shader that we can apply as a material. This custom shader allows the properties of the material to vary depending on the light contribution, light incidence angle, etc. Looking at CustomMaterial, the API seems to be about defining a custom material, whereas we guess what we want is to customise the BRDF calculation. We achieve this in SceneKit using a series of shader modifiers hooked into the various SCNShaderModifierEntryPoint entry points.

On visionOS, of course, the lack of support for CustomMaterial is a shame, but I would hope something similar can be achieved with Reality Composer Pro? We can live with the lack of custom materials, but the shadow catcher is a killer for adoption for us. I'd even accept a different, more limited feature set on visionOS, as long as we can match our existing feature set on the existing platforms. What am I missing?
1 Reply · 1 Boost · 835 Views · Oct ’23
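A hedged sketch for the non-AR shadow catcher described above, targeting an iOS/macOS ARView rather than visionOS. It relies on OcclusionMaterial acting as a shadow receiver under a shadow-casting DirectionalLight; that behaviour is an assumption worth verifying on your target OS versions, and this is not a substitute for per-fragment light access or custom BRDFs.

    import RealityKit
    import UIKit  // iOS colors; use AppKit/NSColor on macOS

    // Sketch: an invisible ground plane that only shows the shadow cast by the box.
    func makeShadowCatcherScene(in arView: ARView) {
        let anchor = AnchorEntity(world: .zero)

        // Ground plane that draws nothing itself but (assumption) catches shadows.
        let ground = ModelEntity(
            mesh: .generatePlane(width: 2, depth: 2),
            materials: [OcclusionMaterial()]
        )
        anchor.addChild(ground)

        // A light configured to cast shadows onto the plane.
        let light = DirectionalLight()
        light.light.intensity = 5000
        light.shadow = DirectionalLightComponent.Shadow()
        light.look(at: .zero, from: [1, 2, 1], relativeTo: nil)
        anchor.addChild(light)

        // Example object floating above the plane.
        let box = ModelEntity(mesh: .generateBox(size: 0.3),
                              materials: [SimpleMaterial(color: .systemBlue, isMetallic: false)])
        box.position.y = 0.5
        anchor.addChild(box)

        arView.scene.addAnchor(anchor)
    }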
How to set a size for an entity that is built from a 3D model?
Hello everyone, I'm facing a challenge related to resizing an entity built from a 3D model. Although I can manipulate the size of the mesh, the entity's overall dimensions seem to remain static and unchangeable. Here's a snippet of my code:

    let giftEntity = try await Entity(named: "gift")

I've come across the scale property, which allows for scaling the entity. However, I'm uncertain about the appropriate value to use, especially since the RealityView is encapsulated within an HStack, which is further nested inside a ScrollView. Would anyone have experience or guidance on this matter? Any recommendations or resources would be invaluable. Thank you in advance for your assistance!
1 Reply · 0 Boosts · 848 Views · Oct ’23
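A minimal sketch for the question above, assuming the goal is "make the model roughly N metres on its longest side": measure the entity's visual bounds and derive a uniform scale from them. The helper name and target size are hypothetical.

    import RealityKit

    // Scale an entity so its longest side is approximately targetSize metres.
    @MainActor
    func fit(_ entity: Entity, toLongestSide targetSize: Float) {
        let bounds = entity.visualBounds(relativeTo: nil)
        let longestSide = max(bounds.extents.x, bounds.extents.y, bounds.extents.z)
        guard longestSide > 0 else { return }
        entity.scale *= SIMD3<Float>(repeating: targetSize / longestSide)
    }

    // Usage (giftEntity as in the question):
    // let giftEntity = try await Entity(named: "gift")
    // fit(giftEntity, toLongestSide: 0.3)   // roughly 30 cm across its longest side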
How do I load animated USDZs via Reality Composer Pro versus just doing it in code with raw files?
Creating an Entity and then changing between different animations (walk, run, jump, etc.) is pretty straightforward when you have individual USDZ files in your Xcode project and simply create an array of AnimationResource. However, I'm trying to do it via the new Reality Composer Pro app, because the docs state it's much more efficient than individual files, but I'm having a heck of a time figuring out exactly how to do it. Do you have one scene per USDZ file (and does that erase any advantage over just loading individual files)? One scene with multiple entities? Something else altogether?

If I try one scene with multiple entities within it, when I try to change animation I always get "Cannot find a BindPoint for any bind path" logged in the console, and the animation never actually occurs. This is with the same files that animate perfectly when I just create an array of AnimationResource manually from the individual/raw USDZ files. Anyone have any experience doing this?
0 Replies · 1 Boost · 738 Views · Oct ’23
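A sketch under assumptions for the post above: a single Reality Composer Pro scene named "Animations" containing one child entity per clip ("walk", "run", and so on); both names are hypothetical. One common cause of the bind-path error is playing a clip on an entity whose hierarchy does not match the one the clip was authored against, so here each clip is played on the entity it was loaded with.

    import RealityKit
    import RealityKitContent

    // Play the first animation found under the named child of an RCP scene.
    @MainActor
    func playClip(named clipEntityName: String, in scene: Entity) {
        guard let clipEntity = scene.findEntity(named: clipEntityName),
              let animation = clipEntity.availableAnimations.first else {
            print("No animation found under \(clipEntityName)")
            return
        }
        clipEntity.playAnimation(animation.repeat())
    }

    // Usage:
    // let scene = try await Entity(named: "Animations", in: realityKitContentBundle)
    // playClip(named: "walk", in: scene)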
Load Reality Composer Pro scenes in an iOS AR app
Hey everybody, I am quite new to developing on iOS, specifically in the AR space, and I have been struggling through the documentation and can't find an answer for loading Reality Composer Pro scenes into an iOS app. There is a good amount of documentation on loading them into a visionOS app, but I haven't found it entirely applicable. In the code block below I have been able to get my Reality Composer scene loaded, but I want the added functionality of Reality Composer Pro when developing my scenes and can't figure out how to get those to show up. How would I edit this code to load my Reality Composer Pro scene? My Reality Composer Pro project came over to Xcode as Package.realitycomposerpro when I dragged and dropped it in, but I don't know how I'd access a scene in it, and the specific objects in that scene, for iOS use. Thanks in advance!

    import SwiftUI
    import RealityKit

    struct ContentView: View {
        var body: some View {
            ARViewContainer().edgesIgnoringSafeArea(.all)
        }
    }

    struct ARViewContainer: UIViewRepresentable {
        func loadRealityComposerScene(filename: String, fileExtension: String, sceneName: String) -> (Entity & HasAnchoring)? {
            guard let realitySceneURL = Bundle.main.url(forResource: filename, withExtension: fileExtension) else {
                return nil
            }
            let loadedAnchor = try? Entity.loadAnchor(contentsOf: realitySceneURL)
            return loadedAnchor
        }

        func makeUIView(context: Context) -> ARView {
            let arView = ARView(frame: .zero)

            // Load the AR scene from ACT2.reality
            guard let anchor = loadRealityComposerScene(filename: "ACT2", fileExtension: "reality", sceneName: "Scene1") else {
                print("Failed to load the anchor from ACT2.reality")
                return arView
            }
            arView.scene.addAnchor(anchor)

            // Visualize collisions for debugging
            arView.debugOptions.insert(.showPhysics)

            return arView
        }

        func updateUIView(_ uiView: ARView, context: Context) {}
    }

    #Preview {
        ContentView()
    }
0 Replies · 1 Boost · 482 Views · Oct ’23
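A hedged sketch of one way to approach the post above: add the Reality Composer Pro Swift package as a dependency of the iOS target and load its scene through the bundle the package exposes. The module name RealityKitContent, the bundle symbol realityKitContentBundle, and the scene name "Scene" follow the default RCP template and are assumptions; loading from the package bundle with Entity(named:in:) requires iOS 17 or later.

    import SwiftUI
    import RealityKit
    import RealityKitContent  // assumed module name of your RCP Swift package

    struct RCPARViewContainer: UIViewRepresentable {
        func makeUIView(context: Context) -> ARView {
            let arView = ARView(frame: .zero)

            // Load a scene named "Scene" from the package's bundle and anchor it
            // to a horizontal plane.
            Task { @MainActor in
                if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                    let anchor = AnchorEntity(plane: .horizontal)
                    anchor.addChild(scene)
                    arView.scene.addAnchor(anchor)
                }
            }
            return arView
        }

        func updateUIView(_ uiView: ARView, context: Context) {}
    }

Individual objects inside the loaded scene can then be looked up with findEntity(named:), using the names shown in the Reality Composer Pro hierarchy.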
Getting Child ModelEntity from Reality Composer Pro
Hi, I have a file in Reality Composer Pro that has a deep hierarchy. I downloaded it from an asset store, so I don't know how it is built. As you can see from the screenshot, I'm trying to access the banana and banana_whole entities as ModelEntity, but I'm not able to load them as ModelEntity in Xcode. I can load them as Entity and show them in the visionOS simulator, but not as ModelEntity, which I need in order to perform some operations. What should I do?
2 Replies · 0 Boosts · 624 Views · Oct ’23
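A sketch of one possible explanation and workaround for the post above, under an assumption: in assets with deep hierarchies, the mesh often lives on a descendant that is a ModelEntity (or at least carries a ModelComponent), not on the named group entity itself, so casting the named entity directly fails. The helper below walks the subtree; entity names in the usage comment come from the question, the scene name is assumed.

    import RealityKit
    import RealityKitContent

    // Return the first ModelEntity found at or below the given entity.
    @MainActor
    func firstModelEntity(under entity: Entity) -> ModelEntity? {
        if let model = entity as? ModelEntity { return model }
        for child in entity.children {
            if let found = firstModelEntity(under: child) { return found }
        }
        return nil
    }

    // Usage:
    // let scene = try await Entity(named: "Scene", in: realityKitContentBundle)
    // if let banana = scene.findEntity(named: "banana"),
    //    let bananaModel = firstModelEntity(under: banana) {
    //     // bananaModel.model is the ModelComponent (mesh + materials) to operate on
    // }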