Reality Composer Pro


Leverage the all-new Reality Composer Pro, designed to make it easy to preview and prepare 3D content for your visionOS apps.

Reality Composer Pro Documentation

Posts under Reality Composer Pro tag

195 Posts
Post not yet marked as solved
1 Reply
215 Views
I have a stereoscopic plane, so it presents depth to the user beyond the plane itself. I would like the option either to render depth values for those pixels or to write nothing into the depth buffer for the plane at all. I cannot see any option in a Shader Graph material to affect the depth buffer during rendering, and I also cannot see any way in RealityKit to keep an entity from rendering to the depth buffer. I'm open to any suggestions.
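There is no documented depth-write switch in Shader Graph materials, but RealityKit's ModelSortGroupComponent can at least change when an entity is drawn relative to others. A minimal sketch, assuming draw-order control (not a true depth-write toggle) is acceptable as a partial workaround:

```swift
import RealityKit

// Sketch: put an entity into an explicit sort group so it is drawn after
// other content. This controls draw order, not depth-buffer writes, so it
// only partially addresses the stereoscopic-plane case described above.
func drawLast(_ entity: Entity) {
    let group = ModelSortGroup(depthPass: .postPass)  // defer the depth pass
    entity.components.set(ModelSortGroupComponent(group: group, order: 1))
}
```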
Posted Last updated
.
Post not yet marked as solved
1 Reply
53 Views
```swift
import Combine
import RealityKit

extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished:
                    break
                case .failure(let error):
                    assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}
```

The problem: the `case .failure(let error): assertionFailure("\(error)")` branch fires with:

Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
Posted
by big_white.
Last updated
.
Post not yet marked as solved
2 Replies
140 Views
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (a ModelEntity) when I hover over it with my eyes. What should I do if I want to create a feature request for this? And how would I know whether it will ever be considered, or when it will ship?
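For reference, visionOS already ships a system gaze highlight via HoverEffectComponent; it does not support arbitrary scale/texture/color changes, which is what the request above asks for. A minimal sketch of the built-in effect:

```swift
import RealityKit

// Enable the system-provided highlight that appears when the user looks at
// an entity. Collision and input-target components are required so the
// entity can be hit-tested by gaze.
func enableGazeHighlight(on entity: ModelEntity) {
    entity.generateCollisionShapes(recursive: true)
    entity.components.set(InputTargetComponent())
    entity.components.set(HoverEffectComponent())
}
```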
Posted Last updated
.
Post not yet marked as solved
1 Reply
182 Views
Transparency in RealityKit is not rendered properly when viewed from certain ordinal axes. It looks like a depth-sorting issue, where some transparent surfaces are rejected when they should not be. Some view directions relative to specific ordinal axes are fine, and I have not yet narrowed down which specific axis is the problem. This happens with particle systems and meshes alike, and it is very easy to reproduce using multiple transparent meshes or particle systems. In the first GIF you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, yet they are correctly sorted in front of opaque materials such as the rocks and wood. The second GIF shows two back-to-back grid meshes (since double-sided rendering is not supported) that use a custom surface shader to animate the mesh in a wave and also apply transparency. In the distance the transparent overlap renders correctly, but as the overlap approaches the camera (and crosses an ordinal axis) the transparent portion of the surface renders black, where the green of the mesh behind should show through. This is a blocking problem for the development of this demo.
Posted
by rngd.
Last updated
.
Post not yet marked as solved
1 Reply
137 Views
I'm trying to build a project with a moderately complex Reality Composer Pro project, but I'm unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory. I'm wondering if there are any known memory leaks in realitytool; the tool is taking 20-30GB (!) of memory during builds. I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini: it was supposed to be just a build machine for Apple Silicon compatibility, as my Pro is Intel. But I'm kind of stuck here. I have a scene that builds fine, but any time I add a USD with lots of instances or a lot of geometry (in this case a tree asset), I run into the memory issue. I've tried greatly simplifying the model, but even a 2MB USD results in the crash. I fail to see how adding a 2MB asset could cause realitytool's memory to balloon so much during builds. If someone from Apple is willing to look, I can provide the scene, but it's proprietary, so I can't just post it publicly here.
Posted
by Gregory_w.
Last updated
.
Post not yet marked as solved
0 Replies
141 Views
It seems that visionOS doesn't yet support blend shapes, so I've created a character with body animations and skeletal mouth animations for the vowels; all animations are in USDZ format in Reality Composer Pro. I would like to know if there is any way to play a body animation and simultaneously play multiple mouth animations one after the other without stopping the body animation. When I trigger the mouth animations with blendLayerOffset 1, the body animation pauses, and it only resumes once the mouth animations finish. That is not what I want: the body animation should continue while the mouth animations play. Thank you!
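A sketch of the layering described above, assuming the clips have been extracted as AnimationResources (the `bodyClip` and `mouthClips` names are placeholders); whether the two layers blend rather than pause depends on the rig, which is exactly what the question is about:

```swift
import RealityKit

// Loop the body animation on blend layer 0 and play the mouth clips
// back-to-back on layer 1. sequence(with:) chains the vowel clips so the
// higher layer is triggered once instead of once per clip.
func playSpeech(on character: Entity,
                bodyClip: AnimationResource,
                mouthClips: [AnimationResource]) throws {
    character.playAnimation(bodyClip.repeat(), blendLayerOffset: 0)
    let mouthSequence = try AnimationResource.sequence(with: mouthClips)
    character.playAnimation(mouthSequence,
                            blendLayerOffset: 1,
                            separateAnimatedValue: true)
}
```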
Posted
by LuisNery.
Last updated
.
Post not yet marked as solved
1 Reply
222 Views
I want to show progress for a certain part of the game using an entity that looks like a pie chart: basically a cylinder with a cut-out, where the entity fills in as progress goes from 0 to 100. Is there a way to create this kind of model entity? I know there are ways to animate entities and warp them between meshes, but I was wondering if somebody knows the simplest possible way to achieve this. Maybe some kind of custom shader that changes how the material is rendered? I don't need a physics body, just the visual. I know how to do this in UIKit and the classic 2D Apple UI frameworks, but working with model entities it gets a bit tricky for me. (My reference examples are 2D, but imagine them as 3D cylinders with a cut-out.) Thank you!
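One simple approach is to regenerate the mesh whenever progress changes. A sketch using MeshDescriptor for a flat pie sector (a real cylinder would add a second vertex ring and side walls; normals and winding are left out for brevity):

```swift
import Foundation
import RealityKit

// Build a flat pie-sector mesh covering `progress` (0...1) of a full
// circle, as a triangle fan around a center vertex.
func pieMesh(progress: Float, radius: Float = 0.1, segments: Int = 64) throws -> MeshResource {
    var positions: [SIMD3<Float>] = [.zero]   // fan center
    var indices: [UInt32] = []
    let sweep = progress * 2 * .pi
    let steps = max(2, Int(Float(segments) * progress))
    for i in 0...steps {
        let angle = sweep * Float(i) / Float(steps)
        positions.append([radius * cos(angle), 0, radius * sin(angle)])
        if i > 0 {
            // Triangle: center, previous rim vertex, current rim vertex.
            indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])
        }
    }
    var descriptor = MeshDescriptor(name: "pie")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}
```

Updating progress is then a matter of replacing the mesh on the model, e.g. `model.model?.mesh = try pieMesh(progress: 0.4)`.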
Posted
by darescore.
Last updated
.
Post marked as solved
1 Reply
159 Views
I set up an entity with a collision component on it, but it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure whether it is too large. Is there a way to visualize these components somehow, maybe even in a running scene? Also, I find it pretty confusing that the size is given in cm; that made me wonder whether this cm setting is affected by the entity's scale at all. In Unity, it's just (local) "units".
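On the cm question: Reality Composer Pro displays sizes in centimeters, while RealityKit's runtime units are meters, and collision shapes are scaled by the entity's transform like everything else. For eyeballing the shape in a running scene, a hedged sketch that mirrors a spherical collision shape with a translucent debug sphere (the color and name are arbitrary choices):

```swift
import RealityKit
import UIKit

// Add a translucent sphere matching a spherical collision shape so its
// size can be checked visually in a running scene. `radius` is in meters
// (RealityKit units) and is scaled by the entity's transform, just like
// the collision shape itself.
func addCollisionDebugSphere(to entity: Entity, radius: Float) {
    let material = UnlitMaterial(color: UIColor.green.withAlphaComponent(0.25))
    let debug = ModelEntity(mesh: .generateSphere(radius: radius),
                            materials: [material])
    debug.name = "collisionDebug"
    entity.addChild(debug)
}
```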
Posted
by waldgeist.
Last updated
.
Post not yet marked as solved
0 Replies
129 Views
I wanted to create a particle effect using particle images I copied from a Unity project. These images are PNGs with an alpha channel. In Unity they look gorgeous, but on visionOS they look rather weird, since the alpha channel is not respected: every pixel that is not pitch black renders as full white. Is there a way to change this behavior?
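If the emitter is configured in code, the particle blend mode is one thing to check. A minimal sketch, assuming ParticleEmitterComponent's `.alpha` blend mode behaves as its name suggests (untested against the white-out described above):

```swift
import RealityKit

// Configure a particle emitter to composite particles using their
// texture's alpha channel instead of an additive/opaque mode.
func addAlphaBlendedParticles(to entity: Entity) {
    var emitter = ParticleEmitterComponent()
    emitter.mainEmitter.blendMode = .alpha
    entity.components.set(emitter)
}
```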
Posted
by waldgeist.
Last updated
.
Post not yet marked as solved
3 Replies
866 Views
In my project I want to use the new ShaderGraphMaterial to do stereoscopic rendering, and I noticed there is a node called Camera Index Switch that can do this. But when I tried it, I found two problems:
1. It can only output an integer value; when I change it to float, it switches back again. I don't know if that is a bug.
2. When I test this node with an IF node, the output is weird. Where zero should be output it is black, but routed through the IF node it is grey, neither 0 nor 1 (my IF node outputs 1 for TRUE and 0 for FALSE).
I want to ask whether this is a bug, and whether this is the correct way to do stereoscopic rendering.
Posted
by bYsdTd.
Last updated
.
Post not yet marked as solved
4 Replies
203 Views
I am trying to implement a way to rotate a 3D model around its y axis, but this doesn't seem to work. What am I missing? The scene only contains one model entity.

```swift
@State private var rotateBy: Double = 0.0

RealityView { content in
    do {
        let entity = try await Entity(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView()
    }
}
.gesture(
    DragGesture(minimumDistance: 0.0)
        .targetedToAnyEntity()
        .onChanged { value in
            let location3d = value.convert(value.location3D, from: .local, to: .scene)
            let startLocation = value.convert(value.startLocation3D, from: .local, to: .scene)
            let delta = location3d - startLocation
            rotateBy = Double(atan(delta.x * 200))
        }
)
```
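One thing the snippet never does is write `rotateBy` back to the entity, so nothing on screen changes. A sketch of that missing step, assuming the loaded entity is kept reachable (e.g. in `@State`):

```swift
import RealityKit

// Apply an absolute rotation (in radians) around the y axis to an entity.
// Call this from the drag gesture's onChanged after updating rotateBy.
func applyYRotation(_ radians: Double, to entity: Entity) {
    entity.transform.rotation = simd_quatf(angle: Float(radians),
                                           axis: SIMD3<Float>(0, 1, 0))
}
```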
Posted
by mdkBsenA.
Last updated
.
Post not yet marked as solved
0 Replies
144 Views
Through testing, I have been able to get 5.1 and 7.1 Dolby Atmos files created in Logic Pro to work in Reality Composer Pro and then in Vision Pro. However, 5.1.4 and 7.1.4 files crash when added. Can someone confirm that these are not supported?
Posted
by stevenmc.
Last updated
.
Post not yet marked as solved
0 Replies
148 Views
Hello. I have a model of a CD record and box, and I would like to change its artwork via an external image URL. My 3D knowledge is limited, but what I can say is that the RealityView contains the USDZ of the record, which in turn contains multiple materials: ArtBack, ArtFront, PlasticBox, CD. How do I target an artwork material and change it to another image? Here is the code so far.

```swift
RealityView { content in
    do {
        let entity = try await Entity(named: "VinylScene", in: realityKitContentBundle)
        entity.scale = SIMD3<Float>(repeating: 0.6)
        content.add(entity)
    } catch {
        ProgressView()
    }
}
```
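A hedged sketch of one way to do this: download the image to a local file so it can become a TextureResource, then swap it into the model's materials. The entity name ("ArtFront") and the single-material assumption are guesses about this particular USDZ:

```swift
import Foundation
import RealityKit

// Download an image and assign it as the artwork material's texture.
func setArtwork(from url: URL, on root: Entity) async throws {
    // TextureResource.load(contentsOf:) wants a file URL, so download first.
    let (data, _) = try await URLSession.shared.data(from: url)
    let fileURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("artwork.png")
    try data.write(to: fileURL)

    let texture = try TextureResource.load(contentsOf: fileURL)
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))

    // Guess: the artwork lives on a ModelEntity named "ArtFront" with a
    // single material slot.
    if let art = root.findEntity(named: "ArtFront") as? ModelEntity {
        art.model?.materials = [material]
    }
}
```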
Posted
by mdkBsenA.
Last updated
.
Post not yet marked as solved
1 Reply
256 Views
Is it possible to use an image sequence, a .mov, or a sprite sheet as a node source for a custom material in Reality Composer Pro? I have noticed that in the particle emitter, the Magic preset uses a 4x4 sprite sheet as a particle source. Can this be done within the shader graph for the diffuse or normal slot?
Posted
by stevenmc.
Last updated
.
Post marked as solved
2 Replies
275 Views
I am trying to make a shader for a disco-ball lighting effect for my app, and I want the light to reflect on the scene mesh, with the dots rotating as the ball spins. I was curious whether anyone has pointers on how to do this with Shader Graph in Reality Composer Pro or by writing a surface shader. This is the effect in Apple Clips that applies the effect to the scene mesh.
Posted
by doomdave.
Last updated
.
Post not yet marked as solved
0 Replies
188 Views
Hello everyone, I have just started learning visionOS app development. I have a scene called Scene, and inside it an object called Sphere. I want to add a drag interaction to this Sphere alone, using the code below, but the Sphere cannot actually be dragged in the Apple simulator. What is the reason?

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    @State var enlarge = false
    @State var offset: Point3D = .zero
    @State var sphereEntity: Entity?

    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "Scene", in: realityKitContentBundle) {
                content.add(scene)
                sphereEntity = content.entities.first?.findEntity(named: "Sphere")
                sphereEntity?.components.set(InputTargetComponent(allowedInputTypes: .all))
            }
        }
        .gesture(DragGesture().targetedToEntity(sphereEntity ?? Entity()).onChanged { value in
            print(value.location3D)
            sphereEntity?.position = value.convert(value.location3D, from: .local, to: sphereEntity?.parent ?? Entity())
        })
        .gesture(SpatialTapGesture().targetedToAnyEntity().onEnded { _ in
            print("Ssssssss")
        })
        .onAppear() {
        }
    }
}
```
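A likely culprit: gestures hit-test against collision shapes, so setting only an InputTargetComponent is not enough. A sketch of the probable missing piece (the 0.1 m radius is a placeholder for the sphere's real size):

```swift
import RealityKit

// Make an entity reachable by drag/tap gestures: it needs both an
// InputTargetComponent and a CollisionComponent with real shapes.
func makeDraggable(_ sphere: Entity) {
    sphere.components.set(InputTargetComponent())
    sphere.components.set(
        CollisionComponent(shapes: [.generateSphere(radius: 0.1)])
    )
}
```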
Posted
by cjlalala.
Last updated
.
Post not yet marked as solved
4 Replies
390 Views
Hi, I'm trying to have an entity (and some attachments to it) rotate. If I add the entity to content, add the attachment as a child entity, and give the entity an InputTargetComponent, then when I add a gesture ONLY the entity rotates and NOT the attachments (added as child entities). If I instead create a parent entity with let parentEntity = ModelEntity(), add my entity to the parentEntity, then add the attachments to my entity (which is now a child of the ModelEntity) and give the ModelEntity the InputTargetComponent, then the whole thing rotates, including attachments. I'm sure there must be a bug; why would it work only with an added ModelEntity? Anyway, bug or not, the problem I have now is that it rotates around the axes of the ModelEntity, not my primary entity, which is what I want. Is there a way to set the ModelEntity's axes to be the axes of my primary child entity so it rotates the way I want? What call should I use to move the axes, and where would I find the axes of the first child entity, which should be the focus of my app? Here is my code:

```swift
var body: some View {
    RealityView { content, attachments in
        // Add the initial RealityKit content
        if let specimenentity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
            let parentEntity = ModelEntity()
            parentEntity.addChild(specimenentity)
            content.add(parentEntity)
            let entityBounds = specimenentity.visualBounds(relativeTo: parentEntity)
            parentEntity.collision = CollisionComponent(shapes: [
                ShapeResource.generateBox(size: entityBounds.extents)
                    .offsetBy(translation: entityBounds.center)
            ])
            parentEntity.generateCollisionShapes(recursive: true)
            parentEntity.components.set(InputTargetComponent())
            if let Left_Hemisphere = attachments.entity(for: "Left_Hemisphere") {
                // 4. Position the attachment and add it to the RealityView content
                Left_Hemisphere.position = [-0.5, 1, 0]
                specimenentity.addChild(Left_Hemisphere)
            }
        }
    } attachments: {
        Attachment(id: "Left_Hemisphere") {
            // 2. Define the SwiftUI view
            Text("Left_Hemisphere")
                .font(.extraLargeTitle)
                .padding()
                .glassBackgroundEffect()
        }
    }
    .gesture(
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                // lastGestureValueX / lastGestureValueY are state variables
                // declared elsewhere in the view.
                let entity = value.entity
                var orientation = Rotation3D(entity.orientation(relativeTo: nil))
                var newOrientation: Rotation3D
                if value.location.x >= lastGestureValueX {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .y))
                } else {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .y))
                }
                entity.setOrientation(.init(newOrientation), relativeTo: nil)
                lastGestureValueX = value.location.x

                orientation = Rotation3D(entity.orientation(relativeTo: nil))
                if value.location.y >= lastGestureValueY {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(0.5), axis: .x))
                } else {
                    newOrientation = orientation.rotated(by: .init(angle: .degrees(-0.5), axis: .x))
                }
                entity.setOrientation(.init(newOrientation), relativeTo: nil)
                lastGestureValueY = value.location.y
            }
    )
}
```
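For the pivot question, one common pattern is to move the parent's origin onto the child's visual center and offset the child back by the same amount; rotating the parent then pivots around the child. A sketch, assuming the parent still has an identity transform when this runs:

```swift
import RealityKit

// Re-center a parent's origin on a child's visual center so rotating the
// parent pivots around the child rather than the parent's own origin.
func centerPivot(of parent: Entity, on child: Entity) {
    let center = child.visualBounds(relativeTo: parent).center
    child.position -= center   // child's visual center now sits at parent origin
    parent.position += center  // compensate so nothing visibly moves
}
```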
Posted
by michelefu.
Last updated
.
Post not yet marked as solved
1 Reply
182 Views
I currently have an iOS app that transmits an H.264 stream over Wi-Fi, decodes it with VideoToolbox, and displays it with MTKView. I want to implement similar functionality in visionOS. What should I do, given that MTKView is not available on visionOS?
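One simple (if not maximally efficient) fallback to consider: convert each decoded CVPixelBuffer to a CGImage with Core Image and present it from SwiftUI. A sketch:

```swift
import CoreImage
import CoreVideo

// Convert a decoded VideoToolbox frame to a CGImage that a SwiftUI Image
// can display. Reusing one CIContext avoids per-frame setup cost, though
// this path still copies every frame and is not ideal for high frame rates.
enum FrameConverter {
    static let context = CIContext()

    static func cgImage(from pixelBuffer: CVPixelBuffer) -> CGImage? {
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        return context.createCGImage(ciImage, from: ciImage.extent)
    }
}
```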
Posted
by yl_pang.
Last updated
.