RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit tag

417 Posts
Post not yet marked as solved
2 Replies
144 Views
Hello, I'm currently building an app that implements the on-device Object Capture API to create 3D models. I have two concerns that I cannot find addressed anywhere on the internet: 1) Can on-device Object Capture be performed by devices without LiDAR? I understand that depth data is necessary for making scale-accurate models; if there is an option to disable it, where would one specify that in code? 2) Can models be exported to .obj instead of .usdz? In the WWDC21 session (at 3:00) it is mentioned that this is possible with the Apple silicon API, but what about with on-device scanning? I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!
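On the first question, a minimal sketch of the support check RealityKit provides: ObjectCaptureSession.isSupported reports whether the current device can run the on-device capture flow (in practice it is false on devices without LiDAR), so the flow is gated rather than configurable:

```swift
import RealityKit

// Minimal sketch: gate the on-device capture flow on device support.
// ObjectCaptureSession.isSupported is false on devices that lack the
// required sensors; there is no documented switch to opt out of depth.
if ObjectCaptureSession.isSupported {
    let session = ObjectCaptureSession()
    // ... configure the session and present the capture UI
} else {
    // Fall back, e.g. to capturing photos and reconstructing on a Mac.
}
```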
Posted. Last updated.
Post not yet marked as solved
0 Replies
103 Views
I'm trying to better understand how loading entities works. If I do this:

RealityView { content in
    // Add the initial RealityKit content
    if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
        content.add(scene)
    }
}

it returns the root with the two objects I have in the scene (sphere_01 and sphere_02). If I add a drag gesture to this entity, it works on the root and gets applied to both sphere_01 and sphere_02 together (they both individually have collision and input components set to allow gestures). How do I get individual control of sphere_01 and sphere_02? Is it possible to load the root scene, as I'm doing above, and still have individual control?
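One pattern that may help, sketched under the assumption that each sphere already carries its own CollisionComponent and InputTargetComponent: a gesture targeted to any entity resolves to the specific child that was hit, so the root can be loaded whole while the spheres are manipulated individually:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

struct SceneView: View {
    var body: some View {
        RealityView { content in
            if let scene = try? await Entity(named: "RCP_Scene", in: realityKitContentBundle) {
                content.add(scene)
            }
        }
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    // value.entity is the entity that was actually hit
                    // (sphere_01 or sphere_02), not the scene root.
                    value.entity.position = value.convert(value.location3D,
                                                          from: .local,
                                                          to: value.entity.parent!)
                }
        )
    }
}
```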
Posted. Last updated.
Post not yet marked as solved
0 Replies
89 Views
Hi, I am investigating how to achieve emissive (glowing) materials like the following in my visionOS app: https://www.hiroakit.com/archives/1432 https://blog.terresquall.com/2020/01/getting-your-emission-maps-to-work-in-unity/ Right now I'm trying various things with Shader Graph in Reality Composer Pro, but from the official documentation and WWDC session videos I can't tell what the individual Shader Graph nodes do or what effects their combinations produce. I have a feeling that such luminous materials and expressions are not possible in visionOS to begin with. If there is a way to achieve this, please let me know. Thanks.
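For what it's worth, RealityKit's PhysicallyBasedMaterial exposes emissive properties in code, which may serve as a starting point independent of Shader Graph. A minimal sketch:

```swift
import RealityKit
import UIKit

// Minimal sketch: a self-illuminated material set up in code.
var material = PhysicallyBasedMaterial()
material.emissiveColor = .init(color: .green)
material.emissiveIntensity = 2.0  // scales the emissive contribution

let glowingSphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                materials: [material])
```

Note that an emissive surface only ignores scene lighting; a bloom halo like the Unity example would still need a separate post effect.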
Posted. Last updated.
Post not yet marked as solved
2 Replies
369 Views
Hi, what are the limitations and capabilities of visionOS? I cannot find answers to the questions I have.

- Let's say you have some USDZ files stored in a cloud service; there are so many of them that the app would be huge if you put them in assets. You want to fetch the one you are interested in and show it while the app is running. Is it possible to load USDZ files at runtime from the network? (A possible approach is sketched below.)
- Is there a limit to how many objects can be visible at once? Let's say I am in an open space with no walls and I want to place 100 3D objects somewhere in space. Is that possible? What if I placed 500, or 1000?
- Is there a way to save the anchor point of an object? I want to open the app again and have an object in the same place I left it. I would like to arrange my space and have objects always in the same spots. How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would they behave like real objects?
- Is it possible to color a plane? Let's say there is a wall and it's black. I want this wall to be orange. Is that possible?
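On the first question: runtime loading is possible, with the caveat that RealityKit loads entities from local file URLs, so a remote USDZ must be downloaded first. A sketch, assuming a reachable URL:

```swift
import Foundation
import RealityKit

// Sketch: fetch a remote .usdz, then load it as an Entity.
func loadRemoteModel(from remoteURL: URL) async throws -> Entity {
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)

    // Give the downloaded file a .usdz extension so RealityKit
    // recognizes the format.
    let localURL = tempURL.deletingPathExtension().appendingPathExtension("usdz")
    try? FileManager.default.removeItem(at: localURL)
    try FileManager.default.moveItem(at: tempURL, to: localURL)

    return try await Entity(contentsOf: localURL)
}
```

For persistence across launches, WorldAnchor (ARKit on visionOS) is the mechanism intended for keeping objects in the same real-world spots.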
Posted by Coderian. Last updated.
Post not yet marked as solved
7 Replies
200 Views
Hi, I'm trying to rotate an entity on Vision Pro. Most of the code is the same as the Diorama code from WWDC23. The problem I'm having is that the rotation occurs, but the axis of the rotation is not the center of my object; it seems to be centered on the zero coordinate of the immersive space. How do I change the rotation3DEffect to tell it to rotate around the entity, not the space? Is it even possible? This is the code; the rotation is at the end.

var body: some View {
    @Bindable var viewModel = viewModel
    RealityView { content, _ in
        do {
            let entity = try await Entity(named: "DioramaAssembled", in: RealityKitContent.RealityKitContentBundle)
            viewModel.rootEntity = entity
            content.add(entity)
            viewModel.updateScale()
            // Offset the scene so it doesn't appear underneath the user or conflict with the main window.
            entity.position = SIMD3<Float>(0, 0, -2)
            subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self,
                                                   componentType: PointOfInterestComponent.self) { event in
                createLearnMoreView(for: event.entity)
            })
            entity.generateCollisionShapes(recursive: true)
            entity.components.set(InputTargetComponent())
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    }
    .rotation3DEffect(.radians(currentrotateByX), axis: .y)
    .rotation3DEffect(.radians(currentrotateByY), axis: .x)
}
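An alternative worth sketching: drive the rotation on the entity's own transform instead of the SwiftUI view, so the pivot is the entity's origin rather than the space origin. This reuses rootEntity and the rotation state names from the post:

```swift
import RealityKit

// Sketch: rotate the entity itself; the pivot is its own origin.
if let entity = viewModel.rootEntity {
    entity.transform.rotation =
        simd_quatf(angle: Float(currentrotateByY), axis: SIMD3<Float>(1, 0, 0)) *
        simd_quatf(angle: Float(currentrotateByX), axis: SIMD3<Float>(0, 1, 0))
}
```

If the model's origin is not at its visual center, parenting it under a pivot entity placed at that center gives the same effect.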
Posted by michelefu. Last updated.
Post not yet marked as solved
0 Replies
106 Views
We are porting an iOS Unity AR app to native visionOS. Ideally, we want to reuse our AR models in both applications. These AR models are rather simple, but converting them manually would still be time-consuming, especially when it comes to the shaders. Is anyone aware of any attempts to write conversion tools for this? Maybe in other ecosystems like Godot or Unreal, where folks also want to convert the proprietary Unity format to something else? I've seen there's an FBX converter, but that would not handle shaders or particles. I am basically looking for something like the PolySpatial-internal conversion tools, but without the heavy weight of all the rest of Unity. Alternatively, is there a way to export a Unity project to visionOS and then just take the models out of the Xcode project?
Posted by waldgeist. Last updated.
Post marked as solved
1 Reply
265 Views
I have a custom material in Reality Composer. When I attach it to a cube and try loading the scene in Xcode, the material cannot be cast to a ShaderGraphMaterial because it has been changed to a PhysicallyBasedMaterial. The material was always a custom material; I did not change the type in Reality Composer. Does anyone know how to fix this?
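A small diagnostic that may help narrow this down: print the concrete material types after loading, to confirm what the scene actually contains. Here, cube stands for the loaded entity:

```swift
import RealityKit

// Sketch: list the runtime types of a loaded entity's materials.
if let model = cube.components[ModelComponent.self] {
    for material in model.materials {
        print(type(of: material))  // e.g. ShaderGraphMaterial or PhysicallyBasedMaterial
    }
}
```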
Posted. Last updated.
Post not yet marked as solved
1 Reply
249 Views
In a RealityView, I have a scene loaded from Reality Composer Pro. The entity I'm interacting with has a PhysicallyBasedMaterial with a diffuse color. I want to change that color on long press. I can get the entity and even get a reference to the material, but I can't seem to change anything about it. What is the best way to change the color of a material at runtime?

var longPress: some Gesture {
    LongPressGesture(minimumDuration: 0.5)
        .targetedToAnyEntity()
        .onEnded { value in
            value.entity.position.y = value.entity.position.y + 0.01
            if var shadow = value.entity.components[GroundingShadowComponent.self] {
                shadow.castsShadow = true
                value.entity.components.set(shadow)
            }
            if let model = value.entity.components[ModelComponent.self] {
                print("material", model)
                if let mat = model.materials.first {
                    print("material", mat)
                    // I have a material here but I can't set any properties?
                    // mat.diffuseColor does not exist
                }
            }
        }
}

Here is the full code:

import SwiftUI
import RealityKit
import RealityKitContent

struct Lab5026: View {
    var body: some View {
        RealityView { content in
            if let root = try? await Entity(named: "GestureLab", in: realityKitContentBundle) {
                root.position = [0, -0.45, 0]
                if let subject = root.findEntity(named: "Cube") {
                    subject.components.set(HoverEffectComponent())
                    subject.components.set(GroundingShadowComponent(castsShadow: false))
                }
                content.add(root)
            }
        }
        .gesture(longPress.sequenced(before: dragGesture))
    }

    var longPress: some Gesture {
        LongPressGesture(minimumDuration: 0.5)
            .targetedToAnyEntity()
            .onEnded { value in
                value.entity.position.y = value.entity.position.y + 0.01
                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = true
                    value.entity.components.set(shadow)
                }
                if let model = value.entity.components[ModelComponent.self] {
                    print("material", model)
                    if let mat = model.materials.first {
                        print("material", mat)
                        // I have a material here but I can't set any properties?
                        // mat.diffuseColor does not exist
                        // PhysicallyBasedMaterial
                    }
                }
            }
    }

    var dragGesture: some Gesture {
        DragGesture()
            .targetedToAnyEntity()
            .onChanged { value in
                let newPosition = value.convert(value.location3D, from: .global, to: value.entity.parent!)
                let limit: Float = 0.175
                value.entity.position.x = min(max(newPosition.x, -limit), limit)
                value.entity.position.z = min(max(newPosition.z, -limit), limit)
            }
            .onEnded { value in
                value.entity.position.y = value.entity.position.y - 0.01
                if var shadow = value.entity.components[GroundingShadowComponent.self] {
                    shadow.castsShadow = false
                    value.entity.components.set(shadow)
                }
            }
    }
}
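One approach that should work, sketched here: RealityKit materials are value types, so cast the material to PhysicallyBasedMaterial, mutate a copy, and write the whole ModelComponent back onto the entity:

```swift
// Sketch: change a PhysicallyBasedMaterial's color at runtime.
// Mutating a copy does nothing until the ModelComponent is set back.
if var model = value.entity.components[ModelComponent.self],
   var material = model.materials.first as? PhysicallyBasedMaterial {
    material.baseColor = .init(tint: .red)  // the "diffuse" color
    model.materials = [material]
    value.entity.components.set(model)
}
```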
Posted. Last updated.
Post marked as solved
3 Replies
230 Views
Hello! I've been trying to create a custom USDZ viewer using the Vision Pro. Basically, I want to be able to load in a file and have a custom control system I can use to transform, play back animations, etc. I'm getting stuck right at the starting line, however. As far as I can tell, the only way to access the file system through SwiftUI is to use the DocumentGroup struct to bring up the view. This requires implementing a file type through the FileDocument protocol. All of the resources I'm finding use text files as their example, so I'm unsure of how to implement USDZ files. Here is the FileDocument I've built so far:

import SwiftUI
import UniformTypeIdentifiers
import RealityKit

struct CoreUsdzFile: FileDocument {
    // we only support .usdz files
    static var readableContentTypes = [UTType.usdz]

    // make empty by default
    var content: ModelEntity = .init()

    // initializer to create new, empty usdz files
    init(initialContent: ModelEntity = .init()) {
        content = initialContent
    }

    // import or read file
    init(configuration: ReadConfiguration) throws {
        if let data = configuration.file.regularFileContents {
            // convert file content to ModelEntity?
            content = ModelEntity.init(mesh: data)
        } else {
            throw CocoaError(.fileReadCorruptFile)
        }
    }

    // save file wrapper
    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        let data = Data(content)
        return FileWrapper(regularFileWithContents: data)
    }
}

My errors are on the conversion of the file data into a ModelEntity and the reverse. I'm not sure if ModelEntity is the correct type here, but as far as I can tell .usdz files are imported as ModelEntities. Any help is much appreciated! Dylan
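A possible direction, sketched with the caveat that RealityKit has no Data-based USDZ initializer (it loads from file URLs): keep the document as raw Data, then write it to a temporary .usdz and load it asynchronously from the view:

```swift
import SwiftUI
import UniformTypeIdentifiers
import RealityKit

// Sketch: store the raw bytes in the document...
struct CoreUsdzFile: FileDocument {
    static var readableContentTypes = [UTType.usdz]
    var data = Data()

    init(configuration: ReadConfiguration) throws {
        guard let contents = configuration.file.regularFileContents else {
            throw CocoaError(.fileReadCorruptFile)
        }
        data = contents
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: data)
    }
}

// ...and turn them into an Entity where async loading is available,
// e.g. inside a RealityView's make closure.
func loadEntity(from data: Data) async throws -> Entity {
    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try data.write(to: url)
    return try await Entity(contentsOf: url)
}
```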
Posted by dganim8s. Last updated.
Post marked as Apple Recommended
344 Views
How can I play a USDZ entity animation in reverse? I have tried to put a negative value on the speed, as I was doing in SceneKit to make an animation play in reverse, but it did not work. Here is my code:

import SwiftUI
import RealityKit

struct ImmersiveView: View {
    @State var entity = Entity()
    @State var openDoor: Bool = true

    var body: some View {
        RealityView { content in
            if let mainDoor = try? await Entity(named: "Door.usdz") {
                if let frame = mainDoor.findEntity(named: "DoorFrame") {
                    frame.position = [0, 0, -8]
                    frame.orientation = simd_quatf(angle: (270 * (.pi / 180)), axis: SIMD3(x: 1, y: 0, z: 0))
                    content.add(frame)
                    entity = frame.findEntity(named: "Door")!
                    entity.components.set(InputTargetComponent(allowedInputTypes: .indirect))
                    entity.components.set(HoverEffectComponent())
                    let entityModel = entity.children[0]
                    entityModel.generateCollisionShapes(recursive: true)
                }
            }
        }
        .gesture(
            SpatialTapGesture()
                .targetedToEntity(entity)
                .onEnded { value in
                    print(value)
                    if openDoor == true {
                        let animController = entity.playAnimation(entity.availableAnimations[0], transitionDuration: 0, startsPaused: true)
                        animController.speed = 1.0
                        animController.resume()
                        openDoor = false
                    } else {
                        let animController = entity.playAnimation(entity.availableAnimations[0], transitionDuration: 0, startsPaused: true)
                        animController.speed = -1.0 // it does not work to reverse
                        animController.resume()
                        openDoor = true
                    }
                }
        )
    }
}

The door should open with the first click, which already happens, and close with the second click, which is not happening because the animation does not play in reverse.
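A guess at what's going on, with a sketch: playAnimation starts the controller at time 0, and a negative speed from time 0 leaves nothing to play. Seeking the controller to the end of the clip before resuming may produce the reverse playback:

```swift
// Sketch: to play backwards, start from the end of the clip.
let animController = entity.playAnimation(entity.availableAnimations[0],
                                          transitionDuration: 0,
                                          startsPaused: true)
animController.time = animController.duration  // jump to the last frame
animController.speed = -1.0
animController.resume()
```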
Posted by ostoura. Last updated.
Post not yet marked as solved
1 Reply
360 Views
I have some strange behavior in my app. When I set the position to .zero, you can see the sphere normally. But when I change it to any other number, no matter which one or how small, the sphere isn't visible or in the view.

The RealityView:

import SwiftUI
import RealityKit
import RealityKitContent

struct TheSphereOfDoomRV: View {
    @StateObject var viewModel: SphereViewModel = SphereViewModel()
    let sphere = SphereEntity(radius: 0.25, materials: [SimpleMaterial(color: .red, isMetallic: true)], name: "TheSphere")

    var body: some View {
        RealityView { content, attachments in
            content.add(sphere)
        } update: { content, attachments in
            sphere.scale = SIMD3<Float>(x: viewModel.scale, y: viewModel.scale, z: viewModel.scale)
        } attachments: {
            VStack {
                Text("The Sphere of Doom is one of the most powerful Objects. You can interact with him in every way you can imagine ").multilineTextAlignment(.center)
                Button {
                } label: {
                    Text("Play Video!")
                }
            }.tag("description")
        }.modifier(GestureModifier()).environmentObject(viewModel)
    }
}

SphereEntity:

import Foundation
import RealityKit
import RealityKitContent

class SphereEntity: Entity {
    private let sphere: ModelEntity

    @MainActor required init() {
        sphere = ModelEntity()
        super.init()
    }

    init(radius: Float, materials: [Material], name: String) {
        sphere = ModelEntity(mesh: .generateSphere(radius: radius), materials: materials)
        sphere.generateCollisionShapes(recursive: false)
        sphere.components.set(InputTargetComponent())
        sphere.components.set(HoverEffectComponent())
        sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: radius)]))
        sphere.name = name
        super.init()
        self.addChild(sphere)
        self.position = .zero // .init(x: Float, y: Float, z: Float) and [Float, Float, Float] don't work ...
    }
}
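A debugging sketch that might help isolate this: print the sphere's world-space bounds after repositioning. If the center lands outside the visible volume, the entity is being clipped by the window's bounds rather than failing to render:

```swift
// Sketch: check where the sphere actually ended up in world space.
let bounds = sphere.visualBounds(relativeTo: nil)
print("sphere center:", bounds.center, "extents:", bounds.extents)
```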
Posted. Last updated.
Post not yet marked as solved
0 Replies
118 Views
Hello, I've been trying to render these models in a visionOS app using RealityKit's Model3D API. The heart seems to appear dark all the time. Any thoughts on why this would happen?

Color.clear
    .overlay {
        Model3D(named: modelName, bundle: realityKitContentBundle) { model in
            model.resizable()
                .scaledToFit()
                .rotation3DEffect(
                    Rotation3D(
                        eulerAngles: .init(angles: orientation, order: .xyz)
                    )
                )
                .frame(depth: modelDepth)
                .offset(z: -modelDepth / 2)
                .accessibilitySortPriority(1)
        } placeholder: {
            ProgressView()
                .offset(z: -modelDepth * 0.75)
        }
    }
    .dragRotation(yawLimit: .degrees(120), pitchLimit: .degrees(20))
    .offset(z: modelDepth)
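One diagnostic worth sketching: if the asset's materials depend on image-based lighting, loading the entity in a RealityView with an explicit IBL can show whether lighting (rather than the asset itself) is the problem. "Sunlight" is a placeholder name for an EnvironmentResource assumed to be in the app bundle:

```swift
import RealityKit

// Sketch: attach an explicit image-based light to a loaded entity.
// "Sunlight" is a hypothetical environment resource in your bundle.
if let environment = try? await EnvironmentResource(named: "Sunlight") {
    entity.components.set(ImageBasedLightComponent(source: .single(environment)))
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}
```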
Posted. Last updated.
Post not yet marked as solved
0 Replies
106 Views
I am working on an AR app on iOS. I found this issue and can't find a quick solution at this moment, nor do I find any insight into what is happening.

Context: The AR app contains a 3D model of a sphere that is cut in half. The models are created with 3D modeling software.

Additional context: The sphere model is placed inside the environment. When the user enters the sphere, the sphere materials are set to video materials containing the jungle-like content visible in the video. The vertical center of the sphere is the floor on which the user moves and looks around. Each ARPlaneAnchor is connected to an AnchorEntity with its ModelEntity (plane) for visualization and sphere placement.

The bug (video): https://www.youtube.com/shorts/58860U1IkhM
As the user moves inside the sphere, parts of the video material start to show square patches of the camera feed (the background) where the floor plane is.

What has been tried:
- Changing the material type on the floor plane (a physically based material seems a little less bad).
- Changing the culling of the materials (no effect); the issue is not related to z-fighting.
- When a solid material is used for the floor plane, the flickering is not visible. When a solid material with alpha is used, both are visible (the alpha material and the flickering background).
- Editing the .usdz file (changing shadows and other properties) does not work at all.
- Checking the .usdz file with usdz tools (usdzconvert: all tests passed, and fixing opacity did not help).
- Changing the video type (.mp4 to .mov).
- Google & ChatGPT.

Similar issues:
- How do I eliminate flickering ...
- Plane entity's grounding shadow flickering in RealityKit
- AR flickering

Observations:
- The flickering background is always tied to the floor plane (can provide screenshots). It seems to highlight either a point of contact with the 3D floor plane or something else.
- The flickering happens at a certain angle and position. It does not happen all the time in all positions, which is weird.
- This problem didn't happen with the old sphere 3D file. The difference is a new 3D-generated floor plane WHICH IS DISABLED. It seems that even though it is disabled, it is still being used somehow.
- I had a similar issue where the floor planes would change color (the color tone would go lighter and darker). That issue was solved by disabling automatic shadow rendering; shadow rendering inside an object does not seem to work properly. The main difference is that the previous issue changed the color brightness, not transparency, and the whole plane changed rather than some part of it.
- Logging all the available planes shows only the expected planes (1-3, of which 1-2 are floor planes and 1 is an image plane).

Any ideas, solutions, or feedback are welcome.
SO post: https://stackoverflow.com/questions/78139934/realitykit-plane-flickering-bug-in-model-with-videomaterial
Thank you for your time.
Posted by MJ_111. Last updated.
Post not yet marked as solved
0 Replies
118 Views
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene. Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
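For reference, the Compositor Services route mentioned above looks roughly like the sketch below; there does not appear to be a documented equivalent for a RealityKit-rendered ImmersiveSpace:

```swift
import SwiftUI
import CompositorServices

// Sketch: opting out of foveated rendering when using CompositorLayer.
struct NoFoveationConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.isFoveationEnabled = false
    }
}

// Usage inside an ImmersiveSpace:
// CompositorLayer(configuration: NoFoveationConfiguration()) { layerRenderer in
//     // drive your own Metal render loop here
// }
```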
Posted. Last updated.
Post not yet marked as solved
1 Reply
113 Views
I have a volumetric window that I am using to display 3D content. The issue I have is that the 3D models rotate when the user moves the window. I want the rotation around the Y-axis to remain fixed when the user repositions the window. Is that possible? Also, is there a way to visually debug the walls of the 3D volume window?
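On the second question, a sketch of one way to visualize the volume's bounds: convert the view's frame into RealityKit scene coordinates with GeometryReader3D and add a translucent box of that size (the exact conversion call is written from memory and may need adjusting):

```swift
import SwiftUI
import RealityKit
import UIKit

// Sketch: show the volumetric window's bounds as a translucent box.
struct VolumeBoundsView: View {
    var body: some View {
        GeometryReader3D { geometry in
            RealityView { content in
                // Convert the SwiftUI frame (points) into scene space (meters).
                let bounds = content.convert(geometry.frame(in: .local),
                                             from: .local, to: content)
                let box = ModelEntity(
                    mesh: .generateBox(size: bounds.extents),
                    materials: [SimpleMaterial(color: .red.withAlphaComponent(0.15),
                                               isMetallic: false)]
                )
                box.position = bounds.center
                content.add(box)
            }
        }
    }
}
```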
Posted. Last updated.
Post not yet marked as solved
0 Replies
164 Views
I'm trying to add dynamic shadows by adding a directional light to the scene. I implemented a POC based on the latest documentation. Basically, the way shadows are rendered in RealityKit is by adding a ModelEntity to an AnchorEntity with a target of type planes. The result is that I'm getting shadows that flicker terribly. I'd add that in SceneKit there are many more shadow-related properties that let you tweak the look and feel of the shadows, and it's not hard to get a decent shadow there. I'm wondering whether accurate dynamic shadows are possible in RealityKit and, if not, whether there's a plan to fix this in the next RealityKit version.
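RealityKit on iOS does expose a couple of shadow knobs on the directional light itself; a sketch of the ones that exist, since raising depthBias and bounding maximumDistance sometimes reduces flicker and shadow acne:

```swift
import RealityKit
import UIKit

// Sketch: the shadow parameters RealityKit exposes on a directional light.
let light = DirectionalLight()
light.light = DirectionalLightComponent(color: .white, intensity: 2000)
light.shadow = DirectionalLightComponent.Shadow(maximumDistance: 4.0,  // meters of shadow coverage
                                                depthBias: 2.0)        // raise to fight shadow acne
light.look(at: .zero, from: [1, 2, 1], relativeTo: nil)
```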
Posted by nativ18. Last updated.
Post not yet marked as solved
4 Replies
270 Views
In the WWDC talk "Enhance your spatial computing app with RealityKit," we see how to create a portal effect with RealityKit. In the "Encounter Dinosaurs" experience on Vision Pro there is a similar portal, except that portal allows entities to stick out of it. Using the provided example code, I have been unable to replicate this effect: anything that sticks out of the portal gets clipped. How do I get entities to stick out of the portal the way the "Encounter Dinosaurs" experience does? I am familiar with the old way of using OcclusionMaterial to create portals, but if the camera gets between the OcclusionMaterial and the entity (such as when walking behind the portal), the effect breaks, and I was unable to break the effect in the "Encounter Dinosaurs" experience. If it helps at all: I have noticed that if you look very closely from the edge of the portal, the rocks do not stick out the way the dinosaurs do; the rocks get clipped. Therefore, the dinosaurs are somehow being rendered differently.
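For readers finding this later: visionOS 2 announced public API for exactly this (portal crossing), which was not available at the time of this thread. A sketch from the WWDC24 announcement, with worldEntity, portalEntity, and dinosaur as placeholders; treat the exact names as approximate:

```swift
import RealityKit

// Sketch (visionOS 2+): opt in to portal crossing on both sides.
var portal = PortalComponent(target: worldEntity)
portal.clippingMode = .plane(.positiveZ)
portal.crossingMode = .plane(.positiveZ)
portalEntity.components.set(portal)

// Entities that should poke out of the portal need this component.
dinosaur.components.set(PortalCrossingComponent())
```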
Posted by CodeName. Last updated.
Post not yet marked as solved
2 Replies
209 Views
We have been using attachment.bounds.extents to determine the size of a RealityView attachment at runtime. It had been working fine until the visionOS 1.1 update. I wonder if we are doing something wrong, as the release notes suggest a visual-bounds calculation issue was fixed in the latest release. The funny thing is we did not have an issue before. Below is how we access the height value:

let height = attachmentEntity.attachment.bounds.extents.y

Previously it returned the correct value. Now it returns 0. I wonder if anyone else is having the same issue.
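As a cross-check while this is broken, the size can also be derived from the entity hierarchy rather than the attachment property; a sketch:

```swift
// Sketch: compute the attachment's height from its visual bounds.
let height = attachmentEntity.visualBounds(relativeTo: nil).extents.y
```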
Posted. Last updated.
Post not yet marked as solved
1 Reply
154 Views
I've been trying to animate the OpacityComponent to fade entities in and out in my scene. I've tried animating the component with an AnimationResource as well as with a custom System. Both worked fine in the simulator but failed on device.

AnimationResource: When I animated the opacity of an entity using an animation with an opacity bind target, the entity would not change opacity until I physically looked away from the object. It's almost as if the device keeps an entity visible for as long as you keep looking at it, but once you look away it plays the animation.

System: I created a custom System that manually changes the opacity over time. However, on device the gradual fade doesn't work; the entity literally pops in and out of view instead of fading.

Can someone explain exactly how this component is supposed to be used? The simulator plays the animations exactly the way I would expect, but on device it's completely different.

Edit: I'm trying to change the opacity of entities with a VideoMaterial added to a ModelComponent. The fade animations are performed at certain points in the video that are triggered by an AVPlayer time boundary observer.
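For concreteness, the AnimationResource route described above typically looks like the following sketch (the entity is assumed to already have, or be given, an OpacityComponent):

```swift
import RealityKit

// Sketch: fade an entity out by animating the opacity bind target.
let fadeOut = FromToByAnimation<Float>(from: 1.0,
                                       to: 0.0,
                                       duration: 1.0,
                                       timing: .easeInOut,
                                       bindTarget: .opacity)
if let resource = try? AnimationResource.generate(with: fadeOut) {
    entity.components.set(OpacityComponent(opacity: 1.0))
    entity.playAnimation(resource)
}
```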
Posted. Last updated.