RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.

RealityKit Documentation

Posts under RealityKit tag

452 Posts
Post marked as solved
1 Reply
152 Views
Context: https://developer.apple.com/forums/thread/751036
I found some sample code that performs the process I described in my other post for ModelEntity here: https://www.youtube.com/watch?v=TqZ72kVle8A&ab_channel=ZackZack
At runtime I'm loading:
1. An immersive scene in a RealityView from Reality Composer Pro, with the robot model baked into the file (not remote; an asset in the project)
2. A Model3D view that pulls the robot model from a web URL
3. A RemoteObjectView (RealityView) which downloads the model to temporary storage, creates a ModelEntity, and adds it to the content of the RealityView
Method 1 is fine, but Methods 2 and 3 load the model with a pure black texture for some reason. The ideal state is for Methods 2 and 3 to look like the Method 1 result (see screenshot). Am I doing something wrong? For example, should I not use multiple RealityViews at once?
Screenshot
Code
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                // Add an ImageBasedLight for the immersive content
                guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
                let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
                immersiveContentEntity.components.set(iblComponent)
                immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))

                // Put skybox here. See example in World project available at
                // https://developer.apple.com/
            }
        }
        Model3D(url: URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!)
        SkyboxView()
        // RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/retrotv/tv_retro.usdz")
        RemoteObjectView(remoteURL: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")
    }
}
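One thing worth checking for the black-texture symptom: the ImageBasedLightComponent in the snippet above is applied only to the Reality Composer Pro scene entity, not to the remotely loaded models. A minimal sketch, assuming remoteEntity stands in for the entity that RemoteObjectView creates, of giving it the same image-based light:

import RealityKit

// Sketch: apply the scene's IBL to a remotely loaded entity as well.
// `remoteEntity` is a placeholder for the entity created by RemoteObjectView.
func applySceneLight(to remoteEntity: Entity) async {
    guard let resource = try? await EnvironmentResource(named: "ImageBasedLight") else { return }
    remoteEntity.components.set(
        ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25))
    remoteEntity.components.set(
        ImageBasedLightReceiverComponent(imageBasedLight: remoteEntity))
}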
Posted Last updated
.
Post not yet marked as solved
1 Reply
1.3k Views
I'm creating a custom scanning solution for iOS and using the RealityKit Object Capture PhotogrammetrySession API to build a 3D model. I'm finding that the data I'm sending to it ignores the depth and doesn't build the model to scale. The documentation is a little light on how to format the depth, so I'm wondering if someone could take a look at some example files I send to the PhotogrammetrySession. Would you be able to tell me what I'm not doing correctly? https://drive.google.com/file/d/1-GoeR_KMhX_i7-y8M8ElDRrySasOdoHy/view?usp=sharing Thank you!
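For reference, a minimal sketch of the folder-based PhotogrammetrySession flow, assuming an images folder of captures whose depth and gravity are embedded as auxiliary data and metadata; the folder, output path, and detail level are placeholders, not the poster's actual values.

import RealityKit

// Sketch: build a model from a folder of captured images.
func buildModel(from imagesFolder: URL, outputURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .sequential   // captures were taken in order
    let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)

    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Model written to \(outputURL.path)")
        case .requestError(_, let error):
            print("Request failed: \(error)")
        default:
            break
        }
    }
}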
Posted Last updated
.
Post not yet marked as solved
0 Replies
239 Views
When the dinosaur protrudes from the portal in the Encounter Dinosaurs app, it appears to be lit by the real room lighting, just like any other RealityKit content is by default. When the dinosaur is inside the portal, it appears to be lit by the virtual environment, and the two light sources seem to be blended smoothly at the plane of the portal. How is this done? ImageBasedLightReceiverComponent allows the IBL to be changed on a per-entity basis, but the actual lighting calculation shader code seems to be a black box, and I have not seen a way to specify which IBL texture is used on a per-fragment basis.
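For context, the per-entity mechanism mentioned above looks roughly like the sketch below; "PortalEnvironment" is a placeholder EnvironmentResource name, not anything from the Encounter Dinosaurs app, and this only switches the IBL per entity, not per fragment, which is exactly the limitation being asked about.

import RealityKit

// Sketch: override the image-based light for one entity subtree.
func applyPortalLighting(to dinosaur: Entity) async throws {
    let environment = try await EnvironmentResource(named: "PortalEnvironment")
    let iblSource = Entity()
    iblSource.components.set(ImageBasedLightComponent(source: .single(environment), intensityExponent: 1.0))
    dinosaur.addChild(iblSource)
    // The whole entity is lit by this IBL; there is no public hook to choose
    // the IBL texture per fragment.
    dinosaur.components.set(ImageBasedLightReceiverComponent(imageBasedLight: iblSource))
}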
Posted
by GiantSox.
Last updated
.
Post not yet marked as solved
3 Replies
281 Views
Hello, I would like to change the appearance (scale, texture, color) of a 3D element (ModelEntity) when I hover over it with my eyes. What should I do if I want to file a request for this feature? And how would I know whether it will ever be considered, or when it will appear?
Posted Last updated
.
Post not yet marked as solved
2 Replies
225 Views
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}
Problem: the failure case fires:
case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
Posted
by big_white.
Last updated
.
Post not yet marked as solved
2 Replies
476 Views
I am generating a USD file with RealityKit and ARKit. Within the same app, I want to open that USD file and display it as if it were a USDZ file, so the user can see it without having to send it to another device. Is this possible? Thanks
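One common way to do this, sketched under the assumption that the generated file has been written to disk: present it with QLPreviewController, the same in-app viewer AR Quick Look uses for USDZ files. Note that AR Quick Look generally expects a .usdz package, so the generated .usd may need to be exported or packaged as .usdz first; the type and property names below are placeholders.

import QuickLook
import UIKit

// Sketch: show the generated file in-app with Quick Look.
// `generatedFileURL` is a placeholder for wherever the USD/USDZ was written.
final class ModelPreviewController: UIViewController, QLPreviewControllerDataSource {
    var generatedFileURL: URL!

    func presentPreview() {
        let preview = QLPreviewController()
        preview.dataSource = self
        present(preview, animated: true)
    }

    // MARK: QLPreviewControllerDataSource
    func numberOfPreviewItems(in controller: QLPreviewController) -> Int { 1 }

    func previewController(_ controller: QLPreviewController, previewItemAt index: Int) -> QLPreviewItem {
        generatedFileURL as NSURL   // NSURL conforms to QLPreviewItem
    }
}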
Posted Last updated
.
Post not yet marked as solved
1 Reply
184 Views
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit. As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/ I think the process should be:
1. Download the file from the web and store it in temporary storage with the FileManager API.
2. Load the entity from the temp file location using Entity.init (I believe Entity.load is being deprecated in Swift 6; it throws up a compiler warning): https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to the content in the RealityView.
I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity with a mesh and material. Not sure if file size has an effect. Is there any official guidance or a code sample for this use case?
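A minimal sketch of those three steps, assuming a visionOS RealityView; the Quick Look robot URL is reused from the question and error handling is reduced to a print.

import SwiftUI
import RealityKit

// Sketch: download a USDZ at runtime, then load it with Entity(contentsOf:).
struct RemoteUSDZView: View {
    let remoteURL = URL(string: "https://developer.apple.com/augmented-reality/quick-look/models/vintagerobot2k/robot_walk_idle.usdz")!

    var body: some View {
        RealityView { content in
            do {
                // 1. Download to temporary storage, keeping the .usdz extension
                //    so RealityKit recognizes the format.
                let (downloaded, _) = try await URLSession.shared.download(from: remoteURL)
                let localURL = FileManager.default.temporaryDirectory
                    .appendingPathComponent(remoteURL.lastPathComponent)
                try? FileManager.default.removeItem(at: localURL)
                try FileManager.default.moveItem(at: downloaded, to: localURL)

                // 2. Load the entity from the local file using the async
                //    initializer (the replacement for the deprecated Entity.load).
                let entity = try await Entity(contentsOf: localURL)

                // 3. Add it to the RealityView's content.
                content.add(entity)
            } catch {
                print("Failed to load remote USDZ: \(error)")
            }
        }
    }
}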
Posted Last updated
.
Post marked as solved
1 Reply
151 Views
Hello, I am loading a model from the bundle and it loads successfully. Now I am scaling the model using the GestureExtension from Apple's demo code (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures?changes=_8).
@State private var selectedEntityName: String = ""
@State private var modelEntity: ModelEntity?

var body: some View {
    contentView
        .task {
            do {
                modelEntity = try await ModelEntity.loadArcadeMachine()
            } catch {
                fatalError(error.localizedDescription)
            }
        }
}

@ViewBuilder
private var contentView: some View {
    if let modelEntity {
        RealityView { content, attachments in
            modelEntity.position = SIMD3<Float>(x: 0, y: -0.3, z: -5)
            print(modelEntity.transform.scale)
            modelEntity.transform.scale = [0.006, 0.006, 0.006]
            content.add(modelEntity)
            if let percentTextAttachment = attachments.entity(for: "percentage") {
                percentTextAttachment.position = [0, 50, 0]
                modelEntity.addChild(percentTextAttachment)
            }
        } update: { content, attachments in
            // I want to get the updated scaling value here and show it in the RealityView attachment text.
        } attachments: {
            Attachment(id: "percentage") {
                Text("\(modelEntity.name) \(modelEntity.scale * 100) %")
                    .font(.system(size: 5000))
                    .background(.red)
            }
        }
        // This modifier adds the gesture support
        .installGestures()
    } else {
        ProgressView()
    }
}
Below is the code from the GestureExtension:
let state = EntityGestureState.shared
guard canScale, !state.isDragging else { return }
let entity = value.entity
if !state.isScaling {
    state.isScaling = true
    state.startScale = entity.scale
}
let magnification = Float(value.magnification)
entity.scale = state.startScale * magnification
state.magnifyValue = magnification
magnifyScale = Double(magnification)
print("Entity Name ::::::: \(entity.name)")
print("Scale ::::::: \(entity.scale)")
print("Magnification ::::::: \(magnification)")
print("StartScale ::::::: \(state.startScale)")
I need to use this "magnification" value in my RealityView so I can show it in the attachment text. How can I do that? Could you please guide me?
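One hedged way to get that value back into the attachment: write the magnification into a small observable object from the gesture handler and read it in the attachment view, so SwiftUI re-renders the text whenever it changes. ScaleObserver and ScalePercentageLabel are hypothetical names, not part of Apple's GestureExtension sample.

import SwiftUI
import Observation

// Sketch: a tiny observable object the gesture handler writes into and the
// attachment reads from.
@Observable
final class ScaleObserver {
    static let shared = ScaleObserver()
    var magnification: Float = 1.0
}

// In the GestureExtension's magnify handler, alongside the existing code:
//     ScaleObserver.shared.magnification = magnification

// Used inside the "percentage" attachment; reading the observable property in
// body is enough for SwiftUI to refresh the text when it changes.
struct ScalePercentageLabel: View {
    private var observer = ScaleObserver.shared

    var body: some View {
        Text("\(Int(observer.magnification * 100)) %")
            .font(.system(size: 50))
            .background(.red)
    }
}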
Posted Last updated
.
Post not yet marked as solved
1 Reply
324 Views
I have a stereoscopic plane that presents the user with depth beyond the plane itself. I would like the option either to render the depth buffer for those pixels or to not write any depth information for the plane at all. I cannot see any option in Shader Graph Material to affect the depth buffer during rendering. I also cannot see any way in RealityKit to prevent an entity from rendering to the depth buffer. I'm open to any suggestions.
Posted Last updated
.
Post marked as solved
3 Replies
245 Views
I am developing a native mixed-immersive app on Vision Pro. In my RealityView, I add my scene with content.add(mainGameScene). Normally the anchored position (origin coordinate) should be the device position but on the ground (with y == 0 at the floor); at least this is how I understand RealityViewContent to work. So if I place something at position (0, 0, -1.0), the object should be in front of you but on the floor (the z axis points backwards). However, I recently loaded a different scene and added it with the same code, content.add(mainGameScene), and something has changed: my scene is randomly anchored on the floor or the ceiling, depending on where I stand or sit. When I turn on visualization of my anchoring point, I can see that the anchor point being used is on the ceiling, while the correct one (around my feet) is left over there. How can I switch to the correct anchored position? Or is there any setting that changes the default behavior of RealityViewContent?
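One hedged workaround, if the implicit origin is unreliable: parent the scene to an explicit floor anchor so its placement no longer depends on where the RealityView decides the origin is. A minimal sketch, where loadMainGameScene() and "MainGameScene" are placeholders for however the scene is actually loaded:

import SwiftUI
import RealityKit

// Sketch: anchor the scene to a detected floor plane instead of relying on
// RealityView's implicit origin.
struct GameImmersiveView: View {
    var body: some View {
        RealityView { content in
            guard let mainGameScene = try? await loadMainGameScene() else { return }
            let floorAnchor = AnchorEntity(
                .plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5]))
            floorAnchor.addChild(mainGameScene)   // scene now sits on the floor plane
            content.add(floorAnchor)
        }
    }

    // Placeholder loader, not the poster's actual code.
    private func loadMainGameScene() async throws -> Entity {
        try await Entity(named: "MainGameScene")
    }
}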
Posted
by milanowth.
Last updated
.
Post not yet marked as solved
0 Replies
133 Views
I'm currently developing an application where the models present inside a volumetric window may exceed the clipping boundaries of the window (which I currently understand to be a maximum of 2 m). Because of this, as models move through the clipping boundaries, the interior of the models becomes visible. If possible, I'd like to cap these interiors with a solid fill so as to make them more visually appealing. However, as far as I can tell, I'm quite limited in how I might achieve this when using RealityKit on visionOS. Some approaches I've seen for similar effects use multiple passes of the model geometry rendering into stencil buffers, and use that to inform whether or not a cap should be drawn. But as far as I can tell, if I have opted into using a RealityView and RealityKit, I don't have enough control over my render pipeline to render ModelEntities with multiple passes over the set of contained entities into a stencil buffer that I then provide to a separate set of "capping planes" (how I currently imagine I might accomplish this effect). Alternatively (due to the nature of the models I'm using), I considered using a height map to construct an approximation of a surface cap, but how I might use a shader to construct a height map of rendered entities seems similarly difficult with the visionOS RealityView pipeline. It is not obvious to me how I could use a ShaderGraphMaterial to render to an arbitrary image buffer that I might then pass to other functions as an input; ShaderGraphMaterial seems biased toward the fact that all image inputs and outputs are either literal files or the actual rendered buffer. Would anyone out there who has already created an effect like this have some advice? Or could you correct any misunderstandings I have about accessing the Metal pipeline for RealityView, or about using ShaderGraphMaterial to construct a height map?
Posted
by netshade.
Last updated
.
Post not yet marked as solved
0 Replies
389 Views
I'm developing a 3D scanner that works on an iPad (6th gen, 12-inch). Photogrammetry with ObjectCaptureSession was successful, but my other attempts have not been. I've tried Photogrammetry with URL inputs; these are pictures from AVCapturePhoto. It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems no depth data or gravity info was used (depth and gravity are separate files). But if the metadata is injected, this attempt fails. This time I also tried Photogrammetry with a PhotogrammetrySamples sequence, and it failed as well. The settings are:
camera: back LiDAR camera
image format: kCVPixelFormatType_32BGRA (failed with a crash) or HEVC (just failed)
image depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photo settings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true
I wonder whether the iPad supports Photogrammetry with PhotogrammetrySamples. I've already tested some sample code provided by Apple: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture What should I do to make Photogrammetry succeed?
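For reference, a hedged sketch of the PhotogrammetrySamples path, assuming `frames` pairs a BGRA image buffer with a DepthFloat32 depth buffer captured from the LiDAR camera; the exact depth-format requirements aren't spelled out in detail in the docs, so treat this as a starting point rather than a verified recipe.

import RealityKit
import CoreVideo

// Sketch: build PhotogrammetrySamples with depth and feed them to a session.
func makeSession(frames: [(image: CVPixelBuffer, depth: CVPixelBuffer)],
                 outputURL: URL) throws -> PhotogrammetrySession {
    let samples = frames.enumerated().map { index, frame -> PhotogrammetrySample in
        var sample = PhotogrammetrySample(id: index, image: frame.image)
        sample.depthDataMap = frame.depth
        // sample.gravity and sample.metadata can also be filled in if captured.
        return sample
    }

    let session = try PhotogrammetrySession(
        input: samples,
        configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])
    return session
}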
Posted Last updated
.
Post not yet marked as solved
1 Reply
270 Views
Transparency in RealityKit is not rendered properly from specific ordinal axes. It seems to be a depth sorting issue where some transparent surfaces are rejected when they should not be. Some view directions relative to specific ordinal axes are fine; I have not narrowed down which specific axis is the problem. This is true across particle systems and/or meshes, and it is very easy to replicate the issue using multiple transparent meshes or particle systems. In the first gif you can see the problem in multiple instances: the fire and snow particles are sorted behind the terrain, which has transparency since it is a procedural blend of grass, rock, and ice, but they are correctly sorted in front of the opaque materials such as the rocks and wood. In the second gif, there are two back-to-back grid meshes (since double-sided rendering is not supported) that have a custom surface shader to animate the mesh in a wave and also apply transparency. You can see that in the distance the transparency seems to be rendered/overlapped correctly, but as the overlap approaches the screen (and crosses an ordinal axis) the transparent portion of the surface renders black, when the green of the mesh behind should be rendered. This is a blocking problem for the development of this demo.
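Not a fix for the underlying sorting behavior, but one workaround sometimes suggested on visionOS is to assign explicit sort groups so RealityKit draws the transparent content in a known order. A rough sketch, assuming the ModelSortGroup / ModelSortGroupComponent API and hypothetical terrainEntity / fireParticles entities:

import RealityKit

// Sketch: force an explicit draw order for transparent content.
// `terrainEntity` and `fireParticles` are placeholders for the scene's entities.
func applySortOrder(terrainEntity: Entity, fireParticles: Entity) {
    let transparentGroup = ModelSortGroup(depthPass: nil)
    // Lower order values draw first, so the terrain renders before the particles.
    terrainEntity.components.set(ModelSortGroupComponent(group: transparentGroup, order: 0))
    fireParticles.components.set(ModelSortGroupComponent(group: transparentGroup, order: 1))
}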
Posted
by rngd.
Last updated
.
Post not yet marked as solved
1 Reply
217 Views
I'm trying to build a project with a moderately complex Reality Composer Pro project, but am unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory. I'm wondering if there are any known memory leaks in realitytool, but basically the tool is taking up 20-30 GB (!) of memory during builds. I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini – it was supposed to just be a build machine for Apple Silicon compatibility, as my Pro is Intel. But, I'm kinda stuck here. I have a scene that builds fine, but any time I add a USD – in this case a tree asset – with lots of instances, or a lot of geometry, I run into the memory issue. I've tried greatly simplifying the model, but even a 2MB USD results in the crash. I'm failing to see how adding a 2MB asset would cause the memory of realitytool to balloon so much during builds. If someone from Apple is willing to look, I can provide the scene – but it's proprietary so I can't just post it publicly here.
Posted
by Gregory_w.
Last updated
.
Post not yet marked as solved
5 Replies
1.4k Views
What is the most efficient way to use a MTLTexture (created procedurally at run-time) as a RealityKit TextureResource? I update the MTLTexture per-frame using regular Metal rendering, so it’s not something I can do offline. Is there a way to wrap it without doing a copy? A specific example would be great. Thank you!
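One commonly used approach, sketched here rather than claimed as the most efficient: back the TextureResource with a TextureResource.DrawableQueue and render each frame directly into the queue's drawable texture instead of into a separate MTLTexture, which avoids a per-frame copy. The placeholder image and the LiveTexture/renderFrame names are illustrative assumptions.

import RealityKit
import Metal
import CoreGraphics

// Sketch: drive a RealityKit TextureResource from Metal each frame via DrawableQueue.
final class LiveTexture {
    let textureResource: TextureResource
    private let drawableQueue: TextureResource.DrawableQueue

    init(placeholder: CGImage, width: Int, height: Int) throws {
        // A placeholder resource to start with; its contents are replaced below.
        textureResource = try TextureResource.generate(
            from: placeholder, options: .init(semantic: .color))

        let descriptor = TextureResource.DrawableQueue.Descriptor(
            pixelFormat: .bgra8Unorm,
            width: width,
            height: height,
            usage: [.renderTarget, .shaderRead],
            mipmapsMode: .none)
        drawableQueue = try TextureResource.DrawableQueue(descriptor)

        // From now on the resource is backed by the queue's drawables.
        textureResource.replace(withDrawables: drawableQueue)
    }

    // Call once per frame: encode Metal work that targets the drawable's
    // texture directly, then present it to RealityKit.
    func renderFrame(draw: (MTLTexture) -> Void) {
        guard let drawable = try? drawableQueue.nextDrawable() else { return }
        draw(drawable.texture)   // render or blit into this texture
        drawable.present()       // hand the frame to RealityKit
    }
}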
Posted Last updated
.
Post not yet marked as solved
2 Replies
303 Views
Hello, I have started using the DragRotationModifier from the Hello World demo project by Apple. I have run into a bug that I can't seem to figure out for the life of me, where everything seems to work fine for about 3-5 seconds of movement before it starts rapidly spinning for some reason. I took a video, but it looks like I am unable to post any link to outside sources like imgur or YouTube, so I'll try to describe it as best I can: basically I can spin the sample USDZ Nike Airforce from the Apple sample objects perfectly, but after about 3-5 seconds it seems to rapidly snap between different other rotations and the rotation where the "cursor" is. A couple of additional notes: this only happens when the finger pinch/drag gesture is interacting with the object, and this spin only affects the yaw rotation axis of the object.
I created an "Imported Model Entity" wrapper that does some additional stuff when importing a USDZ model, similar to the Hello World demo. Then, within a RealityView, I create an instance of this ImportedModelEntity and attach the DragRotationModifier to the view like this:
RealityView { content in
    let modelEntity = await ImportedModelEntity(configuration: modelViewModel.modelConfiguration)
    content.add(modelEntity)
    self.modelEntity = modelEntity
    content.add(BoundsVisualizer(bounds: [0.6, 0.6, 0.6]))

    // Scale object to half of the size of Volume view
    let bounds = content.convert(geometry.frame(in: .local), from: .local, to: content)
    let minExtent = bounds.extents.min()
    modelViewModel.modelConfiguration.scale = minExtent
} update: { content in
    modelEntity?.update(configuration: modelViewModel.modelConfiguration)
}
.if(modelEntity != nil) { view in
    view.dragRotation(
        pitchLimit: .degrees(90),
        targetEntity: modelEntity!,
        sensitivity: 10,
        axRotateClockwise: axRotateClockwise,
        axRotateCounterClockwise: axRotateCounterClockwise)
}
For reference here is my ImportedModelEntity:
import Foundation
import RealityKit

class ImportedModelEntity: Entity {
    // MARK: - Sub-entities
    private var model: ModelEntity = ModelEntity()
    private let rotator = Entity()

    // MARK: - Internal state

    // MARK: - Initializers
    @MainActor required init() {
        super.init()
    }

    init(configuration: Configuration) async {
        super.init()

        if configuration.modelURL == nil {
            fatalError("Provided modelURL is NOT valid!!")
        }

        // Load the custom model on main thread
        // DispatchQueue.main.async {
        do {
            let input: ModelEntity? = try await ModelEntity(contentsOf: configuration.modelURL!)
            guard let model = input else { return }
            self.model = model

            // let material = SimpleMaterial(color: .green, isMetallic: false)
            // model.model?.materials = [material]

            // Add input components
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)

            // Add Hover Effect
            model.components.set(HoverEffectComponent())

            // self.model.components.set(GroundingShadowComponent(castsShadow: configuration.castsShadow))

            // Add Rotator
            self.addChild(rotator)

            // Add Model to rotator
            rotator.addChild(model)
        } catch is CancellationError {
            // The entity initializer can throw this error if an enclosing
            // RealityView disappears before the model loads. Exit gracefully.
            return
        } catch let error {
            print("Failed to load model: \(error)")
        }
        // }

        update(configuration: configuration)
    }

    // MARK: - Update Handlers
    func update(configuration: Configuration) {
        rotator.orientation = configuration.rotation
        move(to: Transform(
            scale: SIMD3(repeating: configuration.scale),
            rotation: orientation,
            translation: configuration.position),
            relativeTo: parent)
    }
}
Any help is greatly appreciated!
Posted Last updated
.
Post not yet marked as solved
1 Reply
237 Views
SwiftUI in visionOS has a modifier called preferredSurroundingsEffect that takes a SurroundingsEffect. From what I can tell, there is only a single effect available: .systemDark.
ImmersiveSpace(id: "MyView") {
    MyView()
        .preferredSurroundingsEffect(.systemDark)
}
I'd like to create another effect to tint the color of passthrough video if possible. Does anyone know how to create custom SurroundingsEffects?
Posted Last updated
.