RealityKit

Simulate and render 3D content for use in your augmented reality apps using RealityKit.

Posts under RealityKit tag

200 Posts

Failed to load a 12K panorama photo as an image texture (a 5.7K image loads normally); requesting help
extension Entity {
    func addPanoramicImage(for media: WRMedia) {
        let subscription = TextureResource.loadAsync(named: "image_20240425_201630").sink(
            receiveCompletion: {
                switch $0 {
                case .finished: break
                case .failure(let error): assertionFailure("\(error)")
                }
            },
            receiveValue: { [weak self] texture in
                guard let self = self else { return }
                var material = UnlitMaterial()
                material.color = .init(texture: .init(texture))
                self.components.set(ModelComponent(
                    mesh: .generateSphere(radius: 1E3),
                    materials: [material]
                ))
                self.scale *= .init(x: -1, y: 1, z: 1)
                self.transform.translation += SIMD3(0.0, -1, 0.0)
            }
        )
        components.set(Entity.WRSubscribeComponent(subscription: subscription))
    }
}

Problem: the failure branch is hit:

case .failure(let error): assertionFailure("\(error)")
Thread 1: Fatal error: Error Domain=MTKTextureLoaderErrorDomain Code=0 "Image decoding failed" UserInfo={NSLocalizedDescription=Image decoding failed, MTKTextureLoaderErrorKey=Image decoding failed}
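One possible cause (not confirmed in the post) is that a 12K equirectangular image exceeds the device's maximum Metal texture dimension (commonly 8192 or 16384 px per side), which MTKTextureLoader surfaces as "Image decoding failed". A minimal sketch, assuming the image ships in the app bundle, that downsamples the image before creating the texture; the function name and the 8192 limit are illustrative only:

```swift
import RealityKit
import UIKit

// Sketch only: downsample an oversized panorama before creating the texture.
// The 8192 limit is an assumption; query the MTLDevice for the real maximum.
func makePanoramaTexture(named name: String, maxDimension: CGFloat = 8192) throws -> TextureResource {
    guard let source = UIImage(named: name)?.cgImage else {
        throw CocoaError(.fileReadCorruptFile)
    }
    let width = CGFloat(source.width)
    let height = CGFloat(source.height)
    let scale = min(1, maxDimension / max(width, height))

    var cgImage = source
    if scale < 1 {
        // Redraw the image at the reduced size using UIKit/Core Graphics.
        let newSize = CGSize(width: width * scale, height: height * scale)
        let renderer = UIGraphicsImageRenderer(size: newSize)
        let resized = renderer.image { _ in
            UIImage(cgImage: source).draw(in: CGRect(origin: .zero, size: newSize))
        }
        cgImage = resized.cgImage ?? source
    }

    // Create the RealityKit texture from the (possibly downscaled) CGImage.
    return try TextureResource.generate(from: cgImage, options: .init(semantic: .color))
}

// Usage (hypothetical): let texture = try makePanoramaTexture(named: "image_20240425_201630")
```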
2 replies · 0 boosts · 438 views · May ’24
Download a USDZ file from the web and load it as an entity at runtime in RealityKit
I can't find a way to download a USDZ at runtime and load it into a RealityView with RealityKit. As an example, imagine downloading one of the 3D models from this Apple Developer page: https://developer.apple.com/augmented-reality/quick-look/
I think the process should be:
1. Download the file from the web and store it in temporary storage with the FileManager API.
2. Load the entity from the temporary file location using Entity.init (I believe Entity.load is being deprecated in Swift 6; it throws a compiler warning): https://developer.apple.com/documentation/realitykit/loading-entities-from-a-file
3. Add the entity to the content in the RealityView.
I'm doing this at runtime on visionOS in the simulator. I can get this to work with textures using slightly different APIs, so I think the logic is sound, but in that case I'm creating the entity from a mesh and a material. I'm not sure whether file size has an effect. Is there any official guidance or a code sample for this use case?
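A minimal sketch of that flow, assuming an async context and a reachable URL; the helper name and the error handling are illustrative, and the APIs used are URLSession's async download(from:) plus RealityKit's async Entity(contentsOf:) initializer:

```swift
import Foundation
import RealityKit

// Sketch: download a remote .usdz, copy it to a named temp file, and load it.
func loadRemoteUSDZ(from remoteURL: URL) async throws -> Entity {
    // 1. Download the file to a temporary location.
    let (downloadedURL, _) = try await URLSession.shared.download(from: remoteURL)

    // 2. Give the temporary file a .usdz name so RealityKit recognizes the format.
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent(UUID().uuidString)
        .appendingPathExtension("usdz")
    try FileManager.default.copyItem(at: downloadedURL, to: destination)

    // 3. Load the entity asynchronously from the local file.
    return try await Entity(contentsOf: destination)
}

// Usage inside a RealityView (modelURL is a placeholder):
// RealityView { content in
//     if let entity = try? await loadRemoteUSDZ(from: modelURL) {
//         content.add(entity)
//     }
// }
```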
1 reply · 0 boosts · 445 views · May ’24
How to get the updated scale value in RealityKit on Vision Pro
Hello, I load a model from the bundle and it loads successfully. Now I am scaling the model using the GestureExtension from Apple's demo code (https://developer.apple.com/documentation/realitykit/transforming-realitykit-entities-with-gestures?changes=_8).

@State private var selectedEntityName: String = ""
@State private var modelEntity: ModelEntity?

var body: some View {
    contentView
        .task {
            do {
                modelEntity = try await ModelEntity.loadArcadeMachine()
            } catch {
                fatalError(error.localizedDescription)
            }
        }
}

@ViewBuilder
private var contentView: some View {
    if let modelEntity {
        RealityView { content, attachments in
            modelEntity.position = SIMD3<Float>(x: 0, y: -0.3, z: -5)
            print(modelEntity.transform.scale)
            modelEntity.transform.scale = [0.006, 0.006, 0.006]
            content.add(modelEntity)
            if let percentTextAttachment = attachments.entity(for: "percentage") {
                percentTextAttachment.position = [0, 50, 0]
                modelEntity.addChild(percentTextAttachment)
            }
        } update: { content, attachments in
            // I want to read the updated scale value here and show it in the RealityView attachment text.
        } attachments: {
            Attachment(id: "percentage") {
                Text("\(modelEntity.name) \(modelEntity.scale * 100) %")
                    .font(.system(size: 5000))
                    .background(.red)
            }
        }
        // This modifier adds gesture support
        .installGestures()
    } else {
        ProgressView()
    }
}

Below is the relevant code from GestureExtension:

let state = EntityGestureState.shared
guard canScale, !state.isDragging else { return }

let entity = value.entity
if !state.isScaling {
    state.isScaling = true
    state.startScale = entity.scale
}
let magnification = Float(value.magnification)
entity.scale = state.startScale * magnification
state.magnifyValue = magnification
magnifyScale = Double(magnification)
print("Entity Name ::::::: \(entity.name)")
print("Scale ::::::: \(entity.scale)")
print("Magnification ::::::: \(magnification)")
print("StartScale ::::::: \(state.startScale)")

I need to use this "magnification" value in the RealityView. How can I do it? Could you please guide me?
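One way to bridge the two worlds (a sketch, not part of Apple's GestureExtension sample) is to push the magnification into a shared observable object that the SwiftUI attachment observes; all type and property names below are hypothetical:

```swift
import SwiftUI

// Hypothetical shared state written by the gesture handler and read by the attachment.
final class GestureScaleState: ObservableObject {
    static let shared = GestureScaleState()
    @Published var magnification: Double = 1.0
}

// Attachment view that re-renders whenever the magnification changes.
struct ScaleReadoutView: View {
    @ObservedObject private var state = GestureScaleState.shared

    var body: some View {
        Text(String(format: "%.0f %%", state.magnification * 100))
    }
}

// In the gesture extension, after computing the magnification:
// GestureScaleState.shared.magnification = Double(value.magnification)

// In the RealityView attachments builder:
// Attachment(id: "percentage") { ScaleReadoutView() }
```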
1 reply · 0 boosts · 348 views · May ’24
Default RealityViewContent has multiple anchors?
I am developing a mixed-immersion native app on Vision Pro. In my RealityView, I add my scene with content.add(mainGameScene). Normally the anchored position (the coordinate origin) should be at the device's position but on the ground (y == 0 at the floor); at least that is how I understand RealityViewContent to work. So if I place something at position (0, 0, -1.0), the object should be in front of me but on the floor (the z axis points backwards). However, I recently loaded a different scene and added it with the same code, content.add(mainGameScene), and something changed: my scene is randomly anchored to the floor or the ceiling, depending on where I stand or sit. When I turn on the anchoring visualizations, I can see that the anchor point being used is on the ceiling, while the correct one (around my feet) is left over there. How can I switch to the correct anchor position? Or is there a setting that changes the default RealityViewContent behavior?
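If the intent is to keep the scene on the floor regardless of where the default origin lands, one option (a sketch, assuming explicit plane anchoring is acceptable for the app) is to parent the scene to a floor AnchorEntity instead of adding it directly to the content:

```swift
import SwiftUI
import RealityKit

// Sketch: anchor the game scene to a detected horizontal floor plane
// instead of relying on the RealityViewContent origin.
struct GameSceneView: View {
    var body: some View {
        RealityView { content in
            let floorAnchor = AnchorEntity(
                .plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5])
            )
            // mainGameScene is assumed to be loaded elsewhere; a placeholder is used here.
            let mainGameScene = Entity()
            floorAnchor.addChild(mainGameScene)
            content.add(floorAnchor)
        }
    }
}
```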
3 replies · 0 boosts · 390 views · Apr ’24
Capping Clipped Models in a Volumetric Window
I'm currently developing an application where the models inside a volumetric window may exceed the clipping boundaries of the window (which I currently understand to be a maximum of 2 m). Because of this, as models move through the clipping boundaries, the interior of the models becomes visible. If possible, I'd like to cap these interiors with a solid fill to make them more visually appealing. However, as far as I can tell, I'm quite limited in how I might achieve this when using RealityKit on visionOS. Some approaches I've seen for similar effects render the model geometry in multiple passes into stencil buffers and use that to decide whether a cap should be drawn. However, as far as I can tell, if I have opted into a RealityView and RealityKit, I don't have enough control over the render pipeline to render ModelEntities in multiple passes into a stencil buffer that I then provide to a separate set of "capping planes" (how I currently imagine I might accomplish this effect). Alternatively (given the nature of the models I'm using), I considered using a height map to construct an approximation of a surface cap, but how I might use a shader to build a height map of rendered entities seems similarly difficult with the visionOS RealityView pipeline. It is not obvious to me how I could use a ShaderGraphMaterial to render to an arbitrary image buffer that I could then pass to other functions as an input; ShaderGraphMaterial seems built around the assumption that all image inputs and outputs are either literal files or the actual rendered buffer. Has anyone out there already created an effect like this and have some advice? Or could anyone correct any misunderstandings I have about accessing the Metal pipeline from RealityView, or about using ShaderGraphMaterial to construct a height map?
0 replies · 0 boosts · 226 views · Apr ’24
Photogrammetry fails with a crash (assert at line 417)
I'm developing a 3D scanner that runs on an iPad (6th gen, 12-inch). Photogrammetry with ObjectCaptureSession was successful, but my other attempts are not. I've tried Photogrammetry with URL inputs; these are pictures from AVCapturePhoto. It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems that no depth data or gravity info was used (depth and gravity are separate files). But if the metadata is injected, this attempt fails. I then tried Photogrammetry with a PhotogrammetrySamples sequence, and it failed as well.
The settings are:
camera: back LiDAR camera
image format: kCVPixelFormatType_32BGRA (fails with a crash) or HEVC (just fails)
image depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photoSetting: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true
I wonder whether the iPad supports Photogrammetry with PhotogrammetrySamples. I've already tested some sample code provided by Apple:
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
What should I do to make Photogrammetry succeed?
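For comparison, a minimal sketch of the folder-input path (not the PhotogrammetrySamples path described above); the directory and output URLs are placeholders and the detail level is arbitrary:

```swift
import RealityKit

// Sketch: folder-input reconstruction; imagesDirectory and outputURL are placeholders.
func reconstruct(imagesDirectory: URL, outputURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.isObjectMaskingEnabled = false

    let session = try PhotogrammetrySession(input: imagesDirectory, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputURL, detail: .reduced)])

    // Monitor the output stream for completion or per-request errors.
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Finished: \(outputURL.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}
```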
2 replies · 1 boost · 488 views · Apr ’24
realitytool fails with killed-9 on any moderately complex Reality Composer Pro project
I'm trying to build a project with a moderately complex Reality Composer Pro project, but I am unable to because my Mac mini (2023, 8GB RAM) keeps running out of memory. I'm wondering if there are any known memory leaks in realitytool; basically the tool is taking up 20-30GB (!) of memory during builds. I have a Mac Pro for content creation, which is why I didn't go for more RAM on the mini: it was supposed to just be a build machine for Apple Silicon compatibility, as my Pro is Intel. But I'm kinda stuck here. I have a scene that builds fine, but any time I add a USD with lots of instances or a lot of geometry (in this case a tree asset), I run into the memory issue. I've tried greatly simplifying the model, but even a 2MB USD results in the crash. I'm failing to see how adding a 2MB asset would cause realitytool's memory to balloon so much during builds. If someone from Apple is willing to look, I can provide the scene, but it's proprietary so I can't just post it publicly here.
2 replies · 0 boosts · 349 views · Apr ’24
Weird spinning glitch from Apple's Demo "DragRotationModifier"
Hello, I have started using the DragRotationModifier from the Hello World demo project by Apple. I have run into a bug that I can't figure out: everything works fine for about 3-5 seconds of movement before the model starts rapidly spinning. I took a video, but since I can't post links to outside sources like Imgur or YouTube, I'll describe it as best I can: I can spin the sample USDZ Nike Air Force model from Apple's sample objects perfectly, but after about 3-5 seconds it rapidly snaps between other rotations and the rotation under the "cursor". A couple of additional notes: this only happens while the finger pinch/drag gesture is interacting with the object, and the spin only affects the yaw rotation axis of the object. I created an "ImportedModelEntity" wrapper that does some additional work when importing a USDZ model, similar to the Hello World demo. Then, within a RealityView, I create an instance of this ImportedModelEntity and attach the drag-rotation modifier to the view like this:

RealityView { content in
    let modelEntity = await ImportedModelEntity(configuration: modelViewModel.modelConfiguration)
    content.add(modelEntity)
    self.modelEntity = modelEntity
    content.add(BoundsVisualizer(bounds: [0.6, 0.6, 0.6]))

    // Scale object to half of the size of the volume view
    let bounds = content.convert(geometry.frame(in: .local), from: .local, to: content)
    let minExtent = bounds.extents.min()
    modelViewModel.modelConfiguration.scale = minExtent
} update: { content in
    modelEntity?.update(configuration: modelViewModel.modelConfiguration)
}
.if(modelEntity != nil) { view in
    view.dragRotation(
        pitchLimit: .degrees(90),
        targetEntity: modelEntity!,
        sensitivity: 10,
        axRotateClockwise: axRotateClockwise,
        axRotateCounterClockwise: axRotateCounterClockwise)
}

For reference, here is my ImportedModelEntity:

import Foundation
import RealityKit

class ImportedModelEntity: Entity {
    // MARK: - Sub-entities
    private var model: ModelEntity = ModelEntity()
    private let rotator = Entity()

    // MARK: - Internal state

    // MARK: - Initializers
    @MainActor required init() {
        super.init()
    }

    init(configuration: Configuration) async {
        super.init()
        if configuration.modelURL == nil {
            fatalError("Provided modelURL is NOT valid!!")
        }

        // Load the custom model on main thread
        // DispatchQueue.main.async {
        do {
            let input: ModelEntity? = try await ModelEntity(contentsOf: configuration.modelURL!)
            guard let model = input else { return }
            self.model = model

            // let material = SimpleMaterial(color: .green, isMetallic: false)
            // model.model?.materials = [material]

            // Add input components
            model.components.set(InputTargetComponent())
            model.generateCollisionShapes(recursive: true)

            // Add hover effect
            model.components.set(HoverEffectComponent())
            // self.model.components.set(GroundingShadowComponent(castsShadow: configuration.castsShadow))

            // Add rotator
            self.addChild(rotator)

            // Add model to rotator
            rotator.addChild(model)
        } catch is CancellationError {
            // The entity initializer can throw this error if an enclosing
            // RealityView disappears before the model loads. Exit gracefully.
            return
        } catch let error {
            print("Failed to load model: \(error)")
        }
        // }

        update(configuration: configuration)
    }

    // MARK: - Update handlers
    func update(configuration: Configuration) {
        rotator.orientation = configuration.rotation
        move(to: Transform(
            scale: SIMD3(repeating: configuration.scale),
            rotation: orientation,
            translation: configuration.position),
            relativeTo: parent)
    }
}

Any help is greatly appreciated!
3 replies · 0 boosts · 406 views · Apr ’24
When using ARImageTrackingConfiguration, the entity in the scene keeps shaking
When isAutoFocusEnabled is set to true, the entity in the scene keeps shaking. When it is set to false, the camera does not focus. How should I configure this to solve the problem?

override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self
    guard let arCGImage = UIImage(named: "111", in: .main, with: .none)?.cgImage else { return }
    let arReferenceImage = ARReferenceImage(arCGImage, orientation: .up, physicalWidth: CGFloat(0.1))
    let arImages: Set<ARReferenceImage> = [arReferenceImage]
    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = arImages
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = false
    arView.session.run(configuration)
}

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.compactMap { $0 as? ARImageAnchor }.forEach {
        let anchor = AnchorEntity(anchor: $0)
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        model.transform.translation.y = 0.05
        anchor.children.append(model)
        arView.scene.addAnchor(anchor)
    }
}
0 replies · 0 boosts · 328 views · Apr ’24
Loading USDZ files with SwiftUI
Hello! I've been trying to create a custom USDZ viewer for the Vision Pro. Basically, I want to be able to load in a file and have a custom control system I can use to transform it, play back animations, etc. I'm getting stuck right at the starting line, however. As far as I can tell, the only way to access the file system through SwiftUI is to use the DocumentGroup struct to bring up the view. This requires implementing a file type through the FileDocument protocol. All of the resources I'm finding use text files as their example, so I'm unsure how to implement USDZ files. Here is the FileDocument I've built so far:

import SwiftUI
import UniformTypeIdentifiers
import RealityKit

struct CoreUsdzFile: FileDocument {
    // we only support .usdz files
    static var readableContentTypes = [UTType.usdz]

    // make empty by default
    var content: ModelEntity = .init()

    // initializer to create new, empty usdz files
    init(initialContent: ModelEntity = .init()) {
        content = initialContent
    }

    // import or read file
    init(configuration: ReadConfiguration) throws {
        if let data = configuration.file.regularFileContents {
            // convert file content to ModelEntity?
            content = ModelEntity.init(mesh: data)
        } else {
            throw CocoaError(.fileReadCorruptFile)
        }
    }

    // save file wrapper
    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        let data = Data(content)
        return FileWrapper(regularFileWithContents: data)
    }
}

My errors are on the conversion of the file data into a ModelEntity and the reverse. I'm not sure if ModelEntity is the correct type here, but as far as I can tell .usdz files are imported as ModelEntities. Any help is much appreciated!
Dylan
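Since RealityKit has no initializer that builds an entity directly from raw USDZ bytes, one workable pattern (a sketch, with hypothetical names) is to keep the document's content as Data and write it to a temporary .usdz file before loading it with the async Entity(contentsOf:) initializer:

```swift
import SwiftUI
import UniformTypeIdentifiers
import RealityKit

// Hypothetical document type that stores the raw .usdz bytes rather than an entity.
struct UsdzFileDocument: FileDocument {
    static var readableContentTypes = [UTType.usdz]

    var data: Data

    init(data: Data = Data()) {
        self.data = data
    }

    init(configuration: ReadConfiguration) throws {
        guard let contents = configuration.file.regularFileContents else {
            throw CocoaError(.fileReadCorruptFile)
        }
        data = contents
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        FileWrapper(regularFileWithContents: data)
    }

    /// Writes the bytes to a temporary .usdz file and loads them as an Entity.
    func loadEntity() async throws -> Entity {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)
            .appendingPathExtension("usdz")
        try data.write(to: url)
        return try await Entity(contentsOf: url)
    }
}
```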
4 replies · 0 boosts · 1.4k views · Apr ’24
AVPlayerViewController in immersive spaces is not possible
I'm trying to get a similar experience to Apple TV's immersive videos, but I cannot figure out how to present the AVPlayerViewController controls detached from the video. I am able to use the same AVPlayer in a window and projected on a VideoMaterial, but I can't figure out how to just present the controls, while displaying the video only in the 3D entity, without having a 2D projection in any view. Is this even possible?
0 replies · 0 boosts · 343 views · Apr ’24
ARKit ARWorldTrackingConfiguration doesn't recognize faces
Hi all, I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However, when I query the scene, using the same type of query one would use with ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:

var entityCollection: Set<Entity> = []

let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}

Every single time, faceEntity comes back empty. Any help/pointers would be appreciated!
1 reply · 0 boosts · 394 views · Apr ’24
Create a RealityKit entity that simulates a circular progress bar
I want to show the progress of a certain part of the game using an entity that looks like a "pie chart": basically a cylinder with a cut-out. As the progress changes (0-100), the entity fills up. Is there a way to create this kind of model entity? I know there are ways to animate entities and warp them between meshes, but I was wondering if somebody knows the simplest way to achieve it. Maybe some kind of custom shader that just changes how the material is rendered? I don't need a physics body, just to show it. I know how to do this in UIKit and the classic 2D Apple UI frameworks, but working with model entities it gets a bit tricky for me. Here is an example of how it would look; the examples are in 2D, but you can imagine them as 3D cylinders with a cut-out. Thank you!
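One simple option (a sketch, not the only approach) is to regenerate a sector mesh with MeshDescriptor whenever the progress changes. The sketch below builds a flat pie slice as a triangle fan; an extruded cylinder version would add side and cap faces the same way, and winding order and normals may need adjusting for your material:

```swift
import RealityKit

// Sketch: builds a flat pie-sector mesh covering `progress` (0...1) of a full circle.
func makeProgressMesh(progress: Float, radius: Float = 0.1, segments: Int = 64) throws -> MeshResource {
    let sweep = max(0.001, progress) * 2 * Float.pi
    var positions: [SIMD3<Float>] = [.zero]          // center vertex
    var indices: [UInt32] = []

    let steps = max(1, Int(Float(segments) * progress))
    for i in 0...steps {
        let angle = sweep * Float(i) / Float(steps)
        positions.append([radius * cos(angle), 0, radius * sin(angle)])
        if i > 0 {
            // Triangle fan: center, previous rim vertex, current rim vertex.
            indices.append(contentsOf: [0, UInt32(i), UInt32(i + 1)])
        }
    }

    var descriptor = MeshDescriptor(name: "progress")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Usage: swap the mesh on an existing ModelEntity as progress changes.
// progressEntity.model?.mesh = try makeProgressMesh(progress: 0.42)
```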
1 reply · 0 boosts · 490 views · Apr ’24
In Immersive mode, App enters background if user doesn't look at View
This is easy to replicate with the ObjectPlacement sample app. Just run the app, position the View behind you, and enjoy building a tower with the blocks. Eventually the app will enter the background and exit the immersive space. This is actually a big problem, because while you can ignore the change of scenePhase to .background, doing so removes your only chance of knowing that the user pinched the circled X button to close the View. You can run the Hello World app, enter the immersive space, and then close the View: the immersive space stays up and you can't get the View back. So you need to close the immersive space if the user closes the View (as ObjectPlacement does). But if you do that and the user doesn't look at the View, the app exits the immersive space. Either option is a bad user experience. Being able to hide the View while in the immersive space (so that the user can't close it) would be a good option; unfortunately, while you can hide all the content, the bar and the circled X button are still visible. A second option would be for the View not to go into the background when the user doesn't look at it, if and only if they are in an immersive space.
0 replies · 0 boosts · 371 views · Apr ’24
Access persona on visionOS
I am planning to build a VisionOS app and need to get access to the persona (avatar). I have not found any information regarding integration possibilities in the docs. Does anyone know if and how I can access the user's persona? Other applications like Zoom and Teams for VisionOS use the persona, so I think it is basically possible. Apparently (if it's not fake) there is also a chess game with integrated persona: https://www.youtube.com/watch?v=mMzK8C3t14I Any help is very welcome, thanks.
1 reply · 0 boosts · 413 views · Apr ’24