RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


Posts under RealityKit tag

244 Posts
Post not yet marked as solved
0 Replies
14 Views
How can I access the exact name of the child node that the user tapped on? For example, if the user taps on the Humerus_l node, I want to print its name to the console. This is how my 3D USDZ model and its scene graph look. I can also convert it into an .rcproject file, but I couldn't make it work with that either.
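One common approach (a sketch only, not taken from the post) is to add a UITapGestureRecognizer to the ARView and hit-test the touch location with ARView.entity(at:), which returns the tapped entity so its name (e.g. "Humerus_l") can be printed. Note that the loaded model needs collision shapes, e.g. via generateCollisionShapes(recursive: true), for the hit test to register:

import UIKit
import RealityKit

// Minimal sketch: `arView` is an existing ARView whose scene already contains
// the loaded USDZ model, and generateCollisionShapes(recursive: true) has been
// called on that model so its meshes are tappable.
final class EntityTapHandler: NSObject {
    private let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        arView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ recognizer: UITapGestureRecognizer) {
        let location = recognizer.location(in: arView)
        if let tapped = arView.entity(at: location) {
            // Prints the child entity's name, e.g. "Humerus_l", if that mesh was hit.
            print("Tapped entity:", tapped.name)
        }
    }
}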
Posted by Pistifeju. Last updated.
Post not yet marked as solved
1 Reply
56 Views
I tried to display a scene loaded from a .rcproject file in an ARView using the .nonAR camera mode, but I could not get it to display on my iPhone 8 (an actual device). I have confirmed that the simulator displays the scene properly, and if the camera mode is set to .ar, the scene is also displayed on the actual device. I am puzzled as to why a scene loaded from a .rcproject file does not show up on my actual device. If anyone has had a similar experience or has an idea of the cause, I would appreciate your help. Thank you for taking the time to read this post.

import SwiftUI
import RealityKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        // If cameraMode is ".ar", this works properly.
        let arView = ARView(frame: .zero, cameraMode: .nonAR)
        Sample.loadMySceneAsync { result in
            do {
                let myScene = try result.get()
                arView.scene.anchors.append(myScene)
            } catch {
                print("Failed to load myScene")
            }
        }

        let camera = PerspectiveCamera()
        let cameraAnchor = AnchorEntity(world: [0, 0.2, 0.5])
        cameraAnchor.addChild(camera)
        arView.scene.addAnchor(cameraAnchor)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}
Posted by polaris_. Last updated.
Post not yet marked as solved
0 Replies
51 Views
I'm new to the Reality Composer world and I'm working on a project where I need to load more than one Reality Composer project in the same app, but it doesn't work. For example, I have two different .rcproject files, pro1 and pro2, which are distinguished by the framed image that acts as the anchor for each scenario. I put them in Xcode and add them in the ContentView like this:

// Load the scenes from the Reality files
let boxAnchor = try! pro1.loadMenu()
let boxAnchor2 = try! pro2.loadScene()

// Add the anchors to the scene
arView.scene.anchors.append(boxAnchor)
arView.scene.anchors.append(boxAnchor2)

When I run the project on the iPad it installs the app and it works: it recognizes the image I use as an anchor and loads the correct project, but after the first interaction it does nothing. If I switch to the image connected to pro2, the app loads the right project but, again, after the first interaction it does nothing. While I use the app on my iPad Pro, Xcode prints the following error: "World tracking performance is being affected by resource constraints [1]". The app stays active, and every time I change the framed image the project I see also changes; the scenes keep the state they were in, so I can interact with the objects, but always only for a single tap before they freeze. Is there a solution to make the iPad load only the Reality Composer project requested when I frame a certain image? As an alternative solution I had thought of creating an initial menu that lets me choose which project to use, creating a different ContentView for each of them so the right project is shown through the user's choice rather than through the framed image. Is this a possible solution? Thanks in advance for your attention and any answers.
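One possible direction, sketched below using the post's own pro1.loadMenu() / pro2.loadScene() generated loaders and an assumed arView property, is to keep only one anchor in the scene at a time and swap it based on a menu choice instead of appending both anchors up front:

import RealityKit

// Hypothetical helper, not tested against the poster's project: `arView`,
// `pro1` and `pro2` are the poster's own objects, and loadMenu()/loadScene()
// are the loaders Reality Composer generates for those .rcproject files.
enum SelectedProject {
    case menu, scene
}

func show(_ selection: SelectedProject, in arView: ARView) {
    // Drop whatever is currently anchored so only one project is active at a time.
    arView.scene.anchors.removeAll()
    do {
        switch selection {
        case .menu:
            arView.scene.anchors.append(try pro1.loadMenu())
        case .scene:
            arView.scene.anchors.append(try pro2.loadScene())
        }
    } catch {
        print("Failed to load the selected Reality Composer scene:", error)
    }
}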
Posted. Last updated.
Post not yet marked as solved
0 Replies
79 Views
I am trying to run the application on an iPad (M1) device attached to the Xcode debugger. The scheme has ARKit Replay data enabled, with the recording made on the very same iPad. Errors:

2022-09-19 19:21:52.790061+0100 ARPathFinder[1373:320166] ⛔️⛔️⛔️ ERROR [MOVReaderInterface]: Error Domain=com.apple.AppleCV3DMOVKit.readererror Code=9 "CVAUserEvent: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'.}" UserInfo={NSLocalizedDescription=CVAUserEvent: Error Domain=com.apple.videoeng.streamreaderwarning Code=0 "Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'." UserInfo={NSLocalizedDescription=Cannot grab metadata. Unknown metadata stream 'CVAUserEvent'.}} ⛔️⛔️⛔️

2022-09-19 19:21:54.103813+0100 ARPathFinder[1373:320166] [Session] ARSession <0x104f77ec0>: did fail with error: Error Domain=com.apple.arkit.error Code=101 "Required sensor unavailable." UserInfo={NSLocalizedDescription=Required sensor unavailable., NSLocalizedFailureReason=A required sensor is not available on this device.}

Any help will be appreciated, thanks.
Posted by jmgawe. Last updated.
Post not yet marked as solved
3 Replies
154 Views
Hi, please let me know if I should rather file feedback for this, but I figured it's worth flagging one way or another. Tested with Xcode Version 14.0 beta 6 (14A5294g).

1. Project »Altering RealityKit Rendering with Shader Functions«: this project crashes right away when run on a device (iOS 15 and 16).

2. Project »Using object capture assets in RealityKit«: suffers from pretty bad performance when run on a device, barely scratching 20-25 fps on an iPhone 12 Pro, and even less on an iPhone XS.

As these are official sample projects, I feel they should work flawlessly out of the box. Best, Arthur
Posted. Last updated.
Post not yet marked as solved
4 Replies
483 Views
I'm noticing about 450 MB of memory footprint when loading a simple 2 MB USDZ model. To eliminate any misuse of the frameworks on my part, I built a basic RealityKit app using Xcode's Augmented Reality App template, with no code changes at all. I'm still seeing 450 MB in the Xcode gauges (so in debug mode). Looking at the memgraph, the IOAccelerator and IOSurface regions have 194 MB and 131 MB of dirty memory respectively. Is this all camera-related memory? In the hopes of reducing compute and memory, I tried disabling various rendering options on ARView as follows:

arView.renderOptions = [
    .disableHDR,
    .disableDepthOfField,
    .disableMotionBlur,
    .disableFaceMesh,
    .disablePersonOcclusion,
    .disableCameraGrain,
    .disableAREnvironmentLighting
]

This brought it down to 300 MB, which is still quite a lot. When I configure ARView.cameraMode to be nonAR it's still 113 MB. I'm running this on an iPhone 13 Pro Max, which could explain some of the large allocations, but I would still like to find opportunities to reduce the footprint. When I use QLPreviewController, the same (~2 MB) model takes only 27 MB in the Xcode gauges. Any ideas on reducing this memory footprint while using ARKit?
Posted by srnlwm. Last updated.
Post not yet marked as solved
0 Replies
88 Views
Working in the real world, sharing and creating new 3D models often results in dissimilar pivot points across models. This makes it difficult to add and manipulate models within an active AR scene. The ability to change the anchor/pivot point on the fly would make a world of difference in the user experience.
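Until such an API exists, one workaround (a sketch only; makePivotedEntity and pivotInModelSpace are illustrative names) is to wrap the model in an empty parent entity and offset the child, so the parent's origin acts as the new pivot:

import RealityKit

// Workaround sketch: rotating or scaling the returned parent now pivots
// around `pivotInModelSpace` instead of the model's authored origin.
// `model` is assumed to be an already-loaded ModelEntity.
func makePivotedEntity(for model: ModelEntity,
                       pivotInModelSpace: SIMD3<Float>) -> Entity {
    let pivot = Entity()
    pivot.addChild(model)
    // Shift the model so the chosen point sits at the parent's origin.
    model.position = -pivotInModelSpace
    return pivot
}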
Posted. Last updated.
Post marked as solved
6 Replies
167 Views
I've been trying to run multiple photogrammetry sessions in parallel on different threads, but I keep getting this error:

[Photogrammetry] queueNextProcessingBatchIfNeeded(): Already running a job... not starting new one

This happens even though session.isProcessing returns false. Is there someone out there who can help with this issue, please? Thanks a lot.
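If the error really means that only one reconstruction job can run at a time, a sequential fallback might look like the sketch below (the folder URLs, output names and detail level are placeholders):

import RealityKit

// Sequential sketch: process each folder of images one after another and
// wait for .processingComplete before starting the next session.
func reconstructSequentially(inputFolders: [URL], outputFolder: URL) async throws {
    for (index, folder) in inputFolders.enumerated() {
        let session = try PhotogrammetrySession(input: folder)
        let outputURL = outputFolder.appendingPathComponent("model\(index).usdz")
        try session.process(requests: [.modelFile(url: outputURL, detail: .medium)])

        // Drain the output stream; return to the loop only when this job is done.
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Finished \(outputURL.lastPathComponent)")
            case .requestError(_, let error):
                print("Request failed:", error)
            default:
                break
            }
        }
    }
}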
Posted by Sadafiou. Last updated.
Post not yet marked as solved
0 Replies
79 Views
I put two objects in the same place and added a tap trigger to both objects. On tap, I added a USDZ animation. After that I added one behavior in which I hide one of the objects. When I tap that object, the tap works in Reality Composer but not in AR.
Posted by saurabhbu. Last updated.
Post not yet marked as solved
4 Replies
152 Views
I am building a simple SwiftUI augmented reality app that displays a 3D model chosen from a list of models.

ScrollView {
    LazyVGrid(columns: Array(repeating: item, count: 3), spacing: 3) {
        ForEach(items, id: \.self) { item in
            ZStack {
                NavigationLink(destination: ItemDetailsView(item: item)) {
                    ListItemCell(item: item, itemUUID: "\(item.uuid)", imageURL: item.imageURL)
                }
            }
            .aspectRatio(contentMode: .fill)
        }
    }
}
.navigationTitle("\(list.listName)")

When I tap on a ListItemCell, I load a UIViewRepresentable to display the AR model.

func makeUIView(context: Context) -> ARView {
    // Create arView for VR with background color
    let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: false)
    arView.environment.background = .color(UIConfiguration.realityKit3DpreviewBackgroundColor)

    // Set world anchor
    let worldAnchor = AnchorEntity(world: .zero)
    arView.scene.addAnchor(worldAnchor)
    // Load 3D model
    loadAsset(at: worldAnchor, context: context)
    // Setup camera
    setupCamera(on: worldAnchor)
    // Add spot lights
    setupLighting(on: worldAnchor)
    // Install gestures for 3D model
    setupGesturesForARView(arView, context: context)
    // Add camera to coordinator for interactions
    context.coordinator.perspectiveCamera = preview3DViewModel.perspectiveCamera
    return arView
}

Load asset:

private func loadAsset(at worldAnchor: AnchorEntity, context: Context) {
    var cancellable: AnyCancellable? = nil
    cancellable = Entity.loadAsync(named: "item")
        .sink(receiveCompletion: { completion in
            if case let .failure(error) = completion {
                print("Unable to load model: \(error)")
            }
            cancellable?.cancel()
        }) { model in
            if let item = model.findEntity(named: "rootModel") as? ItemEntity {
                // ... do stuff
                // Add item to ARView
                worldAnchor.addChild(item)
            }
            cancellable?.cancel()
        }
}

Everything works fine, but I have a memory leak issue. When I profile the app or check the debug memory graph, every time I load an asset using the Entity.loadAsync method it leaks a series of blocks, as per the screenshot. The memory footprint grows as I load more models. Whether I load models in real AR or non-AR, it leaks. In AR it eventually crashes when there are too many models and throws an EXC_BAD_ACCESS error on the @main function. I have tried the following to no effect:

- loading a simple Apple-provided .reality model instead of mine (to check whether something was wrong with my model)
- loading from a function in the coordinator
- loading from a function in a viewModel as an ObservedObject
- loading during the UIViewRepresentable init
- loading during the onAppear {} event in the SwiftUI view

Could you help? Thanks
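One mitigation worth trying, sketched below under the assumption that the retained Combine subscription and the ARView outliving the SwiftUI view contribute to the growth: keep the load cancellables in the Coordinator and tear everything down in dismantleUIView(_:coordinator:) so the view, its session and the loaded entities can be released. ModelPreview and modelName are illustrative names, not the poster's types.

import Combine
import RealityKit
import SwiftUI

struct ModelPreview: UIViewRepresentable {
    final class Coordinator {
        var cancellables = Set<AnyCancellable>()
    }

    let modelName: String   // placeholder asset name

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .nonAR,
                            automaticallyConfigureSession: false)

        let worldAnchor = AnchorEntity(world: .zero)
        arView.scene.addAnchor(worldAnchor)

        // Keep the subscription in the coordinator instead of capturing a
        // local cancellable inside its own sink closure.
        Entity.loadAsync(named: modelName)
            .sink(receiveCompletion: { completion in
                if case let .failure(error) = completion {
                    print("Unable to load model:", error)
                }
            }, receiveValue: { model in
                worldAnchor.addChild(model)
            })
            .store(in: &context.coordinator.cancellables)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    static func dismantleUIView(_ uiView: ARView, coordinator: Coordinator) {
        // Release subscriptions, entities and the session when SwiftUI tears
        // the view down, so repeated presentations do not accumulate memory.
        coordinator.cancellables.removeAll()
        uiView.scene.anchors.removeAll()
        uiView.session.pause()
    }
}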
Posted. Last updated.
Post not yet marked as solved
1 Reply
156 Views
Hi, in our workflow, 3D objects in glTF format are provided to the iOS app via API endpoints. Using a different file type is not an option here. Inside the iOS app, the glb file is converted to a .scn file using a third-party framework. The nodes in the converted .scn file look as expected and similar to the original glb file. In the second step, the .scn file should be converted to a USDZ file for use in RealityKit. As far as I know, there is only one way to convert .scn to USDZ inside an iOS app, which is an undocumented approach (assigning a .usdz extension to the URL and using write(to:options:delegate:progressHandler:)):

let path = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("model.usdz")
sceneView.scene.write(to: path)

In the converted USDZ file, all the one-sided (transparent) materials are changed to non-transparent materials. We tested on a car object, and all the window glass became opaque after converting. On the other hand, if we use the Reality Converter app on macOS to convert the glb file directly to USDZ, every node converts as expected and it works fine. Is there any workaround to fix that issue? Or any upcoming update that would let us use glb files in the app or successfully convert them to USDZ inside the iOS app? Thanks
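As a possible workaround (a sketch only; it does not fix the conversion itself), the transparency could be reassigned after loading the converted USDZ into RealityKit, for example by walking the entity tree and forcing a transparent blending mode on the affected materials. The fixed glassOpacity value and the "apply to every material" simplification are placeholders; a real app would match the glass materials or entities by name:

import RealityKit

// Workaround sketch: reapply transparency to materials that lost it during
// the scn-to-USDZ conversion.
func restoreTransparency(in entity: Entity, glassOpacity: Float = 0.3) {
    if let modelEntity = entity as? ModelEntity, var model = modelEntity.model {
        model.materials = model.materials.map { material -> any Material in
            guard var pbr = material as? PhysicallyBasedMaterial else { return material }
            pbr.blending = .transparent(opacity: .init(floatLiteral: glassOpacity))
            return pbr
        }
        modelEntity.model = model
    }
    for child in entity.children {
        restoreTransparency(in: child, glassOpacity: glassOpacity)
    }
}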
Posted. Last updated.
Post not yet marked as solved
1 Reply
112 Views
I created a scene in Reality Composer that includes lots of different models. I then load the scene and try to fetch the models and place them separately; the following is my UITapGestureRecognizer handler:

guard let loadModel = loadedScene.findEntity(named: selectedPlant.selectedModel) else { return }
loadModel.setPosition(SIMD3(0.0, 0.0, 0.0), relativeTo: nil)

My confusion is: once you use findEntity(named:) and place the model on the detected plane, it cannot be retrieved again. When I call this again to place a second model after placing the first one, findEntity(named:) returns nil. Does anyone know the mechanism behind this? I thought loading the scene would create a template in memory, but instead it seems to create a sort of list that pops out an entity on each call.
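One way around this, assuming the second lookup fails because the entity is moved out of the loaded scene when it is placed, is to treat the loaded scene purely as a template and add a clone for each placement instead of repositioning the original (placeCopy and its parameters are illustrative names):

import RealityKit

// Sketch only: `loadedScene` is the scene loaded from the .rcproject and
// `anchor` is whatever AnchorEntity the placement should attach to. Cloning
// keeps the loaded scene intact, so findEntity(named:) keeps returning the
// original template on later calls.
func placeCopy(named modelName: String,
               from loadedScene: Entity,
               on anchor: AnchorEntity,
               at position: SIMD3<Float>) {
    guard let template = loadedScene.findEntity(named: modelName) else { return }
    let copy = template.clone(recursive: true)
    anchor.addChild(copy)
    copy.setPosition(position, relativeTo: anchor)
}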
Posted by daiyukun. Last updated.
Post not yet marked as solved
4 Replies
146 Views
Hello, on iOS 16, when I retrieve an existing material from a model entity and update its blending property to .transparent(opacity: …), the color or baseColor texture gets removed after reassigning the updated material. My use case is that I want to fade in a ModelEntity through a custom System, and therefore I need to repeatedly reassign the opacity value. I've tested this with UnlitMaterial and PhysicallyBasedMaterial; both suffer from this issue. On iOS 15 this works as expected. Please let me know if there is any workaround, as this seems like a major regression to me, and ideally I need this to work once iOS 16 is released to the public. The radar number, including a sample project, is FB11420976. Thank you!
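A possible interim workaround, sketched below with no guarantee that it sidesteps the regression: cache the original baseColor (tint and texture) once, and explicitly reassign it together with the new blending value on every fade step. FadeState and setOpacity are illustrative names, and the sketch assumes a single cached base color per entity:

import RealityKit

// Sketch only: restore the cached color/texture alongside each opacity change,
// in case updating the blending alone drops it.
struct FadeState {
    var baseColor: PhysicallyBasedMaterial.BaseColor
}

func setOpacity(_ opacity: Float, on modelEntity: ModelEntity, cached state: FadeState) {
    guard var model = modelEntity.model else { return }
    model.materials = model.materials.map { material -> any Material in
        guard var pbr = material as? PhysicallyBasedMaterial else { return material }
        pbr.blending = .transparent(opacity: .init(floatLiteral: opacity))
        pbr.baseColor = state.baseColor   // reassign the color/texture explicitly
        return pbr
    }
    modelEntity.model = model
}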
Posted. Last updated.
Post marked as solved
5 Replies
2.5k Views
Hello. Are there any tutorials or guides to follow on developing AR apps? From what I see in the documentation it is mostly a reference. As someone new to developing AR apps for iOS I was wondering if there is some documentation that gives an overview of the general approach and structure of AR apps. Thanks, Val
Posted by vlttnv. Last updated.
Post not yet marked as solved
1 Reply
193 Views
Is it possible to replace the model of one ModelEntity with another ModelEntity's model without breaking the animations? I have a ModelEntity that I would like to apply animations to, but this ModelEntity is dynamically generated without animations (baseEntity). So for each animation I load an individual usdz using ModelEntity.loadModel (animEntity). I add the animEntity as a child of the parent entity, then I apply the materials from the baseEntity onto the animEntity:

animEntity.model?.materials = baseMaterials
parentEntity.addChild(animEntity, preservingWorldTransform: false)

This does work, but it causes texture and model issues because the baseEntity has a completely different mesh than the animEntity. Is there a better way to accomplish this? Other implementations I have tried:

animEntity.model?.mesh = MeshResource.generate(from: baseModel.mesh.contents)
animEntity.model!.mesh.replace(with: baseEntity.model!.mesh.contents)
animEntity.model?.mesh.replaceAsync(with: baseModel.mesh.contents)
animEntity.components[ModelComponent.self] = baseEntity.components[ModelComponent.self]
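One alternative worth considering, assuming the generated base model and the animation usdz files share the same joint hierarchy: rather than swapping meshes or materials, take the AnimationResources from the loaded usdz and play them directly on the base entity.

import RealityKit

// Sketch only: this works when baseEntity's skeleton matches the joint names
// the animations in animEntity were authored against.
func play(animationsFrom animEntity: Entity, on baseEntity: ModelEntity) {
    for animation in animEntity.availableAnimations {
        baseEntity.playAnimation(animation.repeat(duration: .infinity),
                                 transitionDuration: 0.25,
                                 startsPaused: false)
    }
}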
Posted. Last updated.
Post not yet marked as solved
3 Replies
644 Views
I am just starting to learn AR, thanks for the help. I am trying to anchor large objects to a certain location in an open area. I tried anchoring with an image and with an object in Reality Composer, but after anchoring, the objects do not stay in the same place as I move. ARGeoTrackingConfiguration is not available in my region. If I scan the world around me and then relocalize against it, a rainy day or the slightest change to the area (for example, mowing the lawn) means the terrain will no longer be recognized. What do you advise?
Posted. Last updated.
Post not yet marked as solved
1 Reply
151 Views
Hey there, as I'm learning how to use my 3D modeling skills to create AR experiences, I've been wondering about WebAR for quite some time now. I want to create AR Quick Look assets that can be summoned by just scanning a QR code or clicking a button on a website, with the least friction possible. My question: can I make AR Quick Look load the 3D model using only a specific URL or HTML? Thanks for your help!
Posted. Last updated.
Post marked as solved
5 Replies
305 Views
I was using Reality Converter on my mid-2012 MacBook Pro 13" with Catalina 10.15.7. Recently I had to format my drive and reinstall the OS, and now when I download Reality Converter from Apple and try to open it, it says that the app requires macOS 12.0. The latest version of macOS I can install is Catalina 10.15.7, and I know there is a version of Reality Converter that runs on it because I was using it before. Is there a way to download a previous version of Reality Converter that runs on Catalina? Or a way to open the one I downloaded despite not having macOS 12.0? Thanks!
Posted by gabicool. Last updated.