RealityKit


Simulate and render 3D content for use in your augmented reality apps using RealityKit.


Posts under RealityKit tag

248 Posts
Post not yet marked as solved
0 Replies
99 Views
I have an ARView using RealityKit, and I reuse it. I have an Entity with animations stored in a .usdz file. I play the animations with the following code:

    hummingbird = try! Entity.load(named: "bird")
    for animation in hummingbird.availableAnimations {
        hummingbird.playAnimation(animation.repeat(duration: 120.0))
    }

However, I noticed there is a memory leak. Using Instruments, I found it was at the playAnimation line. I have no clue how to fix this. When leaving the ARView I do this:

    hummingbird.stopAllAnimations(recursive: true)
    hummingbird = nil

I thought that would be enough, but it isn't. In the image there are two instances, from running the same ARView twice. Basically my setup is startVC → ARView → backToStartVC → backToSameARView (with a new configuration), and so on. Any ideas would be great, and if you have any questions or need clarification, please ask.
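A common pattern for fully tearing down a reused ARView is to stop animations, detach the entity, clear the scene, and pause the session before dismissing. A minimal sketch, assuming hummingbird and arView are properties of the hosting view controller (the helper name is hypothetical):

```swift
import ARKit
import RealityKit

// Hypothetical teardown helper, called when leaving the AR screen.
// Stopping animations alone may not release resources; removing the
// entity from its anchor and clearing the scene lets RealityKit drop
// its own references as well.
func tearDownARView() {
    hummingbird?.stopAllAnimations(recursive: true)
    hummingbird?.removeFromParent()      // detach from the anchor
    hummingbird = nil                    // drop the last strong reference
    arView.scene.anchors.removeAll()     // remove all anchors and children
    arView.session.pause()               // stop camera and tracking
}
```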
Posted by bhavin84. Last updated.
Post not yet marked as solved
1 Reply
160 Views
Hi, I get nicely creased USD models from my 3D designer. They are super small, since higher-resolution meshes can be generated from them and even the normals can be derived from the crease information. My question is: does RealityKit automatically …
• … generate the normals from the crease information?
• … subdivide these models (do I have to, or can I, set a subdivision level)?
• … subdivide on the GPU as the model gets closer to the camera, or should I create multiple levels of detail myself?
All the best, Christoph
Posted. Last updated.
Post not yet marked as solved
1 Reply
206 Views
I updated my iPhone 12 Pro to the iOS 16 beta, and the motion capture feature in ARKit seems to have stopped functioning. I tried my own custom app (MoCáp) and the BodyDetection sample code from the Apple developer site, and neither works. Does anyone have the same issue?
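One quick sanity check when body tracking stops working after an OS update is to verify support at runtime before running the configuration; a minimal sketch (the function name is hypothetical):

```swift
import ARKit
import RealityKit

// Body tracking requires an A12 Bionic or later and OS-level support;
// checking at runtime distinguishes "unsupported" from "broken".
func startBodyTracking(in arView: ARView) {
    guard ARBodyTrackingConfiguration.isSupported else {
        print("Body tracking is not supported on this device / OS build")
        return
    }
    arView.session.run(ARBodyTrackingConfiguration())
}
```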
Posted by paperli. Last updated.
Post not yet marked as solved
0 Replies
125 Views
Hi, I noticed that the face painting sample creates CGImages from PencilKit and then generates textures from those. I do something similar in my app, which currently uses SceneKit and which I would like to port to RealityKit. In my code I draw into a CGImage (shapes, masked images and shadows) and then convert that CGImage to a texture. I would like to optimize this, as there is very noticeable latency in the process. Would it help if I made small CGImages (for the local changes), converted those to textures, and used DrawableQueues to lay them over the existing images? Or what is the most efficient way to get my changes onto a texture in near real time? All the best, Christoph
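For reference, the straightforward (but comparatively slow) path is to regenerate a TextureResource from the CGImage each time; a minimal sketch, assuming cgImage holds the freshly drawn content:

```swift
import RealityKit

// Regenerating a full TextureResource per update is simple but incurs
// the latency described above; TextureResource.DrawableQueue avoids
// the CGImage round trip by rendering into Metal drawables directly.
let texture = try TextureResource.generate(from: cgImage,
                                           options: .init(semantic: .color))
var material = UnlitMaterial()
material.color = .init(texture: .init(texture))
```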
Posted. Last updated.
Post not yet marked as solved
2 Replies
278 Views
I'm noticing about a 450 MB memory footprint when loading a simple 2 MB USDZ model. To rule out any misuse of the frameworks on my part, I built a basic RealityKit app from Xcode's Augmented Reality App template, with no code changes at all. I'm still seeing 450 MB in the Xcode gauges (so in debug mode). Looking at the memgraph, the IOAccelerator and IOSurface regions have 194 MB and 131 MB of dirty memory respectively. Is this all camera-related memory? In the hope of reducing compute and memory, I tried disabling various rendering options on the ARView as follows:

    arView.renderOptions = [
        .disableHDR,
        .disableDepthOfField,
        .disableMotionBlur,
        .disableFaceMesh,
        .disablePersonOcclusion,
        .disableCameraGrain,
        .disableAREnvironmentLighting
    ]

This brought it down to 300 MB, which is still quite a lot. When I configure ARView.cameraMode to be nonAR, it's still 113 MB. I'm running this on an iPhone 13 Pro Max, which could explain some of the large allocations, but I would still like to find opportunities to reduce the footprint. When I use QLPreviewController, the same ~2 MB model takes only 27 MB in the Xcode gauges. Any ideas on reducing this memory footprint while using ARKit?
Posted by srnlwm. Last updated.
Post not yet marked as solved
2 Replies
235 Views
Hi, I'm working with RealityKit and Reality Composer. When I build a scene in Reality Composer, add the experience to Xcode and try the app out, the 3D model appears fine. But when I place my hand over it, the app doesn't recognise that my hand is in front of the model, which shows through. When I place an object in front of it, it likewise doesn't recognise whether the object is in front of or behind the model. How do I fix this?
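Hand and object occlusion are opt-in features of the session configuration; a minimal sketch of enabling person segmentation and scene-mesh occlusion on an existing arView (availability of each option depends on the device):

```swift
import ARKit
import RealityKit

let configuration = ARWorldTrackingConfiguration()

// People (e.g. your hand) occluding virtual content:
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

// Real-world objects occluding virtual content (LiDAR devices only):
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
}

arView.session.run(configuration)
```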
Posted by dev365b. Last updated.
Post marked as solved
1 Reply
165 Views
Wouldn't it be great to port RealityKit's Object Capture to the iPad Pro with M1 and capture objects on it? I think there is no info related to this, but am I missing something? Is there a plan to bring it to iOS soon? Best!
Posted by heltena. Last updated.
Post not yet marked as solved
1 Reply
192 Views
I'm using the newest Geospatial API from ARCore, and trying to build with SwiftUI and RealityKit. I have the SDK and API key set up properly, and the coordinates and accuracy info are updated properly every frame. Whenever I use the GARSession.createAnchor method, it returns a GARAnchor. I use the GARAnchor's transform property to create an ARAnchor(transform: GARAnchor.transform), then create an AnchorEntity from that ARAnchor and add the AnchorEntity to ARView.scene. However, the model never shows up. I have checked coordinates and altitude, still no luck at all. Could anyone help me out? Thank you so much.

    do {
        let garAnchor = try parent.garSession.createAnchor(
            coordinate: CLLocationCoordinate2D(latitude: xx.xxxxxxxxx, longitude: xx.xxxxxxx),
            altitude: xx,
            eastUpSouthQAnchor: simd_quatf(ix: 0, iy: 0, iz: 0, r: 0))
        if garAnchor.hasValidTransform && garAnchor.trackingState == .tracking {
            let arAnchor = ARAnchor(transform: garAnchor.transform)
            let anchorEntity = AnchorEntity(anchor: arAnchor)
            let mesh = MeshResource.generateSphere(radius: 2)
            let material = SimpleMaterial(color: .red, isMetallic: true)
            let sphere = ModelEntity(mesh: mesh, materials: [material])
            anchorEntity.addChild(sphere)
            parent.arView.scene.addAnchor(anchorEntity)
            print("Anchor has valid transform, and anchor is tracking")
        } else {
            print("Anchor has invalid transform")
        }
    } catch {
        print("Add garAnchor failed: \(error.localizedDescription)")
    }
Posted. Last updated.
Post not yet marked as solved
1 Reply
164 Views
Although the model disappeared, the memory did not decrease. How do I free the memory occupied by the model?

    // MARK: === viewDidLoad
    override func viewDidLoad() {
        super.viewDidLoad()

        arView.renderOptions = [.disableMotionBlur, .disableDepthOfField, .disableCameraGrain, .disableHDR]
        arView.environment.sceneUnderstanding.options.insert(.occlusion)

        rootAnchor = AnchorEntity(plane: .horizontal)
        arView.scene.addAnchor(rootAnchor)

        var cancellable: AnyCancellable? = nil
        cancellable = Entity.loadAsync(contentsOf: Bundle.main.url(forResource: "vyygabbj_afr", withExtension: "usdz")!)
            .sink(receiveCompletion: { error in
                DispatchQueue.main.async {
                    cancellable?.cancel()
                    cancellable = nil
                }
            }, receiveValue: { [weak self] ey in
                guard let self = self else { return }
                self.modelEy = ModelEntity()
                self.modelEy.addChild(ey)
                self.rootAnchor.addChild(self.modelEy)
                ey.availableAnimations.forEach {
                    ey.playAnimation($0.repeat())
                }
                DispatchQueue.main.async {
                    cancellable?.cancel()
                    cancellable = nil
                }
            })
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        modelEy.removeFromParent()
    }
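Note that removeFromParent() only detaches the entity; as long as the view controller keeps a strong reference in modelEy, the loaded mesh and texture resources stay alive. A minimal sketch of releasing them as well, assuming modelEy is declared as an optional property:

```swift
import RealityKit
import UIKit

// Detach the entity AND drop the last strong reference so RealityKit
// can release the resources behind it.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    modelEy?.stopAllAnimations(recursive: true)
    modelEy?.removeFromParent()
    modelEy = nil
}
```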
Posted by caopengxu. Last updated.
Post not yet marked as solved
0 Replies
117 Views
I use Entity.loadAsync to load the USDZ. The camera is stuck for a moment while the model loads.

    var cancellable: AnyCancellable? = nil
    cancellable = Entity.loadAsync(contentsOf: Bundle.main.url(forResource: "vyygabbj_afr", withExtension: "usdz")!)
        .sink(receiveCompletion: { error in
            DispatchQueue.main.async {
                cancellable?.cancel()
                cancellable = nil
            }
        }, receiveValue: { [weak self] ey in
            guard let self = self else { return }
            self.modelEy = ModelEntity()
            self.modelEy.addChild(ey)
            self.rootAnchor.addChild(self.modelEy)
            ey.availableAnimations.forEach {
                ey.playAnimation($0.repeat())
            }
            DispatchQueue.main.async {
                cancellable?.cancel()
                cancellable = nil
            }
        })
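For what it's worth, the subscription handling in the snippet above can be simplified by storing the cancellable in a property instead of cancelling it from inside its own closures, and by delivering the result explicitly on the main queue; a sketch under those assumptions (rootAnchor and loadCancellable are assumed properties, the function name is hypothetical):

```swift
import Combine
import RealityKit

var loadCancellable: AnyCancellable?

// Loads a USDZ and attaches it on the main queue once ready.
// The subscription lives in a property, so it stays alive for the
// duration of the load and is released when replaced or nilled.
func loadModel(named name: String) {
    guard let url = Bundle.main.url(forResource: name, withExtension: "usdz") else { return }
    loadCancellable = Entity.loadAsync(contentsOf: url)
        .receive(on: DispatchQueue.main)
        .sink(receiveCompletion: { completion in
            if case .failure(let error) = completion {
                print("Load failed: \(error)")
            }
        }, receiveValue: { [weak self] entity in
            self?.rootAnchor.addChild(entity)
        })
}
```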
Posted by caopengxu. Last updated.
Post marked as solved
1 Reply
843 Views
Hello! When I create the AnchorEntity in the following way, I see a shadow cast by the object:

    let anchorEntity = AnchorEntity(plane: .any)
    anchorEntity.addChild(parentEntity)

But in my scenario it is necessary to work with ARAnchors in the session (I am implementing a multi-user session and need to operate on CollaborationData, which, as I understand it, is currently impossible without binding to ARAnchors), so I am trying to write the code like this:

    let anchorEntity = AnchorEntity(plane: .any)
    anchorEntity.anchoring = AnchoringComponent(anchor)
    anchorEntity.addChild(parentEntity)

or, more simply:

    let anchorEntity = AnchorEntity(anchor: anchor)
    anchorEntity.addChild(parentEntity)

Then there is no shadow under the placed object. I figured that the bottom of my object (AnchorEntity) might be placed below the level of the detected plane, and tried to add a few centimeters on the Y axis when calculating the placement point:

    let newTranslate = SIMD4<Float>(x: transform.columns.3.x,
                                    y: transform.columns.3.y + 0.10,
                                    z: transform.columns.3.z,
                                    w: transform.columns.3.w)
    let anchorTransform = simd_float4x4(columns: (transform.columns.0, transform.columns.1, transform.columns.2, newTranslate))
    let anchor = ARAnchor(name: arAnchorName, transform: anchorTransform)
    self.arSession.add(anchor: anchor)

This does lift my model up 10 centimeters when placed, but there is still no shadow. Thanks for any help!
Posted. Last updated.
Post not yet marked as solved
1 Reply
385 Views
I'm aware that you can create a simple scene in Reality Composer, import it into Xcode, and then add additional 3D objects with OcclusionMaterial applied to them using RealityKit. However, I would like to accomplish this the other way around, i.e. export a .reality file (or USDZ?) from Xcode with OcclusionMaterial applied and bring it into Reality Composer. Is that possible? If so, what's the workflow?
Posted by mixao. Last updated.
Post marked as solved
1 Reply
174 Views
I'm loading a USDZ model using Entity.loadAsync(contentsOf:). I'd like to get the dimensions of the model, but I find that visualBounds(relativeTo: nil).extents returns dimensions larger than the actual ones, while I see the correct dimensions when viewing the USDZ in Blender or when instantiating it as an MDLAsset(url:). What is the method to get the actual dimensions from an Entity? Thanks
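For comparison, a minimal sketch of the two bounds queries, assuming entity is the loaded model; the relativeTo parameter picks the coordinate space of the result, which can explain discrepancies when the entity or its ancestors carry a scale:

```swift
import RealityKit

// visualBounds(relativeTo:) returns the bounds in the coordinate
// space of the given entity; passing nil means world space, so any
// scale on the entity or its ancestors is baked into the extents.
let worldExtents = entity.visualBounds(relativeTo: nil).extents
let localExtents = entity.visualBounds(relativeTo: entity).extents
print("world:", worldExtents, "local:", localExtents)
```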
Posted by spiff. Last updated.
Post not yet marked as solved
2 Replies
1.7k Views
Has anyone been able to access multiple animations stored in a USDZ file through the RealityKit APIs?
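For reference, the animations a loaded USDZ exposes can be listed and played via availableAnimations; a minimal sketch (entity is assumed to be an Entity loaded from the USDZ; whether multiple named clips survive export depends on the authoring tool):

```swift
import RealityKit

// Each AnimationResource in availableAnimations corresponds to an
// animation baked into the USDZ file.
for animation in entity.availableAnimations {
    print("animation:", animation.name ?? "<unnamed>")
    entity.playAnimation(animation.repeat())
}
```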
Posted. Last updated.