Post not yet marked as solved
Hi everyone, is it possible to use the ARKit Replay Data option for XCUITests? If not, this would be a great feature for automation.
Thanks!
Post not yet marked as solved
I have to create 80 3D models every day, and this takes a lot of time.
It would be very convenient if I could generate multiple 3D models in a single run, so that they could all be built while I sleep.
Is there a way to do this?
If so, please tell me how.
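If these are photogrammetry (Object Capture) models, one approach is to queue several PhotogrammetrySession runs and let them execute unattended overnight. A minimal sketch (macOS Object Capture API; the folder URLs, output directory, and detail level are placeholder assumptions):

```swift
import RealityKit

// Sketch: run PhotogrammetrySession once per capture folder, sequentially.
// Each folder is assumed to contain one image set; names are placeholders.
func buildModels(from captureFolders: [URL], outputDirectory: URL) async throws {
    for folder in captureFolders {
        let session = try PhotogrammetrySession(input: folder)
        let output = outputDirectory
            .appendingPathComponent(folder.lastPathComponent)
            .appendingPathExtension("usdz")
        try session.process(requests: [.modelFile(url: output, detail: .medium)])
        // Wait for this session to finish before starting the next one.
        for try await message in session.outputs {
            if case .processingComplete = message { break }
        }
    }
}
```

Running the sessions one at a time keeps peak memory bounded; the loop simply resumes with the next folder when a reconstruction completes.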
Post not yet marked as solved
I have an ARView using RealityKit, and I reuse it. I have an Entity with animations (stored in a .usdz file). I play the animations with the following code:
hummingbird = try! Entity.load(named: "bird")
for animation in hummingbird.availableAnimations {
    hummingbird.playAnimation(animation.repeat(duration: 120.0))
}
However, I noticed there is a memory leak; using Instruments I traced it to the playAnimation line.
I have no clue how to fix this. When tearing down the ARView I do this:
hummingbird.stopAllAnimations(recursive: true)
hummingbird = nil
I thought that should be enough. But it isn't.
In the image there are two instances, from running the same ARView twice. Basically my setup is
startVC -> ARView -> back to startVC -> back to the same ARView (with a new configuration), and so on.
Any ideas would be great, and if you have any questions or need clarification, please ask.
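Not a definitive fix, but one thing worth trying: keep the AnimationPlaybackController that playAnimation(_:) returns and stop each controller explicitly before releasing the entity, so nothing holds the animation resources alive across ARView reuse. A sketch, assuming hummingbird and the controller array live on the view controller:

```swift
import RealityKit

// Assumption: these properties belong to the view controller that owns the ARView.
var animationControllers: [AnimationPlaybackController] = []

func startAnimations(on bird: Entity) {
    for animation in bird.availableAnimations {
        // Keep the controller so the animation can be stopped deterministically.
        let controller = bird.playAnimation(animation.repeat(duration: 120.0))
        animationControllers.append(controller)
    }
}

func tearDownBird() {
    animationControllers.forEach { $0.stop() }
    animationControllers.removeAll()
    hummingbird?.removeFromParent()   // detach from the scene graph first
    hummingbird = nil
}
```

Comparing two Instruments runs, with and without the explicit stop() calls, should show whether the controllers were what kept the instances alive.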
Post not yet marked as solved
It is possible with SceneKit, but I haven’t found any way to do it in RealityKit.
Post not yet marked as solved
Hi,
I get nicely creased USD models from my 3D designer. They are super small, since higher-resolution meshes, and even the normals, can be derived from them.
Question is: Does RealityKit automatically …
• … generate the normals from the crease information?
• … subdivide for these models (do I have to/can I set a subdivision level)?
• … generate subdivision on the GPU as the model gets closer to the camera, or should I force-create multiple detail levels myself?
All the best
Christoph
Post not yet marked as solved
I updated my iPhone 12 Pro to the iOS 16 beta, and the motion capture feature in ARKit seems to have stopped functioning. I tried my own custom app (MoCáp) and the BodyDetection sample code from the Apple developer site, and neither works. Does anyone have the same issue?
Post not yet marked as solved
Hi,
I noticed that the face painting sample creates CGImages from the PencilKit drawing and then generates textures from those. I do something similar in my app, which currently uses SceneKit and which I would like to port to RealityKit.
In my code I draw into a CGImage (draw shape, masked image and shadows) and then convert that CGImage to a texture.
I would like to optimize this, as there is very noticeable latency in the process.
Would it help if I made small CGImages (for the local changes only), converted those to textures, and used a DrawableQueue to lay them over the existing images?
Or what is the most efficient way to get my changes onto a texture in near real time?
All the best
Christoph
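For reference, one way to get a CGImage into a RealityKit texture is TextureResource.generate(from:options:); for frequently changing content, TextureResource.DrawableQueue (iOS 15+) avoids regenerating the whole resource each frame. A sketch of the CGImage path, assuming `cgImage` and `modelEntity` already exist in your code:

```swift
import RealityKit
import CoreGraphics

// Sketch: TextureResource.generate does a full (and relatively slow)
// conversion, so it suits occasional updates; for per-frame painting,
// TextureResource.DrawableQueue lets you render into the texture's
// Metal drawables directly.
func applyTexture(_ cgImage: CGImage, to modelEntity: ModelEntity) throws {
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(tint: .white, texture: .init(texture))
    modelEntity.model?.materials = [material]
}
```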
Post not yet marked as solved
I'm noticing about 450MB of memory footprint when loading a simple 2MB USDZ model.
To rule out any misuse of the frameworks on my part, I built a basic RealityKit app from Xcode's Augmented Reality App template, with no code changes at all.
I'm still seeing 450MB in Xcode gauges (so in Debug mode).
Looking at the memgraph, I see the IOAccelerator and IOSurface regions have 194MB and 131MB of dirty memory respectively.
Is this all camera-related memory?
In the hopes of reducing compute & memory, I tried disabling various rendering options on ARView as follows:
arView.renderOptions = [
    .disableHDR,
    .disableDepthOfField,
    .disableMotionBlur,
    .disableFaceMesh,
    .disablePersonOcclusion,
    .disableCameraGrain,
    .disableAREnvironmentLighting
]
This brought it down to 300MB, which is still quite a lot.
When I configure ARView.cameraMode to be nonAR, it's still 113MB.
I'm running this on an iPhone 13 Pro Max, which could explain some of the large allocations, but I would still like to find opportunities to reduce the footprint.
When I use QLPreviewController, the same model (~2MB) takes only 27MB in Xcode gauges.
Any ideas on reducing this memory footprint while using ARKit?
Post not yet marked as solved
Hi, I'm working with RealityKit and Reality Composer. When I build a scene in Reality Composer, place the experience in Xcode, and try the app out, the 3D model appears fine. But when I place my hand over it, the app doesn't recognise that my hand is in front of it, and the model shows through. When I place an object in front of it, it likewise doesn't recognise whether the object is in front of or behind the model. How do I fix this?
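For what it's worth, occlusion is opt-in: people occlusion comes from ARKit frame semantics, and object occlusion needs the reconstructed scene mesh on LiDAR devices. A sketch of how this is typically enabled on an already-configured ARView:

```swift
import ARKit
import RealityKit

// Sketch: opt in to people occlusion (A12 or later) and scene-mesh
// occlusion (LiDAR devices), guarded by capability checks.
func enableOcclusion(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
    }
    arView.session.run(config)
}
```

Without the scene mesh (non-LiDAR devices), hands can still be handled by the person-segmentation path, but arbitrary objects in front of the model will not occlude it.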
Post not yet marked as solved
Can you place an Augmented Reality Anchor in a private apple indoor map?
https://register.apple.com/resources/indoor/program/indoor_maps
https://developer.apple.com/augmented-reality/tools/
Wouldn't it be great to port RealityKit's Object Capture to the iPad Pro with M1 and capture objects directly on it?
I think there is no info about this, but am I missing something? Is there a plan to bring it to iOS soon?
Best!
Post not yet marked as solved
I'm using the newest Geospatial API from ARCore and trying to build with it using SwiftUI and RealityKit. I have the SDK and API key set up properly, and all coordinate and accuracy info is updated properly every frame. Whenever I call GARSession.createAnchor, it returns a GARAnchor. I use the GARAnchor's transform property to create an ARAnchor(transform: garAnchor.transform), then create an AnchorEntity from that ARAnchor and add the AnchorEntity to ARView.scene. However, the model never shows up. I have checked the coordinates and altitude, and still no luck at all. Could anyone help me out? Thank you so much.
do {
    // Note: simd_quatf(ix: 0, iy: 0, iz: 0, r: 0) is a zero quaternion;
    // the identity rotation is simd_quatf(ix: 0, iy: 0, iz: 0, r: 1).
    let garAnchor = try parent.garSession.createAnchor(
        coordinate: CLLocationCoordinate2D(latitude: xx.xxxxxxxxx, longitude: xx.xxxxxxx),
        altitude: xx,
        eastUpSouthQAnchor: simd_quatf(ix: 0, iy: 0, iz: 0, r: 1))
    if garAnchor.hasValidTransform && garAnchor.trackingState == .tracking {
        let arAnchor = ARAnchor(transform: garAnchor.transform)
        let anchorEntity = AnchorEntity(anchor: arAnchor)
        let mesh = MeshResource.generateSphere(radius: 2)
        let material = SimpleMaterial(color: .red, isMetallic: true)
        let sphere = ModelEntity(mesh: mesh, materials: [material])
        anchorEntity.addChild(sphere)
        parent.arView.scene.addAnchor(anchorEntity)
        print("Anchor has a valid transform and is tracking")
    } else {
        print("Anchor has an invalid transform")
    }
} catch {
    print("Adding garAnchor failed: \(error.localizedDescription)")
}
Post not yet marked as solved
Although the model disappeared, the memory usage did not decrease.
How can I free the memory occupied by the model?
// MARK: - viewDidLoad
override func viewDidLoad() {
    super.viewDidLoad()
    arView.renderOptions = [.disableMotionBlur, .disableDepthOfField, .disableCameraGrain, .disableHDR]
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    rootAnchor = AnchorEntity(plane: .horizontal)
    arView.scene.addAnchor(rootAnchor)
    var cancellable: AnyCancellable? = nil
    cancellable = Entity.loadAsync(contentsOf: Bundle.main.url(forResource: "vyygabbj_afr", withExtension: "usdz")!)
        .sink(receiveCompletion: { completion in
            DispatchQueue.main.async {
                cancellable?.cancel()
                cancellable = nil
            }
        }, receiveValue: { [weak self] ey in
            guard let self = self else { return }
            self.modelEy = ModelEntity()
            self.modelEy.addChild(ey)
            self.rootAnchor.addChild(self.modelEy)
            ey.availableAnimations.forEach {
                ey.playAnimation($0.repeat())
            }
            DispatchQueue.main.async {
                cancellable?.cancel()
                cancellable = nil
            }
        })
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    modelEy.removeFromParent()
}
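A slightly more thorough teardown sketch, in case other references keep the model alive; note that RealityKit may still hold cached resources internally, so memory will not necessarily drop immediately. This assumes modelEy is declared as an optional:

```swift
// Sketch: stop animations and drop every strong reference to the entity,
// not just its scene-graph attachment.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    modelEy?.stopAllAnimations(recursive: true)
    modelEy?.removeFromParent()
    modelEy = nil
}
```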
Post not yet marked as solved
I use Entity.loadAsync to load a USDZ model.
The camera feed freezes for a moment while the model is loading.
var cancellable: AnyCancellable? = nil
cancellable = Entity.loadAsync(contentsOf: Bundle.main.url(forResource: "vyygabbj_afr", withExtension: "usdz")!)
    .sink(receiveCompletion: { completion in
        DispatchQueue.main.async {
            cancellable?.cancel()
            cancellable = nil
        }
    }, receiveValue: { [weak self] ey in
        guard let self = self else { return }
        self.modelEy = ModelEntity()
        self.modelEy.addChild(ey)
        self.rootAnchor.addChild(self.modelEy)
        ey.availableAnimations.forEach {
            ey.playAnimation($0.repeat())
        }
        DispatchQueue.main.async {
            cancellable?.cancel()
            cancellable = nil
        }
    })
Hello!
When I create the AnchorEntity in the following way, I see a shadow cast by the object:
let anchorEntity = AnchorEntity(plane: .any)
anchorEntity.addChild(parentEntity)
But in my scenario I need to work with ARAnchors in the session (I am implementing a multi-user session and need to operate on CollaborationData, which, as I understand it, is currently impossible without binding to ARAnchors), so I am trying to write the code like this:
let anchorEntity = AnchorEntity(plane: .any)
anchorEntity.anchoring = AnchoringComponent(anchor)
anchorEntity.addChild(parentEntity)
or, more simply:
let anchorEntity = AnchorEntity(anchor: anchor)
anchorEntity.addChild(parentEntity)
then there is no shadow under the placed object.
I figured that the bottom of my object (AnchorEntity) might be sitting below the level of the detected plane, so I tried to add a few centimeters on the Y axis when calculating the placement point:
let newTranslate = SIMD4<Float>(x: transform.columns.3.x, y: transform.columns.3.y + 0.10, z: transform.columns.3.z, w: transform.columns.3.w)
let anchorTransform = simd_float4x4(columns: (transform.columns.0, transform.columns.1, transform.columns.2, newTranslate))
let anchor = ARAnchor(name: arAnchorName, transform: anchorTransform)
self.arSession.add(anchor: anchor)
This actually lifts my model up 10 centimeters when placed, but there is still no shadow.
Thanks for any help!
Post not yet marked as solved
I'm aware that you can create a simple scene in Reality Composer, import it into Xcode, and then add additional 3D objects with OcclusionMaterial applied to them using RealityKit.
However, I would like to accomplish the opposite, i.e. exporting a .reality file (or USDZ?) with OcclusionMaterial applied from Xcode into Reality Composer. Is that possible? If so, what's the workflow?
I’m loading a USDZ model using Entity.loadAsync(contentsOf:).
I’d like to get the dimensions of the model, and I find that visualBounds(relativeTo: nil).extents returns dimensions larger than the actual ones, while I see the correct dimensions when viewing the USDZ in Blender or when instantiating it as an MDLAsset(url:). What is the right way to get the actual dimensions from an Entity?
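Since MDLAsset reports the expected size, one workaround sketch is to read the bounding box from ModelIO directly (modelURL stands in for your USDZ URL):

```swift
import ModelIO

// Sketch: compute model dimensions from the ModelIO bounding box.
let asset = MDLAsset(url: modelURL)
let bounds = asset.boundingBox                        // MDLAxisAlignedBoundingBox
let extents = bounds.maxBounds - bounds.minBounds     // width, height, depth
```

This sidesteps whatever extra geometry RealityKit includes in visualBounds, at the cost of parsing the asset a second time.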
Thanks
Post not yet marked as solved
Do I need an iPhone Pro instead of a regular iPhone for Object Capture?
Does Object Capture use the LiDAR sensor?
What is the best camera for Object Capture?
Post not yet marked as solved
Has anyone been able to access multiple animations stored in a USDZ file through the RealityKit APIs?