Post not yet marked as solved
I’m implementing my first Component Entity System and am having an issue. I have a requirement that some component properties be dynamic. I do not want to create a subclass that conforms to HasExampleComponent, so this was my approach. My issue is that even though the entity contains the property I can’t cast it to HasExampleComponent.
When I create the entity I set the component like this:
entity.components[ExampleComponent.self] = .init()
I'd appreciate a template for an ECS with component properties that can be updated from the app.
Thanks
public struct ExampleComponent: Component {
    public var value = 0
}
public protocol HasExampleComponent: Entity {
    var value: Int { get set }
}
public class ExampleSystem: System {
    private static let query = EntityQuery(where: .has(ExampleComponent.self))
    public required init(scene: Scene) {}
    public func update(context: SceneUpdateContext) {
        context.scene.performQuery(Self.query).forEach { entity in
            // This won't work because entity doesn't conform to HasExampleComponent
            entity.value += 1
        }
    }
}
extension Entity {
    @available(iOS 15.0, *)
    public var value: Int {
        get { components[ExampleComponent.self]?.value ?? 0 }
        set { components[ExampleComponent.self]?.value = newValue }
    }
}
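As a sketch of an alternative that avoids the protocol cast entirely, the system can read and write the component itself through the `components` collection (this reuses the `ExampleComponent` above; `ExampleSystem2` is a hypothetical name, and since components are value types the mutated copy must be written back):

```swift
import RealityKit

public class ExampleSystem2: System {
    private static let query = EntityQuery(where: .has(ExampleComponent.self))
    public required init(scene: Scene) {}

    public func update(context: SceneUpdateContext) {
        context.scene.performQuery(Self.query).forEach { entity in
            // Components are value types: read, mutate, then write back.
            guard var component = entity.components[ExampleComponent.self] else { return }
            component.value += 1
            entity.components[ExampleComponent.self] = component
        }
    }
}
```

With this pattern no `HasExampleComponent` conformance is needed at all; the component itself is the single source of truth for the dynamic property.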
Hi everyone, is it possible to use the ARKit Replay Data option for XCUITests? If not, this would be a great feature for automation.
Thanks!
I have to make 80 3D models every day, and this takes a lot of time.
It would be very convenient if I could create multiple 3D models in one run, so that all the models are generated while I sleep.
Is there a way?
If so, please tell me how.
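Assuming the models are produced with Object Capture's `PhotogrammetrySession` on macOS, a batch loop over image folders might look roughly like this sketch (the folder layout, output naming, and `.medium` detail level are my assumptions, not from the post):

```swift
import Foundation
import RealityKit

// Sketch: process every image folder under `inputRoot` into a .usdz, one after another.
func batchCapture(inputRoot: URL, outputRoot: URL) async throws {
    let folders = try FileManager.default.contentsOfDirectory(
        at: inputRoot, includingPropertiesForKeys: nil)

    for folder in folders where folder.hasDirectoryPath {
        let session = try PhotogrammetrySession(input: folder)
        let output = outputRoot.appendingPathComponent(folder.lastPathComponent + ".usdz")
        try session.process(requests: [.modelFile(url: output, detail: .medium)])

        // Wait for this model to finish before starting the next one.
        for try await message in session.outputs {
            if case .processingComplete = message { break }
        }
    }
}
```

Run once overnight (e.g. from a small command-line tool), this would work through all 80 capture folders unattended.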
So I have an ARView using RealityKit, and I am reusing the ARView. I have an Entity with animations (stored in a .usdz file). I play the animations with the following code:
hummingbird = try! Entity.load(named: "bird")
for animation in hummingbird.availableAnimations {
    hummingbird.playAnimation(animation.repeat(duration: 120.0))
}
However, I noticed there is a memory leak; using Instruments, I found it at the playAnimation line.
I have no clue how to fix this. When tearing down the ARView, I do this:
hummingbird.stopAllAnimations(recursive: true)
hummingbird = nil
I thought that would be enough, but it isn't.
In the image there are 2 instances, from running the same ARView twice. Basically my setup is:
startVC → ARView → backToStartVC → backToSameARView (with a new configuration), and so on.
Any ideas would be great. If you have any questions or need clarification, please ask.
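One teardown pattern worth trying, as a sketch rather than a confirmed fix: keep the `AnimationPlaybackController` that `playAnimation` returns and stop each controller explicitly before releasing the entity (the stored `controllers` array is my addition, not from the post):

```swift
import RealityKit

var controllers: [AnimationPlaybackController] = []

func startAnimations(on bird: Entity) {
    for animation in bird.availableAnimations {
        // playAnimation returns a controller we can stop explicitly later.
        controllers.append(bird.playAnimation(animation.repeat(duration: 120.0)))
    }
}

func teardown(bird: Entity) {
    controllers.forEach { $0.stop() }
    controllers.removeAll()
    bird.stopAllAnimations(recursive: true)
    bird.removeFromParent()
}
```

Detaching the entity from its anchor before dropping the last reference gives the renderer a chance to release its per-instance resources.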
I would like to add a floor to an Entity I created from a RoomPlan USDZ file. Here's my approach:
Recursively traverse the Entity's children to get all of its vertices.
Find the minimum and maximum X, Y and Z values and use those to create a plane.
Add the plane as a child of the room's Entity.
The resulting plane has the correct size, but not the correct orientation. Here's what it looks like:
The coordinate axes you see show the world origin. I rendered them with this option:
arView.debugOptions = [.showWorldOrigin]
That world origin matches the place and orientation where I started scanning my room.
I have tried many things to align the floor with the room, but nothing has worked. I'm not sure what I'm doing wrong. Here's my recursive function that gets the vertices (I'm pretty sure this function is correct since the floor has the correct size):
func getVerticesOfRoom(entity: Entity, _ transformChain: simd_float4x4) {
    guard let modelEntity = entity as? ModelEntity else {
        // If the Entity isn't a ModelEntity, skip it and check if we can get the vertices of its children
        let updatedTransformChain = entity.transform.matrix * transformChain
        for currEntity in entity.children {
            getVerticesOfRoom(entity: currEntity, updatedTransformChain)
        }
        return
    }

    // Below we get the vertices of the ModelEntity
    let updatedTransformChain = modelEntity.transform.matrix * transformChain

    // Iterate over all instances
    var instancesIterator = modelEntity.model?.mesh.contents.instances.makeIterator()
    while let currInstance = instancesIterator?.next() {
        // Get the model of the current instance
        let currModel = modelEntity.model?.mesh.contents.models[currInstance.model]
        // Iterate over the parts of the model
        var partsIterator = currModel?.parts.makeIterator()
        while let currPart = partsIterator?.next() {
            // Iterate over the positions of the part
            var positionsIterator = currPart.positions.makeIterator()
            while let currPosition = positionsIterator.next() {
                // Transform the position and store it
                let transformedPosition = updatedTransformChain * SIMD4<Float>(currPosition.x, currPosition.y, currPosition.z, 1.0)
                modelVertices.append(SIMD3<Float>(transformedPosition.x, transformedPosition.y, transformedPosition.z))
            }
        }
    }

    // Check if we can get the vertices of the children of the ModelEntity
    for currEntity in modelEntity.children {
        getVerticesOfRoom(entity: currEntity, updatedTransformChain)
    }
}
And here's how I call it and create the floor:
// Get the vertices of the room
getVerticesOfRoom(entity: roomEntity, roomEntity.transform.matrix)

// Get the min and max X, Y and Z positions of the room
var minVertex = SIMD3<Float>(Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude)
var maxVertex = SIMD3<Float>(-Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude)
for vertex in modelVertices {
    if vertex.x < minVertex.x { minVertex.x = vertex.x }
    if vertex.y < minVertex.y { minVertex.y = vertex.y }
    if vertex.z < minVertex.z { minVertex.z = vertex.z }
    if vertex.x > maxVertex.x { maxVertex.x = vertex.x }
    if vertex.y > maxVertex.y { maxVertex.y = vertex.y }
    if vertex.z > maxVertex.z { maxVertex.z = vertex.z }
}

// Compose the corners of the floor
let upperLeftCorner = SIMD3<Float>(minVertex.x, minVertex.y, minVertex.z)
let lowerLeftCorner = SIMD3<Float>(minVertex.x, minVertex.y, maxVertex.z)
let lowerRightCorner = SIMD3<Float>(maxVertex.x, minVertex.y, maxVertex.z)
let upperRightCorner = SIMD3<Float>(maxVertex.x, minVertex.y, minVertex.z)

// Create the floor's ModelEntity
let floorPositions: [SIMD3<Float>] = [upperLeftCorner, lowerLeftCorner, lowerRightCorner, upperRightCorner]
var floorMeshDescriptor = MeshDescriptor(name: "floor")
floorMeshDescriptor.positions = MeshBuffers.Positions(floorPositions)
// Positions should be specified in counterclockwise order
floorMeshDescriptor.primitives = .triangles([0, 1, 2, 2, 3, 0])
let simpleMaterial = SimpleMaterial(color: .gray, isMetallic: false)
// ModelEntity(mesh:materials:) is not failable, so no guard is needed here
let floorModelEntity = ModelEntity(mesh: try! .generate(from: [floorMeshDescriptor]), materials: [simpleMaterial])

// Add the floor as a child of the room
roomEntity.addChild(floorModelEntity)
Can you think of a transformation that I could apply to the vertices or the plane to align them?
Thanks for any help.
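One thing worth double-checking, offered as a hedged observation rather than a confirmed fix: world transforms are conventionally accumulated parent-first, with the child's local matrix multiplied on the right. If the accumulation order in getVerticesOfRoom is reversed, children under a rotated parent come out the right size but the wrong orientation, which matches the symptom described above. A minimal sketch of the conventional order:

```swift
import RealityKit
import simd

// Sketch: accumulate a world transform down a hierarchy, parent-first.
// Compare with `entity.transform.matrix * transformChain` in the post,
// which multiplies in the opposite order.
func worldMatrix(of entity: Entity, parentChain: simd_float4x4) -> simd_float4x4 {
    parentChain * entity.transform.matrix
}
```

RealityKit can also compute this directly via `entity.transformMatrix(relativeTo: nil)`, which avoids manual chaining altogether.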
It is possible with SceneKit, but I haven’t found any way for RealityKit.
I updated my iPhone 12 Pro to the iOS 16 beta, and the motion capture feature in ARKit seems to have stopped functioning. I tried my own custom app (MoCáp) and the BodyDetection sample code from the Apple developer site, and neither works. Does anyone have the same issue?
Hi,
I noticed that the face painting sample creates CGImages from PencilKit and then generates textures from those again. I do something similar in my app, which currently uses SceneKit and which I would like to port to RealityKit.
In my code I draw into a CGImage (draw shape, masked image and shadows) and then convert that CGImage to a texture.
I would like to optimize this, as there is very noticeable latency in the process.
Would it help if I made small CGImages (for the local changes), converted those to textures, and used DrawableQueues to lay them over the existing images?
Or what is the most efficient way to get my changes onto a texture in near real time?
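For reference, a minimal sketch of the CGImage-to-texture path described above (assuming iOS 15's synchronous `TextureResource.generate(from:options:)`; the unlit-material setup is illustrative, not from the post):

```swift
import CoreGraphics
import RealityKit

// Sketch: turn a freshly drawn CGImage into a RealityKit texture and
// assign it to a material. Regenerating a full-size texture per stroke
// is the latency the post describes; DrawableQueue targets that case by
// letting you render into the texture directly instead of re-uploading.
func applyTexture(_ image: CGImage, to model: ModelEntity) throws {
    let texture = try TextureResource.generate(
        from: image,
        options: .init(semantic: .color))
    var material = UnlitMaterial()
    material.color = .init(texture: .init(texture))
    model.model?.materials = [material]
}
```

The small-tile idea should help regardless of the upload path, since both the CGImage draw and the texture generation scale with pixel count.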
All the best
Christoph
Hi,
I get nicely creased USD models from my 3D designer which are super small, as higher-resolution meshes can be generated from them and even normals can be derived from these models.
My question is: does RealityKit automatically …
• … generate the normals from the crease information?
• … subdivide these models (do I have to, or can I, set a subdivision level)?
• … generate subdivision on the GPU as the model gets closer to the camera, or should I even force-create multiple detail levels?
All the best
Christoph
Wouldn't it be great to port RealityKit's Object Capture to the iPad Pro with M1 and capture objects on it?
I don't think there is any info related to this, but am I missing something? Is there a plan to move it to iOS soon?
Best!
Hi, I'm working with RealityKit and Reality Composer. When I build a scene in Reality Composer, place the experience in Xcode, and try the app out, the 3D model appears fine. But when I place my hand over it, the app doesn't recognise that my hand is in front of the model, which shows through. When I place an object in front of the model, the app also doesn't recognise whether the object is in front of or behind it. How do I fix this?
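For what it's worth, occlusion is opt-in in RealityKit; a sketch of the usual switches (person occlusion needs an A12 or later chip, scene-mesh occlusion needs a LiDAR device, and the function name here is hypothetical):

```swift
import ARKit
import RealityKit

func enableOcclusion(on arView: ARView) {
    let config = ARWorldTrackingConfiguration()

    // Hands and people in front of virtual content (A12 or later).
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        config.frameSemantics.insert(.personSegmentationWithDepth)
    }

    // Real-world geometry occluding virtual content (LiDAR devices).
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        config.sceneReconstruction = .mesh
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
    }

    arView.session.run(config)
}
```

Reality Composer scenes loaded into an ARView still use whatever session configuration the app runs, so these flags apply to them as well.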
I use Entity.loadAsync to load a USDZ file.
The camera feed freezes for a moment while the model loads.
var cancellable: AnyCancellable? = nil
cancellable = Entity.loadAsync(contentsOf: Bundle.main.url(forResource: "vyygabbj_afr", withExtension: "usdz")!)
    .sink(receiveCompletion: { completion in
        DispatchQueue.main.async {
            cancellable?.cancel()
            cancellable = nil
        }
    }, receiveValue: { [weak self] ey in
        guard let self = self else { return }
        self.modelEy = ModelEntity()
        self.modelEy.addChild(ey)
        self.rootAnchor.addChild(self.modelEy)
        ey.availableAnimations.forEach {
            ey.playAnimation($0.repeat())
        }
        DispatchQueue.main.async {
            cancellable?.cancel()
            cancellable = nil
        }
    })
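One common mitigation, offered as a sketch rather than a confirmed fix for this stall: load the entity once up front (e.g. behind a loading screen) and clone it when needed, so the expensive decode never happens mid-session (`ModelCache` is a hypothetical helper name):

```swift
import Foundation
import RealityKit

final class ModelCache {
    private var prototype: Entity?

    // Load once, before the AR session is in front of the user.
    func preload(url: URL) throws {
        prototype = try Entity.load(contentsOf: url)
    }

    // Cheap to call later: clones share the already-loaded mesh resources.
    func makeInstance() -> Entity? {
        prototype?.clone(recursive: true)
    }
}
```

If the hitch persists even with a pre-loaded entity, the remaining cost is likely first-attach work (e.g. material/shader preparation), which attaching a clone once during the loading screen can also warm up.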
Although the model disappeared, memory usage did not decrease.
How can I free the memory occupied by the model?
// MARK: === viewDidLoad
override func viewDidLoad() {
    super.viewDidLoad()
    arView.renderOptions = [.disableMotionBlur, .disableDepthOfField, .disableCameraGrain, .disableHDR]
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    rootAnchor = AnchorEntity(plane: .horizontal)
    arView.scene.addAnchor(rootAnchor)

    var cancellable: AnyCancellable? = nil
    cancellable = Entity.loadAsync(contentsOf: Bundle.main.url(forResource: "vyygabbj_afr", withExtension: "usdz")!)
        .sink(receiveCompletion: { completion in
            DispatchQueue.main.async {
                cancellable?.cancel()
                cancellable = nil
            }
        }, receiveValue: { [weak self] ey in
            guard let self = self else { return }
            self.modelEy = ModelEntity()
            self.modelEy.addChild(ey)
            self.rootAnchor.addChild(self.modelEy)
            ey.availableAnimations.forEach {
                ey.playAnimation($0.repeat())
            }
            DispatchQueue.main.async {
                cancellable?.cancel()
                cancellable = nil
            }
        })
}
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    modelEy.removeFromParent()
}
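As a hedged sketch of a fuller teardown: removeFromParent alone keeps the entity alive through the `modelEy` strong reference, so stopping playback and dropping that reference gives ARC a chance to free it (this assumes `modelEy` can be declared optional; `ARViewController` is a hypothetical class name):

```swift
import RealityKit
import UIKit

class ARViewController: UIViewController {
    var modelEy: ModelEntity?

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Stop playback, detach, and drop the strong reference so ARC can free it.
        modelEy?.stopAllAnimations(recursive: true)
        modelEy?.removeFromParent()
        modelEy = nil
    }
}
```

Even then, RealityKit may cache loaded resources for the lifetime of the ARView, so the footprint may only fully shrink when the view itself is torn down.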
I'm using the newest Geospatial API from ARCore and trying to build with SwiftUI and RealityKit. I have the SDK and API key set up properly, and all coordinates and accuracy info are updated properly every frame. Whenever I use the GARSession.createAnchor method, it returns a GARAnchor. I used the GARAnchor's transform property to create an ARAnchor(transform: GARAnchor.transform), then created an AnchorEntity from that ARAnchor and added the AnchorEntity to ARView.scene. However, the model never shows up. I have checked the coordinates and altitude, still no luck at all. Is there anyone who could help me out? Thank you so much.
do {
    // Note: the identity rotation is simd_quatf(ix: 0, iy: 0, iz: 0, r: 1);
    // an all-zero quaternion is not a valid rotation.
    let garAnchor = try parent.garSession.createAnchor(
        coordinate: CLLocationCoordinate2D(latitude: xx.xxxxxxxxx, longitude: xx.xxxxxxx),
        altitude: xx,
        eastUpSouthQAnchor: simd_quatf(ix: 0, iy: 0, iz: 0, r: 0))
    if garAnchor.hasValidTransform && garAnchor.trackingState == .tracking {
        let arAnchor = ARAnchor(transform: garAnchor.transform)
        let anchorEntity = AnchorEntity(anchor: arAnchor)
        let mesh = MeshResource.generateSphere(radius: 2)
        let material = SimpleMaterial(color: .red, isMetallic: true)
        let sphere = ModelEntity(mesh: mesh, materials: [material])
        anchorEntity.addChild(sphere)
        parent.arView.scene.addAnchor(anchorEntity)
        print("Anchor has valid transform, and anchor is tracking")
    } else {
        print("Anchor has invalid transform")
    }
} catch {
    print("Add garAnchor failed: \(error.localizedDescription)")
}
I'm noticing about 450 MB of memory footprint when loading a simple 2 MB USDZ model.
To eliminate any misuse of the frameworks on my part, I built a basic RealityKit app from Xcode's Augmented Reality App template, with no code changes at all.
I'm still seeing 450 MB in the Xcode gauges (so in debug mode).
Looking at the memgraph, I'm seeing that the IOAccelerator and IOSurface regions have 194 MB and 131 MB of dirty memory respectively.
Is this all camera-related memory?
In the hopes of reducing compute & memory, I tried disabling various rendering options on ARView as follows:
arView.renderOptions = [
    .disableHDR,
    .disableDepthOfField,
    .disableMotionBlur,
    .disableFaceMesh,
    .disablePersonOcclusion,
    .disableCameraGrain,
    .disableAREnvironmentLighting
]
This brought it down to 300 MB, which is still quite a lot.
When I configure ARView.cameraMode to be .nonAR, it's still 113 MB.
I'm running this on an iPhone 13 Pro Max, which could explain some of the large allocations, but I would still like to find opportunities to reduce the footprint.
When I use QLPreviewController, the same model (~2 MB) takes only 27 MB in the Xcode gauges.
Any ideas on reducing this memory footprint while using ARKit?
I'm loading a USDZ model using Entity.loadAsync(contentsOf:).
I'd like to get the dimensions of the model, and I find that visualBounds(relativeTo: nil).extents returns dimensions larger than the actual ones, while I see the correct dimensions when viewing the USDZ in Blender or when instantiating it as an MDLAsset(url:). What is the method to get the actual dimensions from an Entity?
Thanks
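A hedged comparison sketch: visualBounds is computed in the given coordinate space, so any scale on the entity (or an ancestor) inflates the world-space result; the raw mesh bounds on the ModelComponent are unscaled (the function name is hypothetical):

```swift
import RealityKit

// Sketch: compare transform-affected visual bounds with raw mesh bounds.
func printDimensions(of entity: Entity) {
    // World-space bounds: affected by the entity's (and its ancestors') scale.
    print("world-space extents:", entity.visualBounds(relativeTo: nil).extents)

    // Bounds in the entity's own space: excludes the entity's own transform.
    print("local extents:", entity.visualBounds(relativeTo: entity).extents)

    // Raw mesh bounds as authored in the file, with no transform at all.
    if let model = (entity as? ModelEntity)?.model {
        print("mesh extents:", model.mesh.bounds.extents)
    }
}
```

If the three values differ, the discrepancy is coming from transforms in the loaded hierarchy rather than from the mesh itself, which would explain the mismatch with Blender and MDLAsset.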
Can you place an Augmented Reality anchor in a private Apple indoor map?
https://register.apple.com/resources/indoor/program/indoor_maps
https://developer.apple.com/augmented-reality/tools/
Do I need an iPhone Pro instead of a regular iPhone for Object Capture?
Does Object Capture use a LiDAR sensor?
What is the best camera product for Object Capture?
I have a framework that imports RealityKit. The minimum deployment target for the framework is iOS 13, and the app that uses the framework has iOS 12 as its minimum deployment target.
RealityKit and RealityFoundation are set to optional in the framework's build phases, and the framework is optional in the app as well.
But on launching the app in an iOS 12 simulator, I keep getting the following crash:
DYLD, can't resolve symbol _$s10RealityKit10HasPhysicsMp
If I change the minimum deployment of the framework to iOS 15, the same code works in the iOS 12 simulator with no issues.
Also, if I try with any other module like SwiftUI, which has a minimum deployment of iOS 13, I don't see the issue and the app launches fine.
Similarly, if I use ARKit instead of RealityKit, it works fine. Although ARKit's minimum deployment is iOS 11, it is also not available in simulators.
So why does weak linking RealityKit not work when the parent framework targets iOS 13 or iOS 14, but works when targeting iOS 15?
Xcode version: 13.3.1