How can I set up a lower-resolution video format for our ARSession? Thanks in advance.
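In case it helps, here's a sketch of one way to do it, assuming a world-tracking session and an existing `arView`: pick the lowest-resolution entry from `supportedVideoFormats` before running the session.

```swift
import ARKit

// Choose the lowest-resolution video format the device supports.
let configuration = ARWorldTrackingConfiguration()
if let smallestFormat = ARWorldTrackingConfiguration.supportedVideoFormats
    .min(by: { $0.imageResolution.width * $0.imageResolution.height
             < $1.imageResolution.width * $1.imageResolution.height }) {
    configuration.videoFormat = smallestFormat
}
arView.session.run(configuration)
```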
Use ModelIO:

import ModelIO

print("Processing is complete!")
let objPath = <#Your OBJ Path#>
// Load the generated USDZ file
let modelAsset = MDLAsset(url: URL(fileURLWithPath: outputFilename))
modelAsset.loadTextures()
do {
    try modelAsset.export(to: URL(fileURLWithPath: objPath))
    print("Exported to OBJ!")
} catch {
    print(error)
}
The exported OBJ is missing most of the information compared to the original USDZ.
I would like to add a floor to an Entity I created from a RoomPlan USDZ file. Here's my approach:
1. Recursively traverse the Entity's children to get all of its vertices.
2. Find the minimum and maximum X, Y and Z values and use those to create a plane.
3. Add the plane as a child of the room's Entity.
The resulting plane has the correct size, but not the correct orientation. Here's what it looks like:
The coordinate axes you see show the world origin. I rendered them with this option:
arView.debugOptions = [.showWorldOrigin]
That world origin matches the place and orientation where I started scanning my room.
I have tried many things to align the floor with the room, but nothing has worked, and I'm not sure what I'm doing wrong. Here's my recursive function that gets the vertices (I'm fairly sure this function is correct, since the floor has the correct size):
func getVerticesOfRoom(entity: Entity, _ transformChain: simd_float4x4) {
    guard let modelEntity = entity as? ModelEntity else {
        // If the Entity isn't a ModelEntity, skip it and check if we can get the vertices of its children
        let updatedTransformChain = entity.transform.matrix * transformChain
        for currEntity in entity.children {
            getVerticesOfRoom(entity: currEntity, updatedTransformChain)
        }
        return
    }

    // Below we get the vertices of the ModelEntity
    let updatedTransformChain = modelEntity.transform.matrix * transformChain

    // Iterate over all instances
    var instancesIterator = modelEntity.model?.mesh.contents.instances.makeIterator()
    while let currInstance = instancesIterator?.next() {
        // Get the model of the current instance
        let currModel = modelEntity.model?.mesh.contents.models[currInstance.model]
        // Iterate over the parts of the model
        var partsIterator = currModel?.parts.makeIterator()
        while let currPart = partsIterator?.next() {
            // Iterate over the positions of the part
            var positionsIterator = currPart.positions.makeIterator()
            while let currPosition = positionsIterator.next() {
                // Transform the position and store it
                let transformedPosition = updatedTransformChain * SIMD4<Float>(currPosition.x, currPosition.y, currPosition.z, 1.0)
                modelVertices.append(SIMD3<Float>(transformedPosition.x, transformedPosition.y, transformedPosition.z))
            }
        }
    }

    // Check if we can get the vertices of the children of the ModelEntity
    for currEntity in modelEntity.children {
        getVerticesOfRoom(entity: currEntity, updatedTransformChain)
    }
}
And here's how I call it and create the floor:
// Get the vertices of the room
getVerticesOfRoom(entity: roomEntity, roomEntity.transform.matrix)

// Get the min and max X, Y and Z positions of the room
var minVertex = SIMD3<Float>(Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude)
var maxVertex = SIMD3<Float>(-Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude)
for vertex in modelVertices {
    if vertex.x < minVertex.x { minVertex.x = vertex.x }
    if vertex.y < minVertex.y { minVertex.y = vertex.y }
    if vertex.z < minVertex.z { minVertex.z = vertex.z }
    if vertex.x > maxVertex.x { maxVertex.x = vertex.x }
    if vertex.y > maxVertex.y { maxVertex.y = vertex.y }
    if vertex.z > maxVertex.z { maxVertex.z = vertex.z }
}

// Compose the corners of the floor
let upperLeftCorner: SIMD3<Float> = SIMD3<Float>(minVertex.x, minVertex.y, minVertex.z)
let lowerLeftCorner: SIMD3<Float> = SIMD3<Float>(minVertex.x, minVertex.y, maxVertex.z)
let lowerRightCorner: SIMD3<Float> = SIMD3<Float>(maxVertex.x, minVertex.y, maxVertex.z)
let upperRightCorner: SIMD3<Float> = SIMD3<Float>(maxVertex.x, minVertex.y, minVertex.z)

// Create the floor's ModelEntity
let floorPositions: [SIMD3<Float>] = [upperLeftCorner, lowerLeftCorner, lowerRightCorner, upperRightCorner]
var floorMeshDescriptor = MeshDescriptor(name: "floor")
floorMeshDescriptor.positions = MeshBuffers.Positions(floorPositions)
// Positions should be specified in counterclockwise order
floorMeshDescriptor.primitives = .triangles([0, 1, 2, 2, 3, 0])
let simpleMaterial = SimpleMaterial(color: .gray, isMetallic: false)
guard let floorMesh = try? MeshResource.generate(from: [floorMeshDescriptor]) else {
    return
}
let floorModelEntity = ModelEntity(mesh: floorMesh, materials: [simpleMaterial])

// Add the floor as a child of the room
roomEntity.addChild(floorModelEntity)
Can you think of a transformation that I could apply to the vertices or the plane to align them?
Thanks for any help.
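One thing that might be worth checking, sketched here under the assumption that the corner names match the code above: getVerticesOfRoom produces world-space positions, but a child of roomEntity is interpreted in the room's local space, so the corners may need to be converted back before building the mesh.

```swift
// Sketch: convert each world-space corner into roomEntity's local space
// before putting it into the floor's MeshDescriptor.
let localCorners = [upperLeftCorner, lowerLeftCorner, lowerRightCorner, upperRightCorner].map {
    roomEntity.convert(position: $0, from: nil) // from: nil means world space
}
```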
Hi guys, in one of the WWDC21 videos I saw this spectacular image of a green fluid over the sofa 🛋.
I can't find many tutorials online. How could this be done? I was looking into SceneKit's particle system, but it doesn't seem to do exactly the same thing (it's more for smoke and fire);
this fluid looks more 3D.
What should I look into to reproduce this fluid? Thanks.
Looking for some help to start my research.
I would like to start up a new RoomCaptureSession with data from the previous session, similar to using ARWorldTrackingConfiguration.initialWorldMap with ARSession. Is it possible to use RoomBuilder.capturedRoom(from:) to initialize RoomCaptureSession with a CapturedRoom? If not, I've tried initializing the underlying arSession with initialWorldMap, but I get an error indicating that the configuration is malformed. Would that be a reasonable work around? Ideally, I'd be able to get the existing detected objects back into the RoomCaptureSession by initializing with a CapturedRoom, but at least being able to localize the camera to the original World frame would be nice. Thank you!
I was using Reality Converter on my MacBook Pro 13" mid 2012 with Catalina 10.15.7.
Recently, I had to format my drive and reinstall the OS, and now when I download Reality Converter from Apple and try to open it, it says that the app requires macOS 12.0.
The last version of macOS I can install is Catalina 10.15.7, and I know there's a version of Reality Converter that can run on it because I was using it before.
Is there a way to download a previous version of Reality Converter that runs on Catalina? Or a way to open the one I downloaded despite not having macOS 12.0?
Thanks!
Hey guys,
This seems like a simple question, but I was not able to find a clear answer.
I am building a game-like app where all 3D geometry is created and modified at run time.
Which framework should I use with SwiftUI: SceneKit or RealityKit?
Thanks
Hi, when I import a USDZ from the RoomPlan demo code into Blender, it results in no geometry. Xcode, on the other hand, has no problem with the model, and neither does Preview. Has anyone else had this issue? Apparently the Forum won't let me upload a model here.
I am trying to configure the materials of a downloaded USDZ file and then save it in the Documents folder as a new USDZ file.
Currently, I am using the ModelEntity class but I have two problems:
I need to modify the materials by name, but this class only seems to let me access the list of materials and replace them, without knowing their names;
I have no idea how to save it in the local Documents folder as a new USDZ file in order to pass the URL to the AR QuickLook controller to display the modified file in AR.
Probably I should change my approach and use other classes.
Could anyone give me some advice on how to proceed to achieve my goal? I would really appreciate any help.
I am developing an app in SwiftUI where I can place an object on a detected plane on screen tap. However, I need to provide a screen location to call
arView.raycast(from: CGPoint, allowing: ARRaycastQuery.Target, alignment: ARRaycastQuery.TargetAlignment)
In UIKit I can easily get the location with a handleTap function, using the approach below:
arView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap(recognizer:))))

@objc func handleTap(recognizer: UITapGestureRecognizer) {
    let tapLocation = recognizer.location(in: arView)
}
How can I implement this in SwiftUI structs to place objects on the plane where the user tapped?
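One common pattern, sketched here (the placement details and type names beyond the ARKit/RealityKit APIs are assumptions): wrap ARView in a UIViewRepresentable and attach the tap recognizer to a Coordinator, which then raycasts from the tapped point.

```swift
import SwiftUI
import RealityKit
import ARKit

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        // Attach the recognizer exactly as in UIKit, targeting the Coordinator.
        let tap = UITapGestureRecognizer(target: context.coordinator,
                                         action: #selector(Coordinator.handleTap(_:)))
        arView.addGestureRecognizer(tap)
        context.coordinator.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator() }

    class Coordinator: NSObject {
        weak var arView: ARView?

        @objc func handleTap(_ recognizer: UITapGestureRecognizer) {
            guard let arView = arView else { return }
            let tapLocation = recognizer.location(in: arView)
            // Raycast from the tapped point; place the object if a plane was hit.
            if let result = arView.raycast(from: tapLocation,
                                           allowing: .estimatedPlane,
                                           alignment: .any).first {
                // result.worldTransform gives the placement pose.
                _ = result
            }
        }
    }
}
```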
Hello, I am trying to get the transform matrix of an Entity after a translation, like this:
uiView.scene.findEntity(named: "Model_Name")?.transform.matrix
I move the 3D model around the scene, but when I call this again, the matrix is still the same, so I can't get the latest transform matrix of my 3D model.
This is the way I create the Entity:
var anchorEntity = AnchorEntity()
anchorEntity = AnchorEntity(.plane(.any,
                                   classification: .any,
                                   minimumBounds: [0.1, 0.1]))
anchorEntity.addChild(MyEntity, preservingWorldTransform: true)
uiView.scene.addAnchor(anchorEntity)
I can get the latest position (X,Y,Z) of the Entity via EntityTranslationGestureRecognizer
guard let translationGesture = recognizer as? EntityTranslationGestureRecognizer else { return }
let position = translationGesture.location(in: translationGesture.entity)
Is there any way to get the latest transform matrix of an Entity? I want to create an ARAnchor at the latest position of my 3D model.
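A sketch worth trying, assuming the entity named "Model_Name" exists: ask for the transform relative to world space (nil) rather than reading the local transform, which only changes when the entity itself, not an ancestor, is moved.

```swift
// transform.matrix is relative to the parent; transformMatrix(relativeTo: nil)
// is relative to world space and reflects the entity's actual pose.
if let model = uiView.scene.findEntity(named: "Model_Name") {
    let worldTransform = model.transformMatrix(relativeTo: nil) // nil = world space
    // worldTransform.columns.3 holds the world position, e.g. for an ARAnchor.
}
```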
When I create text with MeshResource.generateText, the text always faces its starting direction (the direction the phone was pointing when the app launched). How can I make the text always face my camera, as I can in ARKit and SceneKit?
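A sketch of one way to billboard the text, assuming `textEntity` and `arView` already exist: re-orient it toward the camera on every scene update. Depending on which way the text mesh faces, you may need to flip the look direction.

```swift
import RealityKit
import Combine

// Re-orient the text entity toward the camera every frame (billboard effect).
var updateSubscription: Cancellable?
updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
    textEntity.look(at: arView.cameraTransform.translation,
                    from: textEntity.position(relativeTo: nil),
                    relativeTo: nil)
}
```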
In the WWDC video the code includes a reference to the visualizer object.
var previewVisualizer: Visualizer!
I'd like to create my own ViewController using RoomCaptureSession and incorporate the visualizer. This doesn't seem to be available in the RoomPlan framework; have I missed something, or has it been removed from the public interface?
Thanks
When using the RoomPlan UI (RoomCaptureView), one obtains the final result using
public func captureView(didPresent processedResult: CapturedRoom, error: Error?)
which then gets exported via
finalResults.export(to: url)
What is the best way to do this if only using RoomCaptureSession?
Should I just keep track of each CapturedRoom coming back in the delegate methods and use the final one?
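For what it's worth, a sketch of the session-only flow I'd expect (the class name and export path are my own placeholders): keep the latest CapturedRoom from didUpdate for live feedback, then build the final one from the raw data when the session ends.

```swift
import RoomPlan

final class RoomScanner: RoomCaptureSessionDelegate {
    private let roomBuilder = RoomBuilder(options: [.beautifyObjects])
    private var latestRoom: CapturedRoom?

    // Keep the most recent intermediate result as the scan progresses.
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        latestRoom = room
    }

    // When the session ends, build the final room from the raw data and export it.
    func captureSession(_ session: RoomCaptureSession, didEndWith data: CapturedRoomData, error: Error?) {
        Task {
            let finalRoom = try await roomBuilder.capturedRoom(from: data)
            let url = FileManager.default.temporaryDirectory.appendingPathComponent("room.usdz")
            try finalRoom.export(to: url)
        }
    }
}
```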
The RoomCaptureView seems to have a coaching controller analogous to the ARCoachingOverlayView. The content is available via
public func captureSession(_ session: RoomCaptureSession, didProvide instruction: RoomCaptureSession.Instruction)
Is the view that presents these instructions available if not using the RoomCaptureView?
I am using RealityKit and I want to stop and start a model's animation depending on differing events, like collisions and user interactions. I started by using a wrapper class to handle this logic, but this seemed cumbersome, and my app started crashing when I attempted to make more than one GameEntity.
Example:
class GameEntity: ObservableObject {
    let model: ModelEntity
    var currentMovementController: AnimationPlaybackController? = nil
    // lots of other methods and properties ...
}
So now I am attempting to make a subclass of Entity where I will have an attribute for AnimationPlaybackController:
class ARXEntity: Entity {
    var animationHandler: AnimationPlaybackController? = nil
    // more methods ...
}
My question is: is it possible to use the AnimationPlaybackController on the entity level? If so, what is the right way to do it?
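It should be possible; here's a sketch of what that subclass could look like (the method names are illustrative, not a prescribed API): store the controller returned by playAnimation and pause/resume it from your event handlers.

```swift
import RealityKit

class ARXEntity: Entity {
    var animationHandler: AnimationPlaybackController?

    // Play the first available animation and keep its controller.
    func startAnimation() {
        if let animation = availableAnimations.first {
            animationHandler = playAnimation(animation.repeat())
        }
    }

    // Call these from collision or gesture handlers.
    func pauseAnimation() { animationHandler?.pause() }
    func resumeAnimation() { animationHandler?.resume() }
}
```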
This crash occurs after saving an ARWorldMap to disk 10-15 times and restoring it with an ARWorldTrackingConfiguration with planeDetection enabled. It is somehow connected to a corrupted world map, as it also occurs when restarting the app and running a configuration with that world map. Another possible reason could be a limit on anchors/plane anchors in one session?
Has anyone been able to fix a similar error? How can I locate the root cause?
I am very grateful for any help.
When opening a USDZ file in Xcode's integrated SceneKit editor, you can display Identity and Transforms information in the Node inspector. I want to display this information in RealityKit or SceneKit. What should I do?
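For the SceneKit side, a minimal sketch (assuming the USDZ loads into an SCNScene): walk the node tree and print each node's identity and transform, similar to what the editor's inspector shows.

```swift
import SceneKit

// Recursively print each node's name (identity) and transform components.
func printNodeInfo(_ node: SCNNode, depth: Int = 0) {
    let indent = String(repeating: "  ", count: depth)
    print("\(indent)name: \(node.name ?? "<unnamed>")")
    print("\(indent)position: \(node.position), rotation: \(node.rotation), scale: \(node.scale)")
    for child in node.childNodes {
        printNodeInfo(child, depth: depth + 1)
    }
}

// Usage, assuming the USDZ was loaded into an SCNScene:
// let scene = try SCNScene(url: usdzURL, options: nil)
// printNodeInfo(scene.rootNode)
```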
Hi there,
Before getting into AR, I was really interested in creating dynamic maps and charts using JavaScript libraries such as d3.js, leaflet.js, Mapbox and others. Now that I am working with AR, I want to see if I can use data to create visualizations à la d3.js, except that the charts would live in space.
I am particularly interested in using stock market data from APIs such as https://polygon.io/ to inform the design and transform aspects of custom charts.
I am creating a joke project to laugh while cryptocurrencies crash, so it would be useful if I could import data live, parse it, and then drive, say, the movement of cubes or doughnut charts, and add custom reaction elements: liquids, melting ice cream, rotten tomatoes and others, which I can create with clay and then scan or photograph in detail. What I don't have is the stock market data feed.
This is a comedy-esque project, but it's serious when it comes to the programming part. Thank you.
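As a starting point for the data feed, a hedged sketch: the endpoint, fields, and Quote type below are placeholders, not polygon.io's actual API, so check their docs for the real response shape before adapting this.

```swift
import Foundation

// Hypothetical quote shape; the real polygon.io response differs.
struct Quote: Decodable {
    let symbol: String
    let price: Double
}

func fetchQuote(for symbol: String) async throws -> Quote {
    // Placeholder URL; substitute a real polygon.io endpoint and API key.
    let url = URL(string: "https://example.com/quote/\(symbol)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    return try JSONDecoder().decode(Quote.self, from: data)
}

// The decoded price can then drive an entity's transform, e.g. scale a bar:
// barEntity.scale.y = Float(quote.price) * someScaleFactor
```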
I'm implementing my first Entity Component System and am having an issue. I have a requirement that some component properties be dynamic. I do not want to create a subclass that conforms to HasExampleComponent, so this was my approach. My issue is that even though the entity contains the property, I can't cast it to HasExampleComponent.
When I create the entity I set the component like this:
entity.components[ExampleComponent.self] = .init()
I'd appreciate a template for an ECS with component properties that can be updated from the app.
Thanks
public struct ExampleComponent: Component {
    public var value = 0
}

public protocol HasExampleComponent: Entity {
    var value: Int { get set }
}

public class ExampleSystem: System {
    private static let query = EntityQuery(where: .has(ExampleComponent.self))

    public required init(scene: Scene) {}

    public func update(context: SceneUpdateContext) {
        context.scene.performQuery(Self.query).forEach { entity in
            // this won’t work because entity doesn’t conform to HasExampleComponent
            entity.value += 1
        }
    }
}

extension Entity {
    @available(iOS 15.0, *)
    public var value: Int? {
        get { components[ExampleComponent.self]?.value ?? 0 }
        set {
            guard let newValue = newValue else { return }
            var component = components[ExampleComponent.self] ?? ExampleComponent()
            component.value = newValue
            components[ExampleComponent.self] = component
        }
    }
}
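One pattern that may sidestep the protocol entirely, as a sketch rather than the only way: read and write the component itself inside the system, so no HasExampleComponent conformance or Entity extension is needed.

```swift
public func update(context: SceneUpdateContext) {
    context.scene.performQuery(Self.query).forEach { entity in
        // Fetch the component, mutate the copy, and write it back.
        guard var component = entity.components[ExampleComponent.self] else { return }
        component.value += 1
        entity.components[ExampleComponent.self] = component
    }
}
```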