Post not yet marked as solved
Let's say I am able to successfully scan a room and I am returned the dollhouse model. With this model, am I able to add customizable markers to the 3D model to denote key features of the model?
Example: I want to mark a chair (for whatever reason). I scan my room, zoom in enough to home in on the chair, and add a colored marker with the label "Antique chair". When I zoom out, I can still see the marker at the (x, y, z) coordinate of the model where the chair is.
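For context, here's a rough RealityKit sketch of the kind of marker I have in mind (the function, entity names, and the source of the chair position are illustrative; the position would come from a tap or hit test):

```swift
import RealityKit
import UIKit

// Illustrative only: attach a small colored sphere at a known point on the model.
// `roomEntity` is the Entity loaded from the RoomPlan USDZ; `chairPosition`
// is a model-space coordinate obtained elsewhere (hypothetical here).
func addMarker(named label: String, at chairPosition: SIMD3<Float>, to roomEntity: Entity) {
    let sphere = MeshResource.generateSphere(radius: 0.05)
    let material = SimpleMaterial(color: .red, isMetallic: false)
    let marker = ModelEntity(mesh: sphere, materials: [material])
    marker.name = label               // e.g. "Antique chair"
    marker.position = chairPosition   // position relative to roomEntity
    roomEntity.addChild(marker)
}
```

Because the marker is a child of the model entity, it should stay at the same model-space coordinate when the dollhouse is rotated or zoomed.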
Thanks!
I would like to add a floor to an Entity I created from a RoomPlan USDZ file. Here's my approach:
Recursively traverse the Entity's children to get all of its vertices.
Find the minimum and maximum X, Y and Z values and use those to create a plane.
Add the plane as a child of the room's Entity.
The resulting plane has the correct size, but not the correct orientation. Here's what it looks like:
The coordinate axes you see show the world origin. I rendered them with this option:
arView.debugOptions = [.showWorldOrigin]
That world origin matches the place and orientation where I started scanning my room.
I have tried many things to align the floor with the room, but nothing has worked. I'm not sure what I'm doing wrong. Here's my recursive function that gets the vertices (I'm pretty sure this function is correct since the floor has the correct size):
func getVerticesOfRoom(entity: Entity, _ transformChain: simd_float4x4) {
    guard let modelEntity = entity as? ModelEntity else {
        // If the Entity isn't a ModelEntity, skip it and check whether we can get the vertices of its children
        let updatedTransformChain = entity.transform.matrix * transformChain
        for currEntity in entity.children {
            getVerticesOfRoom(entity: currEntity, updatedTransformChain)
        }
        return
    }
    // Below we get the vertices of the ModelEntity
    let updatedTransformChain = modelEntity.transform.matrix * transformChain
    if let contents = modelEntity.model?.mesh.contents {
        // Iterate over all instances
        for currInstance in contents.instances {
            // Get the model of the current instance
            guard let currModel = contents.models[currInstance.model] else { continue }
            // Iterate over the parts of the model
            for currPart in currModel.parts {
                // Transform each position of the part and store it
                for currPosition in currPart.positions {
                    let transformedPosition = updatedTransformChain * SIMD4<Float>(currPosition.x, currPosition.y, currPosition.z, 1.0)
                    modelVertices.append(SIMD3<Float>(transformedPosition.x, transformedPosition.y, transformedPosition.z))
                }
            }
        }
    }
    // Check whether we can get the vertices of the children of the ModelEntity
    for currEntity in modelEntity.children {
        getVerticesOfRoom(entity: currEntity, updatedTransformChain)
    }
}
And here's how I call it and create the floor:
// Get the vertices of the room
getVerticesOfRoom(entity: roomEntity, roomEntity.transform.matrix)
// Get the min and max X, Y and Z positions of the room
var minVertex = SIMD3<Float>(Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude)
var maxVertex = SIMD3<Float>(-Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude, -Float.greatestFiniteMagnitude)
for vertex in modelVertices {
    minVertex = simd_min(minVertex, vertex)
    maxVertex = simd_max(maxVertex, vertex)
}
// Compose the corners of the floor
let upperLeftCorner: SIMD3<Float> = SIMD3<Float>(minVertex.x, minVertex.y, minVertex.z)
let lowerLeftCorner: SIMD3<Float> = SIMD3<Float>(minVertex.x, minVertex.y, maxVertex.z)
let lowerRightCorner: SIMD3<Float> = SIMD3<Float>(maxVertex.x, minVertex.y, maxVertex.z)
let upperRightCorner: SIMD3<Float> = SIMD3<Float>(maxVertex.x, minVertex.y, minVertex.z)
// Create the floor's ModelEntity
let floorPositions: [SIMD3<Float>] = [upperLeftCorner, lowerLeftCorner, lowerRightCorner, upperRightCorner]
var floorMeshDescriptor = MeshDescriptor(name: "floor")
floorMeshDescriptor.positions = MeshBuffers.Positions(floorPositions)
// Indices should be specified in counterclockwise (CCW) winding order
floorMeshDescriptor.primitives = .triangles([0, 1, 2, 2, 3, 0])
let simpleMaterial = SimpleMaterial(color: .gray, isMetallic: false)
guard let floorMesh = try? MeshResource.generate(from: [floorMeshDescriptor]) else {
    return
}
let floorModelEntity = ModelEntity(mesh: floorMesh, materials: [simpleMaterial])
// Add the floor as a child of the room
roomEntity.addChild(floorModelEntity)
Can you think of a transformation that I could apply to the vertices or the plane to align them?
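One candidate I haven't ruled out myself (an assumption on my part, not something I've verified): with column vectors, a child's world transform is usually composed as parentWorld * local, whereas my function composes local * chain. The alternative order would look like:

```swift
// Alternative composition order: accumulate parent-to-child, left to right,
// so the chain so far is applied AFTER the entity's local transform.
let updatedTransformChain = transformChain * entity.transform.matrix
```

For pure translations the two orders coincide, which could explain why the floor size is right while its orientation is wrong once rotations are involved.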
Thanks for any help.
Hi all,
I have a requirement to convert USDZ files to IGES files in a Swift app. I searched on Google but didn't find any library that supports this.
Do you know of a library that does?
Thanks all.
Hi,
Is it possible to have a RoomCaptureSession and an ARSession together? We need feature points.
When we finish a room-scanning session, it would be nice to keep the ARSession alive so that we can do fine-tuning without all the overhead of room scanning. But if I call stop() and immediately start a separate world-tracking session, the origin shifts to the new position and rotation of the device. Is there any way to continue the ARSession and keep the same origin?
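In case it helps frame the question, this is roughly what I'm hoping for. The arSession property and stop(pauseARSession:) are iOS 17+ APIs; I haven't verified that this actually preserves the origin:

```swift
import RoomPlan
import ARKit

// Sketch: reuse the capture session's underlying ARSession after scanning.
let arSession = roomCaptureSession.arSession
roomCaptureSession.stop(pauseARSession: false)   // keep the ARSession running

let config = ARWorldTrackingConfiguration()
// Running without .resetTracking / .removeExistingAnchors in the options
// should preserve the existing world origin.
arSession.run(config)
```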
/// An object to configure the capture process
public struct Configuration {
    public var isCoachingEnabled: Bool
    public init()
}
If I ask nicely would someone please add a few more features to this?
Specifically, I would LOVE to be able to just detect a ROOM with no objects.
Then I would also like to be able to pass a map of CapturedRoom.Object.Category objects to filter on or use for render.
So, for example, I would pass into Configuration something like a category-to-material map for the visualizer and the USDZ model file:
[Category: SimpleMaterial]
    .sink: SimpleMaterial(color: .systemBlue, roughness: roughness, isMetallic: false)
    .toilet: SimpleMaterial(color: .systemTeal, roughness: roughness, isMetallic: false)
    .bathtub: SimpleMaterial(color: .systemGreen, roughness: roughness, isMetallic: false)
The above would override the default of rendering ALL known objects, rendering just the three above with the colors I've laid out.
While I'm at it, Beta 1 and Beta 2 changed a few things on this list. "screen" is a better name than "television". I also miss the ".unknown" category.
Here is what was in Beta 1:
.unknown
.storage
.refrigerator
.stove
.bed
.sink
.washer
.toilet
.bathtub
.oven
.dishwasher
.table
.sofa
.chair
.fireplace
.screen
.stairs
The list above changed with Beta 2 that just shipped.
So in Beta 2 .unknown was REMOVED, .washer is now .washerDryer and .screen is now .television.
This API is super cool and amazing. I've really enjoyed working with it.
I would like to start up a new RoomCaptureSession with data from the previous session, similar to using ARWorldTrackingConfiguration.initialWorldMap with ARSession. Is it possible to use RoomBuilder.capturedRoom(from:) to initialize RoomCaptureSession with a CapturedRoom? If not, I've tried initializing the underlying arSession with initialWorldMap, but I get an error indicating that the configuration is malformed. Would that be a reasonable work around? Ideally, I'd be able to get the existing detected objects back into the RoomCaptureSession by initializing with a CapturedRoom, but at least being able to localize the camera to the original World frame would be nice. Thank you!
Hi,
I've downloaded the sample code from https://developer.apple.com/documentation/roomplan/create_a_3d_model_of_an_interior_room_by_guiding_the_user_through_an_ar_experience, but when I click the "Start Scanning" button, my app crashes with the exception shown in the image below.
Can you help resolve this problem?
Thanks!
Is there a simple way of finding out which wall the door or window is connected to?
I can see that if you serialise the CapturedRoom the Json does contain a field called parentIdentifier that links them. But there's no mention of it in the documentation.
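To illustrate, reading it out of the JSON encoding looks something like this. Note that parentIdentifier is undocumented and the exact key names here are assumptions based on what I saw in the serialized output, so this could break in any release:

```swift
import Foundation
import RoomPlan

// Sketch: CapturedRoom is Codable, so we can inspect the raw JSON
// for the undocumented "parentIdentifier" field on doors/windows/openings.
func parentWallIDs(of room: CapturedRoom) throws -> [String: String] {
    let data = try JSONEncoder().encode(room)
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any] ?? [:]
    var result: [String: String] = [:]
    for key in ["doors", "windows", "openings"] {
        for surface in (json[key] as? [[String: Any]]) ?? [] {
            if let id = surface["identifier"] as? String,
               let parent = surface["parentIdentifier"] as? String {
                result[id] = parent   // surface UUID -> containing wall UUID
            }
        }
    }
    return result
}
```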
I see from the docs where I can see edges of things, dimensions of things, but when there is a wall with a door or even a window, is there a way to pull the position from the captured room model or session at all?
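In case it's useful: each surface in the captured room carries a transform, so I'd expect the position to be recoverable from its translation column. A sketch of what I mean:

```swift
import RoomPlan
import simd

// Sketch: read each door's position from the translation column of its transform.
for door in capturedRoom.doors {
    let t = door.transform   // simd_float4x4
    let position = SIMD3<Float>(t.columns.3.x, t.columns.3.y, t.columns.3.z)
    print("door at \(position), size \(door.dimensions)")
}
```

The same pattern should apply to capturedRoom.windows and capturedRoom.openings.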
Is it possible to export the scan in some other format such as obj, fbx etc?
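One route worth trying (I haven't verified it end to end) is exporting the USDZ as usual and converting it with Model I/O, which can write OBJ:

```swift
import ModelIO

// Sketch: convert the exported USDZ to OBJ via Model I/O.
// usdzURL and objURL are placeholders for your own file locations.
let asset = MDLAsset(url: usdzURL)
if MDLAsset.canExportFileExtension("obj") {
    try asset.export(to: objURL)
}
```

FBX isn't supported by Model I/O, so that format would need a third-party converter.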
As a newbie to RoomPlan, I'm searching for a possibility to get the status of doors from isOpen.
Does anybody have any experience with that? Any help is appreciated.
The goal is to scan a room, build the model, and produce statistics about various aspects of the model.
Is there any deeper documentation about the model, and especially about the finalResult after scanning?
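What I've been trying looks like this, based on the iOS 17 category shape (I'm assuming the door category carries an isOpen associated value; corrections welcome):

```swift
import RoomPlan

// Sketch: on iOS 17+, a door's category carries an isOpen associated value.
for door in capturedRoom.doors {
    if case let .door(isOpen) = door.category {
        print("door \(door.identifier): isOpen = \(isOpen)")
    }
}
```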
When using the sample code for RoomCaptureView, the user is occasionally given instructions to point the camera at the ceiling or floor to complete the scan, but the RoomCaptureSessionDelegate instructions do not provide the same feedback. Adding an ARCoachingOverlayView and connecting it to the ARSession gives some prompts, but again not as many as the RoomCaptureView coaching overlay does. Because of this, I can't give people the feedback they need to capture the walls.
RoomCaptureSession.Instruction includes only:
moveCloseToWall
moveAwayFromWall
slowDown
turnOnLight
normal
lowTexture
Is there a better way to get the full set of instructions?
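For completeness, this is how I'm receiving those instructions (the standard delegate pattern, in case I've missed something):

```swift
import RoomPlan

final class CaptureDelegate: NSObject, RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession,
                        didProvide instruction: RoomCaptureSession.Instruction) {
        switch instruction {
        case .moveCloseToWall:  print("Move closer to the wall")
        case .moveAwayFromWall: print("Move away from the wall")
        case .slowDown:         print("Slow down")
        case .turnOnLight:      print("Turn on a light")
        case .lowTexture:       print("Low texture")
        case .normal:           print("OK")
        @unknown default:       break
        }
    }
}
```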
Windows and doors that are vertically aligned seem to confuse the system. If I have a door in a wall with a window above it, or a lower window and an upper window, the system seems to give both the same UUID. That means that when we get updates it only adds one surface, but it randomly assigns the transform of one or the other, so the surface flicks back and forth. Sometimes one of the windows has a transform that covers both. Ideally, it would return two surfaces, one for each window or door. It works fine as long as no one tips the device up toward the higher window; after that, it can be hard to get it to register the lower door or window underneath.
When I iterate through the returned data, I can occasionally see multiple windows with the same UUID, which suggests why it's getting confused.
I have tried keeping track of the duplicated UUIDs and adding multiple windows, but the transforms seem to be broken.
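This is roughly how I'm spotting the duplicates, in case it helps anyone reproduce the issue:

```swift
import RoomPlan

// Sketch: group the updated room's windows by identifier to spot duplicates.
let grouped = Dictionary(grouping: capturedRoom.windows, by: \.identifier)
let duplicated = grouped.filter { $0.value.count > 1 }
for (id, windows) in duplicated {
    print("identifier \(id) appears \(windows.count) times")
}
```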
The object model is pretty clean but I have a serious concern.
I think you need two Object types or a way to tag a collection subset as it stands.
Let me explain.
So a fireplace is a FIXED asset. It's never going to move or change without you physically changing the wall it's attached to. To put a fireplace in the 'objects' array is a problem for me.
Other fixed assets due to building codes also come to mind.
Toilet, sink, and bathtub: if it's got running water, it had better have a drain, or we've got bigger problems. The point here is that this also makes them fixed assets. Short of changing a wall or remodeling, those objects are never moving.
A washer/dryer also has physical hookups that fix it to a location. It's not like a chair, couch, or table, for instance.
A gas/electric oven or stove has either a special 220 V plug or a natural gas line, which also means it's fixed.
I would suggest an 'attached' versus 'detached' type of Category subset or something of that nature.
I'm not saying this is the best answer but I do challenge you to 'walk thru that door' and see what's behind it. :-)
Probably the easiest fix with your current data model and data flows is to just create:
CapturedRoom.Object.Category.Attached
CapturedRoom.Object.Category.Detached
Regardless of your solution putting a fireplace in the same Object array as a chair might burn the whole thing to the ground. :-)
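As an interim client-side workaround, something like this extension is what I have in mind (my own categorization, not Apple's, and the case list reflects my reasoning above):

```swift
import RoomPlan

// Sketch: client-side split of object categories into fixed vs. movable.
extension CapturedRoom.Object.Category {
    var isAttached: Bool {
        switch self {
        case .fireplace, .sink, .toilet, .bathtub,
             .washerDryer, .stove, .oven, .dishwasher:
            return true
        default:
            return false
        }
    }
}
```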
Can't wait to see Beta 3. Great stuff so far.
We have an App that does something similar to RoomPlan. We use SceneKit to draw all the wall lines. We have noticed that RoomPlan has trouble detecting walls around 7 inches or shorter. Our app has tools to deal with this. It seems the difference in time to capture the walls of a room between our app and the RoomPlan demo app is negligible. But we could save time in our app with auto detection of all the other things like windows, doors, openings, cabinets, etc.
Are the lines you see drawn in the RoomPlan demo App SCNNodes?
If so will you ever be able to call .addNode() inside the RoomPlan framework?
If not, does RoomPlan use SpriteKit to draw?
We use an ARSCNView to keep track of all the lines in our app. Changing that member to an instance of RoomCaptureView seems like a non-starter.
Starting a new RoomCaptureSession when we're ready to scan for objects other than walls wipes all the wall lines we've previously captured.
Thanks,
Mike
Is it possible to use the RoomPlan API to scan things on a table? What's the limitation on minimum size?
For example, put some model objects on a table and scan them with an iPhone to get a quick 3D model for room or town design reference.
Can ARKit's RoomPlan detect light fixtures as objects?
Hi everyone,
In the demo code and in the video of the RoomPlan framework, UIKit is used with it. However, I would like to know if it is possible to use SwiftUI with RoomPlan instead of UIKit.
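In case it helps others asking the same thing, the standard bridge would be UIViewRepresentable. A sketch (I haven't tested the full delegate wiring):

```swift
import SwiftUI
import RoomPlan

// Sketch: wrap RoomCaptureView so it can be used inside a SwiftUI hierarchy.
struct RoomCaptureViewWrapper: UIViewRepresentable {
    func makeUIView(context: Context) -> RoomCaptureView {
        RoomCaptureView(frame: .zero)
    }
    func updateUIView(_ uiView: RoomCaptureView, context: Context) {}
}
```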
Best regards,
Clement
Is there an option to draw ceilings?
I can't seem to find any info regarding ceilings.