RoomAnchors expose several mesh classifications for mesh anchors, but only walls and floors are supported by the geometries() function.
Given that, how can I get information about the other mesh classifications?
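A possible workaround sketch, not a confirmed answer: instead of going through RoomAnchor, run visionOS's SceneReconstructionProvider with the .classification mode and read the per-face classifications from each MeshAnchor, which cover all of the MeshAnchor.MeshClassification cases (ceiling, table, seat, window, door, and so on), not just walls and floors.

import ARKit

func collectClassifiedMeshes() async throws {
    let session = ARKitSession()
    // Requires an immersive space and world-sensing permission.
    let sceneReconstruction = SceneReconstructionProvider(modes: [.classification])
    try await session.run([sceneReconstruction])

    for await update in sceneReconstruction.anchorUpdates {
        // classifications is a per-face GeometrySource when the .classification
        // mode is active; each value maps to a MeshAnchor.MeshClassification case.
        if let classifications = update.anchor.geometry.classifications {
            print("Anchor \(update.anchor.id): \(classifications.count) classified faces")
        }
    }
}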
RoomPlan
Create parametric 3D scans of rooms and room-defining objects.
Posts under RoomPlan tag
I have two questions:
1. The walls of the exported USDZ model have no thickness.
2. Can the exported model format be set to either USDZ or OBJ?
Thank you.
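On the second question, RoomPlan's own exporter only writes USD formats; a minimal sketch, assuming a CapturedRoom named finalRoom from the capture delegate:

import RoomPlan

func exportRoom(_ finalRoom: CapturedRoom) throws {
    let destination = FileManager.default.temporaryDirectory
        .appendingPathComponent("Room.usdz")
    // .parametric writes the simplified parametric model; .mesh writes the
    // scanned mesh instead. There is no OBJ option, so converting the USDZ
    // afterwards (for example with ModelIO or an external tool) is required.
    try finalRoom.export(to: destination, exportOptions: .parametric)
}

The parametric walls are modeled as thin planes, which is why they have no thickness; any thickness has to be added in post-processing.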
We have an issue with Apple RoomPlan: on a regular basis, the captured objects are not positioned correctly in the model, which happens in about 50% of our cases. That makes the feature almost useless. Is there any idea how to solve this problem?
In RealityKit on visionOS, I scan the room and use the resulting mesh to create occlusion and physical boundaries. That works well, and I can place cubes (with physics enabled) onto it.
However, I also want to update the mesh with versions from new scans, and that makes all my cubes jump.
Is there a way to prevent this? I understand that the inaccuracies will produce a slightly different mesh, and I don't want to anchor the objects, so my guess is that I need to somehow determine a fixed floor height and alter the scanned meshes so they adhere to that fixed height.
Any thoughts or ideas appreciated.
/Andreas
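One idea, sketched under the poster's own assumption of a fixed floor height (align is a hypothetical helper, not an API): measure the lowest point of each newly generated mesh entity and shift it so that point lands on the stored floor level, so successive scans at least agree on where the floor is.

import RealityKit

func align(meshEntity: ModelEntity, toFloorY fixedFloorY: Float) {
    // visualBounds(relativeTo: nil) returns the world-space bounding box.
    let bounds = meshEntity.visualBounds(relativeTo: nil)
    // Shift the entity so its lowest point sits on the fixed floor height.
    meshEntity.position.y += fixedFloorY - bounds.min.y
}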
Is it possible to create a RoomPlan model that includes the texture of the room's walls, or is there some way to combine ObjectCapture results with RoomPlan results?
Since moving up to iOS 18 last week, I am getting an indication that there was a significant drop in IMU data being sent. Using the search capability, I can find very little information in the developer documentation about what the cause is and how to remedy it. Is there some documentation repository, like Tech Notes, that will tell me what I need to know to get going again? What additional sources of documentation are available for developers? The search engine used for the developer documentation just does not cut it, because it delivers a lot of useless entries that have no obvious relevance to my search terms.
"2024-06-20 19:27:00.669334-0500 RoomPlanExampleApp[902:299709] [Technique] ARWorldTrackingTechnique <0x104bf6d80>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 53902.785827, Last known timestamp: 53901.416828, Delta: 1.368999, System timestamp: 53902.786251, Delta between system and frame: 0.000423. }"
In the WWDC23 video on the RoomPlan enhancements, it says that it is now possible to set a custom ARSession for the RoomCaptureSession. But how do you actually set the configuration for the custom ARSession?
init() {
    // Build a custom configuration, though it is never applied anywhere yet.
    let arConfig = ARWorldTrackingConfiguration()
    arConfig.worldAlignment = .gravityAndHeading

    arSession = ARSession()
    roomCaptureView = RoomCaptureView(frame: CGRect(x: 0, y: 0, width: 42, height: 42),
                                      arSession: arSession)
    sessionConfig = RoomCaptureSession.Configuration()

    // The compiler error points here: self is accessed before initialization completes.
    roomCaptureView.captureSession.delegate = self
    roomCaptureView.delegate = self
}
However, I keep getting an error that self is being used in the property access before being initialized.
What can I do to fix it?
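For reference, a minimal sketch of one common fix, assuming the init belongs to a UIViewController as in Apple's sample: let the stored properties initialize first, and only touch self once initialization is complete (for example in viewDidLoad). Whether RoomPlan honors a configuration run on the custom ARSession is not something I can confirm.

import ARKit
import RoomPlan
import UIKit

final class ScanViewController: UIViewController, RoomCaptureViewDelegate, RoomCaptureSessionDelegate {
    private let arSession = ARSession()
    private let sessionConfig = RoomCaptureSession.Configuration()
    private var roomCaptureView: RoomCaptureView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Unverified assumption: running the custom configuration yourself;
        // RoomPlan may re-run or override it when scanning starts.
        let arConfig = ARWorldTrackingConfiguration()
        arConfig.worldAlignment = .gravityAndHeading
        arSession.run(arConfig)

        // self is fully initialized here, so these assignments compile.
        roomCaptureView = RoomCaptureView(frame: view.bounds, arSession: arSession)
        roomCaptureView.captureSession.delegate = self
        roomCaptureView.delegate = self
        view.addSubview(roomCaptureView)

        // The RoomPlan configuration is applied when the capture session runs.
        roomCaptureView.captureSession.run(configuration: sessionConfig)
    }
}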
Hi there,
I would like the user to be able to tap a wall that has been highlighted as scanned (the white outline) and see basic information about that wall (in a pop-up modal view) without being taken out of the scan session.
As a first step, though, I'd simply like to tap a scanned wall while still in the session and log the data about that CapturedRoom.Surface with NSLog.
I'm storing the CapturedRoom on each update of the session using the RoomCaptureSessionDelegate, and I have added a UITapGestureRecognizer to the room capture view.
However, I've tried a number of approaches (hit testing, ray casting) and I'm unable to target the wall behind the user's tap gesture.
Any advice would be appreciated, even if it is just the principle of how to achieve this.
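A rough sketch of one possible route, not a confirmed technique (capturedRoom and roomCaptureView are the poster's stored properties, and captureSession.arSession assumes iOS 17): ray-cast the tap through the underlying ARSession against vertical planes, then pick the stored wall whose transform is nearest to the hit point. This would live inside the view controller that owns the gesture recognizer:

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    guard let capturedRoom,   // latest CapturedRoom from the session delegate
          let frame = roomCaptureView.captureSession.arSession.currentFrame
    else { return }

    // Convert the tap from view coordinates to normalized image coordinates.
    let viewSize = roomCaptureView.bounds.size
    let tap = gesture.location(in: roomCaptureView)
    let displayTransform = frame.displayTransform(for: .portrait, viewportSize: viewSize)
    let normalized = CGPoint(x: tap.x / viewSize.width, y: tap.y / viewSize.height)
        .applying(displayTransform.inverted())

    // Ray-cast against estimated vertical planes and take the first hit.
    let query = frame.raycastQuery(from: normalized, allowing: .estimatedPlane, alignment: .vertical)
    guard let hit = roomCaptureView.captureSession.arSession.raycast(query).first else { return }
    let hitPosition = SIMD3<Float>(hit.worldTransform.columns.3.x,
                                   hit.worldTransform.columns.3.y,
                                   hit.worldTransform.columns.3.z)

    // Pick the wall whose transform translation is closest to the hit.
    func center(_ surface: CapturedRoom.Surface) -> SIMD3<Float> {
        SIMD3(surface.transform.columns.3.x, surface.transform.columns.3.y, surface.transform.columns.3.z)
    }
    if let wall = capturedRoom.walls.min(by: { simd_distance(center($0), hitPosition) < simd_distance(center($1), hitPosition) }) {
        NSLog("Tapped wall surface: %@", String(describing: wall))
    }
}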
Hi Team,
Is there a way to extract a colorized scan as well when using the RoomPlan SDK? If yes, can you point me to the right reference link?
Does the RoomPlan SDK provide the dimensions of the room?
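On the dimensions question: as far as I can tell there is no single room-dimensions property, but each CapturedRoom.Surface carries a dimensions vector. A small sketch (capturedRoom is assumed to come from the session delegate; the floors property requires iOS 17):

import RoomPlan

func logDimensions(of capturedRoom: CapturedRoom) {
    for wall in capturedRoom.walls {
        // dimensions.x is the surface width and dimensions.y its height, in meters.
        print("Wall \(wall.identifier): \(wall.dimensions.x) m × \(wall.dimensions.y) m")
    }
    for floor in capturedRoom.floors {
        print("Floor \(floor.identifier): \(floor.dimensions.x) m × \(floor.dimensions.y) m")
    }
}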
Is there a way to access the coordinates of where the camera is while scanning the room with RoomPlan?
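A small sketch of one way this could work, assuming iOS 17's RoomCaptureSession.arSession property: the camera pose is readable from the underlying ARSession's current frame while the scan runs.

import ARKit
import RoomPlan

func currentCameraPosition(in session: RoomCaptureSession) -> SIMD3<Float>? {
    guard let frame = session.arSession.currentFrame else { return nil }
    // The last column of the camera transform is its world-space position, in meters.
    let t = frame.camera.transform.columns.3
    return SIMD3<Float>(t.x, t.y, t.z)
}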
Hello, I am trying to make an app that involves room scanning and then placing imaginary objects in the room. I have two questions about the specifics behind this.
Is it possible for RoomPlan to include the ceiling when scanning the room?
Is it possible to place objects in AR while RoomPlan is running, or is it necessary to wait until after the scan is done?
This is one of the files being looked for during initialization of the RoomPlan WWDC demo package, but it cannot be found since moving to iOS 18.0; it is not anywhere since the upgrade.
The reference is: 2024-06-18 16:03:36.871062-0500 RoomPlanExampleApp[860:159744] [loading] Unable to create bundle at URL (file:///System/Library/CoreServices/SystemVersion.bundle): does not exist or not a directory (0)
Is it possible to access the RoomPlan API from Objective-C? I cannot figure out how to include the RoomPlan framework in some legacy Objective-C code I have. I can include the RoomPlan.h header, but it still does not recognize any of the API classes. I also could not figure out whether there is a way to use RoomPlan-Swift.h to expose the API to the Objective-C code.
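As far as I know, RoomPlan ships as a Swift-only framework, so there is no usable Objective-C interface to import directly. A common pattern (sketched here with a hypothetical wrapper class, not an official bridge) is to wrap the Swift API in an @objc class and expose that to the legacy code through the target's generated -Swift.h header:

import RoomPlan

@objc public final class RoomPlanBridge: NSObject {
    // Exposes RoomPlan capability checking to Objective-C callers.
    @objc public static var isSupported: Bool {
        RoomCaptureSession.isSupported
    }
}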
Hello! I want to create an indoor mapping application in Swift, using the LiDAR scanner. I searched among frameworks and found that ARKit, RealityKit, and RoomPlan would be useful. What is the proper way to create a 2D indoor mapping app? And what is the proper way to create a 3D indoor mapping app? Are there any modifications I have to make to my code in order to have both?
I'd like to be able to associate some data with each CapturedRoom scan and maintain those associations when CapturedRooms are combined in a CapturedStructure.
For example, in the delegate method captureView(didPresent:error:), I'd like to associate external data with the CapturedRoom. That's easy enough to do with a Swift dictionary, using the CapturedRoom's identifier as the key to the associated data.
However, when I assemble a list of CapturedRooms into a CapturedStructure using StructureBuilder.init(from:), the rooms in the output CapturedStructure have different identifiers so their associations to the external data are lost.
Is there any way to track or identify the CapturedRoom objects that are input into a StructureBuilder and match them to the rooms in the CapturedStructure output? I looked for something like a "userData" property on CapturedRoom that might be preserved, but couldn't find one. And since the room identifiers change when the rooms are built into a CapturedStructure, I don't see an obvious way to do this.
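Lacking such a property, a heuristic workaround sketch (matchRooms is hypothetical, and it assumes StructureBuilder does not move rooms too far from their input poses, which may not hold): match input and output rooms geometrically, for example by the translation of their first floor surface.

import RoomPlan
import simd

func matchRooms(inputs: [CapturedRoom], outputs: [CapturedRoom]) -> [UUID: UUID] {
    func center(_ room: CapturedRoom) -> SIMD3<Float>? {
        guard let floor = room.floors.first else { return nil }
        let t = floor.transform.columns.3
        return SIMD3<Float>(t.x, t.y, t.z)
    }
    var mapping: [UUID: UUID] = [:]   // input identifier -> output identifier
    for input in inputs {
        guard let c = center(input) else { continue }
        // Pick the output room whose floor center is nearest to the input's.
        let best = outputs.compactMap { room in center(room).map { (room, simd_distance($0, c)) } }
            .min { $0.1 < $1.1 }
        if let best { mapping[input.identifier] = best.0.identifier }
    }
    return mapping
}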
It has been a while since I looked at RoomPlan. While poking at CapturedRoom, I noticed that there is binary data called coreModel. Does anyone know what this is? Is it related to ModelProvider?
Thanks in advance.
My goal is to modify CapturedRooms and load them back into the StructureBuilder to generate a new CapturedStructure.
Since CapturedRooms cannot be modified directly, I stored them as JSON, modified the parameters (e.g., switching object categories), and deserialized them back into CapturedRoom objects. So far so good: the objects load correctly. But when I pass them to capturedStructure(), all the original parts of the CapturedRooms are used.
As some of you may have already noticed, there is an undocumented CoreModel stored in CapturedRooms when you export them in JSON format. It seems that the structure builder only uses this CoreModel to compose the output.
So here is my question to the forum:
Does anybody know a way to edit a CapturedRoom so that the StructureBuilder respects those changes and composes a new structure that includes them?
Hello,
I am working on an AR application to visualize a life-size room. I am working with Unity 2023.3, the Apple ARKit XR Plugin 6.0.0-pre.8, and a 2021 5th-generation iPad.
First I scan a room with RoomPlan to get a USDZ file. I open it with Blender to make sure I have the right data (I do) and export it to FBX for use in Unity.
Then I import the FBX into Unity and use it as a prefab to instantiate when I tap on a detected floor.
I build my application in Unity, then in Xcode, to use it on my iPad. But when the room is displayed, it is way too small.
I tried adding a slider to scale up the room's GameObject, and I added a plugin to visualize my Unity scene in the built application. The room scales up in the Unity scene but not in the application.
Has anyone ever had this issue, and if so, how did you fix it?
Best regards,
Angel Garcia
In larger scenes, I need to record motion trajectories. RoomCaptureSession always starts from (0,0,0), so I use the last tracked point as an offset to connect multiple trajectory segments, just like StructureBuilder merging models.
But when StructureBuilder merges, it eliminates some of the models, which makes my saved trajectory points lose accuracy, and I cannot know how much scene size was eliminated between them.
Is there any way you can help me?
invalidValue(-nan, Swift.EncodingError.Context(codingPath: [CapturedVolumeCodingKeys(stringValue: "rooms", intValue: nil), _JSONKey(stringValue: "Index 0", intValue: 0), CapturedVolumeCodingKeys(stringValue: "openings", intValue: nil), _JSONKey(stringValue: "Index 0", intValue: 0), CodingKeys(stringValue: "dimensions", intValue: nil), _JSONKey(stringValue: "Index 0", intValue: 0)], debugDescription: "Unable to encode Float.nan directly in JSON.", underlyingError: nil))
Why does this exception occur during encoding? All of the scan data comes from a CapturedRoom and has not been modified.
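One guess at the cause, plus a stopgap sketch (an assumption, not a confirmed diagnosis): one of the opening dimensions appears to be NaN, and JSONEncoder refuses non-conforming floats by default. Configuring a nonConformingFloatEncodingStrategy at least lets the encode succeed so the offending value becomes visible in the output:

import Foundation

let encoder = JSONEncoder()
encoder.nonConformingFloatEncodingStrategy = .convertToString(
    positiveInfinity: "inf", negativeInfinity: "-inf", nan: "nan")
// let data = try encoder.encode(capturedStructure)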