Hi, is it possible to put text data or other information (wall color, for example) into RoomPlan API elements and export it to USDZ after a LiDAR scan?
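A partial answer based on what the framework exposes (a sketch, not an official workflow): the USDZ that RoomPlan writes carries no custom attributes, but CapturedRoom is Codable and every surface and object has a stable identifier, so you can export the model and keep your annotations (e.g., wall colors) in a sidecar JSON keyed by those identifiers:

import Foundation
import RoomPlan

func exportRoom(_ room: CapturedRoom, to folder: URL, wallColors: [UUID: String]) throws {
    // Export the scanned geometry itself.
    try room.export(to: folder.appendingPathComponent("Room.usdz"), exportOptions: .parametric)
    // Keep the full parametric description alongside it.
    try JSONEncoder().encode(room).write(to: folder.appendingPathComponent("Room.json"))
    // Store custom data keyed by surface identifier (UUID keys stringified for plain JSON).
    let colors = Dictionary(uniqueKeysWithValues: wallColors.map { ($0.key.uuidString, $0.value) })
    try JSONEncoder().encode(colors).write(to: folder.appendingPathComponent("WallColors.json"))
}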
RoomPlan
Create parametric 3D scans of rooms and room-defining objects.
Posts under RoomPlan tag
First, I scan the first room using the RoomPlan API. Because I need to scan a second room, I stop the capture with captureSession.stop(pauseARSession: false); I believe the ARSession keeps running at that point.
Second, before scanning the next room, I want to run another ARView (in order to detect some objects in the first room that RoomPlan does not detect).
But at this point the second ARView (RoomPlan already hosts an ARView internally, I think) always shows a black screen and never works normally. This is the problem I want to resolve. Please help me get the second ARView working.
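One thing that may explain the black screen: a second ARView spins up its own ARSession, and two sessions can't own the camera at once. A sketch, assuming iOS 17+ (where RoomCaptureSession exposes its arSession), that hands RoomPlan's still-running session to an ARSCNView instead of letting a new view create its own:

import ARKit
import RoomPlan

// Sketch: reuse the session kept alive by captureSession.stop(pauseARSession: false)
// instead of letting a second view create (and fight over) its own ARSession.
func makeObjectDetectionView(for captureSession: RoomCaptureSession) -> ARSCNView {
    let sceneView = ARSCNView(frame: .zero)
    sceneView.session = captureSession.arSession   // iOS 17+: RoomPlan's underlying session
    return sceneView
}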
Is it possible to modify or mark elements in the room plan model generated by the framework?
Can an app made with the RoomPlan API be used on iPhones without LiDAR? If so, how much accuracy would be lost compared to iPhones with LiDAR?
If not, is there an API similar to RoomPlan that works on iPhones without LiDAR?
I am currently working on creating a virtual interior design app.
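For reference, RoomPlan requires a LiDAR-equipped device; there is no degraded non-LiDAR mode. A minimal runtime check looks like this:

import RoomPlan

if RoomCaptureSession.isSupported {
    // LiDAR available: start a RoomPlan scan.
} else {
    // No LiDAR: fall back to a non-RoomPlan flow (e.g., ARKit plane detection or manual input).
}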
Hello everyone!
I'm developing an app with this amazing RoomPlan API. So far everything works great, but I want to render a customized viewer, and I don't want to draw the default 3D geometry; the line shapes are very helpful, so I want to keep them. Is there any way to disable the 3D geometry shown in the viewer? I could render another viewer on top of the room capture viewer, but I want to save as many resources as possible.
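As far as I know, RoomCaptureView doesn't expose a switch for its built-in rendering, so one option (a sketch under that assumption) is to drop down to RoomCaptureSession and draw only the line shapes yourself from the live model updates:

import RoomPlan

// Sketch: skip RoomCaptureView entirely and feed live CapturedRoom updates to your own renderer.
final class CustomScanController: RoomCaptureSessionDelegate {
    let captureSession = RoomCaptureSession()

    func start() {
        captureSession.delegate = self
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Draw only what you need (e.g., wall and opening edges as line geometry)
        // in a custom Metal/SceneKit view instead of RoomPlan's default 3D model.
    }
}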
Hi,
We’ve been successfully using the RoomPlan API in our application for over two years. Recently, however, users have reported encountering persistent capture errors during their sessions. Specifically, the errors observed are:
CaptureError.worldTrackingFailure
CaptureError.exceedSceneSizeLimit
What we have observed:
Persistent Errors: The errors continue to occur even after initiating new capture sessions.
Normal Usage: Our implementation adheres to typical usage patterns of the RoomPlan API without exceeding any documented room size limits.
Limited Feature Usage: We are not using world tracking or the StructureBuilder functionality to stitch rooms together.
Potential State Caching: Given that these errors persist across sessions, we suspect that there might be memory or state cached between sessions that is not being cleared, particularly since we are not taking advantage of StructureBuilder.
Request:
Could you please advise if there is any internal caching or memory retention between capture sessions that might lead to these errors? Additionally, we would appreciate guidance on how to clear or manage this state when the StructureBuilder feature is not in use.
Here is a generalised version of our capture session initialization code to help diagnose the issue.
struct RoomARCaptureView: UIViewRepresentable {
    typealias Handler = (CapturedRoom, Error?) -> Void

    @Binding var stop: Bool
    @Binding var done: Bool
    let completion: Handler?

    func makeUIView(context: Self.Context) -> RoomCaptureView {
        let view = RoomCaptureView(frame: .zero)
        view.delegate = context.coordinator
        view.captureSession.run(configuration: .init())
        return view
    }

    func updateUIView(_ uiView: RoomCaptureView, context: Self.Context) {
        if stop {
            // Stop the session only once; stopping multiple times causes issues with the final presentation.
            uiView.captureSession.stop()
            stop = false
            done = true
        }
    }

    static func dismantleUIView(_ uiView: RoomCaptureView, coordinator: Self.Coordinator) {
        uiView.captureSession.stop()
    }

    func makeCoordinator() -> ARViewCoordinator {
        ARViewCoordinator(completion)
    }

    @objc(ARViewCoordinator)
    class ARViewCoordinator: NSObject, RoomCaptureViewDelegate {
        var completion: Handler?

        public required init?(coder: NSCoder) {}
        public func encode(with coder: NSCoder) {}

        public init(_ completion: Handler?) {
            super.init()
            self.completion = completion
        }

        public func captureView(shouldPresent roomDataForProcessing: CapturedRoomData, error: Error?) -> Bool {
            return true
        }

        public func captureView(didPresent processedResult: CapturedRoom, error: Error?) {
            completion?(processedResult, error)
        }
    }
}
Thank you for your assistance.
Hello,
We are using the RoomPlan API, and our users are facing issues during scanning every time.
Error: RoomCaptureSession.CaptureError.exceedSceneSizeLimit
Apple Documentation Explanation: An error that indicates when the scene size grows past the framework’s limitations.
Issue: This error pops up on my iPhone 15 Pro (128 GB) after just ONE RoomPlan scan is done. The error shows up even if the room is small. It occurs immediately after I start the RoomCaptureSession, following relocalization of the previous AR session (with a world-tracking configuration). I am having trouble understanding exactly why this error appears and how to debug or resolve it.
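Not a confirmed root cause, but since the error appears right after relocalization, one thing worth trying is fully resetting tracking state on the shared session before the next capture. A sketch, assuming iOS 17+ where the session is exposed as captureSession.arSession:

import ARKit
import RoomPlan

// Sketch only: clear accumulated world-tracking state before the next scan.
func resetAndRestart(_ captureSession: RoomCaptureSession) {
    let configuration = ARWorldTrackingConfiguration()
    captureSession.arSession.run(configuration, options: [.resetTracking, .removeExistingAnchors])
    captureSession.run(configuration: RoomCaptureSession.Configuration())
}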
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.
Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
User successfully scans first room with multiple hotspots (working correctly)
User stops scanning, moves to a new room
In the new room, first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots
Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors
Steps to Reproduce:
Start room scan
Complete multiple hotspot captures in first room
Stop scanning
Start new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes
Attempted Solutions:
Implemented anchor cleanup between sessions
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations
Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight
Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.
Crash Log
Code and full crash logs can be provided if needed.
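Not a confirmed fix, but since the assertion points at an anchor with an invalid SLAM pose, a conservative teardown between rooms (a sketch, names ours) may avoid feeding stale anchors into the new scan:

import ARKit

// Sketch: fully reset the shared session and scene graph between room scans.
func resetBetweenRooms(_ sceneView: ARSCNView) {
    sceneView.session.pause()
    // Drop the guidance hotspots and any other nodes left from the previous room.
    sceneView.scene.rootNode.childNodes.forEach { $0.removeFromParentNode() }
    // Restart tracking from scratch so no anchor carries a pose from the old map.
    let configuration = ARWorldTrackingConfiguration()
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}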
Hi everyone,
I’m currently developing an app using Apple’s RoomPlan framework, and so far, everything is working great! However, I’d like to extend the functionality to include scanning smaller objects, such as light switches or power outlets, in addition to the walls and larger furniture that RoomPlan already supports.
From what I understand, based on the documentation, RoomPlan doesn’t natively support the detection or measurement of smaller objects like these. Is that correct?
If that’s the case, does anyone have suggestions or ideas on how this could be achieved? Perhaps by integrating another framework or technology alongside RoomPlan?
I’d appreciate any insights or advice from those who have worked on similar use cases.
Thanks in advance!
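Correct, RoomPlan's built-in catalog stops at furniture-scale categories. One common workaround (a sketch, not an official recipe) is to run your own detector on camera frames from the ARSession RoomPlan is driving; "OutletDetector" here stands in for a hypothetical Core ML model you would supply:

import ARKit
import Vision

// Sketch: run a custom detector on an ARFrame from RoomPlan's underlying ARSession.
// "model" wraps a hypothetical Core ML network trained on switches/outlets.
func detectSmallFixtures(in frame: ARFrame, using model: VNCoreMLModel) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        let observations = request.results as? [VNRecognizedObjectObservation] ?? []
        for observation in observations {
            // Bounding boxes are normalized; raycast through their centers to get 3D positions.
            print(observation.labels.first?.identifier ?? "?", observation.boundingBox)
        }
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, orientation: .right)
    try? handler.perform([request])
}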
When collecting data with RoomPlan and processing it with StructureBuilder, the app crashes.
Crash thread:
RoomScanCore.offlineFloorPlanGeneration
How should I deal with this issue? I’ve already implemented crash capture, but no crash was logged—the app just crashes directly.
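For reference, a minimal sketch of a defensive StructureBuilder call. Note that a thrown error is catchable here, but a crash inside RoomScanCore itself happens below Swift error handling, which is why a crash reporter may log nothing; that usually warrants a bug report with the input data attached:

import RoomPlan

func mergeRooms(_ rooms: [CapturedRoom]) async -> CapturedStructure? {
    let builder = StructureBuilder(options: [.beautifyObjects])
    do {
        return try await builder.capturedStructure(from: rooms)
    } catch {
        print("StructureBuilder failed: \(error)")
        return nil
    }
}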
RoomScanCore.offlineFloorPlanGeneration
Lately I've been getting a lot of crashes on iOS 18 devices (mostly 13 Pro devices, but also a 16 Pro Max). Has anyone encountered similar issues, and is there more information about offlineFloorPlanGeneration?
com.apple.RoomScanCore.offlineFloorPlanGeneration
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000f5cab18e03c0
RoomScanCore RSFrameFromDictionary + 110992
I created an app where the user:
first scans a room using RoomCaptureView (RoomPlan)
then taps on physical elements (objects, walls...) using an ARView to record some 3d positions
I can handle taps in an ARView using a UITapGestureRecognizer and the ARView raycast(from:, allowing:, alignment:) method. This works fine, so I thought I could do the same with the ARView used by RoomCaptureView, letting the user scan a room and record some 3D positions at the same time. Sadly, this approach does not work, as the raycast method always returns nil.
What I actually need is mapping a tap on screen to a real-world position during RoomCaptureSession.
Does anyone know how to do this?
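One approach that may work (a sketch, assuming iOS 17+ where RoomCaptureSession exposes its ARSession) is to raycast through the session's current frame instead of the view. Caveat: ARFrame.raycastQuery(from:allowing:alignment:) expects normalized image coordinates, so the screen tap point must first be converted, e.g. via the inverse of ARFrame.displayTransform(for:viewportSize:):

import ARKit
import RoomPlan

// Sketch: map a point (in normalized image space) to a world position mid-capture.
func worldPosition(atNormalizedPoint point: CGPoint, in captureSession: RoomCaptureSession) -> SIMD3<Float>? {
    guard let frame = captureSession.arSession.currentFrame else { return nil }
    let query = frame.raycastQuery(from: point, allowing: .estimatedPlane, alignment: .any)
    guard let result = captureSession.arSession.raycast(query).first else { return nil }
    let translation = result.worldTransform.columns.3
    return SIMD3<Float>(translation.x, translation.y, translation.z)
}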
Hi all,
Our app allows a user to scan a room and then save that scan on a separate view, followed by additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.
The only way I have found that meets our requirements is to save the RoomCaptureView and re-use it whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to keep a View alive in something like a singleton. We are using captureSession.stop(pauseARSession: false). Additionally, if we reuse the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs.
These instructions also seem to be separate from the ones we can grab from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant: RoomCaptureCoachingOverlayView and ARGlyphView, but both are not public, so we can't force them to appear. We also attempted a number of other things to get these subviews to appear, such as layoutIfNeeded().
Saving the ARSession and creating a new view with it via RoomCaptureView(frame: viewBounds, arSession: arSession) seems much more ideal, as that solves the above issues. But we run into another problem: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-running ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.
Is there any way around needing to use the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
Hello,
I’m working with the RoomPlan API to capture and export 3D room models in an iOS app. My goal is to implement the RoomCaptureSessionDelegate protocol to handle the end of a room capture session. However, I’m encountering a cautionary warning in Xcode:
Warning: "captureSession(_:didEndWith:error:) nearly matches defaulted requirement captureSession(_:didEndWith:error:) of protocol RoomCaptureSessionDelegate."
I’ve verified that my delegate method signature matches the protocol, but the warning persists. I suspect this might be due to minor discrepancies in the parameter types or naming conventions required by the protocol.
Below is my full RoomScanner.swift file:
import RoomPlan
import SwiftUI
import UIKit
func exportModelToFiles() {
    let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
    let documentPicker = UIDocumentPickerViewController(forExporting: [exportURL])
    documentPicker.modalPresentationStyle = .formSheet
    if let windowScene = UIApplication.shared.connectedScenes.first as? UIWindowScene,
       let rootViewController = windowScene.windows.first?.rootViewController {
        rootViewController.present(documentPicker, animated: true, completion: nil)
    }
}

class RoomScanner: ObservableObject {
    var roomCaptureSession: RoomCaptureSession?
    private let sessionConfig = RoomCaptureSession.Configuration()
    private var capturedRoom: CapturedRoom?

    func startSession() {
        roomCaptureSession = RoomCaptureSession()
        roomCaptureSession?.delegate = self
        roomCaptureSession?.run(configuration: sessionConfig)
    }

    func stopSession() {
        roomCaptureSession?.stop()
        guard let room = capturedRoom else {
            print("No room data available for export.")
            return
        }
        let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
        do {
            try room.export(to: exportURL)
            print("3D floor plan exported to \(exportURL)")
        } catch {
            print("Error exporting room model: \(error)")
        }
    }
}

// MARK: - RoomCaptureSessionDelegate
extension RoomScanner: RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Handle real-time updates if necessary
    }

    func captureSession(_ session: RoomCaptureSession, didEndWith capturedRoom: CapturedRoom, error: Error?) {
        if let error = error {
            print("Capture session ended with error: \(error.localizedDescription)")
        } else {
            self.capturedRoom = capturedRoom
        }
    }
}
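For what it's worth, the "nearly matches" warning typically means the parameter types differ from the protocol's: RoomCaptureSessionDelegate's end-of-session callback delivers raw CapturedRoomData rather than a CapturedRoom. A sketch of the matching signature, using RoomBuilder to produce the final model:

func captureSession(_ session: RoomCaptureSession, didEndWith data: CapturedRoomData, error: Error?) {
    if let error {
        print("Capture session ended with error: \(error.localizedDescription)")
        return
    }
    Task {
        // RoomBuilder turns the raw capture data into a CapturedRoom suitable for export.
        self.capturedRoom = try? await RoomBuilder(options: [.beautifyObjects]).capturedRoom(from: data)
    }
}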
Hello everyone,
I'm developing an app using RoomPlan where users can capture several rooms individually and assign custom names to each room. When I merge these captured rooms into a single CapturedStructure using the StructureBuilder, I want the merged structure to retain the original sections with their custom labels.
However, I'm encountering an issue:
Problem: After merging, the CapturedStructure's sections have different identifiers (UUIDs) compared to the original sections in the individual CapturedRooms. This means there's no straightforward way to map the custom labels from the original rooms to the sections in the merged structure.
What I've Tried:
Mapping by Identifiers: Attempted to map custom labels using the sections' identifiers, but the identifiers change during the merge, making this ineffective.
Mapping by Positions: Tried matching sections based on their center positions (simd_float3), but these change.
Modifying Section Labels: Considered changing the label property of the sections before merging, but the label is read-only and cannot be modified directly, and merging operates on the core model rather than on the JSON-editable data.
Question:
Is there a recommended way to preserve or map custom section labels when merging multiple CapturedRooms into a single CapturedStructure using RoomPlan? How can I ensure that the custom labels assigned to rooms before the merge are correctly associated with the corresponding sections after the merge?
Any guidance or suggestions would be greatly appreciated!
Thank you!
I stumbled across the function setWorldOrigin(relativeTransform:) from the ARSession which is documented here:
https://developer.apple.com/documentation/arkit/arsession/2942278-setworldorigin
I made a custom ARSession subclass where I override this function to print and modify the relativeTransform parameter. The print shows that this function is called with an updated relativeTransform value, but it seems to have no impact on, for example, the world origin when starting or continuing a scan, the tiny puppet house in RoomPlan, or any tracking position I get from ARKit.
Has anybody experience with this method or knows what parts are influenced by setWorldOrigin()?
I'm using the Apple RoomPlan sdk to generate a .usdz file, which works fine, and gives me a 3D scan of my room.
But when I try to use Model I/O's MDLAsset to convert that output into an .obj file, it comes out as a completely flat model shaped like a rectangle. Here is my Swift code:
let destinationURL = destinationFolderURL.appending(path: "Room.usdz")
do {
    try FileManager.default.createDirectory(at: destinationFolderURL, withIntermediateDirectories: true)
    try finalResults?.export(to: destinationURL, exportOptions: .model)

    let asset = MDLAsset(url: destinationURL)
    let objURL = destinationFolderURL.appending(path: "Room.obj")
    try asset.export(to: objURL)
}
Not sure what's wrong here. According to MDLAsset documentation, .obj is a supported format and exporting from .usdz to the other formats like .stl and .ply works fine and retains the original 3D shape.
Some things I've tried:
changing "exportOptions" to parametric, mesh, or model.
simply changing the file extension of "destinationURL" (throws error)
Hi, from the 2023 WWDC video on RoomPlan, they mention that it should be possible to integrate photo / video with RoomPlan: https://developer.apple.com/videos/play/wwdc2023/10192/ (at ~2:30)
However, when I attempt to use AVFoundation and AVCaptureSession with RoomPlan, I get the simple error of "Cannot Record".
So I'm not sure if there is something wrong with my setup/code, or if these two libraries are actually incompatible. Are there any guides for doing things like this? Am I going in the right direction, or should I try a different approach? Happy to share code if necessary. Thanks!
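In case it helps others: "Cannot Record" is consistent with AVCaptureSession being unable to open the camera while ARKit holds it, so the two won't run side by side. The integration the WWDC23 session describes, as I understand it, is taking stills from the ARSession RoomPlan already runs. A sketch, assuming iOS 17+ for captureSession.arSession:

import ARKit
import RoomPlan
import UIKit
import VideoToolbox

// Sketch: grab a still image from RoomPlan's underlying ARSession instead of AVCaptureSession.
func snapshot(from captureSession: RoomCaptureSession) -> UIImage? {
    guard let pixelBuffer = captureSession.arSession.currentFrame?.capturedImage else { return nil }
    var cgImage: CGImage?
    VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
    return cgImage.map { UIImage(cgImage: $0) }
}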
Hello everyone, is it possible to embed a USDZ file (from RoomPlan) in an HTML5 website and view it in 3D? Or do I need to convert the USDZ file into a glTF file?