RoomPlan


Create parametric 3D scans of rooms and room-defining objects.

Posts under RoomPlan tag

56 Posts
RoomCaptureSession with ARSCNView crashes when scanning multiple hotspots across different rooms
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.

Bug Description: The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:

- User successfully scans the first room with multiple hotspots (working correctly)
- User stops scanning and moves to a new room
- In the new room, the first 1-2 hotspots work correctly
- Application crashes when attempting to scan additional hotspots

Technical Details:

- Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
- Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
- Error suggests invalid positioning when placing AR anchors

Steps to Reproduce:

1. Start room scan
2. Complete multiple hotspot captures in the first room
3. Stop scanning
4. Start a new room scan
5. Capture 1-2 hotspots successfully
6. Attempt additional hotspot captures -> crashes

Attempted Solutions:

- Implemented anchor cleanup between sessions
- Added position validation before anchor placement
- Implemented ARSession error handling
- Added proper thread management for AR operations

Environment:

- Device: iPhone 14 Pro (LiDAR equipped)
- iOS Version: 18.1.1 (22B91)
- Testing through TestFlight

Crash Log Details:

```
Exception Type:  EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note:  EXC_CORPSE_NOTIFY
Triggered by Thread: 27

Thread 27 Crashed:
0   libsystem_kernel.dylib    0x00000001f0cc91d4 __pthread_kill + 8
1   libsystem_pthread.dylib   0x0000000228e12ef8 pthread_kill + 268
2   libsystem_c.dylib         0x00000001a86bbad8 abort + 128
3   AppleCV3D                 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
```

Question: Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.

Code and full crash logs can be provided if needed.
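For what it's worth, here is a minimal sketch of one mitigation to try, not a confirmed fix: drop every custom guidance anchor and restart tracking with reset options between rooms, so no anchor can be re-added with a stale pose. It assumes the app owns the ARSCNView's session and uses the iOS 17 RoomCaptureSession(arSession:) initializer; the type and property names (RoomScanCoordinator, guidanceAnchors) are placeholders for the app's own state.

```swift
import ARKit
import RoomPlan

final class RoomScanCoordinator {
    let arView = ARSCNView(frame: .zero)
    var captureSession: RoomCaptureSession?
    var guidanceAnchors: [ARAnchor] = []   // the custom 3D-dot anchors

    func finishRoom() {
        captureSession?.stop()
        // Remove every hotspot anchor we placed ourselves.
        guidanceAnchors.forEach { arView.session.remove(anchor: $0) }
        guidanceAnchors.removeAll()
    }

    func startNextRoom() {
        // Re-run the underlying ARSession with reset options before the
        // next capture, so no stale SLAM state survives into the new room.
        let config = ARWorldTrackingConfiguration()
        arView.session.run(config, options: [.resetTracking, .removeExistingAnchors])

        // iOS 17+: share the view's ARSession with the capture session.
        captureSession = RoomCaptureSession(arSession: arView.session)
        captureSession?.run(configuration: RoomCaptureSession.Configuration())
    }
}
```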
Replies: 2 · Boosts: 1 · Views: 331 · Activity: 3w

Scanning Smaller Objects with RoomPlan (Light Switches or Sockets)
Hi everyone, I’m currently developing an app using Apple’s RoomPlan framework, and so far, everything is working great! However, I’d like to extend the functionality to include scanning smaller objects, such as light switches or power outlets, in addition to the walls and larger furniture that RoomPlan already supports. From what I understand, based on the documentation, RoomPlan doesn’t natively support the detection or measurement of smaller objects like these. Is that correct? If that’s the case, does anyone have suggestions or ideas on how this could be achieved? Perhaps by integrating another framework or technology alongside RoomPlan? I’d appreciate any insights or advice from those who have worked on similar use cases. Thanks in advance!
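One direction that might work (a sketch, not a confirmed solution): keep RoomPlan for the room shell and run a separately trained Core ML object detector on the AR frames to pick up switches and outlets, then raycast to place them in 3D. The detector model is hypothetical; RoomPlan itself does not detect objects this small, so you would have to train one yourself.

```swift
import ARKit
import Vision

final class SmallObjectDetector: NSObject, ARSessionDelegate {
    private let request: VNCoreMLRequest

    // `model` would wrap a custom-trained detector (e.g. for light
    // switches and power outlets) -- this is the assumed part.
    init(model: VNCoreMLModel) {
        request = VNCoreMLRequest(model: model)
        super.init()
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage)
        try? handler.perform([request])
        let hits = (request.results as? [VNRecognizedObjectObservation]) ?? []
        for hit in hits where hit.confidence > 0.8 {
            // Next step (omitted): convert the 2D bounding-box center to a
            // world-space point with a raycast against the scanned geometry.
            print("Detected candidate:", hit.labels.first?.identifier ?? "?")
        }
    }
}
```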
Replies: 1 · Boosts: 0 · Views: 344 · Activity: Jan ’25

Crash: offlineFloorPlanGeneration
Lately I've been getting a lot of crashes on iOS 18 devices (mostly 13 Pro devices, but also a 16 Pro Max). Has anyone encountered similar issues, and is there more information about offlineFloorPlanGeneration?

```
com.apple.RoomScanCore.offlineFloorPlanGeneration
EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x0000f5cab18e03c0
RoomScanCore RSFrameFromDictionary + 110992
```
Replies: 1 · Boosts: 0 · Views: 443 · Activity: Dec ’24

raycast during RoomCaptureSession
I created an app where the user:

- first scans a room using RoomCaptureView (RoomPlan)
- then taps on physical elements (objects, walls...) using an ARView to record some 3D positions

I can handle taps in an ARView using a UITapGestureRecognizer and the ARView raycast(from:allowing:alignment:) method. This works fine, so I thought I could do the same using the ARView used by RoomCaptureView, so the user can scan a room and record some 3D positions at the same time. Sadly, this approach does not work, as the raycast method always returns nil.

What I actually need is to map a tap on screen to a real-world position during RoomCaptureSession. Does anyone know how to do this?
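One hedged approach, sketched below: RoomCaptureSession exposes its underlying ARSession through the arSession property, and ARFrame can build a raycast query directly, bypassing the view's raycast entirely. The helper name, the .estimatedPlane target, and the coordinate conversion are my assumptions; ARFrame.raycastQuery expects a point in normalized image coordinates, hence the inverse display transform.

```swift
import ARKit
import UIKit

// Map a UIKit tap point to a world transform using the capture session's
// own ARSession (e.g. pass roomCaptureView.captureSession.arSession).
func worldTransform(ofTap tapPoint: CGPoint,
                    viewportSize: CGSize,
                    orientation: UIInterfaceOrientation,
                    session: ARSession) -> simd_float4x4? {
    guard let frame = session.currentFrame else { return nil }
    // Normalize the tap to view space, then map it into normalized image
    // space with the inverse of the display transform.
    let normalized = CGPoint(x: tapPoint.x / viewportSize.width,
                             y: tapPoint.y / viewportSize.height)
    let imagePoint = normalized.applying(
        frame.displayTransform(for: orientation, viewportSize: viewportSize).inverted())
    let query = frame.raycastQuery(from: imagePoint,
                                   allowing: .estimatedPlane,
                                   alignment: .any)
    return session.raycast(query).first?.worldTransform
}
```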
Replies: 0 · Boosts: 1 · Views: 412 · Activity: Nov ’24

RoomCaptureSession persistence, ARSession pause broken?
Hi all,

Our app allows a user to scan a room, save that scan on a separate view, and then perform additional scans. We're looking into allowing room combining via CapturedStructure, so we need rooms to be scanned in the same ARWorldMap without necessarily needing to re-localize in the same session. This should fit within the first scenario that Apple described.

The only way I have found that satisfies our requirements is to save the RoomCaptureView and re-use it whenever we need to start a session again. This creates a number of other issues, and ideally we wouldn't need to save a View in something like a singleton. We are using captureSession.stop(pauseARSession: false).

Additionally, if we use the same RoomCaptureView and an error occurs during the scanning process, we can't get the instructions overlay to appear again when we reuse the view (specifically, the instructions in the middle of the view that state "Move device to start"). It's as if the instructions are completely removed and scanning is stuck in an error state once an error occurs. These instructions also seem to be separate from the instructions we can get from RoomCaptureViewDelegate via didProvide instruction: RoomCaptureSession.Instruction, so we can't use that either. There are a couple of subviews that seem relevant to this, RoomCaptureCoachingOverlayView and ARGlyphView, but both are not public, so we can't force them to appear. We also attempted a number of other things to get these subviews to appear, such as layoutIfNeeded().

Saving the ARSession and creating a new view with the same ARSession via let roomCaptureView = RoomCaptureView(frame: viewBounds, arSession: arSession) seems much more ideal, as that solves the above issues, but we run into another problem: world tracking seems to be completely lost when a new RoomCaptureView (and thus a new RoomCaptureSession) is started, even with the same already-started ARSession, almost as if captureSession.stop(pauseARSession: false) doesn't work as described.

Is there any way around needing to reuse the same RoomCaptureView or RoomCaptureSession for subsequent scans in the same session, without needing to re-localize via ARWorldMap loading? Is there a way to force the guiding instructions to appear?
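For reference, a minimal sketch of the multi-room flow as the iOS 17 API seems to intend it: one long-lived ARSession, a fresh RoomCaptureSession per room created with init(arSession:), and stop(pauseARSession: false) between rooms. Whether this actually preserves world tracking is exactly what the post questions; MultiRoomScanner is a hypothetical wrapper, and the delegate plumbing that fills capturedRooms is omitted.

```swift
import ARKit
import RoomPlan

final class MultiRoomScanner {
    private let arSession = ARSession()       // shared across all rooms
    private var captureSession: RoomCaptureSession?
    var capturedRooms: [CapturedRoom] = []    // appended from the delegate (omitted)

    func startRoom() {
        // A fresh capture session per room, bound to the same ARSession.
        captureSession = RoomCaptureSession(arSession: arSession)
        captureSession?.run(configuration: RoomCaptureSession.Configuration())
    }

    func finishRoom() {
        // Keep the ARSession alive so the next room shares the world map.
        captureSession?.stop(pauseARSession: false)
    }

    func mergedStructure() async throws -> CapturedStructure {
        try await StructureBuilder(options: [.beautifyObjects])
            .capturedStructure(from: capturedRooms)
    }
}
```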
Replies: 0 · Boosts: 3 · Views: 389 · Activity: Nov ’24

Error: captureSession(_:didEndWith:error:) nearly matches defaulted requirement in RoomCaptureSessionDelegate
Hello, I’m working with the RoomPlan API to capture and export 3D room models in an iOS app. My goal is to implement the RoomCaptureSessionDelegate protocol to handle the end of a room capture session. However, I’m encountering a cautionary warning in Xcode:

Warning: "captureSession(_:didEndWith:error:) nearly matches defaulted requirement captureSession(_:didEndWith:error:) of protocol RoomCaptureSessionDelegate."

I’ve verified that my delegate method signature matches the protocol, but the warning persists. I suspect this might be due to minor discrepancies in the parameter types or naming conventions required by the protocol. Below is my full RoomScanner.swift file:

```swift
import RoomPlan
import SwiftUI
import UIKit

func exportModelToFiles() {
    let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
    let documentPicker = UIDocumentPickerViewController(forExporting: [exportURL])
    documentPicker.modalPresentationStyle = .formSheet
    if let windowScene = UIApplication.shared.connectedScenes.first as? UIWindowScene,
       let rootViewController = windowScene.windows.first?.rootViewController {
        rootViewController.present(documentPicker, animated: true, completion: nil)
    }
}

class RoomScanner: ObservableObject {
    var roomCaptureSession: RoomCaptureSession?
    private let sessionConfig = RoomCaptureSession.Configuration()
    private var capturedRoom: CapturedRoom?

    func startSession() {
        roomCaptureSession = RoomCaptureSession()
        roomCaptureSession?.delegate = self
        roomCaptureSession?.run(configuration: sessionConfig)
    }

    func stopSession() {
        roomCaptureSession?.stop()
        guard let room = capturedRoom else {
            print("No room data available for export.")
            return
        }
        let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("RoomModel.usdz")
        do {
            try room.export(to: exportURL)
            print("3D floor plan exported to \(exportURL)")
        } catch {
            print("Error exporting room model: \(error)")
        }
    }
}

// MARK: - RoomCaptureSessionDelegate
extension RoomScanner: RoomCaptureSessionDelegate {
    func captureSession(_ session: RoomCaptureSession, didUpdate room: CapturedRoom) {
        // Handle real-time updates if necessary
    }

    func captureSession(_ session: RoomCaptureSession, didEndWith capturedRoom: CapturedRoom, error: Error?) {
        if let error = error {
            print("Capture session ended with error: \(error.localizedDescription)")
        } else {
            self.capturedRoom = capturedRoom
        }
    }
}
```
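A likely cause, sketched below: the protocol requirement takes CapturedRoomData (the raw scan data) rather than CapturedRoom, which is what typically produces the "nearly matches" warning. The finished CapturedRoom is then built asynchronously with RoomBuilder. This would replace the didEndWith method above; treat it as a sketch, not a verified fix.

```swift
import RoomPlan

extension RoomScanner {
    // The protocol's parameter type is CapturedRoomData, not CapturedRoom.
    func captureSession(_ session: RoomCaptureSession,
                        didEndWith data: CapturedRoomData,
                        error: Error?) {
        if let error {
            print("Capture session ended with error: \(error.localizedDescription)")
            return
        }
        Task {
            // Build the final CapturedRoom from the raw capture data.
            let builder = RoomBuilder(options: [.beautifyObjects])
            self.capturedRoom = try? await builder.capturedRoom(from: data)
        }
    }
}
```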
Replies: 1 · Boosts: 0 · Views: 456 · Activity: Nov ’24

Map captured rooms from before merge, to new merged captured rooms
Hello everyone,

I'm developing an app using RoomPlan where users can capture several rooms individually and assign custom names to each room. When I merge these captured rooms into a single CapturedStructure using the StructureBuilder, I want the merged structure to retain the original sections with their custom labels. However, I'm encountering an issue.

Problem: After merging, the CapturedStructure's sections have different identifiers (UUIDs) compared to the original sections in the individual CapturedRooms. This means there's no straightforward way to map the custom labels from the original rooms to the sections in the merged structure.

What I've Tried:

- Mapping by identifiers: Attempted to map custom labels using the sections' identifiers, but the identifiers change during the merge, making this ineffective.
- Mapping by positions: Tried matching sections based on their center positions (simd_float3), but these change as well.
- Modifying section labels: Considered changing the label property of the sections before merging, but the label is read-only and cannot be modified directly, and the CoreModel is used for merging rather than the data that can be modified via JSON.

Question: Is there a recommended way to preserve or map custom section labels when merging multiple CapturedRooms into a single CapturedStructure using RoomPlan? How can I ensure that the custom labels assigned to rooms before the merge are correctly associated with the corresponding sections after the merge?

Any guidance or suggestions would be greatly appreciated! Thank you!
Replies: 0 · Boosts: 1 · Views: 519 · Activity: Oct ’24

What does setWorldOrigin() do?
I stumbled across the function setWorldOrigin(relativeTransform:) on ARSession, which is documented here: https://developer.apple.com/documentation/arkit/arsession/2942278-setworldorigin

I made a custom ARSession where I override this function and print and modify the relativeTransform parameter. The print shows that this function is called with an updated relativeTransform value, but it seems to have no impact on, for example, the world origin when starting or continuing a scan, the tiny puppet house in RoomPlan, or any tracking position that I get from ARKit.

Does anybody have experience with this method or know which parts are influenced by setWorldOrigin()?
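For context, a minimal sketch of what the documentation says the call should do: re-base the session's world origin by the given transform, so subsequent camera and anchor transforms are reported relative to the new origin. Whether RoomPlan's internal session honors this is the open question; the helper name is illustrative.

```swift
import ARKit

// Shift the session's world origin by a translation in meters.
func shiftWorldOrigin(of session: ARSession, by meters: SIMD3<Float>) {
    var transform = matrix_identity_float4x4
    transform.columns.3 = SIMD4<Float>(meters.x, meters.y, meters.z, 1)
    // Anchors and camera poses should subsequently be reported relative
    // to the re-based origin.
    session.setWorldOrigin(relativeTransform: transform)
}
```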
Replies: 0 · Boosts: 0 · Views: 421 · Activity: Sep ’24

converting .usdz to .obj results in flattened model
I'm using the Apple RoomPlan SDK to generate a .usdz file, which works fine and gives me a 3D scan of my room. But when I try to use Model I/O's MDLAsset to convert that output into an .obj file, it comes out as a completely flat model shaped like a rectangle. Here is my Swift code:

```swift
let destinationURL = destinationFolderURL.appending(path: "Room.usdz")
do {
    try FileManager.default.createDirectory(at: destinationFolderURL, withIntermediateDirectories: true)
    try finalResults?.export(to: destinationURL, exportOptions: .model)

    let asset = MDLAsset(url: destinationURL)
    let objURL = destinationFolderURL.appending(path: "Room.obj")
    try asset.export(to: objURL)
} catch {
    print("Export failed: \(error)")
}
```

Not sure what's wrong here. According to the MDLAsset documentation, .obj is a supported format, and exporting from .usdz to other formats like .stl and .ply works fine and retains the original 3D shape. Some things I've tried:

- changing exportOptions to .parametric, .mesh, or .model
- simply changing the file extension of destinationURL (throws an error)
Replies: 0 · Boosts: 0 · Views: 735 · Activity: Sep ’24

Recording video and using RoomPlan at the same time
Hi, from the 2023 WWDC video on RoomPlan, they mention that it should be possible to integrate photo / video with RoomPlan: https://developer.apple.com/videos/play/wwdc2023/10192/ (at ~2:30) However, when I attempt to use AVFoundation and AVCaptureSession with RoomPlan, I get the simple error of "Cannot Record". So I'm not sure if there is something wrong with my setup/code, or if these two libraries are actually incompatible. Are there any kinds of guides for doing things like this? Am I going in the right direction or should I try a different approach? Happy to share code if necessary. Thanks
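One commonly suggested workaround, sketched under the assumption that the failure comes from ARKit and AVCaptureSession competing for the camera: record the frames ARKit already delivers instead of opening a second capture session. FrameRecorder is a hypothetical helper; the AVAssetWriter plumbing that would turn the pixel buffers into a movie file is omitted.

```swift
import ARKit

// Receives every camera frame that ARKit (and thus RoomPlan) is already
// capturing; no second camera session is needed.
final class FrameRecorder: NSObject, ARSessionDelegate {
    var onFrame: ((CVPixelBuffer, TimeInterval) -> Void)?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // frame.capturedImage is the full camera pixel buffer; feed it to
        // an AVAssetWriterInputPixelBufferAdaptor to write a video.
        onFrame?(frame.capturedImage, frame.timestamp)
    }
}
```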
Replies: 2 · Boosts: 0 · Views: 662 · Activity: Sep ’24

Multiple floor levels in one story
In lots of houses there are different levels that are still on the same floor. What I mean is that there are things like a few steps at the entrance that would basically count as part of the same story. RoomPlan already does a nice job recognizing them during scanning, but after the StructureBuilder or the optimization step the result is not really satisfying. Has anyone managed to handle those cases? Or do you have to scan in a specific way to capture such small height differences within a level?
Replies: 0 · Boosts: 0 · Views: 437 · Activity: Sep ’24

RoomCaptureSession custom ARSession missing SceneDepth
Hello,

We are exploring the iOS 17 RoomPlan updates that allow a custom ARSession to be passed into the RoomCaptureSession via the new initializer:

```swift
let roomCaptureSession = RoomCaptureSession(arSession: myARSession)
```

Currently we use our ARSession to extract sceneDepth from the ARFrames via the delegate callback. This works prior to activation of the RoomCaptureSession via session.run(configuration:). However, once we do call run on the RoomCaptureSession, sceneDepth is no longer present on the incoming ARFrames. Are these mutually exclusive? Should we expect ARFrame depth data to be present when a RoomCaptureSession is running with the shared ARSession?
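An experimental sketch, not confirmed behavior: after RoomCaptureSession reconfigures the shared session, try re-running the ARSession with .sceneDepth re-inserted into the frame semantics. This assumes the active configuration is (castable to) an ARWorldTrackingConfiguration, which may not hold for RoomPlan's internal configuration.

```swift
import ARKit

func reenableSceneDepth(on arSession: ARSession) {
    guard let config = arSession.configuration as? ARWorldTrackingConfiguration,
          ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth)
    else { return }
    config.frameSemantics.insert(.sceneDepth)
    // Re-run without reset options so tracking state is preserved.
    arSession.run(config)
}
```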
Replies: 1 · Boosts: 0 · Views: 609 · Activity: Sep ’24

Modify CapturedRoom Objects
The RoomPlan API makes it possible to serialize and deserialize CapturedRoom objects. This opens up the possibility of modifying a CapturedRoom (e.g. deleting surfaces/objects) in its deserialized state and serializing it as a new CapturedRoom. All modified attributes are loaded accordingly, so far so good.

My problem starts with the StructureBuilder and its merge function capturedStructure(from:). This function ignores any modifications to attributes of a CapturedRoom. The only data that is considered is encoded in the CoreModel attribute (which is not mentioned in the official documentation).

If someone has more information or a working solution for how to modify CapturedRooms, please let me know. Additionally, if there is documentation anywhere about the CoreModel attribute, please post a link here.
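For illustration, a hedged sketch of the serialize-edit-deserialize round trip described above. The JSON keys ("objects", "identifier") are assumed implementation details of CapturedRoom's Codable encoding, not documented API, and as noted StructureBuilder may still ignore these edits in favor of the CoreModel data.

```swift
import Foundation
import RoomPlan

// Remove objects by identifier via a JSON round trip, since CapturedRoom's
// properties are read-only in Swift.
func removingObjects(withIDs ids: Set<UUID>,
                     from room: CapturedRoom) throws -> CapturedRoom {
    var json = try JSONSerialization.jsonObject(
        with: JSONEncoder().encode(room)) as! [String: Any]
    // "objects" / "identifier" are observed keys; they may change
    // between OS versions.
    if let objects = json["objects"] as? [[String: Any]] {
        json["objects"] = objects.filter { dict in
            guard let raw = dict["identifier"] as? String,
                  let uuid = UUID(uuidString: raw) else { return true }
            return !ids.contains(uuid)
        }
    }
    let data = try JSONSerialization.data(withJSONObject: json)
    return try JSONDecoder().decode(CapturedRoom.self, from: data)
}
```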
Replies: 0 · Boosts: 0 · Views: 548 · Activity: Sep ’24

Issue with losing alignment on multi-room session
Hello! We're having an issue in our app that implements multi-room scanning via RoomPlan: the ARSession world origin is shifted to wherever the RoomCaptureSession is run again (e.g. in the next room).

To clarify a few points:

- We are using RoomCaptureView, starting a new room via roomCaptureView.captureSession.run(configuration: captureSessionConfig) and stopping the room scan via roomCaptureView.captureSession.stop(pauseARSession: false).
- We are re-using the same ARSession, which is passed into the RoomCaptureView like so:

```swift
arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)
```

Any clue why the AR world origin is reset? I need it to be consistent for storing frame camera positions. Thanks!
Replies: 1 · Boosts: 2 · Views: 382 · Activity: Sep ’24

RoomPlan: textures for walls and floors.
After scanning the room, I use the .export method, passing a ModelProvider. Then I import the USDZ into an SCNView and continue processing the scene. I would like to apply a texture to the walls and floor, but I can't, because the exported wall and floor geometry doesn't contain texture coordinates, and creating them from the USDZ file is not easy. I tried building geometry from the CapturedRoom data for the walls and floor only, adding the texture coordinates myself. I'm managing, but I'm struggling a lot. Isn't there an easier way to do it? Is a ModelProvider for surfaces planned for the future? If so, where can I access the RoomPlan beta documentation?
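One possible workaround, sketched under assumptions: instead of texturing the exported mesh, rebuild each wall as an SCNPlane sized from the captured surface, since SCNPlane ships with texture coordinates and a material's diffuse image maps cleanly onto it. Orientation and unit details may need adjusting per surface.

```swift
import SceneKit
import RoomPlan
import UIKit

// Build a textured stand-in node for a captured wall surface.
func wallNode(for surface: CapturedRoom.Surface, textured image: UIImage) -> SCNNode {
    let plane = SCNPlane(width: CGFloat(surface.dimensions.x),
                         height: CGFloat(surface.dimensions.y))
    plane.firstMaterial?.diffuse.contents = image
    plane.firstMaterial?.isDoubleSided = true   // visible from both sides

    let node = SCNNode(geometry: plane)
    // Place the plane where the wall was scanned; SCNPlane lies in the XY
    // plane, which matches RoomPlan's wall orientation in my tests -- verify
    // against your own data.
    node.simdTransform = surface.transform
    return node
}
```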
Replies: 1 · Boosts: 1 · Views: 662 · Activity: Sep ’24

Add basic story ceilings to RoomPlan model
The structure builder provides walls and floors for each captured story, but no ceiling. For my use case it is necessary that the scanned geometry is closed, to open up the possibility of placing objects on the ceiling, for example; therefore it is important that there is an estimated ceiling for the different rooms within a story. Is there any info on whether Apple has something like this on the roadmap? I think it could open up opportunities, especially for industrial applications of the API. If somebody has more insight on this topic, please share. :)
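Until something official exists, a rough sketch of one approximation: lift a floor's polygonCorners (iOS 17+) by an estimated wall height and triangulate them as a fan. This only handles convex floor outlines (concave rooms need a real triangulation), and the coordinate space of polygonCorners should be verified against your data.

```swift
import SceneKit
import RoomPlan
import simd

func ceilingNode(aboveFloor floorSurface: CapturedRoom.Surface,
                 wallHeight: Float) -> SCNNode? {
    let corners = floorSurface.polygonCorners
    guard corners.count >= 3 else { return nil }

    // Lift the floor outline up by the estimated wall height.
    let lifted = corners.map { $0 + SIMD3<Float>(0, wallHeight, 0) }
    let vertices = lifted.map { SCNVector3($0.x, $0.y, $0.z) }

    // Triangle fan around the first corner (convex outlines only).
    var indices: [Int32] = []
    for i in 1..<(lifted.count - 1) {
        indices.append(contentsOf: [0, Int32(i), Int32(i + 1)])
    }

    let geometry = SCNGeometry(
        sources: [SCNGeometrySource(vertices: vertices)],
        elements: [SCNGeometryElement(indices: indices, primitiveType: .triangles)])
    let node = SCNNode(geometry: geometry)
    // Assumes the corners are in the floor's local space; if they turn out
    // to be in room space already, drop this transform.
    node.simdTransform = floorSurface.transform
    return node
}
```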
Replies: 1 · Boosts: 0 · Views: 724 · Activity: Aug ’24

RoomPlan AR Frame Position wrong after combining Rooms
Hello everyone, I am struggling to find a solution to the following problem, and I would be glad and thankful if anyone can help me.

My use case: I am using RoomPlan to scan a room. While scanning, there is a function to take pictures. The positions from which the pictures are taken are saved (in my app they are called "points of interest" = POIs). This works fine for a single room, but when adding a new room and combining the two of them using:

```swift
structureBuilder.capturedStructure(from: capturedRooms)
```

the first room will be transformed, and thus moved around, to fit into the world space. The POIs are not transformed with the rest of the room, since they are not part of the room's structure specifically, which is fine. But how can I transform the POIs too, so that they end up in the correct positions from which they were taken?

I used:

```swift
func captureSession(_ session: RoomCaptureSession, didEndWith data: CapturedRoomData, error: (Error)?)
```

to get the transform matrix from arFrameReferenceOriginTransform and apply it to the POIs, but it still seems that this is not enough. I would be happy for any tips and help! Thanks in advance!

My update function:

```swift
func updatePOIPositions(with originTransform: simd_float4x4) {
    for i in 0..<poisOldRooms.count {
        let poi = poisOldRooms[i]
        // Original camera position as a homogeneous coordinate.
        let originalPosition = SIMD4<Float>(
            poi.data.cameraOriginX,
            poi.data.cameraOriginY,
            poi.data.cameraOriginZ,
            1.0
        )
        let updatedPosition = originTransform * originalPosition
        poisOldRooms[i].data.cameraX = updatedPosition.x
        poisOldRooms[i].data.cameraY = updatedPosition.y
        poisOldRooms[i].data.cameraZ = updatedPosition.z
    }
}
```
Replies: 3 · Boosts: 2 · Views: 760 · Activity: Sep ’24