Can the Object Capture API be used to create building models?

Can RealityKit 2's Object Capture API be used to create a model of a building's interior?

I've only found examples of creating models from pictures taken while walking around an object, not from inside it 🤔 I know photogrammetry in general can handle such cases, but I'm not sure the new RealityKit API supports it.

I'd be grateful if someone who has tried it could share their results (I can't right now), and if someone at Apple could confirm whether this is supported by design.

Thank you for your time 🙇‍♂️

Accepted Reply

You definitely can. Whole city blocks and cathedrals are captured and reconstructed in 3D by photogrammetry software; that's how entire real cities end up in Ubisoft games, for example. Archaeologists and historians use it too. But you need tons of images: 100 is very optimistic, expect 400 or more. You could capture video of the building from a drone (or a satellite, if you have one), convert the video into an image sequence, and feed it into the photogrammetry pipeline. There are a few very expensive software packages for this, but now we have a native photogrammetry engine backed by the full power of Apple's ML and Metal.
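
For the video route, here's a minimal sketch of pulling still frames out of a clip with AVFoundation's AVAssetImageGenerator so they can be fed to a photogrammetry pipeline. The function name, file paths, and the half-second sampling interval are placeholder choices, not anything prescribed by the API:

```swift
import AVFoundation
import CoreImage

// Sketch: extract still frames from a drone video as JPEGs so they can
// be fed to a photogrammetry pipeline. Paths and interval are placeholders.
func extractFrames(from videoURL: URL, to outputDir: URL, everySeconds: Double = 0.5) throws {
    let asset = AVAsset(url: videoURL)
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    // Ask for exact frame times rather than nearby keyframes.
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    let duration = CMTimeGetSeconds(asset.duration)
    let context = CIContext()
    var index = 0
    var t = 0.0
    while t < duration {
        let time = CMTime(seconds: t, preferredTimescale: 600)
        let cgImage = try generator.copyCGImage(at: time, actualTime: nil)
        let url = outputDir.appendingPathComponent(String(format: "frame_%05d.jpg", index))
        try context.writeJPEGRepresentation(
            of: CIImage(cgImage: cgImage),
            to: url,
            colorSpace: CGColorSpaceCreateDeviceRGB())
        index += 1
        t += everySeconds
    }
}
```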

It seems more logical to capture buildings with LiDAR in ARKit, just like the "3D Scanner" and "Capture" apps do. But I doubt you're brave enough to attach your $1000 iPhone Pro Max to a drone, and I don't think a single AR session can handle that huge amount of data anyway. That's why scientists and CG artists use LiDAR to scan interiors and photogrammetry to reconstruct exterior landscapes.
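
For reference, enabling LiDAR scene reconstruction in ARKit looks roughly like this. It's only a sketch (the function name is made up), and exporting the resulting ARMeshAnchor geometry to a model file is left to the app:

```swift
import ARKit

// Sketch: enable LiDAR-based scene reconstruction in an ARKit session,
// as apps like "3D Scanner" do. Requires a LiDAR-equipped device.
func startMeshScan(on session: ARSession) {
    guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) else {
        print("LiDAR scene reconstruction is not supported on this device")
        return
    }
    let config = ARWorldTrackingConfiguration()
    config.sceneReconstruction = .mesh      // or .meshWithClassification
    config.environmentTexturing = .automatic
    session.run(config)
    // The scanned geometry arrives as ARMeshAnchor instances via the
    // session delegate; turning them into a USDZ is up to the app.
}
```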

Replies

I can confirm from experience and my lab appointment that it is possible to scan interior rooms! I got the best results when I stayed in one place while taking photos.

This should be possible; there's nothing in the API that would prevent this sort of thing. However, you will need to take a lot more pictures than average (over 100) to get good results. I would also experiment with capturing smaller sections of the interior at a time, then combining all of the exported USDZs into one larger model (see the sketch below). I suspect this would create more accurate captures, since the algorithm only needs to focus on one area of the room at a time.
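
As a rough illustration of that per-section workflow, here's a hedged sketch of running a PhotogrammetrySession (the RealityKit Object Capture API this thread is about, which does its reconstruction on macOS) over a folder of images. The function name, paths, and the .reduced detail level are illustrative choices:

```swift
import RealityKit

// Sketch: run one PhotogrammetrySession per room section, producing a
// USDZ for each. Paths and detail level are placeholders.
func reconstructSection(imagesAt inputDir: URL, outputURL: URL) throws {
    let session = try PhotogrammetrySession(input: inputDir)

    // Observe the session's output stream for completion and errors.
    Task {
        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Finished \(outputURL.lastPathComponent)")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }

    // Kick off reconstruction of this section into a USDZ model file.
    try session.process(requests: [
        .modelFile(url: outputURL, detail: .reduced)
    ])
}
```

The per-section USDZ files could then be assembled into one scene, for example in a DCC tool or by loading them as sibling entities in RealityKit.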

@tsugar thanks for your practical tips, extremely valuable :)

We’re actually interested in interiors only and are using ARKit+LiDAR for that, but this new API makes photogrammetry so easy that it’s tempting to rethink which path to take.