Object Capture API on Mac with LiDAR data from iOS to get the real-life size of objects

Hi,

We are looking for a solution to create real-life-size 3D models using reflex cameras.

We created a Mac app called Smart Capture that uses Object Capture to reconstruct 3D models from pictures. We used this project to digitize 5,000 archaeological finds from the Archaeological Park of Pompeii.

We built a solid workflow using Orbitvu automated photography boxes, each equipped with 3 reflex cameras, to speed up the capture process. This lets us produce a 3D model in less than 10 minutes (2-3 minutes to capture and about 7-8 minutes to process on an M2 Max).

The problem is that the resulting object has no size information, so we have to measure it by hand and resize the 3D model accordingly, which introduces a manual step and a possible source of error into the workflow.

I was wondering whether it's possible, using the iOS 17 Object Capture APIs, to get point cloud data that I could add to the reflex-camera pictures and process the whole package on the Mac to recover the real size of the object.
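
On the iPhone side, I imagine the capture would be started roughly like this (just a sketch with placeholder folder locations, not our actual implementation):

```swift
import SwiftUI
import RealityKit

// Rough sketch of starting an iOS 17 Object Capture session whose output folder
// (images plus the scale-bearing data recorded on LiDAR devices) could later be
// copied to the Mac for reconstruction. Folder locations are placeholders.
struct CaptureView: View {
    @State private var session = ObjectCaptureSession()

    private let imagesURL = URL.documentsDirectory.appending(path: "Images/")
    private let checkpointURL = URL.documentsDirectory.appending(path: "Checkpoints/")

    var body: some View {
        ObjectCaptureView(session: session)
            .onAppear {
                try? FileManager.default.createDirectory(at: imagesURL, withIntermediateDirectories: true)
                try? FileManager.default.createDirectory(at: checkpointURL, withIntermediateDirectories: true)

                var config = ObjectCaptureSession.Configuration()
                config.checkpointDirectory = checkpointURL
                // On LiDAR devices the session stores depth/scale data with the shots,
                // so a model reconstructed from this folder comes out in real-world units.
                session.start(imagesDirectory: imagesURL, configuration: config)
            }
    }
}
```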

As far as I understand, the only way to get this working before iOS 17 was to use depth information (I tried the Sample Capture project), but the problem is that we have to handle everything from small to large objects (our range is roughly 1 to 25 inches).
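
For context, my understanding of the depth-based route on the Mac is something like the sketch below, where each PhotogrammetrySample can carry a depth map so the reconstruction recovers metric scale. Building the samples array from our own capture data is omitted, and the function name is just illustrative:

```swift
import Foundation
import RealityKit

// macOS sketch: feed PhotogrammetrySample values (image plus optional depth map)
// to PhotogrammetrySession so the reconstruction can recover real-world scale.
// Assembling the [PhotogrammetrySample] array from capture data is omitted here.
func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) async throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal

    let session = try PhotogrammetrySession(input: samples, configuration: configuration)
    try session.process(requests: [.modelFile(url: outputURL, detail: .full)])

    for try await output in session.outputs {
        switch output {
        case .requestProgress(_, let fraction):
            print("Progress: \(fraction)")
        case .requestComplete(_, .modelFile(let url)):
            print("Model written to \(url)")
        case .requestError(_, let error):
            throw error
        case .processingComplete:
            return
        default:
            break
        }
    }
}
```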

Do you have any idea how to achieve this?

  • You could capture with an iPhone side by side with your current setup and then apply the scale from the iPhone model to the full model (a rough sketch of that scale transfer is below). Technically, the way scaling is done in most photogrammetry software is via scale bars that the software detects: you define your bounds, and it then scales the model based on a few of these points in space with a known distance between them. Sadly there is no support for this in PhotogrammetrySession yet; maybe file a suggestion.
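
To illustrate the scale-transfer idea, here is a sketch that assumes both captures are cropped to the same object and uses the bounding boxes as the reference measurement (file locations and the function name are placeholders). In practice a detected scale bar or a known reference distance would be more robust than bounding boxes, since the two reconstructions may crop the object differently:

```swift
import Foundation
import ModelIO

// Sketch of the scale-transfer idea: measure the iPhone-captured model (already in
// real-world units thanks to LiDAR), measure the reflex-camera model, derive a
// uniform scale factor from the two bounding boxes, and bake it into the full model.
// This only works if both captures are cropped to the same object.
func transferScale(referenceURL: URL, targetURL: URL, outputURL: URL) throws {
    let reference = MDLAsset(url: referenceURL)   // iPhone capture, metric scale
    let target = MDLAsset(url: targetURL)         // reflex-camera capture, arbitrary scale

    func largestExtent(of asset: MDLAsset) -> Float {
        let box = asset.boundingBox
        let size = box.maxBounds - box.minBounds
        return max(size.x, max(size.y, size.z))
    }

    let scale = largestExtent(of: reference) / largestExtent(of: target)

    // Apply the uniform scale to every top-level object and re-export.
    for index in 0..<target.count {
        let object = target.object(at: index)
        let transform = MDLTransform(transformComponent: object.transform ?? MDLTransform())
        transform.scale = transform.scale * scale
        object.transform = transform
    }
    try target.export(to: outputURL)
}
```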
