Are 3D photogrammetry models generated on the iPhone 12 Pro scaled using stereo depth data?

This question relates to how Apple's capture technology pairs with the Polycam app.

When using the Polycam app in Photo mode on an iPhone 12 Pro, the scale of the result seems very accurate. How does this work if only photos (i.e. Photo mode, with no LiDAR) are being stitched together?

• Do Photo-mode captures on the iPhone 12 Pro and later use Apple's stereo depth data, or depth metadata attached to each image, to more accurately scale the Polycam-generated 3D models? (See the depth-metadata sketch after this list.)

• Does the Polycam server (and other 3D-scanning apps on iPhone) utilise Apple's Object Capture reconstruction software to create the 3D mesh files and point clouds? (See the Object Capture sketch below.)
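
For background on the first bullet: dual-camera iPhones can embed a disparity (depth) map as auxiliary data in each HEIC capture. A minimal Swift sketch, assuming a capture already saved to disk (the file path is hypothetical), for checking whether that depth metadata is present:

```swift
import AVFoundation
import ImageIO

// Hypothetical path to a Photo-mode capture saved from the camera.
let url = URL(fileURLWithPath: "/path/to/capture.heic")

guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
    fatalError("Could not open image file")
}

// Dual-camera captures can carry a disparity map as auxiliary data;
// ImageIO exposes it under the kCGImageAuxiliaryDataTypeDisparity key.
if let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(
        source, 0, kCGImageAuxiliaryDataTypeDisparity) as? [AnyHashable: Any],
   let depth = try? AVDepthData(fromDictionaryRepresentation: auxInfo) {
    print("Embedded depth/disparity map found (type \(depth.depthDataType))")
} else {
    print("No depth metadata in this capture")
}
```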

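For reference on the second bullet: Apple's Object Capture API is RealityKit's PhotogrammetrySession (macOS 12+ / iOS 17+), and Apple's WWDC 2021 session on Object Capture says it uses depth and gravity data embedded in iPhone captures to recover real-world scale. Whether Polycam's backend actually calls it is the open question here; a minimal sketch of a direct call, with hypothetical input and output paths:

```swift
import Foundation
import RealityKit

func reconstruct() async throws {
    // Hypothetical folder of Photo-mode captures and output model path.
    let inputFolder = URL(fileURLWithPath: "/path/to/captures", isDirectory: true)
    let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

    // Feed the session a folder of photos; any depth/gravity metadata
    // in the captures is picked up from the files themselves.
    let session = try PhotogrammetrySession(input: inputFolder)
    try session.process(requests: [.modelFile(url: outputFile, detail: .medium)])

    // Stream progress and completion messages from the session.
    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("Reconstruction finished: \(outputFile.path)")
        case .requestError(let request, let error):
            print("Request \(request) failed: \(error)")
        default:
            break
        }
    }
}
```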