Does anyone actually notice any improvements using the new ObjectCaptureSession with PhotogrammetrySession?

We have implemented all of the recent additions Apple made on the iOS side for guided capture of LiDAR and image data via ObjectCaptureSession.
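
For context, our capture setup follows the standard ObjectCaptureSession pattern, roughly sketched below (directory URLs are placeholders and the SwiftUI scaffolding around ObjectCaptureView is omitted):

    import RealityKit
    import Foundation

    // Rough sketch of the capture setup (iOS 17+); directory URLs are placeholders.
    @MainActor
    func startGuidedCapture() -> ObjectCaptureSession {
        let imagesDirectory = URL.documentsDirectory.appending(path: "Images/")
        let checkpointDirectory = URL.documentsDirectory.appending(path: "Checkpoint/")

        var configuration = ObjectCaptureSession.Configuration()
        // As far as we can tell, this is where the session writes its extra
        // (LiDAR-derived) data alongside the captured images.
        configuration.checkpointDirectory = checkpointDirectory

        let session = ObjectCaptureSession()
        session.start(imagesDirectory: imagesDirectory, configuration: configuration)
        // The session is then displayed with ObjectCaptureView(session:) and driven
        // through startDetecting() / startCapturing() / finish().
        return session
    }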

After the capture finishes, we send the images to a PhotogrammetrySession on macOS to reconstruct models at a higher quality (Medium) than the Preview quality that is currently supported on iOS.
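
The macOS side is the usual PhotogrammetrySession flow, roughly like this (paths are placeholders; the commented-out checkpoint line is the part we are unsure about):

    import RealityKit
    import Foundation

    // Rough sketch of the macOS reconstruction step; paths are placeholders.
    func reconstructModel() async throws {
        let imagesFolder = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
        let outputModel = URL(fileURLWithPath: "/path/to/Model.usdz")

        var configuration = PhotogrammetrySession.Configuration()
        // Unclear to us from the docs: does the LiDAR-derived data from the device
        // have to be handed over explicitly, e.g. by copying the checkpoint
        // directory written on iOS and setting it here?
        // configuration.checkpointDirectory = URL(fileURLWithPath: "/path/to/Checkpoint", isDirectory: true)

        let session = try PhotogrammetrySession(input: imagesFolder, configuration: configuration)
        try session.process(requests: [.modelFile(url: outputModel, detail: .medium)])

        for try await output in session.outputs {
            switch output {
            case .processingComplete:
                print("Reconstruction finished")
            case .requestError(let request, let error):
                print("Request \(request) failed: \(error)")
            default:
                break
            }
        }
    }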

We have now done a few side-by-side comparisons of captures made with the new ObjectCaptureSession versus traditional captures made via the AVFoundation framework, but we have not seen the improvements that were claimed in Apple's WWDC session.

If anything, we feel the results are actually worse, because the images obtained through the new ObjectCaptureSession aren't as high quality as the ones we get from AVFoundation.

Are we missing something here? Is PhotogrammetrySession on macOS not using this additional LiDAR data, or have the improvements been overstated? From the documentation it is not at all clear how the new LiDAR data is stored and how it is supposed to be transferred to the Mac.

We are using iOS 17 beta 4 and macOS Sonoma beta 4 in our testing. Both codebases were compiled with Xcode 15 beta 5.

  • Do you have configuration.isOverCaptureEnabled set to true? From the documentation, it sounds like the on-device flow is geared toward a lower-detail capture to speed up reconstruction with PhotogrammetrySession, but if that flag is enabled and you do the reconstruction on a Mac, it should be able to use the additional information; see the sketch below.
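
  A minimal sketch of where that flag goes on the capture side (startOverCapture is a hypothetical helper; the images directory is a placeholder):

    import RealityKit
    import Foundation

    // Sketch: enable over-capture so extra shots are taken beyond what the
    // on-device (lower-detail) reconstruction needs, giving the Mac-side
    // reconstruction more data to work with.
    @MainActor
    func startOverCapture(into imagesDirectory: URL) -> ObjectCaptureSession {
        var configuration = ObjectCaptureSession.Configuration()
        configuration.isOverCaptureEnabled = true

        let session = ObjectCaptureSession()
        session.start(imagesDirectory: imagesDirectory, configuration: configuration)
        return session
    }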
