Object Capture API Limitation Concerns

Hello, I'm currently building an app that uses the on-device Object Capture API to create 3D models. I have two concerns that I haven't found addressed anywhere online:

  1. Can on-device object capture be performed on devices without LiDAR? I understand that depth data is needed for scale-accurate models, but if there is an option to disable it, where would one specify that in code?

  2. Can models be exported to .obj instead of .usdz? The WWDC2021 session mentions (around 3:00) that this is possible with the Apple Silicon API, but what about with on-device scanning? A rough sketch of the session setup I'm referring to follows these questions.
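For context, here is a rough sketch of the kind of RealityKit PhotogrammetrySession setup I have in mind; the function name and URLs are placeholders, and the comments mark where I would expect these two options to live:

```swift
import Foundation
import RealityKit

// Rough sketch only; imagesFolder and outputURL are placeholders.
func reconstructModel(from imagesFolder: URL, to outputURL: URL) throws {
    var configuration = PhotogrammetrySession.Configuration()
    configuration.sampleOrdering = .sequential
    configuration.featureSensitivity = .normal
    // Question 1: is there a configuration switch here to disable/ignore depth data?

    let session = try PhotogrammetrySession(input: imagesFolder,
                                            configuration: configuration)

    // Question 2: the output format follows the file extension of outputURL.
    // .usdz works, but can this be a .obj when reconstructing on device?
    let request = PhotogrammetrySession.Request.modelFile(url: outputURL,
                                                          detail: .reduced)
    try session.process(requests: [request])
}
```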

I would be very grateful if anyone is knowledgeable enough to provide some insight. Thank you so much!

There are three sources of measurement points (point clouds) in iOS/iPadOS ARKit; see the sketch after this list for where each surfaces in the API:

  • ARPointCloud: highly noisy and sparse. Intended as debug information for device tracking; the points appear scattered as needle shapes in space but map to stable screen points.
  • ARDepthData: relatively accurate and dense. Provided by processing LiDAR and image sensor data.
  • ARMeshAnchor: produced by processing ARDepthData. The mesh vertices can effectively be treated as points.
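A minimal sketch, assuming an ARSession driven by ARWorldTrackingConfiguration on a LiDAR-equipped device, of where each of these sources appears in the API (the type and function names are just for illustration):

```swift
import ARKit

final class PointSourceObserver: NSObject, ARSessionDelegate {
    // ARPointCloud: sparse, noisy feature points used for device tracking.
    // ARDepthData: dense depth produced from LiDAR plus the image sensor.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if let featurePoints = frame.rawFeaturePoints {
            print("ARPointCloud:", featurePoints.points.count, "sparse points")
        }
        if let depth = frame.sceneDepth {
            print("ARDepthData:",
                  CVPixelBufferGetWidth(depth.depthMap), "x",
                  CVPixelBufferGetHeight(depth.depthMap), "depth map")
        }
    }

    // ARMeshAnchor: meshes reconstructed from ARDepthData; vertices serve as points.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for mesh in anchors.compactMap({ $0 as? ARMeshAnchor }) {
            print("ARMeshAnchor:", mesh.geometry.vertices.count, "vertices")
        }
    }
}

func runPointSourceSession(_ session: ARSession, observer: PointSourceObserver) {
    let configuration = ARWorldTrackingConfiguration()
    // Depth and scene reconstruction are only offered on LiDAR devices.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
        configuration.frameSemantics.insert(.sceneDepth)
    }
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
    }
    session.delegate = observer
    session.run(configuration)
}
```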

visionOS ARKit provides only MeshAnchor.
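A corresponding minimal sketch for visionOS, assuming the ARKitSession / SceneReconstructionProvider route and that world-sensing authorization has already been granted (the function name is illustrative):

```swift
import ARKit  // visionOS ARKit

// On visionOS, MeshAnchor (via SceneReconstructionProvider) is the only
// source of measurement points exposed to apps.
func observeMeshAnchors() async throws {
    guard SceneReconstructionProvider.isSupported else { return }

    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        let mesh = update.anchor
        print("MeshAnchor \(mesh.id): \(mesh.geometry.vertices.count) vertices")
    }
}
```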

High-level applications require object information such as shape, size, and 6DoF pose; examples with each source:

  • ARPointCloud: https://youtu.be/h4YVh2-3p9s
  • ARDepthData: https://youtu.be/zc6GQOtgS7M
  • ARMeshAnchor: https://youtu.be/sMRfX334blI

App developers are responsible for deciding how to use an object's shape, size, and 6DoF pose.
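For example, here is a minimal sketch, assuming an iOS ARMeshAnchor whose vertex buffer holds three packed Floats per vertex (the usual ARMeshGeometry layout), of deriving a rough size and the 6DoF pose that an app could then use:

```swift
import ARKit
import simd

// Sketch only: derive a rough size (axis-aligned bounding box in the anchor's
// local space) and the 6DoF pose of an ARMeshAnchor.
func sizeAndPose(of anchor: ARMeshAnchor) -> (size: simd_float3, pose: simd_float4x4) {
    let vertices = anchor.geometry.vertices
    let base = vertices.buffer.contents().advanced(by: vertices.offset)

    var minPoint = simd_float3(repeating: .greatestFiniteMagnitude)
    var maxPoint = simd_float3(repeating: -.greatestFiniteMagnitude)

    for index in 0..<vertices.count {
        // Assumes three packed Floats per vertex at the source's stride.
        let raw = base.advanced(by: index * vertices.stride)
            .assumingMemoryBound(to: (Float, Float, Float).self).pointee
        let point = simd_float3(raw.0, raw.1, raw.2)
        minPoint = simd_min(minPoint, point)
        maxPoint = simd_max(maxPoint, point)
    }

    // The anchor's transform is its 6DoF pose (rotation + translation) in world space.
    return (maxPoint - minPoint, anchor.transform)
}
```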
