Hello!
I'm currently building an app where I feed images into a PhotogrammetrySession to create a USDZ. Pretty straightforward, and it works great. We've recently started some testing on older devices and discovered that photogrammetry requires devices that have LiDAR (we've seen some console logs referencing LiDAR if we stumble through a photogrammetry process without checking isSupported first).
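For context, here's a minimal sketch of the flow I'm describing: a photos-only reconstruction gated on isSupported. The folder URLs and error handling are placeholders for illustration, not our production code.

import RealityKit

func makeUSDZ(from imagesFolder: URL, to outputURL: URL) async throws {
    // Bail out on devices where reconstruction isn't available
    // (this is the isSupported check mentioned above).
    guard PhotogrammetrySession.isSupported else {
        print("Object reconstruction is not supported on this device.")
        return
    }

    // Feed a folder of plain photos (no depth data) into the session
    // and request a USDZ model file.
    let session = try PhotogrammetrySession(input: imagesFolder)
    try session.process(requests: [.modelFile(url: outputURL)])

    for try await output in session.outputs {
        switch output {
        case .processingComplete:
            print("USDZ written to \(outputURL.path)")
        case .requestError(_, let error):
            print("Reconstruction failed: \(error)")
        default:
            break
        }
    }
}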
Judging from @swredcam's post about ReefScan from November 24 (https://developer.apple.com/forums/thread/769221), it looks like photogrammetry did work on those non-LiDAR devices. In my own testing on an iPhone 12 mini with iOS 17, PhotogrammetrySession says it's not supported.
Since we're only feeding in a sequence of photos that have never had depth data, and they process fine on Pro/Max devices, we're curious why this would require a LiDAR sensor when it seems to have worked without LiDAR in the past. Or is there some other limitation of non-Pro devices that causes photogrammetry to be unsupported (especially on today's really powerful hardware)?
Thanks! ++md
"RealityKit Object Capture is only available on Mac computers that meet the minimum requirements for performing object reconstruction, including a GPU with at least 4 GB of RAM and ray tracing support. It is also available on select iOS devices with LiDAR capabilities."
The "LiDAR capabilities" requirement here refers to both the camera and the SoC (primarily the GPU). So although you don't technically need a LiDAR camera for photogrammetry you do need a powerful GPU and those happen to be in iOS devices that have a LiDAR camera.
More details in the other thread.