I'm developing a 3D scanner app that runs on iPad.
I'm using AVCapturePhoto and PhotogrammetrySession.
My PhotoCaptureDelegate looks like this:
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [ .auxiliaryDepth: true, .properties: photo.metadata ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [ .avDepthData: depthData ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
But the PhotogrammetrySession prints these warning messages:
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!
The session creates a USDZ 3D model, but the scale is not correct.
I think the point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach a point cloud.
This warning message is related to loading the internal LiDAR data saved by the ObjectCaptureSession front end. You will see this warning if you do not use ObjectCaptureSession for capture. There is no public way to directly access or provide this raw LiDAR data; you will need to use the Object Capture UI if you want the LiDAR improvements to textureless-object reconstruction.
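For reference, a minimal sketch of what capturing through ObjectCaptureSession looks like (iOS 17+, RealityKit). The view name and imagesDir destination are placeholders, and the detection/capture state transitions are omitted:

    import RealityKit
    import SwiftUI

    // Sketch: let ObjectCaptureSession drive the capture so its internal
    // LiDAR data is saved alongside the images it writes.
    struct CaptureView: View {
        @State private var session = ObjectCaptureSession()
        let imagesDir: URL  // assumed destination folder for captured images

        var body: some View {
            ObjectCaptureView(session: session)
                .onAppear {
                    // Writes the captured images (plus the session's own LiDAR
                    // data) into imagesDir for later reconstruction.
                    session.start(imagesDirectory: imagesDir)
                }
        }
    }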
Using the AVDepthData dictionary, as was shown in the original WWDC 2021 Object Capture release (and referred to by the examples here), is the public way to provide depth information to the reconstruction. This depth map may come from stereo cameras producing disparity, or it may be an AVDepthData depth map derived from LiDAR. Note that regardless of which you provide, you will still see this warning about LiDAR. That said, if the depth data is provided correctly, it will be used to help with scale recovery. In the current release, all four depth pixel formats can be loaded and used by the reconstruction: half- and full-float disparity as well as half- and full-float depth.
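As an illustration, here is a minimal sketch of embedding the capture's AVDepthData into the HEIC output so the reconstruction can pick it up. The function name and the choice of 32-bit float disparity are only examples, and error handling is reduced to early returns:

    import AVFoundation
    import CoreImage

    // Sketch: write a HEIC that carries the capture's AVDepthData so
    // PhotogrammetrySession can use it for scale recovery.
    func writeHEICWithDepth(photo: AVCapturePhoto, to fileUrl: URL) throws {
        guard let pixelBuffer = photo.pixelBuffer,
              let depthData = photo.depthData,
              let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return }

        // Any of the four depth pixel formats is accepted; 32-bit float
        // disparity is used here purely as an example.
        let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32)

        // Keep the photo's metadata (EXIF, etc.) on the color image.
        let image = CIImage(cvPixelBuffer: pixelBuffer).settingProperties(photo.metadata)

        guard let heicData = CIContext().heifRepresentation(
            of: image,
            format: .RGBA8,
            colorSpace: colorSpace,
            options: [.avDepthData: converted]) else { return }

        try heicData.write(to: fileUrl, options: .atomic)
    }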
If you are providing depth or disparity data in the AVDepthData but there is still scale variance, please file a bug with Feedback Assistant so that our team can investigate.