I have an application that captures an image along with a depth map and calibration data, and exports them so I can work with them in Python.
The depth map and calibration data are converted to Float32 and stored in a JSON file; the image is stored as a JPEG file.
The depth map shape is (480, 640) and the image shape is (3024, 4032, 3).
My goal is to be able to create a point cloud from this data.
I'm new to working with data from Apple's TrueDepth camera and would like some clarity on what preprocessing steps I need to perform before creating the point cloud. Here are my questions:
1) Since the 640x480 depth map is a scaled version of the 12MP image, I should be able to scale down the intrinsics as well. Does that mean I scale [fx, fy, cx, cy] by the scaling factor 640/4032 ≈ 0.15873?
2) After scaling comes handling the distortion. Should I use the lensDistortionLookupTable to undistort (rectify) both the image and the depth map?
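For step 1, here is a minimal sketch of how I'd scale the intrinsics. The calibration values below are placeholders, not my real data; since both dimensions share the same 4:3 aspect ratio (640/4032 == 480/3024), a single scale factor should apply to all four parameters:

```python
import numpy as np

# Placeholder intrinsics calibrated at the full 12MP (4032x3024) resolution.
fx, fy, cx, cy = 2700.0, 2700.0, 2016.0, 1512.0

# One factor works for both axes because the aspect ratios match:
# 640/4032 == 480/3024 ~= 0.15873
scale = 640 / 4032

# Intrinsic matrix at the 640x480 depth-map resolution.
K = np.array([
    [fx * scale, 0.0,        cx * scale],
    [0.0,        fy * scale, cy * scale],
    [0.0,        0.0,        1.0],
])
```

Is it correct that the focal lengths and the principal point all scale by the same factor like this?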
Are the above two steps correct, or am I missing something?
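For context, once the intrinsics are scaled (and assuming the distortion is dealt with), this is the standard pinhole back-projection I plan to use for the point cloud. It assumes the depth values are in meters and that [fx, fy, cx, cy] already match the 640x480 depth-map resolution:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map to an (H*W, 3) point cloud.

    Assumes a pinhole model with depth in meters and intrinsics
    expressed at the same resolution as the depth map.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns (x), v along rows (y).
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```

If my two preprocessing steps above are right, the output of this should be a metrically correct point cloud.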