Reconstructing point clouds from AVDepthData (True Depth camera)

Hi,


I've successfully created point clouds from the AVDepthData frames captured with the TrueDepth camera on the iPhone. Now I would like to merge the point clouds.
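For context, this is roughly the per-frame unprojection I'm doing (a sketch, not my exact code; the intrinsics `fx`, `fy`, `cx`, `cy` are assumed to come from `AVCameraCalibrationData.intrinsicMatrix`, scaled down to the depth map's resolution):

```swift
import simd

// Sketch: unproject a depth map into camera-space 3D points using the
// pinhole model. `depth` holds one Float per pixel (metres), row-major.
func unproject(depth: [Float], width: Int, height: Int,
               fx: Float, fy: Float, cx: Float, cy: Float) -> [SIMD3<Float>] {
    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)
    for v in 0..<height {
        for u in 0..<width {
            let z = depth[v * width + u]
            // Skip holes / invalid depth samples.
            guard z.isFinite, z > 0 else { continue }
            let x = (Float(u) - cx) * z / fx
            let y = (Float(v) - cy) * z / fy
            points.append(SIMD3<Float>(x, y, z))
        }
    }
    return points
}
```

These points are in the camera's coordinate frame, which is why two frames taken from different positions don't line up.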

The frames don't line up when I try to merge them, because the device is in a different position each time a frame is captured.


I am wondering if it's possible to transform the points with the help of CoreMotion, using the attitude and the user acceleration, with the first frame as the reference frame.
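Something like the following is what I have in mind (a sketch only: it uses the relative rotation between two `CMAttitude` values and deliberately ignores translation, since I'm not sure double-integrating user acceleration is accurate enough; the sign/transpose convention would also need checking against the actual device and camera axes):

```swift
import simd
import CoreMotion

// Sketch: rotate a frame's points back into the orientation of the
// reference frame, using the relative rotation between two attitudes.
// Translation is NOT handled here.
func alignToReference(points: [SIMD3<Float>],
                      reference: CMAttitude,
                      current: CMAttitude) -> [SIMD3<Float>] {
    // CMAttitude.multiply(byInverseOf:) mutates in place, so work on a copy.
    let relative = current.copy() as! CMAttitude
    relative.multiply(byInverseOf: reference)   // rotation: reference -> current
    let m = relative.rotationMatrix
    // CMRotationMatrix is row-major (m<row><col>); build simd columns.
    let R = simd_float3x3(
        SIMD3<Float>(Float(m.m11), Float(m.m21), Float(m.m31)),
        SIMD3<Float>(Float(m.m12), Float(m.m22), Float(m.m32)),
        SIMD3<Float>(Float(m.m13), Float(m.m23), Float(m.m33)))
    // Apply the inverse rotation (transpose, since R is orthonormal)
    // to express the points in the reference orientation.
    return points.map { R.transpose * $0 }
}
```

Even if the rotation part works, I'd still need the translation between frames from somewhere, which is the part I'm stuck on.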


Has anybody done this before and can give me some tips?


I have seen other apps do it, but I am unsure whether they rely on CoreMotion alone or on some other magic such as feature points.


Thanks

Do you have sample code showing how you achieved this?