Seeking Guidance on Extracting Point Cloud and Facial Measurements from Object Capture Scans

Hello Apple community,

I am currently working with Object Capture and would appreciate some guidance on extracting specific data from my scans. I have successfully scanned objects, and I now need to obtain point cloud data and facial measurements from those reconstructions.

I have been using the guided capture sample (https://developer.apple.com/documentation/RealityKit/guided-capture-sample) as a reference for my implementation.

Point Cloud:

How can I extract the point cloud data from my Object Capture scans? Are there any specific tools or methods recommended for this purpose?
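For context, here is a rough sketch of the direction I have been trying with PhotogrammetrySession's .pointCloud request (new in iOS 17 / macOS 14, as I understand it). The points / position property names and the PLY export are my own assumptions rather than something I have confirmed against the SDK, so please correct me if the API shape is different:

```swift
import Foundation
import RealityKit

// Rough sketch (iOS 17 / macOS 14 SDK assumed). I am guessing that the
// returned PointCloud exposes its points' positions as `points`/`position`;
// those names are my assumption, not confirmed against the generated interface.
func exportPointCloud(imagesFolder: URL, plyURL: URL) async throws {
    let session = try PhotogrammetrySession(input: imagesFolder)

    // Ask only for the point cloud instead of a model file.
    try session.process(requests: [.pointCloud])

    for try await output in session.outputs {
        switch output {
        case .requestComplete(_, .pointCloud(let cloud)):
            // Dump the positions to a minimal ASCII PLY so I can inspect them elsewhere.
            var ply = "ply\nformat ascii 1.0\n"
            ply += "element vertex \(cloud.points.count)\n" // `points` is assumed
            ply += "property float x\nproperty float y\nproperty float z\nend_header\n"
            for point in cloud.points {
                ply += "\(point.position.x) \(point.position.y) \(point.position.z)\n"
            }
            try ply.write(to: plyURL, atomically: true, encoding: .utf8)

        case .requestError(_, let error):
            throw error

        case .processingComplete:
            return

        default:
            break // progress updates, input notifications, etc.
        }
    }
}
```

Is this the intended way to get the raw points, or is there a recommended alternative (for example, post-processing the exported model in another tool)?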

Facial Measurements:

Is there a way to extract facial measurements accurately using Object Capture? Are there any built-in features or third-party tools that can assist with this?
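To make the "facial measurements" part concrete, this is the kind of computation I have in mind once landmark positions are available on the reconstructed mesh. The landmark coordinates below are placeholder values I made up for illustration; my understanding is that Object Capture output is real-world scale, so distances should come out in meters:

```swift
import simd

// Once two facial landmarks have been located on the reconstructed mesh
// (e.g. vertices picked in a viewer), the measurement itself is just the
// Euclidean distance between their positions.
func measurement(between a: SIMD3<Float>, and b: SIMD3<Float>) -> Float {
    simd_distance(a, b)
}

// Hypothetical landmark positions, only to show the units I expect.
let leftEyeCorner  = SIMD3<Float>(0.031, 0.082, 0.046)
let rightEyeCorner = SIMD3<Float>(-0.030, 0.081, 0.047)
print("Approx. \(measurement(between: leftEyeCorner, and: rightEyeCorner) * 1000) mm")
```

What I am unsure about is the landmark-detection step itself, i.e. how to reliably locate those facial points on the scan in the first place.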
I've explored the documentation, but I would greatly benefit from any insights, tips, or recommended workflows from the community. Your expertise is highly appreciated!

Thank you in advance.
