I am developing an AR face experience that requires accurate detection of facial landmarks. Currently I have to paint directly onto the face mesh to render a texture on the face, which becomes inaccurate when tried with different types of faces.
To give an example, I currently paint a lipstick texture using the default face texture, but the lips don't render correctly on different faces.
It would be nice if ARKit could provide an API for the face mesh that lets us query specific landmarks. For example, faceMesh.vertices.lips should return an array of the vertices that belong to the lips.
Can something like this happen in the future?
I know the Vision framework currently provides facial landmarks, and I could always unproject these 2D points onto the 3D face mesh to find the vertices that correspond to the lips, but something like this should be readily available in the ARKit API.
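To illustrate the workaround, here is a minimal sketch of matching Vision's 2D lip landmarks against projected face-mesh vertices. The helper name `lipVertexIndices` and the distance-tolerance approach are my own assumptions, not an official API, and the coordinate handling (Vision's bottom-left image origin vs. the projection's coordinate space) would need careful adjustment in a real app:

```swift
import ARKit
import Vision

// Hypothetical helper: returns indices of face-mesh vertices whose
// projection into the captured image falls near Vision's outer-lip
// landmark points. This is a sketch, not production code.
func lipVertexIndices(frame: ARFrame,
                      faceAnchor: ARFaceAnchor,
                      observation: VNFaceObservation,
                      tolerance: CGFloat = 8) -> [Int] {
    guard let lips = observation.landmarks?.outerLips else { return [] }

    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    // Vision returns normalized landmark points; convert to pixel coordinates.
    // Note: Vision's origin is bottom-left, so a Y-flip may be needed
    // before comparing against projected points.
    let lipPoints = lips.pointsInImage(imageSize: imageSize)

    var indices: [Int] = []
    for (i, vertex) in faceAnchor.geometry.vertices.enumerated() {
        // Vertices are in the anchor's local space; move to world space first.
        let world = faceAnchor.transform * simd_float4(vertex, 1)
        let projected = frame.camera.projectPoint(
            simd_float3(world.x, world.y, world.z),
            orientation: .portrait,
            viewportSize: imageSize)
        if lipPoints.contains(where: { hypot($0.x - projected.x,
                                             $0.y - projected.y) < tolerance }) {
            indices.append(i)
        }
    }
    return indices
}
```

Even if this works, it is per-frame, approximate, and sensitive to tolerance and orientation, which is exactly why a semantic vertex-group API built into ARFaceGeometry would be preferable.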