
I am currently tracking faces using ARKit. ARKit knows the world coordinates of the face via the `ARFaceAnchor`, and the positions of the user's eyes via the anchor's `leftEyeTransform` and `rightEyeTransform`. I can get the live pixel buffer like this:

```swift
// Convert the current frame's captured pixel buffer to a UIImage.
guard let frame = sceneView.session.currentFrame else { return }
let ciimage = CIImage(cvPixelBuffer: frame.capturedImage)
let context = CIContext(options: nil)
guard let cgImage = context.createCGImage(ciimage, from: ciimage.extent) else { return }
let myImage = UIImage(cgImage: cgImage)
```

How do I combine `frame.capturedImage` with ARKit's internal knowledge of where the user's eyes, nose, etc. are, and create a cropped image of the user's eyes? I am trying to construct images of the user's eyes in real time.
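One possible approach is to project the eye's 3D world position into the captured image's pixel coordinates with `ARCamera.projectPoint(_:orientation:viewportSize:)`, then crop the `CIImage` around that point. Below is a minimal sketch, assuming `faceAnchor` is the current `ARFaceAnchor` and the frame comes from the same session as above; the function name `croppedEyeImage` and the `eyeCropSize` parameter are placeholders, not ARKit API, and the crop size would need tuning for your distance to the face:

```swift
import ARKit
import CoreImage

// Sketch: crop a square region of the captured image around the left eye.
// Assumes the session is running an ARFaceTrackingConfiguration.
func croppedEyeImage(for faceAnchor: ARFaceAnchor,
                     in frame: ARFrame,
                     eyeCropSize: CGFloat = 120) -> CIImage? {
    // Eye transform in world space: face-to-world times eye-to-face.
    let eyeWorldTransform = faceAnchor.transform * faceAnchor.leftEyeTransform
    let eyeWorldPosition = simd_float3(eyeWorldTransform.columns.3.x,
                                       eyeWorldTransform.columns.3.y,
                                       eyeWorldTransform.columns.3.z)

    // Project the 3D eye position into capturedImage pixel coordinates.
    // capturedImage is delivered in the sensor's native (landscape-right)
    // orientation, so project against the buffer's own size and orientation.
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let eyePoint = frame.camera.projectPoint(eyeWorldPosition,
                                             orientation: .landscapeRight,
                                             viewportSize: imageSize)

    // Crop a square centered on the projected point. CIImage's origin is
    // bottom-left while the projected point is top-left, so flip y.
    let cropRect = CGRect(x: eyePoint.x - eyeCropSize / 2,
                          y: imageSize.height - eyePoint.y - eyeCropSize / 2,
                          width: eyeCropSize,
                          height: eyeCropSize)
        .intersection(CGRect(origin: .zero, size: imageSize))
    guard !cropRect.isEmpty else { return nil }

    return CIImage(cvPixelBuffer: frame.capturedImage).cropped(to: cropRect)
}
```

The right eye works the same way with `rightEyeTransform`; the resulting `CIImage` can then be rendered to a `CGImage`/`UIImage` with the same `CIContext` code as above.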
Posted by scm007.