VNDetectFaceRectanglesRequest does not use the Neural Engine?

I'm on macOS Tahoe 26.1 / M3 MacBook Air. I'm using VNDetectFaceRectanglesRequest as correctly as I can, as in the minimal command-line program attached below. For some reason, I always get:

MLE5Engine is disabled through the configuration

printed. I couldn't find anything in the developer docs saying that VNDetectFaceRectanglesRequest cannot use the Apple Neural Engine. I assume something is wrong with my code, but the documentation gives no hint where the problem might be, and I couldn't find the above message online either. I'd appreciate your help; thank you in advance.


The code below captures video from AVCaptureDevice.DeviceType.builtInWideAngleCamera. Currently it simply picks the 0th format, which on my M3 MBA happens to be the largest resolution (Full HD) with 4:2:0 chroma subsampling in video range ("420v").
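
As an aside, since the function below is named highestResolution420Format but doesn't actually search: a minimal sketch (my own, not part of the program) of what scanning device.formats for the largest "420v" format could look like:

import AVFoundation

// Hypothetical helper: filter device.formats for '420v'
// (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) and keep the widest
// format found, instead of trusting formats[0].
func searchHighestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
    var best: (format: AVCaptureDevice.Format, dims: CMVideoDimensions)?
    for format in device.formats {
        let desc = format.formatDescription
        guard CMFormatDescriptionGetMediaSubType(desc) == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange else { continue }
        let dims = CMVideoFormatDescriptionGetDimensions(desc)
        if best == nil || dims.width > best!.dims.width {
            best = (format, dims)
        }
    }
    guard let best else { return nil }
    return (best.format, CGSize(width: CGFloat(best.dims.width), height: CGFloat(best.dims.height)))
}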

Once frames arrive, it performs a VNDetectFaceRectanglesRequest on each one. It prints "VNDetectFaceRectanglesRequest completion Handler called" many times, then prints the message above, then continues printing "VNDetectFaceRectanglesRequest completion Handler called" until the user quits it.

To run it in Xcode: File > New > Project > macOS > Command Line Tool. Paste in the code below, then click the project root in the navigator > Targets > Signing & Capabilities > Hardened Runtime > Resource Access > Camera.
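
If no frames arrive at all, it may also help to request camera authorization explicitly before starting the session. A minimal preflight sketch using the standard AVCaptureDevice API (not part of the program below):

import AVFoundation

// Block the command-line tool until the user answers the camera prompt;
// startRunning() delivers no frames before access is granted.
if AVCaptureDevice.authorizationStatus(for: .video) == .notDetermined {
    let semaphore = DispatchSemaphore(value: 0)
    AVCaptureDevice.requestAccess(for: .video) { granted in
        print("camera access granted: \(granted)")
        semaphore.signal()
    }
    semaphore.wait()
}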

A possible explanation is that Apple's internal Core ML model behind this request either runs on GPU/CPU only, or doesn't accept the 420v pixel format supplied by the MacBook Air camera.
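
One way to probe this, assuming the macOS 14+ Vision API and without my having verified it changes the log line: a VNRequest can report which compute devices each stage supports, and lets you pin one, via supportedComputeDevices() and setComputeDevice(_:for:). A sketch:

import Vision
import CoreML

let request = VNDetectFaceRectanglesRequest()

// List the compute devices Vision would consider for each stage of this
// request, and try pinning the main stage to the Neural Engine if it is
// offered at all.
do {
    let supported = try request.supportedComputeDevices()
    for (stage, devices) in supported {
        print("stage \(stage): \(devices)")
    }
    if let ane = supported[.main]?.first(where: { $0 is MLNeuralEngineComputeDevice }) {
        request.setComputeDevice(ane, for: .main)
    }
} catch {
    print("could not query compute devices: \(error)")
}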

import AVFoundation
import Vision

var videoDataOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
var detectionRequests: [VNDetectFaceRectanglesRequest]?
var videoDataOutputQueue: DispatchQueue = DispatchQueue(label: "queue")

class XYZ: /*NSViewController or NSObject*/NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    func viewDidLoad() {
        //super.viewDidLoad()
        
        let session = AVCaptureSession()
        let inputDevice = try! self.configureFrontCamera(for: session)
        self.configureVideoDataOutput(for: inputDevice.device, resolution: inputDevice.resolution, captureSession: session)
        self.prepareVisionRequest()
        
        session.startRunning()
    }
    
    fileprivate func highestResolution420Format(for device: AVCaptureDevice) -> (format: AVCaptureDevice.Format, resolution: CGSize)? {
        // For simplicity this takes formats[0] directly; on this M3 MBA that
        // happens to be the highest-resolution 420v format (see note above).
        let deviceFormat = device.formats[0]
        print(deviceFormat)
        let dims = CMVideoFormatDescriptionGetDimensions(deviceFormat.formatDescription)
        let resolution = CGSize(width: CGFloat(dims.width), height: CGFloat(dims.height))
        return (deviceFormat, resolution)
    }
    
    fileprivate func configureFrontCamera(for captureSession: AVCaptureSession) throws -> (device: AVCaptureDevice, resolution: CGSize) {
        let deviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: .video, position: AVCaptureDevice.Position.unspecified)
        let device = deviceDiscoverySession.devices.first!
        let deviceInput = try AVCaptureDeviceInput(device: device)
        captureSession.addInput(deviceInput)
        let highestResolution = self.highestResolution420Format(for: device)!
        try device.lockForConfiguration()
        device.activeFormat = highestResolution.format
        device.unlockForConfiguration()
        
        return (device, highestResolution.resolution)
    }
    
    fileprivate func configureVideoDataOutput(for inputDevice: AVCaptureDevice, resolution: CGSize, captureSession: AVCaptureSession) {
        videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        captureSession.addOutput(videoDataOutput)
    }
  
    
    fileprivate func prepareVisionRequest() {
        let faceDetectionRequest: VNDetectFaceRectanglesRequest = VNDetectFaceRectanglesRequest(completionHandler: { (request, error) in
            print("VNDetectFaceRectanglesRequest completion Handler called")
        })
        
        // Start with detection
        detectionRequests = [faceDetectionRequest]

    }

    // MARK: AVCaptureVideoDataOutputSampleBufferDelegate
    
    // Handle delegate method callback on receiving a sample buffer.
    public func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        var requestHandlerOptions: [VNImageOption: AnyObject] = [:]
        let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
        if cameraIntrinsicData != nil {
            requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
        }

        let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        // No tracking object detected, so perform initial detection
        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                        orientation: CGImagePropertyOrientation.up, options: requestHandlerOptions)

        try! imageRequestHandler.perform(detectionRequests!)
    }
}

let X = XYZ()
X.viewDidLoad()

sleep(9999999) // keep the process alive so the capture delegate keeps firing

