The YOLO11 object detection model I exported to Core ML stopped working in the macOS 15.2 beta.

After updating to the macOS 15.2 beta, the YOLO11 object detection model I exported to Core ML outputs incorrect, abnormal bounding boxes.

It also doesn't work in iOS apps built on a Mac running 15.2.

The same model worked fine on macOS 14.1.

Training a custom YOLO11 model in Python, exporting it to Core ML, and testing it in the Preview tab of the .mlpackage on macOS 15.2 with Xcode 16.0 reproduces the result above.

This is the exact same issue I've had since the first beta of 15.2.

My finding is that the default YOLO11 models supplied by Ultralytics themselves work fine; for me, the issue only affects custom-trained models.

As of now, with the release version of 15.2 installed, models are still acting erratically:

Here is an image of my model running on my friend's Mac with macOS 14.6:

After reading a bunch of related posts, it looks like it might be related to compute units and flexible shapes, which the ANE doesn't support. I bet disabling the MLE5Engine flag is coincidentally equivalent to

config.computeUnits = .cpuAndGPU

I bet the architecture running on the Neural Engine is producing the nonsensical outputs, while the model constrained to CPU/GPU returns normal behavior. Going to test and see if I can confirm.
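
For anyone who wants to try it, here's a minimal sketch of what I mean. The model name "yolo11_custom" and the Vision wrapper are just placeholders for however you load your own model:

import CoreML
import Vision

// Minimal sketch: force Core ML off the Neural Engine so inference
// runs on CPU/GPU only. "yolo11_custom" is a placeholder for the
// compiled model in your app bundle.
func loadModelWithoutANE() throws -> VNCoreMLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU  // skip the ANE entirely
    guard let url = Bundle.main.url(forResource: "yolo11_custom",
                                    withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let mlModel = try MLModel(contentsOf: url, configuration: config)
    return try VNCoreMLModel(for: mlModel)
}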

@rromanchuk Your solution worked! Thank you.

I cannot describe my relief when I found this thread. I also have a YOLO11 model that started behaving poorly in exactly this way sometime between last November and this week.

Changing the computeUnits value didn't do anything, but changing the value of experimentalMLE5EngineUsage did.
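
For reference, since the exact call isn't shown anywhere in this thread: experimentalMLE5EngineUsage isn't public API, so the only way I know to set it is through key-value coding. Treat this as a sketch; the key is undocumented, and whether 0 actually means "disable" is an assumption:

import CoreML

// Sketch only: experimentalMLE5EngineUsage is a private key on
// MLModelConfiguration, set here via key-value coding. The value 0
// is assumed to disable the MLE5 engine; none of this is documented
// and it could break in any OS release.
func loadModelWithMLE5Disabled(at modelURL: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.setValue(0, forKey: "experimentalMLE5EngineUsage")
    return try MLModel(contentsOf: modelURL, configuration: config)
}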

I submitted a bug report with a sample project: FB16502596
