CoreML Model Compilation Off-device

Our mlmodel crashes during compilation only on iOS 16 devices (ML Compiler bug).

As a workaround, we could pre-compile the model, zip it, host it on a server, and have devices download and use the compiled model directly, avoiding on-device compilation (and the crash).
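The device-side half of this workaround might look like the following sketch. It assumes the compiled `.mlmodelc` directory has already been downloaded and unzipped into the app's Application Support directory; the file name and helper function are placeholders, not part of the original question.

```swift
import CoreML
import Foundation

// Hypothetical helper: load a pre-compiled Core ML model (.mlmodelc)
// that was downloaded from a server and unzipped locally.
// "MyModel.mlmodelc" is a placeholder name.
func loadDownloadedModel() throws -> MLModel {
    let supportDir = try FileManager.default.url(
        for: .applicationSupportDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
    )
    let compiledURL = supportDir.appendingPathComponent("MyModel.mlmodelc")

    // A compiled model directory can be loaded directly with
    // MLModel(contentsOf:), so MLModel.compileModel(at:) is never
    // called and the crashing on-device compiler path is avoided.
    return try MLModel(contentsOf: compiledURL)
}
```

Note that `MLModel(contentsOf:)` expects the compiled `.mlmodelc` directory, not the source `.mlmodel`/`.mlpackage`, so the server should host the zipped output of `coremlcompiler`.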

Any foreseeable issues with compiling off-device and using it?


Accepted Answer

Sorry about the inconvenience. This issue has been addressed and a fix should be available for verification in the next iOS beta release.

Off-device compilation is definitely a recommended path, and most apps use this approach. In fact, Xcode does this when you include a model in your project. You can also simply use the coremlcompiler command-line tool from Xcode's toolchain to compile your model off device:

xcrun coremlcompiler compile </path/to/mlmodel/or/mlpackage> </path/to/destination/directory>
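Putting the accepted answer together with the workaround in the question, the server-side packaging step might look like this sketch (requires Xcode's toolchain on macOS; "MyModel.mlpackage" and the output paths are placeholder names):

```shell
# Compile the model off device using Xcode's toolchain.
xcrun coremlcompiler compile MyModel.mlpackage build/

# The output is a .mlmodelc directory; zip it for hosting on a server.
cd build
zip -r MyModel.mlmodelc.zip MyModel.mlmodelc
```

Devices would then download the zip, unzip it, and load the resulting `.mlmodelc` directory directly, skipping on-device compilation entirely.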

Thanks for the update. I assume we can verify the fix in iOS 16.1 beta 6, prior to the public 16.1 release?
