Our .mlmodel crashes during on-device compilation, but only on iOS 16 devices (an ML compiler bug).
As a workaround, we could pre-compile the model, zip it, host it on a server, and have devices download and use the compiled model directly, avoiding on-device compilation (and the crash).
Are there any foreseeable issues with compiling off-device and using the result?
Sorry about the inconvenience. This issue has been addressed and a fix should be available for verification in the next iOS beta release.
Off-device compilation is definitely a recommended path, and most apps use this approach. In fact, Xcode does this automatically when you include a model in your project. You can also use the coremlcompiler command-line tool from Xcode's toolchain to compile your model off-device:

xcrun coremlcompiler compile </path/to/mlmodel/or/mlpackage> </path/to/destination/directory>
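On the device side, the downloaded archive just needs to be unzipped and then loaded with MLModel(contentsOf:), which expects a compiled .mlmodelc directory (not a raw .mlmodel). A minimal sketch, assuming a hosting URL and an unzip helper that are both placeholders — Foundation has no built-in unzip, so you would use a third-party library or your own implementation:

```swift
import CoreML
import Foundation

// Hypothetical URL where the pre-compiled, zipped .mlmodelc is hosted.
let modelArchiveURL = URL(string: "https://example.com/MyModel.mlmodelc.zip")!

func loadDownloadedModel(completion: @escaping (MLModel?) -> Void) {
    URLSession.shared.downloadTask(with: modelArchiveURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else {
            completion(nil)
            return
        }
        do {
            // unzipModelArchive is a hypothetical helper; assume it extracts
            // the archive and returns the URL of the .mlmodelc directory.
            let compiledModelURL = try unzipModelArchive(at: tempURL)
            // Load the already-compiled model directly; no on-device
            // compilation happens here, so the iOS 16 compiler bug is avoided.
            let model = try MLModel(contentsOf: compiledModelURL)
            completion(model)
        } catch {
            completion(nil)
        }
    }.resume()
}
```

Note that the downloaded file lands in a temporary location, so in practice you would move the extracted .mlmodelc into a permanent directory (for example, Application Support) before loading it, so it survives across launches and you only download it once.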