Downloading and Compiling a Model on the User's Device
Distribute Core ML models to the user's device after the app is installed.
Downloading and compiling models on the user's device, rather than bundling them with your app, is beneficial for specific use cases. If your app has a variety of features supported by different models, downloading the models individually in the background can save bandwidth and storage by not forcing the user to download every model at once. Likewise, different locales or regions might use different Core ML models. Or models might be tuned offline for individual users, with updates provided over the air.
Implement Downloading and Compiling in the Background
The model definition file (.mlmodel) must be on the device before it's compiled. Use URLSession, CloudKit, or another networking toolkit to download the model for your app onto the user's device.
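As a minimal sketch of the download step, the example below uses a background URLSession to fetch a model definition file. The identifier, endpoint URL, and destination filename are assumptions for illustration; substitute your own. A background session lets the transfer continue even if the app is suspended.

```swift
import Foundation

// Sketch: download a model definition (.mlmodel) in the background.
// The session identifier and file names here are placeholders.
final class ModelDownloader: NSObject, URLSessionDownloadDelegate {
    private lazy var session: URLSession = {
        let config = URLSessionConfiguration.background(
            withIdentifier: "com.example.model-download")
        return URLSession(configuration: config, delegate: self, delegateQueue: nil)
    }()

    func downloadModel(from url: URL) {
        session.downloadTask(with: url).resume()
    }

    func urlSession(_ session: URLSession,
                    downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // `location` is temporary and deleted after this method returns,
        // so move the file immediately.
        let destination = FileManager.default.temporaryDirectory
            .appendingPathComponent("Model.mlmodel")
        try? FileManager.default.moveItem(at: location, to: destination)
        // Hand `destination` off to the compile step.
    }
}
```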
Call MLModel's compileModel(at:) class method to compile the model definition into a .mlmodelc file, then use the returned URL to initialize an MLModel instance. The compiled model has the same capabilities as a model bundled with the app.
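The compile-and-load step can be sketched as follows. compileModel(at:) returns the URL of the compiled .mlmodelc; compilation can take noticeable time for large models, so call it off the main thread.

```swift
import CoreML

// Sketch: compile a downloaded model definition and load it.
// `modelDefinitionURL` is assumed to point to the downloaded .mlmodel file.
func loadDownloadedModel(at modelDefinitionURL: URL) throws -> MLModel {
    // Compiles the .mlmodel into a .mlmodelc in a temporary location
    // and returns the compiled model's URL.
    let compiledURL = try MLModel.compileModel(at: modelDefinitionURL)
    return try MLModel(contentsOf: compiledURL)
}
```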
Move Reusable Models to a Permanent Location
To limit bandwidth use, avoid repeating the download and compile steps when possible. The model is compiled to a temporary location, which the system may delete. If the compiled model can be reused, move it to a permanent location, such as your app's Application Support directory.
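This relocation step might look like the sketch below, which moves the compiled model from its temporary location into the Application Support directory. The function name is illustrative, not part of any API.

```swift
import Foundation

// Sketch: move a compiled model (.mlmodelc) to a permanent location
// so the download and compile steps don't have to be repeated.
func moveToPermanentLocation(compiledURL: URL) throws -> URL {
    let fileManager = FileManager.default
    let supportDirectory = try fileManager.url(for: .applicationSupportDirectory,
                                               in: .userDomainMask,
                                               appropriateFor: nil,
                                               create: true)
    let permanentURL = supportDirectory
        .appendingPathComponent(compiledURL.lastPathComponent)
    // Replace any existing copy of the model at the destination.
    _ = try fileManager.replaceItemAt(permanentURL, withItemAt: compiledURL)
    return permanentURL
}
```

On a later launch, check for the model at the permanent location first and initialize MLModel from it directly, falling back to download and compile only when it's missing.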