Discover how to deploy Core ML models outside of your app binary, giving you greater flexibility and control when bringing machine learning features to your app. And learn how Core ML Model Deployment enables you to deliver revised models to your app without requiring an app update. We'll also walk you through how you can protect custom machine learning models through encryption, and preview your model performance in Xcode.
For more information on working with Core ML, including bringing over models trained in environments like TensorFlow and PyTorch, we also recommend watching "Get your models on device using Core ML Converters."
♪ Voiceover: Hello, and welcome to WWDC. Anil Katti: Hello, my name is Anil Katti, and I'm excited to share with you some of the amazing new features we have introduced this year in Core ML. Core ML makes it easy for you to seamlessly integrate machine learning into your app, unlocking the door to countless amazing experiences for your users. Your Core ML model is at the heart of what makes all these experiences possible. Today we are introducing new features that are centered around this model. In this session we are going to cover a few topics, from a new way to deploy your models, to encrypting them, and some enhancements in Xcode. Let's get started. We designed Core ML with your app development workflow in mind. Once you have a model, you can integrate it into your app by simply dragging and dropping it into your Xcode project. Xcode compiles your model into an optimal format for running on device. When you submit your app, the compiled model is bundled with your app, and together, they go to the App Store and then to your users' devices. Bundling a model with your app works great, and we highly recommend doing it. That way, the model is readily available as soon as the user installs the app. But there are scenarios where you might need more flexibility and control over the model delivery process. Today we are introducing Core ML Model Deployment to give you that flexibility. Core ML Model Deployment provides a new way to deliver models to your apps. Using the model deployment dashboard, models can be stored, managed, and deployed via Apple Cloud. Devices periodically retrieve updates as they become available. Model deployment gives you the ability to develop and deploy models independent of the app update cycle, a new way to group and manage models that work together, and the option to target models to specific device populations. We are really excited about these features, so let's explore each one in more detail. Let's start with independent development.
Typically when you retrain your models, you probably plan to bundle them with your next app update. But what if you are updating your models at a different pace than your app? In the past, you would have to push more app updates just to get the newer models into your users' hands. Now, with model deployment, you can quickly and easily update your models without updating the app itself. Next up: Model Collections. Model collections are a great way to group one or more models that relate to a feature in your app. For example, let's say you're working on a game, and one of its levels needs a set of models to support a feature. You can keep these models in sync by grouping them into a single model collection. When you deploy a model collection, that deployment keeps all the models within a collection together and atomically delivers them to your app. I'm eager to show all this in action. But before we jump into the demo, let me describe how we would go about adopting this feature. There are three steps in adopting model deployment. The first step is to use the new Core ML API to opt in to model deployment. Next, prepare the model for deployment by creating a model archive. You can do this in Xcode. And the last step is to upload the prepared model and deploy it on the model deployment dashboard. You would only need to repeat steps two and three every time you update your model. Let's take a closer look at these steps with a demo. I have a simple app that classifies flowers and applies cool visual effects when you double-tap on a picture. It uses two Core ML models: an image classifier and a style transfer model. I chose to integrate model deployment so I could decouple updating models from the app update process and iterate on models independently. So the first step is to use the new API to opt in to model deployment. Let's see how to do that in Xcode. I have my Xcode project open.
Although this app does a couple of things, for this demo, let's just focus on the flower classification feature. Here is the key method we are going to implement today. It classifies the flower in a given image and updates the UI with the class label. The first thing I want to do when this method is invoked is create an instance of flower classifier and store it in a variable. This would allow me to access the preloaded model in subsequent calls and not have to load the model every single time. So let me get started by checking for the preloaded case.
Here I check to see if the variable is already set, and if so, use that to classify the image and return immediately. Next, let's implement the logic to load the model. Recall that models are grouped into collections. I will show how to create and deploy a model collection on the new web dashboard in a few minutes. But here, I am interested in accessing a collection that was already deployed. I can do that by simply calling the beginAccessing method on MLModelCollection.
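The opt-in call the demo describes can be sketched as follows. This is a minimal sketch, assuming the autogenerated FlowerClassifier class from the demo app; the function name and the handleCollectionResult helper are illustrative placeholders for the code shown later in the session.

```swift
import CoreML

// Cached model instance, so subsequent calls skip loading (per the demo).
var flowerClassifier: FlowerClassifier?

func accessDeployedModels() {
    // The first call for this identifier downloads every model in the
    // collection on a background queue and registers for future updates.
    let progress = MLModelCollection.beginAccessing(identifier: "FlowerModels") { result in
        handleCollectionResult(result)  // assumed helper; result handling comes next
    }
    // The returned Progress object can drive a download indicator in the UI.
    print("Download progress: \(progress.fractionCompleted)")
}
```

Note that the identifier string "FlowerModels" must match the model collection name created on the deployment dashboard.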
The first time this method is called for a model collection with a given identifier, the system downloads all models in that collection on a background queue and registers for future updates to those models. You can monitor the download using the progress object. Here, the model collection identifier we are using is "FlowerModels." Let's keep a note of that, since it is required when creating the model collection later. This method also returns an instance of MLModelCollection asynchronously as a result type. Let's see how we can access the downloaded models from this result.
When this operation succeeds, we get an instance of MLModelCollection, which contains model collection entries. There's one entry per model. I can access the model collection entry for our "FlowerClassifier" model by using the model name as the key. Inside the entry is a model URL that points to the compiled model, which was downloaded and placed in the app's container by the system. Accessing a model collection could fail. For instance, if there is no network connectivity the first time this is called, downloading the models will fail. It is important to handle the failure case. In this example, we simply log the error and fall back to using the bundled model. Next, let's implement the logic to load the model.
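A hedged sketch of handling that result: the entry key "FlowerClassifier" matches the demo, while handleCollectionResult and loadModel(from:) are assumed names standing in for the completion handler and the loading helper the session describes.

```swift
import CoreML

func handleCollectionResult(_ result: Result<MLModelCollection, Error>) {
    switch result {
    case .success(let collection):
        // One entry per model: the model name is the dictionary key, and
        // modelURL points to the compiled model in the app's container.
        let deployedURL = collection.entries["FlowerClassifier"]?.modelURL
        loadModel(from: deployedURL)
    case .failure(let error):
        // For example, no network connectivity on the very first access:
        // log the error and fall back to the model bundled with the app.
        print("Model collection unavailable: \(error)")
        loadModel(from: nil)
    }
}
```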
I implemented a simple helper method to load the model. It takes an optional model URL as input and uses that to load the flower classifier model. If the URL is nil, it just falls back and loads the model that is bundled with the app. The last step is to use this flower classifier to classify the image.
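The helper could look like the following sketch; classify(with:) and handleModelLoadError are assumed names for the app's follow-up steps, which the session describes but does not show in full.

```swift
import CoreML

func loadModel(from modelURL: URL?) {
    do {
        if let url = modelURL {
            // A deployed model is already compiled; load it from its URL.
            classify(with: try FlowerClassifier(contentsOf: url))
        } else {
            // Fall back to the flower classifier bundled with the app.
            classify(with: try FlowerClassifier(configuration: MLModelConfiguration()))
        }
    } catch {
        handleModelLoadError(error)  // assumed helper: shows a suitable error message
    }
}
```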
Here's the code for that. When the model loading succeeds, we use the loaded model to classify the image and also store it in the variable for subsequent classification requests. We have a separate method to handle model load failures, which in this case displays a suitable error message to the user. This is all the code that's required to prepare the app for model deployment. Now, how can I prepare the model itself for deployment? Well, I can use Xcode for that. If you click on the model in Xcode, you'll see that we have introduced a new Utilities tab this year, which has a section for model deployment. I can prepare a model for deployment by creating a model archive. OK, Xcode created "FlowerClassifier.mlarchive" and saved it to disk. I can locate the file by clicking here. It is placed right next to our original model file. The last step is to deploy this model on the model deployment dashboard. Xcode provides a button that takes you right to the model deployment dashboard in Safari. The first thing you'll notice on the dashboard is that we have a way to create a model collection. Let's start by creating a new model collection for our flower models and name it FlowerModels. This should match what was specified in the app.
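Putting the classify-and-cache step into code might look like this sketch. The property currentPixelBuffer, the helpers updateUI and handleModelLoadError, and the feature names image and classLabel in the autogenerated interface are all assumptions, not code from the session.

```swift
import CoreML

func classify(with model: FlowerClassifier) {
    flowerClassifier = model  // cache for subsequent classification requests
    // currentPixelBuffer is an assumed property holding the picked image
    // as a CVPixelBuffer.
    guard let pixelBuffer = currentPixelBuffer else { return }
    do {
        // `image` and `classLabel` are assumed feature names for this
        // particular image classifier's autogenerated interface.
        let output = try model.prediction(image: pixelBuffer)
        updateUI(with: output.classLabel)
    } catch {
        handleModelLoadError(error)  // assumed helper: shows an error message
    }
}
```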
Next, let's provide a nice description, something like "a collection of models built for flower images," and then specify the two models that we plan to include in this collection: first the FlowerClassifier, and then the FlowerStylizer. Then click on the Create button. Next, I can deploy the models by creating a new deployment. I will call it Global Deployment, since I intend to deploy these models to my app on all devices. I can upload the model archives I created against each of these model names. Let me pick the flower classifier first and then the stylizer, and click on the Deploy button on the top right.
On the model collection page, I can verify that the deployment I just created is in the Active status. So this looks good. These models are made available on all devices running the version of my app that uses this model collection. So let me launch the app and try classifying flowers now.
I picked a dahlia and see that it's been classified as dahlia. Let me try a hibiscus this time. And the app seems to get that right as well. So the model that I used here was only trained on three different classes. And rose was not one of them. Let me try a rose image and see what happens. As expected the app does not seem to recognize rose. Let's say I want to enhance my app to classify rose. Since I've already adopted model deployment in this app, I can easily add this feature by deploying a model retrained with rose images using model deployment. For the sake of this demo, I've already prepared the updated model for deployment. Let's go to the deployment dashboard and see how to update a model. I can deploy the updated model by creating a new deployment. This time I will pick the improved classifier and the same stylizer and click on the deploy button.
Like before, these updated models are made available to my app on all devices, but it might not happen immediately. Each device figures out the right time to download the model in the background and makes it available to the app on its next launch. Let's see what would happen when my app gets launched after the OS has synced the updated models to the device. Picking the same rose image that was not classified previously, I see that it is getting classified as rose this time. So without changing a single line of code, we have enhanced the user experience by improving the model, and I think this is really cool! Next let's talk about targeted deployments. Typically when your app is young, it might have only one set of models for all your users. However as your app evolves, it might make more sense for you to use specialized models for different populations. One solution is simply to bundle all models in your app and use the right one at runtime. But it's not very efficient. And what if the model is large, and you want to keep the app download size small? With targeted deployments, you can define rules, so that each user's device only gets the models it needs. Let's see how to set one of these up. Recently I made an improvement to our flower classifier, but specifically made it for iPads. So it turns out the camera angle and the light setting in images taken on an iPad are slightly different from those on an iPhone. Although I want to use a new model on iPads, the existing model works just fine on all other devices. So let's take a look at how we can target the iPad-specific model to just iPads. Back on the dashboard, I can either clone an existing deployment or create a new one. For now I'll create a new one and call it Targeting iPads. First, I get to pick the default models for this deployment. These models will be delivered on all devices that do not match the deployment rules.
After that I can add an additional targeting rule. You can see all the different criteria I can target. I'll pick the device class and select iPad. So for this targeting rule, I get to pick the iPad-specific models.
I can now deploy these models. Devices pull the right model based on the device class. No additional change is required in the app. We showed you how easy it is to use Core ML Model Deployment for some interesting use cases. Here are some things you should keep in mind as you use this feature. First, we strongly suggest testing each model locally before uploading it to the deployment dashboard. This will ensure that your users do not end up with models that don't work. Next, be aware that the models you deploy won't be available on devices right away. Each device's system decides when to download the model in the background. Therefore, your app should always have a fallback plan for when the deployed models are not available. Lastly, Model Collections provide a convenient way to organize models. We recommend grouping your Model Collections around each of your app's features. So to summarize what we've shown you: Core ML Model Deployment is a new way to deliver models to your users, independent of app updates. Model Collections give you a new way to group and work with your models, and targeted deployments are super helpful when you want to deploy models to specific populations. We think these features will give you more flexibility and make it more convenient to deliver models to your users. I'll hand it over to John Durand, and he can tell us more about some other exciting features we have for you this year. John Durand: Thank you, Anil. Hi, everyone. And thank you for joining us. We're really excited to share these features with you, and we can't wait to see the amazing apps you'll build with them. So far we've covered some new and faster ways to get your models to your users. Now let's take a look at how you can encrypt those models. With Model Encryption, you can distribute your models knowing that they are encrypted at rest and in transit. Let's take a look at what that means and how it works.
Whether your model is bundled with your app or deployed through the cloud, what Xcode actually encrypts is the compiled Core ML model. This compiled model, not the original .mlmodel file, is what actually lives inside your app on a user's device. So how do you encrypt this compiled model? Easy: with Xcode. The first thing you'll need is an encryption key. I'll show you how to make a key in a few minutes. With the key, you can tell Xcode to encrypt your model at build time, or you can use the key to encrypt the model archive when you generate it. Either way, the compiled model stays encrypted in transit and at rest on a user's device. So how do you use an encrypted model at runtime? Well, you use an encrypted model just as you would a regular one. Core ML automatically decrypts the model for you. But the decrypted model only exists in memory. The compiled model in the file system remains encrypted at all times. The first time you load your encrypted model, the OS securely fetches and securely stores the model's decryption key on your app's behalf. After that, your app won't need a network connection to load the same model again. Let's jump into Xcode and see just how easy it is to adopt model encryption. I'll go back to the same app that Anil was using before. Let's say I want to encrypt our image classification model. The first thing I need to do is create an encryption key. To do that, I'll open up the model in Xcode, go over to the Utilities tab, and click on Create Encryption Key. What we see here is that Xcode associates encryption keys with a team, so it's important to choose the same team account that you use for building your app. When I click on Continue, Xcode sends a request to Apple Cloud, which generates a new encryption key. The server securely stores the key and sends a copy back to Xcode. So now we can see that Xcode generated a new file with the extension .mlmodelkey and has dropped it right next to my Core ML model.
So here you can see we have the .mlmodel file and the .mlmodelkey file. OK, great. Now we have the key. We can take this key and share it in our team's secure key repository so that other developers on our team can use it. Now that we have a key, we can use it to encrypt our model. First, let's look at the scenario Anil showed earlier, where he deployed the model with Core ML Model Deployment. This time, when I go to create a model archive, Xcode preselects the .mlmodelkey sitting beside my model. Let's click Continue, and we'll see that Xcode generates a model archive just like before, but this time it encrypts the contents of the archive. The remaining steps are exactly the same as before. I can simply deploy this encrypted model archive by uploading it to the Core ML Model Deployment dashboard, as Anil did earlier. OK, so that's great for deployments. But what if I want to bundle the model within the app itself? Well, there's a way to encrypt those models too. In this app, I have a second model called FlowerStylizer. I'll run the app again so we can see what this does. When I pick an image and then double-tap it, we see that a cool style transfer effect gets applied. Looking good. All right. If you'd like to build one of these models yourself, you can check out the session called "Build Image and Video Style Transfer Models in Create ML." Now, this model is packaged with my app bundle. If I want to encrypt it, I have to generate a key for it and then tell Xcode to encrypt the model at build time. First, I'll generate a key for this model. And we see that FlowerStylizer.mlmodelkey has been saved to disk. Now what I need to do is tell Xcode to use this key for encryption at build time. I'm going to grab these compiler flags, go over to my project properties, Build Phases, Compile Sources, and look for my model. There it is.
I've got FlowerStylizer.mlmodel, and I'm going to go over to Compiler Flags and paste in --encrypt followed by the path to the .mlmodelkey file on disk. Once we've done that, we can go ahead and build again. Now, each time I build the app, the Core ML compiler will compile the model and then encrypt it with the model's encryption key. That means the app now has this encrypted compiled model built in. OK, so now let's look at how we can load the encrypted model. If we go over here, we see we have a function called stylizeImage. Since model loading needs a network connection the first time, we've introduced an asynchronous model loading method. And that's right here: FlowerStylizer.load. We're going to deprecate the default initializer, and we strongly recommend switching to the new load method, as it gives you an opportunity to handle model load errors. Load automatically fetches the key from Apple Cloud and works just like you'd expect it to. Even though Core ML needs network access to get the key the first time, it won't ever need to fetch that key again. For example, let's say you close your app and later launch it again. This time, Core ML doesn't make a network request, because the OS securely stored the key locally. You can see we're getting back a Result that contains the model, and we can unwrap it in the success case, at which point we have our model and can continue stylizing as before. We also have a failure case, for which we have a helper method. And you'll notice here, we specifically trap for a modelKeyFetch error. In that case, we may want to let the user know that they need network connectivity, or to try again later. And that's it. Adopting Model Encryption really is that simple. So to recap, you can use Model Encryption in three easy steps. First, generate an encryption key with Xcode. Next, encrypt your model with the encryption key.
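The asynchronous load-and-unwrap flow described above might be sketched as follows. The helpers applyStyle(with:) and showAlert are assumptions, and the error case written here as MLModelError code .modelDecryptionKeyFetch is my best guess at the key-fetch error the session mentions; treat the exact case name as an assumption to verify against the current Core ML headers.

```swift
import CoreML

func stylizeImage() {
    // load fetches the decryption key from Apple Cloud the first time;
    // afterwards the OS stores the key securely on device.
    FlowerStylizer.load { result in
        switch result {
        case .success(let model):
            applyStyle(with: model)  // assumed helper: continue stylizing as before
        case .failure(let error):
            handleStylizerLoadError(error)
        }
    }
}

func handleStylizerLoadError(_ error: Error) {
    // Trap the key-fetch failure specifically: on first launch the user
    // may simply need network connectivity. (Error case name assumed.)
    if let mlError = error as? MLModelError, mlError.code == .modelDecryptionKeyFetch {
        showAlert("A network connection is needed to unlock this feature. Please try again later.")
    } else {
        showAlert("Could not load the model: \(error.localizedDescription)")
    }
}
```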
And finally, load and use the encrypted model on the device. Create, encrypt, load. It's that simple. Model Encryption works seamlessly whether you bundle a model in your app or send one through a Model Deployment. All you have to do is create a key, encrypt your model, and call load. We'll handle the rest of it, so you can focus on making a great app. Before we finish, we have a few more exciting Xcode updates for Core ML that we'd like to share with you. During our demos, you may have noticed that Xcode now shows you a lot more information about your model. At a glance, you can now see which OS versions a model supports, its class labels, and even some internal neural network details. Another new feature we think is particularly useful, not to mention fun, is the interactive model preview. Now you can interactively preview and experiment with your model directly in Xcode before you write a single line of code. We support previews for a wide variety of models, such as image segmentation, pose detection, depth estimation, and many more, including all the models you can train with Create ML. The full list of preview types is listed here, and we encourage you to explore the models available on developer.apple.com. We are also happy to announce that Core ML models are now first-class citizens in Xcode Playgrounds. Simply drag and drop a Core ML model into your Resources folder. Now your playground gets the same autogenerated class interface as an Xcode project. This allows you to programmatically experiment with your model in a live, interactive coding session. Playgrounds are also a great way to share your model demonstrations with friends and colleagues. We've covered a lot of ground today, so let's do a quick recap. We introduced Core ML Model Deployment, which allows you to deliver Core ML models independently from your app. This makes it really easy to quickly distribute and target your models as you improve them.
We think this will greatly accelerate the app development process and how quickly you can adopt machine learning. We also introduced Model Encryption, which protects your models in transit and at rest, without you having to set up your own hosting and key management. Now you can encrypt Core ML models whether they ship with your app or as part of a Model Deployment. And finally, we added some Core ML enhancements to Xcode, making it easier for you to understand, preview, and interact with your models. Thank you for watching our session, and enjoy the rest of WWDC! ♪