Streaming is available in most browsers and in the Developer app.
-
Use model deployment and security with Core ML
Discover how to deploy Core ML models outside of your app binary, giving you greater flexibility and control when bringing machine learning features to your app. And learn how Core ML Model Deployment enables you to deliver revised models to your app without requiring an app update. We'll also walk you through how you can protect custom machine learning models through encryption, and preview your model performance in Xcode. For more information on working with Core ML, including bringing over models trained in environments like TensorFlow and PyTorch, we also recommend watching "Get your models on device using Core ML Converters."
Hello and welcome to WWDC.
Hello. My name is Anil Katti and I'm excited to share with you some of the amazing new features we've introduced this year in Core ML. Core ML makes it easy for you to seamlessly integrate machine learning into your app, unlocking the door to countless amazing experiences for your users. Your Core ML model is at the heart of what makes all these experiences possible. Today, we are introducing new features that are centered around this model. In this session, we're going to cover a few topics, from a new way to deploy your models, to encrypting them, and some enhancements in Xcode. Let's get started.

We designed Core ML with your app development workflow in mind. Once you have a model, you can integrate it into your app by simply dragging and dropping it into your Xcode project. Xcode compiles your model into an optimal format for running on-device. When you submit your app, the compiled model is bundled with your app. Together, they go to the App Store and then to your users' devices. Bundling a model with your app works great and we highly recommend doing it. That way, the model is readily available as soon as a user installs the app. There are scenarios where you might need more flexibility and control over the model delivery process. Today, we're introducing Core ML Model Deployment to give you that flexibility.

Core ML Model Deployment provides you a new way to deliver models to your apps. Using the Model Deployment dashboard, models can be stored, managed and deployed via Apple cloud. Devices periodically retrieve updates as they become available. Model Deployment gives you the ability to develop and deploy models independent of the app update cycle, a new way to group and manage models that work together, and the option to target models to specific device populations. We're really excited about these features, so let's explore each one in more detail. Let's start with independent development.
Typically, when you retrain your models, you probably plan to bundle them with your next app update.
But what if you're updating your models at a different pace than your app? In the past, you'd have to push more app updates just to get the newer models in your users' hands.
Now, with Model Deployment, you can quickly and easily update your models without updating the app itself. Next up: model collections.
Model collections are a great way to group one or more models that relate to a feature in your app. For example, let's say you're working on a game and one of its levels needs a set of models to support a feature. You can keep these models in sync by grouping them into a single model collection. When you deploy a model collection, that deployment keeps all the models within a collection together and atomically delivers them to your app. I'm eager to show all this in action, but before we jump into the demo, let me describe how we would go about adopting this feature. So, there are three steps in adopting Model Deployment. The first step is to use the new Core ML API to opt-in for Model Deployment. Next, prepare the model for deployment by creating a model archive. You can do this in Xcode.
The last step is to upload the prepared model and deploy on the Model Deployment dashboard. You will only need to repeat steps two and three every time you update your model. Let's take a closer look at these steps with a demo. I have a simple app that classifies flowers and applies cool visual effects when you double-tap on a picture. It uses two Core ML models: an image classifier and a style transfer model.
I chose to integrate Model Deployment so I could decouple updating models from the app update process and iterate on models independently. So, the first step is to use the new API to opt-in for Model Deployment. Let's see how to do that in Xcode. I have my Xcode project open. Although this app does a couple of things, for this demo, let's just focus on the flower classification feature. Here is the key method we are going to implement today. It classifies the flower in a given image and updates the UI with the class label. The first thing I want to do when this method is invoked is create an instance of FlowerClassifier and store it in a variable. This would allow me to access the preloaded model in subsequent calls and not have to load the model every single time.
So, let me get started by checking for the preloaded case.
Here, I check to see if the variable is already set and if so, use that to classify the image and return immediately. Next, let's implement the logic to load the model.
Recall that models are grouped into collections. I will show how to create and deploy model collections on the new dashboard in a few minutes. But here, I'm interested in accessing the collection that was already deployed. I can do that by simply calling the beginAccessing method on MLModelCollection.
The first time this method is called for a model collection with an identifier, the system downloads all models in that collection on a background queue and registers for future updates to those models. You can monitor the download using the progress object.
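As a minimal sketch of what that could look like in code, assuming the "FlowerModels" collection used in this demo (the print statements are placeholders; the full classifier listing appears in the code section below), the Progress object returned by beginAccessing can be observed like this:

import CoreML

// Begin accessing the deployed collection; the returned Progress object
// tracks the initial background download of the collection's models.
let progress = MLModelCollection.beginAccessing(identifier: "FlowerModels") { result in
    switch result {
    case .success(let collection):
        print("Models available:", collection.entries.keys.sorted())
    case .failure(let error):
        print("Model collection unavailable:", error)
    }
}

// Observe the download, for example to drive a progress indicator.
// Keep the observation alive for as long as you need updates.
let observation = progress.observe(\.fractionCompleted) { progress, _ in
    print("Download progress: \(progress.fractionCompleted)")
}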
Here, the model collection identifier we are using is "FlowerModels." Let's keep a note of that, since it's required when we create the model collection later.
This method also returns an instance of MLModelCollection asynchronously as a result type. Let's see how we can access the downloaded models from this result.
When this operation succeeds, we get an instance of model collection which contains model collection entries. There's one entry per model. I can access the model collection entry for our FlowerClassifier model using the model name as the key. Inside the entry is a model URL that points to the compiled model which was downloaded and placed in the app's container by the system. Accessing a model collection could fail. For instance, if there is no network connectivity the first time this is called, downloading models will fail. It is important to handle the failure case. In this example, we simply log the error and fall back to using the bundled model. Next, let's implement the logic to load the model.
I implemented a simple helper method to load the model. It takes an optional modelURL as input and uses that to load the FlowerClassifier model. If the URL is nil, it just falls back and loads the model that is bundled with the app.
The last step is to use this FlowerClassifier for classifying the image.
Here's the code for that. When the model loading succeeds, we use the loaded model to classify the image and also store it in the variable for subsequent classification requests. We have a separate method to handle the model load failures, which, in this case, displays a suitable error message to the user.
This is all the code that's required to prepare the app for Model Deployment. Now, how can I prepare the model itself for deployment? Well, I can do that in Xcode. If you click on the model in Xcode, you'll see that we have introduced a new utilities tab this year which has a section for Model Deployment.
I can prepare a model for deployment by creating a model archive.
Okay, Xcode created FlowerClassifier.mlarchive and saved it to disk. I can locate the file by clicking here.
It is placed right next to our original model file. The last step is to deploy this model on the Model Deployment dashboard. Xcode provides a button that takes you right to the Model Deployment dashboard in Safari. The first thing you'll notice on the dashboard is that we have a way to create a model collection. I'll start by creating a new model collection for our flower models and name it "FlowerModels." This should match what was specified in the app.
Next, let's provide a nice description. Something like...
"collection of models built for flower images." And then specify the two models that we plan to collect in this collection, the first being a FlowerClassfier.
And then we have a FlowerStylizer.
And then click on the create button.
Next, I can deploy the models by creating a new deployment. I will call it Global Deployment since I intend to deploy these models to my app on all devices.
I can upload the model archives I created against each of these model names. Let me pick the FlowerClassifier first and then the Stylizer and click on the deploy button on the top right.
On the model collection page, I can verify that the deployment I just created is active. So this looks good. These models are made available on all devices running the version of my app that uses this model collection. Let me launch the app and try classifying flowers now.
I picked a dahlia and see that it's being classified as a dahlia. Let me try a hibiscus this time.
The app seems to get that right as well. The model that I used here was only trained on three different classes and rose was not one of them. Let me try a rose image and see what happens.
As expected, the app does not seem to recognize rose.
Let's say I want to enhance my app to classify rose. Since I've already adopted Model Deployment in this app, I can easily add this feature by deploying a model retrained with rose images using Model Deployment. For the sake of this demo, I've already prepared the updated model for deployment.
Let's go to the deployment dashboard and see how to update a model. I can deploy the updated model by creating a new deployment. This time, I will pick the improved classifier and the same stylizer and click on the deploy button.
Like before, these updated models are made available to my app on all devices, but it might not happen immediately. Each device figures out the right time to download the model in the background and makes it available to the app on its next launch. Let's see what will happen when my app gets launched after the OS has synced the updated models to the device. Picking the same rose image that was not classified previously, I see that it is getting classified as a rose this time.
So without changing a single line of code, we've enhanced the user experience by improving the model, and I think this is really cool. Next, let's talk about targeted deployments.
Typically, when your app is young, it might have only one set of models for all your users. However, as your app evolves, it might make more sense for you to use specialized models for different populations. One solution is simply to bundle all models in your app and use the right one at the runtime, but it's not very efficient. And what if the models are large and you want to keep the app download size small? With targeted deployments, you can define rules so that each user's device only gets the model it needs.
Let's see how to set one of these up. Recently, I made an improvement to our FlowerClassifier, but specifically made it for iPads. It turns out the camera angle and the light setting in images taken on an iPad are slightly different from those on an iPhone. Although I want to use a new model on iPads, the existing model works just fine on all other devices. Let's take a look at how we can target the iPad-specific model to just iPads.
Back on the dashboard, I can either clone an existing deployment or create a new one. For now, I'll create a new one and call it Targeting iPads.
First, I get to pick the default models for this deployment. These models will be delivered on all devices that do not match the deployment rules.
After that, I can add an additional targeting rule. You can see all the different criteria I can target. I'll pick the device class and select iPad. So for this targeting rule, I get to pick the iPad-specific models.
I can now deploy these models. Devices pull the right model based on the device class. No additional change required in the app. We showed you how easy it is to use Core ML Model Deployment to help with some interesting use cases. Here are some things that you should keep in mind as you use this feature. First, we strongly suggest testing each model locally before uploading it on the Deployment dashboard. This will ensure that your users do not end up with models that don't work.
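As a minimal sketch of such a local check, assuming a source model at an illustrative path (in practice you would also run a few sample predictions against it), you could compile and load the model to confirm it exposes the interface your app expects before creating the archive:

import CoreML

func smokeTestModel(at sourceURL: URL) {
    do {
        // Compile the .mlmodel into the on-device format, then load it.
        let compiledURL = try MLModel.compileModel(at: sourceURL)
        let model = try MLModel(contentsOf: compiledURL)

        // Verify the inputs and outputs match what the app expects.
        print("Inputs:", model.modelDescription.inputDescriptionsByName.keys.sorted())
        print("Outputs:", model.modelDescription.outputDescriptionsByName.keys.sorted())
    } catch {
        print("Model failed the local check:", error)
    }
}

// Example usage with an assumed local path.
smokeTestModel(at: URL(fileURLWithPath: "/path/to/FlowerClassifier.mlmodel"))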
Next, be aware that the models you deploy won't be available on devices right away. Each device's system decides when to download the model in the background. Therefore, your app should always have a fallback plan when the deployed models aren't available.
Lastly, model collections provide a convenient way to organize models. We recommend grouping your model collections around each of your app's features. So, to summarize what we've shown you: Core ML Model Deployment is a new way to deliver models to your users, independent of app updates. Model collections give you a new way to group and work with your models, and targeted deployments are super helpful when you want to deploy models to separate populations. We think these features will give you more flexibility and make it more convenient to deliver models to your users. I'll hand it over to John Durant, and he can tell us more about some other exciting features we have for you this year.
Thank you, Anil. Hi, everyone, and thank you for joining us. We're really excited to share these features with you, and we can't wait to see the amazing apps you'll build with them. So far we've covered some new and faster ways to get your models to your users. Now let's take a look at how you can encrypt those models. With model encryption, you can distribute your models, knowing that they are encrypted at rest and in transit. Let's take a look at what that means and how it works. Whether your model is bundled with your app or deployed to the cloud, what Xcode actually encrypts is the compiled Core ML model. This compiled model, not the original .mlmodel file, is what actually lives inside your app on a user's device. So, how do you encrypt this compiled model? Easy. With Xcode. The first thing you'll need is an encryption key. I'll show you how to make a key in a few minutes. With the key, you can tell Xcode to encrypt your model at build time. Or you can use a key to encrypt a model archive when you generate it. Either way, the compiled model stays encrypted in transit and at rest on a user's device. So, how do you use an encrypted model at runtime? Well, you use an encrypted model just as you would a regular one. Core ML automatically decrypts the model for you, but the decrypted model only exists in memory. The compiled model in the file system remains encrypted at all times.
The first time you load your encrypted model, the OS securely fetches and securely stores the model's decryption key on your app's behalf. After that, your app won't need a network connection to load the same model again. Let's jump in to Xcode and see just how easy it is to adopt model encryption. I'll go back to the same app that Anil was using before. Let's say I want to encrypt our image classification model. The first thing I need to do is create an encryption key. To do that, I'll open up the model in Xcode, and I'm gonna go over here to the utilities tab and I'm gonna click on "Create Encryption Key." What we see here is that Xcode associates the encryption key with a team, and so it's important to choose the same team account that you use for building your app. When I click on Continue, Xcode sends a request to Apple cloud which generates a new encryption key. The server securely stores the key and sends a copy back to Xcode. So now we can see that Xcode generated a new file with extension .mlmodelkey, and it's dropped it right next to my Core ML model. So here we can see we have the .mlmodel and the .mlmodelkey.
Okay, great. Now we have a key. Now we can take this key and share it on our team's secure key repository so that other developers on our team can use it.
Now that we have a key, we can use it to encrypt our model. First, let's look at the scenario Anil showed earlier, where he deployed the model with the Core ML Model Deployment. This time, when I go create a model archive, Xcode pre-selects the .mlmodelkey sitting beside my model. Let's click Continue, and we'll see that Xcode generates a model archive just like before, but this time it encrypts the contents of the archive. The remaining steps are exactly the same as before. I can simply deploy this encrypted model archive by uploading it to the Core ML Model Deployment dashboard as Anil did earlier. Okay, so that's great for deployments. But what if I wanted to bundle the model within the app itself? Well, there's a way to encrypt those models too. In this app, I have a second model called FlowerStylizer.
I'll run the app again so we can see what this does. When I pick an image and then double-tap it, we see that a cool style transfer effect gets applied. Looking good! If you'd like to build one of these models yourself, you can check out the session called "Build Image and Video Style Transfer Models in Create ML." Now this model is packaged with my app bundle. If I want to encrypt it, I have to generate a key for it and then tell Xcode to encrypt the model at build time. First, I'll generate a key for this model.
We see that FlowerStylizer.mlmodelkey has been saved to disk.
Now what I need to do is tell Xcode to use this key for encryption at build time.
I'm gonna go ahead and grab these compiler flags. I'm gonna go over to my project properties, Build Phases, Compile Sources, and I'm going to look for my model. There it is. I've got FlowerStylizer.mlmodel. And I'm gonna go over here to Compiler Flags.
And I'm going to go ahead and paste in --encrypt, followed by the path to the .mlmodelkey file on disk.
Once we've done that, we can go ahead and build again.
So now, each time when I build the app, the Core ML Compiler will compile the model and then encrypt it with the model's encryption key. That means the app now has this encrypted, compiled model built in. Okay, so now let's look at how we can load the encrypted model.
If we go over here, we see we have a function called stylizeImage(). Now, since model loading needs a network connection the first time, we've introduced an asynchronous model loading method. That's right here: FlowerStylizer.load. We're going to deprecate the default initializer and we strongly recommend switching to the new .load method, as it gives you an opportunity to handle model load errors.
Load automatically fetches the key from Apple cloud and works just like you'd expect it to. Even though Core ML needs network access to get the key the first time, it won't ever need to fetch that key again. For example, let's say you close your app and later launch it again. This time, Core ML doesn't make a network request because the OS securely stored the key locally.
You can see we're getting back a Result that contains the model, and we can unwrap it in the success case, at which point we have our model and we can continue stylizing as before, or we also have a failure case. In the failure case, we have a helper method and you'll notice here we specifically trap for a modelKeyFetch error. And in that case, we may want to let the user know that they need network connectivity or to try again later. And that's it. Adopting model encryption really is that simple. So to recap, you can use model encryption in three easy steps. First, generate an encryption key with Xcode. Next, encrypt your model with the encryption key. And finally, load and use the encrypted model on the device. Create, encrypt, load. It's that simple.
Model encryption works seamlessly whether you bundle a model in your app or send one through a Model Deployment. All you have to do is create a key, encrypt your model, and call .load. We'll handle the rest of it so you can focus on making a great app. Before we finish, we have a few more exciting Xcode updates for Core ML that we'd like to share with you.
During our demos, you may have noticed that Xcode now shows you a lot more information about your model. At a glance, you can now see which OS versions a model supports, its class labels, and even some internal neural network details. Another new feature we think is particularly useful, not to mention fun, is the interactive model preview.
Now you can interactively preview and experiment with your model directly in Xcode, before you write a single line of code. We support previews for a wide variety of models, such as image segmentation, pose detection, depth estimation, and many more, including all the models you can train with Create ML. The full list of preview types is listed here, and we encourage you to explore the models available on developer.apple.com.
We are also happy to announce that Core ML models are now first-class citizens in Xcode Playgrounds. Simply drag and drop a Core ML model into your resources folder. Now your Playground gets the same auto-generated class interface as an Xcode project. This allows you to programmatically experiment with your model in a live, interactive coding session. Playgrounds are also a great way to share your model demonstrations with friends and colleagues.

We've covered a lot of ground today, so let's do a quick recap. We introduced Core ML Model Deployment, which allows you to deliver Core ML models independently from your app. This makes it really easy to quickly distribute and target your models as you improve them. We think this will greatly accelerate the app development process and how quickly you can adopt machine learning.
We also introduced model encryption, which protects your models in transit and at rest, without having to set up your own hosting and key management. Now you can encrypt Core ML models, whether they ship with your app or as part of a Model Deployment. And finally, we added some Core ML enhancements to Xcode, making it easier for you to understand, preview and interact with your models. Thank you for watching our session and enjoy the rest of WWDC.
-
4:34 - Flower Classifier using Core ML Model Deployment
private func classifyFlower(in image: CGImage) {
    // Check for a loaded model
    if let model = flowerClassifier {
        classify(image, using: model)
        return
    }

    MLModelCollection.beginAccessing(identifier: "FlowerModels") { [self] result in
        var modelURL: URL?

        switch result {
        case .success(let collection):
            modelURL = collection.entries["FlowerClassifier"]?.modelURL
        case .failure(let error):
            handleModelCollectionFailure(for: error)
        }

        let loadResult = loadFlowerClassifier(from: modelURL)

        switch loadResult {
        case .success(let model):
            // Keep the loaded model around for subsequent classification requests.
            flowerClassifier = model
            classify(image, using: model)
        case .failure(let error):
            handleModelLoadFailure(for: error)
        }
    }
}

func loadFlowerClassifier(from modelURL: URL?) -> Result<FlowerClassifier, Error> {
    if let modelURL = modelURL {
        // Load the model downloaded via Model Deployment.
        return Result { try FlowerClassifier(contentsOf: modelURL) }
    } else {
        // Fall back to the model bundled with the app.
        return Result { try FlowerClassifier(configuration: .init()) }
    }
}
-
20:03 - Compiler flag for encrypting a model
--encrypt "$SRCROOT/HelloFlowers/Models/FlowerStylizer.mlmodelkey"
-
20:50 - Working with an encrypted model
func stylizeImage() {
    // If we already loaded the model, apply the effect
    if let model = flowerStylizer {
        applyStyledEffect(using: model)
        return
    }

    // Otherwise load and apply
    FlowerStylizer.load { [self] result in
        switch result {
        case .success(let model):
            flowerStylizer = model
            DispatchQueue.main.async {
                applyStyledEffect(using: model)
            }
        case .failure(let error):
            handleFailure(for: error)
        }
    }
}

func handleFailure(for error: Error) {
    switch error {
    case MLModelError.modelKeyFetch:
        handleNetworkFailure()
    default:
        handleModelLoadError(error)
    }
}
-