Make your own custom layer for Core ML models.
- Core ML
When your neural network layer isn't supported by Core ML, you can create a custom layer by converting the layer in the model to Core ML and implementing the backing classes that define the computational behavior of the layer.
Convert Your Network to Core ML
Use the standard Core ML Tools to convert your network with custom layers to Core ML. Enable the add_custom_layers flag in the conversion call to avoid failing when the converter encounters a layer it doesn't recognize (your custom layer). A placeholder layer named "custom" will be inserted as part of the conversion process. Listing 1 shows a sample invocation of the script.
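A conversion call along these lines is a reasonable sketch of that invocation. It assumes the coremltools Keras converter; the model, input, output, and file names are placeholders, not values from this article:

```python
# Sketch of converting a Keras model that contains an unsupported layer.
# Assumes the coremltools Keras converter; all names here are placeholders.
def convert_with_custom_layer(keras_model):
    import coremltools

    coreml_model = coremltools.converters.keras.convert(
        keras_model,
        input_names=['input'],
        output_names=['output'],
        # Insert a placeholder "custom" layer instead of failing on
        # layers the converter doesn't recognize.
        add_custom_layers=True,
    )
    return coreml_model
```

The returned model still needs its custom-layer parameters, weights, and class name filled in before it is saved, as described below.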
Migrate Network Parameters and Weights
The parameters for the network are a dictionary stored in the parameters field of the custom layer. When your implementation class is initialized, these parameters will be passed in to your custom class implementation to customize the initialization.
To migrate your weights, create a new array of weights in your custom layer, then copy the weights from the original source layer into the newly created array.
The weights are stored in a binary format that is optimized for large amounts of data. The parameter dictionary is the opposite: convenient to access, but not appropriate for large amounts of data. Keep that tradeoff in mind when deciding whether to represent the custom layer's data as parameters or as weights.
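To illustrate the tradeoff, the framework-neutral Python sketch below (the names and values are invented for illustration) keeps small configuration values in a parameters dictionary and serializes a weight array as compact binary data:

```python
import struct

# Hypothetical split of a layer's data: small scalars go in the
# parameters dictionary, large arrays go in binary weight data.
parameters = {'name': 'scale_layer', 'scale': 2.0}  # small, convenient to access

weights = [0.1, -0.4, 0.25, 0.9]  # stands in for a large weight array
# Pack the weights as little-endian 32-bit floats: the kind of compact
# binary representation that weight storage is optimized for.
weight_data = struct.pack('<%df' % len(weights), *weights)

# Reading the weights back requires knowing the binary layout.
unpacked = list(struct.unpack('<%df' % len(weights), weight_data))
```

Four 32-bit floats occupy 16 bytes, while the same values in a dictionary would carry per-entry overhead, which is why large arrays belong in weight data.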
Name the Layer
Before saving your model, define the name of the custom layer. This will also be the name of your Swift or Objective-C class that implements the behavior of your custom layer.
Save the Converted Model
Save the converted model to create a .mlmodel file. The Xcode view of your model shows a set of dependencies for your model, as shown in Integrating Custom Layers. The list matches the custom layer class names that you added to the model during conversion.
Implement the Layer
The list of dependencies should contain the class name you defined when converting your model, as shown in Listing 3. Create your class and make it conform to the MLCustomLayer protocol by implementing the methods described below.
- init(parameters:) to initialize your layer as appropriate. This method will be invoked once at load time, with the parameters dictionary from your model.
- setWeightData(_:) to configure the connection weights of the layer. This method will be invoked once at load time, after initialization, with the weights you migrated into your model.
- outputShapes(forInputShapes:) to declare the shapes of the output for the given input shapes to your layer. This method will be invoked at model load time and again any time the input shapes to the layer change.
- evaluate(inputs:outputs:) to define the computational behavior for your custom layer. This method will be invoked each time your model makes a prediction on the CPU.
- encode(commandBuffer:inputs:outputs:) if you want the layer to be eligible to run on the GPU. Implementing this method doesn't guarantee execution on the GPU.
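To make that lifecycle concrete, here is a framework-neutral Python sketch of the same steps. The real implementation is a Swift or Objective-C class conforming to MLCustomLayer; the scaling layer and every name below are invented for illustration:

```python
class ScaleLayer:
    """Toy stand-in for a custom layer: multiplies each input by a scale."""

    def __init__(self, parameters):
        # Mirrors init(parameters:): configure from the parameters dictionary.
        self.scale = parameters.get('scale', 1.0)
        self.weights = []

    def set_weight_data(self, weights):
        # Mirrors setWeightData(_:): receive the migrated weights.
        self.weights = list(weights)

    def output_shapes(self, input_shapes):
        # Mirrors outputShapes(forInputShapes:): an elementwise layer
        # produces outputs with the same shapes as its inputs.
        return input_shapes

    def evaluate(self, inputs):
        # Mirrors evaluate(inputs:outputs:): the CPU compute path.
        return [[x * self.scale for x in tensor] for tensor in inputs]


layer = ScaleLayer({'scale': 2.0})
layer.set_weight_data([1.0, 1.0])
shapes = layer.output_shapes([(1, 3)])
outputs = layer.evaluate([[1.0, -2.0, 0.5]])
```

The shape method matters even for a trivial layer like this one: Core ML calls it at load time, before any data flows, so it must be derivable from shapes alone.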
Integrate the Layer
The prediction workflow is the same as it is for a model without any custom layers. With the MLCustomLayer protocol implemented, the same
prediction(from:) methods work with your model. Compare your custom Core ML model against your original implementation for accuracy. Use the test cases you used when creating your network to verify the behavior of the Core ML model.
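One way to run that comparison is an elementwise tolerance check, sketched below. The tolerance and the sample values are placeholders; in practice the two vectors would be the outputs of your original network and of the converted Core ML model on the same test input:

```python
def outputs_match(reference, candidate, tolerance=1e-4):
    """Compare two flat output vectors elementwise within a tolerance."""
    if len(reference) != len(candidate):
        return False
    return all(abs(r - c) <= tolerance for r, c in zip(reference, candidate))


# Example: reference outputs from the original network vs. the converted model.
ref = [0.12, 0.88, 0.00]
got = [0.12001, 0.87999, 0.00002]
```

Exact equality is the wrong test here: conversion and float precision introduce small differences, so a tolerance chosen from your accuracy requirements is the practical criterion.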