Problems with CICoreMLModelFilter

I'm trying to integrate my neural style transfer models into Core Image using the new CICoreMLModelFilter. It works, but I noticed a few problems with the filter:
  • Aside from Session 719 of this year's WWDC, it isn't mentioned anywhere in the documentation.
  • It leaks a lot of memory with every call to outputImage. I dug a little deeper into the Memory Graph and found that a new CIPredictionModel, along with a heavy IOSurface and various other objects, is created on each call and never released.
  • It just uses scale-to-fit on the input image to match the MLModel's input size—not the smart scaling, cropping, and resizing that Vision performs when working with MLModels.
  • It can't handle flexible model input sizes. Even when specifying allowed ranges for model input dimensions, the filter will always scale the input to the model's designated input size.


As an alternative, I wrote my own CIImageProcessorKernel in just a few lines of code that wraps a Core ML model and does the same job without the issues mentioned above.
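For anyone looking to do the same, here is a rough sketch of such a kernel. The model name, the feature names ("image" and "stylizedImage"), and the copy step at the end are assumptions for illustration—adapt them to your model's actual interface:

```swift
import CoreImage
import CoreML

/// Sketch of a CIImageProcessorKernel that wraps a Core ML style-transfer model.
/// Model file name and feature names below are hypothetical placeholders.
final class StyleTransferProcessor: CIImageProcessorKernel {

    // Load the compiled model once and reuse it across invocations,
    // avoiding the per-call allocations seen with CICoreMLModelFilter.
    static let model: MLModel = {
        let url = Bundle.main.url(forResource: "StyleTransfer",
                                  withExtension: "mlmodelc")!
        return try! MLModel(contentsOf: url)
    }()

    // Request BGRA buffers so they can be handed to Core ML directly.
    override class var outputFormat: CIFormat { .BGRA8 }
    override class func formatForInput(at input: Int32) -> CIFormat { .BGRA8 }

    override class func process(with inputs: [CIImageProcessorInput]?,
                                arguments: [String: Any]?,
                                output: CIImageProcessorOutput) throws {
        guard let inputBuffer = inputs?.first?.pixelBuffer,
              let outputBuffer = output.pixelBuffer else { return }

        // Run the model on the input buffer; "image" is the assumed
        // name of the model's image input.
        let features = try MLDictionaryFeatureProvider(
            dictionary: ["image": MLFeatureValue(pixelBuffer: inputBuffer)])
        let prediction = try Self.model.prediction(from: features)

        // "stylizedImage" is the assumed name of the model's image output.
        guard let result = prediction
            .featureValue(for: "stylizedImage")?.imageBufferValue else { return }

        // Copy `result` into `outputBuffer` here, e.g. via CIContext
        // rendering or vImage; omitted in this sketch.
        _ = (result, outputBuffer)
    }
}

// Usage (sketch):
// let stylized = try StyleTransferProcessor.apply(
//     withExtent: image.extent, inputs: [image], arguments: nil)
```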


I wish Apple would provide some documentation on how to use CICoreMLModelFilter properly.