Is it possible to support a user input for style via Style Transfer?

Lots of popular web apps (e.g. deepdreamgenerator.com) support Style Transfer in a way that allows the user to upload an image to be used as the "input style".

Based on the current Style Transfer model creation flow in Create ML, it seems that you can only train a model on one specific input style. A model that accepts arbitrary style inputs doesn't seem possible from the Core ML interface.

Is there a way to do it?

Maybe I would just need to download the deep-dream-style-transfer model that accepts any style and convert it into a Core ML model?

Replies

I don't think the Create ML Style Transfer model supports applying a custom style at runtime (i.e., at inference time). However, there are plenty of good style-transfer models built outside Create ML that do have that capability, and Apple offers the coremltools Python package for converting models from other formats so that they work with Core ML. You'll probably have the easiest time converting a pre-trained TensorFlow model, but make sure you can get the input images into the right format: models that weren't designed for Core ML often require extra preprocessing.