I've converted a model from Caffe and I'm trying to run a prediction like this:
import numpy as np
e = np.zeros((1,3,224,224))
d = {}
d['data'] = e
r = coreml_model.predict(d)
And I get:
RuntimeError: {
NSLocalizedDescription = "The model expects input feature data to be an image, but the input is of type 5.";
}
Any ideas?
Thanks!
For images, Core ML uses Python's Pillow library (pip install Pillow). Here is a code snippet that should work for you:
import coremltools
# Load an image using PIL
from PIL import Image
rose = Image.open('rose.jpg')
coreml_model.predict({'data': rose})
An example of doing this is also available in the Session 710 video (https://developer.apple.com/videos/play/wwdc2017/710/).
Note that in your code above, you were passing a NumPy array, which gets converted to an MLMultiArray in Core ML.
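If you already have your data as a NumPy array in the (1, 3, 224, 224) layout shown in the question, one way to satisfy an image input is to convert it to a PIL Image first. A minimal sketch (the array name and RGB/uint8 assumptions are illustrative, not from the original post):

```python
import numpy as np
from PIL import Image

# Hypothetical array in the (batch, channel, height, width) layout
# from the question above
arr = np.zeros((1, 3, 224, 224), dtype=np.float32)

# Drop the batch dimension and move channels last -> (224, 224, 3),
# which is the layout PIL expects
hwc = np.transpose(arr[0], (1, 2, 0))

# PIL wants uint8 pixel values in [0, 255]; the cast here assumes the
# array already holds values in that range
img = Image.fromarray(hwc.astype(np.uint8), mode='RGB')

# img can then be passed where the model expects an image:
# coreml_model.predict({'data': img})
```

This sidesteps the "input is of type 5" error because the model now receives a PIL Image rather than an MLMultiArray.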