Flexible Input Shapes of Core ML Model

I want to try a Core ML model that accepts image input at any resolution, so I wrote the conversion following the Core ML Tools "Set the Range for Each Dimension" sample code, modified as below:

import torch
import coremltools as ct

# Trace the model with a random input; `model` is the torch.nn.Module being converted.
example_input = torch.rand(1, 3, 50, 50)
traced_model = torch.jit.trace(model.eval(), example_input)

# Set the input_shape to use a RangeDim for the height and width dimensions,
# so the model accepts any resolution from 25x25 up to 1920x1920.
input_shape = ct.Shape(shape=(1,
                              3,
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45),
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45)))

# Standard ImageNet normalization, expressed as Core ML preprocessing.
scale = 1 / (0.226 * 255.0)
bias = [-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225]

# Convert the traced model with the flexible input_shape.
mlmodel = ct.convert(traced_model,
                     inputs=[ct.ImageType(shape=input_shape, name="input", scale=scale, bias=bias)],
                     outputs=[ct.TensorType(name="output")],
                     convert_to="mlprogram",
                     )

# Save the Core ML model.
mlmodel.save("image_resize_model.mlpackage")

The conversion succeeds, but when I run predict() on the model with an image, I get the error below:

You will not be able to run predict() on this Core ML model. Underlying exception message was: {
    NSLocalizedDescription = "Failed to build the model execution plan using a model architecture file '/private/var/folders/8z/vtz02xrj781dxvz1v750skz40000gp/T/model-small.mlmodelc/model.mil' with error code: -7.";
}

Where did I go wrong?
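
For reference, the prediction that fails is a plain predict() call with a PIL image (a minimal sketch; the file name and size here are placeholders):

import coremltools as ct
from PIL import Image

# Load the saved model and a test image at a size within the declared range.
mlmodel = ct.models.MLModel("image_resize_model.mlpackage")
img = Image.open("test.png").resize((300, 300))

# For an ImageType input, predict() takes a PIL image keyed by the input name.
result = mlmodel.predict({"input": img})
print(result["output"])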

Hello, could you please file a radar with the torch model code?
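
In the meantime, one workaround you could try is converting with a fixed set of enumerated shapes instead of a continuous range, which exercises a different runtime path. A minimal sketch, reusing the traced model and preprocessing constants from above; the specific resolutions are placeholders, so pick the sizes you actually need:

import coremltools as ct

# Enumerate the exact input sizes the model should accept instead of a range.
input_shape = ct.EnumeratedShapes(
    shapes=[[1, 3, 256, 256], [1, 3, 512, 512], [1, 3, 1024, 1024]],
    default=[1, 3, 512, 512],
)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.ImageType(shape=input_shape, name="input", scale=scale, bias=bias)],
    outputs=[ct.TensorType(name="output")],
    convert_to="mlprogram",
)
mlmodel.save("image_enumerated_model.mlpackage")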
