Core ML: Big diff after converting caffemodel to mlmodel.

Hi Guys,


I am working on a model that detects upper and lower clothing regions in an image. The model is based on GoogLeNet, trained in Caffe, and tested on iOS.

With the same input image, I run prediction with both Caffe (on iOS) and Core ML.

Caffe result:   0.347753 0.163852 0.633884 0.435454 0.334608 0.445311 0.633349 0.808979 1.015382 1.028805

Core ML result: 0.349121 0.178467 0.608398 0.438965 0.345459 0.406006 0.616211 0.743652 1.000977 0.957520

The output is 10 floats; the first 8 are point coordinates expressed as ratios of the image width/height. So in relative terms the diff is very large (almost 9% on the second value)!
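For reference, here is how I quantify the gap, just plain NumPy on the two result vectors above, nothing Core ML-specific:

```python
import numpy as np

caffe_out  = np.array([0.347753, 0.163852, 0.633884, 0.435454, 0.334608,
                       0.445311, 0.633349, 0.808979, 1.015382, 1.028805])
coreml_out = np.array([0.349121, 0.178467, 0.608398, 0.438965, 0.345459,
                       0.406006, 0.616211, 0.743652, 1.000977, 0.957520])

rel_err = np.abs(coreml_out - caffe_out) / np.abs(caffe_out)
print(rel_err)        # per-element relative error
print(rel_err.max())  # worst case is ~8.9% (the second coordinate)
```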


Caffe is the most popular ML framework in the computer vision field; to use caffemodels with Core ML, the conversion has to preserve enough precision.

To use Core ML, I must eliminate the diff!

1. Is there a tool with which I can expose all the weight parameters in an mlmodel?

I want to compare all the weight parameters in the caffemodel and the mlmodel to check whether the conversion is correct. As you know, coremltools uses libcaffeconverter, which is not open source.
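For the mlmodel side this is actually possible without a special tool: the .mlmodel file is a protobuf whose spec coremltools can load, and the neural-network layers store their weights in plain repeated-float fields. Below is a minimal sketch of the comparison I have in mind. It assumes pycaffe is installed, that the converter kept the Caffe layer names, and that the weight flattening order matches; the file names are placeholders:

```python
import numpy as np
import caffe          # pycaffe
import coremltools

# Placeholder file names -- substitute your own.
PROTOTXT   = 'deploy.prototxt'
CAFFEMODEL = 'model.caffemodel'
MLMODEL    = 'model.mlmodel'

# Caffe weights, keyed by layer name (blob 0 = weights, blob 1 = bias).
net = caffe.Net(PROTOTXT, CAFFEMODEL, caffe.TEST)
caffe_w = {name: blobs[0].data.ravel() for name, blobs in net.params.items()}

# The .mlmodel is a protobuf; its spec (including all weights) is readable.
spec = coremltools.utils.load_spec(MLMODEL)
nn = getattr(spec, spec.WhichOneof('Type'))  # neuralNetwork / ...Classifier / ...Regressor

for layer in nn.layers:
    kind = layer.WhichOneof('layer')
    if kind == 'convolution':
        w = np.asarray(layer.convolution.weights.floatValue)
    elif kind == 'innerProduct':
        w = np.asarray(layer.innerProduct.weights.floatValue)
    else:
        continue
    # Assumes the converter kept Caffe layer names and the same
    # (out, in, kh, kw) flattening; verify on one known layer first.
    if layer.name in caffe_w and caffe_w[layer.name].size == w.size:
        diff = np.abs(caffe_w[layer.name] - w).max()
        print('%-24s max|dw| = %.3g' % (layer.name, diff))
```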

2. Are there methods to debug Core ML?

For developers outside Apple, Core ML is a black box. Can we check every layer's input and output to find which layer caused the big diff?
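One partial workaround I can imagine, since the spec is editable: append an intermediate blob to the model's outputs and re-save, so that predict() (macOS only) returns that layer's output alongside the final result. A rough sketch, where 'conv1', 'image', and the model path are placeholders for your own names:

```python
import coremltools
from coremltools.proto import FeatureTypes_pb2

spec = coremltools.utils.load_spec('model.mlmodel')  # placeholder path

# Expose an intermediate blob as an extra model output. 'conv1' is a
# placeholder -- use any blob name that appears in the converted network.
out = spec.description.output.add()
out.name = 'conv1'
out.type.multiArrayType.dataType = FeatureTypes_pb2.ArrayFeatureType.DOUBLE
# Depending on the coremltools version, you may also have to fill in
# out.type.multiArrayType.shape for the model to validate.

debug_model = coremltools.models.MLModel(spec)
debug_model.save('model_debug.mlmodel')

# predict() runs the real Core ML engine (macOS only), so this shows the
# same per-layer values iOS would compute:
# print(debug_model.predict({'image': pil_image})['conv1'])
```

Comparing these against `net.blobs['conv1'].data` from pycaffe on the same input should localize the first layer where the two frameworks drift apart.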


I hope the developers of the Core ML module see this request and can help me debug the issue. ^_^


Thanks,

Yanghua

Was there any feedback on this? I have converted a model to Core ML and written code to compute precision/recall/F1. All are at least 0.5% lower than with the original Caffe model. Tools to help compare weights would be appreciated.
