CoreML batch prediction

Hello everyone,


I was wondering if it is possible, with CoreML, to make a prediction on a batch of images.

For example, my model's prediction method only takes a single CVPixelBufferRef as input, not an array.


Machine learning frameworks like Keras or Dlib allow that, and predicting an array of data is faster than looping and predicting each value individually.


Does anyone know if that is possible?



Thanks

It should be possible because Core ML allows for batches, but I haven't tried it myself and I don't know how you'd need to convert the CVPixelBuffers into a batch.
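
That said, here is a rough sketch of how the conversion could look, assuming the model's image input is named "image" (use whatever feature name your .mlmodel actually declares):

```swift
import CoreML
import CoreVideo

// Hypothetical helper: wraps an array of CVPixelBuffers into an MLBatchProvider.
// "image" is an assumed input feature name -- replace it with the name from
// your model's description.
func makeBatch(from pixelBuffers: [CVPixelBuffer]) throws -> MLBatchProvider {
    let providers = try pixelBuffers.map { buffer -> MLFeatureProvider in
        // Each pixel buffer becomes its own feature provider in the batch.
        let value = MLFeatureValue(pixelBuffer: buffer)
        return try MLDictionaryFeatureProvider(dictionary: ["image": value])
    }
    return MLArrayBatchProvider(array: providers)
}
```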


However... predicting a batch of data is not necessarily faster than predicting a single image. It may be faster _on average_ per image. For example, if predicting a single image takes 100 milliseconds, a batch of 5 images may take 450 milliseconds instead of 500, since there is less overhead.


But on mobile we often want to run predictions in real-time, in which case batching is not suitable. So it depends on your use case whether batching makes sense: for offline processing of a video you might get a (small) speed boost from batching, but for live video you don't want to use it.

Hello and thanks for your answer.


Yes, by "faster", I meant on average.


Indeed, I'm trying to process real-time video, but I'm extracting square(s) from each frame and running each square through my NN, so it must be as fast as possible.

Hi Polpot16,


Please take a look at the MLBatchProvider API, which may suit your needs.


Thanks!
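
For reference, once you have an MLBatchProvider (built e.g. as in the earlier sketch), running the whole batch through the model in one call might look roughly like this; the function name is just a placeholder:

```swift
import CoreML

// Sketch: run one prediction call over an entire batch and unpack the results.
// Batch prediction needs a recent OS version (check the MLModel docs for availability).
func predictBatch(model: MLModel, batch: MLBatchProvider) throws -> [MLFeatureProvider] {
    let results = try model.predictions(from: batch, options: MLPredictionOptions())
    // The result is itself an MLBatchProvider; pull out each item's output features.
    return (0..<results.count).map { results.features(at: $0) }
}
```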
