Deep Learning on Mac - M1 Chips

Can I run inference on the new MacBook Pro with M1 chips (Apple Silicon) using Keras models (and sometimes PyTorch)? These would be computer vision models, some with custom loss functions or metrics, trained on, let's say, Google Colab.
If I can perform inference, how do I do it?
Also, will the Neural Engine help while performing inference, and will it speed up training if I have to train on the Mac?

  • *trained on, let's say, Google Colab on GPUs

Both TF and PyTorch allow inference and training on the CPU in Python code during development. However, only TF has GPU support on Apple Silicon at the moment - see the link above provided by @ramaprv for a discussion of GPU support in PyTorch.
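To illustrate the CPU path during development, here is a minimal sketch of Keras inference pinned to the CPU. It assumes TensorFlow 2 is installed; the tiny model is a stand-in for one trained on Colab (for models with custom losses or metrics, load yours with `compile=False`, since inference does not need the loss):

```python
# Minimal sketch: CPU inference with a Keras model during development.
# The tiny model below is a placeholder for a real trained network
# (e.g. loaded via tf.keras.models.load_model("model.h5", compile=False)).
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Pin the forward pass to the CPU explicitly.
with tf.device("/CPU:0"):
    preds = model(np.random.rand(4, 32, 32, 3).astype("float32"))

print(preds.shape)  # (4, 10)
```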

For inference on iOS, iPadOS, and macOS, you will probably be interested in Apple's Core ML Tools project on GitHub, which converts trained models (TensorFlow, PyTorch, and others) into the Core ML format:

https://github.com/apple/coremltools

It is my understanding that the Core ML runtime decides when to use the Neural Engine, but I'm no expert!

Based on what I see in TG Pro, training doesn't appear to use the ANE with tensorflow-metal; it only uses the GPU. If I run a Core ML test and set MLModelConfiguration's computeUnits to .all, it does use the ANE.
