M1 GPU is extremely slow, how can I enable CPU to train my NNs?

Hi everyone,

I found that the GPU performance is not as good as I expected (as slow as a turtle), so I want to switch from the GPU to the CPU, but the mlcompute module cannot be found, which is so weird.

The same code takes 156 s per epoch on Colab versus about 40 minutes per epoch on my computer (JupyterLab).

I only used a small dataset (a few thousand data points), and each epoch only has 20 batches.

I am so disappointed; it seems like the "powerful" GPU is a joke.

I am using macOS 12.0.1 and tensorflow-macos 2.6.0.

Can anyone tell me why this happens?


Replies

It seems like a small batch size reduces GPU performance (https://developer.apple.com/forums/thread/685623), so I increased the batch size from 256 to 1024, which reduced the running time from 40 minutes to 10 minutes per epoch. However, one epoch still takes only around 2 minutes on the CPU.

I am so confused now; it seems like I would need to increase the batch size from 1024 to 1024 * 5 just to get the running time down to 2 minutes per epoch...
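
For reference, here is a minimal sketch of where the batch size is set in a Keras training run; the toy model, toy data, and the 1024 value are placeholders for illustration, not the poster's actual code. Larger batches generally keep the GPU busier per step, at the cost of fewer steps per epoch.

    import numpy as np
    import tensorflow as tf

    # Toy data standing in for the real dataset (a few thousand points).
    x = np.random.rand(4096, 32).astype("float32")
    y = np.random.randint(0, 2, size=(4096,)).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # With batch_size=1024 this dataset yields only 4 steps per epoch.
    model.fit(x, y, batch_size=1024, epochs=3)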

Update: I found the M1 chip is extremely slow on LSTMs compared with CNNs.


Update: I ran exactly the same LSTM code on a MacBook Pro (M1 Pro) and a MacBook Pro (2017). It turns out the M1 Pro takes 6 hours for one epoch, while the 2017 model needs only 158 s.

I ran pip uninstall tensorflow-metal and I get CPU acceleration again!

An alternative to uninstalling tensorflow-metal is to disable GPU usage. This is a copy-paste from my other post...

To disable the GPU completely on the M1, use tf.config.experimental.set_visible_devices([], 'GPU'). To disable the GPU for certain operations, use:

with tf.device('/cpu:0'):
    # TensorFlow calls placed inside this block run on the CPU
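
For anyone who wants a complete, copy-pasteable version, here is a minimal sketch of the hide-the-GPU approach (assuming tensorflow-macos with tensorflow-metal installed; the sanity-check print is just illustrative):

    import tensorflow as tf

    # Hide the GPU before any ops are created; everything, including Keras
    # training, then falls back to the CPU.
    tf.config.experimental.set_visible_devices([], 'GPU')

    # Sanity check: no GPU should appear among the logical devices.
    print(tf.config.list_logical_devices())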
  • How do you check whether you are actually working on CPU then?

    I did:

    with tf.device('/cpu:0'): print(tf.config.list_physical_devices('GPU'))

    and got the output: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
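
Note that tf.config.list_physical_devices('GPU') only reports which devices exist on the machine, not where ops actually run, so it will still list the GPU even inside a tf.device('/cpu:0') block. One way to verify actual placement (a minimal sketch, assuming TensorFlow 2.x) is device-placement logging:

    import tensorflow as tf

    # Log the device each op is placed on; must be enabled before ops execute.
    tf.debugging.set_log_device_placement(True)

    with tf.device('/cpu:0'):
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.matmul(a, a)  # the log should show /device:CPU:0 for this op

    print(b)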


What is the point of having a "GPU"? My Mac Studio M1 Ultra (20-core CPU, 64-core GPU) is dead slow while training, slower than even my 13" MacBook Pro (2017) for the same code and the same data points!!! What is going on? Please see the history:

Using the CPU isn't a solution. It is just a workaround.

LSTM takes 3 hours per epoch on the GPU and 3 minutes on the CPU.

I am extremely frustrated.

  • I agree with you. Otherwise, what is the point of having such an "extraordinary GPU" that can beat an RTX 3090?

    I have been stuck for the last few weeks because of a memory-leak issue (related to the GPU), and the GPUs are dead slow. Not only that, when the leak reaches ~125 GB out of 128 GB on my Mac Studio, training simply stops!!! I am utterly frustrated and disgusted!!! I should have gone with an Intel machine and a decent GPU instead of paying a hefty price for this "hyped GPU" and TF-Metal. :-(
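
Not a fix for the leak itself, but one way to confirm how fast memory grows during training is a small Keras callback that logs the process's resident memory after each epoch (a sketch assuming psutil is installed; MemoryLogger is just an illustrative name):

    import os
    import psutil
    import tensorflow as tf

    class MemoryLogger(tf.keras.callbacks.Callback):
        """Print the training process's resident memory after every epoch."""
        def on_epoch_end(self, epoch, logs=None):
            rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1e9
            print(f"epoch {epoch}: process RSS = {rss_gb:.2f} GB")

    # Usage: pass it to fit, e.g.
    # model.fit(x, y, epochs=10, callbacks=[MemoryLogger()])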


I have an M2 Max here (2023). I tried to run inference (one sample at a time, no batching) using the Hugging Face "distilbert-base-cased" model (after fine-tuning on my dataset). It runs at 10 it/s in the beginning, but after a few minutes GPU utilization drops to less than 1%, and now it takes >1 s per iteration! That's a huge disappointment. I don't know what I have done wrong. I tried turning on an external fan, thinking it might be thermal throttling, but utilization never goes back up.

How can I debug this?
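
One way to narrow this down is to batch the inputs and time the loop yourself, so you can see whether per-sample throughput really degrades over time independent of the GPU-utilization readout. A rough sketch under that assumption; the model name, texts, and batch size are placeholders, and from_pretrained should point at the fine-tuned checkpoint:

    import time
    from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

    # Replace with the path to the fine-tuned checkpoint.
    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
    model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-cased")

    texts = ["example sentence"] * 4096   # stand-in for the real data
    batch_size = 64

    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        enc = tokenizer(batch, padding=True, truncation=True, return_tensors="tf")
        t0 = time.perf_counter()
        logits = model(**enc).logits      # forward pass only, no gradients
        dt = time.perf_counter() - t0
        print(f"batch {start // batch_size}: {len(batch) / dt:.1f} samples/s")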

I don't think this is the right question. The integrated GPU will be useless for ML work, as it's not optimized for it. We need to use the Apple Neural Engine, with its 16 cores optimized for ML tasks.

  • I agree, but how do we connect the ANE to TensorFlow model training so that we can speed up training?
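
As far as I know, TensorFlow training cannot target the ANE at all; the ANE is reachable mainly through Core ML, and only for inference. If inference speed is the goal, the usual route is converting the trained Keras model with coremltools so the Core ML runtime can schedule it on the ANE. A rough sketch under that assumption (the toy model and file name are placeholders; training itself still runs in TensorFlow on CPU/GPU):

    import tensorflow as tf
    import coremltools as ct

    # A trained Keras model would go here; this toy model is just a placeholder.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # Convert to a Core ML program and let the runtime pick CPU/GPU/ANE at
    # inference time.
    mlmodel = ct.convert(model, convert_to="mlprogram",
                         compute_units=ct.ComputeUnit.ALL)
    mlmodel.save("converted_model.mlpackage")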
