How to use the GPU in TensorFlow?

I'm using my 2020 Mac mini with an M1 chip, and this is the first time I've tried to use it for convolutional neural network training.

The problem is that I installed Python (version 3.8.12) using Miniforge3 and TensorFlow following this instruction, but I'm still facing a GPU problem when training a 3D U-Net.

Here's part of my code; I'm hoping to receive some suggestions to fix this.

import tensorflow as tf
from tensorflow import keras 
import json
import numpy as np
import pandas as pd
import nibabel as nib
import matplotlib.pyplot as plt

from tensorflow.keras import backend as K
from tensorflow.python.client import device_lib  # needed for list_local_devices()

# check available devices
def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos]

print(get_available_devices())

Metal device set to: Apple M1
['/device:CPU:0', '/device:GPU:0']
2022-02-09 11:52:55.468198: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-02-09 11:52:55.468885: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: )

X_norm_with_batch_dimension = np.expand_dims(X_norm, axis=0)
#tf.device('/device:GPU:0')  # tried this line; it doesn't help
#tf.debugging.set_log_device_placement(True)  # tried this line; it doesn't help
patch_pred = model.predict(X_norm_with_batch_dimension)

InvalidArgumentError: 2 root error(s) found.
(0) INVALID_ARGUMENT: CPU implementation of Conv3D currently only supports the NHWC tensor format.
    [[node model/conv3d/Conv3D (defined at /Users/mwshay/miniforge3/envs/tensor/lib/python3.8/site-packages/keras/layers/convolutional.py:231) ]]
    [[model/conv3d/Conv3D/_4]]
(1) INVALID_ARGUMENT: CPU implementation of Conv3D currently only supports the NHWC tensor format.
    [[node model/conv3d/Conv3D (defined at /Users/mwshay/miniforge3/envs/tensor/lib/python3.8/site-packages/keras/layers/convolutional.py:231) ]]
0 successful operations. 0 derived errors ignored.

The code runs on Google Colab but fails on the Mac mini locally in a Jupyter notebook. The NHWC tensor format error suggests the code is being executed on my CPU instead of the GPU.

Is there any way to get TensorFlow to use the GPU to train the network?

Replies

This works for me:

with tf.device("/cpu:0"):

or

with tf.device("/gpu:0"):
  • Hi Derek, thanks for the reply. I tried your code and the result is the same. May I ask where you put this code: before the training execution line, or at the beginning, right after importing TensorFlow?

    By the way, did you use Conv2D or Conv3D layers in your network?

  • Hello, I am using a 2021 MacBook Pro with an M1 chip and have the same problem. A 2D U-Net works fine on the GPU, but a 3D U-Net falls back to the CPU. I am using Python 3.8.12, Miniforge3, and TensorFlow 2.8.0. Here are a few more observations:

    tf.keras --> the Adam optimizer is not working (I had to switch to SGD; a minimal sketch of this swap appears after these replies).
    tf.keras --> Batch Normalization is not working either; I had to drop BN and write my own subroutine for this purpose.

    These issues may have nothing to do with the M1 chip itself but rather with TensorFlow and Python 3.8.

  • Hi Tuckle, did you solve this eventually? Can you please tell me how? Thanks.
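A minimal sketch of the optimizer workaround mentioned in the reply above, assuming a placeholder Keras model (not the poster's 3D U-Net): swap tf.keras.optimizers.Adam for tf.keras.optimizers.SGD at compile time.

import tensorflow as tf

# Placeholder model, only to make the snippet self-contained
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(1),
])

# Workaround described above: compile with SGD instead of Adam
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
    loss="mse",
)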


Execute your code inside the with tf.device("/gpu:0"): block, for example:

with tf.device("/gpu:0"):
    {your code}

Don't forget the indentation before your code.
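To make that concrete for the Conv3D case, here is a self-contained sketch of the same pattern; the tiny model and random volume are illustrative stand-ins for the real 3D U-Net and data, and whether Conv3D actually executes on the Metal GPU still depends on the tensorflow-metal plugin providing a GPU kernel for it:

import numpy as np
import tensorflow as tf

# Log op placement so the console shows whether Conv3D lands on GPU or CPU
tf.debugging.set_log_device_placement(True)

# Toy 3D volume and a single-Conv3D model (stand-ins for the real U-Net and data)
dummy_volume = np.random.rand(1, 16, 16, 16, 1).astype("float32")
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16, 16, 16, 1)),
    tf.keras.layers.Conv3D(4, kernel_size=3, padding="same", activation="relu"),
])

with tf.device("/gpu:0"):   # request the GPU for the prediction step
    patch_pred = model.predict(dummy_volume)

print(patch_pred.shape)   # (1, 16, 16, 16, 4)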

Hi, I have just tried this, but TensorFlow still uses my CPU instead of my GPU. Does anyone have another solution?