M1 MacBook Pro seems to have trouble finding the mlcompute module

I recently downloaded and installed TensorFlow on my M1 MacBook Pro following the instructions provided here: https://developer.apple.com/metal/tensorflow-plugin/

Then I tried to run this benchmark code:

import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds

tf.enable_v2_behavior()

from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()

from tensorflow.python.compiler.mlcompute import mlcompute
mlcompute.set_mlc_device(device_name='gpu')


(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
  return tf.cast(image, tf.float32) / 255., label

batch_size = 128

ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)


ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_test = ds_test.batch(batch_size)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)


model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
                 activation='relu'),
  tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
                 activation='relu'),
  tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
#   tf.keras.layers.Dropout(0.25),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
#   tf.keras.layers.Dropout(0.5),
  tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(0.001),
    metrics=['accuracy'],
)

model.fit(
    ds_train,
    epochs=12,
    validation_data=ds_test,
)

and my Jupyter notebook raised the error below:

ModuleNotFoundError                       Traceback (most recent call last)
/var/folders/gq/ngmr4bqj51x5884pn6srkdt80000gn/T/ipykernel_80371/2836866461.py in <module>
      7 disable_eager_execution()
      8 
----> 9 from tensorflow.python.compiler.mlcompute import mlcompute
     10 mlcompute.set_mlc_device(device_name='gpu')
     11 

ModuleNotFoundError: No module named 'tensorflow.python.compiler.mlcompute'

Should I reinstall my TensorFlow environment, or is this a problem with the Metal plugin?

Replies

Hi,

You don't need from tensorflow.python.compiler.mlcompute import mlcompute to set the GPU device with the Metal plugin. The plugin honors TensorFlow's device placement logic, so layers whose operations are supported by the Metal plugin are mapped to the GPU by TF's device placer. You can use tf.debugging.set_log_device_placement(True) to dump out which layers are mapped to the GPU (there is a short sanity-check sketch after the code below). I removed the mlcompute import and was able to train the network with the following code.

import tensorflow_datasets as tfds
import tensorflow as tf

from tensorflow.python.framework.ops import disable_eager_execution
disable_eager_execution()

(ds_train, ds_test), ds_info = tfds.load(
    'mnist',
    split=['train', 'test'],
    shuffle_files=True,
    as_supervised=True,
    with_info=True,
)

def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
  return tf.cast(image, tf.float32) / 255., label

batch_size = 128

ds_train = ds_train.map(
    normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(batch_size)
ds_train = ds_train.prefetch(tf.data.experimental.AUTOTUNE)


ds_test = ds_test.map(
    normalize_img, num_parallel_calls=tf.data.experimental.AUTOTUNE)
ds_test = ds_test.batch(batch_size)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.experimental.AUTOTUNE)


model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(32, kernel_size=(3, 3),
                 activation='relu'),
  tf.keras.layers.Conv2D(64, kernel_size=(3, 3),
                 activation='relu'),
  tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
#   tf.keras.layers.Dropout(0.25),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
#   tf.keras.layers.Dropout(0.5),
  tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=tf.keras.optimizers.Adam(0.001),
    metrics=['accuracy'],
)

model.fit(
    ds_train,
    epochs=12,
    validation_data=ds_test,
)
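
If you want to confirm that the GPU is actually being used, a minimal sanity check (my addition, not part of the original benchmark) is to list the visible devices and enable device placement logging before calling model.fit. With tensorflow-metal installed correctly, the Apple GPU is registered as a PluggableDevice and should show up in the list.

import tensorflow as tf

# With the tensorflow-metal plugin installed, the Apple GPU should be listed here.
print(tf.config.list_physical_devices('GPU'))

# Log the device each operation is placed on, so you can see what runs on the GPU.
tf.debugging.set_log_device_placement(True)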

Hi @sl5035, let us know if the above addresses the issue. Thanks.

  • Hello, I want to disable the GPU, but I get the ModuleNotFoundError: No module named 'tensorflow.python.compiler.mlcompute' error mentioned by the OP. Any ideas?

  • Hi @beingAnubhab,

    If you want to force the code to run on the CPU, wrap it in a device scope:

    with tf.device('/cpu:0'):

    You shouldn't need tensorflow.python.compiler.mlcompute any more.
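
    A minimal sketch of that, reusing the model and datasets defined in the code above, would be:

    import tensorflow as tf

    # Pin the training ops to the CPU instead of the Metal GPU.
    with tf.device('/cpu:0'):
        model.fit(ds_train, epochs=12, validation_data=ds_test)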


Hi,

I got the same issue, but I want to use the GPU. Does anyone have an idea how to solve it?

Hello, I have the same problem and am only getting limited performance out of TensorFlow because I can't access the GPU on my MacBook Pro M1 Max. Is this coming from the Apple version of TensorFlow?