TensorFlow autoencoder: different results between local (M2 Pro Max) and Colab / Kaggle

Hi,

I've been going over this autoencoder tutorial: https://www.tensorflow.org/tutorials/generative/autoencoder#third_example_anomaly_detection

Notebook link: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/generative/autoencoder.ipynb

When I downloaded and ran the notebook locally on my M2 Pro Max, the results were dramatically different and the plots were way off.

This is the plot in the working notebook: [plot image]

This is the local plot: [plot image]

I checked every moving piece, and the difference seems to come from the output of the autoencoder, specifically these lines:

encoded_data = autoencoder.encoder(normal_test_data).numpy()
decoded_data = autoencoder.decoder(encoded_data).numpy()

The working notebook output is: [output not shown]

The local output: [output not shown]
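One way to quantify the difference, rather than eyeballing the decoded arrays, is to compare the per-example reconstruction error on the same inputs. A minimal sketch, reusing the tutorial's variables after running the two lines above:

import numpy as np

# Per-example mean absolute error between input and reconstruction;
# on a healthy run the values for normal ECGs are small and tightly clustered
normal = np.asarray(normal_test_data)  # ensure a plain ndarray
errors = np.mean(np.abs(decoded_data - normal), axis=1)
print("reconstruction error: mean=%.4f  std=%.4f" % (errors.mean(), errors.std()))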

And the overall results are, in the notebook:

Accuracy = 0.944
Precision = 0.9941176470588236
Recall = 0.9053571428571429

and locally:

Accuracy = 0.44
Precision = 0.0
Recall = 0.0
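For context, the tutorial's metrics come from thresholding the reconstruction loss: an example is classified as normal when its mean absolute error falls below a threshold derived from the training data. Precision and recall of exactly 0.0 therefore mean no test example came in under the threshold, i.e. every local reconstruction was bad. A sketch of that logic, closely paraphrasing the tutorial (autoencoder, normal_train_data and test_data follow its naming):

import numpy as np
import tensorflow as tf

# Threshold = mean + one standard deviation of the training reconstruction error
reconstructions = autoencoder.predict(normal_train_data)
train_loss = tf.keras.losses.mae(reconstructions, normal_train_data)
threshold = np.mean(train_loss) + np.std(train_loss)

def predict(model, data, threshold):
    # An example counts as "normal" when its reconstruction error is below the threshold
    reconstructions = model(data)
    loss = tf.keras.losses.mae(reconstructions, data)
    return tf.math.less(loss, threshold)

preds = predict(autoencoder, test_data, threshold)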

I'm using a Mac M2 Pro Max with Python 3.10.12 and TensorFlow 2.14.0.

Can anyone help?

Thanks a lot in advance.


Replies

Apparently, when using a Mac M2 you need to install only tensorflow-macos, and not tensorflow alongside it.

Installing tensorflow-macos alone solved the problem for me :)
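For anyone else hitting this, in a pip environment the fix looks roughly like this (the exact commands depend on how you installed TensorFlow):

pip uninstall -y tensorflow
pip install tensorflow-macos

After that, import tensorflow as usual; the tensorflow-macos wheel provides the tensorflow package namespace on Apple Silicon.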

Thread can be closed, thanks!

I can reproduce the same error when following the tutorial: https://www.tensorflow.org/tutorials/generative/autoencoder?hl=zh-cn

TensorFlow packages:

tensorflow 2.14.0
tensorflow-estimator 2.14.0
tensorflow-io-gcs-filesystem 0.34.0
tensorflow-macos 2.14.0
tensorflow-metal 1.1.0
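To double-check which TensorFlow distributions are actually installed in the environment you're running, here is a quick sketch (standard library only, Python 3.8+):

import tensorflow as tf
from importlib.metadata import distributions

print("tf.__version__ =", tf.__version__)

# List every installed distribution whose name mentions tensorflow
for dist in distributions():
    name = dist.metadata["Name"]
    if name and "tensorflow" in name.lower():
        print(name, dist.version)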


Training on Linux (Google Colab, TensorFlow 2.14.0) produces:

Epoch 1/10
1875/1875 [==============================] - 12s 3ms/step - loss: 0.0236 - val_loss: 0.0130
Epoch 2/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0115 - val_loss: 0.0105
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0101 - val_loss: 0.0098
Epoch 4/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0095 - val_loss: 0.0094
Epoch 5/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0092 - val_loss: 0.0092
Epoch 6/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0091 - val_loss: 0.0091
Epoch 7/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0090 - val_loss: 0.0091
Epoch 8/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0089 - val_loss: 0.0089
Epoch 9/10
1875/1875 [==============================] - 7s 3ms/step - loss: 0.0088 - val_loss: 0.0089
Epoch 10/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0088 - val_loss: 0.0089
<keras.src.callbacks.History at 0x7fe4fd3afbe0>

Training on the Mac gives (exactly the same code):

Epoch 1/10
1875/1875 [==============================] - 12s 3ms/step - loss: 0.0236 - val_loss: 0.0130
Epoch 2/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0115 - val_loss: 0.0105
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0101 - val_loss: 0.0098
Epoch 4/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0095 - val_loss: 0.0094
Epoch 5/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0092 - val_loss: 0.0092
Epoch 6/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0091 - val_loss: 0.0091
Epoch 7/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0090 - val_loss: 0.0091
Epoch 8/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0089 - val_loss: 0.0089
Epoch 9/10
1875/1875 [==============================] - 7s 3ms/step - loss: 0.0088 - val_loss: 0.0089
Epoch 10/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0088 - val_loss: 0.0089
<keras.src.callbacks.History at 0x7fe4fd3afbe0>
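If you suspect the Metal GPU plugin rather than the package mix, one way to test is to hide the GPU so everything runs on the CPU and rerun the training. A debugging sketch (tf.config.set_visible_devices is standard TensorFlow API; it must be called before any op executes):

import tensorflow as tf

# Hide all GPUs (including the Apple Metal device) before any op runs,
# forcing TensorFlow onto the CPU for an apples-to-apples comparison
tf.config.set_visible_devices([], "GPU")
print(tf.config.get_visible_devices())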