Error: No registered 'ResourceApplyAdam' OpKernel for 'GPU' devices compatible with node {{node ResourceApplyAdam}}

Hi!

I'm working on a project using TensorFlow (2.8.0) and GPflow (2.4.0) with Python 3.8.13.

I can't share all my code, but when I run this piece of code:

mymodel.optimize_adam(iter=6000, lr=0.01, b1=0.9, b2=0.999)

i.e. a multi-layer optimization using the Adam algorithm (TensorFlow).

It raises:

NotFoundError: No registered 'ResourceApplyAdam' OpKernel for 'GPU' devices compatible with node {{node ResourceApplyAdam}}
	 (OpKernel was found, but attributes didn't match) Requested Attributes: T=DT_DOUBLE, use_locking=true, use_nesterov=false
	.  Registered:  device='XLA_CPU_JIT'; T in [DT_FLOAT, DT_DOUBLE, DT_COMPLEX64, DT_BFLOAT16, DT_COMPLEX128, DT_HALF]
  device='GPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_HALF]
  device='CPU'; T in [DT_BFLOAT16]
  device='CPU'; T in [DT_FLOAT]
  device='CPU'; T in [DT_DOUBLE]
  device='CPU'; T in [DT_COMPLEX64]
  device='CPU'; T in [DT_COMPLEX128]
 [Op:ResourceApplyAdam]

My model is built on the TensorFlow/GPflow packages. I'm using a 2021 MacBook Pro with an M1 Pro chip.

Note that this same code runs without errors on my early-2015 MacBook Pro (Intel i5), with the same package and Python versions, but it is very slow there.

Does anyone have a solution to this problem?

Thanks

Hello,

As a complement to my first message, here is part of the code, so that the elements I'm referring to are clearer.

I've created a class corresponding to a DNN with different optimisation and inference methods.

def Adam(self, data, iterations=3000, lr=0.01, beta_1=0.9, beta_2=0.99, epsilon=1e-07):
    X_train, Y_train = data
    optimizer = tf.optimizers.Adam(learning_rate=lr, beta_1=beta_1, beta_2=beta_2, epsilon=epsilon)  # initialise Adam
    for step in range(iterations):  # optimisation loop
        with tf.GradientTape(watch_accessed_variables=False) as tape:
            tape.watch(self.trainable_variables)
            objective = -self.ELBOc((X_train, Y_train))  # loss
        # compute the gradients outside the tape context, so the tape
        # doesn't record the gradient computation itself
        gradients = tape.gradient(objective, self.trainable_variables)
        optimizer.apply_gradients(zip(gradients, self.trainable_variables))  # update the trainable variables

where ELBOc is a specific method of my model class.

Error: No registered 'ResourceApplyAdam' OpKernel for 'GPU' devices compatible with node {{node ResourceApplyAdam}}