Dear All Developers,
I previously reported an issue with the HuggingFace package in 683992.
At first I thought the problem came from HuggingFace, but after some further tests it seems to result from TensorFlow / TF-Hub instead.
Here is the thing: I built a BERT fine-tuning model with TF and TF-Hub only, and I got the same error as before.
Here are the details of the error:
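For reference, my setup is roughly the following sketch. The TF-Hub model handles, the classification head, and the hyperparameters below are placeholders rather than my exact code, but the structure (TF-Hub BERT encoder plus the AdamWeightDecay optimizer from the official models package) is the same, and the error is raised at the first training step.

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 -- registers the ops used by the BERT preprocessor
from official.nlp import optimization  # provides AdamWeightDecay via create_optimizer

# Placeholder TF-Hub handles for a BERT preprocessor and encoder
preprocess = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
    "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",
    trainable=True)

# Simple binary-classification head on top of the pooled output
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string)
logits = tf.keras.layers.Dense(1)(encoder(preprocess(text_input))["pooled_output"])
model = tf.keras.Model(text_input, logits)

# AdamWeightDecay is the optimizer named in the error message
optimizer = optimization.create_optimizer(
    init_lr=3e-5, num_train_steps=1000, num_warmup_steps=100,
    optimizer_type="adamw")

model.compile(optimizer=optimizer,
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])
# model.fit(train_ds, epochs=1) then fails with the InvalidArgumentError below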
InvalidArgumentError: Cannot assign a device for operation AdamWeightDecay/AdamWeightDecay/update/Unique: Could not satisfy explicit device specification '/job:localhost/replica:0/task:0/device:GPU:0' because no supported kernel for GPU devices is available.
Colocation Debug Info:
Colocation group had the following types and supported devices:
Root Member(assigned_device_name_index_=2 requested_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' assigned_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' resource_device_name_='/job:localhost/replica:0/task:0/device:GPU:0' supported_device_types_=[CPU] possible_devices_=[]
RealDiv: GPU CPU
ResourceGather: GPU CPU
AddV2: GPU CPU
Sqrt: GPU CPU
Unique: CPU
ResourceScatterAdd: GPU CPU
UnsortedSegmentSum: CPU
AssignVariableOp: GPU CPU
AssignSubVariableOp: GPU CPU
ReadVariableOp: GPU CPU
NoOp: GPU CPU
Mul: GPU CPU
Shape: GPU CPU
Identity: GPU CPU
StridedSlice: GPU CPU
_Arg: GPU CPU
Const: GPU CPU
So there is clearly something wrong on the TF side: the Unique and UnsortedSegmentSum ops in this colocation group only have CPU kernels, and I don't think there is a quick fix.
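The only stopgap I can think of is enabling soft device placement so that the unsupported ops fall back to the CPU instead of aborting training. That only avoids the crash; it does not add the missing Metal kernels, and I have not checked what it does to training speed.

import tensorflow as tf

# Let ops without a GPU kernel (here Unique and UnsortedSegmentSum) fall back
# to the CPU instead of raising InvalidArgumentError. The rest of the graph
# still runs on the GPU, so this only works around the crash.
tf.config.set_soft_device_placement(True)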
Since transformers and related models are so powerful in the NLP area, it would be a great shame if we could not solve NLP tasks with GPU acceleration.
I will also raise this issue through the Feedback Assistant app. Please comment here if you would also like Apple to fix it.
Sincerely,
hawkiyc