I have a model I developed in TensorFlow 2.3 and then converted to an MLModel with coremltools, also reducing the weights to fp16. The model works great on most iOS devices, but on a few, notably the iPhone 11 Pro Max (A2218), it produces NaN outputs when run on the Neural Engine. If the same model is run on CPU/GPU, there are no issues. I also tried the fp32 version of the model and got the same result: NaN on the Neural Engine, but it works fine on CPU/GPU. Any thoughts or suggestions?
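For context, this is how I restrict execution to CPU/GPU when testing (a minimal sketch; `MyModel` is a placeholder for the Xcode-generated model class, not the real name):

```swift
import CoreML

// Restrict Core ML to CPU/GPU so the Neural Engine is never used.
// With .all (the default), Core ML may dispatch to the ANE, which is
// where the NaN outputs appear on the affected devices.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU

// MyModel is a stand-in for the class Xcode generates from the .mlmodel:
// let model = try MyModel(configuration: config)
```

With this configuration the outputs are correct; switching back to `.all` reproduces the NaNs on the A2218 hardware.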