Small changes in inference results after encrypting a CoreML model

Recently we added model encryption to an application we work on. While doing so, our tests flagged changes in the model's inference results. I built a small application to reproduce the phenomenon.

While the changes are quite small (about 1e-9 at most), we would like to know what causes them. We assumed that encrypting a model does not alter it at all, and that the results should be bit-for-bit identical before and after encryption.
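For reference, the comparison is roughly like the sketch below. The paths `Plain.mlmodelc` / `Encrypted.mlmodelc`, the feature names `"input"` / `"output"`, and the input shape are placeholder assumptions, not the actual model; substitute your own.

```swift
import Foundation
import CoreML
import Dispatch

// Placeholder paths: compiled unencrypted model and its encrypted counterpart.
let plainURL = URL(fileURLWithPath: "Plain.mlmodelc")
let encryptedURL = URL(fileURLWithPath: "Encrypted.mlmodelc")

// Largest element-wise absolute difference between two output arrays.
func maxAbsDifference(_ a: MLMultiArray, _ b: MLMultiArray) -> Double {
    var maxDiff = 0.0
    for i in 0..<a.count {
        maxDiff = max(maxDiff, abs(a[i].doubleValue - b[i].doubleValue))
    }
    return maxDiff
}

// Run a single prediction; "input"/"output" are assumed feature names.
func predict(with model: MLModel, input: MLMultiArray) throws -> MLMultiArray {
    let provider = try MLDictionaryFeatureProvider(dictionary: ["input": input])
    let result = try model.prediction(from: provider)
    guard let output = result.featureValue(for: "output")?.multiArrayValue else {
        fatalError("unexpected output feature")
    }
    return output
}

// Fixed, deterministic input so both models see bit-identical data.
let input = try! MLMultiArray(shape: [1, 3, 224, 224], dataType: .float32)
for i in 0..<input.count { input[i] = NSNumber(value: Float(i % 255) / 255.0) }

let config = MLModelConfiguration()
// Optional: pin both models to the CPU so compute-unit selection isn't a variable.
config.computeUnits = .cpuOnly

// Encrypted models must be loaded asynchronously so CoreML can fetch the key.
let done = DispatchSemaphore(value: 0)
MLModel.load(contentsOf: encryptedURL, configuration: config) { result in
    defer { done.signal() }
    guard case .success(let encryptedModel) = result else { return }
    do {
        let plainModel = try MLModel(contentsOf: plainURL, configuration: config)
        let plainOut = try predict(with: plainModel, input: input)
        let encryptedOut = try predict(with: encryptedModel, input: input)
        print("max abs difference:", maxAbsDifference(plainOut, encryptedOut))
    } catch {
        print("prediction failed:", error)
    }
}
done.wait()
```

With this kind of setup, the printed maximum absolute difference is non-zero (on the order of 1e-9) even though the input and configuration are identical for both models.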

Has anyone else experienced this? Is there an explanation for why these changes happen?