Core ML model (neural network) int quantization

The documentation mentions that a Core ML model with float 32 weights can be quantized to float 16, and can also be quantized to 1–8 bits, where 8-bit quantization reduces the model size to one fourth of the float 32 model. We are unsure whether the 1–8 bit quantization is float or integer quantization. Could you please confirm whether 1–8 bit quantization is int or float quantization?
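For context, the "one fourth" figure follows from simple storage arithmetic: 8-bit weights occupy a quarter of the space of float 32 weights, regardless of how those 8 bits are interpreted. A quick sketch (the weight count is a hypothetical parameter count, not from any real model):

```python
# Back-of-the-envelope weight storage sizes for different bit widths.
n_weights = 1_000_000  # hypothetical parameter count

size_fp32 = n_weights * 32 / 8   # bytes at float 32
size_fp16 = n_weights * 16 / 8   # bytes at float 16 (half the size)
size_8bit = n_weights * 8 / 8    # bytes at 8 bits (one fourth the size)

print(size_fp16 / size_fp32)  # 0.5
print(size_8bit / size_fp32)  # 0.25
```

So the size reduction itself does not answer the int-vs-float question; it only reflects the bit width of the stored weights.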
