How come float only produced 3.02, just two decimal places?
Because Float and Double are binary floating point types.
A value that looks simple in decimal, like 1.01 or 2.01, becomes an infinitely repeating fraction in binary.
1.01 = (binary)1.0000001010001111010111000010100011110101110000101000111101011100...
2.01 = (binary)10.0000001010001111010111000010100011110101110000101000111101011100...
But Float has only 32 bits, so the two numbers above are represented as Float like this:
1.01 as Float = (Float)"00111111100000010100011110101110"
2.01 as Float = (Float)"01000000000000001010001111010111"
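If you want to check these bit patterns yourself, `Float` exposes its raw bits through the `bitPattern` property. Here is a minimal sketch (the `bits` helper is only a zero-padding convenience, not a standard API):

```swift
// Print the full 32-bit IEEE 754 pattern of a Float,
// left-padded with zeros so all 32 bits are visible.
func bits(_ x: Float) -> String {
    let s = String(x.bitPattern, radix: 2)
    return String(repeating: "0", count: 32 - s.count) + s
}

print(bits(1.01))  // 00111111100000010100011110101110
print(bits(2.01))  // 01000000000000001010001111010111
```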
And the `+` operation on these Float values produces this result:
(Float)"01000000010000010100011110101110"
The decimal value 3.02 also becomes an infinitely repeating fraction in binary:
3.02 = (binary)11.0000010100011110101110000101000111101011100001010001111010111000...
When rounded to the Float representation, it becomes:
3.02 as Float = (Float)"01000000010000010100011110101110"
So, "3.02" has enough digits to represent the result: 01000000010000010100011110101110
---
How come double produced 3.0199999999999996?
In Double, 1.01 and 2.01 are represented like this:
1.01 as Double = (Double)"0011111111110000001010001111010111000010100011110101110000101001"
2.01 as Double = (Double)"0100000000000000000101000111101011100001010001111010111000010100"
And the Double addition produces this result:
(Double)"0100000000001000001010001111010111000010100011110101110000101000"
But when the value 3.02 (which is 11.0000010100011110101110000101000111101011100001010001111010111000... in binary) is rounded to a 64-bit binary floating point value, it becomes:
3.02 as Double = (Double)"0100000000001000001010001111010111000010100011110101110000101001"
This is a slightly different value from the result of the operation, so Swift shows enough digits to represent the difference.
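Again, this can be checked in code (the `bits` helper is just a zero-padding convenience for display, as before):

```swift
// Print the full 64-bit IEEE 754 pattern of a Double.
func bits(_ x: Double) -> String {
    let s = String(x.bitPattern, radix: 2)
    return String(repeating: "0", count: 64 - s.count) + s
}

let sum = 1.01 + 2.01   // Double is the default type for these literals

print(bits(1.01))   // 0011111111110000001010001111010111000010100011110101110000101001
print(bits(2.01))   // 0100000000000000000101000111101011100001010001111010111000010100
print(bits(sum))    // 0100000000001000001010001111010111000010100011110101110000101000
print(bits(3.02))   // 0100000000001000001010001111010111000010100011110101110000101001
print(sum == 3.02)  // false, the patterns differ in the last bit
print(sum)          // 3.0199999999999996
```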
---
Shouldn't the result in double produce something like 3.020000000000006?
3.020000000000006 as Double = (Double)"0100000000001000001010001111010111000010100011110101110000110110"
The difference from the operation result is larger than that of `3.02 as Double`.
So Swift chooses the more accurate decimal representation to show the value (Double)"0100000000001000001010001111010111000010100011110101110000101000".
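One way to see why "3.0199999999999996" wins: of the candidate decimal strings, only it parses back to the exact bit pattern of the result (a small sketch):

```swift
let sum = 1.01 + 2.01

// "3.02" and "3.020000000000006" both parse to Doubles that differ from
// the computed result; "3.0199999999999996" parses back to it exactly.
print(Double("3.02")! == sum)                // false
print(Double("3.020000000000006")! == sum)   // false
print(Double("3.0199999999999996")! == sum)  // true
```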
---
Conclusion:
- Decimal numbers such as 1.01 or 2.01 cannot be represented exactly in binary floating point types.
- So, any result of an operation on binary floating point numbers may carry some error.
- Swift tries to show enough digits to represent the operation result, even if it contains some error.
The algorithm that generates the default decimal representation may differ between versions of Swift.
You had better avoid the default when showing the result of a binary floating point operation and format the value explicitly instead.
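For example (a minimal sketch using Foundation's `String(format:)` and `NumberFormatter`; `Decimal` is another option when you need decimal accuracy rather than just decimal display):

```swift
import Foundation

let sum = 1.01 + 2.01   // 3.0199999999999996 with the default description

// Decide the number of fraction digits yourself instead of relying on
// the default shortest-round-trip representation.
print(String(format: "%.2f", sum))   // 3.02

let formatter = NumberFormatter()
formatter.maximumFractionDigits = 2
print(formatter.string(from: NSNumber(value: sum)) ?? "")  // 3.02
```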