Do Double and Float Calculate Like This, or Is This a Bug?

let floatJust: Float = 1.01 + 2.01
let doubleJust: Double = 1.01 + 2.01

print(floatJust)
print(doubleJust)


On the console, they are printed as:


2018-10-28 11:02:58.827997+0800 PracticeProject[23508:4846922] libMobileGestalt MobileGestalt.c:890: MGIsDeviceOneOfType is not supported on this platform.


3.02

3.0199999999999996


How come Float only produced 3.02, just two decimal places?

How come double produced 3.0199999999999996?

Shouldn't the result in double produce something like 3.020000000000006?

Thank you. God bless, Proverbs 31

Accepted Answer

Not a bug at all; it's normal behavior. That's the "magic" of Float and Double when converted to decimal!


Double has higher precision, so it performs a more precise conversion from decimal to its internal binary format.

Why do you think it should produce something like 3.020000000000006?


Conclusion: you can never simply guess what the printed output of a floating point value will be.


If you want two decimal places, just write:


print(String(format: "%.2f", doubleJust))
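
If you also want locale-aware output, NumberFormatter (from Foundation) is one alternative. A minimal sketch, reusing doubleJust from the question:

import Foundation

let doubleJust: Double = 1.01 + 2.01

// NumberFormatter rounds to the requested number of fraction digits
// and uses the current locale's decimal separator.
let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.minimumFractionDigits = 2
formatter.maximumFractionDigits = 2
print(formatter.string(from: NSNumber(value: doubleJust)) ?? "")   // 3.02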


As for the message:

2018-10-28 11:02:58.827997+0800 PracticeProject[23508:4846922] libMobileGestalt MobileGestalt.c:890: MGIsDeviceOneOfType is not supported on this platform.


That's not related; it's a simulator warning, without any consequence (and pretty mysterious, in fact). Ignore it.

How come Float only produced 3.02, just two decimal places?


Because Float and Double are binary floating point types.


A simple value in decimal representation like 1.01 or 2.01 becomes an infinitely repeating fraction in the binary system.


1.01 = (binary)1.0000001010001111010111000010100011110101110000101000111101011100...

2.01 = (binary)10.0000001010001111010111000010100011110101110000101000111101011100...


But Float has only 32 bits, so the two numbers above are represented as Float like this:

1.01 as Float = (Float)"00111111100000010100011110101110"

2.01 as Float = (Float)"01000000000000001010001111010111"

And the `+` operation for Float generates this value as a result:

(Float)"01000000010000010100011110101110"


The decimal value 3.02 also becomes an infinitely repeating fraction in binary:

3.02 = (binary)11.0000010100011110101110000101000111101011100001010001111010111000...

When rounded into the Float representation, it becomes:

3.02 as Float = (Float)"01000000010000010100011110101110"


So, "3.02" has enough digits to represent the result: 01000000010000010100011110101110


---

How come double produced 3.0199999999999996?


In Double, 1.01 and 2.01 are represented like this:

1.01 as Double = (Double)"0011111111110000001010001111010111000010100011110101110000101001"

2.01 as Double = (Double)"0100000000000000000101000111101011100001010001111010111000010100"

And the `+` operation for Double generates this value as a result:

(Double)"0100000000001000001010001111010111000010100011110101110000101000"


But when the value 3.02 (which is 11.0000010100011110101110000101000111101011100001010001111010111000... in binary) is rounded into the 64-bit binary floating point format, it becomes:

3.02 as Double = (Double)"0100000000001000001010001111010111000010100011110101110000101001"


This is a slightly different value from the result of the operation, so Swift shows enough digits to represent the difference.
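
The same check for Double shows that one-bit difference. A minimal sketch; bits64(_:) is again just an illustrative helper:

let x: Double = 1.01
let y: Double = 2.01
let sum = x + y
let literal: Double = 3.02

// Render the raw 64-bit IEEE 754 encoding as a zero-padded binary string.
func bits64(_ value: Double) -> String {
    let s = String(value.bitPattern, radix: 2)
    return String(repeating: "0", count: 64 - s.count) + s
}

print(bits64(sum))      // 0100000000001000001010001111010111000010100011110101110000101000
print(bits64(literal))  // 0100000000001000001010001111010111000010100011110101110000101001
print(sum == literal)   // false: the last bit differs
print(sum)              // 3.0199999999999996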


---

Shouldn't the result in double produce something like 3.020000000000006?


3.020000000000006 as Double = (Double)"0100000000001000001010001111010111000010100011110101110000110110"


The difference from the operation result is larger than that of `3.02 as Double`.

Swift chooses the more accurate decimal representation to show (Double)"0100000000001000001010001111010111000010100011110101110000101000", which is why it prints 3.0199999999999996 and not 3.020000000000006.


---

Conclusion:

- Numbers such as 1.01 or 2.01 in decimal representation cannot be represented exactly in binary floating point types.

- So, any result of an operation on binary floating point numbers may carry some error (see the comparison sketch after this list).

- Swift tries to show enough digits to represent the operation result even if it has some error.
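
As a concrete illustration of the second point, compare binary floating point results with a tolerance instead of exact equality. A minimal sketch:

let result = 1.01 + 2.01                   // inferred as Double

print(result == 3.02)                      // false: the last bit differs
print(abs(result - 3.02) <= 3.02.ulp * 4)  // true: equal within a few ULPs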


The algorithm that generates the default decimal representation may differ between Swift versions.

It's better to avoid the default representation when showing the result of a binary floating point operation to users.

I'm not sure it's documented anywhere, but I believe that Swift conversions from floating point to string will produce sufficient decimal places so that converting the string back to floating point will produce the same floating point number you started with.


This is important for something like JSON, where numbers are necessarily represented in decimal digits within a (UTF-8) JSON string, so there's no penalty for the round-trip.
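
A quick sketch of that round-trip property, using the value from the question:

let doubleJust: Double = 1.01 + 2.01

let text = String(doubleJust)   // "3.0199999999999996"
let restored = Double(text)!    // parse the decimal string back into a Double

print(restored == doubleJust)   // true: the round-trip is lossless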
