Alas, your question does not make any sense given the realities of binary floating point. First up, some overall comments:
Swift uses Double as its standard floating point type. double_t is a compatibility thing inherited from C; it only works in this context because you’re importing Foundation. Swift’s Double type is defined to be an IEEE 754 ‘double’ (binary64), which means it uses binary floating point.
The fact that Double uses binary floating point means that questions about decimal places don’t make a lot of sense in Swift. For example, consider this snippet:
let a: Double = 0.2
let b: Double = 0.1
let c: Double = 0.3
let sum = a + b
print(sum == c)
// prints "false"
If you look at the binary representation of these values it’s easy to see why:
print(String(a.bitPattern, radix: 16))
// prints "3fc999999999999a"
print(String(b.bitPattern, radix: 16))
// prints "3fb999999999999a"
print(String(c.bitPattern, radix: 16))
// prints "3fd3333333333333"
print(String(sum.bitPattern, radix: 16))
// prints "3fd3333333333334"
That is, simple decimal numbers (like 0.1) don’t have simple representations in binary floating point, and thus simple decimal calculations accrue rounding errors.
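If you genuinely need decimal semantics, for example when working with currency, Foundation’s Decimal type stores values in base 10 and thus sidesteps this particular class of error. A minimal sketch:

```swift
import Foundation

// Decimal stores values in base 10, so simple decimal
// fractions like 0.1 are represented exactly.
let da = Decimal(string: "0.2")!
let db = Decimal(string: "0.1")!
let dc = Decimal(string: "0.3")!
print(da + db == dc)
// prints "true"
```

Decimal is slower than Double and isn’t appropriate for general scientific work, but for money-style values it behaves the way folks intuitively expect.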
IEEE 754 has a good model for the limits of these rounding errors. To answer your question you’ll need to understand that model and make a decision as to how you want to handle them. If you search the ’net for “floating point rounding errors” you’ll find lots of good articles about this. I can’t make a specific recommendation because I don’t really understand this stuff myself (-:
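One common way to handle such errors is to compare values against an explicit tolerance rather than for exact equality. The nearlyEqual helper below is my own illustration, not a standard API, and a fixed absolute tolerance like this is only reasonable when you know the rough magnitude of the values involved:

```swift
// Hypothetical helper: absolute-tolerance comparison.
// An absolute tolerance only makes sense when you know the
// approximate magnitude of the values being compared; for
// values of wildly varying magnitude you’d want a relative
// tolerance instead.
func nearlyEqual(_ x: Double, _ y: Double, tolerance: Double = 1e-9) -> Bool {
    return abs(x - y) <= tolerance
}

print(nearlyEqual(0.1 + 0.2, 0.3))
// prints "true"
```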
Share and Enjoy
—
Quinn “The Eskimo!”
Apple Developer Relations, Developer Technical Support, Core OS/Hardware
let myEmail = "eskimo" + "1" + "@apple.com"