import Darwin
print( max(.NaN, 1.23) ) // 1.23
print( max(1.23, .NaN) ) // nan
print( fmax(.NaN, 1.23) ) // 1.23
print( fmax(1.23, .NaN) ) // 1.23
print( max(.NaN, 1.0) != max(1.0, .NaN) ) // true
print( fmax(.NaN, 1.0) != fmax(1.0, .NaN) ) // false
Why should the order of the arguments decide whether NaN is propagated? And why shouldn't max behave like fmax?
I haven't checked what IEEE-754 says, but the man page for fmax is pretty clear about this:
"If exactly one argument is a NaN, fmax() returns the other argument. If both arguments are NaNs, fmax() returns a NaN."
The reason Swift's standard library max (and min) behave this way is probably that they are defined something like this:
func myMax<T : Comparable>(x: T, _ y: T) -> T { return x > y ? x : y }
func myMin<T : Comparable>(x: T, _ y: T) -> T { return x > y ? y : x }
This produces the same results (i.e. the order of the arguments decides whether NaN is propagated).
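Indeed, feeding NaN through these (guessed) definitions reproduces the asymmetry, since any ordered comparison involving NaN is false and myMax then always falls through to its second argument:
print( myMax(.NaN, 1.23) ) // 1.23 (NaN > 1.23 is false, so y is returned)
print( myMax(1.23, .NaN) ) // nan  (1.23 > NaN is also false, so y is returned)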
Btw, apparently Haskell (GHCi) has this "problem" too, but the order that results in propagation is the opposite(!), so maybe I should just keep my mouth shut:
Prelude> let nan = 0/0
Prelude> nan
NaN
Prelude> max nan 1.23
NaN
Prelude> max 1.23 nan
1.23
But still, the simple bold question above begs for an answer.