CGFloat vs Double

I've read all the discussions about number types in Swift, and I generally understand the complexity (even if I don't like it, and think that this is one huge area where the smarter people behind Swift could make some big improvements).


This is a much narrower question, though: why can't we have a seamless bridge between CGFloat and Double? The lack of one makes dealing with Cocoa very annoying, with tons of boilerplate and repetition required. I know that the definition of CGFloat is architecture-dependent, but presumably the compiler knows which architecture it's compiling for.


At the very least, why can't we get arithmetic operator overloads for CGFloat <-> [Swift number types]?


let c = mark.center // a CGPoint
let path = NSBezierPath()
let dx = 4.0
let dy = 8.0
path.moveToPoint(CGPoint(x: c.x - dx, y: c.y - dy))
                               ^


results in the error "Binary operator '-' cannot be applied to operands of type 'CGFloat' and 'Double'"


This makes the type-inference system virtually useless when dealing with Cocoa APIs that expect or return CGFloat.


I know that I can overload the operators myself, but... yuck. This feels like something better handled by the Swift team, no?
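For the record, something like this is what I mean; just a minimal sketch, and of course every operator and operand-type combination would need its own version:

import CoreGraphics

// Hypothetical overloads (not part of the standard library): let '-'
// mix CGFloat and Double by doing the explicit conversion in one place.
func - (lhs: CGFloat, rhs: Double) -> CGFloat {
    return lhs - CGFloat(rhs)
}

func - (lhs: Double, rhs: CGFloat) -> CGFloat {
    return CGFloat(lhs) - rhs
}

With those two in scope, the c.x - dx expression above compiles as written.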

Yeah, that's totally something the Swift team could have handled. I'd file a radar (a.k.a. a bug report) to let Apple know it doesn't match up to your expectations. However, I don't know if Swift is 64-bit only; if it's not, then it wouldn't be able to do the seamless bridging, because CGFloat might be just a (32-bit) Float instead. Anyway, definitely file a radar.

CGFloat can be either a (32-bit) Float or a (64-bit) Double, depending on the architecture, as can be seen in its declaration (Cmd-clicking on CGFloat):

struct CGFloat {
    /// The native type used to store the CGFloat, which is Float on
    /// 32-bit architectures and Double on 64-bit architectures.
    typealias NativeType = Double
    init()
    init(_ value: Float)
    init(_ value: Double)

    /// The native value.
    var native: NativeType
}


So seamlessly bridging to Double (which is always a 64-bit double) is not possible.
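In practice you convert explicitly through the initializers shown in the declaration above; a minimal sketch (the values are just placeholders):

import CoreGraphics

let d = 3.25           // inferred as Double
let f = CGFloat(d)     // explicit Double -> CGFloat
let back = Double(f)   // explicit CGFloat -> Double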


I've never been particularly bothered by this "issue", especially since the alternative (i.e. introducing strange exceptions in the type system for floats and doubles) would be a real issue. And there are cases where accidentally mixing floats and doubles leads to bugs that are hard to find.
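For example (a small sketch, with made-up values): narrowing a Double to a Float silently loses precision, which is exactly the kind of thing an implicit conversion would hide:

let precise = 0.1 + 0.2                        // a Double: 0.30000000000000004
let narrowed = Float(precise)                  // rounded to the nearest Float
let roundTrips = Double(narrowed) == precise   // false: bits were lost in the narrowing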


Your example could just be written, e.g.:

let c = mark.center // a CGPoint
let path = NSBezierPath()
path.moveToPoint(CGPoint(x: c.x - 4, y: c.y - 8))


or if you really need variables instead of literals there:

let c = mark.center // a CGPoint
let path = NSBezierPath()
let dx, dy : CGFloat
dx = 4.0
dy = 8.0
path.moveToPoint(CGPoint(x: c.x - dx, y: c.y - dy))

That's only one explicit CGFloat you have to write there (though of course you have to repeat dx and dy).


Another example, with five CGFloats:

let x, y, z, w, t : CGFloat
(x, y, z, w, t) = (1.2, 3.4, 5.6, 7.8, 9.0)

Yes, and as I said, I know it's architecture-dependent, but the architecture is known at compile time, so surely this could be handled more elegantly for projects that target a single architecture (64-bit in my case).


My complaint stems not from any single example, but from over 150 different places in only 18 source files where I'm having to cast to CGFloat or hint to the compiler that I want a CGFloat.


In any case, as I said, I recognize that bridging is complex, hence the fallback request to include CGFloat in the numerous arithmetic operator overloads, which already cover the other numeric types.

I think there are two reasons why Swift doesn't do it the way you suggest (while clang does).


1. The Swift compiler doesn't know that Float and Double are numeric types, or that they're related. That in turn is because most Swift types are defined in the Swift standard library rather than built into the compiler.


2. The fact that CGFloat and Double have the same representation is a temporary "fact" that happens to be true on current 64-bit targets. This may not be true on some hypothetical future hardware (or on some hypothetical non-Apple platform that you might port code to under an open-source Swift). So the clang behavior (recognizing the target architecture) doesn't solve the problem; it just defers it until later.
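To make that concrete, here's a sketch based on the declaration quoted earlier: code that goes through CGFloat.NativeType and the native property keeps compiling whichever representation the target happens to use.

import CoreGraphics

let g: CGFloat = 2.5
// NativeType is Double on today's 64-bit targets and Float on 32-bit ones,
// so nothing here bakes in an assumption about the underlying representation.
let n: CGFloat.NativeType = g.native
let h = CGFloat(n)    // round-trips losslessly on any target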
