a) Decimal calculation is needed in systems whose calculation rules are specified in terms of the decimal representation of numbers.
As you may know, 0.1 cannot be represented precisely in a binary floating-point format.
For example, in finance or currency exchange, the result of a calculation has a detailed definition, including the rounding mode, based on its decimal representation.
In such systems, 0.1 × 10 needs to be precisely 1.0, not a value like 0.9999999999999998 that merely displays as 1.0 after rounding.
Decimal calculation is a fairly heavy operation, so if you do not need such behavior, you should use Int or Double (or sometimes Float).
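Here is a minimal sketch of the difference. (The Decimal value is built from a string, since a float literal is converted through Double and may already be inexact.)

```swift
import Foundation

// Double: adding 0.1 ten times does not land exactly on 1.0.
var d = 0.0
for _ in 0..<10 { d += 0.1 }
print(d == 1.0)                    // false
print(String(format: "%.17f", d))  // 0.99999999999999989

// Decimal: the same loop is exact.
var dec = Decimal(0)
let tenth = Decimal(string: "0.1")! // force-unwrap is safe: the literal is valid
for _ in 0..<10 { dec += tenth }
print(dec == 1)                    // true
```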
b) NO
In Swift 3, `pow(2, 3)` is read as `pow(2 as Decimal, 3 as Int)`, because `pow` for Decimal has only one overload: `pow(_: Decimal, _: Int) -> Decimal`.
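You can verify which overload is chosen by inspecting the result type (assuming Foundation is imported):

```swift
import Foundation

let result = pow(2, 3)
print(type(of: result)) // Decimal, so the pow(Decimal, Int) overload was chosen
print(result)           // 8
```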
c) In Swift, no numeric type is ever automatically converted to another numeric type, but numeric literals are typeless.
So integer literals are not always Int; they can be treated as any type that conforms to ExpressibleByIntegerLiteral (named IntegerLiteralConvertible before Swift 3).
When you write `pow(2, 3.0)`, you are calling `pow(Double, Double)`, and the literal `2` is treated as a Double.
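For example, the very same literal takes on different types depending on context:

```swift
import Foundation

let a = 2           // inferred as Int, the default type for integer literals
let b: Double = 2   // the same literal, treated as Double
let c: Decimal = 2  // ...or as Decimal
print(type(of: a), type(of: b), type(of: c)) // Int Double Decimal
```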
This is not the same as in some other languages, which automatically convert `int` to `double`.
So you cannot call it like this:

```swift
let iVal: Int = 2
let dVal: Double = 3 // the literal `3` is treated as Double here
let result = pow(iVal, dVal) // compile-time error: no overload of `pow` accepts (Int, Double)
```
Decimal also conforms to ExpressibleByIntegerLiteral, so in `pow(2, 3)` the literal `2` is treated as a Decimal (and `3` as an Int) to match the single Decimal overload of `pow`.
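If you actually want the Double overload, give the compiler an explicit Double somewhere, for example:

```swift
import Foundation

let asDecimal = pow(2, 3)            // Decimal, via pow(Decimal, Int)
let asDouble = pow(2, 3) as Double   // contextual type forces pow(Double, Double)
let alsoDouble = pow(2 as Double, 3) // same overload, selected via the argument
print(type(of: asDecimal), type(of: asDouble), type(of: alsoDouble))
// prints: Decimal Double Double
```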