[swift-evolution] Proposal: Remove % operator for floating-point types
ccruden at novafore.com
Fri Dec 18 15:17:45 CST 2015
Floating-point numbers are, by their very nature, approximations represented in binary. Many decimal numbers cannot be represented exactly in binary floating-point format, so after a calculation the pipeline is: decimal number -> binary floating point -> a repeating (inexact) decimal result.
If you need decimal accuracy, such as in accounting, you should avoid floats altogether and stick to a decimal type (in Java it is called BigDecimal; in Swift there is NSDecimalNumber, I think, which has its own issues).
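A minimal sketch of that point, assuming Foundation's Decimal overlay (NSDecimalNumber under the hood); summing 0.1 repeatedly drifts in binary floating point but stays exact in decimal:

```swift
import Foundation

// Summing 0.1 ten times: Double accumulates binary rounding error,
// because 0.1 has no exact binary representation.
var d = 0.0
for _ in 0..<10 { d += 0.1 }
print(d == 1.0)            // false on IEEE 754 doubles

// Decimal represents 0.1 exactly, so the sum is exactly 1.
let tenth = Decimal(string: "0.1")!
var dec = Decimal(0)
for _ in 0..<10 { dec += tenth }
print(dec == Decimal(1))   // true
```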
> On 2015-12-19, at 4:12:58, Stephen Canon via swift-evolution <swift-evolution at swift.org> wrote:
> Hi everybody —
> I’d like to propose removing the “%” operator for floating-point types.
> While C and C++ do not provide the “%” operator for floating-point types, many newer languages do (Java, C#, and Python, to name just a few). Superficially this seems reasonable, but there are severe gotchas when % is applied to floating-point data, and the results are often extremely surprising to unwary users. C and C++ omitted this operator for good reason. Even if you think you want this operator, it is probably doing the wrong thing in subtle ways that will cause trouble for you in the future.
> The % operator on integer types satisfies the division algorithm axiom: If b is non-zero and q = a/b, r = a%b, then a = q*b + r. This property does not hold for floating-point types, because a/b does not produce an integral value. If it did produce an integral value, it would need to be a bignum type of some sort (the integral part of DBL_MAX / DBL_MIN, for example, has over 2000 bits or 600 decimal digits).
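The integer-side axiom is easy to check directly; a small sketch with arbitrary example values:

```swift
// Division algorithm for integers: a == (a / b) * b + (a % b)
// holds for any non-zero b, because a / b truncates to an Int.
let a = 17, b = 5
let q = a / b        // truncating quotient: 3
let r = a % b        // remainder: 2
print(a == q * b + r)  // true
```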
> Even if a bignum type were returned, or if we ignore the loss of the division algorithm axiom, % would still be deeply flawed. Whereas people are generally used to modest rounding errors in floating-point arithmetic, because % is not continuous small errors are frequently enormously magnified with catastrophic results:
> (swift) 10.0 % 0.1
> // r0 : Double = 0.0999999999999995 // What?!
> [Explanation: 0.1 cannot be exactly represented in binary floating point; the actual value of “0.1” is 0.1000000000000000055511151231257827021181583404541015625. Other than that rounding, the entire computation is exact.]
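For reference, the same computation through the C library fmod (the proposed replacement spelling) shows the identical discontinuity:

```swift
import Foundation

// Because "0.1" is really 0.1000000000000000055511151231257827…,
// the true quotient 10.0 / 0.1 is just under 100. Truncating
// division therefore yields 99, and the remainder is nearly a
// full 0.1 rather than the 0 a human expects.
let r = fmod(10.0, 0.1)
print(r)         // ≈ 0.0999…, not 0
print(r > 0.09)  // the rounding error is magnified to almost b itself
```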
> Proposed Approach:
> Remove the “%” operator for floating-point types. The operation is still available via the C standard library fmod( ) function (which should be mapped to a Swiftier name, but that’s a separate proposal).
> Alternative Considered:
> Instead of binding “%” to fmod( ), it could be bound to remainder( ), which implements the IEEE 754 remainder operation; this is just like fmod( ), except instead of returning the remainder under truncating division, it returns the remainder of round-to-nearest division, meaning that if a and b are positive, remainder(a,b) is in the range [-b/2, b/2] rather than [0, b). This still has a large discontinuity, but the discontinuity is moved away from zero, which makes it much less troublesome (that’s why IEEE 754 standardized this operation):
> (swift) remainder(1, 0.1)
> // r1 : Double = -0.000000000000000055511151231257827 // Looks like normal floating-point rounding
> The downside to this alternative is that now % behaves totally differently for integer and floating-point data, and of course the division algorithm still doesn’t hold.
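The fmod/remainder contrast described above can be sketched as follows, assuming Foundation's C math shims:

```swift
import Foundation

// fmod: remainder of truncating division; for positive operands the
// result lies in [0, b), so the discontinuity sits right at zero.
print(fmod(1.0, 0.1))       // ≈ 0.0999…, a large surprise near zero

// remainder: IEEE 754 round-to-nearest division; the result lies in
// [-b/2, b/2], moving the discontinuity away from zero, so the answer
// looks like ordinary rounding noise.
print(remainder(1.0, 0.1))  // ≈ -5.55e-17
```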
> Remove the % operator for floating-point types in Swift 3. Add a warning in Swift 2.2 that points out the replacement fmod(a, b).
> Thanks for your feedback,
> – Steve