[swift-evolution] [Proposal draft] Enhanced floating-point protocols
Ted F.A. van Gaalen
tedvgiosdev at gmail.com
Fri Apr 15 12:57:12 CDT 2016
Hi Stephen
> Hi Erica, thanks for the feedback.
>
>> On Apr 14, 2016, at 6:29 PM, Erica Sadun <erica at ericasadun.com> wrote:
>>
>> * I do use % for floating point but not as much as I first thought before I started searching through my code after reading your e-mail. But when I do use it, it's nice to have a really familiar symbol rather than a big word.
>
I agree completely with Erica here.
>
>> What were the ways that it was used incorrectly? Do you have some examples?
>
> As it happens, I have a rationale sitting around from an earlier (internal) discussion:
>
> While C and C++ do not provide the “%” operator for floating-point types, many newer languages do (Java, C#, and Python, to name just a few). Superficially this seems reasonable, but there are severe gotchas when % is applied to floating-point data, and the results are often extremely surprising to unwary users.
What is your definition of “unwary users”? Programmers who don’t usually work with floating-point data?
In day-to-day floating-point usage there are no gotchas or total flabbergasting astonishment.
> C and C++ omitted this operator for good reason. Even if you think you want this operator, it is probably doing the wrong thing in subtle ways that will cause trouble for you in the future.
I don’t share this opinion at all.
>
> The % operator on integer types satisfies the division algorithm axiom: If b is non-zero and q = a/b, r = a%b, then a = q*b + r. This property does not hold for floating-point types, because a/b does not produce an integral value.
The axiom is correct, of course, but only in a perfect world with perfect numbers, not in real life. **
Floats never represent integral values; if they did, they would be Integers.
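Just to spell the quoted axiom out (a small sketch; the names are illustrative):

// Integer division algorithm: for b != 0, a == (a / b) * b + (a % b) holds exactly.
let a = 17, b = 5
let q = a / b            // 3
let r = a % b            // 2
print(a == q * b + r)    // true

// With Doubles the quotient is not integral, so the same identity
// cannot relate "/" and a remainder operator in this way:
let x = 10.0, y = 3.0
print(x / y)             // 3.333..., not an integral quotient, so a remainder
                         // defined against it would simply be ~0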
> If it did produce an integral value, it would need to be a bignum type of some sort (the integral part of DBL_MAX / DBL_MIN, for example, has over 2000 bits or 600 decimal digits).
Impossible. E.g. one would need a planet covered completely
with memory units (even holographic quantum ones), and even then
you would not have enough storage to store (one) pi with all its decimals.
By the way, most of pi’s decimals are unknown, as you know.
It could even be that the memory units on all the planets in (this)
universe are not enough to satisfy the storage needed
for just one pi.
* Please read The Hitchhiker’s Guide to the Galaxy for more info on this interesting subject :o)
Of course, the last decimals of pi are …42. Dolphins know that, but they left long ago, in the future. *
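For what it’s worth, the “over 2000 bits” figure in the quoted text can be checked from the exponents alone. A sketch, assuming the newer FloatingPoint property names (greatestFiniteMagnitude, leastNormalMagnitude, exponent):

// DBL_MAX is just under 2^1024 and DBL_MIN is 2^-1022, so DBL_MAX / DBL_MIN
// is just under 2^2046: an integral part of about 2046 bits (~616 decimal digits).
let maxExp = Double.greatestFiniteMagnitude.exponent   //  1023
let minExp = Double.leastNormalMagnitude.exponent      // -1022
print(maxExp - minExp)                                  // 2045 (the exponent span)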
>
> Even if a bignum type were returned, or if we ignore the loss of the division algorithm axiom, % would still be deeply flawed.
% is not flawed; it’s just the real-life precision limitation of the floating-point type. Live with it.
> Whereas people are generally used to modest rounding errors in floating-point arithmetic, because % is not continuous small errors are frequently enormously magnified with catastrophic results:
>
> (swift) 10.0 % 0.1
> // r0 : Double = 0.0999999999999995 // What?!
As I have tried to explain in more detail before:
this is perfectly normal, acceptable and expected, because,
whether you like it or not, that is exactly the nature of
floating-point numbers stored in a computer
(well, at least in this age): they’re not precise,
but in context precise enough for most purposes in scientific
and engineering applications.
If you want to work with exact values, e.g. for representing money,
then use an appropriate numerical type,
or roll your own: make a struct for it (a small sketch follows below).
Swift is poor in this respect, because it only offers Integer and Floating-point numerical types,
not Fixed Decimal (like e.g. in PL/I and C#);
also, its Ranges and iterations work with Integers only,
and go in one direction only.
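To show what I mean by “roll your own”, here is a minimal sketch of a fixed-point money struct. The type, its fields and the operator are purely illustrative (and written in the newer operator syntax), not an existing API:

// A tiny fixed-point "money" type that stores whole cents in an Int,
// so addition and comparison stay exact (no binary-fraction rounding).
// Negative amounts are ignored here for brevity.
struct Money {
    var cents: Int

    init(dollars: Int, cents: Int) {
        self.cents = dollars * 100 + cents
    }

    static func + (lhs: Money, rhs: Money) -> Money {
        return Money(dollars: 0, cents: lhs.cents + rhs.cents)
    }

    var description: String {
        let frac = cents % 100
        return "\(cents / 100)." + (frac < 10 ? "0" : "") + "\(frac)"
    }
}

let price = Money(dollars: 19, cents: 99)
let tip   = Money(dollars: 3,  cents: 1)
print((price + tip).description)   // "23.00", exactly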
>
> [Explanation: 0.1 cannot be exactly represented in binary floating point; the actual value of “0.1” is 0.1000000000000000055511151231257827021181583404541015625. Other than that rounding, the entire computation is exact.]
No, it’s not, or at least you can’t count on that, especially when intermediate expressions (with floats) are involved.
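If you want to see for yourself which value the literal 0.1 actually stores, one way (assuming Foundation’s printf-style formatting is available) is:

import Foundation

// Expand the binary64 value that the literal 0.1 rounds to; the exact value
// has 55 decimal digits, so the last few places here are just zero padding.
print(String(format: "%.60f", 0.1))
// 0.100000000000000005551115123125782702118158340454101562500000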
** (To all, not meant cynically here, and with all due respect:
if you don’t value floats and understand their purpose and place,
please go program for half a year or so in an industrial/engineering workshop.
You’ll notice that the precision issues you are writing about are mostly irrelevant.
E.g. you could theoretically calculate the length of a steel bar for a bridge (the hardware one :o)
exactly to 0.000000000001 or so, but the bar in question would only coincidentally have
this exact value; for instance, thermal expansion will be much larger:
http://www.engineeringtoolbox.com/thermal-expansion-metals-d_859.html
It’s all a matter of magnitude in the context/domain of what one is calculating.)
>
> Proposed Approach:
> Remove the “%” operator for floating-point types. The operation is still available via the C standard library fmod( ) function (which should be mapped to a Swiftier name, but that’s a separate proposal).
I completely disagree here. It is very useful in many calculations!
E.g. if I want to space (geometric) objects in 3D space in SceneKit,
I don’t care and would really not notice if a block were dislocated by 0.000000000000000001 or so
(it would also not be possible, because this small value is
far below the resolution in use, although, in theory
(no one has checked this :o) 3D space is infinite).
Btw. SceneKit is so fast because it uses floating-point values.
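To make the use case concrete, here is roughly the kind of wrap-around spacing I mean. A sketch only: the names and values are made up, and fmod comes from the C library via Foundation:

import Foundation   // for fmod

// Wrap an object's x-coordinate onto a repeating grid of 2.5 scene units.
// The tiny rounding error in the result is far below anything visible in a scene.
let spacing  = 2.5
let rawX     = 17.3
let wrappedX = fmod(rawX, spacing)
print(wrappedX)   // ~2.3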
Very likely I am missing things / come from a different time / world, etc.,
but to me, a floating-point value is nothing more than a storage location
holding a number of bytes in an IEEE standard format,
with all its related arithmetic perfectly well implemented in Swift;
no need to touch it or define additional functions, infinity, etc.
So, better imho to leave it as it is. It’s OK.
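For what it’s worth, that “storage location holding a number of bytes in an IEEE standard format” is easy to look at directly; a sketch using the bitPattern property from newer Swift versions (an assumption, not something in the Swift of this thread):

// The raw IEEE 754 binary64 bits behind a Double.
let v = 10.0
print(String(v.bitPattern, radix: 16))   // "4024000000000000"
// sign 0, biased exponent 0x402 (i.e. 2^3), significand 1.25  ->  1.25 * 8 = 10.0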
=============
Off topic, but perhaps readable just the same:
Just a thought / fear:
watching trends here, I am also (hopefully unnecessarily)
a bit afraid that in the long run Swift might become more
complex and thus as tedious to work with as Objective-C.
Losing its simplicity?
Note that the language should serve (nearly) everyone.
I am now talking like an old guy; well, I am 65 earth years :o) ->
Most of you are very talented, that’s great! And I am learning
things from U 2. But it appears to me that the trend is to introduce
a lot of “very exotic” features (albeit with great things too that make
a programmer’s life easier), but for some of them: are they really necessary? Useful?
How fragile is the line between “academic” and “academic distortion”?
How far does it go?
E.g. for Strings, it is all logically very correct, Unicode and all,
but I had to write my own extension for this:
s2 = s1[1..5] // was an unimplemented feature. Silly.
One could have saved the trouble by pushing the Unicode values
(with variable length, of course) down as atomic values,
instead of the opposite, laying a character view over it...
using overly complicated things like
startIndex, strides and so on?
Just one example (a sketch of such an extension follows below).
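For illustration, here is roughly what such a convenience extension might look like, written against the newer String indexing API; the subscript itself is my own, not a standard-library feature:

extension String {
    // Integer-offset slicing, e.g. s[1..<5].  O(n), because String.Index
    // has to walk the (variable-length) Characters from the start.
    subscript(bounds: Range<Int>) -> String {
        let start = index(startIndex, offsetBy: bounds.lowerBound)
        let end   = index(startIndex, offsetBy: bounds.upperBound)
        return String(self[start..<end])
    }
}

let s1 = "Hello, Swift"
let s2 = s1[1..<5]   // "ello"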
Maybe it’s just a cultural difference,
but I like you all the same.
Kind regards
Ted