[swift-evolution] floating point numbers implicit conversion

Ted F.A. van Gaalen tedvgiosdev at gmail.com
Mon Jun 19 10:46:40 CDT 2017

> On 18. Jun 2017, at 08:04, Xiaodi Wu <xiaodi.wu at gmail.com> wrote:
> On Sun, Jun 18, 2017 at 00:02 Brent Royal-Gordon <brent at architechies.com <mailto:brent at architechies.com>> wrote:
>> On Jun 17, 2017, at 8:43 PM, Xiaodi Wu via swift-evolution <swift-evolution at swift.org <mailto:swift-evolution at swift.org>> wrote:
>> How do you express the idea that, when you add values of disparate types T and U, the result should be of the type with greater precision? You need to be able to spell this somehow.
> To be slightly less coy:
> :)

What if one allowed all numeric conversions,
also to types with smaller storage, and from signed to unsigned
(with clear compiler warnings, of course),
but trapped (throwable) conversion errors at runtime?
Runtime errors would then be things like:
- assigning a negative value to an unsigned integer
- assigning a too-large integer to a smaller integer type, e.g. UInt8 = 43234
E.g. doing this:

let n1 = 255    // inferred Int
var u1: UInt8
u1 = n1         // this would be OK because 0 <= n1 <= 255

(Currently this is not possible in Swift: "cannot assign value of type ‘Int’ to type ‘UInt8’".)

Personally, I’d prefer that this be allowed, causing a runtime error when the conversion is not possible.
The compile-time warning could be: “Warning: overflow might occur when assigning a value of type ‘Int’ to type ‘UInt8’”.
The same would apply to floating-point types when the magnitude is out of range.
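For what it’s worth, today’s standard library already offers explicit spellings for each of these behaviours — trapping, failable, and masking conversion — which is roughly the runtime-trap semantics described above, just opt-in rather than implicit. A minimal sketch:

```swift
let n1 = 255          // inferred as Int

// Trapping conversion: runtime error if the value doesn't fit.
let a = UInt8(n1)     // OK, because 0 <= 255 <= UInt8.max

// Failable conversion: returns nil instead of trapping.
let b = UInt8(exactly: 300)   // nil: 300 doesn't fit in UInt8
let c = UInt8(exactly: -1)    // nil: negative value

// Masking conversion: keeps the low bits, never traps.
let d = UInt8(truncatingIfNeeded: 300)   // 44 (300 mod 256)

print(a, b as Any, c as Any, d)
```

The difference from the proposal above is only that the conversion must be written out; the trap-on-overflow behaviour itself already exists.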

Then this would not be necessary:
> You need to be able to say that one value type is a subtype of another. Then you can say Int8 is a subtype of Int16, and the compiler knows that it can convert any Int8 to an Int16 but not vice versa. This adds lots of complexity and makes parts of the compiler that are currently far too slow even slower, but it's not difficult to imagine, just difficult to practically implement given the current state of the Swift compiler.
> And then there are follow-on issues. As but a single example, consider integer literals, which are of course commonly used with these operations. Take the following statements:
> var x = 42 as UInt32
> let y = x + 2
> What is the inferred type of 2? Currently, that would be UInt32. What is the inferred type of y? That would also be UInt32. So, it's tempting to say, let's keep this rule that integer literals are inferred to be of the same type. But now:
> let z = x + (-2)
> What is the inferred type of -2? If it's UInt32, then this expression is a compile-time error and we've ruled out integer promotion for a common use case. If OTOH it's the default IntegerLiteralType (Int), what is the type of z? It would have to be Int.

One of the things which, imho, I would change in Swift:
I’d prefer this rule:

*** within the scope of an expression, all individual operands (variables and literals)
    should be implicitly promoted to the lowest precision
    with which the complete expression can be evaluated ***

The smallest-type rule as it stands now should be replaced by the rule above,
or in other words:
use the smallest type possible within the scope of the complete expression.

Your above example would then no longer result in a compilation error (currently it does).

In your example, with this rule, x would be implicitly promoted to Int,
and the result *z* would be an inferred Int.
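To make the comparison concrete, here is how that example must be written in current Swift, where the promotion to Int is explicit rather than implicit (a sketch; under the proposed rule the Int(...) conversion would be inserted by the compiler):

```swift
let x = 42 as UInt32
let y = x + 2          // 2 is inferred as UInt32; y is UInt32 (44)

// A negative literal forces an explicit conversion today:
let z = Int(x) + (-2)  // x promoted to Int by hand; z is Int (40)
print(y, z)            // 44 40
```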
Another example of what could then be an allowable expression:

var result: Float = 0.0
result = float * integer * uint8 + double
// Here, all operands would be implicitly promoted to Double before the
// complete expression is evaluated.
// The evaluation results in a Double, which is then converted to a Float
// during the assignment to "result".
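Under current Swift, that expression must instead be spelled with explicit conversions at every step. A minimal sketch of today’s required form (the operand names and values are hypothetical, chosen only to match the types in the example):

```swift
// Hypothetical operands matching the types used above:
let float: Float = 1.5
let integer: Int = 4
let uint8: UInt8 = 2
let double: Double = 0.25

var result: Float = 0.0
// Today, each operand must be widened by hand; under the proposed rule
// the compiler would promote everything to Double (the widest type
// present) automatically.
result = Float(Double(float) * Double(integer) * Double(uint8) + double)
print(result)  // 12.25
```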

To summarise: allow implicit conversions, but trap impossible conversions at runtime,
unless the compiler can already determine that they cannot work, e.g. in some cases with literals:

let n1 = 234543
var u1: UInt8 = n1   // compile-time overflow error


> Now suppose x were instead of type UInt64. What would be the type of z, if -2 is inferred to be of type Int? The answer would have to be DoubleWidth<Int64>. That is clearly overkill for subtracting 2. So let's say instead that the literal is inferred to be of the smallest type that can represent the value (i.e. Int8). If so, then what is the result of this computation?
> let a = x / ~0
> Currently, in Swift 3, ~0 is equal to UInt32.max. But if we have a rule that the literal should be inferred to be the smallest type that can represent the value, then the result of this computation _changes_. That won't do. So let's say instead that the literal is inferred to be of the same type as the other operand, unless it is not representable as such, in which case it is then of the smallest type that can represent the value. Firstly, and critically, this is not very easy to reason about. Secondly, it still does not solve another problem with the smallest-type rule. Consider this example:
> let b = x / -64
> If I import a library that exposes Int7 (the standard library itself has an internal Int63 type and, I think, other Int{2**n - 1} types as well), then the type of b would change!
> Of all the alternatives here, it would seem that disallowing integer promotion with literals is after all the most straightforward answer. However, it is not a satisfying one.
As written, I think the other (better?) option is catching conversion errors at runtime;
this would require programmers to have some common sense and awareness of what they’re doing :o)
> My point here, at this point, is not to drive at a consensus answer for this particular issue, but to point out that what we are discussing is essentially a major change to the type system. As such, and because Swift already has so many rich features, _numerous_ such questions will arise about how any such change interacts with other parts of the type system. For these questions, sometimes there is not an "obvious" solution, and there is no guarantee that there will even be a single fully satisfying solution even after full consideration. That makes this a _very_ difficult topic, very difficult indeed.
Yes, I agree, it is. This is one of the difficulties of a statically typed language (though a statically typed, pre-compiled language is, whether I like it or not, currently still the best option for fast apps). In a statically typed (OOP) language you need generics and protocols, but if one goes too far with these, one might paint oneself into a corner, as increasingly many features become interdependent. It looks like this could become problematic in further improving Swift, and also in deciding what are possible improvements and what are not.
> To evaluate whether any such undertaking is a good idea, we would need to discuss a fully thought-out design and consider very carefully how it changes all the other moving parts of the type system; it is not enough to say merely that the feature is a good one.
