[swift-evolution] Allow FloatLiteralType in FloatLiteralConvertible to be aliased to String

Joe Groff jgroff at apple.com
Fri May 6 11:46:31 CDT 2016


> On May 6, 2016, at 9:42 AM, Stephen Canon <scanon at apple.com> wrote:
> 
> 
>> On May 6, 2016, at 12:41 PM, Joe Groff via swift-evolution <swift-evolution at swift.org> wrote:
>> 
>>> 
>>> On May 6, 2016, at 2:24 AM, Morten Bek Ditlevsen via swift-evolution <swift-evolution at swift.org> wrote:
>>> 
>>> Currently, in order to conform to FloatLiteralConvertible you need to implement
>>> an initializer accepting a floatLiteral of the typealias FloatLiteralType.
>>> However, this typealias can only be Double, Float, Float80, or another built-in
>>> floating point type (to be honest, I do not know the exact limitation, since I
>>> have not been able to find this in the documentation).
>>> 
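>>> For reference, the standard library's definition is roughly the following; the
>>> limitation appears to be the underscored constraint protocol, which only the
>>> built-in floating point types conform to:
>>> 
>>> protocol FloatLiteralConvertible {
>>>   // Only Float, Double, and Float80 conform to this underscored protocol,
>>>   // which is why only those types can be used as FloatLiteralType.
>>>   typealias FloatLiteralType: _BuiltinFloatLiteralConvertible
>>> 
>>>   init(floatLiteral value: FloatLiteralType)
>>> }
>>> 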
>>> These floating point types have precision limitations that are not necessarily
>>> present in the type that you are making FloatLiteralConvertible.
>>> 
>>> Let’s imagine a CurrencyAmount type that uses an NSDecimalNumber as the
>>> representation of the value:
>>> 
>>> 
>>> public struct CurrencyAmount {
>>>   public let value: NSDecimalNumber
>>>   // .. other important currency-related stuff ..
>>> }
>>> 
>>> extension CurrencyAmount: FloatLiteralConvertible {
>>>   public typealias FloatLiteralType = Double
>>> 
>>>   public init(floatLiteral amount: FloatLiteralType) {
>>>     print(amount.debugDescription)
>>>     // Delegate to the memberwise initializer; an initializer in an
>>>     // extension cannot assign the `let` property directly.
>>>     self.init(value: NSDecimalNumber(double: amount))
>>>   }
>>> }
>>> 
>>> let a: CurrencyAmount = 99.99
>>> 
>>> 
>>> The printed value inside the initializer is 99.989999999999995 - so the value
>>> has already lost precision in the intermediate Double representation.
>>> 
>>> I know that there is also an issue with the NSDecimalNumber double initializer,
>>> but this is not the issue that we are seeing here.
>>> 
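>>> As a minimal check (illustrative only, not part of the proposal), the rounding
>>> can be observed without involving NSDecimalNumber at all, because the nearest
>>> Double to 99.99 is already inexact:
>>> 
>>> import Foundation
>>> 
>>> let d: Double = 99.99
>>> // Prints "99.989999999999995": the literal is rounded to the nearest
>>> // representable Double before any initializer ever runs.
>>> print(String(format: "%.17g", d))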
>>> 
>>> One suggestion for a solution to this issue would be to allow the
>>> FloatLiteralType to be aliased to String. In that case the compiler would
>>> parse the float literal token 99.99 into a String and pass that String to
>>> the FloatLiteralConvertible initializer.
>>> 
>>> This would allow arbitrary literal precision for FloatLiteralConvertible
>>> types that implement their own parsing of the String value.
>>> 
>>> For instance, if the CurrencyAmount used a FloatLiteralType aliased to String we
>>> would have:
>>> 
>>> extension CurrencyAmount: FloatLiteralConvertible {
>>>   public typealias FloatLiteralType = String
>>> 
>>>   public init(floatLiteral amount: FloatLiteralType) {
>>>     self.init(value: NSDecimalNumber(string: amount))
>>>   }
>>> }
>>> 
>>> and the precision would be the same as creating an NSDecimalNumber from a
>>> String: 
>>> 
>>> let a: CurrencyAmount = 1.00000000000000000000000000000000001
>>> 
>>> print(a.value.debugDescription)
>>> 
>>> Would give: 1.00000000000000000000000000000000001
>>> 
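>>> This relies on behavior NSDecimalNumber already provides: its string
>>> initializer keeps up to 38 significant decimal digits, so a literal like the
>>> one above (36 significant digits) survives intact. A quick sketch:
>>> 
>>> let n = NSDecimalNumber(string: "1.00000000000000000000000000000000001")
>>> print(n) // 1.00000000000000000000000000000000001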
>>> 
>>> How does that sound? Is it completely irrational to allow the use of Strings
>>> as the intermediate representation of float literals?
>>> I think that it makes good sense, since it allows for arbitrary precision.
>>> 
>>> Please let me know what you think.
>> 
>> Like Dmitri said, a String is not a particularly efficient intermediate representation. For common machine numeric types, we want it to be straightforward for the compiler to constant-fold literals down to constants in the resulting binary. For floating-point literals, I think we could achieve this by changing the protocol to "deconstruct" the literal value into integer significand and exponent, something like this:
>> 
>> // A type that can be initialized from a decimal literal such as
>> // `1.1` or `2.3e5`.
>> protocol DecimalLiteralConvertible {
>>   // The integer type used to represent the significand and exponent of the value.
>>   typealias Component: IntegerLiteralConvertible
>> 
>>   // Construct a value equal to `decimalSignificand * 10**decimalExponent`.
>>   init(decimalSignificand: Component, decimalExponent: Component)
>> }
>> 
>> // A type that can be initialized from a hexadecimal floating point
>> // literal, such as `0x1.8p-2`.
>> protocol HexFloatLiteralConvertible {
>>   // The integer type used to represent the significand and exponent of the value.
>>   typealias Component: IntegerLiteralConvertible
>> 
>>   // Construct a value equal to `hexadecimalSignificand * 2**binaryExponent`.
>>   init(hexadecimalSignificand: Component, binaryExponent: Component)
>> }
>> 
>> Literals would desugar to constructor calls as follows:
>> 
>> 1.0 // T(decimalSignificand: 1, decimalExponent: 0)
>> 0.123 // T(decimalSignificand: 123, decimalExponent: -3)
>> 1.23e-1 // same
>> 
>> 0x1.8p-2 // T(hexadecimalSignificand: 0x18, binaryExponent: -6)
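>> 
>> As a sketch, the CurrencyAmount example upthread could adopt this (assuming
>> the protocol above were accepted; the memberwise initializer is used here for
>> illustration) without any lossy Double step, since NSDecimalNumber can be
>> built directly from a significand and exponent:
>> 
>> extension CurrencyAmount: DecimalLiteralConvertible {
>>   typealias Component = Int
>> 
>>   init(decimalSignificand: Component, decimalExponent: Component) {
>>     // 99.99 would desugar to (significand: 9999, exponent: -2), both of
>>     // which NSDecimalNumber represents exactly. (Negative literals would
>>     // need the sign handled separately; see below.)
>>     self.init(value: NSDecimalNumber(mantissa: UInt64(decimalSignificand),
>>                                      exponent: Int16(decimalExponent),
>>                                      isNegative: false))
>>   }
>> }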
> 
> This seems like a very good approach to me.

It occurs to me that the sign probably needs to be an independent parameter, so that literal -0 and 0 can be captured accurately:

// A type that can be initialized from a decimal literal such as
// `1.1` or `-2.3e5`.
protocol DecimalLiteralConvertible {
  // The integer type used to represent the significand and exponent of the value.
  typealias Component: IntegerLiteralConvertible

  // Construct a value equal to `decimalSignificand * 10**decimalExponent * (isNegative ? -1 : 1)`.
  init(decimalSignificand: Component, decimalExponent: Component, isNegative: Bool)
}

// A type that can be initialized from a hexadecimal floating point
// literal, such as `0x1.8p-2`.
protocol HexFloatLiteralConvertible {
  // The integer type used to represent the significand and exponent of the value.
  typealias Component: IntegerLiteralConvertible

  // Construct a value equal to `hexadecimalSignificand * 2**binaryExponent * (isNegative ? -1 : 1)`.
  init(hexadecimalSignificand: Component, binaryExponent: Component, isNegative: Bool)
}
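
With the sign split out, the desugarings above would presumably become, for example:

0.0  // T(decimalSignificand: 0, decimalExponent: 0, isNegative: false)
-0.0 // T(decimalSignificand: 0, decimalExponent: 0, isNegative: true)
1.23e-1 // T(decimalSignificand: 123, decimalExponent: -3, isNegative: false)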

-Joe

