[swift-evolution] Allow FloatLiteralType in FloatLiteralConvertible to be aliased to String

Morten Bek Ditlevsen bek at termestrup.dk
Fri May 6 04:24:23 CDT 2016

Currently, in order to conform to FloatLiteralConvertible you need to implement
an initializer accepting a floatLiteral of the typealias FloatLiteralType.
However, this typealias can only be Double, Float, Float80, or another built-in
floating-point type (to be honest, I do not know the exact limitation, since I
have not been able to find this in the documentation).

These floating point types have precision limitations that are not necessarily
present in the type that you are making FloatLiteralConvertible.

Let’s imagine a CurrencyAmount type that uses an NSDecimalNumber as the
representation of the value:

public struct CurrencyAmount {
  public let value: NSDecimalNumber
  // .. other important currency-related stuff ..
}

extension CurrencyAmount: FloatLiteralConvertible {
  public typealias FloatLiteralType = Double
  public init(floatLiteral amount: FloatLiteralType) {
    print(amount)
    value = NSDecimalNumber(double: amount)
  }
}

let a: CurrencyAmount = 99.99

The printed value inside the initializer is 99.989999999999995, so the value
has already lost precision in the intermediary Double representation.

I know that there is also an issue with the NSDecimalNumber double initializer,
but this is not the issue that we are seeing here.
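For reference, the loss can be observed without NSDecimalNumber at all; here is a small sketch (Foundation is imported only for String(format:)) that prints 99.99 with extra digits:

```swift
import Foundation

// The decimal 99.99 has no exact binary floating-point representation,
// so the nearest Double is stored instead; printing 15 fractional
// digits exposes the stored approximation.
let d: Double = 99.99
print(String(format: "%.15f", d))  // 99.989999999999995
```

So the precision is gone the moment the literal becomes a Double, before any initializer runs.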

One suggestion for a solution to this issue would be to allow the
FloatLiteralType to be aliased to String. In this case the compiler should
parse the float literal token 99.99 to a String and use that as input for the
FloatLiteralConvertible initializer.

This would allow arbitrary literal precision for FloatLiteralConvertible
types that implement their own parsing of the String value.

For instance, if CurrencyAmount used a FloatLiteralType aliased to String, we
would have:

extension CurrencyAmount: FloatLiteralConvertible {
  public typealias FloatLiteralType = String
  public init(floatLiteral amount: FloatLiteralType) {
    value = NSDecimalNumber(string: amount)
  }
}

and the precision would be the same as creating an NSDecimalNumber from a
String:

let a: CurrencyAmount = 1.00000000000000000000000000000000001

would give: 1.00000000000000000000000000000000001
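That matches what NSDecimalNumber's own string parsing preserves; a quick check using only Foundation:

```swift
import Foundation

// Sketch: NSDecimalNumber carries up to 38 significant decimal digits,
// so parsing the full literal text preserves every digit shown above.
let parsed = NSDecimalNumber(string: "1.00000000000000000000000000000000001")
print(parsed)  // prints all fractional digits back, nothing rounded away
```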

How does that sound? Is it completely irrational to allow the use of Strings as
the intermediary representation of float literals?
I think that it makes good sense, since it allows for arbitrary precision.
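For comparison, quoting the literal already achieves full precision today via string-literal conformance. A sketch of that workaround (the type name is mine; shown with the protocol spelling used by later Swift, ExpressibleByStringLiteral — in Swift 2.2 the protocol is StringLiteralConvertible and requires two additional convenience initializers):

```swift
import Foundation

// Hypothetical type: a currency amount built from a string literal.
public struct QuotedCurrencyAmount {
  public let value: NSDecimalNumber
}

extension QuotedCurrencyAmount: ExpressibleByStringLiteral {
  public init(stringLiteral amount: String) {
    // The compiler hands over the exact source text, so no binary
    // floating-point intermediary is involved.
    value = NSDecimalNumber(string: amount)
  }
}

let q: QuotedCurrencyAmount = "99.99"
print(q.value)  // 99.99
```

The downside is exactly the quotes: 99.99 reads more naturally than "99.99" for a numeric value, which is what the proposed String-backed FloatLiteralType would give us.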

Please let me know what you think.
