[swift-evolution] Thoughts on clarity of Double and Float type names?

Xiaodi Wu xiaodi.wu at gmail.com
Mon May 23 21:55:02 CDT 2016

On Mon, May 23, 2016 at 9:40 PM, David Sweeris <davesweeris at mac.com> wrote:

> On May 23, 2016, at 8:18 PM, Xiaodi Wu <xiaodi.wu at gmail.com> wrote:
> >
> > Int is the same size as Int64 on a 64-bit machine but the same size as
> Int32 on a 32-bit machine. By contrast, modern 32-bit architectures have
> FPUs that handle 64-bit and even 80-bit floating point types. Therefore, it
> does not make sense for Float to be Float32 on a 32-bit machine, as would
> be the case in one interpretation of what it means to mirror naming
> "conventions." However, if you interpret the convention to mean that Float
> should be the largest floating point type supported by the FPU, Float
> should actually be a typealias for Float80 even on some 32-bit machines. In
> neither interpretation does it mean that Float should simply be a typealias
> for what's now called Double.
> IIRC, `Int` is typealiased to the target's biggest
> native/efficient/practical integer type, regardless of its bit-depth (I
> can’t think of any CPUs in which those differ, although I believe some
> exist). I don’t see why it shouldn’t be the same way with floats…
> IMHO, `Float` should be typealiased to the biggest
> native/efficient/practical floating-point type, which I think is pretty
> universally Float64. I’m under the impression that Intel’s 80-bit format is
> intended as an interim representation which is automatically converted
> to/from 64-bit, and that loading & storing a full 80 bits is a non-trivial
> matter. I’m not even sure the standard "math.h" functions are defined
> for Float80 arguments. If Float80 is just as native/efficient/practical as
> Float64, I wouldn’t object to Float being typealiased to Float80 on such
> platforms.
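A minimal Swift sketch of the claim above (illustrative only; the original message contains no code): Int's width tracks the platform word size, while Double is fixed at 64 bits everywhere.

```swift
// Int is platform-width: 8 bytes on 64-bit targets, 4 on 32-bit targets.
print(MemoryLayout<Int>.size * 8)     // 64 on a 64-bit platform, 32 on a 32-bit one
// Double, by contrast, is always the IEEE 754 binary64 format.
print(MemoryLayout<Double>.size * 8)  // always 64
```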
> > Another issue to consider: a number like 42 is stored exactly regardless
> of whether you're using an Int32 or an Int64. However, a number like 1.1 is
> not stored exactly as a binary floating point type, and it's approximated
> *differently* as a Float than as a Double. Thus, it can be essential to
> consider what kind of floating point type you're using in scenarios even
> when the number is small, whereas the same is not true for integer types.
> Oh I know. I’m not arguing that floating point math isn’t messy, just that
> since we can use “Int” for when we don’t care and “IntXX” for when we do,
> we should also be able to use “Float” when we don’t care and “FloatXX” when
> we do. If someone’s worried about the exact value of “1.1”, they should be
> specifying the bit-depth anyway. Otherwise, give them the most precise type
> that works with the language’s goals.
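The point about 1.1 can be seen directly in Swift (a sketch I'm adding for illustration, not part of the original exchange): 1.1 has no exact binary representation, and the nearest Float and nearest Double to it are different values.

```swift
let f: Float = 1.1   // ≈ 1.10000002384185791015625
let d: Double = 1.1  // ≈ 1.1000000000000000888178419700...
// Widening the Float does not recover the Double approximation:
print(Double(f) == d)   // false
// But rounding the Double to Float lands on the same nearest Float:
print(f == Float(d))    // true
```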

I wouldn't be opposed to renaming Float and Double to Float32 and Float64,
but I would care if Float were typealiased to different types on different
platforms. That solution is a non-starter for me because something as
simple as (1.1 + 1.1) would evaluate to a different result depending on the
machine. That's a problem. An analogous issue does not come into play with
Int because 1 + 1 == 2 regardless of the size of Int. Swift traps when the
max value that can be stored in an Int is exceeded, so it is not possible
to obtain two different results on two different machines.
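A short Swift sketch of this contrast (added for illustration; it is not in the original message): the same literal sum comes out differently at different float widths, while a small integer sum agrees at every integer width.

```swift
// The two roundings of 1.1 compound differently under addition:
let sumF = Float(1.1) + Float(1.1)    // exactly 2.2000000476837158203125
let sumD = Double(1.1) + Double(1.1)
print(Double(sumF) == sumD)           // false: width changes the result
// Small integers are exact at any width, so the result never varies:
print(Int32(1) + Int32(1) == 2 && Int64(1) + Int64(1) == 2)  // true
```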

> Have we (meaning the list in general, not you & me in particular) had this
> conversation before? This feels familiar...

It does, doesn't it? I've been reading this list for too long.

> -Dave Sweeris