[swift-evolution] protocol-oriented integers (take 2)

Dave Abrahams dabrahams at apple.com
Sun Jan 15 19:14:42 CST 2017

on Sun Jan 15 2017, Jacob Bandes-Storch <jtbandes-AT-gmail.com> wrote:

> [ proposal link: https://gist.github.com/moiseev/62ffe3c91b66866fdebf6f3fcc7cad8c ]
> On Sat, Jan 14, 2017 at 4:55 PM, Dave Abrahams via swift-evolution <
> swift-evolution at swift.org> wrote:
>> Responding to both Jacob and Xiaodi here; thanks very much for your
>> feedback!
>> on Sat Jan 14 2017, Xiaodi Wu <swift-evolution at swift.org> wrote:
>> > I think, in the end, it's the _name_ that could use improvement here. As
>> > the doc comments say, `Arithmetic` is supposed to provide a "suitable
>> > basis for arithmetic on scalars"--perhaps `ScalarArithmetic` might be more
>> > appropriate? It would make it clear that `CGVector` is not meant to be a
>> > conforming type.
>> We want Arithmetic to be able to handle complex numbers.  Whether Scalar
>> would be appropriate in that case sort of depends on what the implied
>> field is, right?
> I think "scalar" is an appropriate term for any field. 

Yes, but IIUC complex numbers are not scalars in every field.

> The scalar-ness usually comes into play when it's used in a vector
> space, but using the term alone doesn't bother me.
>> It's true that CGPoint and CGVector have no obvious sensible
>> interpretation of "42", and that's unfortunate.  The problem with
>> protocols for algebraic structures is that there's an incredibly
>> complicated lattice (see figures 3.1, 3.2 in
>> ftp://jcmc.indiana.edu/pub/techreports/TR638.pdf) and we don't want to
>> shove all of those protocols into the standard library (especially not
>> prematurely) but each requirement you add to a more-coarsely aggregated
>> protocol like Arithmetic will make it ineligible for representing some
>> important type.
> Yep, it is quite complicated, and I understand not wanting to address all
> that right now; calling it ScalarArithmetic seems appropriate to clarify
> the limitations. 

I don't see why we should do that, as opposed to moving integer literal
convertibility down the refinement hierarchy and adding an additive
identity.  The latter seems like it makes a sensible API for average
programmers while still accommodating vector types.

If we can define the semantics properly (not sure if that's possible),
it could even cover SIMD types.
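To sketch what that refinement could look like (protocol and type names here are hypothetical, purely for illustration; shipped Swift later arrived at a very similar shape with AdditiveArithmetic in SE-0233):

```swift
// Hypothetical sketch: an Arithmetic-like protocol whose only "literal"
// requirement is an additive identity, so vector-ish types can conform
// without having to answer "what does 42 mean?".
protocol AdditiveValue {
    static var zero: Self { get }
    static func + (lhs: Self, rhs: Self) -> Self
    static func - (lhs: Self, rhs: Self) -> Self
}

// A CGVector-like type conforms naturally; no integer literals needed.
struct Vec2: AdditiveValue {
    var dx, dy: Double
    static var zero: Vec2 { return Vec2(dx: 0, dy: 0) }
    static func + (l: Vec2, r: Vec2) -> Vec2 {
        return Vec2(dx: l.dx + r.dx, dy: l.dy + r.dy)
    }
    static func - (l: Vec2, r: Vec2) -> Vec2 {
        return Vec2(dx: l.dx - r.dx, dy: l.dy - r.dy)
    }
}
```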

> FieldArithmetic might also be appropriate, but is less clear (+ see
> below about quaternions).
> Daves Sweeris and Abrahams wrote:
>> > I was under the impression that complex numbers are scalar numbers...
> although maybe not since once you get past, I think quaternions, you start
> losing division and eventually multiplication, IIRC. (I hate it when two of
> my recollections try to conflict with each other.)
>> Well, you can view them as 2d vectors, so I'm not sure.  We need more of
>> a numerics expert than I am to weigh in here.
> But complex numbers have multiplication and division operations defined
> (they form a field), unlike regular vectors in R². Meaning you can have a
> vector space over the field of complex numbers.

Yes, I know.  I'm just not clear enough on the meaning of “scalar” to
know whether it's appropriate to call complex numbers scalars
independent of knowing the field in which they are scalars.  Does
https://www2.clarku.edu/~djoyce/complex/mult.html describe a field,
where the complex numbers are treated as 2d vectors?  I haven't done the
math to check.

> You still have multiplication and division past quaternions, but the
> quaternions are *not commutative*. This isn't really a problem in Swift,
> since the compiler never allows you to write an expression where the order
> of arguments to an operator is ambiguous. This means they are *not a field*,
> just a division ring <https://en.wikipedia.org/wiki/Division_ring> (a field
> is a commutative division ring). (I believe you can't technically have a
> vector space over a non-commutative ring; the generalization would be a
> module <https://en.wikipedia.org/wiki/Module_%28mathematics%29>. That's
> probably an argument for the name ScalarArithmetic over FieldArithmetic.)
> Octonions are furthermore *not associative*, which is more of a problem
> since the standard library arithmetic operators are given an associativity.
> We can't give that up for regular numeric types, so someone working with
> octonions would just have to define their own non-associative operators
> (since there's no way for the protocol requirement to specify the operator
> associativity).

I think syntactic associativity of operators might be a totally
orthogonal thing to semantic associativity of binary functions, so I'm
not sure this is an issue.
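To illustrate the syntactic side: Swift already lets a library declare an operator with no associativity, so chained uses must be parenthesized explicitly. A sketch with a made-up octonion-ish type (the ⊗ operator, precedence group, and type are all hypothetical):

```swift
// A precedence group with no associativity: `a ⊗ b ⊗ c` is a
// compile-time error; the programmer must write (a ⊗ b) ⊗ c.
precedencegroup NonAssociativeMultiplication {
    associativity: none
    higherThan: AdditionPrecedence
}

infix operator ⊗ : NonAssociativeMultiplication

// Stand-in type; a real octonion would carry 8 components.
struct Octonionish {
    var value: Double
    static func ⊗ (l: Octonionish, r: Octonionish) -> Octonionish {
        return Octonionish(value: l.value * r.value)
    }
}
```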

>> That said, the ability to interpret integer literals as an arbitrary
>> Arithmetic isn't used anywhere in the standard library, so I'd like to
>> consider undoing
>> https://github.com/apple/swift/commit/de5b03ddc41be9c5ca5e15d5709eb2be069286c1
>> and moving ExpressibleByIntegerLiteral down the protocol hierarchy to
>> BinaryInteger.
> +1
>> >> *## BinaryInteger*
>> >>
>> >> I'm a little confused by the presence of init(extendingOrTruncating:)
>> >> for *all* binary integers. Why does it make sense to be able to write
>> >> UInt16(extendingOrTruncating: (-21 as Int8)) ? In my mind,
>> >> sign-extension only has meaning when the source and destination types
>> >> are both signed.
>> >>
>> >
>> > Here (just IMHO), I disagree. Since any binary integer can be truncated
>> > and any can be right-shifted, it makes sense for
>> > `init(extendingOrTruncating:)` to be available for all of them. If I
>> > understand the proposal correctly,
>> > `Int(extendingOrTruncating: (-1 as Int8))` would give you a different
>> > result than `Int(extendingOrTruncating: (255 as UInt8)`.
>> Yes it would.  But the real justification for this operation is generic
>> programming with integers.  It isn't possible, today, to define:
>>   init<T:BinaryInteger>(extending:T) where T.isNarrowerThan(Self)
>>   init<T:BinaryInteger>(truncating:T) where T.isWiderThan(Self)
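For concreteness, this initializer later shipped in the standard library as init(truncatingIfNeeded:); the signedness of the *source* type decides between sign- and zero-extension:

```swift
// Sign-extension vs. zero-extension is driven by the source type.
let a = Int16(truncatingIfNeeded: -1 as Int8)    // sign-extends: -1
let b = Int16(truncatingIfNeeded: 255 as UInt8)  // zero-extends: 255
// Wider-to-narrower just keeps the low bits.
let c = UInt8(truncatingIfNeeded: 0x1234 as Int) // truncates: 0x34
```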
>> >> Although I would not be against a clamp() function in the standard
>> >> library, "init(clamping:)" sounds strange to me. What about calling it
>> >> "init(nearestTo:)"?  One could also define a FloatingPoint version
>> >> of init(nearestTo:), i.e. lround(). For maximum annoyance and
>> >> explicitness, you could even rename the existing init(_:) to
>> >> init(truncating:) to make it clear that truncation, not regular
>> >> rounding, occurs when converting from floating-point.
>> >>
>> >
>> > I would disagree with that as well; the existing `init(_:)`
>> > truncates the fractional part but errors if that value is outside
>> > the representable range, while the word "truncating" makes it sound
>> > like something is done to make any possible source value give you a
>> > destination value (a la "extendingOrTruncating" for integers).
>> >
>> > Meanwhile, "nearest to" is problematic for me because either 127
>> > and 129 is "nearest to" 128, and neither integer is "nearest to"
>> > 500, yet `Int8(clamping: 128)` and `Int8(clamping: 500)` both give
>> > you 127. This operation is very different from lround() for
>> > floating point, which in Swift is
>> > `rounded(.toNearestOrAwayFromZero)` (or just `rounded()`).
> 127 and 129 are both nearest to 128, but neither of them is an Int8.
> "Int8(nearestTo: 128)" would be 127.

Makes sense to me.  I think "clamping:" is more term-of-art-y and
"nearestTo:" is more comprehensible to a wide audience.  I prefer the
latter unless the numericists out there are going to howl in protest.
Steve Canon, what's your take?
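For the record, the behavior under discussion, using the proposal's spelling (which is what ultimately shipped as init(clamping:)):

```swift
// init(clamping:) pins out-of-range values to the nearest representable bound.
let hi = Int8(clamping: 500)   // 127  (Int8.max)
let lo = Int8(clamping: -500)  // -128 (Int8.min)
let ok = Int8(clamping: 100)   // 100  (in range, unchanged)
```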

>> > In both these cases, I think it's important to use spellings that
>> > distinguish doing things to the fractional part of floating point values
>> > and doing things to the binary representation of integer values. I think
>> > there's great value in using the term "clamp", which is very different
>> > from "nearest"; and in using an unlabeled `init(_:)` for initializing
>> > from FP values, which is most similar to the unlabeled `init(_:)` for
>> > initializing from integer values, as opposed to your suggested
>> > `init(truncating:)` which connotes some similarity to
>> > `init(extendingOrTruncating:)` for integer values.
>> +1
>> > *... masking shifts*
>> >>
>> >> The example claims that "(30 as UInt8) &>> 11" produces 3, because it
>> >> right-shifts 30 by 3. But isn't the bitWidth of UInt8 always 8 since
>> >> it's a fixed-width type?
>> Yes.
>> >> Why should 11 get masked to 3 before the shift?
>> Because those are the semantics of masking shift?
> Ah, I see why I was confused (the point is that  2³ = 8 = bitWidth). Does
> this mean that masking shifts only work when the bitWidth is a power of
> two? 

Yes; we think that's a safe assumption for all fixed-width integers.  We
could always define the mask in terms of the smallest power of 2 > T.max
if we want to lift that restriction, but it seems likely to not be worth
it.

> Otherwise, there'd be no way to reduce the shift range to exactly
> 0..<bitWidth using a mask.
>> You can think of masking shift as an optimization akin to using &+ to
>> add signed numbers when you know it can't overflow.  It's an expert's
>> tool.  If you want semantics that always make sense, use regular shift.
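A quick demonstration of the masking semantics with the proposal's &>> (which shipped as-is for FixedWidthInteger):

```swift
// For UInt8, bitWidth == 8 == 2³, so &>> keeps only the low 3 bits
// of the shift amount: 11 & 0b111 == 3.
let x: UInt8 = 30
let masked = x &>> 11   // same as 30 >> 3, i.e. 3
```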
>> >> (Also, it might be a good idea to choose examples with numbers whose
>> >> base-ten representations don't look like valid binary. 😉)
>> Good point.
>> >> What use cases are there for masking shifts? I was under the
>> >> impression that "smart shifts" were pretty much how the usual shift
>> >> instructions already behaved.
>> No, sadly not!  The way the usual shift instructions behave is that if
>> you shift by a negative amount or you overshift, you get undefined
>> behavior, which gets expressed in various fun ways at runtime!
>> >> (Minor: were the smart shift operators supposed to be included as
>> >> BinaryInteger protocol requirements? I only see them in the
>> >> "heterogeneous shifts" extension.)
>> They don't need to be requirements, as they're defined entirely in terms
>> of other (required) operations.  I'd be interested in any arguments you
>> might have for making them requirements, though.
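For illustration, these are the smart-shift semantics that shipped as the plain << and >> operators: no traps, no undefined behavior, for any shift amount:

```swift
// "Smart" shifts are total functions of the shift amount.
let u: UInt8 = 30
let over = u >> 100             // overshift saturates to 0
let neg  = u >> -2              // negative amount shifts the other way: 120
let sign = (-1 as Int8) >> 100  // overshifting a negative value sign-fills: -1
```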
>> >> *... init<T: BinaryInteger>(_ source: T)*
>> >>
>> >> Now a thought experiment: suppose you wanted to write an
>> >> arbitrary-precision BigInt, or any binary integer such as Int256. The
>> >> BinaryInteger protocol requires you to provide init<T:BinaryInteger>(_
>> >> source: T). Given the source of type T, how do you access its bits? Is
>> >> repeated use of word(at:) the recommended way?
>> Yes.
>> >> If so, it might be nice to include a "wordCount" returning the number
>> >> of available words; otherwise I suppose the user has to use something
>> >> like bitWidth/(8*MemoryLayout<Int>.size), which is pretty ugly.
>> good catch; countRepresentedWords is in the prototype
>> (https://github.com/apple/swift/blob/new-integer-protocols/stdlib/public/core/Integers.swift.gyb#L1521),
>> and it should be in the proposal.
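For what it's worth, the prototype's word(at:)/countRepresentedWords pair surfaced in shipped Swift as a `words` collection on BinaryInteger, which turns the repeated-access pattern into a plain loop. A sketch:

```swift
// Reading any BinaryInteger word by word, least-significant word first.
// A BigInt initializer could consume `source.words` the same way.
func wordDump<T: BinaryInteger>(_ x: T) -> [UInt] {
    return Array(x.words)
}
```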
>> >> *## FixedWidthInteger*
>> >>
>> >> Why is popcount restricted to FixedWidthInteger? It seems like it could
>> >> theoretically apply to any UnsignedInteger.
>> >>
>> >
>> > You can perfectly legitimately get a popcount for a signed
>> > integer. It's just looking at the binary representation and
>> > counting the ones. But then with two's complement, it'd have to be
>> > restricted to FixedWidthInteger and not BinaryInteger, because the
>> > same negative value would have a different popcount depending on
>> > the type's bitwidth.
>> Right, or to put it differently, the popcount of a negative BigInt would
>> always be infinite.
> Makes sense, but infinity has no representation in Int, so what would the
> popcount be?

Exactly my point.  You can't express the result, unless maybe you want
to do it relative to `countRepresentedWords`.
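The fixed-width restriction is easy to see with the shipped spelling, nonzeroBitCount, on FixedWidthInteger: the same negative value yields a different count in every type.

```swift
// Two's-complement -1 is all ones, so its popcount is the bit width.
let p8  = (-1 as Int8).nonzeroBitCount   // 8
let p16 = (-1 as Int16).nonzeroBitCount  // 16
```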

>> >> *## Masking arithmetic*
>> >>
>> >> Do &* and &+ and &- need their operands to have the same type, or could
>> >> these be heterogeneous too (at least &+ and &-)?
>> I don't know what semantics you're imagining for heterogeneous &+.  What
>> type would it return?  What mask would it use?
> I was thinking of these as overflow rather than masking operators; i.e. the
> lhs type would be allowed to overflow if the rhs value were too large.

You didn't really answer the questions, AFAICT.  Anyway, I think the
answer is that unless someone can define for heterogeneous &+ et
al. simple and useful semantics that are implementable in the current
language, they should remain homogeneous.
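For the homogeneous case the semantics are unambiguous wrap-around, which is exactly what shipped:

```swift
// Masking (overflow-discarding) arithmetic on a single type.
let wrapAdd: UInt8 = 250 &+ 10   // 260 mod 256 == 4
let wrapSub: UInt8 = 5 &- 10     // wraps around to 251
let wrapMul: UInt8 = 16 &* 16    // 256 mod 256 == 0
```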

