[swift-evolution] protocol-oriented integers (take 2)
xiaodi.wu at gmail.com
Sun Jan 15 16:42:02 CST 2017
On Sun, Jan 15, 2017 at 3:29 PM, Jacob Bandes-Storch via swift-evolution <
swift-evolution at swift.org> wrote:
> [ proposal link: https://gist.github.com/moiseev/62ffe3c91b66866fdebf6f3fcc7cad8c ]
> On Sat, Jan 14, 2017 at 4:55 PM, Dave Abrahams via swift-evolution <
> swift-evolution at swift.org> wrote:
>> Responding to both Jacob and Xiaodi here; thanks very much for your feedback.
>> on Sat Jan 14 2017, Xiaodi Wu <swift-evolution at swift.org> wrote:
>> > I think, in the end, it's the _name_ that could use improvement here. As
>> > the doc comments say, `Arithmetic` is supposed to be "suitable for
>> > arithmetic on scalars"--perhaps `ScalarArithmetic` might be more
>> > appropriate? It would make it clear that `CGVector` is not meant to be a
>> > conforming type.
>> We want Arithmetic to be able to handle complex numbers. Whether Scalar
>> would be appropriate in that case sort of depends on what the implied
>> field is, right?
> I think "scalar" is an appropriate term for any field. The scalar-ness
> usually comes into play when it's used in a vector space, but using the
> term alone doesn't bother me.
>> It's true that CGPoint and CGVector have no obvious sensible
>> interpretation of "42", and that's unfortunate. The problem with
>> protocols for algebraic structures is that there's an incredibly
>> complicated lattice (see figures 3.1, 3.2 in
>> ftp://jcmc.indiana.edu/pub/techreports/TR638.pdf) and we don't want to
>> shove all of those protocols into the standard library (especially not
>> prematurely) but each requirement you add to a more-coarsely aggregated
>> protocol like Arithmetic will make it ineligible for representing some
>> important type.
> Yep, it is quite complicated, and I understand not wanting to address all
> that right now; calling it ScalarArithmetic seems appropriate to clarify
> the limitations. FieldArithmetic might also be appropriate, but is less
> clear (+ see below about quaternions).
> Daves Sweeris and Abrahams wrote:
> > > I was under the impression that complex numbers are scalar numbers...
> > > although maybe not, since once you get past (I think) quaternions, you
> > > start losing division and eventually multiplication, IIRC. (I hate it
> > > when two of my recollections conflict with each other.)
> > Well, you can view them as 2d vectors, so I'm not sure. We need more of
> a numerics expert than I am to weigh in here.
> But complex numbers have multiplication and division operations defined
> (they form a field), unlike regular vectors in R². Meaning you can have a
> vector space over the field of complex numbers.
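To make that concrete, here's a standalone sketch (the type and operators are illustrative, not part of the proposal): complex numbers carry a full multiplication and division, which plain 2-d vectors lack.

```swift
// Illustrative only -- not the proposal's API. Complex numbers have a
// full multiplication and division (they form a field), unlike plain
// 2-d vectors, which have no canonical product.
struct Complex {
    var re, im: Double

    static func * (a: Complex, b: Complex) -> Complex {
        return Complex(re: a.re * b.re - a.im * b.im,
                       im: a.re * b.im + a.im * b.re)
    }

    static func / (a: Complex, b: Complex) -> Complex {
        let d = b.re * b.re + b.im * b.im   // |b|^2, nonzero when b != 0
        return Complex(re: (a.re * b.re + a.im * b.im) / d,
                       im: (a.im * b.re - a.re * b.im) / d)
    }
}

// (1 + 2i)(3 + 4i) = 3 + 10i + 8i^2 = -5 + 10i
let p = Complex(re: 1, im: 2) * Complex(re: 3, im: 4)
print(p.re, p.im)
```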
> You still have multiplication and division past quaternions, but the
> quaternions are *not commutative*, which means they are *not a field*,
> just a division ring <https://en.wikipedia.org/wiki/Division_ring> (a
> field is a commutative division ring). This isn't really a problem in
> Swift, since the compiler never allows you to write an expression where
> the order of arguments to an operator is ambiguous. (I believe you can't
> technically have a vector space over a non-commutative ring; the
> generalization would be a module
> <https://en.wikipedia.org/wiki/Module_%28mathematics%29>. That's probably
> an argument for the name ScalarArithmetic over FieldArithmetic.)
Hmm, the issue is that the integers are not a field. So, if we're going to
have it all modeled by one protocol, maybe neither is the best term.
> Octonions are furthermore *not associative*, which is more of a problem,
> since the standard library arithmetic operators are given an associativity.
> We can't give that up for regular numeric types, so someone working with
> octonions would just have to define their own non-associative operators
> (since there's no way for a protocol requirement to specify the operator's
> associativity).
>> That said, the ability to interpret integer literals as an arbitrary
>> Arithmetic isn't used anywhere in the standard library, so I'd like to
>> consider undoing that and moving ExpressibleByIntegerLiteral down the
>> protocol hierarchy.
>> >> *## BinaryInteger*
>> >> I'm a little confused by the presence of init(extendingOrTruncating:)
>> >> on *all* binary integers. Why does it make sense to be able to write
>> >> UInt16(extendingOrTruncating: (-21 as Int8))? In my mind, sign
>> >> extension only has meaning when the source and destination types are
>> >> both signed.
>> > Here (just IMHO), I disagree. Since any binary integer can be truncated
>> > and any can be right-shifted, it makes sense for
>> > `init(extendingOrTruncating:)` to be available for all of them. If I
>> > understand the proposal correctly, `Int(extendingOrTruncating: (-1 as
>> > Int8))` would give you a different result than
>> > `Int(extendingOrTruncating: (255 as UInt8))`.
>> Yes it would. But the real justification for this operation is generic
>> programming with integers. It isn't possible, today, to define:
>> init<T:BinaryInteger>(extending:T) where T.isNarrowerThan(Self)
>> init<T:BinaryInteger>(truncating:T) where T.isWiderThan(Self)
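For what it's worth, the signedness-dependence can be demonstrated with conversions that already exist, without the proposed initializer:

```swift
// The same 8-bit pattern 0xFF widens differently depending on the
// source type's signedness. Value-preserving widening is exactly
// sign extension for Int8 and zero extension for UInt8:
let pattern: UInt8 = 0xFF
let asSigned = Int8(bitPattern: pattern)   // -1

let x = Int(asSigned)   // -1  (sign-extended: 0xFF...FF)
let y = Int(pattern)    // 255 (zero-extended: 0x00...FF)
print(x, y)
```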
>> >> Although I would not be against a clamp() function in the standard
>> >> library, "init(clamping:)" sounds strange to me. What about calling it
>> >> "init(nearestTo:)"? One could also define a FloatingPoint version of
>> >> init(nearestTo:), i.e. lround(). For maximum annoyance, you could even
>> >> rename the existing init(_:) to init(truncating:) to make it clear that
>> >> truncation, not regular rounding, occurs when converting from
>> >> floating-point.
>> > I would disagree with that as well; the existing `init(_:)` truncates
>> > the fractional part but errors if the value is outside the representable
>> > range, while the word "truncating" makes it sound like something is done
>> > to make any possible source value give you a destination value (a la
>> > "extendingOrTruncating" for integers).
>> > Meanwhile, "nearest to" is problematic for me because both 127 and 129
>> > are "nearest to" 128, and neither integer is "nearest to" 500, yet
>> > `Int8(clamping: 128)` and `Int8(clamping: 500)` both give you 127. This
>> > operation is very different from lround() for floating point, which in
>> > Swift is `rounded(.toNearestOrAwayFromZero)` (or just `rounded()`).
> 127 and 129 are both nearest to 128, but only one of them is an Int8. So
> "Int8(nearestTo: 128)" would be 127.
Well, now we're splitting hairs, but I don't think this is among the more
plausible rationalizations of a hypothetical init(nearestTo:). Either you
look for the integer(s) nearest to (128 as Int) in the set of all Int
values. Then, the nearest integers are equally 127 and 129. You attempt to
convert both to Int8 and trap (unless you first _clamp_ your results to the
range [-128, 127] before attempting to convert). Or, you look for the Int8
value(s) nearest to 128 in the set of all possible Int8 values. However,
since the distance between (128 as Int) and any Int8 is undefined, (127 as
Int8) is strictly speaking no nearer to (128 as Int) than is (-128 as Int8)
or any other Int8 value.
By contrast, the word `clamping` tells you that something is first done to
(128 as Int)--namely, it is clamped to the range [-128, 127]--and then the
result is converted to an Int8; thus, this function never traps.
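Spelled out with the values from this discussion (assuming the proposed `init(clamping:)` semantics):

```swift
// Proposed init(clamping:) semantics: the source is first clamped to
// the destination's representable range, then converted -- never traps.
let a = Int8(clamping: 128)    // 127  (clamped to Int8.max)
let b = Int8(clamping: 500)    // 127
let c = Int8(clamping: -500)   // -128 (clamped to Int8.min)
let d = Int8(clamping: 100)    // 100  (already representable)
print(a, b, c, d)
```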
>> > In both these cases, I think it's important to use spellings that
>> > distinguish doing things to the fractional part of floating-point values
>> > from doing things to the binary representation of integer values. I
>> > think there's great value in using the term "clamp", which is very
>> > different from "nearest"; and in using an unlabeled `init(_:)` for
>> > initializing from FP values, which is most similar to the unlabeled
>> > `init(_:)` for initializing from integer values, as opposed to your
>> > suggested `init(truncating:)`, which connotes some similarity to
>> > `init(extendingOrTruncating:)` for integer values.
>> > *... masking shifts*
>> >> The example claims that "(30 as UInt8) &>> 11" produces 3, because it
>> >> right-shifts 30 by 3. But isn't the bitWidth of UInt8 always 8, since
>> >> it's a fixed-width type? Why should 11 get masked to 3 before the
>> >> shift?
>> Because those are the semantics of masking shift?
> Ah, I see why I was confused (the point is that 2³ = 8 = bitWidth). Does
> this mean that masking shifts only work when the bitWidth is a power of
> two? Otherwise, there'd be no way to reduce the shift range to exactly
> 0..<bitWidth using a mask.
>> You can think of masking shift as an optimization akin to using &+ to
>> add signed numbers when you know it can't overflow. It's an expert's
>> tool. If you want semantics that always make sense, use regular shift.
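For the example under discussion, the arithmetic works out as follows (assuming the proposal's masking-shift semantics; for a power-of-two bitWidth the mask is bitWidth - 1):

```swift
// Masking shift: the shift amount is masked to the type's bit width
// before shifting. UInt8 has bitWidth 8, so the mask is 0b111, and
// 11 & 0b111 == 3; shifting 30 (0b1_1110) right by 3 gives 3 (0b11).
let r = (30 as UInt8) &>> 11
print(r)   // 3
```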
>> >> (Also, it might be a good idea to choose examples with numbers whose
>> >> base-ten representations don't look like valid binary. 😉)
>> Good point.
>> >> What use cases are there for masking shifts? I was under the
>> >> impression that "smart shifts" were pretty much how the usual shift
>> >> instructions already behaved.
>> No, sadly not! The way the usual shift instructions behave is that if
>> you shift by a negative amount or you overshift, you get undefined
>> behavior, which gets expressed in various fun ways at runtime!
>> >> (Minor: were the smart shift operators supposed to be included as
>> >> BinaryInteger protocol requirements? I only see them in the "smart
>> >> shifts" extension.)
>> They don't need to be requirements, as they're defined entirely in terms
>> of other (required) operations. I'd be interested in any arguments you
>> might have for making them requirements, though.
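For illustration, here's one way smart shifts can be defined entirely in terms of other required operations, as a protocol extension (method names are mine, not the proposal's):

```swift
// Sketch: "smart" shifts built from required operations only.
// Negative amounts shift the other direction; overshifting is defined
// (zero, or sign fill for a negative right-shifted value).
extension FixedWidthInteger {
    func smartShiftedLeft(by amount: Int) -> Self {
        if amount < 0 { return smartShiftedRight(by: -amount) }
        if amount >= bitWidth { return 0 }   // overshift: all bits gone
        return self &<< amount
    }

    func smartShiftedRight(by amount: Int) -> Self {
        if amount < 0 { return smartShiftedLeft(by: -amount) }
        if amount >= bitWidth { return self < 0 ? ~0 : 0 }  // sign fill
        return self &>> amount
    }
}

print((1 as UInt8).smartShiftedLeft(by: 9),    // 0
      (1 as UInt8).smartShiftedRight(by: -1),  // 2
      (-8 as Int8).smartShiftedRight(by: 100)) // -1
```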
>> >> *... init<T: BinaryInteger>(_ source: T)*
>> >> Now a thought experiment: suppose you wanted to write an
>> >> arbitrary-precision BigInt, or any binary integer such as Int256. The
>> >> BinaryInteger protocol requires you to provide init<T:BinaryInteger>(_
>> >> source: T). Given the source of type T, how do you access its bits? Is
>> >> repeated use of word(at:) the recommended way?
>> >> If so, it might be nice to include a "wordCount" returning the number
>> >> of available words; otherwise I suppose the user has to use something
>> >> like bitWidth/(8*MemoryLayout<Int>.size), which is pretty ugly.
>> good catch; countRepresentedWords is in the prototype
>> and it should be in the proposal.
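For concreteness, a sketch of how a BigInt's generic init might walk the source's words using the proposal's `word(at:)` and the prototype's `countRepresentedWords` (this is a sketch against the proposed API, not compilable today):

```swift
// Sketch against the *proposed* API -- word(at:) plus the prototype's
// countRepresentedWords. Inside a hypothetical BigInt:
init<T: BinaryInteger>(_ source: T) {
    var words: [UInt] = []
    for i in 0..<source.countRepresentedWords {
        words.append(source.word(at: i))
    }
    // ... assemble sign and magnitude from the collected words ...
}
```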
>> >> *## FixedWidthInteger*
>> >> Why is popcount restricted to FixedWidthInteger? It seems like it could
>> >> theoretically apply to any UnsignedInteger.
>> > You can perfectly legitimately get a popcount for a signed integer. It's
>> > just looking at the binary representation and counting the ones. But
>> > with two's complement, it'd have to be restricted to FixedWidthInteger,
>> > not BinaryInteger, because the same negative value would have a
>> > different popcount depending on the type's bitWidth.
>> Right, or to put it differently, the popcount of a negative BigInt would
>> always be infinite.
> Makes sense, but infinity has no representation in Int, so what would the
> popcount be?
I think the point here is that not all BigInt values would have a
representable popcount, justifying the requirement being on
FixedWidthInteger as opposed to BinaryInteger.
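A small demonstration of the width-dependence, counting ones via the unsigned bit pattern so no proposed API is needed:

```swift
// -2 as Int8  has bit pattern 0b1111_1110            -> 7 one-bits
// -2 as Int16 has bit pattern 0b1111_1111_1111_1110  -> 15 one-bits
func ones(_ bits: UInt) -> Int {
    var b = bits, n = 0
    while b != 0 { b &= b - 1; n += 1 }   // clear the lowest set bit
    return n
}
print(ones(UInt(UInt8(bitPattern: -2 as Int8))),    // 7
      ones(UInt(UInt16(bitPattern: -2 as Int16))))  // 15
```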
>> >> *## Masking arithmetic*
>> >> Do &* and &+ and &- need their operands to have the same type, or could
>> >> these be heterogeneous too (at least &+ and &-)?
>> I don't know what semantics you're imagining for heterogeneous &+. What
>> type would it return? What mask would it use?
> I was thinking of these as overflow rather than masking operators; i.e.
> the lhs type would be allowed to overflow if the rhs value were too large.
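As proposed, the masking operators are homogeneous and simply wrap to the single operand type's width:

```swift
// Masking arithmetic: both operands share one type, and the result
// wraps to that type's width instead of trapping.
let wrapped = (255 as UInt8) &+ 1   // wraps to 0
let under   = (0 as UInt8) &- 1     // wraps to 255
print(wrapped, under)
```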