[swift-evolution] protocol-oriented integers (take 2)
Dave Abrahams
dabrahams at apple.com
Sun Jan 15 08:52:55 CST 2017
on Sun Jan 15 2017, David Sweeris <davesweeris-AT-mac.com> wrote:
>> On Jan 14, 2017, at 18:55, Dave Abrahams via swift-evolution <swift-evolution at swift.org> wrote:
>>
>>
>> Responding to both Jacob and Xiaodi here; thanks very much for your
>> feedback!
>
>>
>>> on Sat Jan 14 2017, Xiaodi Wu <swift-evolution at swift.org> wrote:
>>>
>>> On Sat, Jan 14, 2017 at 1:32 AM, Jacob Bandes-Storch via swift-evolution <
>>> swift-evolution at swift.org> wrote:
>>>
>>>> Great work, all. I'm not sure I ever commented on SE-0104, so I've read
>>>> through this one more carefully. Here are some things that came to mind:
>>>>
>>>> ## Arithmetic
>>>>
>>>> Why are ExpressibleByIntegerLiteral and init?<T:BinaryInteger>(exactly:)
>>>> required? I could understand needing access to 0, but this could be
>>>> provided by a static property or nullary initializer. It doesn't seem like
>>>> all types supporting arithmetic operations would necessarily be convertible
>>>> from an arbitrary binary integer.
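For concreteness, a rough sketch of the two options being contrasted
here; ZeroProviding and sum are hypothetical names of mine, not part of
the proposal:

    // Hypothetical alternative: zero supplied by a static property.
    protocol ZeroProviding {
        static var zero: Self { get }
    }

    // Under the proposal as written, a generic algorithm instead leans
    // on the integer-literal requirement for its starting value:
    func sum<T: Arithmetic>(_ values: [T]) -> T {
        var total: T = 0    // uses ExpressibleByIntegerLiteral
        for v in values { total = total + v }
        return total
    }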
>>>>
>>>>
>>>> I tried to evaluate the Arithmetic protocol by considering what it means
>>>> for higher-dimensional types such as CGPoint and CGVector. My use case was
>>>> a linear interpolation function:
>>>>
>>>> func lerp<T: Arithmetic>(from a: T, to b: T, by ratio: T) -> T {
>>>>     return a + (b - a) * ratio
>>>> }
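For example, with T == Double this behaves as you'd expect:

    lerp(from: 0.0, to: 10.0, by: 0.25)    // 2.5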
>>>>
>>>> If I've read the proposal correctly, this definition works for integer and
>>>> floating-point values. But we can't make it work properly for CGVector (or,
>>>> perhaps less mathematically correct but more familiar, CGPoint). It's okay
>>>> to define +(CGVector, CGVector) and -(CGVector, CGVector), but *(CGVector,
>>>> CGVector) and /(CGVector, CGVector) don't make much sense. What we really
>>>> want is *(CGVector, CGFloat) and /(CGVector, CGFloat).
>>>>
>>>> After consulting a mathematician, I believe what the lerp function really
>>>> wants is for its generic param to be an affine space
>>>> <https://en.wikipedia.org/wiki/Affine_space>. I explored this a bit here:
>>>> https://gist.github.com/jtbandes/93eeb7d5eee8e1a7245387c660d53b03#file-affine-swift-L16-L18
>>>>
>>>> This approach is less restrictive and more composable than the proposed
>>>> Arithmetic protocol, which can be viewed as a special case with the
>>>> following definitions:
>>>>
>>>> extension Arithmetic: AffineSpace, VectorSpace { // not actually allowed in Swift
>>>>     typealias Displacement = Self
>>>>     typealias Scalar = Self
>>>> }
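To make that split concrete, here's a rough sketch of what such
protocols and a lerp over them might look like; the exact names and
requirements are my guesses, not the gist's definitions:

    protocol VectorSpace {
        associatedtype Scalar
        static func + (lhs: Self, rhs: Self) -> Self
        static func - (lhs: Self, rhs: Self) -> Self
        static func * (lhs: Self, rhs: Scalar) -> Self
    }

    protocol AffineSpace {
        associatedtype Displacement: VectorSpace
        static func + (lhs: Self, rhs: Displacement) -> Self
        static func - (lhs: Self, rhs: Self) -> Displacement
    }

    // lerp then needs only an affine space whose displacements can be
    // scaled by the ratio:
    func lerp<T: AffineSpace>(from a: T, to b: T,
                              by ratio: T.Displacement.Scalar) -> T {
        return a + (b - a) * ratio
    }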
>>>>
>>>> It'd be great to be able to define a lerp() which works for all
>>>> floating-point and integer numbers, as well as points and vectors (assuming
>>>> a glorious future where CGPoint and CGVector have built-in arithmetic
>>>> operations). But maybe the complexity of these extra protocols isn't worth
>>>> it for the stdlib...
>>>>
>>>
>>> I think, in the end, it's the _name_ that could use improvement here. As
>>> the doc comments say, `Arithmetic` is supposed to provide a "suitable basis
>>> for arithmetic on scalars"--perhaps `ScalarArithmetic` might be more
>>> appropriate? It would make it clear that `CGVector` is not meant to be a
>>> conforming type.
>>
>> We want Arithmetic to be able to handle complex numbers. Whether Scalar
>> would be appropriate in that case sort of depends on what the implied
>> field is, right?
>
> I was under the impression that complex numbers are scalar
> numbers...
Well, you can view them as 2d vectors, so I'm not sure. We need more of
a numerics expert than I am to weigh in here.
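To make the contrast concrete, here's a minimal Complex sketch of my
own (not something from the proposal): multiplication and an
integer-literal conversion both have sensible meanings for it ("42" is
just 42 + 0i), whereas for CGVector only the additive half does.

    struct Complex {
        var real, imaginary: Double

        static func + (l: Complex, r: Complex) -> Complex {
            return Complex(real: l.real + r.real,
                           imaginary: l.imaginary + r.imaginary)
        }
        static func * (l: Complex, r: Complex) -> Complex {
            return Complex(real: l.real * r.real - l.imaginary * r.imaginary,
                           imaginary: l.real * r.imaginary + l.imaginary * r.real)
        }
    }

    extension Complex: ExpressibleByIntegerLiteral {
        init(integerLiteral value: Int) {
            self.init(real: Double(value), imaginary: 0)
        }
    }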
> although maybe not, since once you get past (I think) quaternions, you
> start losing division and eventually multiplication, IIRC. (I hate it
> when two of my recollections try to conflict with each other.)
>
>> It's true that CGPoint and CGVector have no obvious sensible
>> interpretation of "42", and that's unfortunate. The problem with
>> protocols for algebraic structures is that there's an incredibly
>> complicated lattice (see figures 3.1, 3.2 in
>> ftp://jcmc.indiana.edu/pub/techreports/TR638.pdf) and we don't want to
>> shove all of those protocols into the standard library (especially not
>> prematurely) but each requirement you add to a more-coarsely aggregated
>> protocol like Arithmetic will make it ineligible for representing some
>> important type.
>
> My only concern with this is that later on, if we do decide the stdlib
> should have them, does it then become a breaking change to start
> moving all that stuff around? Or would everything be fine as long as
> the net requirements for the "Arithmetic" protocol stay the same
> regardless of how things get redistributed under the hood?
As far as I know, we don't have a strategy for injecting protocols into
a refinement hierarchy without destabilizing the ABI.
>>>> ## FixedWidthInteger
>>>>
>>>> Why is popcount restricted to FixedWidthInteger? It seems like it could
>>>> theoretically apply to any UnsignedInteger.
>>>>
>>>
>>> You can perfectly legitimately get a popcount for a signed integer. It's
>>> just looking at the binary representation and counting the ones. But then
>>> with two's complement, it'd have to be restricted to FixedWidthInteger and
>>> not BinaryInteger, because the same negative value would have a different
>>> popcount depending on the type's bitwidth.
>>
>> Right, or to put it differently, the popcount of a negative BigInt would
>> always be infinite.
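Concretely (writing popcount as proposed; the values below are
two's-complement bit patterns, so the answer depends on the type's
width):

    let a: Int8  = -1    // 0b1111_1111
    let b: Int16 = -1    // 0b1111_1111_1111_1111
    a.popcount           // 8
    b.popcount           // 16
    // A negative BigInt conceptually extends that run of 1s forever,
    // so there's no finite answer.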
>>
>>> I'd disagree strongly with removing popcount from signed binary
>>> integers. However, I suppose the same requirement could be applied to
>>> both FixedWidthInteger and UnsignedInteger.
>>
>> Given that there's little point in ever creating an unsigned BigInt
>> type, I think that wouldn't make much sense.
>
> Why is that? I'd think they'd be useful anywhere it'd be a logical
> error to have a negative value and UInt64 wasn't big enough.
You can always build that type around a signed BigInt implementation.
The point of fixed width unsigned integers is that you really need that
last bit for representation. With a BigInt, there is no last bit.
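Roughly like this; BigInt here stands in for whatever signed
arbitrary-precision type you already have, so this is only a sketch:

    struct UnsignedBigInt {
        private var value: BigInt   // your existing signed implementation

        init(_ value: BigInt) {
            precondition(value >= 0, "UnsignedBigInt can't hold a negative value")
            self.value = value
        }

        static func + (l: UnsignedBigInt, r: UnsignedBigInt) -> UnsignedBigInt {
            return UnsignedBigInt(l.value + r.value)
        }
        static func - (l: UnsignedBigInt, r: UnsignedBigInt) -> UnsignedBigInt {
            return UnsignedBigInt(l.value - r.value)   // traps if it would go negative
        }
    }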
Note also that representing “it's a logical error to have a negative
value” in the type system clashes a bit with the Swift philosophy that
you're basically supposed to use Int everywhere.
> Is that situation simply that rare?
I think it's pretty rare, but the real question is: in the situations
where this could come up, how important is it to be able to write a
generic algorithm over UnsignedInteger that uses no additional
protocols, works with your unsigned BigInt, and needs to call popcount?
At that point the use case starts to seem pretty darned narrow.
Remember, you can always define protocol UnsignedPopCount and make all
the unsigned types conform to it.
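Something along these lines, for example; the return type of popcount
here is my guess at the proposal's spelling:

    protocol UnsignedPopCount: UnsignedInteger {
        var popcount: Int { get }
    }

    // Assuming the proposal's popcount matches that signature, the
    // standard fixed-width unsigned types satisfy it already and the
    // conformances are empty:
    extension UInt8:  UnsignedPopCount {}
    extension UInt16: UnsignedPopCount {}
    extension UInt32: UnsignedPopCount {}
    extension UInt64: UnsignedPopCount {}
    extension UInt:   UnsignedPopCount {}

    // ...and an unsigned BigInt can conform with whatever implementation
    // makes sense for it.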
--
-Dave