[swift-evolution] Static Dispatch Pitfalls
xiaodi.wu at gmail.com
Mon May 23 15:09:22 CDT 2016
On Mon, May 23, 2016 at 12:13 PM, Matthew Johnson <matthew at anandabits.com> wrote:
> On May 23, 2016, at 10:50 AM, Xiaodi Wu <xiaodi.wu at gmail.com> wrote:
> On Mon, May 23, 2016 at 6:58 AM, Matthew Johnson <matthew at anandabits.com> wrote:
>> Sent from my iPad
>> On May 22, 2016, at 11:55 PM, Xiaodi Wu <xiaodi.wu at gmail.com> wrote:
>> On Sun, May 22, 2016 at 11:20 PM, Matthew Johnson <matthew at anandabits.com> wrote:
>>> On May 22, 2016, at 4:22 PM, Xiaodi Wu <xiaodi.wu at gmail.com> wrote:
>>> On Sun, May 22, 2016 at 3:38 PM, Brent Royal-Gordon via swift-evolution
>>> <swift-evolution at swift.org> wrote:
>>>> > The proposal is well thought out and makes a valiant attempt at
>>>> handling all of the issues necessary. But I don't support it for a number
>>>> of reasons. I think it highlights how awkward it would be to try to
>>>> address shadowing on a case-by-case basis, which isn't necessarily obvious
>>>> until you explore what a solution might look like.
>>>> It does, but I'm just not sure what else you can do about it. If
>>>> there's a warning, you need a way to silence it. If you ignore some cases
>>>> (like creating a conflict by importing two modules), you'll miss some of
>>>> the subtlest and hardest-to-fix bugs.
>>>> Honestly, I'm tempted to say "you just can't ever shadow a final
>>>> protocol method" and be done with it. If that prevents certain conformances
>>>> or stops certain imports, so be it. You can always work around that with
>>>> wrapper types or other techniques.
>>> You know, I think this might be the cleverest solution. It adds a small
>>> limit to the language, but it doesn't unduly penalize retroactive modeling.
>>> If you control either the protocol or the conforming type, you can change
>>> the name of one of the methods so it doesn't shadow/get shadowed by the other.
>>> If you control the conforming type, this isn’t too big an issue as long
>>> as the protocol was well designed. However, if the protocol was poorly
>>> designed it could be an issue. Maybe a method that can be more efficiently
>>> implemented by some types was not made a requirement, but an extension
>>> method (with a slower implementation) takes the obvious name. Maybe you
>>> would be willing to live with the slower implementation when your type is
>>> accessed via the protocol, because at least it can still be used via the
>>> protocol, but you don’t want to burden callers who use the concrete type
>>> with the slow implementation. What do you do then?
>> If a method that really ought to be a protocol requirement isn't a
>> requirement and you don't control the protocol, well you're pretty much out
>> of luck even today. Any conforming type accessed via the protocol will use
>> the less efficient extension method and nothing about Brent's proposal
>> would make that worse or better.
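Since extension methods that are not protocol requirements are statically dispatched, the caller's static type decides which implementation runs. A minimal sketch of the pitfall (all names here are hypothetical, not taken from the proposal):

```swift
protocol P {}

extension P {
    // Non-requirement extension method: statically dispatched,
    // so it cannot be "overridden" by conforming types.
    func describe() -> String { "extension implementation" }
}

struct Fast: P {
    // Same name: this shadows the extension method, but only
    // when the call site's static type is Fast.
    func describe() -> String { "concrete implementation" }
}

let concrete = Fast()
let abstract: P = Fast()
print(concrete.describe())  // concrete implementation
print(abstract.describe())  // extension implementation
```

The same value produces different results depending on whether it is viewed as `Fast` or as `P`, which is exactly why an uninformed caller going through the protocol silently gets the slower path.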
>> Shadowing of the slow extension method doesn't remove the burden. It may
>> make calling your fast implementation look nicer, but a less informed user
>> of your type would unwittingly call the slower implementation if they
>> access your type via the protocol. You could instead:
>> * come up with another name for your fast implementation; maybe the
>> "obvious" name for the method is "frobnicate"--then name your fast method
>> something else;
>> * or, decide you don't want to conform your type to a poorly designed
>> protocol after all, instead retroactively modeling your type and other
>> types of interest with a better designed protocol of your making.
>> Maybe you want the type to interoperate with code you don't control and
>> in order to do that it must conform to the protocol. And you don't want to
>> obfuscate the interface to the concrete type because the protocol is poorly designed.
> I'm not sure this is a reasonable set of demands. I understand a protocol
> to be a contract. If you decide to conform to a poorly designed protocol,
> you *should* have a poorly designed concrete type.
> This is crazy. The fact that a protocol happens to place a method in an
> extension rather than making it a default implementation of a requirement,
> and that I want to conform to it, does not mean my concrete type should be poorly designed.
Quite simply, I think it should.
> Sure, when the 3rd party type uses my concrete type via the interface of
> its protocol it will not receive the benefits of a higher performance
> implementation. But maybe interoperability is more important than the
> highest performance possible.
As you have said, the workaround is to provide two classes, one wrapping
the other. One provides interoperability, the other high performance.
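That wrapper workaround can be sketched roughly as follows (the protocol and type names below are invented for illustration): the concrete type keeps its fast method under the obvious name and stays out of the conformance, while a thin wrapper supplies the conformance for code that needs it.

```swift
// Hypothetical third-party protocol whose extension takes the obvious name.
protocol Searchable {
    var items: [Int] { get }
}

extension Searchable {
    // Slow linear scan; not a requirement, so it cannot be overridden.
    func find(_ x: Int) -> Bool { items.contains(x) }
}

// High-performance concrete type, kept free of the conformance.
struct SortedItems {
    let items: [Int]  // invariant: sorted ascending
    // Fast binary search under the obvious name.
    func find(_ x: Int) -> Bool {
        var lo = 0, hi = items.count
        while lo < hi {
            let mid = (lo + hi) / 2
            if items[mid] == x { return true }
            if items[mid] < x { lo = mid + 1 } else { hi = mid }
        }
        return false
    }
}

// Wrapper that provides the conformance for interoperability.
struct SearchableSortedItems: Searchable {
    let base: SortedItems
    var items: [Int] { base.items }
}
```

Callers who need interoperability pay the wrapping cost explicitly; callers who hold the concrete type never see the slow path.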
> That is the purpose of a protocol, to provide certain guarantees, be they
> wise or unwise. To me, it's a bug rather than a feature to support papering
> over poor protocol design when conforming to a protocol, which also forces
> the poor design to be exposed only when accessing your type through that
> protocol. In the end, you don't have a well-designed type; your type is
> instead simultaneously well and poorly designed!
> When it comes to interoperating with code you do not control you get what
> you get. You have to work with it one way or another. What I am saying is
> that the language should not artificially limit our options here.
I wouldn't call it an 'artificial limit,' at least any more than any other
aspect of language design. The language must determine what it means to
conform a type to a protocol. One option, which I think is entirely valid,
is to decide that extensions on a protocol are not to be shadowed by a conforming type.
>> Conforming to the protocol *is not* the primary reason your type exists
>> - conformance is used only for the purpose of using your type with a
>> specific piece of third party code.
>> You *could* wrap your type for the purpose of this conformance. This is
>> what Brent alluded to. But it requires boilerplate and a bunch of
>> conversion operations. This is not just annoying, it could also be complex
>> enough to lead to bugs.
>>> If you control the protocol but want to retroactively model types you do
>>> not control, this assumes you are willing to design your protocol around
>>> those types. What if one of those types happens to implement a method that
>>> should not be a requirement of the protocol for one reason or another, but
>>> will be implemented as an extension method? What do you do then?
>> I'm not sure I quite understand when this arises. Surely, by
>> construction, if you wish to retroactively model types, you are willing to
>> design your protocol around them. What else could it mean to retroactively
>> model existing types? Can you give a concrete example where during
>> retroactively modeling you simply have no choice but to name an extension
>> method using a name that is shadowed by a conforming type?
>> I'm not saying you have *no choice*. But again, conforming the one
>> problematic type is not the primary purpose for which you are designing the
>> protocol. You know the shadowing method will not present any problems for
>> your design. Why should you be forced to design your protocol around this?
> Because, again, a protocol is a contract. If you're retroactively modeling
> many well designed types and one poorly designed type, the lowest common
> denominator is a poorly designed protocol. That *should* be the result, no?
> It all depends on the types and the protocol. Protocols often represent a
> small subset of the total interface and functionality of a type.
> The fact that the type may happen to use a name for a method that matches
> a name you would like to use in an extension to your protocol does not
> necessarily mean either is poorly designed.
I agree. Both the type and the protocol might be beautifully designed. But
I would argue that conforming the one to the other might be a poor design.
>>> And of course there are cases where you do not control either. Some
>>> people write code with a lot of 3rd party dependencies these days (not my
>>> style, but pretty common). This is not a trivial concern.
>> You are saying that it would be possible for a protocol extension in one
>> dependency to conflict with a conforming type in another? This issue can be
>> avoided if enforcement of non-shadowing by the compiler is such that when
>> neither conforming type nor protocol extension is under your control
>> everything continues to work as-is.
>> So you would still allow the developer to declare the conformance without
>> error? This means that developers still need to understand the shadowing
>> behavior but it is pushed even further into a dark corner of the language
>> with more special case rules that must be learned to understand when you
>> might run into it.
> No, I would forbid declaring any such conformance in code the developer
> controls. If you control the conforming type and it contains a method name
> that clashes with a protocol extension method, you would be forbidden from
> declaring conformance without renaming your clashing method. In fact, I'm
> beginning to wonder if protocol extension methods on protocols outside the
> same module should be internally scoped.
> In this case you *don’t* control the conforming type. You want to declare
> conformance for a type that is in a 3rd party library. You said when
> neither is under your control you would leave things as-is. Today you can
> declare conformance without error. Would you still allow that?
In that scenario, I would not. The type you don't control simply cannot
conform to the protocol you don't control. Again, there would be the
wrapper workaround available.
>>>> > (And btw, 'final' in this proposal is not exactly that, because when
>>>> combined with @incoherent the methods are not actually 'final' - there is a
>>>> necessary escape hatch).
>>>> There is no particular reason you couldn't allow similar annotated
>>>> shadowing of `final` methods on classes; they would have basically the same
>>>> semantics as you get here, where if a piece of code knows it's working with
>>>> the subclass you get subclass semantics, but otherwise you get superclass
>>>> ones. I do not claim this is a good idea. :^)
>>>> > Second, we should require annotation of methods in protocol
>>>> extensions that are not default implementation of requirements. Maybe
>>>> 'shadowable' or 'staticdispatch'? These are awkward, but so is the
>>>> behavior and they describe it better than anything else I've seen so far
>>>> (maybe we can do better though).
>>>> I don't think `shadowable` makes sense here; that doesn't acknowledge a
>>>> limitation, which is what we're trying to do here.
>>>> I continue to wish we hadn't taken `static` for statically-dispatched
>>>> type methods. But I lost that argument years before swift-evolution became
>>>> a thing.
>>>> > I don't like 'nondynamic' both because it is not aligned with the
>>>> meaning of 'dynamic' and also because it only says what the behavior *is
>>>> not* rather than what the behavior *is*.
>>>> I do understand what you mean here. Unfortunately, the word "virtual"
>>>> in a keyword makes me break out in hives, and I'm not sure what else we
>>>> might base it on.
>>>> This is why I selected `final` in my proposal. `final` is desperately
>>>> close to the actual semantic here, far closer than anything else in the
>>>> language.
>>> How about `nonoverridable`? That said, I agree with earlier comments
>>> that training-wheel annotations probably aren't the way to go. Maybe, as
>>> you suggest above, just don't allow shadowing at all.
>>> Unfortunately, ‘nonoverridable’ doesn’t really make sense because you
>>> don’t ‘override’ protocol requirements.
>> You don't override protocol requirements, but you do override their
>> default implementations, whereas you cannot 'override' statically
>> dispatched extension methods. Within a protocol extension, there are
>> methods that are overridable by conforming types (i.e. default
>> implementations of protocol requirements) and methods that are not (i.e.
>> statically dispatched non-requirements).
>>>> Brent Royal-Gordon
>>>> swift-evolution mailing list
>>>> swift-evolution at swift.org