[swift-evolution] Final by default for classes and methods

Austin Zheng austinzheng at gmail.com
Wed Dec 23 13:09:00 CST 2015


+1 for final by default. Thanks to Matthew for his eloquent summary of the arguments against, and counterpoints.

> 1) Workarounds for framework “bugs”.  I put “bugs” in quotes because I think it is likely that sometimes it is not exactly a bug, but misunderstood or disliked behavior that is being “worked around”.  No need to rehash this.  It’s been beaten to death.  It’s also irrelevant as it’s clear that the default is almost certainly going to be at least `sealed`.  It’s also irrelevant because Apple’s frameworks are currently written in Objective-C, not Swift, and when Apple begins writing frameworks in Swift they are very likely to be thinking carefully about subclassing as part of the API contract decision.

I think this is the argument that convinced me regarding framework monkey patching. If the frameworks remain in Objective-C, then the proposal is either irrelevant to them, or the behavior for subclassing Objective-C classes under this proposal simply hasn't been defined yet, or we can carve out an exception for @objc classes. If the frameworks are written in Swift, nothing is stopping Apple from slapping 'final' on every class they don't want developers to inherit from, even if the proposal is rejected; neither is there anything stopping them from making all their classes inheritable (if that happened to be what they wanted, which it isn't).
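
Assuming classes imported from Objective-C stay subclassable (which is how things work today), the familiar UIKit pattern would be unaffected either way; a trivial sketch:

import UIKit

// UIViewController is an imported Objective-C class; whatever default
// pure-Swift classes end up with, this kind of subclassing keeps working.
class PhotoViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // customize the framework class as usual
    }
}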

Other than that, the major remaining arguments seem to stem from convenience, and workarounds are conceivable for each of the specific use cases they address. For example, just as we have @testable to subvert access control for testing purposes, we should have some sort of modifier that allows test code to subclass 'final' classes to create mocks or harnesses.
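
As a rough sketch of what that could look like (NetworkClient, fetch(), and the test-only subclassing mechanism are all hypothetical; only @testable exists today):

// Today: @testable lets a test target see a module's internal declarations,
// provided the module is built with testing enabled.
@testable import MyLibrary

// Hypothetical: an analogous mechanism letting test code subclass a class
// that MyLibrary declared `final`, purely so it can be mocked. No such
// modifier exists; this is only an illustration of the idea.
class MockNetworkClient: NetworkClient {   // NetworkClient is `final` in MyLibrary
    override func fetch() -> String {
        return "{\"stub\": true}"          // canned response for the test
    }
}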

Arguments aside, I like this proposal because it creates a clear distinction within library (and even application-internal) code between "class as a reference-semantics equivalent of a struct" and "class as an extension point for further customization". This is a useful tool for building API contracts, just like access control modifiers or generic types for arguments/return values.
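
A minimal sketch of the two roles, using made-up types (under the proposal, the second class would carry an explicit `inheritable`-style annotation rather than merely omitting `final`):

// "Reference-semantics value": subclassing is not part of the contract.
final class Counter {
    private(set) var value = 0
    func increment() { value += 1 }
}

// "Extension point": subclassing is deliberately part of the API contract.
class RequestHandler {
    func handle() -> String { return "default response" }
}

class LoggingRequestHandler: RequestHandler {
    override func handle() -> String {
        print("handling a request")
        return super.handle()
    }
}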

There might be performance benefits, but I expect them to be relatively minor, given how whole-module optimization (WMO) already works and given that someone (I think Chris) has mentioned wanting to extend that optimization across module boundaries in the future via the package manager.
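
(The optimization in question is devirtualization: when the compiler can prove a method has no overrides, either because the class is `final` or because whole-module optimization can see every potential subclass, it can call or inline the method directly instead of dispatching dynamically. A made-up example:)

// With `final` (or WMO plus no visible subclasses), calls to add(_:) and
// total() are eligible for static dispatch or inlining rather than going
// through the class's method table.
final class Accumulator {
    private var values: [Int] = []
    func add(_ value: Int) { values.append(value) }
    func total() -> Int { return values.reduce(0, +) }
}

let acc = Accumulator()
acc.add(5)
print(acc.total())   // 5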

Best,
Austin

> 2) Flexibility.  If I don’t need inheritance when I first write a type and later realize I do need it, I have to revisit the type and add an `inheritable` annotation later.  This is closely related to argument #3 and mostly relevant during prototyping (argument #5).  IMO when this scenario arises you should have to revisit the original type.  If you don’t, you are asking for trouble as inheritance was not considered when it was written.  Adding an `inheritable` annotation is trivial when compared to the analysis that should be performed when it becomes a superclass.
> 
> 3) Annoyance.  Some consider it to be annoying to have to annotate a class declaration in order to inherit from it.  People stating this argument are either writing a lot of superclasses or are so bothered by the need to annotate their type declarations that they avoid `final` and its related benefits when that is really the right thing for their class.  For me personally, `final` is the right thing for most classes I write.  I also think adding a `final` annotation is the right thing to do if you’re not sure whether it will be a superclass or not.  The need to modify the annotation will remind you that you haven’t fully considered inheritance in your design yet.
> 
> 4) Testing.  This is solvable with behavior similar to @testable.  It should not influence the decision about what the default is for production code.
> 
> 5) Prototyping.  This should also not influence the decision about what the default is for production code.  I would not have a problem with a prototyping environment allowing `inheritable` by default (maybe a Playground mode?).  There could even be a tool that migrates the prototype to a real project and adds the `inheritable` annotation where necessary.  Regardless of what happens here, the prototyping problem can and should be solved independently of the production language and should not influence the default that is used in, and that impacts, production code.
> 
> 6) Education.  There may be some value in allowing inheritance by default in education settings, especially early on.  I view this as being quite similar to the prototyping case and again should not have an influence on the default that professionals use in production code.
> 
> If I have missed any of the major arguments against making `final` the default please respond with them.  I would be happy to add them to the list.
> 
> I don’t find any of these arguments compelling.  The only one that is really relevant to production code (once you accept the reality of #1) is annoyance, which I consider a minor complaint that is definitely not relevant to my code and is likely not relevant to many other people’s code either.
> 
> On the other hand, the argument for `final` as default is compelling IMHO.  As has been stated many times, inheritance is part of an API contract that needs to be considered as clearly as anything else.  This still applies when the API is internal to an app.
> 
> Final by default greatly improves our ability to get up to speed on a new codebase quickly and reason about the code:
> 
> 1) When I look at an unannotated class declaration I know there are no subclasses.  I don’t have to search the code to look for subclasses.  
> 2) I know that where there are superclasses, the author was reminded by the language to consider inheritance.  They may have made mistakes in the design of the superclass, but at least the language gave them a subtle reminder that they need to think about it.  
> 3) I also know there will not be any subclasses in the future unless someone adds an `inheritable` annotation (in which case they are responsible for considering the implications of that).  The `inheritable` annotation also serves as a good prompt during code review to consider those implications if/when the class becomes an intentional superclass.
> 
> Of course all of these advantages also apply to a codebase where I am the sole author and maintainer when I come back to it a year or two later and have forgotten some details.
> 
> One consideration that has not been definitively established one way or the other is frequency of use.  In application code are there usually more classes that are superclasses (or could reasonably be a superclass in the future without additional analysis and design)?  Or are there usually more classes that are `final`, effectively final, or should be final, at least until further analysis and design has been performed?  
> 
> In my experience the reality is that the majority of my classes inherit from UIKit classes, but are not themselves superclasses.  I don’t claim to speak for anyone else, but I think we would find that to be the most common pattern if we looked at the question closely.
> 
> I hope this is a reasonably accurate summary of the positions on both sides of this.
> 
> Matthew
> 
> 
> 
>> 
>> On Tue, Dec 22, 2015, at 09:03 AM, Paul Cantrell via swift-evolution wrote:
>>> Joe’s and Brent’s writeups copied below really get to the heart of this for me. This is a tough question, and I find myself torn. I’m sympathetic to both lines of argument.
>>> 
>>> It’s not entirely true that “you can never take back” overridability — you can make a breaking API change with a new major version, of course — but it’s a compelling point nonetheless. One default is clearly safer than the other, at least for the library author. “Safe by default” is indeed the Swift MO. (Well, except for array subscripting. And any public API involving tuples, for which any change, even type widening, is a breaking change. And….OK, it’s not absolute, but “safe by default” is the MO 95% of the time.) “Final by default” just seems more Swift-like to me.
>>> 
>>> Despite that, Joe, I have to agree with Brent on his central point: the perspective that comes from spending a lot of time _writing_ libraries is very different from that of someone who spends more time _using_ them. Yes, UIKit is not going to be rewritten in Swift anytime soon, but Brent is rightly imagining a future where the Swift way is The Way.
>>> 
>>> I weigh the safety argument against the many long hours I’ve spent beating my head against library behaviors, wishing I could read UIKit’s source code, wishing I could tweak that one little thing that I can’t control, and being grateful for the dubious workaround that saves the day — yes, even when a subsequent OS update breaks it. I know what I’m getting into when I solve a problem with a hack, and if the hack ships, it’s only because I weighed the risks and benefits. We app developers rely on swizzling, dubious subclassing, and (especially) undocumented behavior far more often than any of us would like. It is just part of the reality of making things ship — and an important part of the success of Apple’s app ecosystem.
>>> 
>>> This debate reminds me of something that often happens when a humans-and-paper process moves to software: the software starts rigorously enforcing all the rules the humans were theoretically following all along, and it turns out that quite a lot of in-the-moment, nuanced human judgement was crucial to making everything work. With nuance removed, things fall apart — and instead of things at last achieving the rigor that seemed so desirable in theory, the process has to explicitly loosen. (At the local coffee shop, a new iPad-based POS system suddenly made it an “uh, let me get the manager” moment when I want the off-menu half-sized oatmeal I’ve always got for my toddler.)
>>> 
>>> I’m not totally opposed to final by default. Joe’s arguments sway me in principle. In practice, if Swift does indeed move us toward “less wiggle room, less hackable” by default, then that wiggle room _will_ have to come from somewhere else: perhaps more open sourcing and more forking, or faster turnaround on fixes from library authors, or a larger portion of time spent by library authors explicitly exposing and documenting customization points. The new effort involved for library authors is nothing to sneeze at.
>>> 
>>> Cheers,
>>> 
>>> Paul
>>> 
>>> 
>>>> On Dec 22, 2015, at 9:46 AM, Joe Groff via swift-evolution <swift-evolution at swift.org <mailto:swift-evolution at swift.org>> wrote:
>>>> 
>>>> I think a lot of people in this thread are conflating "final by default" or "sealed by default" with "sealed everywhere". No matter what the default is, the frameworks aren't going to suddenly rewrite themselves in Swift with final everything; Objective-C will still be what it is. Furthermore, we're only talking about language defaults; we're not taking away the ability for frameworks to make their classes publicly subclassable or dynamically overrideable. That's a policy decision for framework authors to make. The goal of "sealed by default" is to make sure the language doesn't make promises on the developer's behalf that they weren't ready to keep. ObjC's flexibility is valuable, and Apple definitely takes advantage of it internally all over the place; Apple also has an army of compatibility engineers to make sure framework changes work well with existing software. Not every developer can afford that maintenance burden/flexibility tradeoff, though, so that flexibility is something you ought to opt in to. You can always safely add public subclassability and dynamic overrideability in new framework versions, but you can never take them back.
>>> 
>>> 
>>>> On Dec 22, 2015, at 12:31 AM, Brent Royal-Gordon via swift-evolution <swift-evolution at swift.org <mailto:swift-evolution at swift.org>> wrote:
>>>> 
>>>> Just imagine going through UIKit and marking every class inheritable *by hand*—no cheating with a script—and you'll have some idea of the additional burden you'll be imposing on developers as they write their code. The proposals that every single method should be explicitly marked as overridable are even worse; frankly, I don't think I'd want to use Swift if you forced me to put a `virtual` keyword on every declaration.
>>>> 
>>>> I worry that the team's use of Swift to build the standard library, and their close association with teams building OS frameworks, is biasing the language a little bit. I think that, in all likelihood, most Swift code is in individual applications, and most libraries are not published outside of a single team. If I'm right, then most Swift code will probably be quite tolerant of small but technically "breaking" ABI changes, such as making a class `final`, or (as mentioned in another thread) making a closure `@noescape`.
>>>> 
>>>> That won't be true of published library code, of course. But published library code is a small minority of the Swift code people will write, and it already will require greater scrutiny and more careful design. 
>>>> 
>>>> There is already a good opportunity to reflect on whether or not an API should be `final`. It's when you put the `public` keyword on it. I think programmers will have a better, easier time writing their code if, in this case, we put a little bit of trust in them, rather than erecting yet another hoop they must jump through.
>>>> 
>>>> Perhaps we could even provide a "strict interfaces" mode that published frameworks can turn on, which would require you to declare the heritability of every class and member. But even that may not be a good idea, because I also suspect that, in the field, most published libraries probably have to be extended in ways the library's author did not expect or anticipate. 
>>>> 
>>>> This means doing some dangerous overriding, yes. But a UI that breaks after an iOS upgrade is not nearly as dangerous to my business as a three-month delay while I reimplement half of UIKit because someone in Cupertino thought they knew what I need better than I do and turned off—or even worse, *left turned off without a single thought*—subclassing of UIBarButtonItem.
>>>> 
>>>> The bottom line is this: Your users like Swift's strictures when they're helpful. *This stricture is not helpful.* Library users don't accidentally subclass things, and with the `override` keyword in Swift, they don't accidentally override them either. And where it truly is important, for safety or for speed, to prevent subclassing, we already have `final`. Making it the default is less safety than suffering.