[swift-evolution] Final by default for classes and methods
cantrell at pobox.com
Tue Dec 22 11:03:39 CST 2015
Joe’s and Brent’s writeups copied below really get to the heart of this for me. This is a tough question, and I find myself torn. I’m sympathetic to both lines of argument.
It’s not entirely true that “you can never take back” overridability — you can make a breaking API change with a new major version, of course — but it’s a compelling point nonetheless. One default is clearly safer than the other, at least for the library author. “Safe by default” is indeed the Swift MO. (Well, except for array subscripting. And any public API involving tuples, for which any change, even type widening, is a breaking change. And….OK, it’s not absolute, but “safe by default” is the MO 95% of the time.) “Final by default” just seems more Swift-like to me.
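To make the tradeoff concrete, here is a minimal sketch of the two defaults under discussion; the class and method names are purely illustrative, not from any real framework:

```swift
// Today's Swift: a public class and its methods are subclassable and
// overridable by default, so the author must opt *out* explicitly.
public class Formatter {
    public func format(_ s: String) -> String {
        return s.uppercased()
    }
}

// Under "final by default", the equivalent declaration would be sealed
// unless the author deliberately removed `final` (or added some future
// opt-in keyword) to promise overridability as part of the API contract.
public final class SealedFormatter {
    public func format(_ s: String) -> String {
        return s.uppercased()
    }
}
```

The whole debate is about which of these two spellings should be the one you get without thinking about it.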
Despite that, Joe, I have to agree with Brent on his central point: the perspective that comes from spending a lot of time _writing_ libraries is very different from the perspective of someone who spends more time _using_ them. Yes, UIKit is not going to be rewritten in Swift anytime soon, but Brent is rightly imagining a future where the Swift way is The Way.
I weigh the safety argument against the many long hours I’ve spent beating my head against library behaviors, wishing I could read UIKit’s source code, wishing I could tweak that one little thing that I can’t control, and being grateful for the dubious workaround that saves the day — yes, even when a subsequent OS update breaks it. I know what I’m getting into when I solve a problem with a hack, and if the hack ships, it’s only because I weighed the risks and benefits. We app developers rely on swizzling, dubious subclassing, and (especially) undocumented behavior far more often than any of us would like. It is just part of the reality of making things ship — and an important part of the success of Apple’s app ecosystem.
This debate reminds me of something that often happens when a humans-and-paper process moves to software: the software starts rigorously enforcing all the rules the humans were theoretically following all along, and it turns out that quite a lot of in-the-moment, nuanced human judgement was crucial to making everything work. With that nuance removed, things fall apart, and instead of the process at last achieving the rigor that seemed so desirable in theory, it has to be explicitly loosened. (At the local coffee shop, a new iPad-based POS system suddenly made it an "uh, let me get the manager" moment when I wanted the off-menu half-sized oatmeal I'd always gotten for my toddler.)
I’m not totally opposed to final by default. Joe’s arguments sway me in principle. In practice, if Swift does indeed move us toward “less wiggle room, less hackable” by default, then that wiggle room _will_ have to come from somewhere else: perhaps more open sourcing and more forking, or faster turnaround on fixes from library authors, or a larger portion of library authors’ time spent explicitly exposing and documenting customization points. The new effort involved for library authors is nothing to sneeze at.
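As one sketch of what “explicitly exposing customization points” can look like, a sealed class can accept injected behavior instead of relying on subclassing; everything here is a hypothetical API, assumed for illustration only:

```swift
// A final class that is still customizable: the author decides exactly
// which behavior clients may vary, and documents it as part of the API,
// rather than leaving every method implicitly overridable.
public final class Downloader {
    // The documented customization point: a retry policy the client
    // supplies, instead of a method the client overrides.
    // It receives the attempt count and returns whether to retry.
    public var retryPolicy: (Int) -> Bool

    public init(retryPolicy: @escaping (Int) -> Bool = { attempt in attempt < 3 }) {
        self.retryPolicy = retryPolicy
    }
}

// Client code customizes without subclassing:
let d = Downloader(retryPolicy: { attempt in attempt < 5 })
```

Designing and documenting each such point takes real authoring effort, which is exactly the cost being weighed above.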
> On Dec 22, 2015, at 9:46 AM, Joe Groff via swift-evolution <swift-evolution at swift.org> wrote:
> I think a lot of people in this thread are conflating "final by default" or "sealed by default" with "sealed everywhere". No matter what the default is, the frameworks aren't going to suddenly rewrite themselves in Swift with final everything; Objective-C will still be what it is. Furthermore, we're only talking about language defaults; we're not taking away the ability for frameworks to make their classes publicly subclassable or dynamically overrideable. That's a policy decision for framework authors to make. The goal of "sealed by default" is to make sure the language doesn't make promises on the developer's behalf that they weren't ready to keep. ObjC's flexibility is valuable, and Apple definitely takes advantage of it internally all over the place; Apple also has an army of compatibility engineers to make sure framework changes work well with existing software. Not every developer can afford that maintenance burden/flexibility tradeoff, though, so that flexibility is something you ought to opt in to. You can always safely add public subclassability and dynamic overrideability in new framework versions, but you can never take them back.
> On Dec 22, 2015, at 12:31 AM, Brent Royal-Gordon via swift-evolution <swift-evolution at swift.org> wrote:
> Just imagine going through UIKit and marking every class inheritable *by hand*—no cheating with a script—and you'll have some idea of the additional burden you'll be imposing on developers as they write their code. The proposals that every single method should be explicitly marked as overridable are even worse; frankly, I don't think I'd want to use Swift if you forced me to put a `virtual` keyword on every declaration.
> I worry that the team's use of Swift to build the standard library, and their close association with teams building OS frameworks, is biasing the language a little bit. I think that, in all likelihood, most Swift code is in individual applications, and most libraries are not published outside of a single team. If I'm right, then most Swift code will probably be quite tolerant of small but technically "breaking" ABI changes, such as making a class `final`, or (as mentioned in another thread) making a closure `@noescape`.
> That won't be true of published library code, of course. But published library code is a small minority of the Swift code people will write, and it already will require greater scrutiny and more careful design.
> There is already a good opportunity to reflect on whether or not an API should be `final`. It's when you put the `public` keyword on it. I think programmers will have a better, easier time writing their code if, in this case, we put a little bit of trust in them, rather than erecting yet another hoop they must jump through.
> Perhaps we could even provide a "strict interfaces" mode that published frameworks can turn on, which would require you to declare the heritability of every class and member. But even that may not be a good idea, because I also suspect that, in the field, most published libraries probably have to be extended in ways the library's author did not expect or anticipate.
> This means doing some dangerous overriding, yes. But a UI that breaks after an iOS upgrade is not nearly as dangerous to my business as a three-month delay while I reimplement half of UIKit because someone in Cupertino thought they knew what I need better than I do and turned off—or even worse, *left turned off without a single thought*—subclassing of UIBarButtonItem.
> The bottom line is this: Your users like Swift's strictures when they're helpful. *This stricture is not helpful.* Library users don't accidentally subclass things, and with the `override` keyword in Swift, they don't accidentally override them either. And where it truly is important, for safety or for speed, to prevent subclassing, we already have `final`. Making it the default is less safety than suffering.
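For readers coming from other languages, the existing strictures Brent refers to already catch accidents at compile time; a minimal sketch with made-up class names:

```swift
class Base {
    func run() {}

    // `final` on a member (or a whole class) forbids overriding where the
    // author knows it matters for safety or speed.
    final func identifier() -> Int {
        return 1
    }
}

class Derived: Base {
    // Overriding must be spelled out with `override`; writing
    // `func run() {}` here without the keyword is a compile-time error,
    // so nobody overrides a method by accident.
    override func run() {}

    // Attempting `override func identifier() -> Int` here would also be
    // a compile-time error, because the base declaration is `final`.
}
```

This is the basis of the claim that accidental subclassing and accidental overriding are already impossible, so the remaining question is purely about the default for deliberate ones.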