[swift-evolution] Remove Failable Initializers

Joe Groff jgroff at apple.com
Mon Mar 7 13:29:29 CST 2016


> On Mar 5, 2016, at 7:01 PM, Brent Royal-Gordon via swift-evolution <swift-evolution at swift.org> wrote:
> 
>> My point being that if you consider failable initialisers to be somehow less severe than a thrown error, then that’s not really reinforced in any way, as with a throwable initialiser you’re free to throw error types that are minor, e.g. YouProbablyMadeAMistakeError vs SomethingJustExplodedAndPeopleAreDyingError.
> 
> I don't think it's an issue of severity, but rather an issue of frequency.
> 
> Someone asked a similar question on swift-users, and what I said there was that optionals are best used for errors where failure is *expected*—where it is just as likely that an input will fail as that it will succeed, and both cases deserve equal attention. Thrown errors are for cases where failure is still going to happen regularly, but most of the time the operation will succeed and it doesn't make sense to clutter the main flow of control with error handling. Preconditions are for cases where failure should be impossible.
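> A rough sketch of those three categories, in Swift 2 syntax (the function and type names here are illustrative, not standard library API):
> 
> // Expected failure: both outcomes are routine, so return an optional.
> func parseAge(text: String) -> Int? {
>     return Int(text)
> }
> 
> // Regular-but-exceptional failure: success is the normal path, so throw.
> enum ConfigError: ErrorType { case MissingKey(String) }
> func requireSetting(settings: [String: String], key: String) throws -> String {
>     guard let value = settings[key] else { throw ConfigError.MissingKey(key) }
>     return value
> }
> 
> // Failure should be impossible: trap with a precondition.
> func nthPrime(n: Int) -> Int {
>     precondition(n > 0, "n must be positive")
>     return 2 // (real computation elided)
> }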
> 
> Making a choice between these three categories constrains your API's users. But that is what APIs *do*. Using any API requires you to give up some flexibility and accept the judgement of the API's creator. The decision of `Int.init(_:radix:)` to return an optional instead of throwing an error is no different from its decision to accept Arabic numerals but not Roman numerals. The API's designer has an opinion about what you need and designed it to serve those needs as well as it can; you can either accept that opinion or use something else.
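> For example, `Int.init(_:radix:)` gives both outcomes equal weight at the call site:
> 
> if let value = Int("1a4", radix: 16) {
>     print("parsed \(value)")        // 420
> } else {
>     print("not a base-16 number")   // just as routine as success
> }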
> 
>> There’s nothing that a failable initialiser represents that is distinct from error handling; before error handling was added it was the only option, but now it’s effectively redundant, with fewer capabilities.
> 
> You see it as having fewer capabilities; I see it as having different use cases. Most code using `Int.init(_:radix:)` is *better* for it being an optional. By returning an optional, the API makes it clear that it can fail, there is only one way for it to fail, and failure is just a routine part of using this API. `throws` says nothing about what can be thrown, and it also suggests that the error cases are unusual and require special handling. That's just not true for this API.
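> To illustrate, compare a hypothetical throwing counterpart (not the standard library's API):
> 
> enum ParseFailure: ErrorType { case NotANumber }
> 
> func parseInt(text: String, radix: Int = 10) throws -> Int {
>     guard let value = Int(text, radix: radix) else { throw ParseFailure.NotANumber }
>     return value
> }
> 
> The `throws` in that signature no longer tells you what can be thrown, nor that there is only one way to fail.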
> 
>> As I pointed out it is possible to optimise error handling to be just as efficient as failable initialisers; if the compiler currently doesn’t do this then it absolutely should irrespective of this proposal.
> 
> It is never going to be possible to fully optimize it in the way you imagine.
> 
> Ultimately, the reason throwing isn't as efficient as optionals is simply that throwing communicates more data. And it communicates more data precisely *because* it is more expressive, because it supports many error cases instead of just one. In the end you're running up against information theory here, and the optimizer can't change fundamental laws of mathematics.
> 
> Now, I suppose it would be possible for the compiler to duplicate the function and provide a second implementation for `try?` to use which signals pass/fail in one bit without any details. In private APIs (or internal APIs with testability disabled), it might even be able to optimize away the unused variant. But this is going to make the app bigger and its cache performance worse.
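> Roughly, that duplicated variant would boil down to something you can already write by hand (made-up names):
> 
> enum DecodeError: ErrorType { case Malformed }
> 
> // Full implementation: materializes complete error detail.
> func decode(text: String) throws -> Int {
>     guard let value = Int(text) else { throw DecodeError.Malformed }
>     return value
> }
> 
> // The hypothetical `try?`-only clone: one bit of pass/fail, no error value.
> func decodeOrNil(text: String) -> Int? {
>     return Int(text)
> }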
> 
> And if it ever *does* have an opportunity to optimize away the full implementation, all that *really* means is that you've wasted programming effort. You took time to carefully model all of the possible errors, but you didn't actually do anything with that information. Or it means you *didn't* do that—you just threw `IDontKnowItBroke` everywhere—and so you used a mechanism that burdens both caller and callee with additional syntax without getting an ounce of benefit from it. Either way, you have wasted programmer time and effort. (Not to mention compiler engineer time and effort to implement this hypothetical optimization.)
> 
>>> However, so far, I don't see any technical merit to it. It's not faster, it's not shorter, it's not better error handling. And even if you think that you can say that it's not worse, you so far have stopped short of showing that it's better.
>> 
>> The main reason against failable initialisers is that, given that the above assignments are identical, they are a redundant feature.
> 
> All languages have redundant features. `+` is redundant—we have `appendContentsOf` and `addWithOverflow` (this one even provides additional error information!) and all sorts of other things which make `+` unnecessary. And yet we provide it because it would be cumbersome to require those heavyweight mechanisms everywhere.
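> For example, with the Swift 2 spellings:
> 
> var xs = [1, 2]
> xs.appendContentsOf([3, 4])                     // the heavyweight spelling of xs + [3, 4]
> 
> let (sum, overflow) = Int.addWithOverflow(2, 3) // (5, false): reports overflow as data
> let quick = 2 + 3                               // traps on overflow instead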
> 
>> Requiring the use of thrown errors is definitely better error handling, as it’s far more flexible, and encourages developers to think about the types of errors that should be thrown and/or the contents of those errors
> 
> As I said, this does not guarantee better error handling if you're just throwing generic `InvalidParameterError`s. And if you *are* taking the time to precisely model all errors, that time may be better spent elsewhere.
> 
> Swift goes to great pains to eliminate causes of *incorrect* code. We have optionals because "anything can be nil" causes people to make mistakes; we have trapping arithmetic because people don't design their code to handle overflows; we have designated and convenience initializers because it's really easy to accidentally create mutual recursion between parent and child class initializers.
> 
> It is also *incorrect* to ignore the possibility of an error, and so none of the three error handling mechanisms allows you to ignore an error without explicitly choosing to do so.* But providing limited error detail is not incorrect—it is merely making a perfectly acceptable tradeoff.
> 
> (* This is not quite true of returning an optional (or boolean) value right now because Swift does not yet make `@warn_unused_result` the default, but it looks like that will change in Swift 3. However, if you *do* capture the result, Swift does force you to reckon with the possibility that it may be optional.)
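> A sketch of those explicit opt-outs, with `fetch` standing in for any throwing call:
> 
> func fetch() throws -> Int { return 42 }
> 
> let parsed = Int("42")      // Int?; must be unwrapped before use
> let forced = Int("42")!     // explicitly asserting success
> let viaTry = try? fetch()   // explicitly discarding error detail, keeping pass/fail
> let assumed = try! fetch()  // explicitly asserting success; traps on failure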
> 
>> (i.e. that InvalidParameterError could easily take a string describing why the parameter was invalid, such as “Value cannot contain non-numeric characters”)
> 
> As an aside, I hope you realize that adding a human-readable string is *not helpful*. Without a machine-readable representation, the error cannot be expressed in domain-specific terms ("The ID number included an 'o'; did you mean '0'?") or even easily localized.
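> A machine-readable model of that 'o'-versus-'0' example might look like this (hypothetical type and names):
> 
> enum IDNumberError: ErrorType {
>     // The offending character, where it occurred, and a suggested fix:
>     // enough to build domain-specific, localizable messages.
>     case InvalidCharacter(Character, position: Int, suggestion: Character?)
> }
> 
> func checkIDNumber(id: String) throws {
>     for (i, ch) in id.characters.enumerate() where ch == "o" {
>         throw IDNumberError.InvalidCharacter(ch, position: i, suggestion: "0")
>     }
> }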
> 
> Actually providing a useful error would take a lot of careful modeling; in fact, almost any amount of effort you might put into it may prove insufficient for some particular client. Even if requiring `throws` did succeed in encouraging *more* specific error modeling, that doesn't mean it would provide *enough* error modeling for any particular use case.
> 
> And even if we put a magic spell on the Swift compiler to force all programmers using Swift to completely describe errors, *that effort would almost always be wasted*, because most of the time, most of that detail would be ignored.
> 
> Completely modeling all sources of error is not always the right solution. At some point, the right solution is to stop answering the "why" questions. We shouldn't punish people for that, even if the stopping point they choose is right at the top.

ErrorType does get some special treatment from the compiler and runtime to make throwing errors cheap. ErrorType's representation is specialized to use a single pointer to a refcounted box, so that it can be returned and passed cheaply, and on Darwin platforms it is laid out to be toll-free-bridgeable as an NSError subclass, so that 'as NSError' conversions are free. It's not implemented yet, but on 64-bit platforms we have enough room to pack a small error enum along with its type metadata into the pointer without allocating at all.
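
For illustration, a minimal example of the free 'as NSError' bridge on Darwin (ParseError is a made-up type; the domain and code are the automatically derived ones):

import Foundation

enum ParseError: ErrorType { case BadDigit }

let error: ErrorType = ParseError.BadDigit
let bridged = error as NSError              // free: same refcounted box, no copying
print(bridged.domain, bridged.code)         // e.g. "MyModule.ParseError" 0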

-Joe

