[swift-evolution] Throws? and throws!

Xiaodi Wu xiaodi.wu at gmail.com
Sat Jan 14 22:22:48 CST 2017


On Sat, Jan 14, 2017 at 10:10 PM, Jonathan Hull <jhull at gbis.com> wrote:

> And finally, even if an operator function could fail in multiple ways
> (we're really getting to very hypothetical hypotheticals here), writing
> `try!` all the time might look silly and non-Swift users might then mock
> the language, but I dispute the contention that it would make things
> "unbearable.”
>
>
> The whole point of ‘try’/’try!’ is to make the user consider how to handle
> the error cases.  If it gets used everywhere, then you have a boy who cried
> wolf situation where it is seen as noise and ignored… which definitely
> affects usability. (Take Windows' error dialogs as an example of this
> phenomenon).
>
>
In a hypothetical world where + was throwing, that would be a fair point,
and it would be something to balance against Greg's argument that `try!`
and `!` have value because they show all potential crash points at the
point of use. However, as this is very much a hypothetical, the more
salient point here is that there _aren't_ so many things that have multiple
meaningfully distinct ways of recovering from error.

In the present version of Swift, practical experience shows that, if
anything, people pay a great deal of attention (maybe too much) to
statements with `!`, even going so far as to forbid it in their house
style (definitely too extreme, IMO)! Meanwhile, `try?` simply can't be
ignored, because the type system makes you unwrap the result at some point
down the line.
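
For instance (a minimal sketch; the file path is arbitrary):

    import Foundation

    // String(contentsOfFile:encoding:) is a throwing initializer.
    let text = try? String(contentsOfFile: "/tmp/notes.txt", encoding: .utf8)
    // text.isEmpty               // won't compile: the optional must be handled
    print(text?.isEmpty ?? true)  // the type system forces an explicit decision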

On Jan 14, 2017, at 7:29 PM, Xiaodi Wu <xiaodi.wu at gmail.com> wrote:
>
> On Sat, Jan 14, 2017 at 8:03 PM, Jonathan Hull <jhull at gbis.com> wrote:
>
>> My intended framing of this does not seem to be coming across in my
>> arguments.  I am not thinking of this as a way to avoid typing ‘try!’ or
>> ‘try?’.  This is not intended to replace any of the current uses of
>> ‘throws’.  Rather, it is intended to replace trapping and nil-returning
>> functions where converting them to throwing would be burdensome in the most
>> common use cases, but still desirable in less common use cases.  In my
>> mind, it is only enabling the author to provide extra information and
>> flexibility, compared to the current behavior.
>>
>> For example, let’s say I have a failable initializer, which could fail
>> for 2 or 3 different reasons, and that in the vast majority of use cases I
>> only care whether it succeeded or not (which is why nil-returning was
>> chosen)… but there may be a rare case or two where I really would prefer to
>> probe deeper (and changing it to a throwing initializer would inhibit the
>> majority cases).  Then using ’throws?’ allows the primary usage to remain
>> unchanged, while allowing users to opt-in to throwing behavior when desired.
>>
>> Right now I end up making multiple functions, which are identical except
>> for throw vs nil-return, and must now be kept in sync.  I’ll admit it isn’t
>> terribly common, but it has come up enough that I think it would still be
>> useful.
>>
>
> As you say, I think this is a pretty niche use case. When you are in
> control of the code, it's trivial to write a second function that wraps the
> throwing function, returning an optional value on error. The only thing
> you'd need to keep in sync would be the declaration, not the function body,
> and that isn't truly onerous on the rare occasion when this is at issue.
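>
> To illustrate (a hypothetical sketch; the names and error cases are made
> up), the wrapper's body is a single `try?`, so only the declaration needs
> keeping in sync:
>
>     import Foundation
>
>     enum ImageError: Error { case missingFile, unreadableData }
>
>     struct Image { let bytes: [UInt8] }
>
>     // The throwing version is the single source of truth.
>     func loadImage(at path: String) throws -> Image {
>         guard FileManager.default.fileExists(atPath: path) else {
>             throw ImageError.missingFile
>         }
>         guard let data = FileManager.default.contents(atPath: path) else {
>             throw ImageError.unreadableData
>         }
>         return Image(bytes: [UInt8](data))
>     }
>
>     // The optional-returning convenience: its body never changes when the
>     // throwing implementation does.
>     func loadImageIfPresent(at path: String) -> Image? {
>         return try? loadImage(at: path)
>     }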
>
>
>> The other argument I will make is one of symmetry.  We have 3 different
>> types of error handling in swift: throwing, optional-returns, and trapping.
>>
>
> As the Error Handling Rationale document has pointed out, these three
> different types of error handling are meant for different _kinds_ of error.
> The idea is that ideally the choice of what kind of error handling to use
> shouldn't be down to taste or other arbitrary criteria, but should reflect
> whether we're dealing with a recoverable error (throws), simple domain
> error (return nil), or logical error (trap). That much can be determined at
> the point of declaration. At the use site, there are tools to allow the end
> user to handle these errors in a variety of ways, but there is a logic
> behind allowing conversions between some and not all combinations:
>
> * A logical error is meant to be unrecoverable and thus cannot be
> converted to either nil or throw. To call a function that traps is to
> assert that the function's preconditions are met. If there's a possibility
> that the preconditions cannot be met, it should be handled before calling
> the function. A trap represents a programming mistake that should be fixed
> by changing the code so as not to trap. There are adequate solutions to the
> few instances where an error that currently traps might not always have to
> be fatal: in the case of array indices, for instance, there have been
> proposals to allow more lenient subscripting that doesn't trap, at the cost
> of extra overhead--of course, you can already implement this for yourself
> in an extension.
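>
> (For instance, a minimal sketch of such an extension; the `safe` label is
> just a made-up convention:)
>
>     extension Array {
>         // Returns nil instead of trapping when the index is out of bounds.
>         subscript(safe index: Int) -> Element? {
>             return indices.contains(index) ? self[index] : nil
>         }
>     }
>
>     let xs = [1, 2, 3]
>     xs[safe: 5]   // nil, rather than a trap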
>
> * A simple domain error fails in only one obvious way and doesn't need an
> error; the end user can always decide that a failure should be handled by
> trapping using `!`--in essence, the user is asserting that the occurrence
> of a simple domain error at that use site is a logical error. It shouldn't
> be useful to convert nil to an error, because a simple domain error should
> be able to fail in only one way; if the function fails in more than one
> way, the function should throw, as it's no longer a simple domain error.
>
> * A recoverable error can fail in one or more ways, and how you recover
> may depend on how it failed; a user can always decide that they'll always
> recover in the same way by using `try?`, or they can assert that it's a
> logical error to fail at all using `try!`. The choice is up to the user.
>
> As far as I can tell, `throws?` and `throws!` do not change these choices;
> they simply say that a recoverable error should be handled by default as a
> simple domain error or a logical error, which in the Swift error handling
> model should be up to the author who's using the function, not the
> author who's declaring it.
>
>
>> There is already some ability to convert between these:
>>
>> If you have a throwing function:
>> ‘try?’ allows you to convert to optional-return
>> ‘try!’ allows you to convert to trapping
>>
>> If you have an optional-return:
>> ‘!’ allows you to convert to trapping
>> you are unable to convert to throwing (because it requires extra info
>> which isn’t available)
>>
>> If you have a trapping function, you are unable to convert to either.
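>>
>> (Concretely, with today's Swift; a minimal sketch with made-up names:)
>>
>>     enum LookupError: Error { case missing }
>>
>>     func fetch(_ key: String) throws -> Int {          // throwing
>>         guard key == "answer" else { throw LookupError.missing }
>>         return 42
>>     }
>>
>>     func lookup(_ key: String) -> Int? {               // optional-return
>>         return key == "answer" ? 42 : nil
>>     }
>>
>>     let a = try? fetch("answer")   // throwing -> optional-return
>>     let b = try! fetch("answer")   // throwing -> trapping
>>     let c = lookup("answer")!      // optional -> trapping
>>     // There is no built-in spelling for optional-return -> throwing.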
>>
>> With ‘throws?’ you have an optional return which you can convert to
>> throwing with ‘try’
>>
>> With ‘throws!’ you have a trapping function where:
>> ‘try?’ allows you to convert to optional-return
>> ‘try’ allows you to convert to throwing
>>
>>
>> Thus, ‘throws?’ and ‘throws!’ allow you to provide optional-return and
>> trapping functions where extra information is provided so that it is
>> possible to convert to throwing when desired.  In cases where this
>> conversion is not appropriate, the author would simply continue to use the
>> current methods.
>>
>> Basically it is useful in designs where optional-return or trapping was
>> ultimately chosen, but there was also a strong case to be made for making
>> it a throwing function.
>>
>
> This is totally the opposite use case from that outlined above. Here, you
> don't control the code and the original author decided to return an
> optional value or to trap. In essence, you're saying that the original
> author made a mistake, and what the author considered to be an
> unrecoverable error should be recoverable. However, you won't be able to
> squeeze useful errors out of it unless you write additional diagnostic
> logic yourself. This is already possible to do in an extension, where you
> can add a throwing function that checks the arguments before forwarding to
> the failable or trapping function. As far as I can tell, `throws!` doesn't
> provide you with any more tools to do so.
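>
> (A sketch of what that might look like, using Int's failable string
> initializer; the `parsing` label and error type are made up:)
>
>     enum NumberError: Error { case notANumber(String) }
>
>     extension Int {
>         // A throwing front end that produces its own diagnosis, then
>         // forwards to the existing failable initializer.
>         init(parsing text: String) throws {
>             guard let value = Int(text) else {
>                 throw NumberError.notANumber(text)
>             }
>             self = value
>         }
>     }
>
>     do {
>         let n = try Int(parsing: "forty-two")
>         print(n)
>     } catch {
>         print(error)   // notANumber("forty-two")
>     }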
>
>> I think the fears of people using it instead of ‘throws’ are unfounded
>> because they already have the ability to use optionals or trapping… this
>> just mitigates some of the losses from those choices.
>>
>> Does that make more sense?
>>
>
> Maybe I'm misunderstanding something. An author who writes a function
> that throws offers the greatest number of choices to their end users for
> how to handle errors. You're saying that in designing libraries you choose
> not to use `throws` because you don't want to burden your users with `try?`
> or `try!`, which as you say allows users to handle these errors in any way
> they choose, even though your functions fail in more than one non-trivial
> way. This represents a fundamental disagreement with the Swift error
> handling rationale, and again the disagreement boils down to: are the four
> letters in `try!` a burden? I would just think of it as making every
> throwing function at most four letters longer in name.
>
> Put another way, the Swift error handling design says that at the point of
> declaration, the choice of `throws` vs. returning nil should be based on
> how many ways there are to fail (or more accurately, how many meaningfully
> distinct ways there are to recover from failure), not how often the user
> cares about that information. If there are two meaningfully distinct ways
> to recover from failure in your function, but users will likely choose to
> recover from both failures in the same way 99.9% of the time, still choose
> `throws`. If there is only one way to recover, choose to return nil. If
> there are none, choose to trap.
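>
> (In declaration form -- hypothetical signatures, just to make the rule
> concrete:)
>
>     // Two meaningfully distinct ways to recover: declare it throwing.
>     enum FetchError: Error { case offline, unauthorized }
>     func fetchRecord(id: Int) throws -> String {
>         guard id != 0 else { throw FetchError.unauthorized }
>         return "record \(id)"
>     }
>
>     // Exactly one way to fail: return nil.
>     func cachedRecord(id: Int) -> String? {
>         return id > 0 ? "record \(id)" : nil
>     }
>
>     // Failure is a programming mistake on the caller's part: trap.
>     func record(at index: Int, in records: [String]) -> String {
>         precondition(records.indices.contains(index), "index out of range")
>         return records[index]
>     }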
>
> Put another way, going back to your original statement of motivation:
>
>> There are some cases where it would be nice to throw errors, but errors
>> are rarely expected in most use cases, so the overhead of ‘try’, etc… would
>> make things unusable.
>
>
> I disagree with this statement. The overhead of `try` essentially never
> tips the balance between unusable and usable, for the same reason that
> making a function name three or four letters longer essentially never tips
> the balance between usable and unusable.
>
>
>> Thus fatalError or optionals are used instead.
>
>
> In the Swift error handling model, the frequency with which a user might
> have to write `try!` or `try?` should play no role in the author's choice
> of throwing vs. returning nil vs. fatalError.
>
>
>> For example, operators like ‘+’ could never throw because adding ’try’
>> everywhere would make arithmetic unbearable.
>
>
> As we discussed above, AFAICT, addition traps for performance reasons, as
> Swift aspires to be usable for systems programming.
>
> Even if that weren't the case, it would never throw because there's only
> one meaningful way in which addition can fail; thus, if anything, it'd be a
> failable operation. This would probably not be terrible (other than for
> performance), as nil values could be propagated to the end of any
> calculation, at which point a user would write `!` or handle the issue in a
> more sophisticated way.
>
> (As a digression, for FP values, NaN offers yet another way of signaling
> an error, which due to IEEE conformance Swift is obliged to keep distinct;
> however, as evidenced by the fact that the NaN payload is pretty
> much never used, it can be thought of as a counterpart to nil as opposed to
> Error.)
>
> And finally, even if an operator function could fail in multiple ways
> (we're really getting to very hypothetical hypotheticals here), writing
> `try!` all the time might look silly and non-Swift users might then mock
> the language, but I dispute the contention that it would make things
> "unbearable."
>
>> Thanks,
>> Jon
>>
>>
>>
>> On Jan 12, 2017, at 5:34 PM, Greg Parker <gparker at apple.com> wrote:
>>
>>
>> On Jan 12, 2017, at 4:46 PM, Xiaodi Wu via swift-evolution <
>> swift-evolution at swift.org> wrote:
>>
>> On Thu, Jan 12, 2017 at 6:27 PM, Jonathan Hull <jhull at gbis.com> wrote:
>>
>>
>> Also, ‘try’ is still required to explicitly mark a potential error
>> propagation point, which is what it was designed to do.  You don’t have
>> ‘try’ with the variants because it is by default no longer a propagation
>> point (unless you make it one explicitly with ’try’).
>>
>>
>> If this is quite safe and more convenient, why then shouldn't it be the
>> behavior for `throws`? (That is, why not just allow people to call throwing
>> functions without `try` and crash if the error isn't caught? It'd be a
>> purely additive proposal that's backwards compatible for all currently
>> compiling code.)
>>
>>
>> Swift prefers that potential runtime crash points be visible in the code.
>> You can ignore a thrown error and crash instead, but the code will say
>> `try!`. You can force-unwrap an Optional and crash if it is nil, but the
>> code will say `!`.
>>
>> Allowing `try` to be omitted would obscure those crash points from humans
>> reading the code. It would no longer be possible to read call sites and be
>> able to distinguish which ones might crash due to an uncaught error.
>>
>> (There are exceptions to this rule. Ordinary arithmetic and array access
>> are checked at runtime, and the default syntax is one that may crash.)
>>
>>
>> --
>> Greg Parker     gparker at apple.com     Runtime Wrangler
>>
>>
>

