[swift-evolution] typed throws

Chris Lattner clattner at nondot.org
Fri Aug 18 22:52:51 CDT 2017

On Aug 17, 2017, at 11:27 PM, John McCall <rjmccall at apple.com> wrote:
> Even for non-public code.  The only practical merit of typed throws I have ever seen someone demonstrate is that it would let them use contextual lookup in a throw or catch.  People always say "I'll be able to exhaustively switch over my errors", and then I ask them to show me where they want to do that, and they show me something that just logs the error, which of course does not require typed throws.  Every.  Single.  Time.

I won’t make any claims about what people say to you, but there are other benefits of course.  Even if you don’t *exhaustively* catch your errors, it is still useful to get code completion for the error cases that you do intend to handle (covering the rest with a bare "catch {").
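To make that concrete, here is a minimal sketch (the error type and `fetch` function are hypothetical, not from any real API): a couple of specific cases are handled by name, and a bare `catch` absorbs the rest — no exhaustive switch required.

```swift
// Hypothetical error type and throwing function, for illustration only.
enum NetworkError: Error {
    case timeout
    case connectionLost
    case invalidResponse(code: Int)
}

func fetch() throws -> String {
    throw NetworkError.timeout
}

func describeFailure() -> String {
    do {
        return try fetch()
    } catch NetworkError.timeout {
        return "timed out"        // the one case we actually handle
    } catch {
        return "other: \(error)"  // everything else falls through here
    }
}
```

With a typed `throws`, the compiler would know `NetworkError` is the set of possibilities and could drive code completion in those catch clauses; with full type erasure it cannot.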

Throwing a resilient enum is actually a rather defensible way to handle this, since clients will be expected to have a default, and the implementation is allowed to add new cases.
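As a sketch of that shape (the enum and its cases are hypothetical): the library publishes a non-frozen public enum, and clients handle the cases they know about. Within one module the switch below is exhaustive; a client in another module would additionally need an `@unknown default:` arm, which is exactly the "clients are expected to have a default" contract.

```swift
// Hypothetical library error; a non-frozen (resilient) public enum
// is allowed to grow new cases in later library versions.
public enum ParseError: Error {
    case unexpectedEOF
    case badToken(String)
}

func diagnose(_ error: ParseError) -> String {
    switch error {
    case .unexpectedEOF:
        return "input ended early"
    case .badToken(let t):
        return "bad token: \(t)"
    // Cross-module, clients of a resilient enum would also need an
    // `@unknown default:` arm here to absorb cases added later.
    }
}
```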

> Programmers often have an instinct to obsess over error taxonomies that is very rarely directed at solving any real problem; it is just self-imposed busy-work.

Indeed, no denying that.

>> One thing that I’m personally very concerned about is in the systems programming domain.  Systems code is sort of the classic example of code that is low-level enough and finely specified enough that there are lots of knowable things, including the failure modes.
> Here we are using "systems" to mean "embedded systems and kernels".  And frankly even a kernel is a large enough system that they don't want to exhaustively switch over failures; they just want the static guarantees that go along with a constrained error type.

I think you’re over-focusing on exhaustively handling cases.  If you use errno as one (poor) proxy for what kernels care about, it is totally reasonable to handle a few of the errors produced by a call (EAGAIN and EINTR are two common examples) but not all of them.  It is useful to get code completion on enum members, and it is also useful to get an indication from the compiler that something can/will never be thrown.  You give up both with full type erasure.
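Here is a sketch of that errno-style pattern (the `SysError` enum and `readOnce` function are made up for illustration): the two known-transient errors get retried, and everything else is handled uniformly.

```swift
// Hypothetical kernel-ish call: handle a few known transient errors
// (EAGAIN/EINTR analogues) specifically, and the rest uniformly.
enum SysError: Error {
    case eagain, eintr, eio
}

var attempts = 0
func readOnce() throws -> Int {
    attempts += 1
    if attempts < 3 { throw SysError.eintr }  // transient twice, then succeed
    return 42
}

func readRetrying(maxTries: Int = 5) -> Int? {
    for _ in 0..<maxTries {
        do {
            return try readOnce()
        } catch SysError.eagain, SysError.eintr {
            continue       // transient: retry
        } catch {
            return nil     // anything else: give up
        }
    }
    return nil
}
```

If `readOnce` declared a typed `throws`, the compiler could both complete `.eagain`/`.eintr` in those catch patterns and flag a pattern for an error the call can never produce.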

>> JohnMC has some ideas on how to change code generation for ‘throws’ to avoid this problem, but I don’t understand his ideas enough to know if they are practical and likely to happen or not.
> Essentially, you give Error a tagged-pointer representation to allow payload-less errors on non-generic error types to be allocated globally, and then you can (1) tell people to not throw errors that require allocation if it's vital to avoid allocation (just like we would tell them today not to construct classes or indirect enum cases) and (2) allow a special global payload-less error to be substituted if error allocation fails.
> Of course, we could also say that systems code is required to use a typed-throws feature that we add down the line for their purposes.  Or just tell them to not use payloads.  Or force them to constrain their error types to fit within some given size.  (Note that obsessive error taxonomies tend to end up with a bunch of indirect enum cases anyway, because they get recursive, so the allocation problem is very real whatever we do.)
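The recursion point in that parenthetical can be sketched in a few lines (the taxonomy below is hypothetical): once an error case wraps another error of the same type, the enum must be `indirect`, and each nested payload is boxed on the heap.

```swift
// A hypothetical error taxonomy that nests errors must be `indirect`,
// so constructing a nested value heap-allocates its payload.
indirect enum ValidationError: Error {
    case empty(field: String)
    case wrapped(context: String, underlying: ValidationError)
}

let nested = ValidationError.wrapped(
    context: "user profile",
    underlying: .empty(field: "name"))

func depth(_ e: ValidationError) -> Int {
    switch e {
    case .empty:
        return 1
    case .wrapped(_, let inner):
        return 1 + depth(inner)
    }
}
```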

The issue isn’t that allocation can fail; it is that allocation runs in nondeterministic time even when it succeeds.  This is highly problematic for realtime systems.

Your answer basically says to me that you’re OK with offering a vastly more limited programming model to people who care about realtime guarantees, one that gives up performance needlessly even in use cases that probably care about low-level performance.  I agree that it would not be the end of the world (that would be for systems programmers to have to keep using C :-), but it certainly falls short of the best possible system that we could build.
