[swift-dev] [arc optimization] Why doesn't enum destructuring use guaranteed references?
Félix Cloutier
felixcloutier at icloud.com
Sat Dec 30 12:21:39 CST 2017
> Le 29 déc. 2017 à 20:50, Michael Gottesman <mgottesman at apple.com> a écrit :
>
> No worries. Happy to help = ). If you are interested, I would be happy to help guide you in implementing one of these optimizations.
That sounds fun. I'll have to check with my manager after the holidays.
> The main downside is that in some cases you do want to have +1 parameters, namely places where you are forwarding values into a function for storage. An example of such a case is a value passed to a setter or an initializer. I would think of it more as an engineering trade-off.
>
> I am currently experimenting with changing normal function arguments to +0 for this purpose, leaving initializers and setters as taking values at +1. Regardless of the default, we can always provide an attribute for the purpose of allowing the user to specify such behavior.
It sounds like flexible parameter ownership rules wouldn't have much overhead if they can be user-specified (at some point in the future). Would it be feasible to use escape analysis to decide whether a parameter should be +0 or +1?
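(For the benefit of anyone following along, here's a rough illustration of the two conventions. The comments are my own mental model rather than the actual SIL-level rules, so take them with a grain of salt.)

class Box { var value = 0 }

// +0 ("guaranteed"): the caller keeps its reference alive across the
// call, so a callee that only reads the value needs no ARC traffic.
func reader(box: Box) -> Int {
    return box.value            // no retain/release needed here
}

// +1 ("owned"): the caller hands its reference over, and the callee
// becomes responsible for releasing it. This is the natural fit for
// initializers and setters that store the value, since ownership can
// be forwarded into storage for free.
class Holder {
    var stored: Box
    init(box: Box) {
        stored = box            // conceptually, ownership moves into `stored`
    }
}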
More ARC questions! I remember that some time ago, someone else (John McCall?) said that references aren't tied to a scope. This was in the context of deterministic deallocation, and the point was that, unlike in C++, an object could potentially be released immediately after its last use in the function rather than at the end of its scope. However, that's not really what you said ("When the assignment to xhat occurs we retain x and at the end of the function [...], we release xhat"), and it's not how Swift 4 behaves as far as I can tell (release is not called "as soon as possible" in my limited testing).
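Here's the kind of test I ran (a minimal sketch; `Tracked` is my own toy class, and the exact behavior of course depends on the optimization level):

class Tracked {
    let name: String
    init(_ name: String) { self.name = name }
    deinit { print("deinit \(name)") }
}

func test() {
    let t = Tracked("t")
    print("last use: \(t.name)")
    // `t` is dead past this point, but in my runs (at -Onone) the
    // release doesn't happen before the next statement executes:
    print("unrelated work")
}

test()
// Prints "unrelated work" before "deinit t": the release sits at the
// end of the function rather than right after the last use.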
It seems to me that this penalizes functional-style programming in at least two ways:
- It defeats tail-call optimization, because the compiler often ends up putting a release call in the tail position.
- Especially in recursion-heavy algorithms, objects can be kept alive for longer than they need to be.
Here's an example where both problems hit:
> class Foo {
>     var value: [Int]
>
>     init() {
>         value = Array<Int>(repeating: 0, count: 10000)
>     }
>
>     init(from foo: Foo) {
>         value = foo.value
>         value[0] += 1
>     }
> }
>
> func silly(loop: Int, foo: Foo) -> Foo {
>     guard loop != 0 else { return foo }
>     let copy = Foo(from: foo)
>     return silly(loop: loop - 1, foo: copy)
> }
>
> print(silly(loop: 10000, foo: Foo()).value[0])
I wouldn't call that "expert Swift code" (indeed, if I come to my senses <https://github.com/apple/swift-evolution/blob/master/proposals/0193-cross-module-inlining-and-specialization.md#proposed-solution> and use a regular loop, it does just fine; see the sketch below), but in this form it needs about 800 MB of memory and can't use tail calls.
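For reference, the loop formulation I alluded to looks something like this (a sketch, with my own name `sensible` and a non-negative `loop` assumed):

func sensible(loop: Int, foo: Foo) -> Foo {
    var current = foo
    for _ in 0..<loop {
        // Reassigning releases the previous Foo right away, so only a
        // couple of instances are live at any moment.
        current = Foo(from: current)
    }
    return current
}

print(sensible(loop: 10000, foo: Foo()).value[0])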
The function has the opposite problem from the pattern matching case. It is specialized such that `foo` is passed at +0: it is retained at `return foo` and released (in the caller) after the call to `silly`. However, the optimal implementation would pass it at +1, do nothing for `return foo`, and release it (in the callee) after the call to `Foo(from: foo)`. (Or, even better, it would release it after `value = foo.value` in the init function.)
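Spelled out with the ARC operations as comments (this is my reading of the optimized output, so the exact placement may be off):

// Today, with `foo` at +0 (caller-owned):
//   guard loop != 0 else {
//       retain foo                      // returning a +0 value at +1
//       return foo
//   }
//   let copy = Foo(from: foo)           // copy is owned by this frame
//   let r = silly(loop: loop - 1, foo: copy)
//   release copy                        // tail-position release: kills TCO
//   return r                            // ...and keeps copy alive across the call
//
// Hypothetical +1 (callee-owned) version:
//   guard loop != 0 else { return foo } // ownership forwarded for free
//   let copy = Foo(from: foo)
//   release foo                         // foo dies right after the copy
//   return silly(loop: loop - 1, foo: copy)
//                                       // ownership of copy handed off;
//                                       // nothing left to do: a real tail call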
I'll note that escape analysis would correctly find that +1 is the "optimal" ownership convention for `foo` in `silly` :), but it won't actually solve either the memory use problem or the missed tail call unless the release call is also moved up.
I guess that the question is: what does Swift gain by keeping objects around for longer than they need to? Is it all about matching C++ or is there something else?
Félix