[swift-evolution] [Concurrency] Fixing race conditions in async/await example
Chris Lattner
clattner at nondot.org
Sat Sep 2 13:05:50 CDT 2017
On Aug 31, 2017, at 5:48 PM, Pierre Habouzit via swift-evolution <swift-evolution at swift.org> wrote:
>> Unlike the proposed Future code, the async code is not naturally parallel: in the running example, the following lines from the async code run in series, i.e. each await blocks:
>>
>> let dataResource = await loadWebResource("dataprofile.txt")
>> let imageResource = await loadWebResource("imagedata.dat")
>> The equivalent lines using the proposed Future:
>> let dataResource = loadWebResource("dataprofile.txt")
>> let imageResource = loadWebResource("imagedata.dat")
>> run in parallel and are therefore potentially faster, assuming that resources, like cores and I/O, are available.
>>
>> Therefore you would be better off using a Future than an async, so why provide async unless you can make a convincing argument that it allows you to write a better Future?
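[For readers following the thread later: the contrast being debated can be sketched with the concurrency features that eventually shipped in Swift, where `async let` plays the role the quoted text assigns to a Future. `loadWebResource` and `Resource` below are hypothetical stand-ins for the names in the running example, not real APIs; the sleep just simulates I/O latency.]

```swift
// Hypothetical stand-ins for the example's names; not real APIs.
struct Resource { let name: String }

func loadWebResource(_ path: String) async -> Resource {
    // Simulate network latency.
    try? await Task.sleep(nanoseconds: 100_000_000)
    return Resource(name: path)
}

// Sequential: the second load does not start until the first finishes,
// because each `await` suspends until its result is ready.
func loadSequentially() async -> (Resource, Resource) {
    let dataResource  = await loadWebResource("dataprofile.txt")
    let imageResource = await loadWebResource("imagedata.dat")
    return (dataResource, imageResource)
}

// Parallel: `async let` starts both loads before either is awaited,
// which is the Future-like behavior the quoted text argues for.
func loadConcurrently() async -> (Resource, Resource) {
    async let dataResource  = loadWebResource("dataprofile.txt")
    async let imageResource = loadWebResource("imagedata.dat")
    return await (dataResource, imageResource)
}
```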
>
> In practice, you would really want all resources that you load from the internet to be handled from the same execution context (whether it's a dispatch queue, a thread, or a CFRunLoop is absolutely not relevant), because otherwise it's very likely that you'll end up contending concurrent executions of these loads through locks and the various shared resources in use (like your NIC and the networking stack).
I completely agree with this point, and this is one of the nice things about the model as described: it allows the programmer to describe concurrent abstractions that make sense for their app, WITHOUT being bound to how the async operations themselves are implemented. I don’t expect the whole world to be reimplemented, but if it were, I’d expect that there would end up being one “network card actor” for each physical NIC that is present. No matter how many concurrent requests are started by clients, they’d be serially enqueued on the actor’s queue, and the requests would presumably be serviced in parallel using efficient shared mutable state WITHIN the network card actor.
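[A minimal sketch of the "network card actor" idea using the actor feature that later shipped in Swift. `NetworkCardActor` and its `inFlight` state are hypothetical illustrations, not a real API: the point is only that calls from any number of concurrent clients are serially enqueued onto the actor, so its mutable state needs no lock.]

```swift
// One actor per physical NIC; all client requests funnel through it.
actor NetworkCardActor {
    // Shared mutable state, protected by actor isolation rather than a lock.
    private var inFlight: [String] = []

    // Calls from concurrent clients are serialized on the actor's queue.
    func enqueue(_ request: String) -> Int {
        inFlight.append(request)
        return inFlight.count
    }
}
```

Clients on any task simply `await nic.enqueue(...)`; the runtime serializes the calls, so ten racing tasks still produce ten well-ordered appends.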
> On real systems, going wide concurrently is not often a win, provided the thing you're doing is complex enough.
> The reason for it is that most of these subsystems, and loadWebResource is a perfect example of this, use constrained resources that use locks and the like.
I agree with you if you’re talking about a single machine, but if you’re talking about distributing across a cluster of a thousand machines, then I disagree.
-Chris