[swift-evolution] [Concurrency] async/await + actors

John McCall rjmccall at apple.com
Sat Sep 2 15:15:39 CDT 2017


> On Sep 2, 2017, at 3:19 PM, Pierre Habouzit via swift-evolution <swift-evolution at swift.org> wrote:
> 
>> On Sep 2, 2017, at 11:15 AM, Chris Lattner <clattner at nondot.org> wrote:
>> 
>> On Aug 31, 2017, at 7:24 PM, Pierre Habouzit <pierre at habouzit.net> wrote:
>>> 
>>> I failed to find the initial mail and am quite late to the party of commenters, but there are parts I don't understand or have questions about.
>>> 
>>> Scalable Runtime
>>> 
>>> [...]
>>> 
>>> The one problem I anticipate with GCD is that it doesn't scale well enough: server developers in particular will want to instantiate hundreds of thousands of actors in their application, at least one for every incoming network connection. The programming model is substantially harmed when you have to be afraid of creating too many actors: you have to start aggregating logically distinct work together to reduce the number of queues, which leads to complexity and loses some of the advantages of data isolation.
>>> 
>>> 
>>> What do you mean by this?
>> 
>> My understanding is that GCD doesn’t currently scale to 1M concurrent queues / tasks.
> 
> It completely does, provided those 1M queues / tasks are organized into several well-known independent contexts.
> Where GCD "fails" is when you target your individual serial queues at the global concurrent queues (a.k.a. root queues), which means "please, pool, do your job": then yes, it doesn't scale, because we take these individual serial queues as proxies for OS threads.
> 
> If, however, you target these queues at either:
> - new serial queues that segregate your actors per subsystem yourself,
> - or some more constrained pool than what the current GCD runtime offers (one where we don't create threads to run your work nearly as eagerly),
> 
> then I don't see why the current implementation of GCD wouldn't scale.
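
Concretely, the pattern Pierre describes might look like this with today's API (a sketch only; the labels and counts are illustrative, not part of any proposal):

    import Dispatch

    // One serial "subsystem" queue per major component; many cheap
    // per-connection serial queues target it instead of a root queue,
    // so none of them stands in as a proxy for its own OS thread.
    let networkSubsystem = DispatchQueue(label: "com.example.subsystem.network")

    func makeConnectionQueue(id: Int) -> DispatchQueue {
        return DispatchQueue(label: "com.example.connection.\(id)",
                             target: networkSubsystem)
    }

    let connections = (0..<100_000).map(makeConnectionQueue)
    connections[42].async {
        // Serialized with respect to connection 42's queue, and
        // ultimately executed on the network subsystem's context.
        print("handling connection 42")
    }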

More importantly, the basic interface of GCD doesn't seem to prevent an implementation from scaling to fill the resource constraints of a machine.  The interface to dispatch queues does not imply any substantial persistent state besides the task queue itself, and tasks have pretty minimal quiescent storage requirements.  Queue-hopping is an unfortunate overhead, but a constant-time overhead doesn't really damage scalability and can be addressed without a major overhaul of the basic runtime interface.  OS threads can be blocked by tasks, but that's not a Dispatch-specific problem, and any solution that would fix it in other runtimes would equally fix it in Dispatch.
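
For illustration only (the queue names are made up), here is the difference between a constant-time hop between queues and a task that pins its worker thread:

    import Dispatch

    let a = DispatchQueue(label: "com.example.a")
    let b = DispatchQueue(label: "com.example.b")

    // Queue hop: a constant-time enqueue; no thread is held in the interim.
    a.async {
        b.async {
            print("hopped from a to b without blocking a thread")
        }
    }

    // Thread blocking: the worker running this task is pinned until b's
    // work completes.  This is the pattern that threatens scalability,
    // and it isn't specific to Dispatch.
    a.async {
        b.sync {
            print("a's worker is blocked while b runs this")
        }
    }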

Now, an arbitrarily-scaling concurrent system is probably a system that's destined to eventually become distributed, and there's a strong argument that unbounded queues are an architectural mistake in a distributed system: instead, every channel of communication should have an opportunity to refuse further work, and the entire system should be architected to handle such failures gracefully.  But I think that can be implemented reasonably on top of a runtime where most local queues are still unbounded and "optimistic".
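
As a sketch of that "refuse further work" shape (entirely hypothetical; a real design would need a policy for retries and failure propagation):

    import Dispatch
    import Foundation

    // A bounded mailbox: submissions over capacity are refused rather
    // than queued without bound, forcing callers to handle the failure.
    final class BoundedMailbox {
        private let queue = DispatchQueue(label: "com.example.mailbox")
        private let lock = NSLock()
        private let capacity: Int
        private var pending = 0

        init(capacity: Int) { self.capacity = capacity }

        /// Returns false when full; the caller must drop the work, retry
        /// with backoff, or propagate the refusal upstream.
        func trySubmit(_ work: @escaping () -> Void) -> Bool {
            lock.lock()
            guard pending < capacity else { lock.unlock(); return false }
            pending += 1
            lock.unlock()

            queue.async {
                work()
                self.lock.lock()
                self.pending -= 1
                self.lock.unlock()
            }
            return true
        }
    }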

John.