[swift-evolution] [Concurrency] async/await + actors
mailing at xenonium.com
Mon Sep 4 11:05:34 CDT 2017
> On Sep 4, 2017, at 17:54, Chris Lattner via swift-evolution <swift-evolution at swift.org> wrote:
> On Sep 4, 2017, at 4:19 AM, Daniel Vollmer <lists at maven.de> wrote:
>> first off, I’m following this discussion with great interest, even though my background (simulation software on HPC) has a different focus than the “usual” paradigms Swift seeks to (primarily) address.
>>> On 3. Sep 2017, at 19:26, Chris Lattner via swift-evolution <swift-evolution at swift.org> wrote:
>>>> On Sep 2, 2017, at 11:09 PM, Pierre Habouzit <phabouzit at apple.com> wrote:
>>>>> On Sep 2, 2017, at 12:19 PM, Pierre Habouzit <pierre at habouzit.net> wrote:
>>>>> Is there a specific important use case for being able to target an actor to an existing queue? Are you looking for advanced patterns where multiple actors (each providing disjoint mutable state) share an underlying queue? Would this be for performance reasons, for compatibility with existing code, or something else?
>>>> Mostly for interaction with current designs where being on a given bottom serial queue gives you the locking context for resources naturally attached to it.
>>> Ok. I don’t understand the use-case well enough to know how we should model this. For example, is it important for an actor to be able to change its queue dynamically as it goes (something that sounds really scary to me) or can the “queue to use” be specified at actor initialization time?
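The "queue specified at actor initialization time" idea Chris raises can be sketched with the custom-executor support that later shipped as SE-0392 (Swift 5.9) — an API that did not exist at the time of this thread, so this is a retrospective illustration, not the design under discussion. `DiskCache` and the queue label are hypothetical names.

```swift
import Dispatch

// Hedged sketch, assuming SE-0392 custom actor executors (Swift 5.9+):
// the actor's serial executor is fixed at init, so every message to this
// actor is processed on `queue`, and code that treats that bottom serial
// queue as its locking context keeps working.
actor DiskCache {
    private let queue: DispatchSerialQueue

    init(queue: DispatchSerialQueue) {
        self.queue = queue
    }

    nonisolated var unownedExecutor: UnownedSerialExecutor {
        queue.asUnownedSerialExecutor()
    }

    func touch() -> Bool {
        // Runs on `queue`, not on a runtime-chosen default executor.
        return true
    }
}
```

With this shape, several actors (each owning disjoint mutable state) could share one underlying queue, matching the pattern Pierre describes.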
>> I’m confused, but that may just be me misunderstanding things again. I’d assume each actor has its own (serial) queue that is used to serialize its messages, so the queue above refers to the queue used to actually process the messages the actor receives, correct?
>> Sometimes it’d probably make sense (or even be required) to fix this to a certain queue (in the thread(-pool?) sense), but at other times it may just make sense to execute the messages in-place by the sender if they don’t block, so that no context switch is incurred.
> Do you mean kernel context switch? With well-behaved actors, the runtime should be able to run work items from many different queues on the same kernel thread. The “queue switch cost” is designed to be very, very low. The key thing is that the runtime needs to know when work on a queue gets blocked so the kernel thread can move on to servicing some other queue’s work.
My understanding is that a kernel thread can’t move on to servicing a different queue while a block is executing on it. The runtime already knows when a queue is blocked, and the only way it has to mitigate the problem is to spawn another kernel thread to service the other queues. This is what causes the kernel-thread explosion.
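The blocking behavior described above can be illustrated with plain GCD. In this hedged sketch (the count of 16 and the queue labels are illustrative), each work item parks its worker thread on a semaphore, so the runtime cannot reuse that thread for the other queues and must spawn new ones:

```swift
import Dispatch

// Each of these independent serial queues submits one item that blocks
// outright on a semaphore. Because a blocked thread cannot "move on",
// GCD brings up additional kernel threads to keep servicing the rest —
// the thread-explosion pattern discussed in the thread.
let gate = DispatchSemaphore(value: 0)
let done = DispatchGroup()

for i in 0..<16 {
    DispatchQueue(label: "blocked.\(i)").async(group: done) {
        gate.wait()   // pins one kernel thread per blocked item
    }
}

// Unblock everything so the demo terminates cleanly.
for _ in 0..<16 { gate.signal() }
done.wait()
```

The same 16 items submitted as non-blocking work would be drained by a handful of threads; it is the blocking wait, not the number of queues, that forces the over-subscription.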