[swift-evolution] [Concurrency] Fixing race conditions in async/await example

Pierre Habouzit phabouzit at apple.com
Thu Aug 31 19:48:26 CDT 2017


> On Aug 27, 2017, at 6:02 PM, Howard Lovatt via swift-evolution <swift-evolution at swift.org> wrote:
> 
> Async/await is very similar to the proposed Future (as I posted earlier) with regard to completion-handler code; they both re-write the imported completion-handler function using a closure. The relevant sentence from the Async Proposal is:
> 
> "Under the hood, the compiler rewrites this code using nested closures ..."
> 
> Unlike the proposed Future code, the async code is not naturally parallel; in the running example the following lines from the async code run in series, i.e. await blocks:
> 
>   let dataResource  = await loadWebResource("dataprofile.txt")
>   let imageResource = await loadWebResource("imagedata.dat")
> The equivalent lines using the proposed Future:
>   let dataResource  = loadWebResource("dataprofile.txt")
>   let imageResource = loadWebResource("imagedata.dat")
> run in parallel and are therefore potentially faster, assuming that resources like cores and IO are available.
> 
> Therefore you would be better off using a Future than an async; so why provide async unless you can make a convincing argument that it allows you to write a better Future?

On real systems, going wide concurrently (and I deliberately do not call this parallelism; parallelism is about doing the same computation on different CPUs) is often not a win once the thing you're doing is complex enough.
The reason is that most of these subsystems, and loadWebResource is a perfect example, rely on constrained resources that are protected by locks and the like.

In practice, you would really want all resources that you load from the internet to be handled from the same execution context (whether it's a dispatch queue, a thread or a CFRunLoop is absolutely not relevant), because concurrent executions of these loads will very likely end up contending on locks and on the various shared resources in use (like your NIC and the networking stack).
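
A minimal sketch of that idea in dispatch terms (my illustration, not code from this thread; the type and queue label are hypothetical):

    import Dispatch

    struct Resource { let path: String }

    final class ResourceLoader {
        // One serial queue is the single execution context for every
        // web-resource load, so concurrent loads don't pile up on the
        // locks inside the networking stack.
        private let networkContext = DispatchQueue(label: "com.example.network-loads")

        func loadWebResource(_ path: String,
                             completion: @escaping (Resource) -> Void) {
            networkContext.async {
                // ... perform the actual load here ...
                completion(Resource(path: path))
            }
        }
    }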

What we've learned over the last 10 years of working on a system that is massively concurrent because libdispatch makes it too easy (some will say "thanks to libdispatch", but they aren't the ones who have to cope with what it does to the OS :P) is that going wide by default is a design mistake, and one that is very difficult to optimize once it goes bad.

The right design is to have a very small number of execution-context silos, as few as you can, and to parallelize (and here I do mean parallelize) only the operations that benefit from several compute units, once you have proved that they scale linearly (and scaling linearly is VERY hard, even for low values of NCPU, as soon as what you do is complex enough).
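
As a hedged illustration of that distinction (mine, not Pierre's code; the names are made up): ordinary work stays on one serial "silo" queue, and only a compute kernel that has been measured to scale is fanned out across CPUs.

    import Dispatch

    // The silo: one serial execution context for this subsystem.
    let imagingSilo = DispatchQueue(label: "com.example.imaging")

    // Hypothetical per-tile kernel; assume profiling has shown it scales
    // close to linearly with the number of CPUs available.
    func dewarpTile(_ index: Int) { /* pure compute on tile `index` */ }

    func dewarpAllTiles(count: Int, completion: @escaping () -> Void) {
        imagingSilo.async {
            // Parallelism in the strict sense: the same computation on
            // different CPUs, fanned out only for this measured hot loop.
            DispatchQueue.concurrentPerform(iterations: count) { i in
                dewarpTile(i)
            }
            completion()
        }
    }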

To me (but I'm catching up with the thread, and maybe there have been refinements in that regard already), the looseness of the "execution context" that will run your completion handler here is a liability for the implementation of the runtime. Go, for example, is not doing that well: for the longest time the environment they recommended had a very small number of physical threads, because of scalability concerns around their goroutine scheduler. I mention Go as an example of a language that implements CSP as a core concept and has had (I'm not quite sure where they stand these days) quite a lot of trouble scaling the implementation.


IMSHO dispatch_get_global_queue() is in practice one of the worst things that the dispatch API provides, because despite all the best efforts of the runtime, there isn't enough information at runtime about your operations/actors/... to understand what your intent is and optimize for it. This means the language should make sure that (1) the anti-pattern is not the default behavior and (2) the interface provides a way to give and propagate the hints (dependency relationships, ordering, execution context, priorities, ...) that the runtime will potentially need upfront.
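
A small sketch of point (2) in dispatch terms (my illustration; the labels and QoS choices are invented): instead of throwing blocks at dispatch_get_global_queue(), hand the runtime its hints up front through labeled queues, explicit QoS, and a target hierarchy that encodes where the work belongs.

    import Dispatch

    // The subsystem's root execution context, with an explicit priority hint.
    let networkingRoot = DispatchQueue(label: "com.example.networking",
                                       qos: .utility)

    // Sub-queues target the root, so execution context, ordering, and
    // priority are known to the runtime instead of being rediscovered
    // after the fact on an anonymous global concurrent queue.
    let downloadQueue = DispatchQueue(label: "com.example.networking.downloads",
                                      target: networkingRoot)
    let parsingQueue  = DispatchQueue(label: "com.example.networking.parsing",
                                      qos: .userInitiated,
                                      target: networkingRoot)

    downloadQueue.async {
        // enqueued with a known context and priority, not onto the
        // anonymous global pool
    }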

> 
>   -- Howard.
> 
> On 28 August 2017 at 09:59, Adam Kemp <adam.kemp at apple.com <mailto:adam.kemp at apple.com>> wrote:
> This example still has nested closures (to create a Future), and still relies on a synchronous get method that will block a thread. Async/await does not require blocking any threads.
> 
> I’m definitely a fan of futures, but this example isn’t even a good example of using futures. If you’re using a synchronous get method then you’re not using futures properly. They’re supposed to make it easy to avoid writing blocking code. This example just does the blocking call on some other thread.
> 
> Doing it properly would show the benefits of async/await because it would require more nesting and more complex error handling. By simplifying the code you’ve made a comparison between proper asynchronous code (with async/await) and improper asynchronous code (your example).
> 
> That tendency to want to just block a thread to make it easier is exactly why async/await is so valuable. You get simple code while still doing it correctly. 
> 
> --
> Adam Kemp
> 
> On Aug 27, 2017, at 4:00 PM, Howard Lovatt via swift-evolution <swift-evolution at swift.org <mailto:swift-evolution at swift.org>> wrote:
> 
>> The running example used in the white paper coded using a Future is:
>> 
>> func processImageData1() -> Future<Image> {
>>     return AsynchronousFuture { _ -> Image in
>>         let dataResource  = loadWebResource("dataprofile.txt") // dataResource and imageResource run in parallel.
>>         let imageResource = loadWebResource("imagedata.dat")
>>         let imageTmp      = decodeImage(
>>             dataResource.get  ?? Resource(path: "Default data resource or prompt user"),
>>             imageResource.get ?? Resource(path: "Default image resource or prompt user"))
>>         let imageResult   = dewarpAndCleanupImage(
>>             imageTmp.get ?? Image(dataPath: "Default image or prompt user",
>>                                   imagePath: "Default image or prompt user"))
>>         return imageResult.get ?? Image(dataPath: "Default image or prompt user",
>>                                         imagePath: "Default image or prompt user")
>>     }
>> }
>> 
>> This also avoids the pyramid of doom; the pyramid is avoided by converting completion handlers into either an async or a Future, i.e. it is the importer that eliminates the nesting by translating the code automatically.
>> 
>> This example using Future also demonstrates three advantages of Future: they are naturally parallel (the dataResource and imageResource lines run in parallel), they time out automatically (get returns nil if the Future has taken too long), and if there is a failure (for any reason, including timeout) they provide a way of either detecting the failure or supplying a default (get returns nil on failure).
>> 
>> There are three other advantages a Future has that this example doesn’t show: control over which thread the Future runs on, Futures can be cancelled, and debugging information is available.
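
A minimal sketch of a Future with those properties (my code, not Howard's actual library; the default timeout and the blocking semantics of get are assumptions based on the description above, and Howard's initializer also hands the closure an argument, the `_` in his example, which this sketch omits):

    import Dispatch

    final class AsynchronousFuture<T> {
        private let semaphore = DispatchSemaphore(value: 0)
        private let timeout: DispatchTimeInterval
        private var result: T?
        private var isCancelled = false

        init(queue: DispatchQueue = .global(),
             timeout: DispatchTimeInterval = .seconds(5),
             _ body: @escaping () -> T?) {
            self.timeout = timeout
            queue.async {
                self.result = body()   // nil signals failure
                self.semaphore.signal()
            }
        }

        /// Blocks the caller until the value arrives, then returns it;
        /// returns nil on failure, cancellation, or timeout.
        var get: T? {
            if isCancelled { return nil }
            guard semaphore.wait(timeout: .now() + timeout) == .success else { return nil }
            semaphore.signal()         // let later calls to get succeed too
            return result
        }

        func cancel() { isCancelled = true }
    }

With that shape, the `dataResource.get ?? Resource(path: ...)` lines above read as "wait up to the timeout, then fall back to a default".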
>> 
>> You could imagine `async` as syntactic sugar for Future, e.g. the above Future example could be:
>> 
>> func processImageData1() async -> Image {
>>     let dataResource  = loadWebResource("dataprofile.txt") // dataResource and imageResource run in parallel.
>>     let imageResource = loadWebResource("imagedata.dat")
>>     let imageTmp      = decodeImage(
>>         dataResource.get  ?? Resource(path: "Default data resource or prompt user"),
>>         imageResource.get ?? Resource(path: "Default image resource or prompt user"))
>>     let imageResult   = dewarpAndCleanupImage(
>>         imageTmp.get ?? Image(dataPath: "Default image or prompt user",
>>                               imagePath: "Default image or prompt user"))
>>     return imageResult.get ?? Image(dataPath: "Default image or prompt user",
>>                                     imagePath: "Default image or prompt user")
>> }
>> 
>> Since an async is sugar for Future, the async runs as soon as it is created (as soon as the underlying Future is created) and get returns an optional (cancel and status would also still be present). Then if you want control over threads and timeout they could be arguments to async:
>> 
>> func processImageData1() async(queue: DispatchQueue.main, timeout: .seconds(5)) -> Image { ... }
>> 
>> On Sat, 26 Aug 2017 at 11:00 pm, Florent Vilmart <florent at flovilmart.com <mailto:florent at flovilmart.com>> wrote:
>> Howard, with async/await, the code is flat and you don’t have to use unowned/weak self to prevent hideous cycles in the callbacks.
>> Futures can’t do that.
>> 
>> On Aug 26, 2017, 04:37 -0400, Goffredo Marocchi via swift-evolution <swift-evolution at swift.org <mailto:swift-evolution at swift.org>>, wrote:
>>> With both the now built-in promises in Node 8 and libraries like Bluebird, there has been ample time to evaluate promises and to convert (sometimes auto-convert) libraries that produced callback pyramids of doom, once the flow grows complex, into promise-based chains. Converting to promises seems magical for the simple case, but it can quickly descend into hard-to-follow flows and hard-to-debug errors when you move to non-trivial multi-path scenarios. JS is now addressing this with its implementation of async/await, but the point is that without the full picture any single solution will break horribly in real-life scenarios.
>>> 
>>> Sent from my iPhone
>>> 
>>> On 26 Aug 2017, at 06:27, Howard Lovatt via swift-evolution <swift-evolution at swift.org <mailto:swift-evolution at swift.org>> wrote:
>>> 
>>>> My argument goes like this:
>>>> 
>>>>   1. You don't need async/await to write a powerful future type; you can use the underlying threads just as well, i.e. a future with async/await is no better than a future without.
>>>> 
>>>>   2. Since a future is more powerful (thread control, cancel, and timeout), people should be encouraged to use it; but because async/await is a language feature it will be presumed, incorrectly, to be the best approach, and consequently people will get into trouble with deadlocks because they don't have that control.
>>>> 
>>>>   3. async/await will require some engineering work, will at best bring a mild syntax improvement, and at worst will lead to deadlocks; therefore it just doesn't carry its weight as an addition to Swift.
>>>> 
>>>> Therefore, save some engineering effort and just provide a future library.
>>>> 
>>>> To turn the question round another way, in two forms:
>>>> 
>>>>   1. What can async/await do that a future can't?
>>>> 
>>>>   2. How will future be improved if async/await is added?
>>>> 
>>>> 
>>>>   -- Howard.
>>>> 
>>>> On 26 August 2017 at 02:23, Joe Groff <jgroff at apple.com <mailto:jgroff at apple.com>> wrote:
>>>> 
>>>>> On Aug 25, 2017, at 12:34 AM, Howard Lovatt <howard.lovatt at gmail.com <mailto:howard.lovatt at gmail.com>> wrote:
>>>>> 
>>>>>  In particular a future that is cancellable is more powerful than the proposed async/await.
>>>> 
>>>> It's not more powerful; the features are to some degree disjoint. You can build a Future abstraction and then use async/await to sugar code that threads computation through futures. Getting back to Jakob's example, someone (maybe the Clang importer, maybe Apple's framework developers in an overlay) will still need to build infrastructure on top of IBActions and other currently ad-hoc signalling mechanisms to integrate them into a more expressive coordination framework.
>>>> 
>>>> -Joe
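
A hedged sketch of that combination, written in present-day Swift concurrency syntax rather than the primitives of the 2017 proposal (the Future type and its API here are invented for illustration):

    import Dispatch

    final class Future<T> {
        private let lock = DispatchQueue(label: "com.example.future-state")
        private var value: T?
        private var observers: [(T) -> Void] = []

        func fulfill(_ newValue: T) {
            lock.async {
                guard self.value == nil else { return }   // fulfill at most once
                self.value = newValue
                self.observers.forEach { $0(newValue) }
                self.observers.removeAll()
            }
        }

        func observe(_ observer: @escaping (T) -> Void) {
            lock.async {
                if let value = self.value { observer(value) }
                else { self.observers.append(observer) }
            }
        }

        // async/await as sugar over the callback-based future: the caller
        // suspends instead of blocking a thread or nesting callbacks.
        var result: T {
            get async {
                await withCheckedContinuation { continuation in
                    observe { continuation.resume(returning: $0) }
                }
            }
        }
    }

An importer or overlay could fill such a future from a completion-handler API, and code that does `await future.result` reads straight-line while the same callbacks drive it underneath.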
>>>> 
>>>> _______________________________________________
>>>> swift-evolution mailing list
>>>> swift-evolution at swift.org <mailto:swift-evolution at swift.org>
>>>> https://lists.swift.org/mailman/listinfo/swift-evolution <https://lists.swift.org/mailman/listinfo/swift-evolution>
>> 
>> -- 
>> -- Howard.
>> _______________________________________________
>> swift-evolution mailing list
>> swift-evolution at swift.org <mailto:swift-evolution at swift.org>
>> https://lists.swift.org/mailman/listinfo/swift-evolution <https://lists.swift.org/mailman/listinfo/swift-evolution>
> 
> _______________________________________________
> swift-evolution mailing list
> swift-evolution at swift.org
> https://lists.swift.org/mailman/listinfo/swift-evolution
