[swift-server-dev] HTTP API v0.1.0

Helge Heß me at helgehess.eu
Wed Nov 1 20:12:53 CDT 2017


On Nov 2, 2017, at 1:21 AM, Johannes Weiß <johannesweiss at apple.com> wrote:
> that's really cool, thanks for putting that together and sharing it! And that sounds great about the GCD implementation, by the way based on DispatchSources or DispatchIO?

DispatchIO channels (and a DispatchSource for the listener, of course). I mean, it doesn’t really matter that much. With the current API, sources may be more efficient because you may be able to avoid copying into DispatchData objects.


I think if we really want async, we need a few API adjustments to make async efficient enough. E.g. maybe pass queues around (probably not a plain DispatchQueue if we don’t want to tie the API to GCD, but some context object which ensures synchronization - that would be efficient for the sync implementation too).
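Just to illustrate what I mean - a minimal sketch of such a context, assuming GCD underneath; the names `ExecutionContext`, `DispatchQueueContext` and `InlineContext` are made up and not part of the proposed API:

  import Dispatch

  /// Abstracts "something that can run blocks in a synchronized fashion",
  /// without exposing GCD in the API surface.
  protocol ExecutionContext {
    func async(_ work: @escaping () -> Void)
  }

  /// GCD-backed context for the async implementations.
  struct DispatchQueueContext : ExecutionContext {
    let queue : DispatchQueue
    func async(_ work: @escaping () -> Void) {
      queue.async { work() }
    }
  }

  /// Runs the block inline - cheap enough for the sync implementation.
  struct InlineContext : ExecutionContext {
    func async(_ work: @escaping () -> Void) { work() }
  }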

My current async implementation has a lot of dispatch overhead, because in the current API the callbacks can be called from essentially any queue (which affects every read & write). There are a few models that can be used for doing async:


a) The single ‘worker queue’ model done by Node.js / Noze.io. In Swift this is not as bad as in Node.js, because you can dispatch work onto a different queue and then hop back to the single ‘main queue’ which ensures the synchronization of the IO stack (like shown here: http://noze.io/noze4nonnode/). When calling done callbacks, you dispatch back to that single queue.

This can become congested because of the single global main queue. In Node they deal w/ this by forking server processes. Which is kinda fine and lame at the same time.
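To illustrate a) - a minimal sketch, assuming GCD; `ioQueue`, `workQueue` and `handle` are just made-up names for the example:

  import Dispatch

  let ioQueue   = DispatchQueue(label: "io.main")            // the single worker queue
  let workQueue = DispatchQueue.global(qos: .userInitiated)  // CPU heavy stuff

  func handle(request: String, done: @escaping (String) -> Void) {
    // we are called on ioQueue; push the expensive part somewhere else ...
    workQueue.async {
      let body = request.uppercased()   // stand-in for real work
      // ... and dispatch back to the single IO queue before touching
      // any connection state
      ioQueue.async {
        done(body)
      }
    }
  }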


b) A completely ‘queue free’ model. This is what I’m doing in my current approach. It is kinda lock free, but has a lot of async dispatching. The base performance overhead is (or should be) pretty high, but scalability is close to optimal (theoretically making use of as many CPUs as possible).

Not sure how well this works on Linux. Are DispatchQueues also cheap on Linux, or does the current implementation create a new thread for each one?
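For b) the setup looks roughly like this - a sketch only, assuming one serial queue per connection; `Connection` and its members are invented for the example:

  import Dispatch

  final class Connection {
    // one serial queue per connection guards just that connection's state;
    // there is no shared 'main' IO queue, hence more dispatch hops overall
    let queue : DispatchQueue

    init(id: Int) {
      queue = DispatchQueue(label: "connection.\(id)")
    }

    func read(_ cb: @escaping (DispatchData) -> Void) {
      queue.async {
        // ... perform the actual read; the callback fires on this
        // connection's own queue, i.e. from the caller's perspective on
        // 'essentially any queue'
        cb(DispatchData.empty)
      }
    }
  }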


c) Something like a), but with multiple worker queues. Kinda like the Node solution, but w/o the separate processes. This needs an API change: all the callbacks need to get passed ‘their’ main queue (because it is not a global anymore).
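Something along these lines - just a sketch of what the API change could look like; `IncomingMessage`, `ServerResponse` and `RequestHandler` are placeholder names, not the actual HTTP API types:

  import Dispatch

  struct IncomingMessage { /* ... */ }
  struct ServerResponse  { /* ... */ }

  typealias RequestHandler =
    ( _ request  : IncomingMessage,
      _ response : ServerResponse,
      _ queue    : DispatchQueue ) -> Void  // 'their' worker queue

  let handler : RequestHandler = { request, response, queue in
    DispatchQueue.global().async {
      // do the expensive work off the worker queue ...
      queue.async {
        // ... and hop back onto the worker queue owning this connection
        // before writing the response
      }
    }
  }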


I don’t know. a) is the easiest and maybe good enough. Scalability should be way better than the threaded sync setup.
My more complex b) version can be easily changed to a).


While working on this I’m kinda wondering whether we should indeed have multiple implementations that can be used for specific purposes. E.g. for performance benchmarks (which people seem to like) you would want the sync implementation for the best raw throughput. For a scalability benchmark, a) is probably best for regular setups, and maybe b) for 256-CPU servers.
c) is a sweet spot, but complicates the API (though I think UIKit etc. also have the model of passing queues alongside delegates/callbacks).


Oh, and we have signatures like this:

  func doSomething(onDone cb : @escaping () -> Void)

This would be way better

  func doSomething(onDone cb : (() -> Void)? = nil)

for the case where cb is nil (no empty block needs to be scheduled and synchronized at all).
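In the implementation that allows skipping the dispatch entirely - a minimal sketch, assuming a GCD backend; `ioQueue` and `write` are made-up names:

  import Dispatch

  let ioQueue = DispatchQueue(label: "io.main")

  func write(_ data: DispatchData, onDone cb: (() -> Void)? = nil) {
    ioQueue.async {
      // ... perform the actual write ...
      // only pay for the extra dispatch back if somebody actually cares
      if let cb = cb {
        DispatchQueue.global().async { cb() }
      }
    }
  }

  write(DispatchData.empty)   // common case: no completion block at all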


Maybe I’ll manage to finish it up over the weekend, not sure yet.

hh
