[swift-server-dev] HTTP API v0.1.0

Johannes Weiß johannesweiss at apple.com
Thu Nov 2 14:24:38 CDT 2017


Hi Carl,

> On 2 Nov 2017, at 10:53 am, Carl Brown <carl.brown.swift at linuxswift.com> wrote:
> 
>> 
>> On Nov 2, 2017, at 12:33 PM, Johannes Weiß <johannesweiss at apple.com> wrote:
>> 
>> Hi Carl,
>> 
>>> On 2 Nov 2017, at 9:15 am, Carl Brown via swift-server-dev <swift-server-dev at swift.org> wrote:
>>> 
>>> 
>>>> On Nov 1, 2017, at 8:13 PM, Helge Heß via swift-server-dev <swift-server-dev at swift.org> wrote:
>>>> b) A complete ‘queue free’ model. I’m doing this in my current approach. It is kinda lock free, but has a lot of async dispatching. The base performance overhead is/should-be pretty high, but scalability is close to optimal (theoretically making use of as many CPUs as possible).
>>>> 
>>>> Not sure how well this goes on Linux. Are DispatchQueues also cheap on Linux, or does the current implementation create a new thread for each?
>>> 
>>> From what I've seen, they're pretty expensive on Linux.
>>> 
>>> I had a previous implementation (prior to https://github.com/carlbrown/HTTPSketch/commit/fd083cd30 ) that created a new `DispatchQueue` for every `accept()` (i.e. each new TCP connection), and it performed horribly on Linux.  It got much, much faster once I created a fixed pool of queues and reused/shared them between connections.
>> 
>> Did you have anything that blocks on the queues? If yes, then that's probably why it performed badly (it needed to spawn lots of threads).
>> 
>> The queues themselves should really be cheap but if there's fresh threads needed for lots of queues all the time, then performance will be bad. And probably terrible on Linux.
> 
> They shouldn't have been blocked in the "waiting around with nothing to do" sense, but they should all have been really busy (my high load test suite uses 60-80 simultaneous curl processes hitting an 8-core server).  There were lots of threads spawned, according to `top`.  When I constrained the number of queues to be roughly the same as the number of cores, performance got much better.

Yes, constraining the number of base queues (ones that aren't targeting others) is a good idea. That way you can give every request its own queue, but that request queue should target a base queue. The basic idea could be:

- spawn a fixed number of base queues (ideally around the number of CPUs, but a constant like 16 should work)
- create a new queue per request that targets one of the base queues (round robin should do)

That should give you much better performance as long as nothing blocks any one queue for long.
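The scheme above could be sketched in Swift roughly like this (this is just an illustration of the idea; the `QueuePool` type and its names are made up, not from any existing framework):

```swift
import Dispatch

// Sketch of the scheme described above: a fixed pool of "base" queues
// (roughly one per CPU core), plus a cheap per-request queue that
// targets one of them. Names like `QueuePool` are illustrative only.
final class QueuePool {
    private let baseQueues: [DispatchQueue]
    private var nextIndex = 0
    // Serial queue used as a lock around the round-robin counter.
    private let indexLock = DispatchQueue(label: "pool.index.lock")

    init(count: Int) {
        baseQueues = (0..<count).map { DispatchQueue(label: "base-\($0)") }
    }

    // Create a fresh queue for one request. It borrows a base queue's
    // thread via `target:`, so creating one per request stays cheap.
    func makeRequestQueue(label: String) -> DispatchQueue {
        let index: Int = indexLock.sync {
            let i = nextIndex
            nextIndex = (nextIndex + 1) % baseQueues.count
            return i
        }
        return DispatchQueue(label: label, target: baseQueues[index])
    }
}

// Example: handle 100 "requests", each on its own queue, while only
// `count` base queues (and thus roughly that many threads) do the work.
let pool = QueuePool(count: 4)
let group = DispatchGroup()
let counterQueue = DispatchQueue(label: "counter")
var handled = 0
for i in 0..<100 {
    pool.makeRequestQueue(label: "request-\(i)").async(group: group) {
        counterQueue.sync { handled += 1 }
    }
}
group.wait()
print(handled)
```

The point of targeting is that a per-request queue inherits its base queue's execution context instead of pressuring GCD to spin up a thread per queue, which matches the "fixed pool" behaviour you saw performing well.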


> I should say that this was with Swift 3.1.X. I haven't re-run that test since Swift 4.0 shipped.  I don't know if there were any changes that would make for a different result now.

Definitely retry on Swift 4: the GCD implementation on Linux changed from using the kqueue adaptor to using epoll directly (I think that was after Swift 3.1). Also, bugs like https://bugs.swift.org/browse/SR-5759 have been fixed in Swift 4.0.

-- Johannes

> -Carl
> 
> 
>> 
>> -- Johannes
>> 
>> 
>>> 
>>> -Carl
>>> 
>>> _______________________________________________
>>> swift-server-dev mailing list
>>> swift-server-dev at swift.org
>>> https://lists.swift.org/mailman/listinfo/swift-server-dev
