> On Nov 1, 2017, at 8:13 PM, Helge Heß via swift-server-dev <swift-server-dev@swift.org> wrote:
>
> b) A complete 'queue free' model. I'm doing this in my current approach. It is kinda lock-free, but has a lot of async dispatching. The base performance overhead is (or should be) pretty high, but scalability is close to optimal (theoretically making use of as many CPUs as possible).
>
> I'm not sure how well this works on Linux. Are DispatchQueues also cheap on Linux, or does the current implementation create a new thread for each?

From what I've seen, they're pretty expensive on Linux.

I had a previous implementation (prior to https://github.com/carlbrown/HTTPSketch/commit/fd083cd30 ) that created a new `DispatchQueue` for every `accept()` (i.e. each new TCP connection), and it performed horribly on Linux. It got much, much faster once I created a fixed pool of queues and reused/shared them across connections.

-Carl
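
P.S. For anyone curious what the pooled-queue approach looks like, here is a minimal sketch. It is not the actual HTTPSketch code; the `QueuePool` type, `next()`, and the accept-loop helpers are made-up names for illustration only.

import Dispatch

// Minimal sketch of the pooled-queue idea described above. Not the actual
// HTTPSketch code; type and method names here are made up for illustration.
final class QueuePool {
    private let queues: [DispatchQueue]
    private var nextIndex = 0
    private let indexLock = DispatchQueue(label: "queue-pool.index")

    // Create a small, fixed number of serial queues up front.
    init(count: Int = 8) {
        queues = (0..<count).map { DispatchQueue(label: "connection-queue.\($0)") }
    }

    // Hand out queues round-robin so connections share the fixed pool
    // instead of each allocating its own queue.
    func next() -> DispatchQueue {
        return indexLock.sync {
            let queue = queues[nextIndex]
            nextIndex = (nextIndex + 1) % queues.count
            return queue
        }
    }
}

// Hypothetical accept loop: every accepted connection reuses a pooled queue.
// let pool = QueuePool()
// while let fd = acceptNextConnection() {   // acceptNextConnection() is assumed
//     pool.next().async { handleConnection(fd) }
// }

The pool size is a tuning knob; something on the order of the number of CPU cores is a reasonable starting point, since beyond that additional queues mostly just add contention.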