[swift-server-dev] Next HTTP API meeting

Chris Bailey BAILEYC at uk.ibm.com
Mon Mar 27 11:12:40 CDT 2017

Hi Michael:

That's correct - the memory barrier/fence is inserted regardless, but the 
impact can be greater if multiple processors are in use, as there's 
additional cost if you have cache line collisions etc. 
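A rough way to see that contention cost is to hammer one shared counter from several threads and compare against a single thread doing the same number of updates. This is an illustrative sketch, not a benchmark from the thread: the Counter type, thread counts, and iteration counts are all made up, and NSLock stands in for whatever synchronization the runtime actually emits.

```swift
import Foundation

// Sketch: the same synchronized update is cheaper uncontended than when
// several threads fight over the same counter (and its cache line).
final class Counter {
    private let lock = NSLock()
    private(set) var value = 0
    func increment() {
        lock.lock(); value += 1; lock.unlock()
    }
}

let iterations = 100_000

// Uncontended: one thread owns the counter.
let single = Counter()
var start = Date()
for _ in 0..<iterations { single.increment() }
let uncontendedTime = Date().timeIntervalSince(start)

// Contended: four threads hammer the same counter.
let shared = Counter()
start = Date()
DispatchQueue.concurrentPerform(iterations: 4) { _ in
    for _ in 0..<(iterations / 4) { shared.increment() }
}
let contendedTime = Date().timeIntervalSince(start)

print("uncontended: \(uncontendedTime)s, contended: \(contendedTime)s")
```

Both loops end with the correct total, but the contended run typically pays extra for the cache line bouncing between cores.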


From:   Michael Chiu <hatsuneyuji at icloud.com>
To:     Tanner Nelson <tanner at qutheory.io>
Cc:     Chris Bailey/UK/IBM at IBMGB, swift-server-dev 
<swift-server-dev at swift.org>
Date:   27/03/2017 15:54
Subject:        Re: [swift-server-dev] Next HTTP API meeting

Actually, according to @hh, the compiler will add the synchronization 
overhead no matter what.

My guess is that, even though the response will always be processed 
by the same thread, there will always be a reference back to the main 
event loop. That isn't obvious to the compiler, so it adds the 
overhead anyway: probably not a lock, but a compare-and-swap.
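For anyone unfamiliar with what "compare-and-swap rather than a lock" means here: an atomic retain is conceptually a CAS retry loop on the reference count. The sketch below is illustrative only; `AtomicWord.compareAndSwap` is a hypothetical stand-in (implemented with a lock purely so it runs as plain Swift) for the single hardware instruction, e.g. `lock cmpxchg` on x86, that the runtime would actually use.

```swift
import Foundation

// Hypothetical stand-in for the hardware compare-and-swap primitive.
// The real thing is one lock-free instruction; a lock is used here
// only so this sketch runs as plain Swift.
final class AtomicWord {
    private let lock = NSLock()
    private var storage: Int
    init(_ value: Int) { storage = value }
    var value: Int { lock.lock(); defer { lock.unlock() }; return storage }
    func compareAndSwap(expected: Int, desired: Int) -> Bool {
        lock.lock(); defer { lock.unlock() }
        guard storage == expected else { return false }
        storage = desired
        return true
    }
}

// A retain is conceptually this loop: read the count, try to publish
// count + 1, and retry if another thread changed it in between.
func retain(_ refCount: AtomicWord) {
    while true {
        let old = refCount.value
        if refCount.compareAndSwap(expected: old, desired: old + 1) { return }
    }
}

let refCount = AtomicWord(1)
retain(refCount)
print(refCount.value) // 2
```

No thread ever blocks waiting for another, which is why this is cheaper than a lock, but the CAS still forces the cache line to be owned exclusively, which is the overhead being discussed.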

Sent from my iPhone

> On Mar 27, 2017, at 7:42 AM, Tanner Nelson <tanner at qutheory.io> wrote:
> @chris in my experience there's been very little passing of 
> request/response between threads. Usually the server accepts, spins up a 
> new thread, and all HTTP parsing/serializing happens on that one thread. 
> Could you specify some examples where requests/responses are being 
> passed between threads?
> That said, it should be fairly easy to implement threading to see what 
> the effects would be. I will look into that. :)
> Tanner
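The model Tanner describes (accept on one thread, then hand each connection to a fresh thread that does all the parsing and serializing) can be sketched roughly as below. The `Connection` type and `handle(_:)` function are illustrative placeholders, not part of any real server API in the thread.

```swift
import Foundation

// Illustrative stand-in for an accepted socket.
struct Connection { let id: Int }

let lock = NSLock()
var handled: [Int] = []

func handle(_ conn: Connection) {
    // All HTTP parsing and response serializing would happen here, on
    // this one thread, so the request never crosses a thread boundary.
    lock.lock(); handled.append(conn.id); lock.unlock()
}

let group = DispatchGroup()
for id in 0..<3 {                    // stand-in for the accept loop
    let conn = Connection(id: id)
    group.enter()
    Thread {                         // spin up a new thread per connection
        handle(conn)
        group.leave()
    }.start()
}
group.wait()
print("handled \(handled.count) connections")
```

If the request and response only ever live on their own thread like this, the compiler still can't see that, which is Michael's point about the overhead being inserted regardless.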

Unless stated otherwise above:
IBM United Kingdom Limited - Registered in England and Wales with number 
Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
