[swift-server-dev] Prototype of the discussed HTTP API Spec

Michael Chiu hatsuneyuji at icloud.com
Thu Jun 1 12:27:08 CDT 2017


Hi Johannes,

> Yes, Linux, macOS, FreeBSD and so on offer only a 1:1 threading model from the OS (Windows I think has Fibers). But libmill/dill/venice implement something akin to user-level threads themselves, you don't need kernel support for that at all (to schedule them cooperatively). Check out the man pages of setjmp and longjmp. Or check out Go (goroutines), Lua (coroutines), ... These are all basically cooperatively scheduled threads.
> 
> In other words: With setjmp() you can make a snapshot of the current environment and with longjmp() you can replace the current environment with one that you previously saved. That's like cooperatively switching something like a user-level thread/coroutine/green thread.
> 
> Run for example the code in this stackoverflow example: https://stackoverflow.com/a/14685524
> 
> This document explains it pretty well too: http://libdill.org/structured-concurrency.html

This is off topic but interesting stuff. Your claim is generally true except when any one of the green threads invokes a blocking system call (in this case, kevent/epoll).
setjmp() and longjmp() only take care of context switching _within_ a real thread, making it act like many concurrent (not really) threads; in other words, they only solve the time-sharing problem. So if we have a single kernel thread and green threads A, B, C, D, they can each have an independent stack pointer, but they are all still running on the same thread.

When a blocking system call is invoked, it traps into the kernel, and the real thread backing the current user thread (fiber, coroutine, ...) stays inside the kernel (as a context switch) until the system call returns. So while the real thread is in the kernel, none of the associated "user threads" can do anything or perform a context switch from user space, because user space has lost control of the thread.


> 
>>>> in fact all the examples you listed above use event APIs internally. Hence I don’t think whether an API blocks a kernel thread is a good argument here.
>>> 
>>> kernel threads are a finite resource and most modern networking APIs try hard to only spawn a finite number of kernel threads way smaller than the number of connections handled concurrently. If you use Dispatch as your concurrency mechanism, your thread pool will have a maximum size of 64 threads by default on Darwin. (Sure you can spawn more using (NS)Thread from Foundation or pthreads or so)
>> 
>> Yes, kernel threads are finite resources, especially in the 1:1 model, but I’m not sure how that is relevant. My concern with not including a synchronous API is that it makes it impossible for people to write synchronous code with server-side Swift tools, blocking or not, which they might want to do. I’m not saying sync is better, I’m just saying we could give them the chance.
> 
> No one's taking anything away from you. Everything you have today will still be available. But I believe the APIs (which is what we're designing here) a web app uses should in today's world in Swift be asynchronous.
> 
> Of course to implement the asynchronous API, synchronous system calls will be used (eg. kevent/epoll). But the user-facing API that is currently proposed is async-only in order for it to be implementable in a performant way. If we were to put synchronous functions in the user-facing API, then we'll struggle to implement them in a performant way).
> 
> Imagine the function to write a HTTP body chunk looked like this:
> 
> func writeBodyChunk(_ data: Data) throws -> Void
> 
> then the user can expect this to only return when the data has been written successfully and that the connection was dropped if it throws.
> But the implementation now has a problem: What to do if we can't write the bytes immediately? The only option we have is block this very thread and wait until we have written the bytes. Then we can return and let the user know if the write worked or not.
> 
> Comparing this to
> 
> func writeBodyChunk(_ data: Data, completion: (Error?) -> Void) -> Void
> 
> we can now register the attempt to write the data and move on with the calling thread. When the data has been written we invoke the completion handler and everything's good.

Well, the expected behavior is that if the data cannot be written immediately, the call throws a 'temporarily unavailable' error and the user can retry the write if he wants to, which is exactly the approach in my cpp code.

> 
> 
>>>> And even with such a totally non-blocking programming model it will be expensive, since the kernel is constantly scheduling a do-nothing thread. (( If the IO thread of a server-side application needs to do something constantly despite there being no users and no connections, it sounds like a ghost story to me. ))
>>> 
>>> what is the do-nothing-thread? The IO thread will only be scheduled if there's something to do and then normally the processing starts on that very thread. In systems like Netty they try very hard to reduce hopping between different threads and to only spawn a few. Node.js is the extreme which handles everything on one thread. It will be able to do thousands of connections with only one thread.
>>> 
>> 
>> The kernel has no idea whether a thread has anything to do unless the thread sleeps/enters the kernel; unless a thread meets one of those conditions, it will always be scheduled by the kernel.
> 
> but that's exactly what epoll/kevent do. They enter the kernel and tell the kernel what the user-space needs next.
> 
> The thread is now scheduled only if an event that kevent/epoll are waiting for turns up. And when kevent/epoll then return, most of the time the user space handler is submitted as a callback.

That’s why I said "a totally non-blocking programming model is very expensive": both kevent and epoll are __blocking__ system calls, so an API that doesn’t block _any_ kernel thread automatically excludes kevent and epoll.

> 
> well, it means the write hasn't happened. You will need to do the write again and that is normally done when kevent/epoll tell you to. And that's what I mean by inversion of control.
> 

If you read the code carefully you will see the additional write being invoked (in the cpp one).

> 
>>> Foundation/Cocoa is I guess the Swift standard library and they abandon synchronous&blocking APIs completely. I don't think we should create something different (without it being better) than what people are used to.
>>> 
>>> Again, there are two options for IO at the moment:
>>> 1) synchronous & blocking kernel threads
>>> 2) asynchronous/inversion of control & not blocking kernel threads
>>> 
>>> Even though I would love a synchronous programming model, I'd chose option (2) because the drawbacks of (1) are just too big. The designers of Foundation/Cocoa/Netty/Node.js/many more have made the same decision. Not saying all other options aren't useful but I'd like the API to be implementable with high-performance and not requiring the implementors to block a kernel thread per connection.
>> 
>> To be honest, I would choose 2 as well. But we are not in a two-choose-one situation. The main difference between us and netty/node.js is that people use them to write a server, while what we are doing is writing something people use to write things like netty and node.js. So it is reasonable to think there is demand for a lower-level, synchronous API, despite the possible "drawbacks" people might encounter.
> 
> This is the HTTP group so people will only write web servers with it, the API that was proposed it definitely not meant to implement anything netty or node like. It's to implement web apps in Swift.
> 
> There is however also a Networking/Transport group which will be more low-level than this (I assume) and there we do need to consider the lower-level APIs. And those will contain blocking system calls, namely kevent/epoll (if it won't be based on top of DispatchSources/DispatchIO which do the eventing out of the box, obviously also implemented with kevent/epoll).
> 
> 
>> Maybe we have some misunderstanding here. I’m not asking for a synchronous API that happens to handle a vector of sockets in a single call without blocking anything. I’m asking for a synchronous API that does one simple thing: read/write synchronously, and if the call would block, just let the caller know by throwing an exception; that way the API call itself will not block anything.
> 
> there may well be a misunderstanding here. No one wants to take all synchronous APIs away from you. They are available in Swift today and will remain there tomorrow.
> 
> But we're trying to design a HTTP API that is implementable with reasonable performance and that I believe should be done by only offering async APIs.

That’s why I said "people can always fall back to the C API, but that won’t do any good for server-side Swift". The one thing I think a very simple synchronous API should at least provide is that people can use it and play nicely with the provided API without breaking anything (reads/writes that are transparent to the library, no messed-up socket options, etc.).

Cheers,
Michael





