[swift-evolution] Swift null safety questions
Elijah Johnson
ejrx7753 at gmail.com
Sun Mar 26 22:25:56 CDT 2017
On March 26, 2017 at 12:09:55 AM, Brent Royal-Gordon (brent at architechies.com) wrote:

On Mar 23, 2017, at 9:01 AM, Joe Groff via swift-evolution <swift-evolution at swift.org> wrote:
setjmp and longjmp do *not* work well with Swift since they need compiler
support to implement their semantics, and since Swift does not provide this
support, setjmp-ing from Swift is undefined behavior. Empirical evidence
that small examples appear to work is not a good way of evaluating UB,
since any changes to Swift or LLVM optimizations may break it. Ease of
implementation is also not a good criterion for designing things.
*Supporting* a trap hook is not easy; it still has serious language
semantics and runtime design issues, and may limit our ability to do
something better.
Could we do something useful here without setjmp/longjmp? Suppose we had a
function in the standard library that was roughly equivalent to this:
typealias UnsafeFatalErrorCleanupHandler = () -> Void

// Note the imaginary @_thread_local property behavior
@_thread_local var _fatalErrorCleanupHandlers: [UnsafeFatalErrorCleanupHandler] = []

func withUnsafeFatalErrorCleanupHandler<R>(_ handler: @escaping UnsafeFatalErrorCleanupHandler, do body: () throws -> R) rethrows -> R {
    _fatalErrorCleanupHandlers.append(handler)
    defer { _fatalErrorCleanupHandlers.removeLast() }
    return try body()
}
And then, just before we trap, we do something like this:
// Mutating it this way ensures that, if we reenter fatalError() in one of the
// handlers, it will pick up with the next handler.
while let handler = _fatalErrorCleanupHandlers.popLast() {
    handler()
}
I think that would allow frameworks to do something like:
class Worker {
    let queue = DispatchQueue(label: "Worker")

    typealias RequestCompletion = (RequestStatus) -> Void

    enum RequestStatus {
        case responded
        case crashing
    }

    func beginRequest(from conn: IOHandle, then completion: @escaping RequestCompletion) {
        queue.async {
            withUnsafeFatalErrorCleanupHandler(self.fatalErrorHandler(completion)) {
                // Handle the request, potentially crashing
            }
        }
    }

    func fatalErrorHandler(_ completion: @escaping RequestCompletion) -> UnsafeFatalErrorCleanupHandler {
        return { completion(.crashing) }
    }
}

class Supervisor {
    let queue = DispatchQueue(label: "Supervisor")
    var workerPool: Pool<Worker>

    func startListening(to sockets: [IOHandle]) { … }
    func stopListening() { … }
    func bootReplacement() { … }

    func connected(by conn: IOHandle) {
        dispatchPrecondition(condition: .onQueue(queue))

        let worker = workerPool.reserve()
        worker.beginRequest(from: conn) { status in
            switch status {
            case .responded:
                conn.close()
                self.queue.sync {
                    self.workerPool.relinquish(worker)
                }

            case .crashing:
                // Uh oh, we're in trouble.
                //
                // This process is toast; it will not survive long beyond this stack frame.
                // We want to close our listening socket, start a replacement server, and
                // then just try to hang on until the other workers have finished their
                // current requests.
                //
                // It'd be nice to send an error message and close the connection, but we
                // shouldn't. We don't know what state the connection or the worker are in,
                // so we don't want to do anything to them. We can risk a `!==` check,
                // though, because it only involves a memory address stored in our closure
                // context, not the actual object being referenced by it.

                // Go exclusive on the supervisor's queue so we don't try to handle
                // two crashes at once (or a crash and something else, for that matter).
                self.queue.sync {
                    self.stopListening()
                    self.bootReplacement()

                    // Try to keep the process alive until all of the surviving workers have
                    // finished their current requests. To do this, we'll perform barrier
                    // blocks on all of their queues, attached to a single dispatch group,
                    // and then wait for the group to complete.
                    let group = DispatchGroup()
                    for otherWorker in self.workerPool.all where otherWorker !== worker {
                        // We run this as `.background` to try to let anything else the
                        // request might enqueue run first, and `.barrier` to make sure
                        // we're the only thing running.
                        otherWorker.queue.async(group: group, qos: .background, flags: .barrier) {
                            // Make sure we don't do anything else.
                            otherWorker.queue.suspend()
                        }
                    }

                    // We do not use `notify` because we need this stack frame to keep
                    // running so we don't trap yet.
                    _ = group.wait(timeout: .now() + .seconds(15))
                }

                // Okay, we can now return, and probably crash.
            }
        }
    }
}
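
For what it's worth, the barrier-plus-group trick in that crash path can be exercised on its own. The sketch below is illustrative only (the worker queues are made up): it queues a barrier block on each surviving queue, ties them to one DispatchGroup, and blocks the crashing frame until they drain or a 15-second timeout passes.

import Dispatch

// Stand-ins for the surviving workers' serial queues (made up for illustration).
let survivingQueues = [
    DispatchQueue(label: "worker.0"),
    DispatchQueue(label: "worker.1"),
]

let group = DispatchGroup()
for queue in survivingQueues {
    // `.background` lets anything the in-flight request already enqueued run first;
    // `.barrier` ensures nothing else on the queue runs alongside this block.
    queue.async(group: group, qos: .background, flags: .barrier) {
        // Suspend the queue so nothing new starts after this drain point.
        queue.suspend()
    }
}

// Block this frame rather than using notify, so the process doesn't trap yet;
// give the surviving requests up to 15 seconds to finish.
switch group.wait(timeout: .now() + .seconds(15)) {
case .success:
    print("all surviving queues drained")
case .timedOut:
    print("timed out; trapping anyway")
}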
It's definitely not a full actor model, and you have to be very careful,
but it might be a useful subset.
--
Brent Royal-Gordon
Architechies
Well, it seems like it would work pretty well, since a thread would be needed
anyway to do the cleanup. In the end it still boils down to a fatalError-handling
proposal, with the web framework developer(s) supplying the implementation.

Even if the actor model eventually covers this sufficiently, it still makes a good
bridge between now and then, i.e. Swift 5-7. If a shared actor trips, it also trips
the rest of the non-shared actors that use it, so this approach still has some
potential to hold that shared actor open until the requests using it finish
executing.