On March 22, 2017 at 8:48:27 PM, Joe Groff (jgroff@apple.com) wrote:

>> On Mar 21, 2017, at 6:27 AM, Elijah Johnson via swift-evolution <swift-evolution@swift.org> wrote:
>>
>> I still like the idea of shared memory, but since without additional threading it can't have write access inside the new process, I don't think that it is a solution for a webserver.
>>
>> The main concern was just with developers using these universal exceptions deliberately, along with "inconsistent states" and memory leaks.
>>
>> So here's a simple proposal:
>>
>> func unsafeCatchFatalErrorWithMemoryLeak(_ block: () -> Void) -> FatalError?
>>
>> What it does is execute the block, and when the fatalError function is invoked (as is the case with logic errors), fatalError checks a thread-local for the existence of this handler and performs a goto. "unsafeCatchFatalErrorWithMemoryLeak" then returns a small object with the error message. There can only be one per call stack, and it deliberately leaks the entire stack from "fatalError" back down to "unsafeCatchFatalErrorWithMemoryLeak"; that is one reason why it is labelled "unsafe" and "withMemoryLeak".
>>
>> The idea is that this is a function expressly for "high availability" applications like web servers. The server will now have some leaked objects posing no immediate danger, and some inconsistencies, primarily or entirely inside the leaked objects (it is the developer's responsibility how this is used).
>>
>> The "high availability system" is then expected to stop accepting incoming socket connections, spawn another instance of itself, handle any open connections, and exit.
>>
>> The interesting thing about this is that it is very easy to implement. Being an unsafe function, it is not part of the language the way a catch block is, and it doesn't entirely preclude another solution in the future.
>
> How much of the process can keep running after the fatal error is caught? A thread might still be too coarse-grained for a system based on workqueues. This also isn't particularly safe with objects accessed concurrently across multiple threads. If you have a method that temporarily breaks invariants on its instance, but crashes before it has a chance to reset them, then the object will still be in an inconsistent state when accessed later from surviving threads.
>
> -Joe
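Before answering, here is roughly how I picture a request handler using the proposed function, just to make the shape concrete. This is only a sketch: the FatalError struct and the function body below are placeholders written so the example compiles, since none of this exists in Swift today.

// Placeholder declarations, purely so the sketch compiles. The real
// unsafeCatchFatalErrorWithMemoryLeak would live in the runtime and
// jump back here when fatalError fires, leaking the intervening stack.
struct FatalError {
    let message: String
}

func unsafeCatchFatalErrorWithMemoryLeak(_ block: () -> Void) -> FatalError? {
    block()
    return nil
}

// Hypothetical per-request wrapper for a high-availability server.
func handle(requestBody: String, respond: (String) -> Void) {
    if let error = unsafeCatchFatalErrorWithMemoryLeak({
        // Application code that may hit fatalError on a logic error,
        // e.g. a force-unwrapped nil or an out-of-bounds index.
        respond(requestBody.uppercased())
    }) {
        respond("500 Internal Server Error")
        // Per the proposal: stop accepting new connections, spawn a
        // replacement process, drain open connections, then exit.
        print("caught fatal error: \(error.message); restarting")
    }
}

handle(requestBody: "hello") { print($0) }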
Well, I can only speak to the programming models that I am familiar with, so I will list three types of servers: front-end servers, cache servers, and job servers. On a typical Java application server, all of these run inside the JVM, which is one process that uses threads.

So let's take the three examples:

The "front-end server" element queries the cache and starts jobs. Except for its interactions with the cache and the jobs it starts, it is pretty isolated.

The "cache" element (which serves DB requests and might store the results in memory) might run on the same thread as the request or might not, but it is not going to update the shared in-memory dictionary until it has its object in hand. For the most part these objects are not mutable after storage.

The "job server" element is basically like a request handler. It might use the cache, but it is otherwise isolated, processing a queue of relatively identical requests.

So the risk here is with a cache or data store of shared mutable objects. You would be envisioning that the developer has placed invariants inside a method that does not throw, inside a shared object, and that the method has now triggered the fault with no recovery plan.

Let's make things even worse and say that this shared object locked a mutex before triggering the invariant (as it must when updating an object shared across threads). Now the result is that whoever takes the mutex will hang forever. Actually, this prevents misuse of the data, since no one can query it. In any case, doing this is just bad programming. In contrast to systems programming, web applications mostly use immutable objects, or objects whose mutability isn't so complicated.

The main reason for this is that the objects have to mirror the database contents. Say you have cached a database row and you want to update its string contents: you have to update the database and then update the cache. It is a compound operation, but there are no invariants to trigger. The state has to be invalidated or locked before the DB operation starts, and cleared after the cache receives the updated state; if something goes wrong in between, the entry is simply dead (see the sketch at the end of this message).

So, to answer your exact question: in standard webserver programming as I understand it, no shared object is ever in an invalid state without being expressly recognizable as such by a flag and/or a locked mutex. The worst-case scenario is therefore a locked mutex (a sketch follows below), and I see no objection whatsoever to the remaining requests accessing shared data, nor to accepting new requests. In a well-designed system, I don't expect these traps (failed unwraps and the like) to occur in the shared layer at all, only on the rather careless request-handling side, where developers want to code quickly, or buried inside some cron job. The shared memory cache for a web application is too simple a thing to fail, unless it is coded with absolute stupidity.
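To illustrate that worst case (all names here are hypothetical, this is only a sketch): a shared cache takes a lock and then traps mid-update. If the fatal error were caught and the stack leaked as the proposal describes, the deferred unlock would never run, so later readers block on the lock rather than observe half-updated state.

import Foundation

// Sketch of the worst case: a trap while holding a lock.
final class SharedCache {
    private let lock = NSLock()
    private var storage: [String: String] = [:]

    func update(key: String, value: String) {
        lock.lock()
        // Under the proposed catch, this defer would NOT run on a trap:
        // the whole stack is leaked, so the lock stays locked forever.
        defer { lock.unlock() }
        if key.isEmpty { fatalError("empty keys are a logic error") }
        storage[key] = value
    }

    func value(for key: String) -> String? {
        lock.lock() // blocks forever if a trapped update leaked the lock
        defer { lock.unlock() }
        return storage[key]
    }
}

The readers hanging on the lock are a problem in themselves, but as I said, they cannot observe the broken state.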
This mechanism definitely shouldn't be used with bounds checking disabled on arrays, but other than that I see no danger in it. If there were, developers would be writing System.exit() calls everywhere in Java. Web developers are used to this sort of thing and have twenty years of experience working with it (see http://javarevisited.blogspot.com/2014/11/dont-use-systemexit-on-java-web-application.html for example). By contrast, C/C++ developers must often spend their days rooting out segfaults. That discipline might produce better code, but it is 90% of why almost no one uses those languages for web development.
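Finally, since I referenced it above, here is a sketch of the cache-mirrors-the-database pattern (all names hypothetical, and only one way to do it): the entry is flagged as updating before the DB write starts and revalidated only once both sides hold the new value, so a trap in between leaves a visibly dead entry rather than silently wrong data.

import Foundation

// Sketch: a cache entry is never in an unrecognizable invalid state;
// it is either valid or explicitly flagged as mid-update.
final class RowCache {
    enum Entry {
        case valid(String)
        case updating // mid-write; readers treat this as a miss
    }

    private let lock = NSLock()
    private var rows: [Int: Entry] = [:]

    func update(id: Int, to newValue: String,
                writeToDB: (Int, String) throws -> Void) rethrows {
        lock.lock()
        rows[id] = .updating // invalidate before touching the database
        lock.unlock()

        try writeToDB(id, newValue) // if this traps or throws, the entry stays dead

        lock.lock()
        rows[id] = .valid(newValue) // revalidate only after the DB has the new state
        lock.unlock()
    }

    func read(id: Int) -> String? {
        lock.lock()
        defer { lock.unlock() }
        if case .valid(let value)? = rows[id] { return value }
        return nil // .updating or absent: caller falls back to the database
    }
}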