[swift-evolution] JIT compilation for server-side Swift

Michael Ilseman milseman at apple.com
Mon Jul 10 12:53:18 CDT 2017

> On Jul 10, 2017, at 9:40 AM, Younes Manton via swift-evolution <swift-evolution at swift.org> wrote:
> Hi,
> Last year a small group of developers from the IBM Runtimes compiler team undertook a project to explore JIT compilation for Swift, primarily aimed at server-side Swift. The compilation model we settled on was a hybrid approach that combined static compilation via swiftc with dynamic compilation via a prototype JIT compiler based on Eclipse OMR.[1]
> This prototype JIT compiler (targeting Linux specifically) functioned by having itself loaded by a Swift process at runtime, patching Swift functions so that they could be intercepted, recompiling them from their SIL representations, and redirecting callers to the JIT-compiled version. In order to accomplish this we needed to make some changes to the static compiler and the target program's build process.
> * First, we modified the compiler to emit code at the beginning of main() that would attempt to dlopen() the JIT compiler and, if successful, call its initialization routine. If unsuccessful, the program would simply carry on executing the rest of main().
> * Second, we modified all Swift functions to be patchable by giving them the "patchable-function" LLVM attribute (making the first instruction suitable to be patched over with a short jump) and attaching 32 bytes of prefix data (suitable to hold a long jump to a JIT hook function and some extra data) to the function's code. This was controlled by a frontend "-enable-jit" switch.
> * Third, when building the target program we first compiled the Swift sources to a .sib (binary SIL) file, then via ld and objcopy turned the .sib into a .o containing a .sib data section, then compiled the sources again into an executable, this time linking with the .o containing the binary SIL. This embedded SIL is what was consumed at runtime by the JIT compiler in order to recompile Swift functions on the fly. (Ideally this step would be done by the static compiler itself, not unlike the embedding of LLVM bitcode in a .llvmbc section, but that would have been a significant undertaking, so for prototyping purposes we did it at target-program build time.)
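The build sequence in the third step might look roughly like the following shell sketch. The flags and file names are illustrative, and the `ld -r -b binary` / `objcopy` pair is one common way to wrap arbitrary data in a named ELF section, not necessarily the exact commands the prototype used:

```shell
# 1. Compile the Swift sources to binary SIL (file names are illustrative).
swiftc -emit-sib -o app.sib main.swift

# 2. Wrap the .sib file in a relocatable object, then rename the payload
#    section so the runtime JIT can locate the embedded SIL by name.
ld -r -b binary -o app_sil.o app.sib
objcopy --rename-section .data=.sib app_sil.o

# 3. Compile the sources again, linking in the object that carries the SIL.
swiftc -o app main.swift app_sil.o
```

This mirrors the way clang's -fembed-bitcode places LLVM bitcode in a .llvmbc section, which is presumably why the authors draw that comparison.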
> That's the brief, high level description of what we did, particularly as it relates to the static side of this hybrid approach. The resulting prototype JIT was able to run and fully recompile a non-trivial (but constrained) program at comparable performance to the purely static version. For anyone interested in more details about the project as a whole, including how the prototype JIT functioned, the overhead it introduced, and the quality of code it emitted, I'll point you to Mark Stoodley's recent tech talk.[2]
> Having said that, it is with the static side in mind that I'm writing this email. Despite the prototype JIT being built on OMR, the changes to the static side outlined above are largely compiler-agnostic APIs/ABIs that anyone can use to build similar hybrid JITs or other runtime tools that make sense for the server space.

Do you have example APIs to discuss in more detail?

> As such, we felt that it was a topic that was worth discussing early and in public in order to allow any and all potentially interested parties an opportunity to weigh in. With this email we wanted to introduce ourselves to the wider Swift community and solicit feedback on 1) the general idea of JIT compilation for server-side Swift, 2) the hybrid approach in particular, and 3) the changes mentioned above and future work in the static compiler to facilitate 1) and 2). To that end, we'd be happy to take questions and welcome any discussion on this subject.

I think that there are a lot of potential gains for runtime optimization of Swift programs, but the vast majority of benefits will likely fall out from:

1. Smashing resilience barriers at runtime.
2. Specializing frequently executed generic code, enabling subsequent inlining and further optimization.

These involve deep knowledge of Swift-specific semantics. They are probably better handled by running Swift’s own optimizer at runtime rather than teaching OMR or some other system about Swift. This is because Swift’s SIL representation is constantly evolving, and the optimizations already in the compiler are always up to date. I’m curious: what benefits of OMR are you hoping to gain, and how does that weigh against the complexity of making the two systems interact?

> (As for the prototype itself, we intend to open source it either in its current state [based on Swift 3.0 and an early version of OMR] or in a more up-to-date state in the very near future.)
> Thank you kindly,
> Younes Manton
> [1] http://www.eclipse.org/omr/ & https://github.com/eclipse/omr
> [2] http://www.ustream.tv/recorded/105013815 (Swift JIT starts at ~28:20)
