[swift-dev] "available externally" vs build time

Chris Lattner clattner at nondot.org
Thu Dec 28 18:32:40 CST 2017


Folks working on the SIL optimizer, particularly those interested in faster builds:

If I understand the SIL optimizer correctly, it seems that when the current program references an external symbol declared as @_inlinable, SILModule::linkFunction eagerly deserializes the @_inlinable body and splats it into the current module.  That SIL function then exists in the current module, gets optimized, inlined, etc. along with the existing functions, and then gets dropped on the floor at IRGen time if it still exists.
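
To make the behavior concrete, here is a rough C++ sketch of the eager strategy.  This is illustration only: SILModule::linkFunction is real, but every type member and helper below is a made-up stand-in, not the actual SIL API:

#include <vector>

// Stand-in declarations for illustration; not the real SIL types.
struct SILFunction {
  bool isExternalDeclaration() const;
  bool hasSerializedBody() const;  // i.e. it was declared @_inlinable
  std::vector<SILFunction *> referencedFunctions() const;
};

struct SILModule {
  // Copy the serialized body out of its defining module and into this
  // one, where it is then optimized like any locally-defined function.
  void deserializeBodyInto(SILFunction *F);

  // Sketch of the eager behavior: linking one external @_inlinable
  // function drags its whole transitive callee closure into the module.
  void linkFunctionEagerly(SILFunction *F) {
    if (!F->isExternalDeclaration() || !F->hasSerializedBody())
      return;
    deserializeBodyInto(F);  // F is now a local definition
    for (SILFunction *Callee : F->referencedFunctions())
      linkFunctionEagerly(Callee);
  }
};

One call into the stdlib is enough to trigger this cascade, which is why the tiny program below produces thousands of lines of SIL.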

If this is true, it seems like an incredibly wasteful approach, particularly given how many @_inlinable functions exist in the standard library, and particularly for programs built from lots of small files.  Try this:

$ cat u.swift 
func f() {
  print("hello")
}

$ swiftc u.swift -emit-sil -o - | wc -l
    7191

That is a *TON* of SIL, most of it having to do with array internals, string internals, and other stuff.  Deserializing and slogging all of this around eats memory and costs a ton of compile time, and then it gets promptly dropped on the floor by IRGen.  It also makes the -emit-sil output more difficult to work with...


Optimized builds are also bad:
$ swiftc u.swift -emit-sil -o - -O | wc -l
     861

If you look at the output, only about 70 lines of it are the actual program being compiled; the rest is dropped on the floor by IRGen.  Deserializing and representing all of this costs a ton of memory and compile time, and even more is wasted running the optimizer over code that was presumably already optimized when the stdlib was built.

I imagine that this approach was inspired by LLVM’s available_externally linkage, which does things the same way.  This is a simple way to make sure that interprocedural optimizations can see the bodies of external functions to inline them, etc.   However, LLVM doesn’t have the benefit of a module system like Swift’s, so it has no choice.


So here are the questions:  :-)

1. It looks like the MandatoryInliner is the biggest culprit at -O0 here: it deserializes the referenced function (MandatoryInlining.cpp:384) and *then* checks to see if the callee is @_transparent.  Would it make sense to change this to check for @_transparent first (which might require a SIL change?), and only deserialize if so?
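
Concretely, the reordering I have in mind looks something like this.  It is a sketch only, and it assumes the @_transparent bit can be read off the declaration without deserializing the body (the possible SIL change mentioned above); every name here is an illustrative stand-in, not the actual code in MandatoryInlining.cpp:

// Check the cheap flag before paying for deserialization.
struct SILFunction {
  bool isTransparent() const;         // readable from the declaration
  bool isExternalDeclaration() const;
};

void deserializeBody(SILFunction *Callee);  // the expensive step

SILFunction *getCalleeForMandatoryInlining(SILFunction *Callee) {
  // Consult the serialized @_transparent flag first...
  if (!Callee->isTransparent())
    return nullptr;                   // no deserialization work wasted
  // ...and only deserialize bodies for the few callees that need it.
  if (Callee->isExternalDeclaration())
    deserializeBody(Callee);
  return Callee;
}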

2. The performance inliner will have the same issue after this, and deserializing the bodies of all inlinable referenced functions is unavoidable for it.  However, we don’t have to copy the SIL into the current module and burn compile time by subjecting it to all of the standard optimizations again.  Would it make sense to put deserialized function bodies into a separate SIL module, and teach the (few) IPA/IPO optimizations about this fact?  This should be very straightforward to do for all of the optimizations I’m aware of.
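
The shape I am imagining for that is roughly the following sketch; the side-module idea is the point, and all of the names are invented for illustration:

// Deserialized external bodies live in a read-only side module that
// never goes through the optimization pipeline; IPA/IPO passes query
// it instead of copying SIL into the module being compiled.
struct SILFunction;

struct SILModule {
  SILFunction *lookUpFunction(const char *Name);
};

struct ExternalBodyProvider {
  SILModule *CurrentModule;       // the module being compiled + optimized
  SILModule *DeserializedModule;  // already-optimized external bodies

  SILFunction *getBodyIfAvailable(const char *Name) {
    if (SILFunction *F = CurrentModule->lookUpFunction(Name))
      return F;
    return DeserializedModule->lookUpFunction(Name);
  }
};

The key property is that functions in the side module never get re-run through the standard pipeline; SIL would only be cloned into the current module at the point where an optimization (e.g. the performance inliner) actually decides to inline it.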

I haven’t done any measurements, but this seems like it could be a big speedup, particularly for programs containing a bunch of relatively small files and not using WMO.

-Chris
