[swift-dev] "available externally" vs build time

Chris Lattner clattner at nondot.org
Thu Jan 4 18:57:01 CST 2018


> On Jan 4, 2018, at 1:08 PM, Erik Eckstein <eeckstein at apple.com> wrote:
>>> 1. It looks like the MandatoryInliner is the biggest culprit at -O0 here: it deserializes the referenced function (MandatoryInlining.cpp:384) and *then* checks to see if the callee is @_transparent.  Would it make sense to change this to check for @_transparent first (which might require a SIL change?), and only deserialize if so?
>> 
>> This seems like a clear win.
> 
> +1
> 
> It should be a trivial change and I’m wondering why we haven’t done this yet.
> I filed https://bugs.swift.org/browse/SR-6697

Thanks!
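
Concretely, the SR-6697 reordering amounts to something like the sketch below. The types, names, and the loadFunctionBody entry point are simplified stand-ins rather than the real SIL API; the only point is that @_transparent is a declaration-level flag that can be consulted before any body is deserialized.

// Minimal sketch, not the actual MandatoryInlining.cpp code.
// Stand-in types for illustration only.
struct SILFunction {
  bool transparent = false;       // the @_transparent declaration-level flag
  bool bodyDeserialized = false;  // whether the SIL body has been loaded
  bool isTransparent() const { return transparent; }
  bool isDefinition() const { return bodyDeserialized; }
};

struct SILModule {
  // Placeholder for the real deserialization entry point.
  void loadFunctionBody(SILFunction &F) { F.bodyDeserialized = true; }
};

// After the reordering: consult the flag first, and deserialize only if we
// will actually inline at -Onone.
SILFunction *calleeForMandatoryInlining(SILModule &M, SILFunction *callee) {
  if (!callee->isTransparent())
    return nullptr;                // no deserialization cost for most callees
  if (!callee->isDefinition())
    M.loadFunctionBody(*callee);   // pay for the body only when inlining
  return callee;
}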

>>> 2. The performance inliner will have the same issue after this, and deserializing the bodies of all inlinable referenced functions is unavoidable for it.  However, we don’t have to copy the SIL into the current module and burn compile time by subjecting it to all of the standard optimizations again.  Would it make sense to put deserialized function bodies into a separate SIL module, and teach the (few) IPA/IPO optimizations about this fact?  This should be very straight-forward to do for all of the optimizations I’m aware of.
>> 
>> What if we deserialized function bodies lazily instead of deserializing the transitive closure of all serialized functions referenced from a function?
> 
> Well, with our pass pipeline architecture I suspect it will not make a difference. We process functions bottom-up. For example, the performance inliner optimizes the callee first before trying to inline it (because it influences the inlining decision). So the performance inliner actually visits the whole call tree.
> 
>>> Would it make sense to put deserialized function bodies into a separate SIL module
> 
> We serialize early in the pipeline, i.e. serialized functions are not (fully) optimized.
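
For reference, the bottom-up, callee-first order Erik describes looks roughly like the following. Stand-in types and an ad-hoc recursion, not the actual pass-manager code; the point is that each callee is optimized (and therefore deserialized) before its caller makes an inlining decision about it, so laziness in the deserializer alone would not avoid the work.

#include <functional>
#include <unordered_set>
#include <vector>

// Illustrative stand-in: a function plus its direct callees whose bodies
// come from serialized SIL.
struct SILFunction {
  std::vector<SILFunction *> callees;
};

// Post-order (bottom-up) walk: optimize callees before their callers.
void optimizeBottomUp(SILFunction *F,
                      std::unordered_set<SILFunction *> &visited,
                      const std::function<void(SILFunction *)> &optimize) {
  if (!visited.insert(F).second)
    return;
  for (SILFunction *callee : F->callees)
    optimizeBottomUp(callee, visited, optimize);  // callee optimized first
  optimize(F);  // by now every reachable callee body has been visited
}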

Really?  The serialized functions in the standard library aren’t optimized?  That itself seems like a significant issue: you’re pushing the compile-time cost of optimization onto every user’s source file that uses an unoptimized stdlib symbol.

> And at least the performance inliner needs functions to be optimized to make good inlining decisions. So it makes sense to also optimize deserialized functions.
> 
> That said, I’m sure there is still potential for improvements. For example, we could exclude deserialized generic functions from optimizations, because we only inline specialized functions.
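
The “exclude deserialized generics” idea could be as simple as a filter that the function-pass pipeline consults; a hypothetical sketch with stand-in types (the real predicate would live in the pass manager):

// Illustrative only; not an existing SIL interface.
struct SILFunction {
  bool deserialized = false;  // body was loaded from another module's SIL
  bool generic = false;       // still has unbound generic parameters
};

bool shouldRunFunctionPasses(const SILFunction &F) {
  // Only specialized clones of a deserialized generic function are ever
  // inlined, so re-optimizing the generic original is wasted work.
  return !(F.deserialized && F.generic);
}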

If the serialized functions are in fact optimized, you have a lot of ways to avoid deserializing in practice.  There just aren’t that many IPO/IPA passes in the compiler, so you can build the summaries they need into the serialized SIL code.  If they aren’t optimized, then there are bigger problems.
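
One possible shape for those summaries, purely as an illustration (this is not an existing SIL data structure): a small per-function record serialized next to the SIL body, which the IPO/IPA passes can query without deserializing anything, deferring the real deserialization until a pass has actually decided to inline or specialize.

#include <cstdint>
#include <string>

// Hypothetical per-function summary emitted alongside serialized SIL.
struct FunctionSummary {
  std::string mangledName;
  uint32_t instructionCount = 0;   // size proxy for inlining heuristics
  bool hasSideEffects = true;      // conservative default for effects queries
  bool isGeneric = false;          // callers only inline specializations
};

// A pass could use the summary to decide whether the body is worth loading.
bool worthDeserializing(const FunctionSummary &S, uint32_t inlineThreshold) {
  return !S.isGeneric && S.instructionCount <= inlineThreshold;
}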

-Chris
