[swift-dev] Potential contributions to compilation time reporting?

Mark Lacey mark.lacey at apple.com
Tue Nov 28 19:01:26 CST 2017



> On Nov 19, 2017, at 9:02 PM, Brian Gesiak via swift-dev <swift-dev at swift.org> wrote:
> 
> Thanks for the reply, Graydon, and for your other email on the topic <https://lists.swift.org/pipermail/swift-dev/Week-of-Mon-20171113/006001.html>!
> 
> I need to take more time to look into some of the things you mentioned, but I won't be able to do so in earnest for another few days. In the meantime, I'll just reply with a few uninformed opinions -- feel free not to respond to these :)
> 
> On Fri, Nov 17, 2017 at 12:54 AM, Graydon Hoare <ghoare at apple.com <mailto:ghoare at apple.com>> wrote:
> Sadly, there's not always a 1:1 mapping between source entities and time like that. Certainly _some_ cases can be so egregious (say, typechecking time on an expression that's triggering an exponential-time inference) that they dominate compile time, and can be identified-and-fixed in isolation; but often the total amount of work attributable to a given source entity is spread around the compilation, occurs in multiple phases, emerges out of interaction between entities, overlaps with others, etc.
> 
> It's these particularly egregious cases that I had in mind when I wrote my email. Many blog posts on "how to reduce your Swift project compile times" suggest one of the following approaches:
> 
> 1. Use `-debug-time-function-bodies` and `-debug-time-expression-type-checking` to print a list of times, sort that list in descending order of time spent, and add explicit types to, or otherwise simplify, the slowest functions and expressions.
> 2. Use `-warn-long-function-bodies=` and `-warn-long-expression-type-checking=`. Compile the project several times, gradually lowering the thresholds passed to these options to surface the slowest functions and expressions, then add explicit types to, or otherwise simplify, them.
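
(For reference, these are frontend options, so they're typically passed with -Xfrontend, either on a swiftc command line or in Xcode's OTHER_SWIFT_FLAGS. A rough sketch of the usual invocations, with arbitrary placeholder thresholds:

    swiftc -Xfrontend -debug-time-function-bodies MyFile.swift
    swiftc -Xfrontend -warn-long-function-bodies=100 \
           -Xfrontend -warn-long-expression-type-checking=100 MyFile.swift

The 100ms values are just examples, not recommended thresholds.)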

One of the drawbacks with the type checker timers is that they do not exclude time spent in code that the type checker calls into, e.g. deserialization. The result is that a simple expression involving a stdlib type like String can be reported as taking 500ms to type check when it is in fact nearly instantaneous to type check, and the 500ms was spent deserializing things while type checking that expression. What gets really confusing is that some of this deserialization only happens if you don’t use explicit types, so adding an explicit type to one of these expressions can make it fast, while some other expression that was previously reported as fast suddenly “regresses” and takes 500ms because that deserialization now happens later.
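
To make that concrete, here is a contrived sketch (hypothetical code; the 500ms figure above is purely illustrative):

    func beforeAnnotating(name: String) -> String {
        // Reported as slow by the expression timer, but much of that time
        // may really be deserialization triggered while inferring the type
        // of `greeting`.
        let greeting = "Hello, " + name + "!"
        return greeting
    }

    func afterAnnotating(name: String) -> String {
        // With an explicit type, this expression may now be reported as fast...
        let greeting: String = "Hello, " + name + "!"
        // ...but a later expression that forces the same deserialization can
        // absorb that cost and appear to "regress".
        let shout = greeting.uppercased() + "!!"
        return shout
    }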

I haven’t looked closely, but offhand I don’t know of a clean, robust, maintainable way of excluding this time.

I’m a bit concerned about productizing these without solving that problem.

It’s also not necessarily reasonable that we’re doing this deserialization when an explicit type is not used, so there might be something useful to investigate and fix there as well.

Mark

> 
> My original idea was to expand or improve these options, since I think they'll continue to be useful for large Swift projects. But I'm certainly open to working on something else instead.
>  
> I don't have an especially strong feeling about the degree-of-support / stability of such features; I'm going to have to leave that part of your question to others.
> 
> Yeah, this is definitely still an open question of mine. In the meantime I'll try looking at some of the other approaches you've suggested (thanks!).
> 
>   1. See if you can leverage the existing counters and stats-gathering / reporting machinery in the compiler; for example, the -trace-stats-events infrastructure lets you bundle together the work done on behalf of a given source range, plus any changes to compiler counters during that work, as a single virtual "stats-event", and trace those events to a .csv file. Maybe something related to that would be helpful for the task you're interested in?
> 
> I wasn't aware of `-trace-stats-events`, thanks! When using `-stats-output-dir` with a primary-file compilation mode, I can see that the stats include the amount of time spent on each Swift module being produced. I'll need to take a closer look at how `-trace-stats-events` works, though -- I get an empty .csv file when I use that option in conjunction with `-stats-output-dir`, and I'm not sure yet how to use it with source ranges. I'll look into this further.
>  
>   2. Consider going up a level from declarations or functions to _files_, and see if there's a useful way to visualize hot-spots in the inter-file dependency graph that the driver interprets during incremental compilation. The units of work at this level are likely to be large (especially if they involve cascading dependencies that invalidate "downstream" files), and often cutting or changing an edge in that graph can be a simpler matter of moving a declaration or changing its visibility: reasonably easy changes that don't cost the user much to experiment with.
> 
> Hope that helps! Happy to discuss any of this further.
> 
> Yes, thank you! I'm also interested in the "improving incremental mode" section of your other email, so thanks for writing all that down.
> 
> - Brian Gesiak
> 
> _______________________________________________
> swift-dev mailing list
> swift-dev at swift.org
> https://lists.swift.org/mailman/listinfo/swift-dev
