[swift-dev] Using gyb with benchmarks
mgottesman at apple.com
Thu Apr 6 16:05:19 CDT 2017
> On Apr 6, 2017, at 1:14 PM, Pavol Vaskovic <pali at pali.sk> wrote:
> On Thursday, 6 April 2017 at 19:44, Michael Gottesman wrote:
>>> On Apr 6, 2017, at 6:16 AM, Pavol Vaskovic via swift-dev <swift-dev at swift.org> wrote:
>>> How do I make benchmarks work with gyb?
> Upon further inspection, one needs to regenerate the harness _after_ modifying the gyb template anyway, so manually invoking gyb works fine for now, given that [FILE].swift.gyb files are ignored by the benchmarking and compilation machinery.
> Is that so bad to add .gyb templates alongside the .swift sources and generate the boilerplate as a first step when generating harness?
I think in the short term this is fine. And even if we switch to swiftpm, we could keep this same gyb approach. I do have one request, though. My overall concern with how we generate the harness today is that people sometimes do not know about it and do not regenerate the file when they need to.
Do you think you could add a test to the validation suite that locally generates the gyb/harness files and performs a diff? This would ensure that at least the bots catch it when someone forgets to run the update.
Specifically, if you look in ./validation-test/Python/, you will see a test called bug-reducer.test-sh.
I would create a separate folder called ./validation-test/benchmarks, and in it a file called "generate-harness.test-sh". This would just be a shell script along the lines of bug-reducer.test-sh that generates the harness/gyb files in a temp directory and makes sure the diff against what is checked into the tree is empty. Then at least we will know if someone forgets to regenerate the harness or gyb files.
>> Please do not do this. We have been talking about switching the benchmarks to use swiftpm instead of our own custom cmake goop. swiftpm does not support using custom things like gyb.
> What’s the motivation here? I’m guessing GYB will not be removed from other parts of the project… The benchmark files for sequence operations I’ve been looking at are ripe for templating, which would reduce the possibility of accidental errors when adding new variations.
> I’m about to add coverage for a lot more of sequence operations, as their current performance is horrible. These are almost identical, varying only by the concrete type of sequence/collection tested, plus lazy variants. Being able to automate this seems vital to me.
> Best regards
> Pavol Vaskovic