path: root/t/perf
2019-05-13  Merge branch 'jk/p5302-avoid-collision-check-cost'  (Junio C Hamano, 1 file, -13/+18)

Fix index-pack perf test so that the repeated invocations always run in an empty repository, which emulates the initial clone situation better.

  * jk/p5302-avoid-collision-check-cost:
    p5302: create the repo in each index-pack test

2019-05-13  Merge branch 'ew/repack-with-bitmaps-by-default'  (Junio C Hamano, 2 files, -3/+1)

The connectivity bitmaps are created by default in bare repositories now; also the pathname hash-cache is created by default to avoid making crappy deltas when repacking.

  * ew/repack-with-bitmaps-by-default:
    pack-objects: default to writing bitmap hash-cache
    t5310: correctly remove bitmaps for jgit test
    repack: enable bitmaps by default on bare repos

2019-05-13  Merge branch 'js/partial-clone-connectivity-check'  (Junio C Hamano, 1 file, -0/+26)

During an initial "git clone --depth=..." partial clone, it is pointless to spend cycles for a large portion of the connectivity check that enumerates and skips promisor objects (which by definition is all objects fetched from the other side). This has been optimized out.

  * js/partial-clone-connectivity-check:
    t/perf: add perf script for partial clones
    clone: do faster object check for partial clones

2019-05-05  t/perf: add perf script for partial clones  (Jeff King, 1 file, -0/+26)

We don't cover the partial clone feature at all in t/perf. Let's at least run a few basic tests so that we'll notice any regressions.

We'll do a no-blob clone, and split it into two parts: the actual object transfer, and the subsequent checkout (which will of course require another transfer to get the blobs). That will help us more clearly assess the performance of each.

There are obviously a lot more possibilities besides just a no-blob partial clone, but this should serve as a canary that alerts us to any generic slow-downs (and we can add more tests later for cases that aren't exercised here).

There are a few non-ideal things here that make this not an entirely accurate test, but are probably OK for our purposes:

  1. We have to do some extra prep/cleanup work inside the timing tests, since they impact the on-disk state and the perf harness may run each one multiple times. In practice this is probably OK, since these bits should be much less expensive than the operations we are measuring.

  2. The clone time is likely to be dominated by the server's object enumeration. In the real world, a repo large enough to drive people to partial clones is likely to have reachability bitmaps enabled. And in the opposite direction, our object transfer is happening at the speed of a local pipe, whereas in the real world it would bottle-neck on the network. So any percentage speedups should be taken with a grain of salt. But hopefully any regressions will produce enough of an effect to be noticeable.

This script also demonstrates the recent improvement from dfa33a298d (clone: do faster object check for partial clones, 2019-04-19):

  Test                          dfa33a298d^         dfa33a298d
  -------------------------------------------------------------------------
  5600.2: clone without blobs   18.41(22.72+1.09)   6.83(11.65+0.50) -62.9%
  5600.3: checkout of result    1.82(3.24+0.26)     1.84(3.24+0.26)  +1.1%

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

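As a rough illustration of the shape such a timing test takes in the perf framework (a minimal sketch, not the actual p5600 script; the test title, repository name, and setup commands here are assumptions), the clone half might look like this, with the checkout half measured in a similar second test:

    # in a t/perf script that has sourced perf-lib.sh and set up a default repo
    test_perf 'clone without blobs' '
            rm -rf bare.git &&
            git clone --no-local --bare --filter=blob:none . bare.git
    '
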
2019-04-25  Merge branch 'jk/revision-rewritten-parents-in-prio-queue'  (Junio C Hamano, 1 file, -0/+18)

Performance fix for "rev-list --parents -- pathspec".

  * jk/revision-rewritten-parents-in-prio-queue:
    revision: use a prio_queue to hold rewritten parents

2019-04-23  p5302: create the repo in each index-pack test  (Jeff King, 1 file, -13/+18)

The p5302 script runs "index-pack --stdin" in each timing test. It does two things to try to get good timings:

  1. we do the repo creation in a separate (non-timed) setup test, so that our timing is purely the index-pack run

  2. we use a separate repo for each test; this is important because the presence of existing objects in the repo influences the result (because we'll end up doing collision checks against them)

But this forgets one thing: we generally run each timed test multiple times to reduce the impact of noise. Which means that repeats of each test after the first will be subject to the collision slowdown from point 2, and we'll generally just end up taking the first time anyway.

Instead, let's create the repo in the test (effectively undoing point 1). That does add a constant amount of extra work to each iteration, but it's quite small compared to the actual effects we're interested in measuring.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-04-04  revision: use a prio_queue to hold rewritten parents  (Jeff King, 1 file, -0/+18)

This patch fixes a quadratic list insertion in rewrite_one() when pathspec limiting is combined with --parents. What happens is something like this:

  1. We see that some commit X touches the path, so we try to rewrite its parents.

  2. rewrite_one() loops forever, rewriting parents, until it finds a relevant parent (or hits the root and decides there are none). The heavy lifting is done by process_parent(), which uses try_to_simplify_commit() to drop parents.

  3. process_parent() puts any intermediate parents into the &revs->commits list, inserting by commit date as usual.

So if commit X is recent, and then there's a large chunk of history that doesn't touch the path, we may add a lot of commits to &revs->commits. And insertion by commit date is O(n) in the worst case, making the whole thing quadratic.

We tried to deal with this long ago in fce87ae538 (Fix quadratic performance in rewrite_one., 2008-07-12). In that scheme, we cache the oldest commit in the list; if the new commit to be added is older, we can start our linear traversal there. This often works well in practice because parents are older than their descendants, and thus we tend to add older and older commits as we traverse.

But this isn't guaranteed, and in fact there's a simple case where it is not: merges. Imagine we look at the first parent of a merge and see a very old commit (let's say 3 years old). And on the second parent, as we go back 3 years in history, we might have many commits. That one first-parent commit has polluted our oldest-commit cache; it will remain the oldest while we traverse a huge chunk of history, during which we have to fall back to the slow, linear method of adding to the list.

Naively, one might imagine that instead of caching the oldest commit, we'd start at the last-added one. But that just makes some cases faster while making others slower (and indeed, while it made a real-world test case much faster, it does quite poorly in the perf test included here).

Fundamentally, these are just heuristics; our worst case is still quadratic, and some cases will approach that. Instead, let's use a data structure with better worst-case performance.

Swapping out revs->commits for something else would have repercussions all over the code base, but we can take advantage of one fact: for the rewrite_one() case, nobody actually needs to see those commits in revs->commits until we've finished generating the whole list.

That leaves us with two obvious options:

  1. We can generate the list _unordered_, which should be O(n), and then sort it afterwards, which would be O(n log n) total. This is "sort-after" below.

  2. We can insert the commits into a separate data structure, like a priority queue. This is "prio-queue" below.

I expected that sort-after would be the fastest (since it saves us the extra step of copying the items into the linked list), but surprisingly the prio-queue seems to be a bit faster.

Here are timings for the new p0001.6 for all three techniques across a few repositories, as compared to master:

                                  master              cache-last                sort-after                prio-queue
  --------------------------------------------------------------------------------------------
  GIT_PERF_REPO=git.git           0.52(0.50+0.02)     0.53(0.51+0.02) +1.9%     0.37(0.33+0.03) -28.8%    0.37(0.32+0.04) -28.8%
  GIT_PERF_REPO=linux.git         20.81(20.74+0.07)   20.31(20.24+0.07) -2.4%   0.94(0.86+0.07) -95.5%    0.91(0.82+0.09) -95.6%
  GIT_PERF_REPO=llvm-project.git  83.67(83.57+0.09)   4.23(4.15+0.08) -94.9%    3.21(3.15+0.06) -96.2%    2.98(2.91+0.07) -96.4%

A few items to note:

  - the cache-list tweak does improve the bad case for llvm-project.git that started my digging into this problem. But it performs terribly on linux.git, barely helping at all.

  - the sort-after and prio-queue techniques work well. They approach the timing for running without --parents at all, which is what you'd expect (see below for more data).

  - prio-queue just barely outperforms sort-after. As I said, I'm not really sure why this is the case, but it is. You can see it even more prominently in this real-world case on llvm-project.git:

        git rev-list --parents 07ef786652e7 -- llvm/test/CodeGen/Generic/bswap.ll

    where prio-queue routinely outperforms sort-after by about 7%. One guess is that the prio-queue may just be more efficient because it uses a compact array.

There are three new perf tests:

  - "rev-list --parents" gives us a baseline for running with --parents. This isn't sped up meaningfully here, because the bad case is triggered only with simplification. But it's good to make sure we don't screw it up (now, or in the future).

  - "rev-list -- dummy" gives us a baseline for just traversing with pathspec limiting. This gives a lower bound for the next test (and it's also a good thing for us to be checking in general for regressions, since we don't seem to have any existing tests).

  - "rev-list --parents -- dummy" shows off the problem (and our fix)

Here are the timings for those three on llvm-project.git, before and after the fix:

  Test                                   master              prio-queue
  ------------------------------------------------------------------------------
  0001.3: rev-list --parents             2.24(2.12+0.12)     2.22(2.11+0.11) -0.9%
  0001.5: rev-list -- dummy              2.89(2.82+0.07)     2.92(2.89+0.03) +1.0%
  0001.6: rev-list --parents -- dummy    83.67(83.57+0.09)   2.98(2.91+0.07) -96.4%

Changes in the first two are basically noise, and you can see we approach our lower bound in the final one.

Note that we can't fully get rid of the list argument from process_parents(). Other callers do have lists, and it would be hard to convert them. They also don't seem to have this problem (probably because they actually remove items from the list as they loop, meaning it doesn't grow so large in the first place). So this basically just drops the "cache_ptr" parameter (which was used only by the one caller we're fixing here) and replaces it with a prio_queue. Callers are free to use either data structure, depending on what they're prepared to handle.

Reported-by: Björn Pettersson A <bjorn.a.pettersson@ericsson.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

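For context, a rough sketch of what those three perf tests could look like in a p0001-style script (the test titles and the "dummy" pathspec come from the description above; treat this as an illustration, not the exact script):

    test_perf 'rev-list --parents' '
            git rev-list --parents HEAD >/dev/null
    '

    test_perf 'rev-list -- dummy' '
            git rev-list HEAD -- dummy >/dev/null
    '

    test_perf 'rev-list --parents -- dummy' '
            git rev-list --parents HEAD -- dummy >/dev/null
    '
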
2019-03-18  perf-lib.sh: rely on test-lib.sh for --tee handling  (Jeff King, 1 file, -23/+11)

Since its inception, the perf-lib.sh script has manually handled the "--tee" option (and other options which imply it, like "--valgrind") with a cut-and-pasted block from test-lib.sh. That block has grown stale over the years, and has at least three problems:

  1. It uses $SHELL to re-exec the script, whereas the version in test-lib.sh learned to use $TEST_SHELL_PATH.

  2. It does an ad-hoc search of the "$*" string, whereas test-lib.sh learned to carefully parse the arguments left to right.

  3. It never learned about --verbose-log (which also implies --tee), so it would not trigger for that option.

This last one was especially annoying, because t/perf/run uses the GIT_TEST_OPTS from your config.mak to run the perf scripts. So if you've set, say, "-x --verbose-log" there, it will be passed as part of most perf runs. And while this script doesn't recognize the option, the test-lib.sh that we source _does_, and the behavior ends up being much more annoying:

  - as the comment at the top of the block says, we have to run this tee code early, before we start munging variables (it says GIT_BUILD_DIR, but the problematic variable is actually GIT_TEST_INSTALLED).

  - since we don't recognize --verbose-log, we don't trigger the block. We go on to munge GIT_TEST_INSTALLED, converting it from a relative to an absolute path.

  - then we source test-lib.sh, which _does_ recognize --verbose-log. It re-execs the script, which runs again. But this time with an absolute version of GIT_TEST_INSTALLED.

  - As a result, we copy the absolute version of GIT_TEST_INSTALLED into perf_results_prefix. Instead of writing our results to the expected "test-results/build_1234abcd.p1234-whatever.times", we instead write them to "test-results/_full_path_to_repo_t_perf_build_1234...". The aggregate.perl script doesn't expect this, and so it prints "<missing>" for each result (even though it spent considerable time running the tests!).

We can solve all of these in one blow by just deleting our custom handling, and relying on the inclusion of test-lib.sh to handle --tee, --verbose-log, etc.

There's one catch, though. We want to handle GIT_TEST_INSTALLED after we've included test-lib.sh, since we want it un-munged in the re-exec'd version of the script. But if we want to convert it from a relative to an absolute path, we must do so before we load test-lib.sh, since it will change our working directory. So we compute the absolute directory first, store it away, then include test-lib.sh, and finally assign to GIT_TEST_INSTALLED as appropriate.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-03-18  pack-objects: default to writing bitmap hash-cache  (Jeff King, 2 files, -3/+1)

Enabling pack.writebitmaphashcache should always be a performance win. It costs only 4 bytes per object on disk, and the timings in ae4f07fbcc (pack-bitmap: implement optional name_hash cache, 2013-12-21) show it improving fetch and partial-bitmap clone times by 40-50%.

The only reason we didn't enable it by default at the time is that early versions of JGit's bitmap reader complained about the presence of optional header bits it didn't understand. But that was changed in JGit's d2fa3987a (Use bitcheck to check for presence of OPT_FULL option, 2013-10-30), which made it into JGit v3.5.0 in late 2014.

So let's turn this option on by default. It's backwards-compatible with all versions of Git, and if you are also using JGit on the same repository, you'd only run into problems using a version that's almost 5 years old.

We'll drop the manual setting from all of our test scripts, including perf tests. This isn't strictly necessary, but it has two advantages:

  1. If the hash-cache ever stops being enabled by default, our perf regression tests will notice.

  2. We can use the modified perf tests to show off the behavior of an otherwise unconfigured repo, as shown below.

These are the results of a few of the perf tests against linux.git that showed interesting results. You can see the expected speedup in 5310.4, which was noted in ae4f07fbcc. Curiously, 5310.8 did not improve (and actually got slower), despite seeing the opposite in ae4f07fbcc. I don't have an explanation for that. The tests from p5311 did not exist back then, but do show improvements (a smaller pack due to better deltas, which we found in less time).

  Test                             HEAD^               HEAD
  -------------------------------------------------------------------------------------
  5310.4: simulated fetch          7.39(22.70+0.25)    5.64(11.43+0.22) -23.7%
  5310.8: clone (partial bitmap)   18.45(24.83+1.19)   19.94(28.40+1.36) +8.1%
  5311.31: server (128 days)       0.41(1.13+0.05)     0.34(0.72+0.02) -17.1%
  5311.32: size (128 days)         7.4M                7.0M -4.8%
  5311.33: client (128 days)       1.33(1.49+0.06)     1.29(1.37+0.12) -3.0%

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

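The setting being made the default here is an ordinary boolean pack config; before this change, a script or repository that wanted the hash-cache had to opt in explicitly, roughly like this (a minimal illustration, not a quote of the removed test code):

    git config pack.writebitmaphashcache true
    git repack -adb    # -b/--write-bitmap-index writes the bitmap, now including the hash-cache
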
2019-02-14  prune: use bitmaps for reachability traversal  (Jeff King, 1 file, -0/+11)

Pruning generally has to traverse the whole commit graph in order to see which objects are reachable. This is the exact problem that reachability bitmaps were meant to solve, so let's use them (if they're available, of course).

Here are timings on git.git:

  Test                         HEAD^             HEAD
  ------------------------------------------------------------------------
  5304.6: prune with bitmaps   3.65(3.56+0.09)   1.01(0.92+0.08) -72.3%

And on linux.git:

  Test                         HEAD^               HEAD
  --------------------------------------------------------------------------
  5304.6: prune with bitmaps   35.05(34.79+0.23)   3.00(2.78+0.21) -91.4%

The tests show a pretty optimal case, as we'll have just repacked and should have pretty good coverage of all refs with our bitmaps. But that's actually pretty realistic: normally prune is run via "gc" right after repacking.

A few notes on the implementation:

  - the change is actually in reachable.c, so it would improve reachability traversals by "reflog expire --stale-fix", as well. Those aren't performed regularly, though (a normal "git gc" doesn't use --stale-fix), so they're not really worth measuring. There's a low chance of regressing that caller, since the use of bitmaps is totally transparent from the caller's perspective.

  - The bitmap case could actually get away without creating a "struct object", and instead the caller could just look up each object id in the bitmap result. However, this would be a marginal improvement in runtime, and it would make the callers much more complicated. They'd have to handle both the bitmap and non-bitmap cases separately, and in the case of git-prune, we'd also have to tweak prune_shallow(), which relies on our SEEN flags.

  - Because we do create real object structs, we go through a few contortions to create ones of the right type. This isn't strictly necessary (lookup_unknown_object() would suffice), but it's more memory efficient to use the correct types, since we already know them.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-02-14  prune: lazily perform reachability traversal  (Jeff King, 1 file, -0/+24)

The general strategy of "git prune" is to do a full reachability walk, then for each loose object see if we found it in our walk. But if we don't have any loose objects, we don't need to do the expensive walk in the first place.

This patch postpones that walk until the first time we need to see its results.

Note that this is really a specific case of a more general optimization, which is that we could traverse only far enough to find the object under consideration (i.e., stop the traversal when we find it, then pick up again when asked about the next object, etc). That could save us in some instances from having to do a full walk. But it's actually a bit tricky to do with our traversal code, and you'd need to do a full walk anyway if you have even a single unreachable object (which you generally do, if any objects are actually left after running git-repack).

So in practice this lazy-load of the full walk catches one easy but common case (i.e., you've just repacked via git-gc, and there's nothing unreachable).

The perf script is fairly contrived, but it does show off the improvement:

  Test                            HEAD^             HEAD
  -------------------------------------------------------------------------
  5304.4: prune with no objects   3.66(3.60+0.05)   0.00(0.00+0.00) -100.0%

and would let us know if we accidentally regress this optimization.

Note also that we need to take special care with prune_shallow(), which relies on us having performed the traversal. So this optimization can only kick in for a non-shallow repository. Since this is easy to get wrong and is not covered by existing tests, let's add an extra test to t5304 that covers this case explicitly.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-11-20tests: send "bug in the test script" errors to the script's stderrLibravatar SZEDER Gábor1-2/+2
Some of the functions in our test library check that they were invoked properly with conditions like this: test "$#" = 2 || error "bug in the test script: not 2 parameters to test-expect-success" If this particular condition is triggered, then 'error' will abort the whole test script with a bold red error message [1] right away. However, under certain circumstances the test script will be aborted completely silently, namely if: - a similar condition in a test helper function like 'test_line_count' is triggered, - which is invoked from the test script's "main" shell [2], - and the test script is run manually (i.e. './t1234-foo.sh' as opposed to 'make t1234-foo.sh' or 'make test') [3] - and without the '--verbose' option, because the error message is printed from within 'test_eval_', where standard output is redirected either to /dev/null or to a log file. The only indication that something is wrong is that not all tests in the script are executed and at the end of the test script's output there is no "# passed all N tests" message, which are subtle and can easily go unnoticed, as I had to experience myself. Send these "bug in the test script" error messages directly to the test scripts standard error and thus to the terminal, so those bugs will be much harder to overlook. Instead of updating all ~20 such 'error' calls with a redirection, let's add a BUG() function to 'test-lib.sh', wrapping an 'error' call with the proper redirection and also including the common prefix of those error messages, and convert all those call sites [4] to use this new BUG() function instead. [1] That particular error message from 'test_expect_success' is printed in color only when running with or without '--verbose'; with '--tee' or '--verbose-log' the error is printed without color, but it is printed to the terminal nonetheless. [2] If such a condition is triggered in a subshell of a test, then 'error' won't be able to abort the whole test script, but only the subshell, which in turn causes the test to fail in the usual way, indicating loudly and clearly that something is wrong. [3] Well, 'error' aborts the test script the same way when run manually or by 'make' or 'prove', but both 'make' and 'prove' pay attention to the test script's exit status, and even a silently aborted test script would then trigger those tools' usual noticable error messages. [4] Strictly speaking, not all those 'error' calls need that redirection to send their output to the terminal, see e.g. 'test_expect_success' in the opening example, but I think it's better to be consistent. Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
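A minimal sketch of what such a helper and a converted call site could look like (this is an illustration, not a quote of test-lib.sh; it assumes the library keeps a copy of the original stderr on file descriptor 7, which is what makes the message reach the terminal):

    BUG () {
            error >&7 "bug in the test script: $*"
    }

    # a helper such as test_line_count would then check its arguments with:
    test "$#" = 3 || BUG "not 3 parameters to test_line_count"
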
2018-11-12  p3400: replace calls to `git checkout -b' by `git checkout -B'  (Alban Gruin, 1 file, -5/+5)
p3400 makes a copy of the current repository to test git-rebase performance, and creates new branches in the copy with `git checkout -b'. If the original repository has branches with the same name as the script is trying to create, this operation will fail. This replaces these calls by `git checkout -B' to force the creation and update of these branches. Signed-off-by: Alban Gruin <alban.gruin@gmail.com> Acked-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
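The difference between the two flags is exactly what the fix relies on; a quick illustration:

    git checkout -b perf-topic    # fails if a branch named "perf-topic" already exists
    git checkout -B perf-topic    # creates "perf-topic", or resets it to HEAD if it exists
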
2018-10-10  Merge branch 'ab/fsck-skiplist'  (Junio C Hamano, 2 files, -0/+53)

Update fsck.skipList implementation and documentation.

  * ab/fsck-skiplist:
    fsck: support comments & empty lines in skipList
    fsck: use oidset instead of oid_array for skipList
    fsck: use strbuf_getline() to read skiplist file
    fsck: add a performance test for skipList
    fsck: add a performance test
    fsck: document that skipList input must be unabbreviated
    fsck: document and test commented & empty line skipList input
    fsck: document and test sorted skipList input
    fsck tests: add a test for no skipList input
    fsck tests: setup of bogus commit object

2018-09-12  fsck: add a performance test for skipList  (René Scharfe, 1 file, -0/+40)
Create a performance test to see how the skipList implementation performs. First we setup N bad commits, then we see how progressively working our way up to 0..N in increments of 10x does. I.e. the needle(s) in the haystack get progressively more numerous. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
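fsck.skipList points at a file of unabbreviated object names to ignore, so the kind of timing test described above boils down to something like this hedged sketch (the file name and test title are hypothetical):

    # skiplist.txt holds one unabbreviated object ID per line
    test_perf 'fsck with skipList' '
            git -c fsck.skipList=skiplist.txt fsck
    '
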
2018-09-12  fsck: add a performance test  (Ævar Arnfjörð Bjarmason, 1 file, -0/+13)
Add a plain performance test for "fsck". This test will not be used to / referred to in any upcoming commit of mine in this series, but having a simple test for fsck performance is valuable, so let's add it while we're at it. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-20  t/perf: add perf tests for fetches from a bitmapped server  (Jeff King, 1 file, -0/+45)
A server with bitmapped packs can serve a clone very quickly. However, fetches are not necessarily made any faster, because we spend a lot less time in object traversal (which is what bitmaps help with) and more time finding deltas (because we may have to throw out on-disk deltas if the client does not have the base). As a first step to making this faster, this patch introduces a new perf script to measure fetches into a repo of various ages from a fully-bitmapped server. We separately measure the work done by the server (in pack-objects) and that done by the client (in index-pack). Furthermore, we measure the size of the resulting pack. Breaking it down like this (instead of just doing a regular "git fetch") lets us see how much each side benefits from any changes. And since we know the pack size, if we estimate the network speed, then one could calculate a complete wall-clock time for the operation (though the script does not do this automatically). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
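A rough sketch of how the server half, the pack size, and the client half can be measured separately (the file names, repository names, and rev arguments are placeholders; the real p5311 script is more elaborate):

    test_perf 'server (pack-objects)' '
            git -C server.git pack-objects --revs --stdout --delta-base-offset \
                    <wants >fetch.pack
    '

    test_size 'size of pack' '
            wc -c <fetch.pack
    '

    test_perf 'client (index-pack)' '
            git -C client.git index-pack --stdin <fetch.pack
    '
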
2018-08-20  t/perf: add infrastructure for measuring sizes  (Jeff King, 3 files, -5/+81)
The main objective of scripts in the perf framework is to run "test_perf", which measures the time it takes to run some operation. However, it can also be interesting to see the change in the output size of certain operations. This patch introduces test_size, which records a single numeric output from the test and shows it in the aggregated output (with pretty printing and relative size comparison). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
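Usage mirrors test_perf, except that the body is expected to print a single number instead of being timed; for example (an illustrative sketch, not taken from an actual script):

    test_size 'size of repository' '
            du -sk .git | cut -f1
    '
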
2018-08-20  t/perf: factor out percent calculations  (Jeff King, 1 file, -9/+12)
This will let us reuse the code when we add new values to aggregate besides times. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-20  t/perf: factor boilerplate out of test_perf  (Jeff King, 1 file, -26/+35)
About half of test_perf() is boilerplate preparing to run _any_ test, and the other half is specifically running a timing test. Let's split it into two functions, so that we can reuse the boilerplate in future commits. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-23  Merge branch 'cc/perf-bisect'  (Junio C Hamano, 1 file, -0/+6)

Performance test updates.

  * cc/perf-bisect:
    perf/bisect_run_script: disable codespeed

2018-05-06  perf/bisect_run_script: disable codespeed  (Christian Couder, 1 file, -0/+6)

When bisecting a performance regression using a config file, `./bisect_regression --config my_perf.conf` for example, the config file can contain Codespeed configuration which would instruct the 'aggregate.perl' script called by the 'run' script to output results in the Codespeed format and maybe to try to send this output to a Codespeed server.

This is unfortunate because the 'bisect_run_script' relies on the regular output from 'aggregate.perl' to measure performance, so let's disable Codespeed output and sending results to a Codespeed server.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-04-26  perf/aggregate: use Getopt::Long for option parsing  (Christian Couder, 1 file, -36/+26)

When passing an option '--foo' that it does not recognize, the aggregate.perl script should die with a helpful error message like:

    Unknown option: foo
    ./aggregate.perl [options] [--] [<dir_or_rev>...] [--] \
            [<test_script>...] >
    Options:
      --codespeed          * Format output for Codespeed
      --reponame <str>     * Send given reponame to codespeed
      --sort-by <str>      * Sort output (only "regression" criteria is supported)

rather than:

    fatal: Needed a single revision
    rev-parse --verify --foo: command returned error: 128

To implement that let's use Getopt::Long for option parsing instead of the current manual and sloppy parsing. This should save some code and make option parsing simpler, tighter and safer.

This will avoid something like 'foo--sort-by=regression' being handled as if '--sort-by=regression' had been used, for example.

As Getopt::Long eats '--' at the end of options, this changes a bit the way '--' is handled as we can now have '--' both after the options and before the scripts.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-04-25  Merge branch 'cc/perf-bisect'  (Junio C Hamano, 3 files, -10/+166)

Performance measuring framework in t/perf learned to help bisecting performance regressions.

  * cc/perf-bisect:
    t/perf: add scripts to bisect performance regressions
    perf/run: add --subsection option

2018-04-11  t/perf: add scripts to bisect performance regressions  (Christian Couder, 2 files, -0/+120)
The new bisect_regression script can be used to automatically bisect performance regressions. It will pass the new bisect_run_script to `git bisect run`. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-11  perf/run: add --subsection option  (Christian Couder, 1 file, -10/+46)
This new option makes it possible to run perf tests as defined in only one subsection of a config file. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
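With a config file that defines a subsection of perf settings, an invocation might then look like the line below (the subsection and script names are made up for illustration; how the run script locates the config file is left to its usual configuration handling):

    ./run --subsection with-bitmaps -- p5310-pack-bitmaps.sh
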
2018-04-11  Merge branch 'nd/combined-test-helper'  (Junio C Hamano, 5 files, -13/+13)

Small test-helper programs have been consolidated into a single binary.

  * nd/combined-test-helper: (36 commits)
    t/helper: merge test-write-cache into test-tool
    t/helper: merge test-wildmatch into test-tool
    t/helper: merge test-urlmatch-normalization into test-tool
    t/helper: merge test-subprocess into test-tool
    t/helper: merge test-submodule-config into test-tool
    t/helper: merge test-string-list into test-tool
    t/helper: merge test-strcmp-offset into test-tool
    t/helper: merge test-sigchain into test-tool
    t/helper: merge test-sha1-array into test-tool
    t/helper: merge test-scrap-cache-tree into test-tool
    t/helper: merge test-run-command into test-tool
    t/helper: merge test-revision-walking into test-tool
    t/helper: merge test-regex into test-tool
    t/helper: merge test-ref-store into test-tool
    t/helper: merge test-read-cache into test-tool
    t/helper: merge test-prio-queue into test-tool
    t/helper: merge test-path-utils into test-tool
    t/helper: merge test-online-cpus into test-tool
    t/helper: merge test-mktemp into test-tool
    t/helper: merge (unused) test-mergesort into test-tool
    ...

2018-03-27  perf/aggregate: add --sort-by=regression option  (Christian Couder, 1 file, -1/+58)

One of the most interesting things one can look for in performance test results is possible performance regressions. This new option makes it easy to spot such possible regressions.

This new option is named '--sort-by=regression' to make it possible and easy to add other ways to sort the results, like for example '--sort-by=utime'.

If we would like to sort according to how much the stime regressed we could also add a new option called '--sort-by=regression:stime'. Then '--sort-by=regression' could become a synonym for '--sort-by=regression:rtime'.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

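Combined with the usage summary shown earlier in this log, a typical invocation would look something like the line below (the revisions and script name are placeholders):

    ./aggregate.perl --sort-by=regression -- v2.16.0 v2.17.0 -- p0001-rev-list.sh
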
2018-03-27  perf/aggregate: add display_dir()  (Christian Couder, 1 file, -4/+7)
This new helper function will be reused in a subsequent commit. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-27  t/helper: merge test-write-cache into test-tool  (Nguyễn Thái Ngọc Duy, 1 file, -1/+1)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-27  t/helper: merge test-string-list into test-tool  (Nguyễn Thái Ngọc Duy, 1 file, -1/+1)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-27  t/helper: merge test-read-cache into test-tool  (Nguyễn Thái Ngọc Duy, 1 file, -1/+1)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-27  t/helper: merge test-drop-caches into test-tool  (Nguyễn Thái Ngọc Duy, 1 file, -6/+6)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-27  t/helper: merge test-lazy-init-name-hash into test-tool  (Nguyễn Thái Ngọc Duy, 1 file, -4/+4)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-27  perf: use GIT_PERF_REPEAT_COUNT=3 by default even without config file  (René Scharfe, 1 file, -5/+3)
9ba95ed23c (perf/run: update get_var_from_env_or_config() for subsections) stopped setting a default value for GIT_PERF_REPEAT_COUNT if no perf config file is present, because get_var_from_env_or_config returns early in that case. Fix it by setting the default value after calling this function. Its fifth parameter is not used for any other variable, so remove the associated code. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
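GIT_PERF_REPEAT_COUNT controls how many times each timed test is repeated, so the fix restores the usual default of 3 even when no perf config file exists; overriding it from the environment keeps working, for example:

    GIT_PERF_REPEAT_COUNT=5 ./run p0001-rev-list.sh
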
2018-02-15  Merge branch 'cc/perf-aggregate'  (Junio C Hamano, 1 file, -11/+37)

"make perf" enhancement.

  * cc/perf-aggregate:
    perf/aggregate: sort JSON fields in output
    perf/aggregate: add --reponame option
    perf/aggregate: add --subsection option

2018-02-13  Merge branch 'ab/simplify-perl-makefile'  (Junio C Hamano, 1 file, -1/+1)

The build procedure for perl/ part has been greatly simplified by weaning ourselves off of MakeMaker.

  * ab/simplify-perl-makefile:
    perl: treat PERLLIB_EXTRA as an extra path again
    perl: avoid *.pmc and fix Error.pm further
    Makefile: replace perl/Makefile.PL with simple make rules

2018-02-02  perf/aggregate: sort JSON fields in output  (Christian Couder, 1 file, -1/+1)
It is much easier to diff the output against a previous one when the fields are sorted. Helped-by: Philip Oakley <philipoakley@iee.org> Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-02  perf/aggregate: add --reponame option  (Christian Couder, 1 file, -2/+13)
This makes it easier to use the aggregate script on the command line when one wants to get the "environment" fields set in the codespeed output. Previously setting GIT_REPO_NAME was needed for this purpose. Helped-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-02  perf/aggregate: add --subsection option  (Christian Couder, 1 file, -9/+24)
This makes it easier to use the aggregate script on the command line, to get results from subsections. Previously setting GIT_PERF_SUBSECTION was needed for this purpose. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-23  Merge branch 'cc/codespeed'  (Junio C Hamano, 2 files, -54/+137)

"perf" test output can be sent to codespeed server.

  * cc/codespeed:
    perf/run: read GIT_PERF_REPO_NAME from perf.repoName
    perf/run: learn to send output to codespeed server
    perf/run: learn about perf.codespeedOutput
    perf/run: add conf_opts argument to get_var_from_env_or_config()
    perf/aggregate: implement codespeed JSON output
    perf/aggregate: refactor printing results
    perf/aggregate: fix checking ENV{GIT_PERF_SUBSECTION}

2018-01-05  perf/run: read GIT_PERF_REPO_NAME from perf.repoName  (Christian Couder, 1 file, -0/+3)
The GIT_PERF_REPO_NAME env variable is used in the `aggregate.perl` script to set the 'environment' field in the JSON Codespeed output. Let's make it easy to set this variable by setting it in a config file. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-05  perf/run: learn to send output to codespeed server  (Christian Couder, 1 file, -1/+11)
Let's make it possible to set in a config file the URL of a codespeed server. And then let's make the `run` script send the perf test results to this URL at the end of the tests. This should make is possible to easily automate the process of running perf tests and having their results available in Codespeed. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-05  perf/run: learn about perf.codespeedOutput  (Christian Couder, 1 file, -1/+6)
Let's make it possible to set in a config file the output format (regular or codespeed) of the perf tests. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
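Taken together with the previous commits, the Codespeed-related settings live in the perf config file; a minimal sketch of populating one with plain `git config` (the key names come from the commit subjects above, the values and file name are made up):

    git config -f my_perf.conf perf.repoName git
    git config -f my_perf.conf perf.codespeedOutput true
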
2018-01-05  perf/run: add conf_opts argument to get_var_from_env_or_config()  (Christian Couder, 1 file, -5/+6)
Let's make it possible to use `git config` type specifiers like `--int` or `--bool`, so that config values are converted to the canonical form and easier to use. This additional argument is now the fourth argument of get_var_from_env_or_config() instead of the fifth because we want the default value argument to be unset if it is not passed, and this is simpler if it is the last argument. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
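The `git config` type specifiers being referred to canonicalize values when reading, which is what makes them handy for this helper; for instance (key names chosen for illustration):

    git config --bool -f my_perf.conf perf.codespeedOutput   # prints "true" or "false"
    git config --int -f my_perf.conf perf.repeatCount        # prints a canonical integer
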
2018-01-05  perf/aggregate: implement codespeed JSON output  (Christian Couder, 1 file, -2/+62)
Codespeed (https://github.com/tobami/codespeed/) is an open source project that can be used to track how some software performs over time. It stores performance test results in a database and can show nice graphs and charts on a web interface. As it can be interesting to use Codespeed to see how Git performance evolves over time and releases, let's implement a Codespeed output in "perf/aggregate.perl". Helped-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-05  perf/aggregate: refactor printing results  (Christian Couder, 1 file, -46/+50)
As we want to implement another kind of output than the current output for the perf test results, let's refactor the existing code that outputs the results in its own print_default_results() function. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-05  perf/aggregate: fix checking ENV{GIT_PERF_SUBSECTION}  (Christian Couder, 1 file, -1/+1)
The way we check ENV{GIT_PERF_SUBSECTION} could trigger comparison between undef and "" that may be flagged by use of strict & warnings. Let's fix that. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-04  perf: amend the grep tests to test grep.threads  (Ævar Arnfjörð Bjarmason, 2 files, -21/+86)
Ever since 5b594f457a ("Threaded grep", 2010-01-25) the number of threads git-grep uses under PTHREADS has been hardcoded to 8, but there's no performance test to check whether this is an optimal setting. Amend the existing tests for the grep engines to support a mode where this can be tested, e.g.: GIT_PERF_GREP_THREADS='1 8 16' GIT_PERF_LARGE_REPO=~/g/linux ./run p782* Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-28  Merge branch 'bp/fsmonitor'  (Junio C Hamano, 1 file, -2/+1)

Test fix.

  * bp/fsmonitor:
    p7519: improve check for prerequisite WATCHMAN