path: root/t/helper
2021-04-13  lookup_unknown_object(): take a repository argument  (Jeff King; 1 file, -3/+3)
All of the other lookup_foo() functions take a repository argument, but lookup_unknown_object() was never converted, and it uses the_repository internally. Let's fix that. We could leave a wrapper that uses the_repository, but there aren't that many calls, so we'll just convert them all. I looked briefly at each site to see if we had a repository struct (besides the_repository) we could pass, but none of them do (so this conversion to pass the_repository is a pure noop in each case, though it does take us one step closer to eventually getting rid of the_repository). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
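As an illustration (not the actual patch), a converted call site might look like the sketch below; the caller function is made up, but the new lookup_unknown_object() signature is as described above:

    #include "object.h"
    #include "repository.h"

    /*
     * Illustrative sketch only: after the change described above, callers
     * pass a repository explicitly instead of relying on the_repository
     * inside lookup_unknown_object().
     */
    static struct object *lookup_target(const struct object_id *oid)
    {
            /* before: return lookup_unknown_object(oid); */
            return lookup_unknown_object(the_repository, oid);
    }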
2021-04-08  Merge branch 'tb/reverse-midx'  (Junio C Hamano; 1 file, -4/+20)
An on-disk reverse-index to map the in-pack location of an object back to its object name across multiple packfiles is introduced.

* tb/reverse-midx:
  midx.c: improve cache locality in midx_pack_order_cmp()
  pack-revindex: write multi-pack reverse indexes
  pack-write.c: extract 'write_rev_file_order'
  pack-revindex: read multi-pack reverse indexes
  Documentation/technical: describe multi-pack reverse indexes
  midx: make some functions non-static
  midx: keep track of the checksum
  midx: don't free midx_name early
  midx: allow marking a pack as preferred
  t/helper/test-read-midx.c: add '--show-objects'
  builtin/multi-pack-index.c: display usage on unrecognized command
  builtin/multi-pack-index.c: don't enter bogus cmd_mode
  builtin/multi-pack-index.c: split sub-commands
  builtin/multi-pack-index.c: define common usage with a macro
  builtin/multi-pack-index.c: don't handle 'progress' separately
  builtin/multi-pack-index.c: inline 'flags' with options
2021-04-02  Merge branch 'jh/simple-ipc'  (Junio C Hamano; 3 files, -0/+789)
A simple IPC interface gets introduced to build services like fsmonitor on top.

* jh/simple-ipc:
  t0052: add simple-ipc tests and t/helper/test-simple-ipc tool
  simple-ipc: add Unix domain socket implementation
  unix-stream-server: create unix domain socket under lock
  unix-socket: disallow chdir() when creating unix domain sockets
  unix-socket: add backlog size option to unix_stream_listen()
  unix-socket: eliminate static unix_stream_socket() helper function
  simple-ipc: add win32 implementation
  simple-ipc: design documentation for new IPC mechanism
  pkt-line: add options argument to read_packetized_to_strbuf()
  pkt-line: add PACKET_READ_GENTLE_ON_READ_ERROR option
  pkt-line: do not issue flush packets in write_packetized_*()
  pkt-line: eliminate the need for static buffer in packet_write_gently()
2021-03-30  t/helper/test-read-midx.c: add '--show-objects'  (Taylor Blau; 1 file, -4/+20)
The 'read-midx' helper is used in places like t5319 to display basic information about a multi-pack-index. In the next patch, the MIDX writing machinery will learn a new way to choose from which pack an object is selected when multiple copies of that object exist. To disambiguate which pack introduces an object so that this feature can be tested, add a '--show-objects' option which displays additional information about each object in the MIDX. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-24  Merge branch 'nk/diff-index-fsmonitor'  (Junio C Hamano; 1 file, -2/+2)
"git diff-index" codepath has been taught to trust fsmonitor status to reduce number of lstat() calls. * nk/diff-index-fsmonitor: fsmonitor: add perf test for git diff HEAD fsmonitor: add assertion that fsmonitor is valid to check_removed fsmonitor: skip lstat deletion check during git diff-index
2021-03-22  t0052: add simple-ipc tests and t/helper/test-simple-ipc tool  (Jeff Hostetler; 3 files, -0/+789)
Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism. Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc" functions. When the tool is invoked with "run-daemon", it runs a server to listen for "simple-ipc" connections on a test socket or named pipe and responds to a set of commands to exercise/stress the communication setup. When the tool is invoked with "start-daemon", it spawns a "run-daemon" command in the background and waits for the server to become ready before exiting. (This helps make unit tests in t0052 more predictable and avoids the need for arbitrary sleeps in the test script.) The tool also has a series of client "send" commands to send commands and data to a server instance. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-19  Merge branch 'jc/calloc-fix'  (Junio C Hamano; 1 file, -1/+1)
Code clean-up.

* jc/calloc-fix:
  xcalloc: use CALLOC_ARRAY() when applicable
2021-03-18  fsmonitor: add perf test for git diff HEAD  (Nipunn Koorapati; 1 file, -2/+2)
Update the xargs call so that if your large repo contains symlinks, test-tool chmtime failure does not end the script.

On Linux

Test                                                       this tree           upstream/master
---------------------------------------------------------------------------------------------------------
7519.4: status (fsmonitor=fsmonitor-watchman)              0.52(0.43+0.10)     0.53(0.49+0.05) +1.9%
7519.5: status -uno (fsmonitor=fsmonitor-watchman)         0.21(0.15+0.07)     0.22(0.13+0.09) +4.8%
7519.6: status -uall (fsmonitor=fsmonitor-watchman)        1.65(0.93+0.71)     1.69(1.03+0.65) +2.4%
7519.7: status (dirty) (fsmonitor=fsmonitor-watchman)      11.99(11.34+1.58)   11.95(11.02+1.79) -0.3%
7519.8: diff (fsmonitor=fsmonitor-watchman)                0.25(0.17+0.26)     0.25(0.18+0.26) +0.0%
7519.9: diff HEAD (fsmonitor=fsmonitor-watchman)           0.39(0.25+0.34)     0.89(0.35+0.74) +128.2%
7519.10: diff -- 0_files (fsmonitor=fsmonitor-watchman)    0.16(0.13+0.04)     0.16(0.12+0.05) +0.0%
7519.11: diff -- 10_files (fsmonitor=fsmonitor-watchman)   0.16(0.12+0.05)     0.16(0.12+0.05) +0.0%
7519.12: diff -- 100_files (fsmonitor=fsmonitor-watchman)  0.16(0.12+0.05)     0.16(0.12+0.05) +0.0%
7519.13: diff -- 1000_files (fsmonitor=fsmonitor-watchman) 0.16(0.11+0.06)     0.16(0.12+0.05) +0.0%
7519.14: diff -- 10000_files (fsmonitor=fsmonitor-watchman) 0.18(0.13+0.06)    0.17(0.10+0.08) -5.6%
7519.15: add (fsmonitor=fsmonitor-watchman)                2.25(1.53+0.68)     2.25(1.47+0.74) +0.0%
7519.18: status (fsmonitor=disabled)                       0.88(0.73+1.03)     0.89(0.67+1.08) +1.1%
7519.19: status -uno (fsmonitor=disabled)                  0.45(0.43+0.89)     0.45(0.34+0.98) +0.0%
7519.20: status -uall (fsmonitor=disabled)                 1.88(1.16+1.58)     1.88(1.22+1.51) +0.0%
7519.21: status (dirty) (fsmonitor=disabled)               7.53(7.05+2.11)     7.53(6.98+2.04) +0.0%
7519.22: diff (fsmonitor=disabled)                         0.42(0.37+0.92)     0.42(0.38+0.91) +0.0%
7519.23: diff HEAD (fsmonitor=disabled)                    0.44(0.41+0.90)     0.44(0.40+0.91) +0.0%
7519.24: diff -- 0_files (fsmonitor=disabled)              0.13(0.09+0.05)     0.13(0.09+0.05) +0.0%
7519.25: diff -- 10_files (fsmonitor=disabled)             0.13(0.10+0.04)     0.13(0.10+0.04) +0.0%
7519.26: diff -- 100_files (fsmonitor=disabled)            0.13(0.09+0.05)     0.13(0.10+0.04) +0.0%
7519.27: diff -- 1000_files (fsmonitor=disabled)           0.13(0.09+0.06)     0.13(0.09+0.05) +0.0%
7519.28: diff -- 10000_files (fsmonitor=disabled)          0.14(0.11+0.05)     0.14(0.10+0.05) +0.0%
7519.29: add (fsmonitor=disabled)                          2.43(1.61+1.64)     2.43(1.69+1.57) +0.0%

On linux (2.29.2 vs w/ this patch):

  nipunn@nipunn-dbx:~/src/server3$ strace -f -c git diff 2>&1 | grep lstat
    0.04    0.000063    3        20         6 lstat
  nipunn@nipunn-dbx:~/src/server3$ strace -f -c git diff HEAD 2>&1 | grep lstat
   94.98    5.242262   10    523783        13 lstat
  nipunn@nipunn-dbx:~/src/server3$ strace -f -c ../git/bin-wrappers/git diff 2>&1 | grep lstat
    0.38    0.000032    5         7         3 lstat
  nipunn@nipunn-dbx:~/src/server3$ strace -f -c ../git/bin-wrappers/git diff HEAD 2>&1 | grep lstat
   99.44    0.741892    9     81634        10 lstat

On mac (2.29.2 vs w/ this patch):

  nipunn-mbp:server nipunn$ sudo dtruss -L -f -c git diff 2>&1 | grep "^lstat64 "
  lstat64         8
  nipunn-mbp:server nipunn$ sudo dtruss -L -f -c git diff HEAD 2>&1 | grep "^lstat64 "
  lstat64         120242
  nipunn-mbp:server nipunn$ sudo dtruss -L -f -c ../git/bin-wrappers/git diff 2>&1 | grep "^lstat64 "
  lstat64         4
  nipunn-mbp:server nipunn$ sudo dtruss -L -f -c ../git/bin-wrappers/git diff HEAD 2>&1 | grep "^lstat64 "
  lstat64         4497

There are still a bunch of lstats - on directories, but not every file. Progress!

Signed-off-by: Nipunn Koorapati <nipunn@dropbox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-15  xcalloc: use CALLOC_ARRAY() when applicable  (Junio C Hamano; 1 file, -1/+1)
These are for the codebase before Git 2.31. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-02-17  Merge branch 'jt/trace2-BUG'  (Junio C Hamano; 1 file, -0/+9)
Even though invocations of "die()" were logged to the trace2 system, "BUG()"s were not, which has been corrected.

* jt/trace2-BUG:
  usage: trace2 BUG() invocations
2021-02-17  Merge branch 'ak/corrected-commit-date'  (Junio C Hamano; 1 file, -0/+4)
The commit-graph learned to use corrected commit dates instead of the generation number to help topological revision traversal.

* ak/corrected-commit-date:
  doc: add corrected commit date info
  commit-reach: use corrected commit dates in paint_down_to_common()
  commit-graph: use generation v2 only if entire chain does
  commit-graph: implement generation data chunk
  commit-graph: implement corrected commit date
  commit-graph: return 64-bit generation number
  commit-graph: add a slab to store topological levels
  t6600-test-reach: generalize *_three_modes
  commit-graph: consolidate fill_commit_graph_info
  revision: parse parent in indegree_walk_step()
  commit-graph: fix regression when computing Bloom filters
2021-02-10  Merge branch 'ab/grep-pcre-invalid-utf8'  (Junio C Hamano; 3 files, -0/+14)
Update support for invalid UTF-8 in PCRE2.

* ab/grep-pcre-invalid-utf8:
  grep/pcre2: better support invalid UTF-8 haystacks
  grep/pcre2 tests: don't rely on invalid UTF-8 data test
2021-02-09  usage: trace2 BUG() invocations  (Jonathan Tan; 1 file, -0/+9)
die() messages are traced in trace2, but BUG() messages are not. Anyone tracking die() messages would have even more reason to track BUG(). Therefore, write to trace2 when BUG() is invoked. Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Helped-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-24  grep/pcre2: better support invalid UTF-8 haystacks  (Ævar Arnfjörð Bjarmason; 3 files, -0/+14)
Improve the support for invalid UTF-8 haystacks given a non-ASCII needle when using the PCREv2 backend.

This is a more complete fix for a bug I started to fix in 870eea8166 (grep: do not enter PCRE2_UTF mode on fixed matching, 2019-07-26); now that PCREv2 has the PCRE2_MATCH_INVALID_UTF mode we can make use of it.

This fixes the sort of case described in 8a5999838e (grep: stess test PCRE v2 on invalid UTF-8 data, 2019-07-26), i.e.:

  - The subject string is non-ASCII (e.g. "ævar")
  - We're under a is_utf8_locale(), e.g. "en_US.UTF-8", not "C"
  - We are using --ignore-case, or we're a non-fixed pattern

If those conditions were satisfied and the match ran into non-valid UTF-8 data, PCREv2 might bark on it; in practice this only happened under the JIT backend (turned on by default on most platforms).

Ultimately this fixes a "regression" in b65abcafc7 ("grep: use PCRE v2 for optimized fixed-string search", 2019-07-01). I'm putting that in scare-quotes because before then we wouldn't properly support these complex case-folding, locale etc. cases either; it just broke in different ways.

There was a bug related to this, fixed in PCREv2 10.36; it can be worked around by setting the PCRE2_NO_START_OPTIMIZE flag. Let's do that in those cases, and add tests for the bug.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-21  refs: switch peel_ref() to peel_iterated_oid()  (Jeff King; 1 file, -13/+0)
The peel_ref() interface is confusing and error-prone: - it's typically used by ref iteration callbacks that have both a refname and oid. But since they pass only the refname, we may load the ref value from the filesystem again. This is inefficient, but also means we are open to a race if somebody simultaneously updates the ref. E.g., this: int some_ref_cb(const char *refname, const struct object_id *oid, ...) { if (!peel_ref(refname, &peeled)) printf("%s peels to %s", oid_to_hex(oid), oid_to_hex(&peeled); } could print nonsense. It is correct to say "refname peels to..." (you may see the "before" value or the "after" value, either of which is consistent), but mentioning both oids may be mixing before/after values. Worse, whether this is possible depends on whether the optimization to read from the current iterator value kicks in. So it is actually not possible with: for_each_ref(some_ref_cb); but it _is_ possible with: head_ref(some_ref_cb); which does not use the iterator mechanism (though in practice, HEAD should never peel to anything, so this may not be triggerable). - it must take a fully-qualified refname for the read_ref_full() code path to work. Yet we routinely pass it partial refnames from callbacks to for_each_tag_ref(), etc. This happens to work when iterating because there we do not call read_ref_full() at all, and only use the passed refname to check if it is the same as the iterator. But the requirements for the function parameters are quite unclear. Instead of taking a refname, let's instead take an oid. That fixes both problems. It's a little funny for a "ref" function not to involve refs at all. The key thing is that it's optimizing under the hood based on having access to the ref iterator. So let's change the name to make it clear why you'd want this function versus just peel_object(). There are two other directions I considered but rejected: - we could pass the peel information into the each_ref_fn callback. However, we don't know if the caller actually wants it or not. For packed-refs, providing it is essentially free. But for loose refs, we actually have to peel the object, which would be wasteful in most cases. We could likewise pass in a flag to the callback indicating whether the peeled information is known, but that complicates those callbacks, as they then have to decide whether to manually peel themselves. Plus it requires changing the interface of every callback, whether they care about peeling or not, and there are many of them. - we could make a function to return the peeled value of the current iterated ref (computing it if necessary), and BUG() otherwise. I.e.: int peel_current_iterated_ref(struct object_id *out); Each of the current callers is an each_ref_fn callback, so they'd mostly be happy. But: - we use those callbacks with functions like head_ref(), which do not use the iteration code. So we'd need to handle the fallback case there, anyway. - it's possible that a caller would want to call into generic code that sometimes is used during iteration and sometimes not. This encapsulates the logic to do the fast thing when possible, and fallback when necessary. The implementation is mostly obvious, but I want to call out a few things in the patch: - the test-tool coverage for peel_ref() is now meaningless, as it all collapses to a single peel_object() call (arguably they were pretty uninteresting before; the tricky part of that function is the fast-path we see during iteration, but these calls didn't trigger that). 
I've just dropped it entirely, though note that some other tests relied on the tags we created; I've moved that creation to the tests where it matters. - we no longer need to take a ref_store parameter, since we'd never look up a ref now. We do still rely on a global "current iterator" variable which _could_ be kept per-ref-store. But in practice this is only useful if there are multiple recursive iterations, at which point the more appropriate solution is probably a stack of iterators. No caller used the actual ref-store parameter anyway (they all call the wrapper that passes the_repository). - the original only kicked in the optimization when the "refname" pointer matched (i.e., not string comparison). We do likewise with the "oid" parameter here, but fall back to doing an actual oideq() call. This in theory lets us kick in the optimization more often, though in practice no current caller cares. It should never be wrong, though (peeling is a property of an object, so two refs pointing to the same object would peel identically). - the original took care not to touch the peeled out-parameter unless we found something to put in it. But no caller cares about this, and anyway, it is enforced by peel_object() itself (and even in the optimized iterator case, that's where we eventually end up). We can shorten the code and avoid an extra copy by just passing the out-parameter through the stack. Signed-off-by: Jeff King <peff@peff.net> Reviewed-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-18  commit-graph: implement generation data chunk  (Abhishek Kumar; 1 file, -0/+4)
As discovered by Ævar, we cannot increment graph version to distinguish between generation numbers v1 and v2 [1]. Thus, one of the prerequisites before implementing generation number v2 was to distinguish between graph versions in a backwards compatible manner.

We are going to introduce a new chunk called Generation DATa chunk (or GDAT). GDAT will store corrected committer date offsets whereas CDAT will still store topological level.

Old Git does not understand the GDAT chunk and would ignore it, reading topological levels from CDAT. New Git can parse GDAT and take advantage of newer generation numbers, falling back to topological levels when the GDAT chunk is missing (as it would happen with a commit-graph written by old Git).

We introduce a test environment variable 'GIT_TEST_COMMIT_GRAPH_NO_GDAT' which forces the commit-graph file to be written without a generation data chunk to emulate a commit-graph file written by old Git.

To minimize the space required to store corrected commit dates, Git stores corrected commit date offsets into the commit-graph file, instead of corrected commit dates. This saves us 4 bytes per commit, decreasing the GDAT chunk size by half, but it's possible for the offset to overflow the 4 bytes allocated for storage. As such overflows are and should be exceedingly rare, we use the following overflow management scheme:

We introduce a new commit-graph chunk, Generation Data OVerflow ('GDOV'), to store corrected commit dates for commits with offsets greater than GENERATION_NUMBER_V2_OFFSET_MAX. If the offset is greater than GENERATION_NUMBER_V2_OFFSET_MAX, we set the MSB of the offset and the other bits store the position of the corrected commit date in the GDOV chunk, similar to how the Extra Edge List is maintained.

We test the overflow-related code with the following repo history:

             F - N - U
            /         \
   U - N - U           N
            \         /
             N - F - N

Where the commits denoted by U have committer date of zero seconds since Unix epoch, the commits denoted by N have committer date of 1112354055 (default committer date for the test suite) seconds since Unix epoch and the commits denoted by F have committer date of (2 ^ 31 - 2) seconds since Unix epoch. The largest offset observed is 2 ^ 31, just large enough to overflow.

[1]: https://lore.kernel.org/git/87a7gdspo4.fsf@evledraar.gmail.com/

Signed-off-by: Abhishek Kumar <abhishekkumar8222@gmail.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Reviewed-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
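For illustration only, the offset encoding described above can be sketched like this; the helper name is made up and GENERATION_NUMBER_V2_OFFSET_MAX is assumed to be 2^31 - 1 (since the MSB is used as the overflow flag), so this is not the verbatim commit-graph code:

    #include <stdint.h>

    #define GENERATION_NUMBER_V2_OFFSET_MAX ((1U << 31) - 1)

    /* Encode a corrected-commit-date offset for the GDAT chunk (sketch). */
    static uint32_t encode_generation_offset(uint64_t offset, uint32_t gdov_pos)
    {
            if (offset <= GENERATION_NUMBER_V2_OFFSET_MAX)
                    return (uint32_t)offset;      /* small offset: stored directly */
            /* MSB set: remaining bits give the position of the full value in GDOV */
            return (1U << 31) | gdov_pos;
    }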
2020-11-25  Merge branch 'jx/t5411-flake-fix'  (Junio C Hamano; 1 file, -14/+40)
The exchange between receive-pack and proc-receive hook did not carefully check for errors.

* jx/t5411-flake-fix:
  receive-pack: use default version 0 for proc-receive
  receive-pack: gently write messages to proc-receive
  t5411: new helper filter_out_user_friendly_and_stable_output
2020-11-21  Merge branch 'en/strmap'  (Junio C Hamano; 1 file, -5/+4)
A specialization of hashmap that uses a string as key has been introduced. Hopefully it will see wider use over time.

* en/strmap:
  shortlog: use strset from strmap.h
  Use new HASHMAP_INIT macro to simplify hashmap initialization
  strmap: take advantage of FLEXPTR_ALLOC_STR when relevant
  strmap: enable allocations to come from a mem_pool
  strmap: add a strset sub-type
  strmap: split create_entry() out of strmap_put()
  strmap: add functions facilitating use as a string->int map
  strmap: enable faster clearing and reusing of strmaps
  strmap: add more utility functions
  strmap: new utility functions
  hashmap: provide deallocation function names
  hashmap: introduce a new hashmap_partial_clear()
  hashmap: allow re-use after hashmap_free()
  hashmap: adjust spacing to fix argument alignment
  hashmap: add usage documentation explaining hashmap_free[_entries]()
2020-11-18  Merge branch 'en/merge-ort-api-null-impl'  (Junio C Hamano; 3 files, -0/+213)
Preparation for a new merge strategy.

* en/merge-ort-api-null-impl:
  merge,rebase,revert: select ort or recursive by config or environment
  fast-rebase: demonstrate merge-ort's API via new test-tool command
  merge-ort-wrappers: new convience wrappers to mimic the old merge API
  merge-ort: barebones API of new merge strategy with empty implementation
2020-11-18  Merge branch 'ds/maintenance-part-3'  (Junio C Hamano; 3 files, -0/+37)
Parts of "git maintenance" to ease writing crontab entries (and other scheduling system configuration) for it. * ds/maintenance-part-3: maintenance: add troubleshooting guide to docs maintenance: use 'incremental' strategy by default maintenance: create maintenance.strategy config maintenance: add start/stop subcommands maintenance: add [un]register subcommands for-each-repo: run subcommands on configured repos maintenance: add --schedule option and config maintenance: optionally skip --auto process
2020-11-11  Use new HASHMAP_INIT macro to simplify hashmap initialization  (Elijah Newren; 1 file, -2/+1)
Now that hashmap has lazy initialization and a HASHMAP_INIT macro, hashmaps allocated on the stack can be initialized without a call to hashmap_init(), which in some cases makes the code a bit shorter. Convert some callsites over to take advantage of this. Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
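A minimal sketch of the pattern described above; the entry comparison function and usage are stand-ins, not code from the patch:

    #include "hashmap.h"

    /* Trivial comparison callback for the sketch: treat all entries as equal. */
    static int sketch_cmp(const void *data, const struct hashmap_entry *a,
                          const struct hashmap_entry *b, const void *key)
    {
            return 0;
    }

    static void sketch(void)
    {
            /* before: struct hashmap map; hashmap_init(&map, sketch_cmp, NULL, 0); */
            struct hashmap map = HASHMAP_INIT(sketch_cmp, NULL);

            /* ... use the map ... */
            hashmap_clear(&map);
    }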
2020-11-11  receive-pack: use default version 0 for proc-receive  (Jiang Xin; 1 file, -6/+10)
In the version negotiation phase between "receive-pack" and "proc-receive", "proc-receive" can send an empty flush-pkt to end the negotiation and use default version 0. Capabilities (such as "push-options") are not supported in version 0. Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-11  receive-pack: gently write messages to proc-receive  (Jiang Xin; 1 file, -10/+32)
Johannes found a flaky hang in `t5411/test-0013-bad-protocol.sh` in the osx-clang job of the CI/PR builds, and ran into an issue when using the `--stress` option with the following error messages:

    fatal: unable to write flush packet: Broken pipe
    send-pack: unexpected disconnect while reading sideband packet
    fatal: the remote end hung up unexpectedly

In this test case, the "proc-receive" hook sends an error message and dies early. "receive-pack" on the other side of the pipe should forward the error message of the "proc-receive" hook to the client side, but it fails to do so. This is because "receive-pack" uses `packet_write_fmt()` and `packet_flush()` to write pkt-line messages to the "proc-receive" hook, and these functions die immediately when the pipe is broken. Using the "gently" forms of these functions will give more predictable output.

Add more "--die-*" options to the test helper to test different stages of the protocol between "receive-pack" and the "proc-receive" hook.

Reported-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
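For illustration, the "gently" pattern described above might look like the following in a caller; the function name and error text are made up, not the actual receive-pack change:

    #include "git-compat-util.h"
    #include "pkt-line.h"

    /* Report one ref to the hook without dying if the pipe is already broken. */
    static int report_ref_to_hook(int hook_fd, const char *ref)
    {
            if (packet_write_fmt_gently(hook_fd, "ok %s", ref) ||
                packet_flush_gently(hook_fd))
                    return error("failed to write to proc-receive hook");
            return 0;
    }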
2020-11-02  Merge branch 'js/avoid-split-sideband-message'  (Junio C Hamano; 1 file, -0/+23)
The side-band status report can be sent at the same time as the primary payload multiplexed, but the demultiplexer on the receiving end incorrectly split a single status report into two, which has been corrected.

* js/avoid-split-sideband-message:
  test-pkt-line: drop colon from sideband identity
  sideband: report unhandled incomplete sideband messages as bugs
  sideband: avoid reporting incomplete sideband messages
2020-11-02  hashmap: provide deallocation function names  (Elijah Newren; 1 file, -3/+3)
hashmap_free(), hashmap_free_entries(), and hashmap_free_() have existed for a while, but aren't necessarily the clearest names, especially with hashmap_partial_clear() being added to the mix and lazy-initialization now being supported. Peff suggested we adopt the following names[1]:

  - hashmap_clear() - remove all entries and de-allocate any hashmap-specific data, but be ready for reuse
  - hashmap_clear_and_free() - ditto, but free the entries themselves
  - hashmap_partial_clear() - remove all entries but don't deallocate table
  - hashmap_partial_clear_and_free() - ditto, but free the entries

This patch provides the new names and converts all existing callers over to the new naming scheme.

[1] https://lore.kernel.org/git/20201030125059.GA3277724@coredump.intra.peff.net/

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-10-29  fast-rebase: demonstrate merge-ort's API via new test-tool command  (Elijah Newren; 3 files, -0/+213)
Add a new test-tool command named 'fast-rebase', which is a super-slimmed down and nowhere near as capable version of 'git rebase'. 'test-tool fast-rebase' is not currently planned for usage in the testsuite, but is here for two purposes:

  1) Demonstrate the desired API of merge-ort. In particular, fast-rebase takes advantage of the separation of the merging operation from the updating of the index and working tree, to allow it to pick N commits, but only update the index and working tree once at the end. Look for the calls to merge_incore_nonrecursive() and merge_switch_to_result().

  2) Provide a convenient benchmark that isn't polluted by the heavy disk writing and forking of unnecessary processes that comes from sequencer.c and merge-recursive.c.

fast-rebase is not meant to replace sequencer.c, just give ideas on how sequencer.c can be changed. Updating sequencer.c with these goals is probably a large amount of work; writing a simple targeted command with no documentation, less-than-useful help messages, numerous limitations in terms of flags it can accept and situations it can handle, and which is flagged off from users is a much easier interim step.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-10-27  test-pkt-line: drop colon from sideband identity  (Jeff King; 1 file, -1/+1)
We pass "sideband: " as our identity for errors to recv_sideband(). But it already adds the trailing colon and space. This doesn't invalidate any tests, but it looks funny when you examine the test output. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-10-20  sideband: avoid reporting incomplete sideband messages  (Johannes Schindelin; 1 file, -0/+23)
In 2b695ecd74d (t5500: count objects through stderr, not trace, 2020-05-06) we tried to ensure that the "Total 3" message could be grepped in Git's output, even if it sometimes got chopped up into multiple lines in the trace machinery. However, the first instance where this mattered now goes through the sideband machinery, where it is _still_ possible for messages to get chopped up: it *is* possible for the standard error stream to be sent byte-for-byte and hence it can be easily interrupted. Meaning: it is possible for the single line that we're looking for to be chopped up into multiple sideband packets, with a primary packet being delivered between them. This seems to happen occasionally in the `vs-test` part of our CI builds, i.e. with binaries built using Visual C, but not when building with GCC or clang; The symptom is that t5500.43 fails to find a line matching `remote: Total 3` in the `log` file, which ends in something along these lines: remote: Tota remote: l 3 (delta 0), reused 0 (delta 0), pack-reused 0 This should not happen, though: we have code in `demultiplex_sideband()` _specifically_ to stitch back together lines that were delivered in separate sideband packets. However, this stitching was broken in a subtle way in fbd76cd450 (sideband: reverse its dependency on pkt-line, 2019-01-16): before that change, incomplete sideband lines would not be flushed upon receiving a primary packet, but after that patch, they would be. The subtleness of this bug comes from the fact that it is easy to get confused by the ambiguous meaning of the `break` keyword: after writing the primary packet contents, the `break;` in the original version of `recv_sideband()` does _not_ break out of the `while` loop, but instead only ends the `switch` case: while (!retval) { [...] switch (band) { [...] case 1: /* Write the contents of the primary packet */ write_or_die(out, buf + 1, len); /* Here, we do *not* break out of the loop, `retval` is unchanged */ break; [...] } if (outbuf.len) { /* Write any remaining sideband messages lacking a trailing LF */ strbuf_addch(&outbuf, '\n'); xwrite(2, outbuf.buf, outbuf.len); } In contrast, after fbd76cd450 (sideband: reverse its dependency on pkt-line, 2019-01-16), the body of the `while` loop was extracted into `demultiplex_sideband()`, crucially _including_ the logic to write incomplete sideband messages: switch (band) { [...] case 1: *sideband_type = SIDEBAND_PRIMARY; /* This does not break out of the loop: the loop is in the caller */ break; [...] } cleanup: [...] /* This logic is now no longer _outside_ the loop but _inside_ */ if (scratch->len) { strbuf_addch(scratch, '\n'); xwrite(2, scratch->buf, scratch->len); } The correct way to fix this is to return from `demultiplex_sideband()` early. The caller will then write out the contents of the primary packet and continue looping. The `scratch` buffer for incomplete sideband messages is owned by that caller, and will continue to accumulate the remainder(s) of those messages. The loop will only end once `demultiplex_sideband()` returned non-zero _and_ did not indicate a primary packet, which is the case only when we hit the `cleanup:` path, in which we take care of flushing any unfinished sideband messages and release the `scratch` buffer. To ensure that this does not get broken again, we introduce a pair of subcommands of the `pkt-line` test helper that specifically chop up the sideband message and squeeze a primary packet into the middle. 
Final note: The other test case touched by 2b695ecd74d (t5500: count objects through stderr, not trace, 2020-05-06) is not affected by this issue because the sideband machinery is not involved there. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-10-05  Merge branch 'jk/unused'  (Junio C Hamano; 2 files, -5/+5)
Code cleanup.

* jk/unused:
  dir.c: drop unused "untracked" from treat_path_fast()
  sequencer: handle ignore_footer when parsing trailers
  test-advise: check argument count with argc instead of argv
  sparse-checkout: fill in some options boilerplate
  sequencer: drop repository argument from run_git_commit()
  push: drop unused repo argument to do_push()
  assert PARSE_OPT_NONEG in parse-options callbacks
  env--helper: write to opt->value in parseopt helper
  drop unused argc parameters
  convert: drop unused crlf_action from check_global_conv_flags_eol()
2020-10-05  Merge branch 'ds/in-merge-bases-many-optim-bug'  (Junio C Hamano; 1 file, -0/+2)
in_merge_bases_many(), a way to see if a commit is reachable from any commit in a set of commits, was totally broken when the commit-graph feature was in use, which has been corrected.

* ds/in-merge-bases-many-optim-bug:
  commit-reach: fix in_merge_bases_many bug
2020-10-02  commit-reach: fix in_merge_bases_many bug  (Derrick Stolee; 1 file, -0/+2)
Way back in f9b8908b (commit.c: use generation numbers for in_merge_bases(), 2018-05-01), a heuristic was used to short-circuit the in_merge_bases() walk. This works just fine as long as the caller is checking only two commits, but when there are multiple, there is a possibility that this heuristic is _very wrong_. Some code moves since then has changed this method to repo_in_merge_bases_many() inside commit-reach.c. The heuristic computes the minimum generation number of the "reference" list, then compares this number to the generation number of the "commit". In a recent topic, a test was added that used in_merge_bases_many() to test if a commit was reachable from a number of commits pulled from a reflog. However, this highlighted the problem: if any of the reference commits have a smaller generation number than the given commit, then the walk is skipped _even if there exist some with higher generation number_. This heuristic is wrong! It must check the MAXIMUM generation number of the reference commits, not the MINIMUM. This highlights a testing gap. t6600-test-reach.sh covers many methods in commit-reach.c, including in_merge_bases() and get_merge_bases_many(), but since these methods either restrict to two input commits or actually look for the full list of merge bases, they don't check this heuristic! Add a possible input to "test-tool reach" that tests in_merge_bases_many() and add tests to t6600-test-reach.sh that cover this heuristic. This includes cases for the reference commits having generation above and below the generation of the input commit, but also having maximum generation below the generation of the input commit. The fix itself is to swap min_generation with a max_generation in repo_in_merge_bases_many(). Reported-by: Srinidhi Kaushik <shrinidhi.kaushik@gmail.com> Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
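A sketch of the corrected heuristic, not the literal commit-reach.c code; the function and variable names are illustrative:

    #include "commit.h"
    #include "commit-graph.h"

    /*
     * The walk may be skipped only when the candidate commit is strictly
     * newer (higher generation) than EVERY reference commit, i.e. compare
     * against the maximum generation among the references, not the minimum.
     */
    static int can_skip_reachability_walk(struct commit *commit,
                                          struct commit **refs, int nr)
    {
            timestamp_t max_generation = 0;
            int i;

            for (i = 0; i < nr; i++) {
                    timestamp_t gen = commit_graph_generation(refs[i]);
                    if (gen > max_generation)
                            max_generation = gen;
            }
            return commit_graph_generation(commit) > max_generation;
    }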
2020-09-30  test-advise: check argument count with argc instead of argv  (Jeff King; 1 file, -2/+2)
We complain if "test-tool advise" is not given an argument, but we quietly ignore any additional arguments it receives. Let's instead check that we got the expected number. As a bonus, this silences -Wunused-parameter, which notes that we don't ever look at argc. While we're here, we can also fix the indentation in the conditional. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-30  drop unused argc parameters  (Jeff King; 1 file, -3/+3)
Many functions take an argv/argc pair, but never actually look at argc. This makes it useless at best (we use the NULL sentinel in argv to find the end of the array), and misleading at worst (what happens if the argc count does not match the argv NULL?). In each of these instances, the argv NULL does match the argc count, so there are no bugs here. But let's tighten the interfaces to make it harder to get wrong (and to reduce some -Wunused-parameter complaints). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
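An illustration of the pattern described above (the function is made up): the NULL sentinel in argv marks the end of the array, so argc is never consulted and can be dropped:

    #include <stdio.h>

    /* was: static void report_args(int argc, const char **argv) */
    static void report_args(const char **argv)
    {
            for (; *argv; argv++)
                    fprintf(stderr, "arg: %s\n", *argv);
    }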
2020-09-29  Merge branch 'tb/bloom-improvements'  (Junio C Hamano; 2 files, -2/+5)
"git commit-graph write" learned to limit the number of bloom filters that are computed from scratch with the --max-new-filters option. * tb/bloom-improvements: commit-graph: introduce 'commitGraph.maxNewFilters' builtin/commit-graph.c: introduce '--max-new-filters=<n>' commit-graph: rename 'split_commit_graph_opts' bloom: encode out-of-bounds filters as non-empty bloom/diff: properly short-circuit on max_changes bloom: use provided 'struct bloom_filter_settings' bloom: split 'get_bloom_filter()' in two commit-graph.c: store maximum changed paths commit-graph: respect 'commitGraph.readChangedPaths' t/helper/test-read-graph.c: prepare repo settings commit-graph: pass a 'struct repository *' in more places t4216: use an '&&'-chain commit-graph: introduce 'get_bloom_filter_settings()'
2020-09-25  Merge branch 'jx/proc-receive-hook'  (Junio C Hamano; 3 files, -0/+178)
"git receive-pack" that accepts requests by "git push" learned to outsource most of the ref updates to the new "proc-receive" hook. * jx/proc-receive-hook: doc: add documentation for the proc-receive hook transport: parse report options for tracking refs t5411: test updates of remote-tracking branches receive-pack: new config receive.procReceiveRefs doc: add document for capability report-status-v2 New capability "report-status-v2" for git-push receive-pack: feed report options to post-receive receive-pack: add new proc-receive hook t5411: add basic test cases for proc-receive hook transport: not report a non-head push as a branch
2020-09-25  maintenance: add start/stop subcommands  (Derrick Stolee; 3 files, -0/+37)
Add new subcommands to 'git maintenance' that start or stop background maintenance using 'cron', when available. This integration is as simple as I could make it, barring some implementation complications.

The schedule is laid out as follows:

  0 1-23 * * *   $cmd maintenance run --schedule=hourly
  0 0    * * 1-6 $cmd maintenance run --schedule=daily
  0 0    * * 0   $cmd maintenance run --schedule=weekly

where $cmd is a properly-qualified 'git for-each-repo' execution:

  $cmd=$path/git --exec-path=$path for-each-repo --config=maintenance.repo

where $path points to the location of the Git executable running 'git maintenance start'. This is critical for systems with multiple versions of Git. Specifically, macOS has a system version at '/usr/bin/git' while the version that users can install resides at '/usr/local/bin/git' (symlinked to '/usr/local/libexec/git-core/git'). This will also use your locally-built version if you build and run this in your development environment without installing first.

This conditional schedule avoids having cron launch multiple 'git for-each-repo' commands in parallel. Such parallel commands would likely lead to the 'hourly' and 'daily' tasks competing over the object database lock. This could lead to some tasks never being run! Since the --schedule=<frequency> argument will run all tasks with _at least_ the given frequency, the daily runs will also run the hourly tasks. Similarly, the weekly runs will also run the daily and hourly tasks.

The GIT_TEST_CRONTAB environment variable is not intended for users to edit, but instead as a way to mock the 'crontab [-l]' command. This variable is set in test-lib.sh to avoid a future test from accidentally running anything with the cron integration from modifying the user's schedule. We use GIT_TEST_CRONTAB='test-tool crontab <file>' in our tests to check how the schedule is modified in 'git maintenance (start|stop)' commands.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17  bloom: use provided 'struct bloom_filter_settings'  (Taylor Blau; 1 file, -0/+1)
When 'get_or_compute_bloom_filter()' needs to compute a Bloom filter from scratch, it looks to the default 'struct bloom_filter_settings' in order to determine the maximum number of changed paths, number of bits per entry, and so on. All of these values have so far been constant, and so there was no need to pass in a pointer from the caller (eg., the one that is stored in the 'struct write_commit_graph_context'). Start passing in a 'struct bloom_filter_settings *' instead of using the default values to respect graph-specific settings (eg., in the case of setting 'GIT_TEST_BLOOM_SETTINGS_MAX_CHANGED_PATHS'). In order to have an initialized value for these settings, move its initialization to earlier in the commit-graph write. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17  bloom: split 'get_bloom_filter()' in two  (Taylor Blau; 1 file, -1/+2)
'get_bloom_filter' takes a flag to control whether it will compute a Bloom filter if the requested one is missing. In the next patch, we'll add yet another parameter to this method, which would force all but one caller to specify an extra 'NULL' parameter at the end. Instead of doing this, split 'get_bloom_filter' into two functions: 'get_bloom_filter' and 'get_or_compute_bloom_filter'. The former only looks up a Bloom filter (and does not compute one if it's missing, thus dropping the 'compute_if_not_present' flag). The latter does compute missing Bloom filters, with an additional parameter to store whether or not it needed to do so. This simplifies many call-sites, since the majority of existing callers to 'get_bloom_filter' do not want missing Bloom filters to be computed (so they can drop the parameter entirely and use the simpler version of the function). While we're at it, instrument the new 'get_or_compute_bloom_filter()' with counters in the 'write_commit_graph_context' struct which store the number of filters that we did and didn't compute, as well as filters that were truncated. It would be nice to drop the 'compute_if_not_present' flag entirely, since all remaining callers of 'get_or_compute_bloom_filter' pass it as '1', but this will change in a future patch and hence cannot be removed. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-09  t/helper/test-read-graph.c: prepare repo settings  (Taylor Blau; 1 file, -1/+2)
The read-graph test-tool is used by a number of the commit-graph test to assert various properties about a commit-graph. Previously, this program never ran 'prepare_repo_settings()'. There was no need to do so, since none of the commit-graph machinery is affected by the repo settings. In the next patch, the commit-graph machinery's behavior will become dependent on the repo settings, and so loading them before running the rest of the test tool is critical. As such, teach the test tool to call 'prepare_repo_settings()'. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-03  Merge branch 'jk/slimmed-down'  (Junio C Hamano; 3 files, -135/+0)
Trim an unused binary and turn a bunch of commands into built-in.

* jk/slimmed-down:
  drop vcs-svn experiment
  make git-fast-import a builtin
  make git-bugreport a builtin
  make credential helpers builtins
  Makefile: drop builtins from MSVC pdb list
2020-08-27  Merge branch 'jk/leakfix'  (Junio C Hamano; 1 file, -1/+1)
Code clean-up.

* jk/leakfix:
  submodule--helper: fix leak of core.worktree value
  config: fix leak in git_config_get_expiry_in_days()
  config: drop git_config_get_string_const()
  config: fix leaks from git_config_get_string_const()
  checkout: fix leak of non-existent branch names
  submodule--helper: use strbuf_release() to free strbufs
  clear_pattern_list(): clear embedded hashmaps
2020-08-27  receive-pack: add new proc-receive hook  (Jiang Xin; 3 files, -0/+178)
Git calls an internal `execute_commands` function to handle commands sent from client to `git-receive-pack`. Regardless of what references the user pushes, git creates or updates the corresponding references if the user has write-permission. A contributor who has no write-permission cannot push to the repository directly. So, the contributor has to write commits to an alternate location, and send pull requests by email or by other means. We call this a distributed workflow.

It would be more convenient to work in a centralized workflow, like what Gerrit provides, for some cases. For example, a read-only user who cannot push to a branch directly can run the following `git push` command to push commits to a pseudo reference (with a prefix "refs/for/", not "refs/heads/") to create a code review:

    git push origin \
        HEAD:refs/for/<branch-name>/<session>

The `<branch-name>` in the above example can be as simple as "master", or a more complicated branch name like "foo/bar". The `<session>` in the above example command can be the local branch name of the client side, such as "my/topic".

We cannot implement a centralized workflow elegantly by using "pre-receive" + "post-receive", because Git will call the internal function "execute_commands" to create references (even the special pseudo reference) between these two hooks. Even though we can delete the temporarily created pseudo reference via the "post-receive" hook, having a temporary reference is not safe for concurrent pushes.

So, add a filter and a new handler to support this kind of workflow. The filter will check the prefix of the reference name, and if the command has a special reference name, the filter will turn a specific field (`run_proc_receive`) on for the command. Commands with this field turned on will be executed by a new handler (a hook named "proc-receive") instead of the internal `execute_commands` function. We can use this "proc-receive" command to create pull requests or send emails for code review.

Suggested by Junio, this "proc-receive" hook reads the commands, push-options (optional), and sends the result using a protocol in pkt-line format. In the following example, the letter "S" stands for "receive-pack" and the letter "H" stands for the hook.

    # Version and features negotiation.
    S: PKT-LINE(version=1\0push-options atomic...)
    S: flush-pkt
    H: PKT-LINE(version=1\0push-options...)
    H: flush-pkt

    # Send commands from server to the hook.
    S: PKT-LINE(<old-oid> <new-oid> <ref>)
    S: ... ...
    S: flush-pkt

    # Send push-options only if the 'push-options' feature is enabled.
    S: PKT-LINE(push-option)
    S: ... ...
    S: flush-pkt

    # Receive result from the hook.
    # OK, run this command successfully.
    H: PKT-LINE(ok <ref>)
    # NO, I reject it.
    H: PKT-LINE(ng <ref> <reason>)
    # Fall through, let 'receive-pack' to execute it.
    H: PKT-LINE(ok <ref>)
    H: PKT-LINE(option fall-through)
    # OK, but has an alternate reference. The alternate reference name
    # and other status can be given in options.
    H: PKT-LINE(ok <ref>)
    H: PKT-LINE(option refname <refname>)
    H: PKT-LINE(option old-oid <old-oid>)
    H: PKT-LINE(option new-oid <new-oid>)
    H: PKT-LINE(option forced-update)
    H: ... ...
    H: flush-pkt

After receiving a command, the hook will execute the command, and may create/update a different reference. For example, a command for a pseudo reference "refs/for/master/topic" may create/update a different reference such as "refs/pull/123/head". The alternate reference name and other status are given in option lines.

The list of commands returned from "proc-receive" will replace the relevant commands that are sent from user to "receive-pack", and "receive-pack" will continue to run the "execute_commands" function and other routines. Finally, the result of the execution of these commands will be reported to the end user.

The reporting function from "receive-pack" to "send-pack" will be extended in a later commit, just like what the "proc-receive" hook reports to "receive-pack".

Signed-off-by: Jiang Xin <zhiyou.jx@alibaba-inc.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-17  multi-pack-index: use hash version byte  (Derrick Stolee; 1 file, -2/+6)
Similar to the commit-graph format, the multi-pack-index format has a byte in the header intended to track the hash version used to write the file. This allows one to interpret the hash length without having the context of the repository config specifying the hash length. This was not modified as part of the SHA-256 work because the hash length was automatically up-shifted due to that config. Since we have this byte available, we can make the file formats more obviously incompatible instead of relying on other context from the repository. Add a new oid_version() method in midx.c similar to the one in commit-graph.c. This is specifically made separate from that implementation to avoid artificially linking the formats. The test impact requires a few more things than the corresponding change in the commit-graph format. Specifically, 'test-tool read-midx' was not writing anything about this header value to output. Since the value available in 'struct multi_pack_index' is hash_len instead of a version value, we output "20" or "32" instead of "1" or "2". Since we want a user to not have their Git commands fail if their multi-pack-index has the incorrect hash version compared to the repository's hash version, we relax the die() to an error() in load_multi_pack_index(). This has some effect on 'git multi-pack-index verify' as we need to check that a failed parse of a file that exists is actually a verify error. For that test that checks the hash version matches, we change the corrupted byte from "2" to "3" to ensure the test fails for both hash algorithms. Helped-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Reviewed-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
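For illustration, an oid_version()-style helper as described above might look like the sketch below; the 1 = SHA-1, 2 = SHA-256 mapping mirrors the commit-graph convention, but this is not the verbatim midx.c code:

    #include "cache.h"

    /* Map the repository's hash algorithm to the header's version byte (sketch). */
    static uint8_t oid_version(void)
    {
            switch (hash_algo_by_ptr(the_hash_algo)) {
            case GIT_HASH_SHA1:
                    return 1;
            case GIT_HASH_SHA256:
                    return 2;
            default:
                    die("invalid hash version");
            }
    }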
2020-08-14  config: fix leaks from git_config_get_string_const()  (Jeff King; 1 file, -1/+1)
There are two functions to get a single config string: - git_config_get_string() - git_config_get_string_const() One might naively think that the first one allocates a new string and the second one just points us to the internal configset storage. But in fact they both allocate a new copy; the second one exists only to avoid having to cast when using it with a const global which we never intend to free. The documentation for the function explains that clearly, but it seems I'm not alone in being surprised by this. Of 17 calls to the function, 13 of them leak the resulting value. We could obviously fix these by adding the appropriate free(). But it would be simpler still if we actually had a non-allocating way to get the string. There's git_config_get_value() but that doesn't quite do what we want. If the config key is present but is a boolean with no value (e.g., "[foo]bar" in the file), then we'll get NULL (whereas the string versions will print an error and die). So let's introduce a new variant, git_config_get_string_tmp(), that behaves as these callers expect. We need a new name because we have new semantics but the same function signature (so even if we converted the four remaining callers, topics in flight might be surprised). The "tmp" is because this value should only be held onto for a short time. In practice it's rare for us to clear and refresh the configset, invalidating the pointer, but hopefully the "tmp" makes callers think about the lifetime. In each of the converted cases here the value only needs to last within the local function or its immediate caller. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
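For illustration, intended usage of the new function might look like this sketch; the config key and variable names are arbitrary, and the returned pointer borrows from the configset, so it must not be freed or held across config reloads:

    #include <stdio.h>
    #include "config.h"

    static void show_editor(void)
    {
            const char *editor;

            /* returns 0 when the key is found; no allocation to clean up */
            if (!git_config_get_string_tmp("core.editor", &editor))
                    printf("editor is %s\n", editor);
    }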
2020-08-13  drop vcs-svn experiment  (Jeff King; 3 files, -135/+0)
The code in vcs-svn was started in 2010 as an attempt to build a remote-helper for interacting with svn repositories (as opposed to git-svn). However, we never got as far as shipping a mature remote helper, and the last substantive commit was e99d012a6bc in 2012. We do have a git-remote-testsvn, and it is even installed as part of "make install". But given the name, it seems unlikely to be used by anybody (you'd have to explicitly "git clone testsvn::$url", and there have been zero mentions of that on the mailing list since 2013, and even that includes the phrase "you might need to hack a bit to get it working properly"[1]). We also ship contrib/svn-fe, which builds on the vcs-svn work. However, it does not seem to build out of the box for me, as the link step misses some required libraries for using libgit.a. Curiously, the original build breakage bisects for me to eff80a9fd9 (Allow custom "comment char", 2013-01-16), which seems unrelated. There was an attempt to fix it in da011cb0e7 (contrib/svn-fe: fix Makefile, 2014-08-28), but on my system that only switches the error message. So it seems like the result is not really usable by anybody in practice. It would be wonderful if somebody wanted to pick up the topic again, and potentially it's worth carrying around for that reason. But the flip side is that people doing tree-wide operations have to deal with this code. And you can see the list with (replace "HEAD" with this commit as appropriate): { echo "--" git diff-tree --diff-filter=D -r --name-only HEAD^ HEAD } | git log --no-merges --oneline e99d012a6bc.. --stdin which shows 58 times somebody had to deal with the code, generally due to a compile or test failure, or a tree-wide style fix or API change. Let's drop it and let anybody who wants to pick it up do so by resurrecting it from the git history. As a bonus, this also reduces the size of a stripped installation of Git from 21MB to 19MB. [1] https://lore.kernel.org/git/CALkWK0mPHzKfzFKKpZkfAus3YVC9NFYDbFnt+5JQYVKipk3bQQ@mail.gmail.com/ Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-11  Merge branch 'bc/sha-256-part-3'  (Junio C Hamano; 1 file, -0/+2)
The final leg of SHA-256 transition.

* bc/sha-256-part-3: (39 commits)
  t: remove test_oid_init in tests
  docs: add documentation for extensions.objectFormat
  ci: run tests with SHA-256
  t: make SHA1 prerequisite depend on default hash
  t: allow testing different hash algorithms via environment
  t: add test_oid option to select hash algorithm
  repository: enable SHA-256 support by default
  setup: add support for reading extensions.objectformat
  bundle: add new version for use with SHA-256
  builtin/verify-pack: implement an --object-format option
  http-fetch: set up git directory before parsing pack hashes
  t0410: mark test with SHA1 prerequisite
  t5308: make test work with SHA-256
  t9700: make hash size independent
  t9500: ensure that algorithm info is preserved in config
  t9350: make hash size independent
  t9301: make hash size independent
  t9300: use $ZERO_OID instead of hard-coded object ID
  t9300: abstract away SHA-1-specific constants
  t8011: make hash size independent
  ...
2020-07-30  strvec: rename struct fields  (Jeff King; 1 file, -4/+4)
The "argc" and "argv" names made sense when the struct was argv_array, but now they're just confusing. Let's rename them to "nr" (which we use for counts elsewhere) and "v" (which is rather terse, but reads well when combined with typical variable names like "args.v"). Note that we have to update all of the callers immediately. Playing tricks with the preprocessor is hard here, because we wouldn't want to rewrite unrelated tokens. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-30  t: make test-bloom initialize repository  (brian m. carlson; 1 file, -0/+2)
The bloom filter code relies on reading object IDs using parse_oid_hex. In order to make that work with an appropriate size, we need to have initialized the repository's hash algorithm. Since the values we're processing depend on the repository in use, let's set up the repository when we run the test helper. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Reviewed-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
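A sketch of the kind of setup described above (not the verbatim test-helper change):

    #include "test-tool.h"
    #include "cache.h"

    int cmd__bloom(int argc, const char **argv)
    {
            /* sets up the repository, including its hash algorithm, so that
             * the bloom code can later call parse_oid_hex() safely */
            setup_git_directory();

            /* ... existing test-bloom subcommands run here ... */
            return 0;
    }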
2020-07-28  strvec: fix indentation in renamed calls  (Jeff King; 1 file, -3/+3)
Code which split an argv_array call across multiple lines, like:

  argv_array_pushl(&args, "one argument",
                   "another argument", "and more",
                   NULL);

was recently mechanically renamed to use strvec, which results in mis-matched indentation like:

   strvec_pushl(&args, "one argument",
                   "another argument", "and more",
                   NULL);

Let's fix these up to align the arguments with the opening paren. I did this manually by sifting through the results of:

  git jump grep 'strvec_.*,$'

and liberally applying my editor's auto-format. Most of the changes are of the form shown above, though I also normalized a few that had originally used a single-tab indentation (rather than our usual style of aligning with the open paren). I also rewrapped a couple of obvious cases (e.g., where previously too-long lines became short enough to fit on one), but I wasn't aggressive about it. In cases broken to three or more lines, the grouping of arguments is sometimes meaningful, and it wasn't worth my time or reviewer time to ponder each case individually.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28  strvec: convert remaining callers away from argv_array name  (Jeff King; 1 file, -18/+18)
We eventually want to drop the argv_array name and just use strvec consistently. There's no particular reason we have to do it all at once, or care about interactions between converted and unconverted bits. Because of our preprocessor compat layer, the names are interchangeable to the compiler (so even a definition and declaration using different names is OK).

This patch converts all of the remaining files, as the resulting diff is reasonably sized. The conversion was done purely mechanically with:

  git ls-files '*.c' '*.h' |
    xargs perl -i -pe '
      s/ARGV_ARRAY/STRVEC/g;
      s/argv_array/strvec/g;
    '

We'll deal with any indentation/style fallouts separately.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>