Age  Commit message  Author  Files  Lines
2020-11-25  Merge branch 'sg/tests-prereq'  Junio C Hamano  2  -4/+25
A lazily defined test prerequisite can now be defined in terms of another lazily defined test prerequisite.
* sg/tests-prereq:
  tests: fix description of 'test_set_prereq'
  tests: make sure nested lazy prereqs work reliably
2020-11-25  Merge branch 'rs/plug-diff-cache-leak'  Junio C Hamano  1  -0/+2
Memleak fix. * rs/plug-diff-cache-leak: diff-lib: plug minor memory leaks in do_diff_cache()
2020-11-25  Merge branch 'rs/gc-sort-func-cast-fix'  Junio C Hamano  1  -4/+2
Fix broken sorting of maintenance tasks. * rs/gc-sort-func-cast-fix: gc: fix cast in compare_tasks_by_selection()
2020-11-25  Merge branch 'jc/ci-github-set-env'  Junio C Hamano  1  -1/+1
Another CI adjustment. * jc/ci-github-set-env: ci: avoid `set-env` construct in print-test-failures.sh
2020-11-25  Merge branch 'sg/t5310-jgit-wants-sha1'  Junio C Hamano  1  -2/+2
Since jgit does not yet work with SHA-256 repositories, mark the tests that use it not to run unless we are testing with SHA-1 repositories. * sg/t5310-jgit-wants-sha1: t5310-pack-bitmaps: skip JGit tests with SHA256
2020-11-25  Merge branch 'rs/archive-plug-leak-refname'  Junio C Hamano  2  -1/+2
Memleak fix. * rs/archive-plug-leak-refname: archive: release refname after use
2020-11-25  Merge branch 'ma/list-object-filter-opt-msgfix'  Junio C Hamano  1  -1/+1
Error message fix. * ma/list-object-filter-opt-msgfix: list-objects-filter-options: fix function name in BUG
2020-11-25  Merge branch 'pk/subsub-fetch-fix'  Junio C Hamano  2  -10/+67
"git fetch" did not work correctly with nested submodules where the innermost submodule that is not of interest got updated in the upstream, which has been corrected. * pk/subsub-fetch-fix: submodules: fix of regression on fetching of non-init subsub-repo
2020-11-25  Merge branch 'jk/4gb-idx'  Junio C Hamano  7  -19/+19
The code was not prepared to deal with a pack .idx file that is larger than 4GB.
* jk/4gb-idx:
  packfile: detect overflow in .idx file size checks
  block-sha1: take a size_t length parameter
  fsck: correctly compute checksums on idx files larger than 4GB
  use size_t to store pack .idx byte offsets
  compute pack .idx byte offsets using size_t
2020-11-25  Merge branch 'jx/t5411-flake-fix'  Junio C Hamano  9  -67/+545
The exchange between receive-pack and the proc-receive hook did not carefully check for errors.
* jx/t5411-flake-fix:
  receive-pack: use default version 0 for proc-receive
  receive-pack: gently write messages to proc-receive
  t5411: new helper filter_out_user_friendly_and_stable_output
2020-11-25  Merge branch 'rs/hashwrite-be64'  Junio C Hamano  3  -9/+10
Code simplification.
* rs/hashwrite-be64:
  pack-write: use hashwrite_be64()
  midx: use hashwrite_be64()
  csum-file: add hashwrite_be64()
2020-11-25  Merge branch 'sg/bisect-approximately-halfway'  Junio C Hamano  1  -7/+20
"git bisect start/next" in a large span of history spends a lot of time trying to come up with exactly the half-way point; this can be optimized by stopping when we see a commit that is close enough to the half-way point. * sg/bisect-approximately-halfway: bisect: loosen halfway() check for a large number of commits
2020-11-25  Merge branch 'fc/bash-completion-alias-of-alias'  Junio C Hamano  2  -18/+55
The command line completion script (in contrib/) learned to expand commands that are aliases of aliases.
* fc/bash-completion-alias-of-alias:
  completion: bash: improve alias loop detection
  completion: bash: check for alias loop
  completion: bash: support recursive aliases
2020-11-21  Seventh batch  Junio C Hamano  1  -0/+20
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-21  Merge branch 'pd/mergetool-nvimdiff'  Junio C Hamano  2  -0/+5
Fix regression introduced when nvimdiff support in mergetool was added.
* pd/mergetool-nvimdiff:
  mergetool: avoid letting `list_tool_variants` break user-defined setups
  mergetools/bc: add `bc4` to the alias list for Beyond Compare
2020-11-21  Merge branch 'ab/config-mak-uname-simplify'  Junio C Hamano  1  -8/+0
Build configuration cleanup.
* ab/config-mak-uname-simplify:
  config.mak.uname: remove unused NEEDS_SSL_WITH_CURL flag
  config.mak.uname: remove the unused NO_R_TO_GCC_LINKER flag
2020-11-21  Merge branch 'en/strmap'  Junio C Hamano  27  -170/+621
A specialization of hashmap that uses a string as key has been introduced. Hopefully it will see wider use over time.
* en/strmap:
  shortlog: use strset from strmap.h
  Use new HASHMAP_INIT macro to simplify hashmap initialization
  strmap: take advantage of FLEXPTR_ALLOC_STR when relevant
  strmap: enable allocations to come from a mem_pool
  strmap: add a strset sub-type
  strmap: split create_entry() out of strmap_put()
  strmap: add functions facilitating use as a string->int map
  strmap: enable faster clearing and reusing of strmaps
  strmap: add more utility functions
  strmap: new utility functions
  hashmap: provide deallocation function names
  hashmap: introduce a new hashmap_partial_clear()
  hashmap: allow re-use after hashmap_free()
  hashmap: adjust spacing to fix argument alignment
  hashmap: add usage documentation explaining hashmap_free[_entries]()
2020-11-21  Merge branch 'jk/diff-release-filespec-fix'  Junio C Hamano  2  -0/+16
Running "git diff" while allowing external diff in a state with unmerged paths used to segfault, which has been corrected.
* jk/diff-release-filespec-fix:
  t7800: simplify difftool test
  diff: allow passing NULL to diff_free_filespec_data()
2020-11-21  Merge branch 'jk/rev-parse-end-of-options'  Junio C Hamano  4  -47/+99
"git rev-parse" learned the "--end-of-options" option to help scripts safely take a parameter that is supposed to be a revision, e.g. "git rev-parse --verify -q --end-of-options $rev".
* jk/rev-parse-end-of-options:
  rev-parse: handle --end-of-options
  rev-parse: put all options under the "-" check
  rev-parse: don't accept options after dashdash
2020-11-21  Merge branch 'jc/format-patch-name-max'  Junio C Hamano  7  -8/+83
The maximum length of output filenames "git format-patch" creates has become configurable (used to be capped at 64). * jc/format-patch-name-max: format-patch: make output filename configurable
2020-11-18  gc: fix cast in compare_tasks_by_selection()  René Scharfe  1  -4/+2
compare_tasks_by_selection() is used with QSORT and gets passed pointers to the elements of "static struct maintenance_task tasks[]". It casts the *addresses* of these passed pointers to element pointers, though, and thus effectively compares some unrelated values from the stack.

Fix the casts to actually compare array elements.

Detected by USan (make SANITIZE=undefined test).

Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
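A minimal stand-alone example of the bug class being fixed (the struct and field names are invented, not taken from gc.c): a qsort()/QSORT comparison callback already receives pointers to the array elements, so it must cast those pointers themselves, not their addresses.

    #include <stdio.h>
    #include <stdlib.h>

    struct task {
            const char *name;
            int order;
    };

    static int compare_tasks(const void *a_, const void *b_)
    {
            /* correct: cast the element pointers themselves ... */
            const struct task *a = a_;
            const struct task *b = b_;
            /*
             * ... the broken variant, *(const struct task *)&a_, would
             * reinterpret the comparator's own parameter storage as a
             * struct and compare unrelated stack values.
             */
            return a->order - b->order;
    }

    int main(void)
    {
            struct task tasks[] = { { "gc", 2 }, { "prefetch", 1 } };

            qsort(tasks, 2, sizeof(*tasks), compare_tasks);
            printf("%s %s\n", tasks[0].name, tasks[1].name);  /* prefetch gc */
            return 0;
    }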
2020-11-18  Sixth batch  Junio C Hamano  1  -0/+35
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-18  Merge branch 'jc/blame-ignore-fix'  Junio C Hamano  2  -4/+5
"git blame --ignore-revs-file=<file>" learned to ignore a non-existent object name in the input, instead of complaining. * jc/blame-ignore-fix: blame: silently ignore invalid ignore file objects
2020-11-18  Merge branch 'jc/sparse-error-for-developer-build'  Junio C Hamano  1  -0/+1
"make DEVELOPER=1 sparse" used to run sparse and let it emit warnings; now such warnings will cause an error. * jc/sparse-error-for-developer-build: Makefile: enable -Wsparse-error for DEVELOPER build
2020-11-18  Merge branch 'pb/blame-funcname-range-userdiff'  Junio C Hamano  12  -72/+79
"git blame -L :funcname -- path" did not work well for a path for which a userdiff driver is defined.
* pb/blame-funcname-range-userdiff:
  blame: simplify 'setup_blame_bloom_data' interface
  blame: simplify 'setup_scoreboard' interface
  blame: enable funcname blaming with userdiff driver
  line-log: mention both modes in 'blame' and 'log' short help
  doc: add more pointers to gitattributes(5) for userdiff
  blame-options.txt: also mention 'funcname' in '-L' description
  doc: line-range: improve formatting
  doc: log, gitk: move '-L' description to 'line-range-options.txt'
2020-11-18  Merge branch 'en/merge-ort-api-null-impl'  Junio C Hamano  13  -15/+517
Preparation for a new merge strategy.
* en/merge-ort-api-null-impl:
  merge,rebase,revert: select ort or recursive by config or environment
  fast-rebase: demonstrate merge-ort's API via new test-tool command
  merge-ort-wrappers: new convenience wrappers to mimic the old merge API
  merge-ort: barebones API of new merge strategy with empty implementation
2020-11-18  Merge branch 'ds/maintenance-part-3'  Junio C Hamano  17  -8/+759
Parts of "git maintenance" to ease writing crontab entries (and other scheduling system configuration) for it.
* ds/maintenance-part-3:
  maintenance: add troubleshooting guide to docs
  maintenance: use 'incremental' strategy by default
  maintenance: create maintenance.strategy config
  maintenance: add start/stop subcommands
  maintenance: add [un]register subcommands
  for-each-repo: run subcommands on configured repos
  maintenance: add --schedule option and config
  maintenance: optionally skip --auto process
2020-11-18  Merge branch 'pw/rebase-i-orig-head'  Junio C Hamano  4  -22/+31
"git rebase -i" did not store ORIG_HEAD correctly.
* pw/rebase-i-orig-head:
  rebase -i: simplify get_revision_ranges()
  rebase -i: use struct object_id when writing state
  rebase -i: use struct object_id rather than looking up commit
  rebase -i: stop overwriting ORIG_HEAD buffer
2020-11-18  Merge branch 'rs/archive-high-compression'  Junio C Hamano  3  -16/+14
"git archive" now allows compression levels higher than "-9" when generating tar.gz output. * rs/archive-high-compression: archive: support compression levels beyond 9
2020-11-18  Merge branch 'dg/bswap-msvc'  Junio C Hamano  1  -1/+1
Define ARM64 compiled with MSVC to be little-endian.
* dg/bswap-msvc:
  compat/bswap.h: don't assume MSVC is little-endian
  compat/bswap.h: simplify MSVC endianness detection
2020-11-18  Merge branch 'jk/format-patch-output'  Junio C Hamano  2  -15/+55
"git format-patch --output=there" did not work as expected and instead crashed. The option is now supported.
* jk/format-patch-output:
  format-patch: support --output option
  format-patch: tie file-opening logic to output_directory
  format-patch: refactor output selection
2020-11-18  Merge branch 'jc/line-log-takes-no-pathspec'  Junio C Hamano  2  -0/+25
"git log -L<range>:<path>" is documented to take no pathspec, but this was not enforced by the command line option parser, which has been corrected. * jc/line-log-takes-no-pathspec: log: diagnose -L used with pathspec as an error
2020-11-18  Merge branch 'rs/empty-reflog-check-fix'  Junio C Hamano  1  -14/+13
The code to see if "git stash drop" can safely remove refs/stash has been made more careful. * rs/empty-reflog-check-fix: stash: simplify reflog emptiness check
2020-11-18  Merge branch 'nk/perf-fsmonitor'  Junio C Hamano  1  -31/+63
Add t/perf support for fsmonitor.
* nk/perf-fsmonitor:
  t/perf/fsmonitor: add benchmark for dirty status
  t/perf/fsmonitor: perf comparison of multiple fsmonitor integrations
  t/perf/fsmonitor: initialize test with git reset
  t/perf/fsmonitor: factor setup for fsmonitor into function
  t/perf/fsmonitor: silence initial git commit
  t/perf/fsmonitor: shorten DESC to basename
  t/perf/fsmonitor: factor description out for readability
  t/perf/fsmonitor: improve error message if typoing hook name
  t/perf/fsmonitor: move watchman setup to one-time-repo-setup
  t/perf/fsmonitor: separate one time repo initialization
2020-11-18  Merge branch 'en/merge-tests'  Junio C Hamano  13  -386/+807
Preparation for a new merge strategy.
* en/merge-tests:
  t6423: add more details about direct resolution of directories
  t6423: note improved ort handling with untracked files
  t6423, t6436: note improved ort handling with dirty files
  merge tests: expect slight differences in output for recursive vs. ort
  t6423: expect improved conflict markers labels in the ort backend
  t6404, t6423: expect improved rename/delete handling in ort backend
  t6416: correct expectation for rename/rename(1to2) + directory/file
  merge tests: expect improved directory/file conflict handling in ort
  t/: new helper for tests that pass with ort but fail with recursive
2020-11-18  Merge branch 'js/default-branch-name-adjust-t5515'  Junio C Hamano  128  -305/+308
Prepare a test script for the transition of the default branch name to 'main'.
* js/default-branch-name-adjust-t5515:
  t5515: use `main` as the name of the main branch for testing (conclusion)
  t5515: use `main` as the name of the main branch for testing (part 3)
  t5515: use `main` as the name of the main branch for testing (part 2)
  t5515: use `main` as the name of the main branch for testing (part 1)
2020-11-18  Merge branch 'dd/upload-pack-stateless-eof'  Junio C Hamano  2  -1/+29
"git fetch --depth=<n>" over the stateless RPC / smart HTTP transport handled EOF from the client poorly at the server end. * dd/upload-pack-stateless-eof: upload-pack: allow stateless client EOF just prior to haves
2020-11-18  tests: fix description of 'test_set_prereq'  SZEDER Gábor  1  -1/+1
'test_set_prereq's description claims that prereqs can be specified to 'test_expect_code', but that is not the case (it is not meant to run a test _case_, but a git command), so remove it. OTOH that description doesn't mention 'test_external' and 'test_external_without_stderr' that do accept prereqs, so mention them. Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-18  tests: make sure nested lazy prereqs work reliably  SZEDER Gábor  2  -3/+24
Some test prereqs depend on other prereqs, so in a couple of cases we have nested prereqs that look something like this:

    test_lazy_prereq FOO '
        test_have_prereq BAR &&
        check-foo
    '

This can be problematic, because lazy prereqs are evaluated in the '$TRASH_DIRECTORY/prereq-test-dir' directory, which is the same for every prereq, and which is automatically removed after the prereq has been evaluated. So if the inner prereq (BAR above) is a lazy prereq that hasn't been evaluated yet, then after its evaluation the 'prereq-test-dir' shared with the outer prereq will be removed. Consequently, 'check-foo' will find itself in a non-existing directory, and won't be able to create/access any files in its cwd, which could result in an unfulfilled outer prereq.

Luckily, this doesn't affect any of our current nested prereqs, either because the inner prereq is not a lazy prereq (e.g. MINGW, CYGWIN or PERL), or because the outer prereq happens to be checked without touching any paths in its cwd (GPGSM and RFC1991 in 'lib-gpg.sh').

So to prevent nested prereqs from interfering with each other let's evaluate each prereq in its own dedicated directory by appending the prereq's name to the directory name, e.g. 'prereq-test-dir-SYMLINKS'.

In the test we check not only that the prereq test dir is still there, but also that the inner prereq can't mess with the outer prereq's files.

Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-17  ci: avoid `set-env` construct in print-test-failures.sh  Junio C Hamano  1  -1/+1
Imitating cac42e47 (ci: avoid using the deprecated `set-env` construct, 2020-11-07), avoid the deprecated ::set-env and use the recommended alternative instead in print-test-failures.sh.

Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-17  completion: bash: improve alias loop detection  Felipe Contreras  1  -4/+5
It is possible for the name of an alias to end with the name of another alias, in which case the code will incorrectly detect a loop. We can fix that by adding an extra space between words. Suggested-by: SZEDER Gábor <szeder.dev@gmail.com> Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-16  list-objects-filter-options: fix function name in BUG  Martin Ågren  1  -1/+1
Fix the function name we give in the BUG message. It's "config", not "choice". Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-16  archive: release refname after use  René Scharfe  2  -1/+2
parse_treeish_arg() uses dwim_ref() to set refname to a strdup'd string. Release it after use. Also remove the const qualifier from the refname member to signify that ownership of the string is handed to the struct, leaving cleanup duty with the caller of parse_treeish_arg(), thus avoiding a cast. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
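A small stand-alone sketch of the ownership convention described above (the names are invented, not the archive.c code): once the member loses its const qualifier, a strdup'd string can be stored without a cast, and the caller frees it after use.

    #include <stdlib.h>
    #include <string.h>

    struct treeish_args {
            char *refname;  /* non-const: the struct owns this allocation */
    };

    static void set_refname(struct treeish_args *args, const char *name)
    {
            args->refname = strdup(name);   /* no cast needed to store it */
    }

    int main(void)
    {
            struct treeish_args args = { NULL };

            set_refname(&args, "refs/heads/main");
            /* ... use args.refname ... */
            free(args.refname);             /* caller releases it after use */
            return 0;
    }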
2020-11-16  diff-lib: plug minor memory leaks in do_diff_cache()  René Scharfe  1  -0/+2
do_diff_cache() builds a struct rev_info to hand to diff_cache() from scratch by initializing it using repo_init_revisions() and then replacing its diffopt and prune_data members. The diffopt member is initialized to a heap-allocated list of options, though. Release it using diff_setup_done() before overwriting it. The initial value of the prune_data member doesn't need to be released, but the copy created using copy_pathspec() does. Clear it after use. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-16  packfile: detect overflow in .idx file size checks  Jeff King  1  -3/+3
In load_idx(), we check that the .idx file is sized appropriately for the number of objects it claims to have. We recently fixed the case where the number of objects caused our expected size to overflow a 32-bit unsigned int, and we switched to size_t.

On a 64-bit system, this is fine; our size_t covers any expected size. On a 32-bit system, though, it won't. The file may claim to have 2^31 objects, which will overflow even a size_t.

This doesn't hurt us at all for a well-formed idx file. A 32-bit system would already have failed to mmap such a file, since it would be too big. But an .idx file which _claims_ to have 2^31 objects but is actually much smaller would fool our check. This is a broken file, and for the most part we don't care that much what happens. But:

  - it's a little friendlier to notice up front "woah, this file is broken" than it is to get nonsense results

  - later access of the data assumes that the loading function sanity-checked that we have at least enough bytes for the regular object-id table. A malformed .idx file could lead to an out-of-bounds read.

So let's use our overflow-checking functions to make sure that we're not fooled by a malformed file.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
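The overflow check itself can be illustrated with a small, self-contained example. Git has its own helper macros for this; the function below and its constants are invented for illustration only: before computing "nr * per_object + fixed" as the expected file size, verify that the multiplication and addition fit in a size_t.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Illustrative only: reject object counts whose implied .idx size
     * would not fit in a size_t on this platform.
     */
    static int expected_idx_size(uint32_t nr, size_t per_object, size_t fixed,
                                 size_t *out)
    {
            if (per_object && nr > (SIZE_MAX - fixed) / per_object)
                    return -1;      /* nr * per_object + fixed would overflow */
            *out = (size_t)nr * per_object + fixed;
            return 0;
    }

    int main(void)
    {
            size_t size;

            /* 2^31 claimed objects, 28 bytes per object (illustrative values) */
            if (expected_idx_size(2147483648u, 28, 1064, &size))
                    puts("idx claims more objects than this platform can map");
            else
                    printf("expected at least %zu bytes\n", size);
            return 0;
    }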
2020-11-16  block-sha1: take a size_t length parameter  Jeff King  2  -2/+2
The block-sha1 implementation takes an "unsigned long" for the length of a buffer to hash, but our hash algorithm wrappers take a size_t, as do other implementations we support like openssl or sha1dc. On many systems, including Linux, these two are equivalent, but they are not on Windows (where only a "long long" is 64 bits). As a result, passing large chunks to a single the_hash_algo->update_fn() call would produce wrong answers there.

Note that we don't need to update any other sizes outside of the function interface. We store the cumulative size in a "long long" (which we must do since we hash things bigger than 4GB, like packfiles, even on 32-bit platforms). And internally, we break that size_t len down into 64-byte blocks to feed into the guts of the algorithm.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
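The portability problem is easy to demonstrate in isolation. In this minimal example the two update functions are stand-ins, not the real hashing interface: a length above 4GB survives a size_t parameter but is silently truncated by an "unsigned long" parameter on an LLP64 platform such as 64-bit Windows.

    #include <stddef.h>
    #include <stdio.h>

    static void update_ulong(unsigned long len)
    {
            printf("unsigned long sees %lu bytes\n", len);
    }

    static void update_size_t(size_t len)
    {
            printf("size_t sees        %zu bytes\n", len);
    }

    int main(void)
    {
            size_t five_gb = 5ULL * 1024 * 1024 * 1024;

            update_size_t(five_gb);  /* always the full 5 GiB */
            update_ulong(five_gb);   /* on LLP64 this wraps to 1 GiB */
            return 0;
    }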
2020-11-16  fsck: correctly compute checksums on idx files larger than 4GB  Jeff King  1  -4/+4
When checking the trailing checksum hash of a .idx file, we pass the whole buffer (minus the trailing hash) into a single call to the_hash_algo->update_fn(). But we cast it to an "unsigned int". This comes from c4001d92be (Use off_t when we really mean a file offset., 2007-03-06). That commit started storing the index_size variable as an off_t, but our mozilla-sha1 implementation from the time was limited to a smaller size. Presumably the cast was a way of annotating that we expected .idx files to be small, and so we didn't need to loop (as we do for arbitrarily-large .pack files). Though as an aside it was still wrong, because the mozilla function actually took a signed int.

These days our hash-update functions are defined to take a size_t, so we can pass the whole buffer in directly. The cast is actually causing a buggy truncation!

While we're here, though, let's drop the confusing off_t variable in the first place. We're getting the size not from the filesystem anyway, but from p->index_size, which is a size_t. In fact, we can make the code a bit more readable by dropping our local variable duplicating p->index_size, and instead have one that stores the size of the actual index data, minus the trailing hash.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-16  use size_t to store pack .idx byte offsets  Jeff King  2  -5/+5
We sometimes store the offset into a pack .idx file as an "unsigned long", but the mmap'd size of a pack .idx file can exceed 4GB. This is sufficient on LP64 systems like Linux, but will be too small on LLP64 systems like Windows, where "unsigned long" is still only 32 bits. Let's use size_t, which is a better type for an offset into a memory buffer. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-11-16  compute pack .idx byte offsets using size_t  Jeff King  4  -9/+9
A pack and its matching .idx file are limited to 2^32 objects, because the pack format contains a 32-bit field to store the number of objects. Hence we use uint32_t in the code.

But the byte count of even a .idx file can be much larger than that, because it stores at least a hash and an offset for each object. So using SHA-1, a v2 .idx file will cross the 4GB boundary at 153,391,650 objects.

This confuses load_idx(), which computes the minimum size like this:

    unsigned long min_size = 8 + 4*256 + nr*(hashsz + 4 + 4) + hashsz + hashsz;

Even though min_size will be big enough on most 64-bit platforms, the actual arithmetic is done as a uint32_t, resulting in a truncation. We actually exceed that min_size, but then we do:

    unsigned long max_size = min_size;
    if (nr)
            max_size += (nr - 1)*8;

to account for the variable-sized table. That computation doesn't overflow quite so low, but with the truncation for min_size, we end up with a max_size that is much smaller than our actual size. So we complain that the idx is invalid, and can't find any of its objects.

We can fix this case by casting "nr" to a size_t, which will do the multiplication in 64-bits (assuming you're on a 64-bit platform; this will never work on a 32-bit system since we couldn't map the whole .idx anyway). Likewise, we don't have to worry about further additions, because adding a smaller number to a size_t will convert the other side to a size_t.

A few notes:

  - obviously we could just declare "nr" as a size_t in the first place (and likewise, packed_git.num_objects). But it's conceptually a uint32_t because of the on-disk format, and we correctly treat it that way in other contexts that don't need to compute byte offsets (e.g., iterating over the set of objects should and generally does use a uint32_t). Switching to size_t would make all of those other cases look wrong.

  - it could be argued that the proper type is off_t to represent the file offset. But in practice the .idx file must fit within memory, because we mmap the whole thing. And the rest of the code (including the idx_size variable we're comparing against) uses size_t.

  - we'll add the same cast to the max_size arithmetic line. Even though we're adding to a larger type, which will convert our result, the multiplication is still done as a 32-bit value and can itself overflow. I didn't check this with my test case, since it would need an even larger pack (~530M objects), but looking at compiler output shows that it works this way. The standard should agree, but I couldn't find anything explicit in 6.3.1.8 ("usual arithmetic conversions").

The case in load_idx() was the most immediate one that I was able to trigger. After fixing it, looking up actual objects (including the very last one in sha1 order) works in a test repo with 153,725,110 objects. That's because bsearch_hash() works with uint32_t entry indices, and the actual byte access:

    int cmp = hashcmp(table + mi * stride, sha1);

is done with "stride" as a size_t, causing the uint32_t "mi" to be promoted to a size_t. This is the way most code will access the index data.

However, I audited all of the other byte-wise accesses of packed_git.index_data, and many of the others are suspect (they are similar to the max_size one, where we are adding to a properly sized offset or directly to a pointer, but the multiplication in the sub-expression can overflow). I didn't trigger any of these in practice, but I believe they're potential problems, and certainly adding in the cast is not going to hurt anything here.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
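The truncation described above can be reproduced with a stand-alone program. The object count below is the one from the test repo mentioned in the message, and hashsz is SHA-1's 20 bytes: doing the min_size arithmetic purely in uint32_t wraps around 2^32, while casting "nr" to size_t keeps the full value on a 64-bit platform.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t nr = 153725110;        /* objects in the test repo above */
            uint32_t hashsz = 20;           /* SHA-1 */

            /* all-uint32_t arithmetic: the product wraps modulo 2^32 */
            uint32_t wrapped = 8 + 4*256 + nr*(hashsz + 4 + 4) + hashsz + hashsz;

            /* casting nr widens the whole expression (on a 64-bit platform) */
            size_t min_size = 8 + 4*256 + (size_t)nr*(hashsz + 4 + 4) + hashsz + hashsz;

            printf("wrapped min_size: %u\n", wrapped);
            printf("correct min_size: %zu\n", min_size);
            return 0;
    }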
2020-11-16  t5310-pack-bitmaps: skip JGit tests with SHA256  SZEDER Gábor  1  -2/+2
In 't5310-pack-bitmaps.sh' two tests make sure that our pack bitmaps are compatible with JGit's bitmaps. Alas, not even the most recent JGit version (5.9.0.202009080501-r) supports SHA256 yet, so when this test script is run with GIT_TEST_DEFAULT_HASH=sha256 on a setup with JGit installed in PATH, then these two tests fail. Protect these two tests with the SHA1 prereq in order to skip them when testing with SHA256. Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Reviewed-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>