|
A lazily defined test prerequisite can now be defined in terms of
another lazily defined test prerequisite.
* sg/tests-prereq:
tests: fix description of 'test_set_prereq'
tests: make sure nested lazy prereqs work reliably
|
|
Memleak fix.
* rs/plug-diff-cache-leak:
diff-lib: plug minor memory leaks in do_diff_cache()
|
|
Fix broken sorting of maintenance tasks.
* rs/gc-sort-func-cast-fix:
gc: fix cast in compare_tasks_by_selection()
|
|
Another CI adjustment.
* jc/ci-github-set-env:
ci: avoid `set-env` construct in print-test-failures.sh
|
|
Since JGit does not yet work with SHA-256 repositories, mark the
tests that use it not to run unless we are testing with SHA-1
repositories.
* sg/t5310-jgit-wants-sha1:
t5310-pack-bitmaps: skip JGit tests with SHA256
|
|
Memleak fix.
* rs/archive-plug-leak-refname:
archive: release refname after use
|
|
Error message fix.
* ma/list-object-filter-opt-msgfix:
list-objects-filter-options: fix function name in BUG
|
|
"git fetch" did not work correctly with nested submodules where the
innermost submodule that is not of interest got updated in the
upstream, which has been corrected.
* pk/subsub-fetch-fix:
submodules: fix of regression on fetching of non-init subsub-repo
|
|
The code was not prepared to deal with pack .idx file that is
larger than 4GB.
* jk/4gb-idx:
packfile: detect overflow in .idx file size checks
block-sha1: take a size_t length parameter
fsck: correctly compute checksums on idx files larger than 4GB
use size_t to store pack .idx byte offsets
compute pack .idx byte offsets using size_t
|
|
The exchange between receive-pack and the proc-receive hook did not
carefully check for errors.
* jx/t5411-flake-fix:
receive-pack: use default version 0 for proc-receive
receive-pack: gently write messages to proc-receive
t5411: new helper filter_out_user_friendly_and_stable_output
|
|
Code simplification.
* rs/hashwrite-be64:
pack-write: use hashwrite_be64()
midx: use hashwrite_be64()
csum-file: add hashwrite_be64()
|
|
"git bisect start/next" in a large span of history spends a lot of
time trying to come up with exactly the half-way point; this can be
optimized by stopping when we see a commit that is close enough to
the half-way point.
* sg/bisect-approximately-halfway:
bisect: loosen halfway() check for a large number of commits
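As an aside, the "close enough" idea can be sketched in a few lines of C;
this is only an illustration, not Git's actual halfway() code, and the
0.1% tolerance is an arbitrary number chosen for the sketch:

    /* reachable: commits the candidate would resolve; total: span size */
    static int approx_halfway(unsigned long reachable, unsigned long total)
    {
            unsigned long twice = 2 * reachable;
            unsigned long diff = twice > total ? twice - total : total - twice;

            /* within 0.1% of the midpoint counts as half-way here */
            return diff * 1000 < total;
    }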
|
|
The command line completion script (in contrib/) learned to expand
commands that are aliases of aliases.
* fc/bash-completion-alias-of-alias:
completion: bash: improve alias loop detection
completion: bash: check for alias loop
completion: bash: support recursive aliases
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Fix regression introduced when nvimdiff support in mergetool was added.
* pd/mergetool-nvimdiff:
mergetool: avoid letting `list_tool_variants` break user-defined setups
mergetools/bc: add `bc4` to the alias list for Beyond Compare
|
|
Build configuration cleanup.
* ab/config-mak-uname-simplify:
config.mak.uname: remove unused NEEDS_SSL_WITH_CURL flag
config.mak.uname: remove unused NO_R_TO_GCC_LINKER flag
|
|
A specialization of hashmap that uses a string as key has been
introduced. Hopefully it will see wider use over time.
* en/strmap:
shortlog: use strset from strmap.h
Use new HASHMAP_INIT macro to simplify hashmap initialization
strmap: take advantage of FLEXPTR_ALLOC_STR when relevant
strmap: enable allocations to come from a mem_pool
strmap: add a strset sub-type
strmap: split create_entry() out of strmap_put()
strmap: add functions facilitating use as a string->int map
strmap: enable faster clearing and reusing of strmaps
strmap: add more utility functions
strmap: new utility functions
hashmap: provide deallocation function names
hashmap: introduce a new hashmap_partial_clear()
hashmap: allow re-use after hashmap_free()
hashmap: adjust spacing to fix argument alignment
hashmap: add usage documentation explaining hashmap_free[_entries]()
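A hedged sketch of how such a string-keyed map is meant to be used, going
by the shortlog above; the exact names and signatures in strmap.h may
differ, and "my_data" is just a stand-in value:

    #include "strmap.h"

    static void demo(void *my_data)
    {
            struct strmap map = STRMAP_INIT;              /* assumed initializer macro */

            strmap_put(&map, "refs/heads/main", my_data); /* string key -> pointer value */
            if (strmap_get(&map, "refs/heads/main"))
                    ;                                     /* key is present */
            strmap_clear(&map, 0);                        /* 0: do not free the stored values */
    }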
|
|
Running "git diff" while allowing external diff in a state with
unmerged paths used to segfault, which has been corrected.
* jk/diff-release-filespec-fix:
t7800: simplify difftool test
diff: allow passing NULL to diff_free_filespec_data()
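The class of fix can be illustrated with a NULL-tolerant release function;
the struct and names below are hypothetical stand-ins, not the actual
diff.c code:

    #include <stdlib.h>

    struct filespec { char *data; };

    static void free_filespec_data(struct filespec *s)
    {
            if (!s)
                    return;      /* unmerged entries may have no filespec at all */
            free(s->data);
            s->data = NULL;
    }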
|
|
"git rev-parse" learned the "--end-of-options" to help scripts to
safely take a parameter that is supposed to be a revision, e.g.
"git rev-parse --verify -q --end-of-options $rev".
* jk/rev-parse-end-of-options:
rev-parse: handle --end-of-options
rev-parse: put all options under the "-" check
rev-parse: don't accept options after dashdash
|
|
The maximum length of output filenames "git format-patch" creates
has become configurable (used to be capped at 64).
* jc/format-patch-name-max:
format-patch: make output filename configurable
|
|
compare_tasks_by_selection() is used with QSORT and gets passed pointers
to the elements of "static struct maintenance_task tasks[]". It casts
the *addresses* of these passed pointers to element pointers, though,
and thus effectively compares some unrelated values from the stack. Fix
the casts to actually compare array elements.
Detected by USan (make SANITIZE=undefined test).
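For illustration, a minimal comparator showing the correct cast (the
struct and field names are stand-ins, not the actual builtin/gc.c code):

    #include <stdlib.h>

    struct task { int selected_order; };

    static int cmp_tasks(const void *a_, const void *b_)
    {
            /*
             * qsort() passes pointers to the array elements, so convert the
             * void pointers themselves; the broken version cast &a_/&b_ and
             * ended up comparing bytes of the local pointer variables.
             */
            const struct task *a = a_;
            const struct task *b = b_;

            return b->selected_order - a->selected_order;
    }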
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git blame --ignore-revs-file=<file>" learned to ignore a
non-existent object name in the input, instead of complaining.
* jc/blame-ignore-fix:
blame: silently ignore invalid ignore file objects
|
|
"make DEVELOPER=1 sparse" used to run sparse and let it emit
warnings; now such warnings will cause an error.
* jc/sparse-error-for-developer-build:
Makefile: enable -Wsparse-error for DEVELOPER build
|
|
"git blame -L :funcname -- path" did not work well for a path for
which a userdiff driver is defined.
* pb/blame-funcname-range-userdiff:
blame: simplify 'setup_blame_bloom_data' interface
blame: simplify 'setup_scoreboard' interface
blame: enable funcname blaming with userdiff driver
line-log: mention both modes in 'blame' and 'log' short help
doc: add more pointers to gitattributes(5) for userdiff
blame-options.txt: also mention 'funcname' in '-L' description
doc: line-range: improve formatting
doc: log, gitk: move '-L' description to 'line-range-options.txt'
|
|
Preparation for a new merge strategy.
* en/merge-ort-api-null-impl:
merge,rebase,revert: select ort or recursive by config or environment
fast-rebase: demonstrate merge-ort's API via new test-tool command
merge-ort-wrappers: new convenience wrappers to mimic the old merge API
merge-ort: barebones API of new merge strategy with empty implementation
|
|
Parts of "git maintenance" to ease writing crontab entries (and
other scheduling system configuration) for it.
* ds/maintenance-part-3:
maintenance: add troubleshooting guide to docs
maintenance: use 'incremental' strategy by default
maintenance: create maintenance.strategy config
maintenance: add start/stop subcommands
maintenance: add [un]register subcommands
for-each-repo: run subcommands on configured repos
maintenance: add --schedule option and config
maintenance: optionally skip --auto process
|
|
"git rebase -i" did not store ORIG_HEAD correctly.
* pw/rebase-i-orig-head:
rebase -i: simplify get_revision_ranges()
rebase -i: use struct object_id when writing state
rebase -i: use struct object_id rather than looking up commit
rebase -i: stop overwriting ORIG_HEAD buffer
|
|
"git archive" now allows compression level higher than "-9"
when generating tar.gz output.
* rs/archive-high-compression:
archive: support compression levels beyond 9
|
|
Define ARM64 compiled with MSVC to be little-endian.
* dg/bswap-msvc:
compat/bswap.h: don't assume MSVC is little-endian
compat/bswap.h: simplify MSVC endianness detection
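The approach can be sketched as a preprocessor check keyed on MSVC's
architecture macros; the macro name below is hypothetical and the real
logic lives in compat/bswap.h:

    #if defined(_MSC_VER) && \
            (defined(_M_IX86) || defined(_M_X64) || defined(_M_ARM) || defined(_M_ARM64))
    #define SKETCH_LITTLE_ENDIAN 1   /* all of these MSVC targets are little-endian */
    #endif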
|
|
"git format-patch --output=there" did not work as expected and
instead crashed. The option is now supported.
* jk/format-patch-output:
format-patch: support --output option
format-patch: tie file-opening logic to output_directory
format-patch: refactor output selection
|
|
"git log -L<range>:<path>" is documented to take no pathspec, but
this was not enforced by the command line option parser, which has
been corrected.
* jc/line-log-takes-no-pathspec:
log: diagnose -L used with pathspec as an error
|
|
The code to see if "git stash drop" can safely remove refs/stash
has been made more careful.
* rs/empty-reflog-check-fix:
stash: simplify reflog emptiness check
|
|
Add t/perf support for fsmonitor.
* nk/perf-fsmonitor:
t/perf/fsmonitor: add benchmark for dirty status
t/perf/fsmonitor: perf comparison of multiple fsmonitor integrations
t/perf/fsmonitor: initialize test with git reset
t/perf/fsmonitor: factor setup for fsmonitor into function
t/perf/fsmonitor: silence initial git commit
t/perf/fsmonitor: shorten DESC to basename
t/perf/fsmonitor: factor description out for readability
t/perf/fsmonitor: improve error message if typoing hook name
t/perf/fsmonitor: move watchman setup to one-time-repo-setup
t/perf/fsmonitor: separate one time repo initialization
|
|
Preparation for a new merge strategy.
* en/merge-tests:
t6423: add more details about direct resolution of directories
t6423: note improved ort handling with untracked files
t6423, t6436: note improved ort handling with dirty files
merge tests: expect slight differences in output for recursive vs. ort
t6423: expect improved conflict markers labels in the ort backend
t6404, t6423: expect improved rename/delete handling in ort backend
t6416: correct expectation for rename/rename(1to2) + directory/file
merge tests: expect improved directory/file conflict handling in ort
t/: new helper for tests that pass with ort but fail with recursive
|
|
Prepare a test script for the transition of the default branch name
to 'main'.
* js/default-branch-name-adjust-t5515:
t5515: use `main` as the name of the main branch for testing (conclusion)
t5515: use `main` as the name of the main branch for testing (part 3)
t5515: use `main` as the name of the main branch for testing (part 2)
t5515: use `main` as the name of the main branch for testing (part 1)
|
|
"git fetch --depth=<n>" over the stateless RPC / smart HTTP
transport handled EOF from the client poorly at the server end.
* dd/upload-pack-stateless-eof:
upload-pack: allow stateless client EOF just prior to haves
|
|
'test_set_prereq's description claims that prereqs can be specified to
'test_expect_code', but that is not the case (it is not meant to run a
test _case_, but a git command), so remove it.
OTOH that description doesn't mention 'test_external' and
'test_external_without_stderr' that do accept prereqs, so mention
them.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some test prereqs depend on other prereqs, so in a couple of cases we
have nested prereqs that look something like this:
    test_lazy_prereq FOO '
            test_have_prereq BAR &&
            check-foo
    '
This can be problematic, because lazy prereqs are evaluated in the
'$TRASH_DIRECTORY/prereq-test-dir' directory, which is the same for
every prereq, and which is automatically removed after the prereq has
been evaluated. So if the inner prereq (BAR above) is a lazy prereq
that hasn't been evaluated yet, then after its evaluation the
'prereq-test-dir' shared with the outer prereq will be removed.
Consequently, 'check-foo' will find itself in a non-existing
directory, and won't be able to create/access any files in its cwd,
which could result in an unfulfilled outer prereq.
Luckily, this doesn't affect any of our current nested prereqs, either
because the inner prereq is not a lazy prereq (e.g. MINGW, CYGWIN or
PERL), or because the outer prereq happens to be checked without
touching any paths in its cwd (GPGSM and RFC1991 in 'lib-gpg.sh').
So to prevent nested prereqs from interfering with each other let's
evaluate each prereq in its own dedicated directory by appending the
prereq's name to the directory name, e.g. 'prereq-test-dir-SYMLINKS'.
In the test we check not only that the prereq test dir is still there,
but also that the inner prereq can't mess with the outer prereq's
files.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Imitating cac42e47 (ci: avoid using the deprecated `set-env`
construct, 2020-11-07), avoid the deprecated ::set-env construct and
use the recommended alternative in print-test-failures.sh.
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It is possible for the name of an alias to end with the name of another
alias, in which case the code will incorrectly detect a loop.
We can fix that by adding an extra space between words.
Suggested-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Fix the function name we give in the BUG message. It's "config", not
"choice".
Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
parse_treeish_arg() uses dwim_ref() to set refname to a strdup'd string.
Release it after use. Also remove the const qualifier from the refname
member to signify that ownership of the string is handed to the struct,
leaving cleanup duty with the caller of parse_treeish_arg(), thus
avoiding a cast.
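The ownership pattern being described looks roughly like this; the struct
and functions below are hypothetical stand-ins, not the actual archive.c
code:

    #include <stdlib.h>
    #include <string.h>

    struct tree_args {
            char *refname;                     /* owned by the struct; was "const char *" */
    };

    static void resolve(struct tree_args *args, const char *found_ref)
    {
            args->refname = strdup(found_ref); /* ownership moves into the struct */
    }

    static void caller(void)
    {
            struct tree_args args = { NULL };

            resolve(&args, "refs/heads/main");
            /* ... use args ... */
            free(args.refname);                /* caller releases it after use */
    }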
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
do_diff_cache() builds a struct rev_info to hand to diff_cache() from
scratch by initializing it using repo_init_revisions() and then
replacing its diffopt and prune_data members.
The diffopt member is initialized to a heap-allocated list of options,
though. Release it using diff_setup_done() before overwriting it.
The initial value of the prune_data member doesn't need to be released,
but the copy created using copy_pathspec() does. Clear it after use.
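In outline, the fix has this shape (a simplified reconstruction based on
the description above, not the literal diff-lib.c patch):

    repo_init_revisions(opt->repo, &revs, NULL);     /* revs.diffopt now owns allocations */
    copy_pathspec(&revs.prune_data, &opt->pathspec); /* our own copy of the pathspec */
    diff_setup_done(&revs.diffopt);                  /* release the heap-allocated options... */
    revs.diffopt = *opt;                             /* ...before overwriting the member */
    /* ... run the cached diff ... */
    clear_pathspec(&revs.prune_data);                /* clear the copy after use */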
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In load_idx(), we check that the .idx file is sized appropriately for
the number of objects it claims to have. We recently fixed the case
where the number of objects caused our expected size to overflow a
32-bit unsigned int, and we switched to size_t.
On a 64-bit system, this is fine; our size_t covers any expected size.
On a 32-bit system, though, it won't. The file may claim to have 2^31
objects, which will overflow even a size_t.
This doesn't hurt us at all for a well-formed idx file. A 32-bit system
would already have failed to mmap such a file, since it would be too
big. But an .idx file which _claims_ to have 2^31 objects but is
actually much smaller would fool our check.
This is a broken file, and for the most part we don't care that much
what happens. But:
- it's a little friendlier to notice up front "woah, this file is
broken" than it is to get nonsense results
- later access of the data assumes that the loading function
sanity-checked that we have at least enough bytes for the regular
object-id table. A malformed .idx file could lead to an
out-of-bounds read.
So let's use our overflow-checking functions to make sure that we're not
fooled by a malformed file.
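For illustration, a standalone sketch of an overflow-checked size
computation; Git's own st_add()/st_mult() helpers die() on overflow
instead of returning an error, and the small constant additions are left
unchecked here for brevity:

    #include <stdint.h>
    #include <stddef.h>

    static int checked_mult(size_t a, size_t b, size_t *out)
    {
            if (b && a > SIZE_MAX / b)
                    return -1;            /* multiplication would overflow */
            *out = a * b;
            return 0;
    }

    static int idx_min_size(size_t nr, size_t hashsz, size_t *out)
    {
            size_t entries;

            if (checked_mult(nr, hashsz + 4 + 4, &entries))
                    return -1;            /* claimed object count is nonsense */
            /* header + fan-out table + entries + two trailing hashes */
            *out = 8 + 4 * 256 + entries + hashsz + hashsz;
            return 0;
    }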
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The block-sha1 implementation takes an "unsigned long" for the length of
a buffer to hash, but our hash algorithm wrappers take a size_t, as do
other implementations we support like openssl or sha1dc. On many
systems, including Linux, these two are equivalent, but they are not on
Windows (where only a "long long" is 64 bits). As a result, passing
large chunks in a single the_hash_algo->update_fn() call would produce
wrong answers there.
Note that we don't need to update any other sizes outside of the
function interface. We store the cumulative size in a "long long" (which
we must do since we hash things bigger than 4GB, like packfiles, even on
32-bit platforms). And internally, we break that size_t len down into
64-byte blocks to feed into the guts of the algorithm.
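The hazard is easy to demonstrate in isolation (this is just an
illustration of LLP64 truncation, not Git code):

    #include <stdio.h>
    #include <stddef.h>

    static void update_old(unsigned long len) { printf("hashing %lu bytes\n", len); }
    static void update_new(size_t len)        { printf("hashing %zu bytes\n", len); }

    int main(void)
    {
            size_t five_gib = (size_t)5 << 30;   /* 5 GiB; assumes a 64-bit size_t */

            update_old((unsigned long)five_gib); /* LLP64: prints 1 GiB -- upper bits lost */
            update_new(five_gib);                /* full length preserved */
            return 0;
    }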
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When checking the trailing checksum hash of a .idx file, we pass the
whole buffer (minus the trailing hash) into a single call to
the_hash_algo->update_fn(). But we cast it to an "unsigned int". This
comes from c4001d92be (Use off_t when we really mean a file offset.,
2007-03-06). That commit started storing the index_size variable as an
off_t, but our mozilla-sha1 implementation from the time was limited to
a smaller size. Presumably the cast was a way of annotating that we
expected .idx files to be small, and so we didn't need to loop (as we do
for arbitrarily-large .pack files). Though as an aside it was still
wrong, because the mozilla function actually took a signed int.
These days our hash-update functions are defined to take a size_t, so we
can pass the whole buffer in directly. The cast is actually causing a
buggy truncation!
While we're here, though, let's drop the confusing off_t variable in the
first place. We're getting the size not from the filesystem anyway, but
from p->index_size, which is a size_t. In fact, we can make the code a
bit more readable by dropping our local variable duplicating
p->index_size, and instead have one that stores the size of the actual
index data, minus the trailing hash.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We sometimes store the offset into a pack .idx file as an "unsigned
long", but the mmap'd size of a pack .idx file can exceed 4GB. This is
sufficient on LP64 systems like Linux, but will be too small on LLP64
systems like Windows, where "unsigned long" is still only 32 bits. Let's
use size_t, which is a better type for an offset into a memory buffer.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A pack and its matching .idx file are limited to 2^32 objects, because
the pack format contains a 32-bit field to store the number of objects.
Hence we use uint32_t in the code.
But the byte count of even a .idx file can be much larger than that,
because it stores at least a hash and an offset for each object. So
using SHA-1, a v2 .idx file will cross the 4GB boundary at 153,391,650
objects. This confuses load_idx(), which computes the minimum size like
this:
    unsigned long min_size = 8 + 4*256 + nr*(hashsz + 4 + 4) + hashsz + hashsz;
Even though min_size will be big enough on most 64-bit platforms, the
actual arithmetic is done as a uint32_t, resulting in a truncation. We
actually exceed that min_size, but then we do:
    unsigned long max_size = min_size;
    if (nr)
            max_size += (nr - 1)*8;
to account for the variable-sized table. That computation doesn't
overflow quite so low, but with the truncation for min_size, we end up
with a max_size that is much smaller than our actual size. So we
complain that the idx is invalid, and can't find any of its objects.
We can fix this case by casting "nr" to a size_t, which will do the
multiplication in 64-bits (assuming you're on a 64-bit platform; this
will never work on a 32-bit system since we couldn't map the whole .idx
anyway). Likewise, we don't have to worry about further additions,
because adding a smaller number to a size_t will convert the other side
to a size_t.
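In outline, the widened computation looks like this (a simplified sketch
of the shape of the change, not necessarily the literal packfile.c lines):

    size_t min_size = 8 + 4*256 + (size_t)nr * (hashsz + 4 + 4) + hashsz + hashsz;
    size_t max_size = min_size;

    if (nr)
            max_size += ((size_t)nr - 1) * 8;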
A few notes:
- obviously we could just declare "nr" as a size_t in the first place
(and likewise, packed_git.num_objects). But it's conceptually a
uint32_t because of the on-disk format, and we correctly treat it
that way in other contexts that don't need to compute byte offsets
(e.g., iterating over the set of objects should and generally does
use a uint32_t). Switching to size_t would make all of those other
cases look wrong.
- it could be argued that the proper type is off_t to represent the
file offset. But in practice the .idx file must fit within memory,
because we mmap the whole thing. And the rest of the code (including
the idx_size variable we're comparing against) uses size_t.
- we'll add the same cast to the max_size arithmetic line. Even though
we're adding to a larger type, which will convert our result, the
multiplication is still done as a 32-bit value and can itself
overflow. I didn't check this with my test case, since it would need
an even larger pack (~530M objects), but looking at compiler output
shows that it works this way. The standard should agree, but I
couldn't find anything explicit in 6.3.1.8 ("usual arithmetic
conversions").
The case in load_idx() was the most immediate one that I was able to
trigger. After fixing it, looking up actual objects (including the very
last one in sha1 order) works in a test repo with 153,725,110 objects.
That's because bsearch_hash() works with uint32_t entry indices, and the
actual byte access:
    int cmp = hashcmp(table + mi * stride, sha1);
is done with "stride" as a size_t, causing the uint32_t "mi" to be
promoted to a size_t. This is the way most code will access the index
data.
However, I audited all of the other byte-wise accesses of
packed_git.index_data, and many of the others are suspect (they are
similar to the max_size one, where we are adding to a properly sized
offset or directly to a pointer, but the multiplication in the
sub-expression can overflow). I didn't trigger any of these in practice,
but I believe they're potential problems, and certainly adding in the
cast is not going to hurt anything here.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In 't5310-pack-bitmaps.sh' two tests make sure that our pack bitmaps
are compatible with JGit's bitmaps. Alas, not even the most recent
JGit version (5.9.0.202009080501-r) supports SHA256 yet, so when this
test script is run with GIT_TEST_DEFAULT_HASH=sha256 on a setup with
JGit installed in PATH, then these two tests fail.
Protect these two tests with the SHA1 prereq in order to skip them
when testing with SHA256.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|