path: root/t
Age | Commit message | Author | Files / lines changed
2020-05-24 | pkt-line: define PACKET_READ_RESPONSE_END | Denton Liu | 1 file changed, +4/-0
In a future commit, we will use PACKET_READ_RESPONSE_END to separate messages proxied by remote-curl. To prepare for this, add the PACKET_READ_RESPONSE_END enum value.

In switch statements that need a case added, die() or BUG() when a PACKET_READ_RESPONSE_END is unexpected. Otherwise, mirror how PACKET_READ_DELIM is implemented (especially in cases where packets are being forwarded).

Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-24 | remote-curl: error on incomplete packet | Denton Liu | 5 files changed, +50/-0
Currently, remote-curl acts as a proxy and blindly forwards packets between an HTTP server and fetch-pack. In the case of a stateless RPC connection where the connection is terminated with a partially written packet, remote-curl will blindly send the partially written packet before waiting on more input from fetch-pack. Meanwhile, fetch-pack will read the partial packet and continue reading, expecting more input. This results in a deadlock between the two processes.

For a stateless connection, inspect packets before sending them and error out if a packet line is incomplete.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-14 | Merge branch 'es/trace-log-progress' | Junio C Hamano | 1 file changed, +26/-0
Teach codepaths that show progress meter to also use the start_progress() and the stop_progress() calls as a "region" to be traced.

* es/trace-log-progress:
  trace2: log progress time and throughput
2020-05-14 | Merge branch 'jt/t5500-unflake' | Junio C Hamano | 1 file changed, +6/-6
Test fix for a topic already in 'master' and meant for 'maint'.

* jt/t5500-unflake:
  t5500: count objects through stderr, not trace
2020-05-14 | Merge branch 'sn/midx-repack-with-config' | Junio C Hamano | 1 file changed, +27/-0
"git multi-pack-index repack" has been taught to honor some repack.* configuration variables.

* sn/midx-repack-with-config:
  multi-pack-index: respect repack.packKeptObjects=false
  midx: teach "git multi-pack-index repack" honor "git repack" configurations
2020-05-14 | Merge branch 'ds/bloom-cleanup' | Junio C Hamano | 2 files changed, +4/-4
Code cleanup and typofixes.

* ds/bloom-cleanup:
  completion: offer '--(no-)patch' among 'git log' options
  bloom: use num_changes not nr for limit detection
  bloom: de-duplicate directory entries
  Documentation: changed-path Bloom filters use byte words
  bloom: parse commit before computing filters
  test-bloom: fix usage typo
  bloom: fix whitespace around tab length
2020-05-14 | Merge branch 'rs/fsck-duplicate-names-in-trees' | Junio C Hamano | 1 file changed, +16/-0
"git fsck" ensures that the paths recorded in tree objects are sorted and without duplicates, but it failed to notice a case where a blob is followed by entries that sort before a tree with the same name. This has been corrected.

* rs/fsck-duplicate-names-in-trees:
  fsck: report non-consecutive duplicate names in trees
2020-05-14 | Merge branch 'ao/p4-d-f-conflict-recover' | Junio C Hamano | 1 file changed, +70/-0
"git p4" learned to recover from a (broken) state where a directory and a file are recorded at the same path in the Perforce repository the same way as their clients do.

* ao/p4-d-f-conflict-recover:
  git-p4: recover from inconsistent perforce history
2020-05-14 | Merge branch 'js/rebase-autosquash-double-fixup-fix' | Junio C Hamano | 1 file changed, +16/-0
"rebase -i" segfaulted when rearranging a sequence that has a fix-up that applies another fix-up (which may or may not be a fix-up of yet another step).

* js/rebase-autosquash-double-fixup-fix:
  rebase --autosquash: fix a potential segfault
2020-05-14 | Merge branch 'cw/bisect-replay-with-dos' | Junio C Hamano | 1 file changed, +7/-0
"git bisect replay" had trouble with input files when they used CRLF line endings, which has been corrected.

* cw/bisect-replay-with-dos:
  bisect: allow CRLF line endings in "git bisect replay" input
2020-05-14 | Merge branch 'es/bugreport-with-hooks' | Junio C Hamano | 1 file changed, +15/-0
"git bugreport" learned to report enabled hooks in the repository.

* es/bugreport-with-hooks:
  bugreport: collect list of populated hooks
2020-05-13 | Merge branch 'cc/upload-pack-v2-fetch-fix' | Junio C Hamano | 1 file changed, +3/-4
Serving a "git fetch" client over "git://" and "ssh://" protocols using the on-wire protocol version 2 was buggy on the server end when the client needs to make a follow-up request to e.g. auto-follow tags.

* cc/upload-pack-v2-fetch-fix:
  upload-pack: clear filter_options for each v2 fetch command
2020-05-13 | Merge branch 'dd/bloom-sparse-fix' | Junio C Hamano | 3 files changed, +3/-3
Code clean-up.

* dd/bloom-sparse-fix:
  bloom: fix `make sparse` warning
2020-05-13 | Merge branch 'tb/bitmap-walk-with-tree-zero-filter' | Junio C Hamano | 2 files changed, +31/-0
The object walk with object filter "--filter=tree:0" can now take advantage of the pack bitmap when available.

* tb/bitmap-walk-with-tree-zero-filter:
  pack-bitmap: pass object filter to fill-in traversal
  pack-bitmap.c: support 'tree:0' filtering
  pack-bitmap.c: make object filtering functions generic
  list-objects-filter: treat NULL filter_options as "disabled"
2020-05-12 | trace2: log progress time and throughput | Emily Shaffer | 1 file changed, +26/-0
Rather than teaching only one operation, like 'git fetch', how to write down throughput to traces, we can learn about a wide range of user operations that may seem slow by adding tooling to the progress library itself. Operations which display progress are likely to be slow-running and the kind of thing we want to monitor for performance anyway.

By showing object counts and data transfer size, we should be able to make some derived measurements to ensure operations are scaling the way we expect.

Signed-off-by: Emily Shaffer <emilyshaffer@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
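Illustration (not from the patch): a rough way to observe these events is to enable the trace2 performance target while running a progress-reporting command. The trace file path, the URL, and the grep pattern below are placeholders, and the exact region/data labels emitted by the progress code are an assumption.

    # Write trace2 performance events (including progress regions) to a file.
    GIT_TRACE2_PERF=/tmp/git-trace2.perf git clone https://example.com/repo.git
    # Look for progress-related regions and data events in the trace.
    grep -i progress /tmp/git-trace2.perf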
2020-05-11 | bloom: use num_changes not nr for limit detection | Derrick Stolee | 1 file changed, +1/-1
As diff_tree_oid() computes a diff, it will terminate early if the total number of changed paths is strictly larger than max_changes. This includes the directories that changed, not just the file paths. However, only the file paths are reflected in the resulting diff queue's "nr" value.

Use the "num_changes" from diffopt to check if the diff terminated early. This is incredibly important, as it can result in incorrect filters! For example, the first commit in the Linux kernel repo reports only 471 changes, but since these are nested inside several directories they expand to 513 "real" changes, and in fact the total list of changes is not reported. Thus, the computed filter for this commit is incorrect.

Demonstrate the subtle difference by using one fewer file change in the 'get bloom filter for commit with 513 changes' test. Before, this edited 513 files inside "bigDir" which hit this inequality. However, dropping the file count by one demonstrates how the previous inequality was incorrect but the new one is correct.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-11 | bloom: de-duplicate directory entries | Derrick Stolee | 1 file changed, +2/-2
When computing a changed-path Bloom filter, we need to take the files that changed from the diff computation and extract the parent directories. That way, a directory pathspec such as "Documentation" could match commits that change "Documentation/git.txt".

However, the current code does a poor job of this process. The paths are added to a hashmap, but we do not check if an entry already exists with that path. This can create many duplicate entries and cause the filter to have a much larger length than it should. This means that the filter is more sparse than intended, which helps the false positive rate, but wastes a lot of space.

Properly use hashmap_get() before hashmap_add(). Also be sure to include a comparison function so these can be matched correctly.

This has an effect on a test in t0095-bloom.sh. This makes sense: there are ten changes inside "smallDir", so the total number of paths in the filter should be 11. This would result in 11 * 10 bits required, and with 8 bits per byte, this results in 14 bytes.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-11 | fsck: report non-consecutive duplicate names in trees | René Scharfe | 1 file changed, +16/-0
Tree entries are sorted in path order, meaning that directory names get a slash ('/') appended implicitly. Git fsck checks if trees contain consecutive duplicates, but due to that ordering there can be non-consecutive duplicates as well if one of them is a directory and the other one isn't. Such a tree cannot be fully checked out.

Find these duplicates by recording candidate file names on a stack and checking candidate directory names against that stack to find matches.

Suggested-by: Brandon Williams <bwilliamseng@gmail.com>
Original-test-by: Brandon Williams <bwilliamseng@gmail.com>
Signed-off-by: René Scharfe <l.s.r@web.de>
Reviewed-by: Luke Diamand <luke@diamand.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-10 | git-p4: recover from inconsistent perforce history | Andrew Oakley | 1 file changed, +70/-0
Perforce allows you to commit files and directories with the same name, so you could have files //depot/foo and //depot/foo/bar both checked in. A p4 sync of a repository in this state fails. Deleting one of the files recovers the repository.

When this happens we want git-p4 to recover in the same way as perforce.

Note that Perforce has this change in their 2017.1 version:

    Bugs fixed in 2017.1
    #1489051 (Job #2170) **
    Submitting a file with the same name as an existing depot directory path (or vice versa) will now be rejected.

so people hopefully will not be creating damaged Perforce repos anymore, but "git p4" needs to be able to interact with already corrupt ones.

Signed-off-by: Andrew Oakley <andrew@adoakley.name>
Reviewed-by: Luke Diamand <luke@diamand.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-10 | multi-pack-index: respect repack.packKeptObjects=false | Derrick Stolee | 1 file changed, +27/-0
When selecting a batch of pack-files to repack in the "git multi-pack-index repack" command, Git should respect the repack.packKeptObjects config option. When false, this option says that the pack-files with an associated ".keep" file should not be repacked. This config value is "false" by default.

There are two cases for selecting a batch of objects. The first is the case where the input batch-size is zero, which specifies "repack everything". The second is with a non-zero batch size, which selects pack-files using a greedy selection criteria. Both of these cases are updated and tested.

Reported-by: Son Luong Ngoc <sluongng@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
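Illustration (not from the patch): a minimal sketch of exercising this behaviour in a repository that already has several pack-files; the pack name below is a placeholder.

    # Mark one pack as kept (file name is illustrative).
    touch .git/objects/pack/pack-1234abcd.keep
    # With repack.packKeptObjects=false (the default), the kept pack is not
    # selected when batching packs for the multi-pack-index repack.
    git config repack.packKeptObjects false
    git multi-pack-index write
    git multi-pack-index repack --batch-size=0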
2020-05-09 | rebase --autosquash: fix a potential segfault | Johannes Schindelin | 1 file changed, +16/-0
When rearranging the todo list so that the fixups/squashes are reordered just after the commits they intend to fix up, we use two arrays to maintain that list: `next` and `tail`.

The idea is that `next[i]`, if set to a non-negative value, contains the index of the item that should be rearranged just after the `i`th item. To avoid having to walk the entire `next` chain when appending another fixup/squash, we also store the end of the `next` chain in `tail[i]`.

The logic we currently use to update these array items is based on the assumption that given a fixup/squash item at index `i`, we just found the index `i2` indicating the first item in that fixup chain.

However, as reported by Paul Ganssle, that need not be true: the special form `fixup! <commit-hash>` is allowed to point to _another_ fixup commit in the middle of the fixup chain. Example:

    * 0192a To fixup
    * 02f12 fixup! To fixup
    * 03763 fixup! To fixup
    * 04ecb fixup! 02f12

Note how the fourth commit targets the second commit, which is already a fixup that targets the first commit.

Previously, we would update `next` and `tail` under our assumption that every `fixup!` commit would find the start of the `fixup!`/`squash!` chain. We would actually end up with a `next[i]` pointing to a `fixup!` but the corresponding `tail[i]` pointing nowhere, which would then lead to a segmentation fault.

Let's fix this by _inserting_, rather than _appending_, the item. In other words, if we make a given line the successor of another line, we do not simply forget any previously set successor of the latter, but make it a successor of the former. In the above example, at the point when we insert 04ecb just after 02f12, 03763 would already be recorded as a successor of 02f12, and we now "squeeze in" 04ecb.

To complete the idea, we now no longer assume that `next[i]` pointing to a line means that `last[i]` points to a line, too. Instead, we extend the concept of `last` to cover also partial `fixup!`/`squash!` chains, i.e. chains starting in the middle of a larger such chain.

In the above example, after processing all lines, `last[0]` (corresponding to 0192a) would point to 03763, which indeed is the end of the overall `fixup!` chain, and `last[1]` (corresponding to 02f12) would point to 04ecb (which is the last `fixup!` targeting 02f12, but it has 03763 as successor, i.e. it is not the end of the overall `fixup!` chain).

Reported-by: Paul Ganssle <paul@ganssle.io>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
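Illustration (not from the patch): a sketch of reproducing the problematic shape in an empty throwaway repository; the short hash passed to the last commit is a placeholder for the real hash of the second commit.

    git commit --allow-empty -m "To fixup"
    git commit --allow-empty --fixup HEAD        # "fixup! To fixup"
    git commit --allow-empty --fixup HEAD~1      # another "fixup! To fixup"
    # The special form that targets a fixup in the middle of the chain:
    git commit --allow-empty -m "fixup! <short-hash-of-second-commit>"
    # Rearranging this todo list used to segfault before the fix.
    GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root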
2020-05-08 | Merge branch 'cb/test-bash-lineno-fix' | Junio C Hamano | 1 file changed, +10/-8
A recent change to show file names and line numbers of a breakage during test (only available when running the tests with bash) was hurting other shells with syntax errors, which has been corrected.

* cb/test-bash-lineno-fix:
  t/test_lib: avoid naked bash arrays in file_lineno
2020-05-08 | Merge branch 'cb/t0000-use-the-configured-shell' | Junio C Hamano | 1 file changed, +2/-5
The basic test did not honor the $TEST_SHELL_PATH setting, which has been corrected.

* cb/t0000-use-the-configured-shell:
  t/t0000-basic: make sure subtests also use TEST_SHELL_PATH
2020-05-08 | Merge branch 'es/restore-staged-from-head-by-default' | Junio C Hamano | 1 file changed, +11/-0
"git restore --staged --worktree" now defaults to take the contents out of "HEAD", instead of erring out.

* es/restore-staged-from-head-by-default:
  restore: default to HEAD when combining --staged and --worktree
2020-05-08 | Merge branch 'ds/sparse-allow-empty-working-tree' | Junio C Hamano | 2 files changed, +12/-8
The sparse-checkout patterns have been forbidden from excluding all paths, leaving an empty working tree, for a long time. This limitation has been lifted.

* ds/sparse-allow-empty-working-tree:
  sparse-checkout: stop blocking empty workdirs
2020-05-08 | Merge branch 'jk/for-each-ref-multi-key-sort-fix' | Junio C Hamano | 1 file changed, +88/-6
"git branch" and other "for-each-ref" variants accepted multiple --sort=<key> options in the increasing order of precedence, but they had a few breakages around "--ignore-case" handling, and tie-breaking with the refname, which have been fixed.

* jk/for-each-ref-multi-key-sort-fix:
  ref-filter: apply fallback refname sort only after all user sorts
  ref-filter: apply --ignore-case to all sorting keys
2020-05-08 | Merge branch 'ah/userdiff-markdown' | Junio C Hamano | 3 files changed, +24/-0
The userdiff patterns for Markdown documents have been added.

* ah/userdiff-markdown:
  userdiff: support Markdown
2020-05-08 | Merge branch 'cb/credential-store-ignore-bogus-lines' | Junio C Hamano | 1 file changed, +90/-1
With the recent tightening of the code that is used to parse various parts of a URL for use in the credential subsystem, a hand-edited credential-store file causes the credential helper to die, which is a bit too harsh to the users. Demote the error behaviour to just ignore and keep using well-formed lines instead.

* cb/credential-store-ignore-bogus-lines:
  credential-store: ignore bogus lines from store file
  credential-store: document the file format a bit more
2020-05-08 | upload-pack: clear filter_options for each v2 fetch command | Christian Couder | 1 file changed, +3/-4
Because of the request/response model of protocol v2, the upload_pack_v2() function is sometimes called twice in the same process, while 'struct list_objects_filter_options filter_options' was declared as static at the beginning of 'upload-pack.c'.

This made the check in list_objects_filter_die_if_populated(), which is called by process_args(), fail the second time upload_pack_v2() is called, as filter_options had already been populated the first time.

To fix that, filter_options is not static any more. It's now owned directly by upload_pack(). It's now also part of 'struct upload_pack_data', so that it's owned indirectly by upload_pack_v2().

In the long term, the goal is to also have upload_pack() use 'struct upload_pack_data', so adding filter_options to this struct makes more sense than to have it owned directly by upload_pack_v2().

This fixes the first of the 2 bugs documented by d0badf8797 (partial-clone: demonstrate bugs in partial fetch, 2020-02-21).

Helped-by: Derrick Stolee <dstolee@microsoft.com>
Helped-by: Jeff King <peff@peff.net>
Helped-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-08 | bisect: allow CRLF line endings in "git bisect replay" input | Christopher Warrington | 1 file changed, +7/-0
We advertise that the bisect log can be corrected in your editor before being fed to "git bisect replay", but some editors may turn the line endings to CRLF.

Update the parser of the input lines so that the CR at the end of the line gets ignored.

Were anyone to intentionally be using terms/revs with embedded CRs, replaying such bisects will no longer work with this change. I suspect that this is incredibly rare.

Signed-off-by: Christopher Warrington <chwarr@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
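Illustration (not from the patch): assuming a bisection is already in progress, the round trip through an editor that writes CRLF now works.

    git bisect log >bisect.log
    # edit bisect.log in an editor that saves CRLF line endings
    git bisect reset
    git bisect replay bisect.log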
2020-05-07 | bugreport: collect list of populated hooks | Emily Shaffer | 1 file changed, +15/-0
Occasionally a failure a user is seeing may be related to a specific hook which is being run, perhaps without the user realizing. While the contents of hooks can be sensitive - containing user data or process information specific to the user's organization - simply knowing that a hook is being run at a certain stage can help us to understand whether something is going wrong.

Without a definitive list of hook names within the code, we compile our own list from the documentation. This is likely prone to bitrot, but designing a single source of truth for acceptable hooks is too much overhead for this small change to the bugreport tool.

Signed-off-by: Emily Shaffer <emilyshaffer@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
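Illustration (not from the patch): the output directory below is just an example; the generated report now includes the hooks found populated under .git/hooks.

    git bugreport -o /tmp/bugreport
    # Inspect the generated git-bugreport-*.txt; it now lists populated hooks.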
2020-05-07 | bloom: fix `make sparse` warning | Đoàn Trần Công Danh | 3 files changed, +3/-3
* We need a `final_new_line` to make our source code a text file, per POSIX and the C specification.
* `bloom_filters` should be limited to internal linkage only.

Signed-off-by: Đoàn Trần Công Danh <congdanhqx@gmail.com>
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-07 | t/test_lib: avoid naked bash arrays in file_lineno | Carlo Marcelo Arenas Belón | 1 file changed, +10/-8
662f9cf154 (tests: when run in Bash, annotate test failures with file name/line number, 2020-04-11) introduces a way to report the location (file:lineno) of a failed test case by traversing the bash callstack.

The implementation requires bash and uses shell arrays and is therefore protected by a guard, but NetBSD sh will still have to parse the function and therefore fails with:

    ** t0000-basic.sh ***
    ./test-lib.sh: 681: Syntax error: Bad substitution

Enclose the bash-specific code inside an eval to avoid parsing errors, in the same way as 5826b7b595 (test-lib: check Bash version for '-x' without using shell arrays, 2019-01-03).

Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-07 | t/t0000-basic: make sure subtests also use TEST_SHELL_PATH | Carlo Marcelo Arenas Belón | 1 file changed, +2/-5
3f824e91c8 (t/Makefile: introduce TEST_SHELL_PATH, 2017-12-08) allows for setting a shell for running the tests, but the generated subtests weren't updated.

Correct that and, while at it, update it to use write_script.

Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-06 | t5500: count objects through stderr, not trace | Jonathan Tan | 1 file changed, +6/-6
In two tests introduced by 4fa3f00abb ("fetch-pack: in protocol v2, in_vain only after ACK", 2020-04-28) and 2f0a093dd6 ("fetch-pack: in protocol v2, reset in_vain upon ACK", 2020-04-28), the count of objects downloaded is checked by grepping for a specific message in the packet trace. However, this is flaky as that specific message may be delivered over 2 or more packet lines.

Instead, grep over stderr, just like the "fetch creating new shallow root" test in the same file.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-05 | Merge branch 'js/partial-urlmatch-2.17' | Junio C Hamano | 1 file changed, +38/-0
Recent updates broke parsing of "credential.<url>.<key>" where <url> is not a full URL (e.g. [credential "https://"] helper = ...), which has been corrected.

* js/partial-urlmatch-2.17:
  credential: handle `credential.<partial-URL>.<key>` again
  credential: optionally allow partial URLs in credential_from_url_gently()
  credential: fix grammar
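Illustration (not from the patches): a partial-URL credential configuration like the one quoted above works again; "store" is just one possible helper.

    git config --global credential.https://.helper store
    # equivalent ~/.gitconfig snippet:
    #   [credential "https://"]
    #           helper = store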
2020-05-05 | Merge branch 'tb/commit-graph-perm-bits' | Junio C Hamano | 3 files changed, +40/-1
Some of the files the commit-graph subsystem keeps on disk did not correctly honor the core.sharedRepository settings and some were left read-write.

* tb/commit-graph-perm-bits:
  commit-graph.c: make 'commit-graph-chain's read-only
  commit-graph.c: ensure graph layers respect core.sharedRepository
  commit-graph.c: write non-split graphs as read-only
  lockfile.c: introduce 'hold_lock_file_for_update_mode'
  tempfile.c: introduce 'create_tempfile_mode'
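Illustration (not from the patches): a quick way to observe the corrected behaviour in a shared repository; the exact permission bits depend on the core.sharedRepository value and the umask.

    git config core.sharedRepository group
    git commit-graph write --reachable --split
    # Graph layers and the commit-graph-chain file should now honor the
    # shared-repository permissions (and non-split graphs are read-only).
    ls -l .git/objects/info/commit-graphs/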
2020-05-05 | Merge branch 'jk/test-fail-prereqs-fix' | Junio C Hamano | 1 file changed, +1/-0
Test update.

* jk/test-fail-prereqs-fix:
  t0000: disable GIT_TEST_FAIL_PREREQS in sub-tests
2020-05-05 | Merge branch 'dd/iso-8601-updates' | Junio C Hamano | 1 file changed, +6/-0
The approxidate parser learns to parse seconds with fraction.

* dd/iso-8601-updates:
  date.c: allow compact version of ISO-8601 datetime
  date.c: skip fractional second part of ISO-8601
  date.c: validate and set time in a helper function
  date.c: s/is_date/set_date/
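Illustration (not from the patches): date strings of the kind the approxidate parser now accepts; the values themselves are arbitrary examples.

    # ISO-8601 with fractional seconds (the fraction itself is skipped):
    git log --since="2020-05-05T10:20:30.123"
    # Compact ISO-8601 form:
    git log --since="20200505T102030"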
2020-05-05 | Merge branch 'bc/wildcard-credential' | Junio C Hamano | 1 file changed, +45/-0
Update the parser used for credential.<URL>.<variable> configuration, to handle <URL>s with '/' in them correctly.

* bc/wildcard-credential:
  credential: fix matching URLs with multiple levels in path
2020-05-05 | restore: default to HEAD when combining --staged and --worktree | Eric Sunshine | 1 file changed, +11/-0
By default, files are restored from the index for --worktree, and from HEAD for --staged. When --worktree and --staged are combined, --source must be specified to disambiguate the restore source[1], thus making it cumbersome to restore a file in both the worktree and the index. However, HEAD is also a reasonable default for --worktree when combined with --staged, so make it the default anytime --staged is used (whether combined with --worktree or not).

[1]: Due to an oversight, the --source requirement, though documented, is not actually enforced.

Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
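Illustration (not from the patch): restoring a path in both the index and the worktree no longer needs an explicit source; the path is a placeholder.

    # Previously this required --source=HEAD; HEAD is now the default:
    git restore --staged --worktree -- path/to/file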
2020-05-04 | pack-bitmap: pass object filter to fill-in traversal | Jeff King | 1 file changed, +5/-0
Sometimes a bitmap traversal still has to walk some commits manually, because those commits aren't included in the bitmap packfile (e.g., due to a push or commit since the last full repack). If we're given an object filter, we don't pass it down to this traversal. It's not necessary for correctness because the bitmap code has its own filters to post-process the bitmap result (which it must, to filter out the objects that _are_ mentioned in the bitmapped packfile).

And with blob filters, there was no performance reason to pass along those filters, either. The fill-in traversal could omit them from the result, but it wouldn't save us any time to do so, since we'd still have to walk each tree entry to see if it's a blob or not.

But now that we support tree filters, there's opportunity for savings. A tree:depth=0 filter means we can avoid accessing trees entirely, since we know we won't need them (or any of the subtrees or blobs they point to). The new test in p5310 shows this off (the "partial bitmap" state is one where HEAD~100 and its ancestors are all in a bitmapped pack, but HEAD~100..HEAD are not). Here are the results (run against linux.git):

    Test                                                  HEAD^             HEAD
    -------------------------------------------------------------------------------------------
    [...]
    5310.16: rev-list with tree filter (partial bitmap)   0.19(0.17+0.02)   0.03(0.02+0.01) -84.2%

The absolute number of savings isn't _huge_, but keep in mind that we only omitted 100 first-parent links (in the version of linux.git here, that's 894 actual commits). In a more pathological case, we might have a much larger proportion of non-bitmapped commits. I didn't bother creating such a case in the perf script because the setup is expensive, and this is plenty to show the savings as a percentage.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-04 | pack-bitmap.c: support 'tree:0' filtering | Taylor Blau | 2 files changed, +26/-0
In the previous patch, we made it easy to define other filters that exclude all objects of a certain type. Use that in order to implement bitmap-level filtering for the '--filter=tree:<n>' filter when 'n' is equal to 0.

The general case is not helped by bitmaps, since for values of 'n > 0', the object filtering machinery requires a full-blown tree traversal in order to determine the depth of a given tree. Caching this is non-obvious, too, since the same tree object can have a different depth depending on the context (e.g., a tree was moved up in the directory hierarchy between two commits).

But, the 'n = 0' case can be helped, and this patch does so. Running p5310.11 in this tree and on master with the kernel, we can see that this case is helped substantially:

    Test                                  master              this tree
    --------------------------------------------------------------------------------
    5310.11: rev-list count with tree:0   10.68(10.39+0.27)   0.06(0.04+0.01) -99.4%

Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
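Illustration (not from the patch): a sketch of the kind of command the p5310 timings above measure, assuming a repository that has a pack bitmap; the exact flag combination used by the perf script is an assumption.

    git repack -adb                      # ensure a bitmap exists
    git rev-list --objects --use-bitmap-index --count --filter=tree:0 --all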
2020-05-04 | ref-filter: apply fallback refname sort only after all user sorts | Jeff King | 1 file changed, +48/-6
Commit 9e468334b4 (ref-filter: fallback on alphabetical comparison, 2015-10-30) taught ref-filter's sort to fall back to comparing refnames. But it did it at the wrong level, overriding the comparison result for a single "--sort" key from the user, rather than after all sort keys have been exhausted. This worked correctly for a single "--sort" option, but not for multiple ones. We'd break any ties in the first key with the refname and never evaluate the second key at all.

To make matters even more interesting, we only applied this fallback sometimes! For a field like "taggeremail" which requires a string comparison, we'd truly return the result of strcmp(), even if it was 0. But for numerical "value" fields like "taggerdate", we did apply the fallback. And that's why our multiple-sort test missed this: it uses taggeremail as the main comparison.

So let's start by adding a much more rigorous test. We'll have a set of commits expressing every combination of two tagger emails, dates, and refnames. Then we can confirm that our sort is applied with the correct precedence, and we'll be hitting both the string and value comparators. That does show the bug, and the fix is simple: moving the fallback to the outer compare_refs() function, after all ref_sorting keys have been exhausted.

Note that in the outer function we don't have an "ignore_case" flag, as it's part of each individual ref_sorting element. It's debatable what such a fallback should do, since we didn't use the user's keys to match. But until now we have been trying to respect that flag, so the least-invasive thing is to try to continue to do so. Since all callers in the current code either set the flag for all keys or for none, we can just pull the flag from the first key. In a hypothetical world where the user really can flip the case-insensitivity of keys separately, we may want to extend the code to distinguish that case from a blanket "--ignore-case".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-05-04 | ref-filter: apply --ignore-case to all sorting keys | Jeff King | 1 file changed, +40/-0
All of the ref-filter users (for-each-ref, branch, and tag) take an --ignore-case option which makes filtering and sorting case-insensitive. However, this option was applied only to the first element of the ref_sorting list. So:

    git for-each-ref --ignore-case --sort=refname

would do what you expect, but:

    git for-each-ref --ignore-case --sort=refname --sort=taggername

would sort the primary key (taggername) case-insensitively, but sort the refname case-sensitively. We have two options here:

  - teach callers to set ignore_case on the whole list

  - replace the ref_sorting list with a struct that contains both the list of sorting keys, as well as options that apply to _all_ keys

I went with the first one here, as it gives more flexibility if we later want to let the users set the flag per-key (presumably through some special syntax when defining the key; for now it's all or nothing through --ignore-case).

The new test covers this by sorting on both tagger and subject case-insensitively, which should compare "a" and "A" identically, but still sort them before "b" and "B". We'll break ties by sorting on the refname to give ourselves a stable output (this is actually supposed to be done automatically, but there's another bug which will be fixed in the next commit).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
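Illustration (not from the patch): after the fix, both sort keys below are compared case-insensitively, with the refname applied only as the final tie-breaker; the keys and the refs/tags pattern are just examples.

    git for-each-ref --ignore-case --sort=taggername --sort=subject refs/tags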
2020-05-04 | sparse-checkout: stop blocking empty workdirs | Derrick Stolee | 2 files changed, +12/-8
Remove the error condition when updating the sparse-checkout leaves an empty working directory.

This behavior was added in 9e1afb167 (sparse checkout: inhibit empty worktree, 2009-08-20). The comment was added in a7bc906f2 (Add explanation why we do not allow to sparse checkout to empty working tree, 2011-09-22) in response to a "dubious" comment in 84563a624 (unpack-trees.c: cosmetic fix, 2010-12-22).

With the recent "cone mode" and "git sparse-checkout init [--cone]" command, it is common to set a reasonable sparse-checkout pattern set of

    /*
    !/*/

which matches only files at root. If the repository has no such files, then their "git sparse-checkout init" command will fail.

Now that we expect this to be a common pattern, we should not have the commands fail on an empty working directory. If it is a confusing result, then the user can recover with "git sparse-checkout disable" or "git sparse-checkout set". This is especially simple when using cone mode.

Reported-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
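Illustration (not from the patch): in a repository with no files at the root, cone-mode initialization now succeeds instead of erroring out on the empty working directory; "some/dir" is a placeholder.

    git sparse-checkout init --cone
    # The initial pattern set matches only top-level files:
    cat .git/info/sparse-checkout        # /*  and  !/*/
    git sparse-checkout set some/dir     # or "git sparse-checkout disable" to recover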
2020-05-02 | credential-store: ignore bogus lines from store file | Carlo Marcelo Arenas Belón | 1 file changed, +90/-1
With the added checks for invalid URLs in credentials, any locally modified store files which might have empty lines or even comments were reported[1] as failing to parse as valid credentials.

Instead of doing a hard check for credentials, do a soft one and therefore avoid the reported fatal error.

While at it, add tests for all known corruptions that are currently ignored, to keep track of them and avoid the risk of regressions.

[1] https://stackoverflow.com/a/61420852/5005936

Reported-by: Dirk <dirk@ed4u.de>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Based-on-patch-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Carlo Marcelo Arenas Belón <carenas@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
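Illustration (not from the patch): a sketch of a hand-edited store file that the helper now tolerates; the hosts and credentials are made up, and the blank and comment-like lines are simply skipped rather than causing a fatal error.

    cat >"$HOME/.git-credentials" <<'EOF'
    https://alice:s3cret@example.com

    a stray line someone added by hand
    https://bob:hunter2@example.org
    EOF
    # The well-formed entries are still usable:
    printf 'protocol=https\nhost=example.com\n' | git credential-store get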
2020-05-02 | userdiff: support Markdown | Ash Holland | 3 files changed, +24/-0
It's typical to find Markdown documentation alongside source code, and having better context for documentation changes is useful; see also commit 69f9c87d4 (userdiff: add support for Fountain documents, 2015-07-21).

The pattern is based on the CommonMark specification 0.29, section 4.2 <https://spec.commonmark.org/> but doesn't match empty headings, as seeing them in a hunk header is unlikely to be useful.

Only ATX headings are supported, as detecting setext headings would require printing the line before a pattern matches, or matching a multiline pattern. The word-diff pattern is the same as the pattern for HTML, because many Markdown parsers accept inline HTML.

Signed-off-by: Ash Holland <ash@sorrel.sh>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
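Illustration (not from the patch): enabling the new driver for files with the conventional Markdown extension.

    echo '*.md diff=markdown' >>.gitattributes
    # Hunk headers for Markdown files now show the nearest ATX heading:
    git diff -- README.md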
2020-05-01 | Merge branch 'jt/v2-fetch-nego-fix' | Junio C Hamano | 1 file changed, +48/-0
The upload-pack protocol v2 gave up too early before finding a common ancestor, resulting in a wasteful fetch from a fork of a project. This has been corrected to match the behaviour of v0 protocol.

* jt/v2-fetch-nego-fix:
  fetch-pack: in protocol v2, reset in_vain upon ACK
  fetch-pack: in protocol v2, in_vain only after ACK
  fetch-pack: return enum from process_acks()
2020-05-01 | Merge branch 'es/bugreport' | Junio C Hamano | 1 file changed, +61/-0
The "bugreport" tool.

* es/bugreport:
  bugreport: drop extraneous includes
  bugreport: add compiler info
  bugreport: add uname info
  bugreport: gather git version and build info
  bugreport: add tool to generate debugging info
  help: move list_config_help to builtin/help