Age | Commit message | Author | Files | Lines |
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Updates to the CI settings.
* js/ci-windows-update:
Azure Pipeline: switch to the latest agent pools
ci: prevent `perforce` from being quarantined
t/lib-httpd: avoid using macOS' sed
|
|
Test updates.
* js/test-unc-fetch:
t5580: test cloning without file://, test fetching via UNC paths
|
|
Testfix.
* js/test-write-junit-xml-fix:
tests: fix --write-junit-xml with subshells
|
|
"git show" and others gave an object name in raw format in its
error output, which has been corrected to give it in hex.
* hd/show-one-mergetag-fix:
show_one_mergetag: print non-parent in hex form.
|
|
Fix for a bug revealed by a recent change to make the protocol v2
the default.
* ds/partial-clone-fixes:
partial-clone: avoid fetching when looking for objects
partial-clone: demonstrate bugs in partial fetch
|
|
The merge-recursive machinery failed to refresh the cache entry for
a merge result in a couple of places, resulting in an unnecessary
merge failure, which has been fixed.
* en/t3433-rebase-stat-dirty-failure:
merge-recursive: fix the refresh logic in update_file_flags
t3433: new rebase testcase documenting a stat-dirty-like failure
|
|
"git check-ignore" did not work when the given path is explicitly
marked as not ignored with a negative entry in the .gitignore file.
* en/check-ignore:
check-ignore: fix documentation and implementation to match
|
|
The code to automatically shrink the fan-out in the notes tree had
an off-by-one bug, which has been killed.
* jh/notes-fanout-fix:
notes.c: fix off-by-one error when decreasing notes fanout
t3305: check notes fanout more carefully and robustly
|
|
The index-pack code now diagnoses a bad input packstream that
records the same object twice when it is used as delta base; the
code used to declare a software bug when encountering such an
input, but it is an input error.
* jk/index-pack-dupfix:
index-pack: downgrade twice-resolved REF_DELTA to die()
|
|
"git rebase -i" identifies existing commits in its todo file with
their abbreviated object name, which could become ambiguous as it
goes to create new commits, and has a mechanism to avoid ambiguity
in the main part of its execution. A few other cases however were
not covered by the protection against ambiguity, which has been
corrected.
* js/rebase-i-with-colliding-hash:
rebase -i: also avoid SHA-1 collisions with missingCommitsCheck
rebase -i: re-fix short SHA-1 collision
parse_insn_line(): improve error message when parsing failed
|
|
Running "git rm" on a submodule failed unnecessarily when
.gitmodules is only cache-dirty, which has been corrected.
* dt/submodule-rm-with-stale-cache:
git rm submodule: succeed if .gitmodules index stat info is zero
|
|
The "--recurse-submodules" option of various subcommands did not
work well when run in an alternate worktree, which has been
corrected.
* pb/recurse-submodule-in-worktree-fix:
submodule.c: use get_git_dir() instead of get_git_common_dir()
t2405: clarify test descriptions and simplify test
t2405: use git -C and test_commit -C instead of subshells
t7410: rename to t2405-worktree-submodule.sh
|
|
An earlier update to show the location of working tree in the error
message did not consider the possibility that a git command may be
run in a bare repository, which has been corrected.
* es/outside-repo-errmsg-hints:
prefix_path: show gitdir if worktree unavailable
prefix_path: show gitdir when arg is outside repo
|
|
Minor bugfixes to "git add -i" that has recently been rewritten in C.
* js/builtin-add-i-cmds:
built-in add -i: accept open-ended ranges again
built-in add -i: do not try to `patch`/`diff` an empty list of files
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If there is no worktree at present, we can still give the user a hint
about Git's current directory by showing them the absolute path to the
Git directory. Even though the Git directory doesn't make it as easy to
locate the worktree in question, it can still help a user figure out
what's going on while developing a script.
This fixes a segmentation fault introduced in e0020b2f
("prefix_path: show gitdir when arg is outside repo", 2020-02-14).
Signed-off-by: Emily Shaffer <emilyshaffer@google.com>
[jc: added minimum tests, with help from Szeder Gábor]
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The credential protocol can't handle values with newlines. We already
detect and block any such URLs from being used with credential helpers,
but let's also add an fsck check to detect and block gitmodules files
with such URLs. That will let us notice the problem earlier when
transfer.fsckObjects is turned on. And in particular it will prevent bad
objects from spreading, which may protect downstream users running older
versions of Git.
We'll file this under the existing gitmodulesUrl flag, which covers URLs
with option injection. There's really no need to distinguish the exact
flaw in the URL in this context. Likewise, I've expanded the description
of t7416 to cover all types of bogus URLs.
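For example, a repository opts into these checks on transfer with the
existing fsck switches (a minimal sketch):
    git config transfer.fsckObjects true     # covers both fetch and receive
    # or, per direction:
    git config fetch.fsckObjects true
    git config receive.fsckObjects true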
|
|
The credential protocol can't represent newlines in values, but URLs can
embed percent-encoded newlines in various components. A previous commit
taught the low-level writing routines to die() when encountering this,
but we can be a little friendlier to the user by detecting them earlier
and handling them gracefully.
This patch teaches credential_from_url() to notice such components,
issue a warning, and blank the credential (which will generally result
in prompting the user for a username and password). We blank the whole
credential in this case. Another option would be to blank only the
invalid component. However, we're probably better off not feeding a
partially-parsed URL result to a credential helper. We don't know how a
given helper would handle it, so it is safer to err on the side of
matching nothing rather than something unexpected.
The die() call in credential_write() is _probably_ impossible to reach
after this patch. Values should end up in credential structs only by URL
parsing (which is covered here), or by reading credential protocol input
(which by definition cannot read a newline into a value). But we should
definitely keep the low-level check, as it's our final and most accurate
line of defense against protocol injection attacks. Arguably it could
become a BUG(), but it probably doesn't matter much either way.
Note that the public interface of credential_from_url() grows a little
more than we need here. We'll use the extra flexibility in a future
patch to help fsck catch these cases.
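As a hedged illustration (the URL is made up), a host component carrying
a percent-encoded newline now results in a warning and a blanked
credential rather than hitting the die() further down:
    printf 'url=https://one.example.com%%0ahost=two.example.com/\n\n' |
    git credential fill
    # Git warns about the URL, ignores its components, and falls back to
    # prompting for the credential.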
|
|
The credential tests have a "check" function which feeds some input to
git-credential and checks the stdout and stderr. We look for exact
matches in the output. For stdout, this makes sense; the output is
the credential protocol. But for stderr, we may be showing various
diagnostic messages, or the prompts fed to the askpass program, which
could be translated. Let's mark them as such.
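One way Git's test suite expresses that distinction (a sketch; the file
names are illustrative, test_cmp and test_i18ncmp are existing helpers):
    test_cmp expect-stdout stdout &&      # protocol output: compared exactly
    test_i18ncmp expect-stderr stderr     # prompts/diagnostics: may be translated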
|
|
The credential protocol that we use to speak to helpers can't represent
values with newlines in them. This was an intentional design choice to
keep the protocol simple, since none of the values we pass should
generally have newlines.
However, if we _do_ encounter a newline in a value, we blindly transmit
it in credential_write(). Such values may break the protocol syntax, or
worse, inject new valid lines into the protocol stream.
The most likely way for a newline to end up in a credential struct is by
decoding a URL with a percent-encoded newline. However, since the bug
occurs at the moment we write the value to the protocol, we'll catch it
there. That should leave no possibility of accidentally missing a code
path that can trigger the problem.
At this level of the code we have little choice but to die(). However,
since we'd not ever expect to see this case outside of a malicious URL,
that's an acceptable outcome.
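For reference, a request on the helper protocol is just a run of
key=value lines terminated by a blank line (the values here are
invented); a raw newline inside any value would therefore be read by the
helper as an additional line of exactly this shape:
    protocol=https
    host=example.com
    username=me
    password=s3cret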
Reported-by: Felix Wilhelm <fwilhelm@google.com>
|
|
When a mergetag names a non-parent, which can occur after a shallow
clone, its hash was previously printed as raw data. Print it in hex form
instead.
Signed-off-by: Harald van Dijk <harald@gigawatt.nl>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Among other differences relative to GNU sed, macOS' sed always ends its
output with a trailing newline, even if the input did not have such a
trailing newline.
Surprisingly, this makes three httpd-based tests fail on macOS: t5616,
t5702 and t5703. ("Surprisingly" because those tests have been around
for some time, but apparently nobody runs them on macOS with a working
Apache2 setup.)
The reason is that we use `sed` in those tests to filter the response of
the web server. Apart from the fact that we use GNU constructs (such as
using a space after the `c` command instead of a backslash and a
newline), we have another problem: macOS' sed writes LF-only newlines while
webservers are supposed to use CR/LF ones.
Even worse, t5616 uses `sed` to replace a binary part of the response
with a new binary part (kind of hoping that the replaced binary part
does not contain a 0x0a byte which would be interpreted as a newline).
To that end, it calls on Perl to read the binary pack file and
hex-encode it, then calls on `sed` to prefix every hex digit pair with a
`\x` in order to construct the text that the `c` statement of the `sed`
invocation is supposed to insert. So we call Perl and sed to construct a
sed statement. The final nail in the coffin is that macOS' sed does not
even interpret those `\x<hex>` constructs.
Let's just replace all of that by Perl snippets. With Perl, at least, we
do not have to deal with GNU vs macOS semantics, we do not have to worry
about unwanted trailing newlines, and we do not have to spawn commands
to construct arguments for other commands to be spawned (i.e. we can
avoid a whole lot of shell scripting complexity).
The upshot is that this fixes t5616, t5702 and t5703 on macOS with
Apache2.
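For the curious, the trailing-newline difference is easy to observe (a
small illustration, not part of the patch):
    printf 'no trailing newline' | sed  -e 's/x/x/' | od -c | tail -n 2
    printf 'no trailing newline' | perl -pe 's/x/x/' | od -c | tail -n 2
    # macOS' sed appends a final \n; GNU sed and Perl reproduce the input
    # byte-for-byte.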
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When using partial clone, find_non_local_tags() in builtin/fetch.c
checks each remote tag to see if its object also exists locally. There
is no expectation that the object exist locally, but this function
nevertheless triggers a lazy fetch if the object does not exist. This
can be extremely expensive when asking for a commit, as we are
completely removed from the context of the non-existent object and
thus supply no "haves" in the request.
6462d5eb9a (fetch: remove fetch_if_missing=0, 2019-11-05) removed a
global variable that prevented these fetches in favor of a bitflag.
However, some object existence checks were not updated to use this flag.
Update find_non_local_tags() to use OBJECT_INFO_SKIP_FETCH_OBJECT in
addition to OBJECT_INFO_QUICK. The _QUICK option only prevents
repreparing the pack-file structures. We need to be extremely careful
about supplying _SKIP_FETCH_OBJECT when we expect an object to not exist
due to updated refs.
This resolves a broken test in t5616-partial-clone.sh.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While testing partial clone, I noticed some odd behavior. I was testing
a way of running 'git init', followed by manually configuring the remote
for partial clone, and then running 'git fetch'. Astonishingly, I saw
the 'git fetch' process start asking the server for multiple rounds of
pack-file downloads! When tweaking the situation a little more, I
discovered that I could cause the remote to hang up with an error.
Add two tests that demonstrate these two issues.
In the first test, we find that when fetching with blob filters from
a repository that previously did not have any tags, the 'git fetch
--tags origin' command fails because the server sends "multiple
filter-specs cannot be combined". This only happens when using
protocol v2.
In the second test, we see that a 'git fetch origin' request with
several ref updates results in multiple pack-file downloads. This must
be due to Git trying to fault-in the objects pointed to by the refs. What
makes this matter particularly nasty is that this goes through the
do_oid_object_info_extended() method, so there are no "haves" in the
negotiation. This leads the remote to send every reachable commit and
tree from each new ref, providing a quadratic amount of data transfer!
This test is fixed if we revert 6462d5eb9a (fetch: remove
fetch_if_missing=0, 2019-11-05), but that revert causes other test
failures. The real fix will need more care.
The tests are ordered in this way because if I swap the test order the
tag test will succeed instead of fail. I believe this is because somehow
we need the srv.bare repo to not have any tags when we clone, but then
have tags in our next fetch.
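A rough sketch of the kind of manual setup described above (the URL is a
placeholder and the exact configuration steps are illustrative; the real
reproductions live in t5616-partial-clone.sh):
    git init repo && cd repo
    git remote add origin "$SERVER_URL"              # placeholder URL
    git config extensions.partialClone origin
    git config remote.origin.promisor true
    git config remote.origin.partialclonefilter blob:none
    git fetch --tags origin    # first issue: "multiple filter-specs cannot be combined"
    git fetch origin           # second issue: several pack-file downloads for the new refs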
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If we need to delete a higher stage entry in the index to place the file
at stage 0, then we'll lose that file's stat information. In such
situations we may still be able to detect that the file on disk is the
version we want (as noted by our comment in the code:
/* do not overwrite file if already present */
), but we do still need to update the mtime since we are creating a new
cache_entry for that file. Update the logic used to determine whether
we refresh a file's mtime.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A user discovered a case where they had a stack of 20 simple commits to
rebase, and the rebase would succeed in picking the first commit and
then error out with a pair of "Could not execute the todo command" and
"Your local changes to the following files would be overwritten by
merge" messages.
Their steps actually made use of the -i flag, but I switched it over to
-m to make it simpler to trigger the bug. With that flag, it bisects
back to commit 68aa495b590d (rebase: implement --merge via the
interactive machinery, 2018-12-11), but that's misleading. If you
change the -m flag to --keep-empty, then the problem persists and will
bisect back to 356ee4659bb5 (sequencer: try to commit without forking
'git commit', 2017-11-24)
After playing with the testcase for a bit, I discovered that adding
--exec "sleep 1" to the command line makes the rebase succeed, making me
suspect there is some kind of discarding and reloading of caches that
leads us to believe that something is stat-dirty, but I didn't succeed in
digging any further than that.
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
check-ignore has two different modes, and neither of these modes has an
implementation that matches the documentation. These modes differ in
whether they just print paths or whether they also print the final
pattern matched by the path. The fix is different for both modes, so
I'll discuss both separately.
=== First (default) mode ===
The first mode is documented as:
For each pathname given via the command-line or from a file via
--stdin, check whether the file is excluded by .gitignore (or other
input files to the exclude mechanism) and output the path if it is
excluded.
However, it fails to do this because it did not account for negated
patterns. Commands other than check-ignore verify exclusion rules via
calling
... -> treat_one_path() -> is_excluded() -> last_matching_pattern()
while check-ignore has a call path of the form:
... -> check_ignore() -> last_matching_pattern()
The fact that the latter does not include the call to is_excluded()
means that it is susceptible to messing up negated patterns (since
that is the only significant thing is_excluded() adds over
last_matching_pattern()). Unfortunately, we can't make it just call
is_excluded(), because the same codepath is used by the verbose mode
which needs to know the matched pattern in question. This brings us
to...
=== Second (verbose) mode ===
The second mode, known as verbose mode, references the first in the
documentation and says:
Also output details about the matching pattern (if any) for each
given pathname. For precedence rules within and between exclude
sources, see gitignore(5).
The "Also" means it will print patterns that match the exclude rules as
noted for the first mode, and also print which pattern matches. Unless
more information is printed than just pathname and pattern (which is not
done), this definition is somewhat ill-defined and perhaps even
self-contradictory for negated patterns: A path which matches a negated
exclude pattern is NOT excluded and thus shouldn't be printed by the
former logic, while it certainly does match one of the explicit patterns
and thus should be printed by the latter logic.
=== Resolution ===
Since the second mode exists to find out which pattern matches given
paths, and showing the user a pattern that begins with a '!' is
sufficient for them to figure out whether the path is excluded, the
existing behavior is desirable -- we just need to update the
documentation to match the implementation (i.e. it is about printing
which pattern is matched by paths, not about showing which paths are
excluded).
For the first or default mode, users just want to know whether a path
is excluded. As such, the existing documentation is desirable; change
the implementation to match the documented behavior.
Finally, also adjust a few tests in t0008 that were caught up by this
discrepancy in how negated paths were handled.
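To illustrate the two modes after this change (a small made-up example;
keep.log matches the negated pattern):
    # With a .gitignore containing:
    #   *.log
    #   !keep.log
    git check-ignore debug.log    # excluded: the path is printed, exit status 0
    git check-ignore keep.log     # not excluded (negated match): nothing printed, exit status 1
    git check-ignore -v keep.log  # verbose: still reports ".gitignore:2:!keep.log" for keep.log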
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Error message fix.
* jb/parse-options-message-fix:
parse-options: lose an unnecessary space in an error message
|
|
"git grep --no-index" should not get affected by the contents of
the .gitmodules file but when "--recurse-submodules" is given or
the "submodule.recurse" variable is set, it did. Now these
settings are ignored in the "--no-index" mode.
* pb/do-not-recurse-grep-no-index:
grep: ignore --recurse-submodules if --no-index is given
|
|
Futureproofing a test not to depend on the current implementation
detail.
* jt/t5616-robustify:
t5616: make robust to delta base change
|
|
Corner-case bugs in "git clean" that stem from an awkward (necessarily
so, for performance reasons) calling convention in the directory
enumeration API have been corrected.
* en/fill-directory-fixes-more:
dir: point treat_leading_path() warning to the right place
dir: restructure in a way to avoid passing around a struct dirent
dir: treat_leading_path() and read_directory_recursive(), round 2
clean: demonstrate a bug with pathspecs
|
|
"git fetch --refmap=" option has got a better documentation.
* ds/refmap-doc:
fetch: document and test --refmap=""
|
|
Test fixes.
* jk/test-fixes:
t7800: don't rely on reuse_worktree_file()
t4018: drop "debugging" cat from hunk-header tests
|
|
"git restore --staged" did not correctly update the cache-tree
structure, resulting in bogus trees to be written afterwards, which
has been corrected.
* nd/switch-and-restore:
restore: invalidate cache-tree when removing entries with --staged
|
|
Reduce an unnecessary round-trip when running "ls-remote" over the
stateless RPC mechanism.
* jk/no-flush-upon-disconnecting-slrpc-transport:
transport: don't flush when disconnecting stateless-rpc helper
|
|
"git commit" gives output similar to "git status" when there is
nothing to commit, but without honoring the advice.statusHints
configuration variable, which has been corrected.
* hw/commit-advise-while-rejecting:
commit: honor advice.statusHints when rejecting an empty commit
|
|
On Windows, it is quite common to work with network drives. The format
of the paths to network drives (or "network shares", or UNC paths) is:
\\<server>\<share>\...
We already have a couple of regression tests revolving around those types
of paths, but we missed cloning and fetching from UNC paths without
leading `file://` (and with backslashes instead of forward slashes).
This lil' patch closes that gap.
It gets a bit silly to add the commands to the name of the test script,
so let's just rename it while we're testing more UNC stuff.
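The newly covered shapes look roughly like this (server and share names
are placeholders; run from a Windows shell, where the backslashes are
taken literally):
    git clone \\myserver\myshare\repo dest
    git -C dest fetch \\myserver\myshare\repo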
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In t0000, more precisely in its `test_bool_env` test case, there are two
subshells that are supposed to fail. To be even _more_ precise, they
fail by calling the `error` function, and that is okay, because it is in
a subshell, and it is expected that those two subshell invocations fail.
However, the `error` function also tries to finalize the JUnit XML (if
that XML was asked for, via `--write-junit-xml`). As a consequence, the
XML is edited to add a `time` attribute for the `testsuite` tag. And
since there are two expected `error` calls in addition to the final
`test_done`, the `finalize_junit_xml` function is called three times and
naturally the `time` attribute is added _three times_.
Azure Pipelines is not happy with that, complaining thusly:
##[warning]Failed to read D:\a\1\s\t\out\TEST-t0000-basic.xml. Error : 'time' is a duplicate attribute name. Line 2, position 82..
One possible way to address this would be to unset `write_junit_xml` in
the `test_bool_env` test case.
But that would be fragile, as other `error` calls in subshells could be
introduced.
So let's just modify `finalize_junit_xml` to remove any `time` attribute
before adding the authoritative one.
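The shape of such a cleanup might look like this (the variable name and
sed expression are hypothetical, not the actual code in test-lib.sh):
    sed -e 's/ time="[^"]*"//g' <"$junit_xml_path" >"$junit_xml_path.new" &&
    mv "$junit_xml_path.new" "$junit_xml_path"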
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Jacques Bodin-Hullin <j.bodinhullin@monsieurbiz.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When we're resolving a REF_DELTA, we compare-and-swap its type from
REF_DELTA to whatever real type the base object has, as discussed in
ab791dd138 (index-pack: fix race condition with duplicate bases,
2014-08-29). If the old type wasn't a REF_DELTA, we consider that a
BUG(). But as discussed in that commit, we might see this case whenever
we try to resolve an object twice, which may happen because we have
multiple copies of the base object.
So this isn't a bug at all, but rather a sign that the input pack is
broken. And indeed, this case is triggered already in t5309.5 and
t5309.6, which create packs with delta cycles and duplicate bases. But
we never noticed because those tests are marked expect_failure.
Those tests were added by b2ef3d9ebb (test index-pack on packs with
recoverable delta cycles, 2013-08-23), which was leaving the door open
for cases that we theoretically _could_ handle. And when we see an
already-resolved object like this, in theory we could keep going after
confirming that the previously resolved child->real_type matches
base->obj->real_type. But:
- enforcing the "only resolve once" rule here saves us from an
infinite loop in other parts of the code. If we keep going, then the
delta cycle in t5309.5 causes us to loop infinitely, as
find_ref_delta_children() doesn't realize which objects have already
been resolved. So there would be more changes needed to make this
case work, and in the meantime we'd be worse off.
- any pack that triggers this is broken anyway. It either has a
duplicate base object, or it has a cycle which causes us to bring in
a duplicate via --fix-thin. In either case, we'd end up rejecting
the pack in write_idx_file(), which also detects duplicates.
So the tests have little value in documenting what we _could_ be doing
(and have been neglected for 6+ years). Let's switch them to confirming
that we handle this case cleanly (and switch out the BUG() for a more
informative die() so that we do so).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As noted in the previous commit, the nature of the fanout heuristic
in the notes code causes the exact point at which we increase or
decrease the notes fanout to vary with the objects being annotated.
Since the object ids generated by the test environment are
deterministic (by design), the notes generated and tested by t3305
are always the same, and we therefore happen to see the same fanout
behavior from one run to the next.
Coincidentally, if we were to change the test environment slightly
(say by making a test commit on an unrelated branch before we start
the t3305 test proper), we not only see the fanout switch happen at
different points, we also manage to trigger a _bug_ in the notes
code where the fanout 1 -> 0 switch is not applied uniformly across
the notes tree, but instead yields a notes tree like this:
...
bdeafb301e44b0e4db0f738a2d2a7beefdb70b70
bff2d39b4f7122bd4c5caee3de353a774d1e632a
d3/8ec8f851adf470131178085bfbaab4b12ad2a7
e0b173960431a3e692ae929736df3c9b73a11d5b
eb3c3aede523d729990ac25c62a93eb47c21e2e3
...
The bug occurs when we are writing out a notes tree with a newly
decreased fanout, and the notes tree contains unexpanded subtrees
that should be consolidated into the parent tree as a consequence of
the decreased fanout:
Subtrees that happen to sit at an _even_ level in the internal notes
16-tree structure (in other words: subtrees whose path - "d3" in the
example above - is unique in the first nibble - i.e. there are no
other note paths that start with "d") are _not_ unpacked as part of
the tree writeout. This error will repeat itself in subsequent note
trees until the subtree is forced to be unpacked. In t3305 this only
happens when the d38ec8f8 note is itself removed from the tree.
The error is not severe (no information is lost, and the notes code
is able to read/decode this tree and manipulate it correctly), but
this is nonetheless a bug in the current implementation that should
be fixed.
That said, fixing the off-by-one error is not without complications:
We must take into account that the load_subtree() call from
for_each_note_helper() (that is now done to correctly unpack the
subtree while we're writing out the notes tree) may end up inserting
unpacked non-notes into the linked list of non_note entries held by
the struct notes_tree. Since we are in the process of writing out the
notes tree, this linked list is currently in the process of being
traversed by write_each_non_note_until(). The unpacked non-notes are
necessarily inserted between the last non-note we wrote out, and the
next non-note to be written. Hence, we cannot simply hold the
next_non_note to write in struct write_each_note_data (as we would
then silently skip these newly inserted notes), but must instead
always follow the ->next pointer from the last non-note we wrote.
(This part was caught by an existing test in t3304.)
Cc: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Cc: Brian M. Carlson <sandals@crustytoothpaste.net>
Signed-off-by: Johan Herland <johan@herland.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In short, before this patch, this test script:
- creates many notes
- verifies that all notes in the notes tree have a fanout of 1
- removes most notes
- verifies that the notes in the notes tree now have a fanout of 0
The fanout verification only happened twice: after creating all the
notes, and after removing most of them.
This patch strengthens the test by checking the fanout after _each_
added/removed note: We assert that the switch from fanout 0 -> 1
happens exactly once while adding notes (and that the switch pervades
the entire notes tree). Likewise, we assert that the switch from
fanout 1 -> 0 happens exactly once while removing notes.
Additionally, we decrease the number of notes left after removal,
from 50 to 15 notes, in order to ensure that the fanout 1 -> 0 transition
keeps happening regardless of external factors[1].
[1]: Currently (with the SHA1 hash function and the deterministic
object ids of the test environment) the fanout heuristic in the notes
code happens to switch from 0 -> 1 at 109 notes, and from 1 -> 0 at
59 notes. However, changing the hash function or other external
factors will vary these numbers, and the latter may - in theory - go
as low as 15. For more details, please see the discussion at
https://public-inbox.org/git/20200125230035.136348-4-sandals@crustytoothpaste.net/
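For readers who want to see the fanout directly, listing the default
notes ref shows it (a sketch, assuming notes live on refs/notes/commits):
    git ls-tree refs/notes/commits
    # fanout 0: flat entries named by the full 40-hex annotated object id
    # fanout 1: 2-hex directories (e.g. "d3/") holding 38-hex entries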
Cc: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Cc: Brian M. Carlson <sandals@crustytoothpaste.net>
Signed-off-by: Johan Herland <johan@herland.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Since grep learned to recurse into submodules in 0281e487fd
(grep: optionally recurse into submodules, 2016-12-16),
using --recurse-submodules along with --no-index makes Git
die().
This is unfortunate because if submodule.recurse is set in a user's
~/.gitconfig, invoking `git grep --no-index` either inside or outside
a Git repository results in
fatal: option not supported with --recurse-submodules
Let's allow using these options together, so that setting submodule.recurse
globally does not prevent using `git grep --no-index`.
Using `--recurse-submodules` should not have any effect if `--no-index`
is used inside a repository, as Git will recurse into the checked out
submodule directories just like into regular directories.
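A minimal illustration of what now works (pattern and path are
arbitrary; -c stands in for a global submodule.recurse setting):
    git -c submodule.recurse=true grep --no-index "TODO" .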
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Philippe Blain <levraiphilippeblain@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|