|
The five tests checking 'git fast-import's checkpoint handling in
't9300-fast-import.sh', all with the prefix "V:" in their test
description, can hang indefinitely if 'git fast-import' unexpectedly
dies early in any of these tests.
These five tests run 'git fast-import' in the background, while
feeding instructions to its standard input through a fifo (fd 8) from
a background subshell, and reading and verifying its standard output
through another fifo (fd 9) in the test script's main shell process.
This "reading and verifying" is basically a 'while read ...' shell
loop iterating until 'git fast-import' outputs the expected line,
ignoring any other output. This doesn't work very well when 'git
fast-import' dies before printing that particular line, because the
'read' builtin doesn't get EOF after the death of 'git fast-import',
as their input and output are not connected directly but through a
fifo. Consequently, that 'read' hangs waiting for the next line from
the already dead 'git fast-import', leaving the test script and in
turn the whole test suite hanging.
Avoid this hang by checking whether the background 'git fast-import'
process exited unexpectedly early, and interrupt the 'while read' loop
if it did. We have to jump through some hoops to achieve that, though:
- Start the background 'git fast-import' in another background
  subshell, which then:
    - prints the PID of that 'git fast-import' process to the fifo,
      to be read by the main shell process, so it will know which
      process to kill when the test is finished.
    - waits until that 'git fast-import' process exits. If it does
      exit, then it reports the exit code and writes a message to the
      fifo used for 'git fast-import's standard output, thus
      unblocking the 'read' builtin in the main shell process.
- Modify that 'while read' loop to break the loop upon seeing that
  message, and fail the test in the usual way.
- Once the test is finished, kill that background subshell as well,
  and do so before killing the background 'git fast-import'.
  Otherwise the background 'git fast-import' and subshell processes
  would die racily, and if 'git fast-import' were to die sooner,
  then we might get some undesired and potentially confusing
  messages in the test's output.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The five tests running 'git fast-import' in the background in
't9300-fast-import.sh' store the PID of that background process in a
pidfile, to be used to check whether that background process survived
each test and then to kill it in test_when_finished commands. To
achieve this, all five of these tests run three $(cat <pidfile>) command
substitutions each.
Store the PID of the background 'git fast-import' in a variable to
avoid those extra processes.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Regression fix.
* ds/commit-graph-on-fetch:
commit-graph: fix writing first commit-graph during fetch
t5510-fetch.sh: demonstrate fetch.writeCommitGraph bug
|
|
Comment update.
* wb/fsmonitor-bitmap-fix:
t7519-status-fsmonitor: improve comments
|
|
The comments for the staging/unstaging test did not accurately
describe the scenario being tested. It is not essential that
the test files being staged/unstaged appear at the end of the
index. All that is required is that the test files are not
flagged with CE_FSMONITOR_VALID and have a position in the
index greater than the number of entries in the index after
unstaging.
The comment for this test has been updated to be more
accurate with respect to the scenario that's being tested.
Signed-off-by: William Baker <William.Baker@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The previous commit includes a failing test for an issue around
fetch.writeCommitGraph and fetching in a repo with a submodule. Here, we
fix that bug and set the test to "test_expect_success".
The problem arises with this set of commands when the remote repo at
<url> has a submodule. Note that --recurse-submodules is not needed to
demonstrate the bug.
$ git clone <url> test
$ cd test
$ git -c fetch.writeCommitGraph=true fetch origin
Computing commit graph generation numbers: 100% (12/12), done.
BUG: commit-graph.c:886: missing parent <hash1> for commit <hash2>
Aborted (core dumped)
As an initial fix, I converted the code in builtin/fetch.c that calls
write_commit_graph_reachable() to instead launch a "git commit-graph
write --reachable --split" process. That code worked, but is not how we
want the feature to work long-term.
That test did demonstrate that the issue must have something to do with
the internal state of the 'git fetch' process.
The write_commit_graph() method in commit-graph.c ensures the commits we
plan to write are "closed under reachability" using close_reachable().
This method walks from the input commits, and uses the UNINTERESTING
flag to mark which commits have already been visited. This allows the
walk to take O(N) time, where N is the number of commits, instead of
O(P) time, where P is the number of paths. (The number of paths can be
exponential in the number of commits.)
However, the UNINTERESTING flag is used in lots of places in the
codebase. This flag usually means some barrier to stop a commit walk,
such as in revision-walking to compare histories. It is not often
cleared after the walk completes because the starting points of those
walks do not have the UNINTERESTING flag, and clear_commit_marks() would
stop immediately.
This is happening during a 'git fetch' call with a remote. The fetch
negotiation is comparing the remote refs with the local refs and marking
some commits as UNINTERESTING.
I tested running clear_commit_marks_many() to clear the UNINTERESTING
flag inside close_reachable(), but the tips did not have the flag, so
that did nothing.
It turns out that the calculate_changed_submodule_paths() method is at
fault. Thanks, Peff, for pointing out this detail! More specifically,
for each submodule, collect_changed_submodules() runs a revision
walk to essentially do file-history on the list of submodules. That
revision walk marks commits UNINTERESTING if they are simplified away
by not changing the submodule.
Instead, I finally arrived at the conclusion that I should use a flag
that is not used in any other part of the code. In commit-reach.c, a
number of flags were defined for commit walk algorithms. The REACHABLE
flag seemed to make the most sense, and it was not actually used
anywhere in the file. The REACHABLE flag had been used in early versions
of commit-reach.c, but was removed by 4fbcca4 (commit-reach: make
can_all_from_reach... linear, 2018-07-20).
Add the REACHABLE flag to commit-graph.c and use it instead of
UNINTERESTING in close_reachable(). This fixes the bug in manual
testing.
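The idea can be illustrated with a small, self-contained sketch; this is
not the Git code, and the struct, flag value, and function names below
are simplified stand-ins. Closing a set of commits under reachability
only stays correct if the "already visited" bit is one that nothing else
in the running process has set:

    #include <stdio.h>

    #define REACHABLE (1u << 0)  /* dedicated "visited" bit for this walk only */
    #define MAX_PARENTS 2

    struct commit {
        unsigned flags;
        struct commit *parents[MAX_PARENTS];
    };

    /* Mark every ancestor of 'c'; the flag keeps the walk O(commits). */
    static void close_reachable(struct commit *c)
    {
        if (!c || (c->flags & REACHABLE))
            return;  /* already part of the closure */
        c->flags |= REACHABLE;
        for (int i = 0; i < MAX_PARENTS; i++)
            close_reachable(c->parents[i]);
    }

    int main(void)
    {
        struct commit root = { 0, { NULL, NULL } };
        struct commit a = { 0, { &root, NULL } };
        struct commit b = { 0, { &root, NULL } };
        struct commit merge = { 0, { &a, &b } };

        /*
         * If 'root' had been pre-marked by an earlier phase (as fetch
         * negotiation pre-marks commits UNINTERESTING), reusing that
         * same bit here would stop the walk early and leave the set
         * "open", which is exactly the reported BUG.
         */
        close_reachable(&merge);
        printf("root in closure: %s\n",
               (root.flags & REACHABLE) ? "yes" : "no");
        return 0;
    }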
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Jeff King <peff@peff.net>
Helped-by: Szeder Gábor <szeder.dev@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While dogfooding, Johannes found a bug in the fetch.writeCommitGraph
config behavior. His example initially happened during a clone with
--recurse-submodules, but we found that it also happens with the first fetch
after cloning a repository that contains a submodule:
$ git clone <url> test
$ cd test
$ git -c fetch.writeCommitGraph=true fetch origin
Computing commit graph generation numbers: 100% (12/12), done.
BUG: commit-graph.c:886: missing parent <hash1> for commit <hash2>
Aborted (core dumped)
In the repo I had cloned, there were really 60 commits to scan, but
only 12 were in the list to write when calling
compute_generation_numbers(). A commit in the list expects to see a
parent, but that parent is not in the list.
A follow-up will fix the bug, but first we create a test that
demonstrates the problem. This test must be careful about an existing
commit-graph file, since GIT_TEST_COMMIT_GRAPH=1 will cause the repo we
are cloning to already have one. This then prevents the incremental
commit-graph write during the first 'git fetch'.
Helped-by: Jeff King <peff@peff.net>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Szeder Gábor <szeder.dev@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The codepath that reads the index.version configuration was broken
with a recent update, which has been corrected.
* ds/feature-macros:
repo-settings: read an int for index.version
|
|
Test update.
* bw/format-patch-o-create-leading-dirs:
t4014: make output-directory tests self-contained
|
|
Test update.
* dl/submodule-set-branch:
t7419: change test_must_fail to ! for grep
|
|
Several config options were combined into a repo_settings struct in
ds/feature-macros, including a move of the "index.version" config
setting in 7211b9e (repo-settings: consolidate some config settings,
2019-08-13).
Unfortunately, that file looked like a lot of boilerplate, and in what is
clearly a copy-paste mistake, the config setting is parsed
with repo_config_get_bool() instead of repo_config_get_int(). This means
that a setting of "index.version=4" would not register correctly and would
revert to the default version of 3.
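The distinction matters because a boolean parser collapses any non-zero
numeric value to 1, which is not a valid index version, so the requested
4 never takes effect. A tiny, self-contained illustration (this is not
the actual repo-settings code; the two parse helpers are stand-ins):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-ins for bool vs. int config parsing, not Git's helpers. */
    static int parse_as_bool(const char *value)
    {
        return strtol(value, NULL, 10) != 0;  /* any non-zero number -> 1 */
    }

    static int parse_as_int(const char *value)
    {
        return (int)strtol(value, NULL, 10);
    }

    int main(void)
    {
        const char *index_version = "4";  /* e.g. index.version=4 */

        printf("parsed as bool: %d\n", parse_as_bool(index_version)); /* 1 */
        printf("parsed as int:  %d\n", parse_as_int(index_version));  /* 4 */
        return 0;
    }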
I caught this while incorporating v2.24.0-rc0 into the VFS for Git
codebase, where we really care that the index is in version 4.
This was not caught by the test suite because the version checks placed
in t1600-index.sh did not test the "basic" scenario enough. Here, we
modify the test to include these normal settings, ensuring they are not
overridden by features.manyFiles or GIT_INDEX_VERSION. While the
"default" version is 3, it is demoted to version 2 in do_write_index()
when version 3 is not necessary.
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The atomic push over smart HTTP transport did not work, which has
been corrected.
* bc/smart-http-atomic-push:
remote-curl: pass on atomic capability to remote side
|
|
A segfault fix.
* wb/fsmonitor-bitmap-fix:
fsmonitor: don't fill bitmap with entries to be removed
|
|
Tweak userdiff patterns for dts.
* sb/userdiff-dts:
userdiff: fix some corner cases in dts regex
|
|
Byte-order fix for the recent update to the progress display code.
* sg/progress-fix:
test-progress: fix test failures on big-endian systems
|
|
According to t/README, test_must_fail() should only be used to test for
failure in Git commands. Replace the invocations of
`test_must_fail grep` with `! grep`.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As noted by Gábor in [1], the new tests in edefc31873 ("format-patch:
create leading components of output directory", 2019-10-11) cannot be
run independently. Fix this.
[1] https://public-inbox.org/git/20191011144650.GM29845@szeder.dev/
Signed-off-by: Bert Wesarg <bert.wesarg@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While reviewing some dts diffs recently I noticed that the hunk header
logic was failing to find the containing node. This is because the regex
doesn't consider properties that may span multiple lines, i.e.
    property = <something>,
               <something_else>;
and it got hung up on comments inside nodes that look like the root node
because they start with '/*'. Add tests for these cases and update the
regex to find them. Maybe detecting the root node is too complicated but
forcing it to be a slash with any amount of whitespace up to an open
bracket seemed OK. I tried to detect that a comment is in-between the
two parts but I wasn't happy with it, so I just dropped it.
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Frank Rowand <frowand.list@gmail.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Reviewed-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In 't0500-progress-display.sh' all tests running 'test-tool progress
--total=<N>' fail on big-endian systems, e.g. like this:
+ test-tool progress --total=3 Working hard
[...]
+ test_i18ncmp expect out
--- expect 2019-10-18 23:07:54.765523916 +0000
+++ out 2019-10-18 23:07:54.773523916 +0000
@@ -1,4 +1,2 @@
-Working hard: 33% (1/3)<CR>
-Working hard: 66% (2/3)<CR>
-Working hard: 100% (3/3)<CR>
-Working hard: 100% (3/3), done.
+Working hard: 0% (1/12884901888)<CR>
+Working hard: 0% (3/12884901888), done.
The reason for that bogus value is that '--total's parameter is parsed
via parse-options's OPT_INTEGER into a uint64_t variable [1], so the
two bits of 3 end up in the "wrong" bytes on big-endian systems
(12884901888 = 0x300000000).
Change the type of that variable from uint64_t to int, to match what
parse-options expects; in the tests of the progress output we won't
use values that don't fit into an int anyway.
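The byte layout behind the bogus number can be reproduced in isolation.
The following sketch is illustrative only (it is not parse-options code);
it mimics a parser that writes an 'int' through a pointer that really
points at a uint64_t:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint64_t total = 0;  /* the old, too-wide variable */
        int parsed = 3;      /* what an OPT_INTEGER-style parser stores */

        /* Buggy pattern: only the first sizeof(int) bytes of 'total' are set. */
        memcpy(&total, &parsed, sizeof(parsed));
        printf("read as uint64_t: %llu\n", (unsigned long long)total);
        /* little-endian: 3; big-endian: 12884901888 (0x300000000) */

        /* The fix: make the variable's type match what the parser writes. */
        int total_fixed = 0;
        memcpy(&total_fixed, &parsed, sizeof(parsed));
        printf("read as int:      %d\n", total_fixed);  /* 3 on any endianness */
        return 0;
    }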
[1] start_progress() expects the total number as an uint64_t, that's
why I chose the same type when declaring the variable holding the
value given on the command line.
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com>
[jpag: Debian unstable/ppc64 (big-endian)]
Tested-By: John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
[tz: Fedora s390x (big-endian)]
Tested-By: Todd Zullinger <tmz@pobox.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git stash save" lost local changes to submodules, which has been
corrected.
* jj/stash-reset-only-toplevel:
stash: avoid recursive hard reset on submodules
|
|
"git format-patch -o <outdir>" did an equivalent of "mkdir <outdir>"
not "mkdir -p <outdir>", which is being corrected.
* bw/format-patch-o-create-leading-dirs:
format-patch: create leading components of output directory
|
|
Test fix.
* ta/t1308-typofix:
t1308-config-set: fix a test that has a typo
|
|
When pushing more than one reference with the --atomic option, the
server is supposed to perform a single atomic transaction to update the
references, leaving them either all to succeed or all to fail. This
works fine when pushing locally or over SSH, but when pushing over HTTP,
we fail to pass the atomic capability to the remote side. In fact, we
have not reported this capability to any remote helpers during the life
of the feature.
Now normally, things happen to work nevertheless, since we actually
check for most types of failures, such as non-fast-forward updates, on
the client side, and just abort the entire attempt. However, if the
server side reports a problem, such as the inability to lock a ref, the
transaction isn't atomic, because we haven't passed the appropriate
capability over and the remote side has no way of knowing that we wanted
atomic behavior.
Fix this by passing the option from the transport code through to remote
helpers, and from the HTTP remote helper down to send-pack. With this
change, we can detect if the server side rejects the push and report
back appropriately. Note the difference in the messages: the remote
side reports "atomic transaction failed", while our own checking rejects
pushes with the message "atomic push failed".
Document the atomic option in the remote helper documentation, so other
implementers can implement it if they like.
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Test cleanup.
* dl/format-patch-doc-test-cleanup:
t4014: treat rev-list output as the expected value
|
|
Test update.
* dl/t0000-skip-test-test:
t0000: cover GIT_SKIP_TESTS blindspots
|
|
"git range-diff" failed to handle mode-only change, which has been
corrected.
* tg/range-diff-output-update:
range-diff: don't segfault with mode-only changes
|
|
Pretty-printed command line formatter (used in e.g. reporting the
command being run by the tracing API) had a bug that lost an
argument that is an empty string, which has been corrected.
* gs/sq-quote-buf-pretty:
sq_quote_buf_pretty: don't drop empty arguments
|
|
Code clean-up of the hashmap API, both users and implementation.
* ew/hashmap:
hashmap_entry: remove first member requirement from docs
hashmap: remove type arg from hashmap_{get,put,remove}_entry
OFFSETOF_VAR macro to simplify hashmap iterators
hashmap: introduce hashmap_free_entries
hashmap: hashmap_{put,remove} return hashmap_entry *
hashmap: use *_entry APIs for iteration
hashmap_cmp_fn takes hashmap_entry params
hashmap_get{,_from_hash} return "struct hashmap_entry *"
hashmap: use *_entry APIs to wrap container_of
hashmap_get_next returns "struct hashmap_entry *"
introduce container_of macro
hashmap_put takes "struct hashmap_entry *"
hashmap_remove takes "const struct hashmap_entry *"
hashmap_get takes "const struct hashmap_entry *"
hashmap_add takes "struct hashmap_entry *"
hashmap_get_next takes "const struct hashmap_entry *"
hashmap_entry_init takes "struct hashmap_entry *"
packfile: use hashmap_entry in delta_base_cache_entry
coccicheck: detect hashmap_entry.hash assignment
diff: use hashmap_entry_init on moved_entry.ent
|
|
The trace2 output, when sent to files in a designated
directory, can populate the directory with too many files; a
mechanism is introduced to set the maximum number of files and
discard further logs when the maximum is reached.
* js/trace2-cap-max-output-files:
trace2: write discard message to sentinel files
trace2: discard new traces if target directory has too many files
docs: clarify trace2 version invariants
docs: mention trace2 target-dir mode in git-config
|
|
Test fixes.
* am/t0028-utf16-tests:
t0028: add more tests
t0028: fix test for UTF-16-LE-BOM
|
|
"git log --graph" for an octopus merge is sometimes colored
incorrectly, which is demonstrated and documented but not yet
fixed.
* dl/octopus-graph-bug:
t4214: demonstrate octopus graph coloring failure
t4214: explicitly list tags in log
t4214: generate expect in their own test cases
t4214: use test_merge
test-lib: let test_merge() perform octopus merges
|
|
Updates to fast-import/export.
* en/fast-imexport-nested-tags:
fast-export: handle nested tags
t9350: add tests for tags of things other than a commit
fast-export: allow user to request tags be marked with --mark-tags
fast-export: add support for --import-marks-if-exists
fast-import: add support for new 'alias' command
fast-import: allow tags to be identified by mark labels
fast-import: fix handling of deleted tags
fast-export: fix exporting a tag and nothing else
|
|
CI updates.
* js/azure-pipelines-msvc:
ci: also build and test with MS Visual Studio on Azure Pipelines
ci: really use shallow clones on Azure Pipelines
tests: let --immediate and --write-junit-xml play well together
test-tool run-command: learn to run (parts of) the testsuite
vcxproj: include more generated files
vcxproj: only copy `git-remote-http.exe` once it was built
msvc: work around a bug in GetEnvironmentVariable()
msvc: handle DEVELOPER=1
msvc: ignore some libraries when linking
compat/win32/path-utils.h: add #include guards
winansi: use FLEX_ARRAY to avoid compiler warning
msvc: avoid using minus operator on unsigned types
push: do not pretend to return `int` from `die_push_simple()`
|
|
"git fetch --jobs=<n>" allowed <n> parallel jobs when fetching
submodules, but this did not apply to "git fetch --multiple" that
fetches from multiple remote repositories. It now does.
* js/fetch-jobs:
fetch: let --jobs=<n> parallelize --multiple, too
|
|
The merge-recursive machinery is one of the most complex parts of
the system that accumulated cruft over time. This large series
cleans up the implementation quite a bit.
* en/merge-recursive-cleanup: (26 commits)
merge-recursive: fix the fix to the diff3 common ancestor label
merge-recursive: fix the diff3 common ancestor label for virtual commits
merge-recursive: alphabetize include list
merge-recursive: add sanity checks for relevant merge_options
merge-recursive: rename MERGE_RECURSIVE_* to MERGE_VARIANT_*
merge-recursive: split internal fields into a separate struct
merge-recursive: avoid losing output and leaking memory holding that output
merge-recursive: comment and reorder the merge_options fields
merge-recursive: consolidate unnecessary fields in merge_options
merge-recursive: move some definitions around to clean up the header
merge-recursive: rename merge_options argument to opt in header
merge-recursive: rename 'mrtree' to 'result_tree', for clarity
merge-recursive: use common name for ancestors/common/base_list
merge-recursive: fix some overly long lines
cache-tree: share code between functions writing an index as a tree
merge-recursive: don't force external callers to do our logging
merge-recursive: remove useless parameter in merge_trees()
merge-recursive: exit early if index != head
Ensure index matches head before invoking merge machinery, round N
merge-recursive: remove another implicit dependency on the_repository
...
|
|
git stash push does not recursively stash submodules, but if
submodule.recurse is set, it may recursively reset --hard them. Having
only the destructive action recurse is likely to be surprising
behaviour, and unlikely to be desirable, so the easiest fix should be to
ensure that the call to git reset --hard never recurses into submodules.
This matches the behavior of check_changes_tracked_files, which ignores
submodules.
Signed-off-by: Jakob Jarmar <jakob@jarmar.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
'git format-patch -o <outdir>' did an equivalent of 'mkdir <outdir>'
not 'mkdir -p <outdir>', which is being corrected.
Avoid the use of 'adjust_shared_perm' on the leading directories, which
may have security implications. This is achieved by temporarily disabling
'config.sharedRepository', like 'git init' does.
Signed-off-by: Bert Wesarg <bert.wesarg@googlemail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
While doing some testing with fsmonitor enabled I found
that git commands would segfault after staging and
unstaging an untracked file. Looking at the crash it
appeared that fsmonitor_ewah_callback was attempting to
adjust bits beyond the bounds of the index cache.
Digging into how this could happen it became clear that
the fsmonitor extension must have been written with
more bits than there were entries in the index. The
root cause ended up being that fill_fsmonitor_bitmap was
populating fsmonitor_dirty with bits for all entries in
the index, even those that had been marked for removal.
To solve this problem, fill_fsmonitor_bitmap has been
updated to skip entries with the CE_REMOVE flag set.
With this change the bits written for the fsmonitor
extension will be consistent with the index entries
written by do_write_index. Additionally, BUG checks
have been added to detect if the number of bits in
fsmonitor_dirty should ever exceed the number of
entries in the index again.
Another option that was considered was moving the call
to fill_fsmonitor_bitmap closer to where the index is
written (and where the fsmonitor extension itself is
written). However, that did not work as the
fsmonitor_dirty bitmap must be filled before the index
is split during writing.
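The counting problem can be shown with a compact, self-contained sketch;
this is not the fsmonitor code, and the entry struct and flag below are
simplified stand-ins. One extra bit per to-be-removed entry is enough to
make the bitmap disagree with what do_write_index() emits:

    #include <stdio.h>

    #define CE_REMOVE (1u << 0)  /* stand-in for Git's CE_REMOVE flag */

    struct cache_entry {
        const char *name;
        unsigned flags;
    };

    int main(void)
    {
        struct cache_entry index[] = {
            { "a.txt", 0 },
            { "b.txt", CE_REMOVE },  /* marked for removal: never written out */
            { "c.txt", 0 },
        };
        int nr = sizeof(index) / sizeof(index[0]);
        int written = 0, old_bits = 0, new_bits = 0;

        for (int i = 0; i < nr; i++) {
            old_bits++;              /* old behaviour: one bit per index entry */
            if (index[i].flags & CE_REMOVE)
                continue;
            new_bits++;              /* fixed behaviour: skip CE_REMOVE entries */
            written++;               /* mirrors what do_write_index() writes */
        }

        printf("entries written: %d, old bits: %d, new bits: %d\n",
               written, old_bits, new_bits);  /* 2, 3, 2 */
        return 0;
    }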
Signed-off-by: William Baker <William.Baker@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Change test 'find value_list for a key from a configset' to redirect the
result to 'expect' instead of 'except', which was a typo.
With this change, the test case actually fails because it uses
`configset_get_value`. Clearly, this was intended to be
`configset_get_value_multi` since the test expects a list of values
instead of a single value, so let's fix that, too.
Signed-off-by: Tanay Abhra <tanayabh@gmail.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git add -i" has been taught to show the total number of hunks and
the hunks that has been processed so far when showing prompts.
* kt/add-i-progress:
add -i: show progress counter in the prompt
|
|
"git stash apply" in a subdirectory of a secondary worktree failed
to access the worktree correctly, which has been corrected.
* js/stash-apply-in-secondary-worktree:
stash apply: report status correctly even in a worktree's subdirectory
|
|
"git range-diff" segfaulted when diff.noprefix configuration was
used, as it blindly expected the patch it internally generates to
have the standard a/ and b/ prefixes. The command now forces the
internal patch to be built without any prefix, not to be affected
by any end-user configuration.
* js/range-diff-noprefix:
range-diff: internally force `diff.noprefix=true`
|
|
A few simplifications and bugfixes to the PCRE interface.
* ab/pcre-jit-fixes:
grep: under --debug, show whether PCRE JIT is enabled
grep: do not enter PCRE2_UTF mode on fixed matching
grep: stress test PCRE v2 on invalid UTF-8 data
grep: create a "is_fixed" member in "grep_pat"
grep: consistently use "p->fixed" in compile_regexp()
grep: stop using a custom JIT stack with PCRE v1
grep: stop "using" a custom JIT stack with PCRE v2
grep: remove overly paranoid BUG(...) code
grep: use PCRE v2 for optimized fixed-string search
grep: remove the kwset optimization
grep: drop support for \0 in --fixed-strings <pattern>
grep: make the behavior for NUL-byte in patterns sane
grep tests: move binary pattern tests into their own file
grep tests: move "grep binary" alongside the rest
grep: inline the return value of a function call used only once
t4210: skip more command-line encoding tests on MinGW
grep: don't use PCRE2?_UTF8 with "log --encoding=<non-utf8>"
log tests: test regex backends in "--encode=<enc>" tests
|
|
"git rebase -i" showed a wrong HEAD while "reword" open the editor.
* pw/rebase-i-show-HEAD-to-reword:
sequencer: simplify root commit creation
rebase -i: check for updated todo after squash and reword
rebase -i: always update HEAD before rewording
|
|
"git clean" fixes.
* en/clean-nested-with-ignored:
dir: special case check for the possibility that pathspec is NULL
clean: fix theoretical path corruption
clean: rewrap overly long line
clean: avoid removing untracked files in a nested git repository
clean: disambiguate the definition of -d
git-clean.txt: do not claim we will delete files with -n/--dry-run
dir: add commentary explaining match_pathspec_item's return value
dir: if our pathspec might match files under a dir, recurse into it
dir: make the DO_MATCH_SUBMODULE code reusable for a non-submodule case
dir: also check directories for matching pathspecs
dir: fix off-by-one error in match_pathspec_item
dir: fix typo in comment
t7300: add testcases showing failure to clean specified pathspecs
|
|
Code cleanup.
* rs/test-remove-useless-debugging-cat:
tests: remove "cat foo" before "test_i18ngrep bar foo"
|
|
Test fix.
* js/mingw-spawn-with-spaces-in-path:
t0061: fix test for argv[0] with spaces (MINGW only)
|
|
Integer arithmetic fix.
* sg/name-rev-cutoff-underflow-fix:
name-rev: avoid cutoff timestamp underflow
|
|
Currently, the tests for GIT_SKIP_TESTS do not cover the situation where
we skip an entire test suite. The tests also do not cover the situation
where we have GIT_SKIP_TESTS defined but the test suite does not match.
Add two test cases so we cover this blindspot.
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In 6bd26f58ea (t4014: use test_line_count() where possible, 2019-08-27),
we converted many test cases to take advantage of the test_line_count()
function. In one conversion, we inverted the expected and actual value
as tested by test_line_count(). Although functionally correct, if
format-patch ever produced incorrect output, the debugging output would
be a bunch of hashes which would be difficult to debug.
Invert the expected and actual values provided to test_line_count() so
that if format-patch produces incorrect output, the debugging output
will be a list of human-readable files instead.
Helped-by: SZEDER Gábor <szeder.dev@gmail.com>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|