|
When "git rebase" tries to compare set of changes on the updated
upstream and our own branch, it computes patch-id for all of these
changes and attempts to find matches. This has been optimized by
lazily computing the full patch-id (which is expensive) to be
compared only for changes that touch the same set of paths.
* kw/patch-ids-optim:
rebase: avoid computing unnecessary patch IDs
patch-ids: add flag to create the diff patch id using header only data
patch-ids: replace the seen indicator with a commit pointer
patch-ids: stop using a hand-rolled hashmap implementation
|
|
"git mv dir non-existing-dir/" did not work in some environments
the same way as existing mainstream platforms. The code now moves
"dir" to "non-existing-dir", without relying on rename("A", "B/")
that strips the trailing slash of '/'.
* js/mv-dir-to-new-directory:
git mv: do not keep slash in `git mv dir non-existing-dir/`
|
|
A small code clean-up.
* rs/use-strbuf-add-unique-abbrev:
use strbuf_add_unique_abbrev() for adding short hashes
|
|
A small code clean-up.
* rs/merge-add-strategies-simplification:
merge: use string_list_split() in add_strategies()
|
|
A small code clean-up.
* rs/child-process-init:
use CHILD_PROCESS_INIT to initialize automatic variables
|
|
Fix-up to an error codepath in a topic already in 'master'.
* sb/submodule-clone-retry:
submodule--helper: use parallel processor correctly
|
|
The `rebase` family of Git commands avoids applying patches that were
already integrated upstream. They do that by using the revision walking
option that computes the patch IDs of the two sides of the rebase
(local-only patches vs upstream-only ones) and skipping those local
patches whose patch ID matches one of the upstream ones.
In many cases, this causes unnecessary churn, as the set of paths
touched by a given commit would often already suffice to determine that
an upstream patch has no local equivalent.
This hurts performance in particular when there are a lot of upstream
patches, and/or large ones.
Therefore, let's introduce the concept of a "diff-header-only" patch ID,
compare those first, and only evaluate the "full" patch ID lazily.
Please note that in contrast to the "full" patch IDs, those
"diff-header-only" patch IDs are prone to collide with one another, as
adjacent commits frequently touch the very same files. Hence we now
have to be careful to allow multiple hash entries with the same hash.
We accomplish that by using the hashmap_add() function, which does not even
test for hash collisions. This also allows us to evaluate the full patch ID
lazily, i.e. only when we find commits with matching diff-header-only
patch IDs.
We add a performance test that demonstrates ~1-6% improvement. In
practice this will depend on various factors, such as how many upstream
changes there are, how big those changes are, and whether file system
caches are cold or warm. As Git's test suite has no way of catching
performance regressions, we also add a regression test that verifies
that the full patch ID computation is skipped when the diff-header-only
computation suffices.
Signed-off-by: Kevin Willford <kcwillford@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
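To illustrate the scheme, here is a minimal C sketch built on Git's hashmap
API from hashmap.h. The struct layout and the compute_full_patch_id() helper
are illustrative stand-ins, not the exact code in patch-ids.c:
struct patch_id {
        struct hashmap_entry ent;   /* hashed over diff-header-only data */
        struct commit *commit;      /* replaces the old "seen" indicator */
        unsigned char full_id[20];  /* filled in lazily */
        unsigned have_full_id : 1;
};
static int patch_id_cmp(const void *a_, const void *b_, const void *unused)
{
        struct patch_id *a = (struct patch_id *)a_;
        struct patch_id *b = (struct patch_id *)b_;
        /* Pay for the expensive full patch ID only once two entries
         * actually share a diff-header-only hash. */
        if (!a->have_full_id)
                compute_full_patch_id(a);   /* hypothetical helper */
        if (!b->have_full_id)
                compute_full_patch_id(b);
        return hashcmp(a->full_id, b->full_id);
}
static void add_patch_id(struct hashmap *patches, struct patch_id *id,
                         unsigned int header_only_hash)
{
        hashmap_entry_init(&id->ent, header_only_hash);
        /* hashmap_add() never checks for duplicates, so entries whose
         * diffs touch the same paths may share a hash; that is fine. */
        hashmap_add(patches, id);
}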
|
|
A few updates to "git submodule update".
Use of "| wc -l" break with BSD variant of 'wc'.
* sb/submodule-update-dot-branch:
t7406: fix breakage on OSX
submodule update: allow '.' for branch value
submodule--helper: add remote-branch helper
submodule-config: keep configured branch around
submodule--helper: fix usage string for relative-path
submodule update: narrow scope of local variable
submodule update: respect depth in subsequent fetches
t7406: future proof tests with hard coded depth
|
|
"git am -3" calls "git merge-recursive" when it needs to fall back
to a three-way merge; this call has been turned into an internal
subroutine call instead of spawning a separate subprocess.
* js/am-3-merge-recursive-direct:
merge-recursive: flush output buffer even when erroring out
merge_trees(): ensure that the callers release output buffer
merge-recursive: offer an option to retain the output in 'obuf'
merge-recursive: write the commit title in one go
merge-recursive: flush output buffer before printing error messages
am -3: use merge_recursive() directly again
merge-recursive: switch to returning errors instead of dying
merge-recursive: handle return values indicating errors
merge-recursive: allow write_tree_from_memory() to error out
merge-recursive: avoid returning a wholesale struct
merge_recursive: abort properly upon errors
prepare the builtins for a libified merge_recursive()
merge-recursive: clarify code in was_tracked()
die(_("BUG")): avoid translating bug messages
die("bug"): report bugs consistently
t5520: verify that `pull --rebase` shows the helpful advice when failing
|
|
* js/commit-slab-decl-fix:
commit-slab.h: avoid duplicated global static variables
config.c: avoid duplicated global static variables
|
|
"git format-patch" learned format.from configuration variable to
specify the default settings for its "--from" option.
* jt/format-patch-from-config:
format-patch: format.from gives the default for --from
|
|
Not-so-recent rewrite of "git am" that started making internal
calls into the commit machinery had an unintended regression, in
that no matter how many seconds it took to apply many patches, the
committer timestamps of the resulting commits were all
the same.
* jk/reset-ident-time-per-commit:
am: reset cached ident date for each patch
|
|
When developing another patch series I had a temporary state in
which git-clone would segfault, when the call was prepared in
prepare_to_clone_next_submodule. This led to the call failing,
i.e. in `update_clone_task_finished` the task was scheduled to be
tried again. The second call to prepare_to_clone_next_submodule
would return 0, as the segfaulted clone did create the .git file
already, such that it was not considered to need cloning again. I
was seeing the "BUG: ce was a submodule before?\n" message, which
was the correct behavior at the time as my local code was
buggy. When trying to debug this failure, I tried to use printing
messages into the strbuf that is passed around, but these messages
were never printed as the die(..) doesn't flush the `err` strbuf.
When implementing the die() in 665b35ecc (2016-06-09, "submodule--helper:
initial clone learns retry logic"), I considered this condition to be
a severe condition, which should lead to an immediate abort as we do not
trust ourselves any more. However, the queued messages in `err` are valuable,
so let's not toss them out by dying immediately, but return gracefully instead.
Another thing to note: the error message itself was misleading. A return
value of 0 doesn't indicate that the passed-in `ce` is not a submodule any more,
but just that we no longer consider cloning it.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Repeating the definition of a static variable seems to be valid in C.
Nevertheless, it is bad style because it can cause confusion, especially
when it becomes necessary to change the type.
d64ec16 (git config: reorganize to use parseopt, 2009-02-21) added two
static variables near the top of the file config.c without removing the
definitions of the two variables that occur later in the file.
The two variables were needed earlier in the file in the newly
introduced parseopt structure. These references were removed later in
d0e08d6 (config: fix parsing of "git config --get-color some.key -1",
2014-11-20).
Remove the redundant, younger, definitions near the top of the file and
keep the original definitions that occur later.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
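As a tiny standalone illustration of the C point above (the variable name is
made up):
static int some_flag;           /* tentative definition */
/* ... many lines later in the same file ... */
static int some_flag;           /* legal C, but redundant and confusing;
                                 * it breaks as soon as one copy needs an
                                 * initializer or a different type */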
|
|
A small memory leak in the command line parsing of "git blame"
has been plugged.
* jk/parseopt-string-list:
blame: drop strdup of string literal
|
|
* rs/use-strbuf-addstr:
use strbuf_addstr() instead of strbuf_addf() with "%s"
use strbuf_addstr() for adding constant strings to a strbuf
|
|
"git pack-objects" has a few options that tell it not to pack
objects found in certain packfiles, which require it to scan .idx
files of all available packs. The codepaths involved in these
operations have been optimized for a common case of not having any
non-local pack and/or any .kept pack.
* jk/pack-objects-optim:
pack-objects: compute local/ignore_pack_keep early
pack-objects: break out of want_object loop early
find_pack_entry: replace last_found_pack with MRU cache
add generic most-recently-used list
sha1_file: drop free_pack_by_name
t/perf: add tests for many-pack scenarios
|
|
More i18n marking.
* va/i18n:
i18n: config: unfold error messages marked for translation
i18n: notes: mark comment for translation
|
|
The reflog output format is documented better, and a new format
--date=unix to report the seconds-since-epoch (without timezone)
has been added.
* jk/reflog-date:
date: clarify --date=raw description
date: add "unix" format
date: document and test "raw-local" mode
doc/pretty-formats: explain shortening of %gd
doc/pretty-formats: describe index/time formats for %gd
doc/rev-list-options: explain "-g" output formats
doc/rev-list-options: clarify "commit@{Nth}" for "-g" option
|
|
On Linux, calling `rename("dir", "non-existing-dir/")` silently
succeeds, stripping the trailing slash of the second argument.
This is all good and dandy but this behavior disagrees with the specs at
http://pubs.opengroup.org/onlinepubs/9699919799/functions/rename.html
that state clearly regarding the 2nd parameter (called `new`):
If the `new` argument does not resolve to an existing directory
entry for a file of type directory and the `new` argument
contains at least one non- <slash> character and ends with one
or more trailing <slash> characters after all symbolic links
have been processed, `rename()` shall fail.
Of course, we would like `git mv dir non-existing-dir/` to succeed (and
rename the directory "dir" to "non-existing-dir"). Let's be extra
careful to remove the trailing slash in that case.
This lets t7001-mv.sh pass in Bash on Windows.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
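A minimal sketch of the workaround (not the exact builtin/mv.c change;
die_errno() is Git's usual error helper):
static void move_to_new_dir(const char *src, char *dst)
{
        size_t len = strlen(dst);
        /* strip "non-existing-dir/" down to "non-existing-dir" ourselves,
         * since POSIX allows rename() to reject a trailing slash */
        while (len > 1 && dst[len - 1] == '/')
                dst[--len] = '\0';
        if (rename(src, dst))
                die_errno("renaming '%s' failed", src);
}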
|
|
Call strbuf_add_unique_abbrev() to add abbreviated hashes to strbufs
instead of taking detours through find_unique_abbrev() and its static
buffer. This is shorter and a bit more efficient.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
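An illustrative before/after (the function name is made up;
strbuf_add_unique_abbrev() and find_unique_abbrev() are the real helpers):
static void append_abbrev(struct strbuf *sb, const unsigned char *sha1)
{
        /* before: detour through find_unique_abbrev() and its static buffer */
        strbuf_addf(sb, "%s", find_unique_abbrev(sha1, DEFAULT_ABBREV));
        /* after: append the abbreviated hash directly */
        strbuf_add_unique_abbrev(sb, sha1, DEFAULT_ABBREV);
}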
|
|
Call string_list_split() for cutting a space separated list into pieces
instead of reimplementing it based on struct strategy. The attr member
of struct strategy was not used in split_merge_strategies(); it was a pure
string operation. Also be nice and clean up once we're done splitting;
the old code didn't bother freeing any of the allocated memory.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
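A sketch of the simplified function, using the string-list API;
append_strategy() and get_strategy() are the existing helpers in
builtin/merge.c, but the exact hunk may differ:
static void add_strategies(const char *string, unsigned attr)
{
        int i;
        struct string_list list = STRING_LIST_INIT_DUP;
        if (!string)
                return;
        string_list_split(&list, string, ' ', -1);
        for (i = 0; i < list.nr; i++)
                append_strategy(get_strategy(list.items[i].string));
        string_list_clear(&list, 0);    /* free what the split allocated */
}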
|
|
Initialize struct child_process variables already when they're defined.
That's shorter and saves a function call.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
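The pattern being converted looks roughly like this:
/* before: definition plus a separate initializing call */
struct child_process cp;
child_process_init(&cp);
/* after: initialized right where it is defined */
struct child_process cp = CHILD_PROCESS_INIT;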
|
|
Call strbuf_addstr() for adding a simple string to a strbuf instead of
using the heavier strbuf_addf(). This is shorter and documents the
intent more clearly.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
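For example (illustrative):
/* before */
strbuf_addf(&buf, "%s", msg);
/* after: shorter, and clearly a plain string append */
strbuf_addstr(&buf, msg);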
|
|
An earlier tweak to make "submodule update" retry a failing clone
of submodules was buggy and caused a segfault, which has been fixed.
* sb/submodule-clone-retry:
submodule-helper: fix indexing in clone retry error reporting path
git-submodule: forward exit code of git-submodule--helper more faithfully
|
|
Gerrit has a "superproject subscription" feature[1] that triggers a
commit in a superproject that is subscribed to its submodules.
Conceptually this Gerrit feature can be done on the client side with
Git via (except for raciness, error handling etc):
while [ true ]; do
git -C <superproject> submodule update --remote --force
git -C <superproject> commit -a -m "Update submodules"
git -C <superproject> push
done
for each branch in the superproject. To ease the configuration in Gerrit
a special value of "." has been introduced for the submodule.<name>.branch
to mean the same branch as the superproject[2], such that you can create a
new branch on both the superproject and the submodule, and this feature
continues to work on that new branch.
Now we find projects in the wild with such a .gitmodules file.
The .gitmodules used in these Gerrit projects do not conform
to Git's understanding of what .gitmodules should look like.
This teaches Git to deal gracefully with this syntax as well.
The redefinition of "." does no harm to existing projects unaware of
this change, as "." is an invalid branch name in Git, so we do not
expect such projects to exist.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In a later patch we want to enhance the logic for the branch selection.
Rewrite the current logic in C, so that we can build on it directly
when we enhance it.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
"git push" and "git clone" learned to give better progress meters
to the end user who is waiting on the terminal.
* jk/push-progress:
receive-pack: send keepalives during quiet periods
receive-pack: turn on connectivity progress
receive-pack: relay connectivity errors to sideband
receive-pack: turn on index-pack resolving progress
index-pack: add flag for showing delta-resolution progress
clone: use a real progress meter for connectivity check
check_connected: add progress flag
check_connected: relay errors to alternate descriptor
check_everything_connected: use a struct with named options
check_everything_connected: convert to argv_array
rev-list: add optional progress reporting
check_everything_connected: always pass --quiet to rev-list
|
|
Users of the parse_options_concat() API function need to allocate
extra slots in advance and fill them with OPT_END() when they want
to decide the set of supported options dynamically, which makes the
code error-prone and hard to read. This has been corrected by tweaking
the API to allocate and return a new copy of the "struct option" array.
* jk/parse-options-concat:
parse_options: allocate a new array when concatenating
|
|
"git push" learned to accept and pass extra options to the
receiving end so that hooks can read and react to them.
* sb/push-options:
add a test for push options
push: accept push options
receive-pack: implement advertising and receiving push options
push options: {pre,post}-receive hook learns about push options
|
|
"git commit --help" said "--no-verify" is only about skipping the
pre-commit hook, and failed to say that it also skipped the
commit-msg hook.
* os/no-verify-skips-commit-msg-too:
commit: describe that --no-verify skips the commit-msg hook in the help text
|
|
This strdup was added as part of 58dbfa2 (blame: accept
multiple -L ranges, 2013-08-06) to be consistent with
parse_opt_string_list(), which appends to the same list.
But as of 7a7a517 (parse_opt_string_list: stop allocating
new strings, 2016-06-13), we should stop using strdup (to
match parse_opt_string_list, and for all the reasons
described in that commit; namely that it does nothing useful
and causes us to leak the memory).
Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When we compute the date to go in author/committer lines of
commits, or tagger lines of tags, we get the current date
once and then cache it for the rest of the program. This is
a good thing in some cases, like "git commit", because it
means we do not racily assign different times to the
author/committer fields of a single commit object.
But as more programs start to make many commits in a single
process (e.g., the now-builtin "git am"), it means that
you'll get long strings of commits with identical committer
timestamps (whereas before, we invoked "git commit" many
times and got true timestamps).
This patch addresses it by letting callers reset the cached
time, which means they'll get a fresh time on their next
call to git_committer_info() or git_author_info(). The first
caller to do so is "git am", which resets the time for each
patch it applies.
It would be nice if we could just do this automatically
before filling in the ident fields of commit and tag
objects. Unfortunately, it's hard to know where a particular
logical operation begins and ends.
For instance, if commit_tree_extended() were to call
reset_ident_date() before getting the committer/author
ident, that doesn't quite work; sometimes the author info is
passed in to us as a parameter, and it may or may not have
come from a previous call to ident_default_date(). So in
those cases, we lose the property that the committer and the
author timestamp always match.
You could similarly put a date-reset at the end of
commit_tree_extended(). That actually works in the current
code base, but it's fragile. It makes the assumption that
after commit_tree_extended() finishes, the caller has no
other operations that would logically want to fall into the
same timestamp.
So instead we provide the tool to easily do the reset, and
let the high-level callers use it to annotate their own
logical operations.
There's no automated test, because it would be inherently
racy (it depends on whether the program takes multiple
seconds to run). But you can see the effect with something
like:
# make a fake 100-patch series
top=$(git rev-parse HEAD)
bottom=$(git rev-list --first-parent -100 HEAD | tail -n 1)
git log --format=email --reverse --first-parent \
--binary -m -p $bottom..$top >patch
# now apply it; this presumably takes multiple seconds
git checkout --detach $bottom
git am <patch
# now count the number of distinct committer times;
# prior to this patch, there would only be one, but
# now we'd typically see several.
git log --format=%ct $bottom.. | sort -u
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Helped-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
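A minimal sketch of how a caller uses the new hook; the driver loop and the
helpers in it are hypothetical, only reset_ident_date() is the function added
here:
while (more_patches_to_apply()) {       /* hypothetical loop */
        reset_ident_date();             /* drop the cached date */
        apply_one_patch_and_commit();   /* this commit gets a fresh time */
}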
|
|
Internally we call the underscore version of relative_path, but externally
we present an API with no underscores.
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Replace uses of strbuf_addf() for adding strings with more lightweight
strbuf_addstr() calls.
In http-push.c it becomes easier to see what's going on without having
to verify that the definition of PROPFIND_ALL_REQUEST doesn't contain
any format specifiers.
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This helps users who would prefer format-patch to default to --from,
and makes it easier to change the default in the future.
Signed-off-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The recursive merge machinery accumulates its output in an output
buffer, to be flushed at the end of merge_recursive(). At this point,
we forgot to release the output buffer.
When calling merge_trees() (i.e. the non-recursive part of the recursive
merge) directly, the output buffer is never flushed because the caller
may be merge_recursive() which wants to flush the output itself.
For the same reason, merge_trees() cannot release the output buffer: it
may still be needed.
Forgetting to release the output buffer did not matter much when running
git-checkout, or git-merge-recursive, because we exited after the
operation anyway. Ever since cherry-pick learned to pick a commit range,
however, this memory leak had the potential of becoming a problem.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
In want_object_in_pack(), we can exit early from our loop if
neither "local" nor "ignore_pack_keep" are set. If they are,
however, we must examine each pack to see if it has the
object and is non-local or has a ".keep".
It's quite common for there to be no non-local or .keep
packs at all, in which case we know ahead of time that
looking further will be pointless. We can pre-compute this
by simply iterating over the list of packs ahead of time,
and dropping the flags if there are no packs that could
match.
Another similar strategy would be to modify the loop in
want_object_in_pack() to notice that we have already found
the object once, and that we are looping only to check for
"local" and "keep" attributes. If a pack has neither of
those, we can skip the call to find_pack_entry_one(), which
is the expensive part of the loop.
This has two advantages:
- it isn't all-or-nothing; we still get some improvement
when there's a small number of kept or non-local packs,
and a large number of non-kept local packs
- it eliminates any possible race where we add new
non-local or kept packs after our initial scan. In
practice, I don't think this race matters; we already
cache the packed_git information, so somebody who adds a
new pack or .keep file after we've started will not be
noticed at all, unless we happen to need to call
reprepare_packed_git() because a lookup fails.
In other words, we're already racy, and the race is not
a big deal (losing the race means we might include an
object in the pack that would not otherwise be, which is
an acceptable outcome).
However, it also has a disadvantage: we still loop over the
rest of the packs for each object to check their flags. This
is much less expensive than doing the object lookup, but
still not free. So if we wanted to implement that strategy
to cover the non-all-or-nothing cases, we could do so in
addition to this one (so you get the most speedup in the
all-or-nothing case, and the best we can do in the other
cases). But given that the all-or-nothing case is likely the
most common, it is probably not worth the trouble, and we
can revisit this later if evidence points otherwise.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
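A rough sketch of the idea; the helper name is made up, and the actual change
is more careful (for example about how "--local" also affects loose objects):
static int have_non_local_packs, have_keep_packs;
static void scan_existing_packs(void)  /* hypothetical helper */
{
        struct packed_git *p;
        for (p = packed_git; p; p = p->next) {
                if (!p->pack_local)
                        have_non_local_packs = 1;
                if (p->pack_keep)
                        have_keep_packs = 1;
        }
}
/* ... then, in want_object_in_pack(), once the object has been found in
 * some pack, the loop can stop unless another pack could still matter: */
        if ((!local || !have_non_local_packs) &&
            (!ignore_packed_keep || !have_keep_packs))
                return 1;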
|
|
When pack-objects collects the list of objects to pack
(either from stdin, or via its internal rev-list), it
filters each one through want_object_in_pack().
This function loops through each existing packfile, looking
for the object. When we find it, we mark the pack/offset
combo for later use. However, we can't just return "yes, we
want it" at that point. If --honor-pack-keep is in effect,
we must keep looking to find it in _all_ packs, to make sure
none of them has a .keep. Likewise, if --local is in effect,
we must make sure it is not present in any non-local pack.
As a result, the sum effort of these calls is effectively
O(nr_objects * nr_packs). In an ordinary repository, we have
only a handful of packs, and this doesn't make a big
difference. But in pathological cases, it can slow the
counting phase to a crawl.
This patch notices the case that we have neither "--local"
nor "--honor-pack-keep" in effect and breaks out of the loop
early, after finding the first instance. Note that our worst
case is still "objects * packs" (i.e., we might find each
object in the last pack we look in), but in practice we will
often break out early. On an "average" repo, my git.git with
8 packs, this shows a modest 2% (a few dozen milliseconds)
improvement in the counting-objects phase of "git
pack-objects --all <foo" (hackily instrumented by sticking
exit(0) right after list_objects).
But in a much more pathological case, it makes a bigger
difference. I ran the same command on a real-world example
with ~9 million objects across 1300 packs. The counting time
dropped from 413s to 45s, an improvement of about 89%.
Note that this patch won't do anything by itself for a
normal "git gc", as it uses both --honor-pack-keep and
--local.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
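A sketch of the early exit in the lookup loop, heavily simplified from
want_object_in_pack() (exclusion and .keep handling omitted):
        for (p = packed_git; p; p = p->next) {
                off_t offset = find_pack_entry_one(sha1, p);
                if (!offset)
                        continue;
                if (!*found_pack) {
                        *found_offset = offset;
                        *found_pack = p;
                }
                /* Without --local or --honor-pack-keep, no other pack
                 * can change the answer, so stop scanning here. */
                if (!local && !ignore_packed_keep)
                        return 1;
                /* otherwise keep checking the remaining packs */
        }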
|
|
"git blame -M" missed a single line that was moved within the file.
* dk/blame-move-no-reason-for-1-line-context:
blame: require 0 context lines while finding moved lines with -M
|
|
One part of "git am" had an oddball helper function that called
stuff from outside "his" as opposed to calling what we have "ours",
which was not gender-neutral and also inconsistent with the rest of
the system where outside stuff is usually called "theirs" in
contrast to "ours".
* js/am-call-theirs-theirs-in-fallback-3way:
am: counteract gender bias
|
|
"gc.autoPackLimit" when set to 1 should not trigger a repacking
when there is only one pack, but the code counted poorly and did
so.
* ew/gc-auto-pack-limit-fix:
gc: fix off-by-one error with gc.autoPackLimit
|
|
Git does not know what the contents in the index should be for a
path added with "git add -N" yet, so "git grep --cached" should not
show hits (or show lack of hits, with -L) in such a path, but that
logic does not apply to "git grep", i.e. searching in the working
tree files. But we did so by mistake, which has been corrected.
* nd/ita-cleanup:
grep: fix grepping for "intent to add" files
t7810-grep.sh: fix a whitespace inconsistency
t7810-grep.sh: fix duplicated test name
|
|
A helper function that takes the contents of a commit object and
finds its subject line did not ignore leading blank lines, as is
commonly done by other codepaths. Make it ignore leading blank
lines to match.
* js/find-commit-subject-ignore-leading-blanks:
reset --hard: skip blank lines when reporting the commit subject
sequencer: use skip_blank_lines() to find the commit subject
commit -C: skip blank lines at the beginning of the message
commit.c: make find_commit_subject() more robust
pretty: make the skip_blank_lines() function public
|
|
"git pack-objects" and "git index-pack" mostly operate with off_t
when talking about the offset of objects in a packfile, but there
were a handful of places that used "unsigned long" to hold that
value, leading to an unintended truncation.
* nd/pack-ofs-4gb-limit:
fsck: use streaming interface for large blobs in pack
pack-objects: do not truncate result in-pack object size on 32-bit systems
index-pack: correct "offset" type in unpack_entry_data()
index-pack: report correct bad object offsets even if they are large
index-pack: correct "len" type in unpack_data()
sha1_file.c: use type off_t* for object_info->disk_sizep
pack-objects: pass length to check_pack_crc() without truncation
|
|
"git worktree prune" protected worktrees that are marked as
"locked" by creating a file in a known location. "git worktree"
command learned a dedicated command pair to create and remove such
a file, so that users do not have to do this with an editor.
* nd/worktree-lock:
worktree.c: find_worktree() search by path suffix
worktree: add "unlock" command
worktree: add "lock" command
worktree.c: add is_worktree_locked()
worktree.c: add is_main_worktree()
worktree.c: add find_worktree()
|
|
Mark comment displayed when editing a note for translation.
Signed-off-by: Vasco Almeida <vascomalmeida@sapo.pt>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
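The mechanics of such a change are plain gettext marking; a generic
illustration (not the actual builtin/notes.c string):
/* before: shown to the user, but never translated */
strbuf_addstr(&buf, "Some comment shown in the editor\n");
/* after: wrap it in _() so the string is extracted for translation
 * and looked up at runtime */
strbuf_addstr(&buf, _("Some comment shown in the editor\n"));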
|
|
We already have "--date=raw", which is a Unix epoch
timestamp plus a contextual timezone (either the author's or
the local). But one may not care about the timezone and just
want the epoch timestamp by itself. It's not hard to parse
the two apart, but if you are using a pretty-print format,
you may want git to show the "finished" form that the user
will see.
We can accommodate this by adding a new date format, "unix",
which is basically "raw" without the timezone.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
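The new format boils down to printing the epoch seconds with no timezone
appended; a simplified sketch of the corresponding branch in show_date():
        if (mode->type == DATE_UNIX) {
                strbuf_reset(&timebuf);
                strbuf_addf(&timebuf, "%lu", time);
                return timebuf.buf;
        }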
|
|
This brings the short help in line with the documentation.
Signed-off-by: Orgad Shaneh <orgads@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Last October, we had to change this code to run `git merge-recursive`
in a child process: git-am wants to print some helpful advice when the
merge failed, but the code in question was not prepared to return; it
die()d instead.
We are finally at a point where the code *is* prepared to return errors,
and can avoid the child process again.
This reverts commit c63d4b2 (am -3: do not let failed merge from
completing the error codepath, 2015-10-09), with the necessary changes
to adjust for the fact that Git's source code changed in the meantime
(such as: using OIDs instead of hashes in the recursive merge, and a
removed gender bias).
Note: the code now calls merge_recursive_generic() again. Unlike
merge_trees() and merge_recursive(), this function returns 0 upon success,
as most of Git's functions do. Therefore, the error value -1 is naturally
handled correctly, and we do not have to take care of it specifically.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|