75 files changed, 3184 insertions(+), 2045 deletions(-)
diff --git a/Documentation/config.txt b/Documentation/config.txt index 11f320b962..9a0544cf1f 100644 --- a/Documentation/config.txt +++ b/Documentation/config.txt @@ -538,14 +538,14 @@ core.pager:: `LESS` variable to some other value. Alternately, these settings can be overridden on a project or global basis by setting the `core.pager` option. - Setting `core.pager` has no affect on the `LESS` + Setting `core.pager` has no effect on the `LESS` environment variable behaviour above, so if you want to override git's default settings this way, you need to be explicit. For example, to disable the S option in a backward compatible manner, set `core.pager` - to `less -+$LESS -FRX`. This will be passed to the - shell by git, which will translate the final command to - `LESS=FRSX less -+FRSX -FRX`. + to `less -+S`. This will be passed to the shell by + git, which will translate the final command to + `LESS=FRSX less -+S`. core.whitespace:: A comma separated list of common whitespace problems to diff --git a/Documentation/git-clone.txt b/Documentation/git-clone.txt index 6d98ef3d2a..7fefdb0384 100644 --- a/Documentation/git-clone.txt +++ b/Documentation/git-clone.txt @@ -196,9 +196,9 @@ objects from the source repository into a pack in the cloned repository. `--no-single-branch` is given to fetch the histories near the tips of all branches. Further fetches into the resulting repository will only update the - remote tracking branch for the branch this option was used for the + remote-tracking branch for the branch this option was used for the initial cloning. If the HEAD at the remote did not point at any - branch when `--single-branch` clone was made, no remote tracking + branch when `--single-branch` clone was made, no remote-tracking branch is created. --recursive:: diff --git a/Documentation/git-commit.txt b/Documentation/git-commit.txt index 3acf2e70d6..7bdb039d5e 100644 --- a/Documentation/git-commit.txt +++ b/Documentation/git-commit.txt @@ -13,7 +13,7 @@ SYNOPSIS [-F <file> | -m <msg>] [--reset-author] [--allow-empty] [--allow-empty-message] [--no-verify] [-e] [--author=<author>] [--date=<date>] [--cleanup=<mode>] [--status | --no-status] - [-i | -o] [--] [<file>...] + [-i | -o] [-S[<keyid>]] [--] [<file>...] DESCRIPTION ----------- @@ -188,6 +188,11 @@ OPTIONS commit log message unmodified. This option lets you further edit the message taken from these sources. +--no-edit:: + Use the selected commit message without launching an editor. + For example, `git commit --amend --no-edit` amends a commit + without changing its commit message. + --amend:: Used to amend the tip of the current branch. Prepare the tree object you would want to replace the latest commit as usual @@ -197,10 +202,6 @@ OPTIONS current tip -- if it was a merge, it will have the parents of the current tip as parents -- so the current top commit is discarded. - ---no-post-rewrite:: - Bypass the post-rewrite hook. - + -- It is a rough equivalent for: @@ -217,6 +218,9 @@ You should understand the implications of rewriting history if you amend a commit that has already been published. (See the "RECOVERING FROM UPSTREAM REBASE" section in linkgit:git-rebase[1].) +--no-post-rewrite:: + Bypass the post-rewrite hook. + -i:: --include:: Before making a commit out of staged contents so far, @@ -280,6 +284,10 @@ configuration variable documented in linkgit:git-config[1]. commit message template when using an editor to prepare the default commit message. +-S[<keyid>]:: +--gpg-sign[=<keyid>]:: + GPG-sign commit. 
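For readers of the git-commit.txt hunk above, a short usage sketch of the two options it documents (`--no-edit` and `-S`/`--gpg-sign`); the commands are plain git porcelain, and the key id and messages are made-up placeholders:

---------
# Reuse the previous commit message while amending the branch tip.
git commit --amend --no-edit

# GPG-sign a commit with the default signing key ...
git commit -S -m "describe the change"

# ... or with an explicitly named key (placeholder key id).
git commit --gpg-sign=0xDEADBEEF -m "describe the change"
---------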
+ \--:: Do not interpret any more arguments as options. diff --git a/Documentation/git-cvsimport.txt b/Documentation/git-cvsimport.txt index 6695ab3b4b..98d9881d7e 100644 --- a/Documentation/git-cvsimport.txt +++ b/Documentation/git-cvsimport.txt @@ -137,17 +137,19 @@ This option can be used several times to provide several detection regexes. -A <author-conv-file>:: CVS by default uses the Unix username when writing its commit logs. Using this option and an author-conv-file - in this format + maps the name recorded in CVS to author name, e-mail and + optional timezone: + --------- exon=Andreas Ericsson <ae@op5.se> - spawn=Simon Pawn <spawn@frog-pond.org> + spawn=Simon Pawn <spawn@frog-pond.org> America/Chicago --------- + 'git cvsimport' will make it appear as those authors had their GIT_AUTHOR_NAME and GIT_AUTHOR_EMAIL set properly -all along. +all along. If a timezone is specified, GIT_AUTHOR_DATE will +have the corresponding offset applied. + For convenience, this data is saved to `$GIT_DIR/cvs-authors` each time the '-A' option is provided and read from that same diff --git a/Documentation/git-merge.txt b/Documentation/git-merge.txt index 20f9228511..d34ea3c50b 100644 --- a/Documentation/git-merge.txt +++ b/Documentation/git-merge.txt @@ -99,7 +99,7 @@ commit or stash your changes before running 'git merge'. more than two parents (affectionately called an Octopus merge). + If no commit is given from the command line, and if `merge.defaultToUpstream` -configuration variable is set, merge the remote tracking branches +configuration variable is set, merge the remote-tracking branches that the current branch is configured to use as its upstream. See also the configuration section of this manual page. diff --git a/Documentation/git-push.txt b/Documentation/git-push.txt index 22d2580129..fe46c4258a 100644 --- a/Documentation/git-push.txt +++ b/Documentation/git-push.txt @@ -175,7 +175,7 @@ useful if you write an alias or script around 'git push'. --recurse-submodules=check|on-demand:: Make sure all submodule commits used by the revisions to be - pushed are available on a remote tracking branch. If 'check' is + pushed are available on a remote-tracking branch. If 'check' is used git will verify that all submodule commits that changed in the revisions to be pushed are available on at least one remote of the submodule. If any commits are missing the push will be diff --git a/Documentation/git-reset.txt b/Documentation/git-reset.txt index 117e3743a6..978d8da50c 100644 --- a/Documentation/git-reset.txt +++ b/Documentation/git-reset.txt @@ -10,7 +10,7 @@ SYNOPSIS [verse] 'git reset' [-q] [<commit>] [--] <paths>... 'git reset' (--patch | -p) [<commit>] [--] [<paths>...] -'git reset' (--soft | --mixed | --hard | --merge | --keep) [-q] [<commit>] +'git reset' [--soft | --mixed | --hard | --merge | --keep] [-q] [<commit>] DESCRIPTION ----------- @@ -43,11 +43,11 @@ This means that `git reset -p` is the opposite of `git add -p`, i.e. you can use it to selectively reset hunks. See the ``Interactive Mode'' section of linkgit:git-add[1] to learn how to operate the `--patch` mode. -'git reset' --<mode> [<commit>]:: +'git reset' [<mode>] [<commit>]:: This form resets the current branch head to <commit> and possibly updates the index (resetting it to the tree of <commit>) and - the working tree depending on <mode>, which - must be one of the following: + the working tree depending on <mode>. If <mode> is omitted, + defaults to "--mixed". 
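To make the documented default concrete, a quick sketch of equivalent git-reset invocations (standard behaviour, not part of this patch):

---------
# With no <mode>, git reset defaults to --mixed.
git reset HEAD~1          # same as the next line
git reset --mixed HEAD~1  # reset HEAD and the index, keep the working tree

git reset --soft HEAD~1   # move HEAD only; index and working tree untouched
git reset --hard HEAD~1   # also reset the working tree (discards local changes)
---------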
The <mode> must be one of the following: + -- --soft:: diff --git a/Documentation/git-send-email.txt b/Documentation/git-send-email.txt index 324117072d..eeb561cf14 100644 --- a/Documentation/git-send-email.txt +++ b/Documentation/git-send-email.txt @@ -126,6 +126,10 @@ The --to option must be repeated for each user you want on the to list. + Note that no attempts whatsoever are made to validate the encoding. +--compose-encoding=<encoding>:: + Specify encoding of compose message. Default is the value of the + 'sendemail.composeencoding'; if that is unspecified, UTF-8 is assumed. + Sending ~~~~~~~ diff --git a/Documentation/git-svn.txt b/Documentation/git-svn.txt index cfe8d2b5df..8b0d3adfed 100644 --- a/Documentation/git-svn.txt +++ b/Documentation/git-svn.txt @@ -146,6 +146,13 @@ Skip "branches" and "tags" of first level directories;; ------------------------------------------------------------------------ -- +--log-window-size=<n>;; + Fetch <n> log entries per request when scanning Subversion history. + The default is 100. For very large Subversion repositories, larger + values may be needed for 'clone'/'fetch' to complete in reasonable + time. But overly large values may lead to higher memory usage and + request timeouts. + 'clone':: Runs 'init' and 'fetch'. It will automatically create a directory based on the basename of the URL passed to it; diff --git a/Documentation/git-symbolic-ref.txt b/Documentation/git-symbolic-ref.txt index 981d3a8fc1..ef68ad2b71 100644 --- a/Documentation/git-symbolic-ref.txt +++ b/Documentation/git-symbolic-ref.txt @@ -3,13 +3,14 @@ git-symbolic-ref(1) NAME ---- -git-symbolic-ref - Read and modify symbolic refs +git-symbolic-ref - Read, modify and delete symbolic refs SYNOPSIS -------- [verse] 'git symbolic-ref' [-m <reason>] <name> <ref> 'git symbolic-ref' [-q] [--short] <name> +'git symbolic-ref' --delete [-q] <name> DESCRIPTION ----------- @@ -21,6 +22,9 @@ argument to see which branch your working tree is on. Given two arguments, creates or updates a symbolic ref <name> to point at the given branch <ref>. +Given `--delete` and an additional argument, deletes the given +symbolic ref. + A symbolic ref is a regular file that stores a string that begins with `ref: refs/`. For example, your `.git/HEAD` is a regular file whose contents is `ref: refs/heads/master`. @@ -28,6 +32,10 @@ a regular file whose contents is `ref: refs/heads/master`. OPTIONS ------- +-d:: +--delete:: + Delete the symbolic ref <name>. + -q:: --quiet:: Do not issue an error message if the <name> is not a diff --git a/Documentation/gitattributes.txt b/Documentation/gitattributes.txt index ba02d4de59..2698f63cf9 100644 --- a/Documentation/gitattributes.txt +++ b/Documentation/gitattributes.txt @@ -56,6 +56,7 @@ When more than one pattern matches the path, a later line overrides an earlier line. This overriding is done per attribute. The rules how the pattern matches paths are the same as in `.gitignore` files; see linkgit:gitignore[5]. +Unlike `.gitignore`, negative patterns are forbidden. When deciding what attributes are assigned to a path, git consults `$GIT_DIR/info/attributes` file (which has the highest diff --git a/Documentation/gitrepository-layout.txt b/Documentation/gitrepository-layout.txt index 5c891f1169..9f628862b4 100644 --- a/Documentation/gitrepository-layout.txt +++ b/Documentation/gitrepository-layout.txt @@ -93,6 +93,12 @@ refs/remotes/`name`:: records tip-of-the-tree commit objects of branches copied from a remote repository. 
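As a reference point for the repository-layout entries in this hunk, two read-only commands that inspect the refs/remotes hierarchy described above (assuming a remote named 'origin'):

---------
# List the remote-tracking branches recorded under refs/remotes/origin/.
git for-each-ref refs/remotes/origin

# Resolve one of them to the tip commit it records.
git rev-parse refs/remotes/origin/master
---------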
+refs/replace/`<obj-sha1>`:: + records the SHA1 of the object that replaces `<obj-sha1>`. + This is similar to info/grafts and is internally used and + maintained by linkgit:git-replace[1]. Such refs can be exchanged + between repositories while grafts are not. + packed-refs:: records the same information as refs/heads/, refs/tags/, and friends record in a more efficient way. See diff --git a/Documentation/merge-config.txt b/Documentation/merge-config.txt index 861bd6f553..9bb4956ccd 100644 --- a/Documentation/merge-config.txt +++ b/Documentation/merge-config.txt @@ -9,11 +9,11 @@ merge.conflictstyle:: merge.defaultToUpstream:: If merge is called without any commit argument, merge the upstream branches configured for the current branch by using their last - observed values stored in their remote tracking branches. + observed values stored in their remote-tracking branches. The values of the `branch.<current branch>.merge` that name the branches at the remote named by `branch.<current branch>.remote` are consulted, and then they are mapped via `remote.<remote>.fetch` - to their corresponding remote tracking branches, and the tips of + to their corresponding remote-tracking branches, and the tips of these tracking branches are merged. merge.ff:: @@ -746,6 +746,7 @@ LIB_OBJS += editor.o LIB_OBJS += entry.o LIB_OBJS += environment.o LIB_OBJS += exec_cmd.o +LIB_OBJS += fetch-pack.o LIB_OBJS += fsck.o LIB_OBJS += gettext.o LIB_OBJS += gpg-interface.o @@ -763,6 +764,7 @@ LIB_OBJS += lockfile.o LIB_OBJS += log-tree.o LIB_OBJS += mailmap.o LIB_OBJS += match-trees.o +LIB_OBJS += merge.o LIB_OBJS += merge-file.o LIB_OBJS += merge-recursive.o LIB_OBJS += mergesort.o @@ -797,6 +799,7 @@ LIB_OBJS += rerere.o LIB_OBJS += resolve-undo.o LIB_OBJS += revision.o LIB_OBJS += run-command.o +LIB_OBJS += send-pack.o LIB_OBJS += sequencer.o LIB_OBJS += server-info.o LIB_OBJS += setup.o @@ -1382,6 +1385,10 @@ ifeq ($(uname_S),NONSTOP_KERNEL) MKDIR_WO_TRAILING_SLASH = YesPlease # RFE 10-120912-4693 submitted to HP NonStop development. NO_SETITIMER = UnfortunatelyYes + SANE_TOOL_PATH=/usr/coreutils/bin:/usr/local/bin + SHELL_PATH=/usr/local/bin/bash + # as of H06.25/J06.14, we might better use this + #SHELL_PATH=/usr/coreutils/bin/bash endif ifneq (,$(findstring MINGW,$(uname_S))) pathsep = ; @@ -115,6 +115,13 @@ struct attr_state { const char *setto; }; +struct pattern { + const char *pattern; + int patternlen; + int nowildcardlen; + int flags; /* EXC_FLAG_* */ +}; + /* * One rule, as from a .gitattributes file. * @@ -131,7 +138,7 @@ struct attr_state { */ struct match_attr { union { - char *pattern; + struct pattern pat; struct git_attr *attr; } u; char is_macro; @@ -241,9 +248,16 @@ static struct match_attr *parse_attr_line(const char *line, const char *src, if (is_macro) res->u.attr = git_attr_internal(name, namelen); else { - res->u.pattern = (char *)&(res->state[num_attr]); - memcpy(res->u.pattern, name, namelen); - res->u.pattern[namelen] = 0; + char *p = (char *)&(res->state[num_attr]); + memcpy(p, name, namelen); + res->u.pat.pattern = p; + parse_exclude_pattern(&res->u.pat.pattern, + &res->u.pat.patternlen, + &res->u.pat.flags, + &res->u.pat.nowildcardlen); + if (res->u.pat.flags & EXC_FLAG_NEGATIVE) + die(_("Negative patterns are forbidden in git attributes\n" + "Use '\\!' 
for literal leading exclamation.")); } res->is_macro = is_macro; res->num_attr = num_attr; @@ -648,25 +662,21 @@ static void prepare_attr_stack(const char *path) static int path_matches(const char *pathname, int pathlen, const char *basename, - const char *pattern, + const struct pattern *pat, const char *base, int baselen) { - if (!strchr(pattern, '/')) { - return (fnmatch_icase(pattern, basename, 0) == 0); + const char *pattern = pat->pattern; + int prefix = pat->nowildcardlen; + + if (pat->flags & EXC_FLAG_NODIR) { + return match_basename(basename, + pathlen - (basename - pathname), + pattern, prefix, + pat->patternlen, pat->flags); } - /* - * match with FNM_PATHNAME; the pattern has base implicitly - * in front of it. - */ - if (*pattern == '/') - pattern++; - if (pathlen < baselen || - (baselen && pathname[baselen] != '/') || - strncmp(pathname, base, baselen)) - return 0; - if (baselen != 0) - baselen++; - return fnmatch_icase(pattern, pathname + baselen, FNM_PATHNAME) == 0; + return match_pathname(pathname, pathlen, + base, baselen, + pattern, prefix, pat->patternlen, pat->flags); } static int macroexpand_one(int attr_nr, int rem); @@ -704,7 +714,7 @@ static int fill(const char *path, int pathlen, const char *basename, if (a->is_macro) continue; if (path_matches(path, pathlen, basename, - a->u.pattern, base, stk->originlen)) + &a->u.pat, base, stk->originlen)) rem = fill_one("fill", a, rem); } return rem; @@ -956,3 +956,41 @@ int bisect_next_all(const char *prefix, int no_checkout) return bisect_checkout(bisect_rev_hex, no_checkout); } +static inline int log2i(int n) +{ + int log2 = 0; + + for (; n > 1; n >>= 1) + log2++; + + return log2; +} + +static inline int exp2i(int n) +{ + return 1 << n; +} + +/* + * Estimate the number of bisect steps left (after the current step) + * + * For any x between 0 included and 2^n excluded, the probability for + * n - 1 steps left looks like: + * + * P(2^n + x) == (2^n - x) / (2^n + x) + * + * and P(2^n + x) < 0.5 means 2^n < 3x + */ +int estimate_bisect_steps(int all) +{ + int n, x, e; + + if (all < 3) + return 0; + + n = log2i(all); + e = exp2i(n); + x = all - e; + + return (e < 3 * x) ? n : n - 1; +} @@ -11,10 +11,6 @@ extern struct commit_list *filter_skipped(struct commit_list *list, int *count, int *skipped_first); -extern void print_commit_list(struct commit_list *list, - const char *format_cur, - const char *format_last); - #define BISECT_SHOW_ALL (1<<0) #define REV_LIST_QUIET (1<<1) @@ -37,10 +37,6 @@ int copy_note_for_rewrite(struct notes_rewrite_cfg *c, const unsigned char *from_obj, const unsigned char *to_obj); void finish_copy_notes_for_rewrite(struct notes_rewrite_cfg *c); -extern int check_pager_config(const char *cmd); -struct diff_options; -extern void setup_diff_pager(struct diff_options *); - extern int textconv_object(const char *path, unsigned mode, const unsigned char *sha1, int sha1_valid, char **buf, unsigned long *buf_size); extern int cmd_add(int argc, const char **argv, const char *prefix); diff --git a/builtin/diff.c b/builtin/diff.c index 9650be2c50..9c70e40809 100644 --- a/builtin/diff.c +++ b/builtin/diff.c @@ -418,19 +418,3 @@ int cmd_diff(int argc, const char **argv, const char *prefix) refresh_index_quietly(); return result; } - -void setup_diff_pager(struct diff_options *opt) -{ - /* - * If the user asked for our exit code, then either they want --quiet - * or --exit-code. We should definitely not bother with a pager in the - * former case, as we will generate no output. 
Since we still properly - * report our exit code even when a pager is run, we _could_ run a - * pager with --exit-code. But since we have not done so historically, - * and because it is easy to find people oneline advising "git diff - * --exit-code" in hooks and other scripts, we do not do so. - */ - if (!DIFF_OPT_TST(opt, EXIT_WITH_STATUS) && - check_pager_config("diff") != 0) - setup_pager(); -} diff --git a/builtin/fetch-pack.c b/builtin/fetch-pack.c index e6443986b8..940ae35dc2 100644 --- a/builtin/fetch-pack.c +++ b/builtin/fetch-pack.c @@ -1,895 +1,12 @@ #include "builtin.h" -#include "refs.h" #include "pkt-line.h" -#include "commit.h" -#include "tag.h" -#include "exec_cmd.h" -#include "pack.h" -#include "sideband.h" #include "fetch-pack.h" -#include "remote.h" -#include "run-command.h" -#include "transport.h" -#include "version.h" - -static int transfer_unpack_limit = -1; -static int fetch_unpack_limit = -1; -static int unpack_limit = 100; -static int prefer_ofs_delta = 1; -static int no_done; -static int fetch_fsck_objects = -1; -static int transfer_fsck_objects = -1; -static int agent_supported; -static struct fetch_pack_args args = { - /* .uploadpack = */ "git-upload-pack", -}; static const char fetch_pack_usage[] = "git fetch-pack [--all] [--stdin] [--quiet|-q] [--keep|-k] [--thin] " "[--include-tag] [--upload-pack=<git-upload-pack>] [--depth=<n>] " "[--no-progress] [-v] [<host>:]<directory> [<refs>...]"; -#define COMPLETE (1U << 0) -#define COMMON (1U << 1) -#define COMMON_REF (1U << 2) -#define SEEN (1U << 3) -#define POPPED (1U << 4) - -static int marked; - -/* - * After sending this many "have"s if we do not get any new ACK , we - * give up traversing our history. - */ -#define MAX_IN_VAIN 256 - -static struct commit_list *rev_list; -static int non_common_revs, multi_ack, use_sideband; - -static void rev_list_push(struct commit *commit, int mark) -{ - if (!(commit->object.flags & mark)) { - commit->object.flags |= mark; - - if (!(commit->object.parsed)) - if (parse_commit(commit)) - return; - - commit_list_insert_by_date(commit, &rev_list); - - if (!(commit->object.flags & COMMON)) - non_common_revs++; - } -} - -static int rev_list_insert_ref(const char *refname, const unsigned char *sha1, int flag, void *cb_data) -{ - struct object *o = deref_tag(parse_object(sha1), refname, 0); - - if (o && o->type == OBJ_COMMIT) - rev_list_push((struct commit *)o, SEEN); - - return 0; -} - -static int clear_marks(const char *refname, const unsigned char *sha1, int flag, void *cb_data) -{ - struct object *o = deref_tag(parse_object(sha1), refname, 0); - - if (o && o->type == OBJ_COMMIT) - clear_commit_marks((struct commit *)o, - COMMON | COMMON_REF | SEEN | POPPED); - return 0; -} - -/* - This function marks a rev and its ancestors as common. - In some cases, it is desirable to mark only the ancestors (for example - when only the server does not yet know that they are common). 
-*/ - -static void mark_common(struct commit *commit, - int ancestors_only, int dont_parse) -{ - if (commit != NULL && !(commit->object.flags & COMMON)) { - struct object *o = (struct object *)commit; - - if (!ancestors_only) - o->flags |= COMMON; - - if (!(o->flags & SEEN)) - rev_list_push(commit, SEEN); - else { - struct commit_list *parents; - - if (!ancestors_only && !(o->flags & POPPED)) - non_common_revs--; - if (!o->parsed && !dont_parse) - if (parse_commit(commit)) - return; - - for (parents = commit->parents; - parents; - parents = parents->next) - mark_common(parents->item, 0, dont_parse); - } - } -} - -/* - Get the next rev to send, ignoring the common. -*/ - -static const unsigned char *get_rev(void) -{ - struct commit *commit = NULL; - - while (commit == NULL) { - unsigned int mark; - struct commit_list *parents; - - if (rev_list == NULL || non_common_revs == 0) - return NULL; - - commit = rev_list->item; - if (!commit->object.parsed) - parse_commit(commit); - parents = commit->parents; - - commit->object.flags |= POPPED; - if (!(commit->object.flags & COMMON)) - non_common_revs--; - - if (commit->object.flags & COMMON) { - /* do not send "have", and ignore ancestors */ - commit = NULL; - mark = COMMON | SEEN; - } else if (commit->object.flags & COMMON_REF) - /* send "have", and ignore ancestors */ - mark = COMMON | SEEN; - else - /* send "have", also for its ancestors */ - mark = SEEN; - - while (parents) { - if (!(parents->item->object.flags & SEEN)) - rev_list_push(parents->item, mark); - if (mark & COMMON) - mark_common(parents->item, 1, 0); - parents = parents->next; - } - - rev_list = rev_list->next; - } - - return commit->object.sha1; -} - -enum ack_type { - NAK = 0, - ACK, - ACK_continue, - ACK_common, - ACK_ready -}; - -static void consume_shallow_list(int fd) -{ - if (args.stateless_rpc && args.depth > 0) { - /* If we sent a depth we will get back "duplicate" - * shallow and unshallow commands every time there - * is a block of have lines exchanged. 
- */ - char line[1000]; - while (packet_read_line(fd, line, sizeof(line))) { - if (!prefixcmp(line, "shallow ")) - continue; - if (!prefixcmp(line, "unshallow ")) - continue; - die("git fetch-pack: expected shallow list"); - } - } -} - -struct write_shallow_data { - struct strbuf *out; - int use_pack_protocol; - int count; -}; - -static int write_one_shallow(const struct commit_graft *graft, void *cb_data) -{ - struct write_shallow_data *data = cb_data; - const char *hex = sha1_to_hex(graft->sha1); - data->count++; - if (data->use_pack_protocol) - packet_buf_write(data->out, "shallow %s", hex); - else { - strbuf_addstr(data->out, hex); - strbuf_addch(data->out, '\n'); - } - return 0; -} - -static int write_shallow_commits(struct strbuf *out, int use_pack_protocol) -{ - struct write_shallow_data data; - data.out = out; - data.use_pack_protocol = use_pack_protocol; - data.count = 0; - for_each_commit_graft(write_one_shallow, &data); - return data.count; -} - -static enum ack_type get_ack(int fd, unsigned char *result_sha1) -{ - static char line[1000]; - int len = packet_read_line(fd, line, sizeof(line)); - - if (!len) - die("git fetch-pack: expected ACK/NAK, got EOF"); - if (line[len-1] == '\n') - line[--len] = 0; - if (!strcmp(line, "NAK")) - return NAK; - if (!prefixcmp(line, "ACK ")) { - if (!get_sha1_hex(line+4, result_sha1)) { - if (strstr(line+45, "continue")) - return ACK_continue; - if (strstr(line+45, "common")) - return ACK_common; - if (strstr(line+45, "ready")) - return ACK_ready; - return ACK; - } - } - die("git fetch_pack: expected ACK/NAK, got '%s'", line); -} - -static void send_request(int fd, struct strbuf *buf) -{ - if (args.stateless_rpc) { - send_sideband(fd, -1, buf->buf, buf->len, LARGE_PACKET_MAX); - packet_flush(fd); - } else - safe_write(fd, buf->buf, buf->len); -} - -static void insert_one_alternate_ref(const struct ref *ref, void *unused) -{ - rev_list_insert_ref(NULL, ref->old_sha1, 0, NULL); -} - -#define INITIAL_FLUSH 16 -#define PIPESAFE_FLUSH 32 -#define LARGE_FLUSH 1024 - -static int next_flush(int count) -{ - int flush_limit = args.stateless_rpc ? LARGE_FLUSH : PIPESAFE_FLUSH; - - if (count < flush_limit) - count <<= 1; - else - count += flush_limit; - return count; -} - -static int find_common(int fd[2], unsigned char *result_sha1, - struct ref *refs) -{ - int fetching; - int count = 0, flushes = 0, flush_at = INITIAL_FLUSH, retval; - const unsigned char *sha1; - unsigned in_vain = 0; - int got_continue = 0; - int got_ready = 0; - struct strbuf req_buf = STRBUF_INIT; - size_t state_len = 0; - - if (args.stateless_rpc && multi_ack == 1) - die("--stateless-rpc requires multi_ack_detailed"); - if (marked) - for_each_ref(clear_marks, NULL); - marked = 1; - - for_each_ref(rev_list_insert_ref, NULL); - for_each_alternate_ref(insert_one_alternate_ref, NULL); - - fetching = 0; - for ( ; refs ; refs = refs->next) { - unsigned char *remote = refs->old_sha1; - const char *remote_hex; - struct object *o; - - /* - * If that object is complete (i.e. it is an ancestor of a - * local ref), we tell them we have it but do not have to - * tell them about its ancestors, which they already know - * about. - * - * We use lookup_object here because we are only - * interested in the case we *know* the object is - * reachable and we have already scanned it. 
- */ - if (((o = lookup_object(remote)) != NULL) && - (o->flags & COMPLETE)) { - continue; - } - - remote_hex = sha1_to_hex(remote); - if (!fetching) { - struct strbuf c = STRBUF_INIT; - if (multi_ack == 2) strbuf_addstr(&c, " multi_ack_detailed"); - if (multi_ack == 1) strbuf_addstr(&c, " multi_ack"); - if (no_done) strbuf_addstr(&c, " no-done"); - if (use_sideband == 2) strbuf_addstr(&c, " side-band-64k"); - if (use_sideband == 1) strbuf_addstr(&c, " side-band"); - if (args.use_thin_pack) strbuf_addstr(&c, " thin-pack"); - if (args.no_progress) strbuf_addstr(&c, " no-progress"); - if (args.include_tag) strbuf_addstr(&c, " include-tag"); - if (prefer_ofs_delta) strbuf_addstr(&c, " ofs-delta"); - if (agent_supported) strbuf_addf(&c, " agent=%s", - git_user_agent_sanitized()); - packet_buf_write(&req_buf, "want %s%s\n", remote_hex, c.buf); - strbuf_release(&c); - } else - packet_buf_write(&req_buf, "want %s\n", remote_hex); - fetching++; - } - - if (!fetching) { - strbuf_release(&req_buf); - packet_flush(fd[1]); - return 1; - } - - if (is_repository_shallow()) - write_shallow_commits(&req_buf, 1); - if (args.depth > 0) - packet_buf_write(&req_buf, "deepen %d", args.depth); - packet_buf_flush(&req_buf); - state_len = req_buf.len; - - if (args.depth > 0) { - char line[1024]; - unsigned char sha1[20]; - - send_request(fd[1], &req_buf); - while (packet_read_line(fd[0], line, sizeof(line))) { - if (!prefixcmp(line, "shallow ")) { - if (get_sha1_hex(line + 8, sha1)) - die("invalid shallow line: %s", line); - register_shallow(sha1); - continue; - } - if (!prefixcmp(line, "unshallow ")) { - if (get_sha1_hex(line + 10, sha1)) - die("invalid unshallow line: %s", line); - if (!lookup_object(sha1)) - die("object not found: %s", line); - /* make sure that it is parsed as shallow */ - if (!parse_object(sha1)) - die("error in object: %s", line); - if (unregister_shallow(sha1)) - die("no shallow found: %s", line); - continue; - } - die("expected shallow/unshallow, got %s", line); - } - } else if (!args.stateless_rpc) - send_request(fd[1], &req_buf); - - if (!args.stateless_rpc) { - /* If we aren't using the stateless-rpc interface - * we don't need to retain the headers. - */ - strbuf_setlen(&req_buf, 0); - state_len = 0; - } - - flushes = 0; - retval = -1; - while ((sha1 = get_rev())) { - packet_buf_write(&req_buf, "have %s\n", sha1_to_hex(sha1)); - if (args.verbose) - fprintf(stderr, "have %s\n", sha1_to_hex(sha1)); - in_vain++; - if (flush_at <= ++count) { - int ack; - - packet_buf_flush(&req_buf); - send_request(fd[1], &req_buf); - strbuf_setlen(&req_buf, state_len); - flushes++; - flush_at = next_flush(count); - - /* - * We keep one window "ahead" of the other side, and - * will wait for an ACK only on the next one - */ - if (!args.stateless_rpc && count == INITIAL_FLUSH) - continue; - - consume_shallow_list(fd[0]); - do { - ack = get_ack(fd[0], result_sha1); - if (args.verbose && ack) - fprintf(stderr, "got ack %d %s\n", ack, - sha1_to_hex(result_sha1)); - switch (ack) { - case ACK: - flushes = 0; - multi_ack = 0; - retval = 0; - goto done; - case ACK_common: - case ACK_ready: - case ACK_continue: { - struct commit *commit = - lookup_commit(result_sha1); - if (!commit) - die("invalid commit %s", sha1_to_hex(result_sha1)); - if (args.stateless_rpc - && ack == ACK_common - && !(commit->object.flags & COMMON)) { - /* We need to replay the have for this object - * on the next RPC request so the peer knows - * it is in common with us. 
- */ - const char *hex = sha1_to_hex(result_sha1); - packet_buf_write(&req_buf, "have %s\n", hex); - state_len = req_buf.len; - } - mark_common(commit, 0, 1); - retval = 0; - in_vain = 0; - got_continue = 1; - if (ack == ACK_ready) { - rev_list = NULL; - got_ready = 1; - } - break; - } - } - } while (ack); - flushes--; - if (got_continue && MAX_IN_VAIN < in_vain) { - if (args.verbose) - fprintf(stderr, "giving up\n"); - break; /* give up */ - } - } - } -done: - if (!got_ready || !no_done) { - packet_buf_write(&req_buf, "done\n"); - send_request(fd[1], &req_buf); - } - if (args.verbose) - fprintf(stderr, "done\n"); - if (retval != 0) { - multi_ack = 0; - flushes++; - } - strbuf_release(&req_buf); - - consume_shallow_list(fd[0]); - while (flushes || multi_ack) { - int ack = get_ack(fd[0], result_sha1); - if (ack) { - if (args.verbose) - fprintf(stderr, "got ack (%d) %s\n", ack, - sha1_to_hex(result_sha1)); - if (ack == ACK) - return 0; - multi_ack = 1; - continue; - } - flushes--; - } - /* it is no error to fetch into a completely empty repo */ - return count ? retval : 0; -} - -static struct commit_list *complete; - -static int mark_complete(const char *refname, const unsigned char *sha1, int flag, void *cb_data) -{ - struct object *o = parse_object(sha1); - - while (o && o->type == OBJ_TAG) { - struct tag *t = (struct tag *) o; - if (!t->tagged) - break; /* broken repository */ - o->flags |= COMPLETE; - o = parse_object(t->tagged->sha1); - } - if (o && o->type == OBJ_COMMIT) { - struct commit *commit = (struct commit *)o; - if (!(commit->object.flags & COMPLETE)) { - commit->object.flags |= COMPLETE; - commit_list_insert_by_date(commit, &complete); - } - } - return 0; -} - -static void mark_recent_complete_commits(unsigned long cutoff) -{ - while (complete && cutoff <= complete->item->date) { - if (args.verbose) - fprintf(stderr, "Marking %s as complete\n", - sha1_to_hex(complete->item->object.sha1)); - pop_most_recent_commit(&complete, COMPLETE); - } -} - -static int non_matching_ref(struct string_list_item *item, void *unused) -{ - if (item->util) { - item->util = NULL; - return 0; - } - else - return 1; -} - -static void filter_refs(struct ref **refs, struct string_list *sought) -{ - struct ref *newlist = NULL; - struct ref **newtail = &newlist; - struct ref *ref, *next; - int sought_pos; - - sought_pos = 0; - for (ref = *refs; ref; ref = next) { - int keep = 0; - next = ref->next; - if (!memcmp(ref->name, "refs/", 5) && - check_refname_format(ref->name + 5, 0)) - ; /* trash */ - else { - while (sought_pos < sought->nr) { - int cmp = strcmp(ref->name, sought->items[sought_pos].string); - if (cmp < 0) - break; /* definitely do not have it */ - else if (cmp == 0) { - keep = 1; /* definitely have it */ - sought->items[sought_pos++].util = "matched"; - break; - } - else - sought_pos++; /* might have it; keep looking */ - } - } - - if (! 
keep && args.fetch_all && - (!args.depth || prefixcmp(ref->name, "refs/tags/"))) - keep = 1; - - if (keep) { - *newtail = ref; - ref->next = NULL; - newtail = &ref->next; - } else { - free(ref); - } - } - - filter_string_list(sought, 0, non_matching_ref, NULL); - *refs = newlist; -} - -static void mark_alternate_complete(const struct ref *ref, void *unused) -{ - mark_complete(NULL, ref->old_sha1, 0, NULL); -} - -static int everything_local(struct ref **refs, struct string_list *sought) -{ - struct ref *ref; - int retval; - unsigned long cutoff = 0; - - save_commit_buffer = 0; - - for (ref = *refs; ref; ref = ref->next) { - struct object *o; - - o = parse_object(ref->old_sha1); - if (!o) - continue; - - /* We already have it -- which may mean that we were - * in sync with the other side at some time after - * that (it is OK if we guess wrong here). - */ - if (o->type == OBJ_COMMIT) { - struct commit *commit = (struct commit *)o; - if (!cutoff || cutoff < commit->date) - cutoff = commit->date; - } - } - - if (!args.depth) { - for_each_ref(mark_complete, NULL); - for_each_alternate_ref(mark_alternate_complete, NULL); - if (cutoff) - mark_recent_complete_commits(cutoff); - } - - /* - * Mark all complete remote refs as common refs. - * Don't mark them common yet; the server has to be told so first. - */ - for (ref = *refs; ref; ref = ref->next) { - struct object *o = deref_tag(lookup_object(ref->old_sha1), - NULL, 0); - - if (!o || o->type != OBJ_COMMIT || !(o->flags & COMPLETE)) - continue; - - if (!(o->flags & SEEN)) { - rev_list_push((struct commit *)o, COMMON_REF | SEEN); - - mark_common((struct commit *)o, 1, 1); - } - } - - filter_refs(refs, sought); - - for (retval = 1, ref = *refs; ref ; ref = ref->next) { - const unsigned char *remote = ref->old_sha1; - unsigned char local[20]; - struct object *o; - - o = lookup_object(remote); - if (!o || !(o->flags & COMPLETE)) { - retval = 0; - if (!args.verbose) - continue; - fprintf(stderr, - "want %s (%s)\n", sha1_to_hex(remote), - ref->name); - continue; - } - - hashcpy(ref->new_sha1, local); - if (!args.verbose) - continue; - fprintf(stderr, - "already have %s (%s)\n", sha1_to_hex(remote), - ref->name); - } - return retval; -} - -static int sideband_demux(int in, int out, void *data) -{ - int *xd = data; - - int ret = recv_sideband("fetch-pack", xd[0], out); - close(out); - return ret; -} - -static int get_pack(int xd[2], char **pack_lockfile) -{ - struct async demux; - const char *argv[20]; - char keep_arg[256]; - char hdr_arg[256]; - const char **av; - int do_keep = args.keep_pack; - struct child_process cmd; - - memset(&demux, 0, sizeof(demux)); - if (use_sideband) { - /* xd[] is talking with upload-pack; subprocess reads from - * xd[0], spits out band#2 to stderr, and feeds us band#1 - * through demux->out. 
- */ - demux.proc = sideband_demux; - demux.data = xd; - demux.out = -1; - if (start_async(&demux)) - die("fetch-pack: unable to fork off sideband" - " demultiplexer"); - } - else - demux.out = xd[0]; - - memset(&cmd, 0, sizeof(cmd)); - cmd.argv = argv; - av = argv; - *hdr_arg = 0; - if (!args.keep_pack && unpack_limit) { - struct pack_header header; - - if (read_pack_header(demux.out, &header)) - die("protocol error: bad pack header"); - snprintf(hdr_arg, sizeof(hdr_arg), - "--pack_header=%"PRIu32",%"PRIu32, - ntohl(header.hdr_version), ntohl(header.hdr_entries)); - if (ntohl(header.hdr_entries) < unpack_limit) - do_keep = 0; - else - do_keep = 1; - } - - if (do_keep) { - if (pack_lockfile) - cmd.out = -1; - *av++ = "index-pack"; - *av++ = "--stdin"; - if (!args.quiet && !args.no_progress) - *av++ = "-v"; - if (args.use_thin_pack) - *av++ = "--fix-thin"; - if (args.lock_pack || unpack_limit) { - int s = sprintf(keep_arg, - "--keep=fetch-pack %"PRIuMAX " on ", (uintmax_t) getpid()); - if (gethostname(keep_arg + s, sizeof(keep_arg) - s)) - strcpy(keep_arg + s, "localhost"); - *av++ = keep_arg; - } - } - else { - *av++ = "unpack-objects"; - if (args.quiet || args.no_progress) - *av++ = "-q"; - } - if (*hdr_arg) - *av++ = hdr_arg; - if (fetch_fsck_objects >= 0 - ? fetch_fsck_objects - : transfer_fsck_objects >= 0 - ? transfer_fsck_objects - : 0) - *av++ = "--strict"; - *av++ = NULL; - - cmd.in = demux.out; - cmd.git_cmd = 1; - if (start_command(&cmd)) - die("fetch-pack: unable to fork off %s", argv[0]); - if (do_keep && pack_lockfile) { - *pack_lockfile = index_pack_lockfile(cmd.out); - close(cmd.out); - } - - if (finish_command(&cmd)) - die("%s failed", argv[0]); - if (use_sideband && finish_async(&demux)) - die("error in sideband demultiplexer"); - return 0; -} - -static struct ref *do_fetch_pack(int fd[2], - const struct ref *orig_ref, - struct string_list *sought, - char **pack_lockfile) -{ - struct ref *ref = copy_ref_list(orig_ref); - unsigned char sha1[20]; - const char *agent_feature; - int agent_len; - - sort_ref_list(&ref, ref_compare_name); - - if (is_repository_shallow() && !server_supports("shallow")) - die("Server does not support shallow clients"); - if (server_supports("multi_ack_detailed")) { - if (args.verbose) - fprintf(stderr, "Server supports multi_ack_detailed\n"); - multi_ack = 2; - if (server_supports("no-done")) { - if (args.verbose) - fprintf(stderr, "Server supports no-done\n"); - if (args.stateless_rpc) - no_done = 1; - } - } - else if (server_supports("multi_ack")) { - if (args.verbose) - fprintf(stderr, "Server supports multi_ack\n"); - multi_ack = 1; - } - if (server_supports("side-band-64k")) { - if (args.verbose) - fprintf(stderr, "Server supports side-band-64k\n"); - use_sideband = 2; - } - else if (server_supports("side-band")) { - if (args.verbose) - fprintf(stderr, "Server supports side-band\n"); - use_sideband = 1; - } - if (!server_supports("thin-pack")) - args.use_thin_pack = 0; - if (!server_supports("no-progress")) - args.no_progress = 0; - if (!server_supports("include-tag")) - args.include_tag = 0; - if (server_supports("ofs-delta")) { - if (args.verbose) - fprintf(stderr, "Server supports ofs-delta\n"); - } else - prefer_ofs_delta = 0; - - if ((agent_feature = server_feature_value("agent", &agent_len))) { - agent_supported = 1; - if (args.verbose && agent_len) - fprintf(stderr, "Server version is %.*s\n", - agent_len, agent_feature); - } - - if (everything_local(&ref, sought)) { - packet_flush(fd[1]); - goto all_done; - } - if (find_common(fd, 
sha1, ref) < 0) - if (!args.keep_pack) - /* When cloning, it is not unusual to have - * no common commit. - */ - warning("no common commits"); - - if (args.stateless_rpc) - packet_flush(fd[1]); - if (get_pack(fd, pack_lockfile)) - die("git fetch-pack: fetch failed."); - - all_done: - return ref; -} - -static int fetch_pack_config(const char *var, const char *value, void *cb) -{ - if (strcmp(var, "fetch.unpacklimit") == 0) { - fetch_unpack_limit = git_config_int(var, value); - return 0; - } - - if (strcmp(var, "transfer.unpacklimit") == 0) { - transfer_unpack_limit = git_config_int(var, value); - return 0; - } - - if (strcmp(var, "repack.usedeltabaseoffset") == 0) { - prefer_ofs_delta = git_config_bool(var, value); - return 0; - } - - if (!strcmp(var, "fetch.fsckobjects")) { - fetch_fsck_objects = git_config_bool(var, value); - return 0; - } - - if (!strcmp(var, "transfer.fsckobjects")) { - transfer_fsck_objects = git_config_bool(var, value); - return 0; - } - - return git_default_config(var, value, cb); -} - -static struct lock_file lock; - -static void fetch_pack_setup(void) -{ - static int did_setup; - if (did_setup) - return; - git_config(fetch_pack_config, NULL); - if (0 <= transfer_unpack_limit) - unpack_limit = transfer_unpack_limit; - else if (0 <= fetch_unpack_limit) - unpack_limit = fetch_unpack_limit; - did_setup = 1; -} - int cmd_fetch_pack(int argc, const char **argv, const char *prefix) { int i, ret; @@ -900,9 +17,13 @@ int cmd_fetch_pack(int argc, const char **argv, const char *prefix) char *pack_lockfile = NULL; char **pack_lockfile_ptr = NULL; struct child_process *conn; + struct fetch_pack_args args; packet_trace_identity("fetch-pack"); + memset(&args, 0, sizeof(args)); + args.uploadpack = "git-upload-pack"; + for (i = 1; i < argc && *argv[i] == '-'; i++) { const char *arg = argv[i]; @@ -1038,66 +159,3 @@ int cmd_fetch_pack(int argc, const char **argv, const char *prefix) return ret; } - -struct ref *fetch_pack(struct fetch_pack_args *my_args, - int fd[], struct child_process *conn, - const struct ref *ref, - const char *dest, - struct string_list *sought, - char **pack_lockfile) -{ - struct stat st; - struct ref *ref_cpy; - - fetch_pack_setup(); - if (&args != my_args) - memcpy(&args, my_args, sizeof(args)); - if (args.depth > 0) { - if (stat(git_path("shallow"), &st)) - st.st_mtime = 0; - } - - if (sought->nr) { - sort_string_list(sought); - string_list_remove_duplicates(sought, 0); - } - - if (!ref) { - packet_flush(fd[1]); - die("no matching remote head"); - } - ref_cpy = do_fetch_pack(fd, ref, sought, pack_lockfile); - - if (args.depth > 0) { - struct cache_time mtime; - struct strbuf sb = STRBUF_INIT; - char *shallow = git_path("shallow"); - int fd; - - mtime.sec = st.st_mtime; - mtime.nsec = ST_MTIME_NSEC(st); - if (stat(shallow, &st)) { - if (mtime.sec) - die("shallow file was removed during fetch"); - } else if (st.st_mtime != mtime.sec -#ifdef USE_NSEC - || ST_MTIME_NSEC(st) != mtime.nsec -#endif - ) - die("shallow file was changed during fetch"); - - fd = hold_lock_file_for_update(&lock, shallow, - LOCK_DIE_ON_ERROR); - if (!write_shallow_commits(&sb, 0) - || write_in_full(fd, sb.buf, sb.len) != sb.len) { - unlink_or_warn(shallow); - rollback_lock_file(&lock); - } else { - commit_lock_file(&lock); - } - strbuf_release(&sb); - } - - reprepare_packed_git(); - return ref_cpy; -} diff --git a/builtin/mailinfo.c b/builtin/mailinfo.c index da231400b3..24a772d8e1 100644 --- a/builtin/mailinfo.c +++ b/builtin/mailinfo.c @@ -483,7 +483,8 @@ static void 
convert_to_utf8(struct strbuf *line, const char *charset) if (!charset || !*charset) return; - if (!strcasecmp(metainfo_charset, charset)) + + if (same_encoding(metainfo_charset, charset)) return; out = reencode_string(line->buf, metainfo_charset, charset); if (!out) diff --git a/builtin/merge.c b/builtin/merge.c index 0ec8f0d449..a96e8eac19 100644 --- a/builtin/merge.c +++ b/builtin/merge.c @@ -628,59 +628,6 @@ static void write_tree_trivial(unsigned char *sha1) die(_("git write-tree failed to write a tree")); } -static const char *merge_argument(struct commit *commit) -{ - if (commit) - return sha1_to_hex(commit->object.sha1); - else - return EMPTY_TREE_SHA1_HEX; -} - -int try_merge_command(const char *strategy, size_t xopts_nr, - const char **xopts, struct commit_list *common, - const char *head_arg, struct commit_list *remotes) -{ - const char **args; - int i = 0, x = 0, ret; - struct commit_list *j; - struct strbuf buf = STRBUF_INIT; - - args = xmalloc((4 + xopts_nr + commit_list_count(common) + - commit_list_count(remotes)) * sizeof(char *)); - strbuf_addf(&buf, "merge-%s", strategy); - args[i++] = buf.buf; - for (x = 0; x < xopts_nr; x++) { - char *s = xmalloc(strlen(xopts[x])+2+1); - strcpy(s, "--"); - strcpy(s+2, xopts[x]); - args[i++] = s; - } - for (j = common; j; j = j->next) - args[i++] = xstrdup(merge_argument(j->item)); - args[i++] = "--"; - args[i++] = head_arg; - for (j = remotes; j; j = j->next) - args[i++] = xstrdup(merge_argument(j->item)); - args[i] = NULL; - ret = run_command_v_opt(args, RUN_GIT_CMD); - strbuf_release(&buf); - i = 1; - for (x = 0; x < xopts_nr; x++) - free((void *)args[i++]); - for (j = common; j; j = j->next) - free((void *)args[i++]); - i += 2; - for (j = remotes; j; j = j->next) - free((void *)args[i++]); - free(args); - discard_cache(); - if (read_cache() < 0) - die(_("failed to read the cache")); - resolve_undo_clear(); - - return ret; -} - static int try_merge_strategy(const char *strategy, struct commit_list *common, struct commit_list *remoteheads, struct commit *head, const char *head_arg) @@ -762,56 +709,6 @@ static int count_unmerged_entries(void) return ret; } -int checkout_fast_forward(const unsigned char *head, const unsigned char *remote) -{ - struct tree *trees[MAX_UNPACK_TREES]; - struct unpack_trees_options opts; - struct tree_desc t[MAX_UNPACK_TREES]; - int i, fd, nr_trees = 0; - struct dir_struct dir; - struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file)); - - refresh_cache(REFRESH_QUIET); - - fd = hold_locked_index(lock_file, 1); - - memset(&trees, 0, sizeof(trees)); - memset(&opts, 0, sizeof(opts)); - memset(&t, 0, sizeof(t)); - if (overwrite_ignore) { - memset(&dir, 0, sizeof(dir)); - dir.flags |= DIR_SHOW_IGNORED; - setup_standard_excludes(&dir); - opts.dir = &dir; - } - - opts.head_idx = 1; - opts.src_index = &the_index; - opts.dst_index = &the_index; - opts.update = 1; - opts.verbose_update = 1; - opts.merge = 1; - opts.fn = twoway_merge; - setup_unpack_trees_porcelain(&opts, "merge"); - - trees[nr_trees] = parse_tree_indirect(head); - if (!trees[nr_trees++]) - return -1; - trees[nr_trees] = parse_tree_indirect(remote); - if (!trees[nr_trees++]) - return -1; - for (i = 0; i < nr_trees; i++) { - parse_tree(trees[i]); - init_tree_desc(t+i, trees[i]->buffer, trees[i]->size); - } - if (unpack_trees(nr_trees, t, &opts)) - return -1; - if (write_cache(fd, active_cache, active_nr) || - commit_locked_index(lock_file)) - die(_("unable to write new index file")); - return 0; -} - static void split_merge_strategies(const 
char *string, struct strategy **list, int *nr, int *alloc) { @@ -1424,7 +1321,8 @@ int cmd_merge(int argc, const char **argv, const char *prefix) } if (checkout_fast_forward(head_commit->object.sha1, - commit->object.sha1)) { + commit->object.sha1, + overwrite_ignore)) { ret = 1; goto done; } diff --git a/builtin/rev-list.c b/builtin/rev-list.c index ff5a38372d..67701be551 100644 --- a/builtin/rev-list.c +++ b/builtin/rev-list.c @@ -201,55 +201,6 @@ static void show_edge(struct commit *commit) printf("-%s\n", sha1_to_hex(commit->object.sha1)); } -static inline int log2i(int n) -{ - int log2 = 0; - - for (; n > 1; n >>= 1) - log2++; - - return log2; -} - -static inline int exp2i(int n) -{ - return 1 << n; -} - -/* - * Estimate the number of bisect steps left (after the current step) - * - * For any x between 0 included and 2^n excluded, the probability for - * n - 1 steps left looks like: - * - * P(2^n + x) == (2^n - x) / (2^n + x) - * - * and P(2^n + x) < 0.5 means 2^n < 3x - */ -int estimate_bisect_steps(int all) -{ - int n, x, e; - - if (all < 3) - return 0; - - n = log2i(all); - e = exp2i(n); - x = all - e; - - return (e < 3 * x) ? n : n - 1; -} - -void print_commit_list(struct commit_list *list, - const char *format_cur, - const char *format_last) -{ - for ( ; list; list = list->next) { - const char *format = list->next ? format_cur : format_last; - printf(format, sha1_to_hex(list->item->object.sha1)); - } -} - static void print_var_str(const char *var, const char *val) { printf("%s='%s'\n", var, val); diff --git a/builtin/send-pack.c b/builtin/send-pack.c index 7d05064218..d34201372d 100644 --- a/builtin/send-pack.c +++ b/builtin/send-pack.c @@ -16,164 +16,6 @@ static const char send_pack_usage[] = static struct send_pack_args args; -static int feed_object(const unsigned char *sha1, int fd, int negative) -{ - char buf[42]; - - if (negative && !has_sha1_file(sha1)) - return 1; - - memcpy(buf + negative, sha1_to_hex(sha1), 40); - if (negative) - buf[0] = '^'; - buf[40 + negative] = '\n'; - return write_or_whine(fd, buf, 41 + negative, "send-pack: send refs"); -} - -/* - * Make a pack stream and spit it out into file descriptor fd - */ -static int pack_objects(int fd, struct ref *refs, struct extra_have_objects *extra, struct send_pack_args *args) -{ - /* - * The child becomes pack-objects --revs; we feed - * the revision parameters to it via its stdin and - * let its stdout go back to the other end. - */ - const char *argv[] = { - "pack-objects", - "--all-progress-implied", - "--revs", - "--stdout", - NULL, - NULL, - NULL, - NULL, - NULL, - }; - struct child_process po; - int i; - - i = 4; - if (args->use_thin_pack) - argv[i++] = "--thin"; - if (args->use_ofs_delta) - argv[i++] = "--delta-base-offset"; - if (args->quiet || !args->progress) - argv[i++] = "-q"; - if (args->progress) - argv[i++] = "--progress"; - memset(&po, 0, sizeof(po)); - po.argv = argv; - po.in = -1; - po.out = args->stateless_rpc ? -1 : fd; - po.git_cmd = 1; - if (start_command(&po)) - die_errno("git pack-objects failed"); - - /* - * We feed the pack-objects we just spawned with revision - * parameters by writing to the pipe. 
- */ - for (i = 0; i < extra->nr; i++) - if (!feed_object(extra->array[i], po.in, 1)) - break; - - while (refs) { - if (!is_null_sha1(refs->old_sha1) && - !feed_object(refs->old_sha1, po.in, 1)) - break; - if (!is_null_sha1(refs->new_sha1) && - !feed_object(refs->new_sha1, po.in, 0)) - break; - refs = refs->next; - } - - close(po.in); - - if (args->stateless_rpc) { - char *buf = xmalloc(LARGE_PACKET_MAX); - while (1) { - ssize_t n = xread(po.out, buf, LARGE_PACKET_MAX); - if (n <= 0) - break; - send_sideband(fd, -1, buf, n, LARGE_PACKET_MAX); - } - free(buf); - close(po.out); - po.out = -1; - } - - if (finish_command(&po)) - return -1; - return 0; -} - -static int receive_status(int in, struct ref *refs) -{ - struct ref *hint; - char line[1000]; - int ret = 0; - int len = packet_read_line(in, line, sizeof(line)); - if (len < 10 || memcmp(line, "unpack ", 7)) - return error("did not receive remote status"); - if (memcmp(line, "unpack ok\n", 10)) { - char *p = line + strlen(line) - 1; - if (*p == '\n') - *p = '\0'; - error("unpack failed: %s", line + 7); - ret = -1; - } - hint = NULL; - while (1) { - char *refname; - char *msg; - len = packet_read_line(in, line, sizeof(line)); - if (!len) - break; - if (len < 3 || - (memcmp(line, "ok ", 3) && memcmp(line, "ng ", 3))) { - fprintf(stderr, "protocol error: %s\n", line); - ret = -1; - break; - } - - line[strlen(line)-1] = '\0'; - refname = line + 3; - msg = strchr(refname, ' '); - if (msg) - *msg++ = '\0'; - - /* first try searching at our hint, falling back to all refs */ - if (hint) - hint = find_ref_by_name(hint, refname); - if (!hint) - hint = find_ref_by_name(refs, refname); - if (!hint) { - warning("remote reported status on unknown ref: %s", - refname); - continue; - } - if (hint->status != REF_STATUS_EXPECTING_REPORT) { - warning("remote reported status on unexpected ref: %s", - refname); - continue; - } - - if (line[0] == 'o' && line[1] == 'k') - hint->status = REF_STATUS_OK; - else { - hint->status = REF_STATUS_REMOTE_REJECT; - ret = -1; - } - if (msg) - hint->remote_status = xstrdup(msg); - /* start our next search from the next ref */ - hint = hint->next; - } - return ret; -} - static void print_helper_status(struct ref *ref) { struct strbuf buf = STRBUF_INIT; @@ -227,181 +69,6 @@ static void print_helper_status(struct ref *ref) strbuf_release(&buf); } -static int sideband_demux(int in, int out, void *data) -{ - int *fd = data, ret; -#ifdef NO_PTHREADS - close(fd[1]); -#endif - ret = recv_sideband("send-pack", fd[0], out); - close(out); - return ret; -} - -int send_pack(struct send_pack_args *args, - int fd[], struct child_process *conn, - struct ref *remote_refs, - struct extra_have_objects *extra_have) -{ - int in = fd[0]; - int out = fd[1]; - struct strbuf req_buf = STRBUF_INIT; - struct ref *ref; - int new_refs; - int allow_deleting_refs = 0; - int status_report = 0; - int use_sideband = 0; - int quiet_supported = 0; - int agent_supported = 0; - unsigned cmds_sent = 0; - int ret; - struct async demux; - - /* Does the other end support the reporting? 
*/ - if (server_supports("report-status")) - status_report = 1; - if (server_supports("delete-refs")) - allow_deleting_refs = 1; - if (server_supports("ofs-delta")) - args->use_ofs_delta = 1; - if (server_supports("side-band-64k")) - use_sideband = 1; - if (server_supports("quiet")) - quiet_supported = 1; - if (server_supports("agent")) - agent_supported = 1; - - if (!remote_refs) { - fprintf(stderr, "No refs in common and none specified; doing nothing.\n" - "Perhaps you should specify a branch such as 'master'.\n"); - return 0; - } - - /* - * Finally, tell the other end! - */ - new_refs = 0; - for (ref = remote_refs; ref; ref = ref->next) { - if (!ref->peer_ref && !args->send_mirror) - continue; - - /* Check for statuses set by set_ref_status_for_push() */ - switch (ref->status) { - case REF_STATUS_REJECT_NONFASTFORWARD: - case REF_STATUS_UPTODATE: - continue; - default: - ; /* do nothing */ - } - - if (ref->deletion && !allow_deleting_refs) { - ref->status = REF_STATUS_REJECT_NODELETE; - continue; - } - - if (!ref->deletion) - new_refs++; - - if (args->dry_run) { - ref->status = REF_STATUS_OK; - } else { - char *old_hex = sha1_to_hex(ref->old_sha1); - char *new_hex = sha1_to_hex(ref->new_sha1); - int quiet = quiet_supported && (args->quiet || !args->progress); - - if (!cmds_sent && (status_report || use_sideband || - quiet || agent_supported)) { - packet_buf_write(&req_buf, - "%s %s %s%c%s%s%s%s%s", - old_hex, new_hex, ref->name, 0, - status_report ? " report-status" : "", - use_sideband ? " side-band-64k" : "", - quiet ? " quiet" : "", - agent_supported ? " agent=" : "", - agent_supported ? git_user_agent_sanitized() : "" - ); - } - else - packet_buf_write(&req_buf, "%s %s %s", - old_hex, new_hex, ref->name); - ref->status = status_report ? - REF_STATUS_EXPECTING_REPORT : - REF_STATUS_OK; - cmds_sent++; - } - } - - if (args->stateless_rpc) { - if (!args->dry_run && cmds_sent) { - packet_buf_flush(&req_buf); - send_sideband(out, -1, req_buf.buf, req_buf.len, LARGE_PACKET_MAX); - } - } else { - safe_write(out, req_buf.buf, req_buf.len); - packet_flush(out); - } - strbuf_release(&req_buf); - - if (use_sideband && cmds_sent) { - memset(&demux, 0, sizeof(demux)); - demux.proc = sideband_demux; - demux.data = fd; - demux.out = -1; - if (start_async(&demux)) - die("send-pack: unable to fork off sideband demultiplexer"); - in = demux.out; - } - - if (new_refs && cmds_sent) { - if (pack_objects(out, remote_refs, extra_have, args) < 0) { - for (ref = remote_refs; ref; ref = ref->next) - ref->status = REF_STATUS_NONE; - if (args->stateless_rpc) - close(out); - if (git_connection_is_socket(conn)) - shutdown(fd[0], SHUT_WR); - if (use_sideband) - finish_async(&demux); - return -1; - } - } - if (args->stateless_rpc && cmds_sent) - packet_flush(out); - - if (status_report && cmds_sent) - ret = receive_status(in, remote_refs); - else - ret = 0; - if (args->stateless_rpc) - packet_flush(out); - - if (use_sideband && cmds_sent) { - if (finish_async(&demux)) { - error("error in sideband demultiplexer"); - ret = -1; - } - close(demux.out); - } - - if (ret < 0) - return ret; - - if (args->porcelain) - return 0; - - for (ref = remote_refs; ref; ref = ref->next) { - switch (ref->status) { - case REF_STATUS_NONE: - case REF_STATUS_UPTODATE: - case REF_STATUS_OK: - break; - default: - return -1; - } - } - return 0; -} - int cmd_send_pack(int argc, const char **argv, const char *prefix) { int i, nr_refspecs = 0; diff --git a/builtin/symbolic-ref.c b/builtin/symbolic-ref.c index 9e92828b3a..f481959421 100644 --- 
a/builtin/symbolic-ref.c +++ b/builtin/symbolic-ref.c @@ -5,12 +5,11 @@ static const char * const git_symbolic_ref_usage[] = { N_("git symbolic-ref [options] name [ref]"), + N_("git symbolic-ref -d [-q] name"), NULL }; -static int shorten; - -static void check_symref(const char *HEAD, int quiet) +static int check_symref(const char *HEAD, int quiet, int shorten, int print) { unsigned char sha1[20]; int flag; @@ -22,20 +21,24 @@ static void check_symref(const char *HEAD, int quiet) if (!quiet) die("ref %s is not a symbolic ref", HEAD); else - exit(1); + return 1; + } + if (print) { + if (shorten) + refname = shorten_unambiguous_ref(refname, 0); + puts(refname); } - if (shorten) - refname = shorten_unambiguous_ref(refname, 0); - puts(refname); + return 0; } int cmd_symbolic_ref(int argc, const char **argv, const char *prefix) { - int quiet = 0; + int quiet = 0, delete = 0, shorten = 0, ret = 0; const char *msg = NULL; struct option options[] = { OPT__QUIET(&quiet, N_("suppress error message for non-symbolic (detached) refs")), + OPT_BOOL('d', "delete", &delete, N_("delete symbolic ref")), OPT_BOOL(0, "short", &shorten, N_("shorten ref output")), OPT_STRING('m', NULL, &msg, N_("reason"), N_("reason of the update")), OPT_END(), @@ -46,9 +49,19 @@ int cmd_symbolic_ref(int argc, const char **argv, const char *prefix) git_symbolic_ref_usage, 0); if (msg &&!*msg) die("Refusing to perform update with empty message"); + + if (delete) { + if (argc != 1) + usage_with_options(git_symbolic_ref_usage, options); + ret = check_symref(argv[0], 1, 0, 0); + if (ret) + die("Cannot delete %s, not a symbolic ref", argv[0]); + return delete_ref(argv[0], NULL, REF_NODEREF); + } + switch (argc) { case 1: - check_symref(argv[0], quiet); + ret = check_symref(argv[0], quiet, shorten, 1); break; case 2: if (!strcmp(argv[0], "HEAD") && @@ -59,5 +72,5 @@ int cmd_symbolic_ref(int argc, const char **argv, const char *prefix) default: usage_with_options(git_symbolic_ref_usage, options); } - return 0; + return ret; } @@ -1183,6 +1183,7 @@ extern int pager_in_use(void); extern int pager_use_color; extern int term_columns(void); extern int decimal_width(int); +extern int check_pager_config(const char *cmd); extern const char *editor_program; extern const char *askpass_program; @@ -1265,8 +1266,15 @@ struct startup_info { }; extern struct startup_info *startup_info; -/* builtin/merge.c */ -int checkout_fast_forward(const unsigned char *from, const unsigned char *to); +/* merge.c */ +struct commit_list; +int try_merge_command(const char *strategy, size_t xopts_nr, + const char **xopts, struct commit_list *common, + const char *head_arg, struct commit_list *remotes); +int checkout_fast_forward(const unsigned char *from, + const unsigned char *to, + int overwrite_ignore); + int sane_execvp(const char *file, char *const argv[]); @@ -1347,3 +1347,13 @@ struct commit_list **commit_list_append(struct commit *commit, new->next = NULL; return &new->next; } + +void print_commit_list(struct commit_list *list, + const char *format_cur, + const char *format_last) +{ + for ( ; list; list = list->next) { + const char *format = list->next ? 
format_cur : format_last; + printf(format, sha1_to_hex(list->item->object.sha1)); + } +} @@ -222,4 +222,8 @@ struct commit *get_merge_parent(const char *name); extern int parse_signed_commit(const unsigned char *sha1, struct strbuf *message, struct strbuf *signature); +extern void print_commit_list(struct commit_list *list, + const char *format_cur, + const char *format_last); + #endif /* COMMIT_H */ diff --git a/configure.ac b/configure.ac index c85888c851..ad215cc4a1 100644 --- a/configure.ac +++ b/configure.ac @@ -1024,7 +1024,7 @@ elif test -z "$PTHREAD_CFLAGS"; then for opt in -mt -pthread -lpthread; do old_CFLAGS="$CFLAGS" CFLAGS="$opt $CFLAGS" - AC_MSG_CHECKING([Checking for POSIX Threads with '$opt']) + AC_MSG_CHECKING([for POSIX Threads with '$opt']) AC_LINK_IFELSE([PTHREADTEST_SRC], [AC_MSG_RESULT([yes]) NO_PTHREADS= @@ -1044,7 +1044,7 @@ elif test -z "$PTHREAD_CFLAGS"; then else old_CFLAGS="$CFLAGS" CFLAGS="$PTHREAD_CFLAGS $CFLAGS" - AC_MSG_CHECKING([Checking for POSIX Threads with '$PTHREAD_CFLAGS']) + AC_MSG_CHECKING([for POSIX Threads with '$PTHREAD_CFLAGS']) AC_LINK_IFELSE([PTHREADTEST_SRC], [AC_MSG_RESULT([yes]) NO_PTHREADS= diff --git a/contrib/completion/git-completion.bash b/contrib/completion/git-completion.bash index be800e09bd..bc0657a224 100644 --- a/contrib/completion/git-completion.bash +++ b/contrib/completion/git-completion.bash @@ -1116,6 +1116,14 @@ _git_fetch () __git_complete_remote_or_refspec } +__git_format_patch_options=" + --stdout --attach --no-attach --thread --thread= --output-directory + --numbered --start-number --numbered-files --keep-subject --signoff + --signature --no-signature --in-reply-to= --cc= --full-index --binary + --not --all --cover-letter --no-prefix --src-prefix= --dst-prefix= + --inline --suffix= --ignore-if-in-upstream --subject-prefix= +" + _git_format_patch () { case "$cur" in @@ -1126,21 +1134,7 @@ _git_format_patch () return ;; --*) - __gitcomp " - --stdout --attach --no-attach --thread --thread= - --output-directory - --numbered --start-number - --numbered-files - --keep-subject - --signoff --signature --no-signature - --in-reply-to= --cc= - --full-index --binary - --not --all - --cover-letter - --no-prefix --src-prefix= --dst-prefix= - --inline --suffix= --ignore-if-in-upstream - --subject-prefix= - " + __gitcomp "$__git_format_patch_options" return ;; esac @@ -1554,6 +1548,12 @@ _git_send_email () __gitcomp "ssl tls" "" "${cur##--smtp-encryption=}" return ;; + --thread=*) + __gitcomp " + deep shallow + " "" "${cur##--thread=}" + return + ;; --*) __gitcomp "--annotate --bcc --cc --cc-cmd --chain-reply-to --compose --confirm= --dry-run --envelope-sender @@ -1563,11 +1563,12 @@ _git_send_email () --signed-off-by-cc --smtp-pass --smtp-server --smtp-server-port --smtp-encryption= --smtp-user --subject --suppress-cc= --suppress-from --thread --to - --validate --no-validate" + --validate --no-validate + $__git_format_patch_options" return ;; esac - COMPREPLY=() + __git_complete_revlist } _git_stage () @@ -15,6 +15,7 @@ #include "sigchain.h" #include "submodule.h" #include "ll-merge.h" +#include "string-list.h" #ifdef NO_FAST_WORKING_DIRECTORY #define FAST_WORKING_DIRECTORY 0 @@ -69,26 +70,30 @@ static int parse_diff_color_slot(const char *var, int ofs) return -1; } -static int parse_dirstat_params(struct diff_options *options, const char *params, +static int parse_dirstat_params(struct diff_options *options, const char *params_string, struct strbuf *errmsg) { - const char *p = params; - int p_len, ret = 0; + char *params_copy = 
xstrdup(params_string); + struct string_list params = STRING_LIST_INIT_NODUP; + int ret = 0; + int i; - while (*p) { - p_len = strchrnul(p, ',') - p; - if (!memcmp(p, "changes", p_len)) { + if (*params_copy) + string_list_split_in_place(¶ms, params_copy, ',', -1); + for (i = 0; i < params.nr; i++) { + const char *p = params.items[i].string; + if (!strcmp(p, "changes")) { DIFF_OPT_CLR(options, DIRSTAT_BY_LINE); DIFF_OPT_CLR(options, DIRSTAT_BY_FILE); - } else if (!memcmp(p, "lines", p_len)) { + } else if (!strcmp(p, "lines")) { DIFF_OPT_SET(options, DIRSTAT_BY_LINE); DIFF_OPT_CLR(options, DIRSTAT_BY_FILE); - } else if (!memcmp(p, "files", p_len)) { + } else if (!strcmp(p, "files")) { DIFF_OPT_CLR(options, DIRSTAT_BY_LINE); DIFF_OPT_SET(options, DIRSTAT_BY_FILE); - } else if (!memcmp(p, "noncumulative", p_len)) { + } else if (!strcmp(p, "noncumulative")) { DIFF_OPT_CLR(options, DIRSTAT_CUMULATIVE); - } else if (!memcmp(p, "cumulative", p_len)) { + } else if (!strcmp(p, "cumulative")) { DIFF_OPT_SET(options, DIRSTAT_CUMULATIVE); } else if (isdigit(*p)) { char *end; @@ -100,24 +105,21 @@ static int parse_dirstat_params(struct diff_options *options, const char *params while (isdigit(*++end)) ; /* nothing */ } - if (end - p == p_len) + if (!*end) options->dirstat_permille = permille; else { - strbuf_addf(errmsg, _(" Failed to parse dirstat cut-off percentage '%.*s'\n"), - p_len, p); + strbuf_addf(errmsg, _(" Failed to parse dirstat cut-off percentage '%s'\n"), + p); ret++; } } else { - strbuf_addf(errmsg, _(" Unknown dirstat parameter '%.*s'\n"), - p_len, p); + strbuf_addf(errmsg, _(" Unknown dirstat parameter '%s'\n"), p); ret++; } - p += p_len; - - if (*p) - p++; /* more parameters, swallow separator */ } + string_list_clear(¶ms, 0); + free(params_copy); return ret; } @@ -4878,3 +4880,19 @@ size_t fill_textconv(struct userdiff_driver *driver, return size; } + +void setup_diff_pager(struct diff_options *opt) +{ + /* + * If the user asked for our exit code, then either they want --quiet + * or --exit-code. We should definitely not bother with a pager in the + * former case, as we will generate no output. Since we still properly + * report our exit code even when a pager is run, we _could_ run a + * pager with --exit-code. But since we have not done so historically, + * and because it is easy to find people oneline advising "git diff + * --exit-code" in hooks and other scripts, we do not do so. + */ + if (!DIFF_OPT_TST(opt, EXIT_WITH_STATUS) && + check_pager_config("diff") != 0) + setup_pager(); +} @@ -335,5 +335,6 @@ extern int parse_rename_score(const char **cp_p); extern int print_stat_summary(FILE *fp, int files, int insertions, int deletions); +extern void setup_diff_pager(struct diff_options *); #endif /* DIFF_H */ diff --git a/diffcore-pickaxe.c b/diffcore-pickaxe.c index ed23eb4bdd..a209376354 100644 --- a/diffcore-pickaxe.c +++ b/diffcore-pickaxe.c @@ -104,10 +104,10 @@ static int diff_grep(struct diff_filepair *p, struct diff_options *o, if (!mf2.ptr) return 0; /* ignore unmerged */ /* created "two" -- does it have what we are looking for? */ - hit = !regexec(regexp, p->two->data, 1, ®match, 0); + hit = !regexec(regexp, mf2.ptr, 1, ®match, 0); } else if (!mf2.ptr) { /* removed "one" -- did it have what we are looking for? 
*/ - hit = !regexec(regexp, p->one->data, 1, ®match, 0); + hit = !regexec(regexp, mf1.ptr, 1, ®match, 0); } else { /* * We have both sides; need to run textual diff and see if @@ -308,42 +308,69 @@ static int no_wildcard(const char *string) return string[simple_length(string)] == '\0'; } +void parse_exclude_pattern(const char **pattern, + int *patternlen, + int *flags, + int *nowildcardlen) +{ + const char *p = *pattern; + size_t i, len; + + *flags = 0; + if (*p == '!') { + *flags |= EXC_FLAG_NEGATIVE; + p++; + } + len = strlen(p); + if (len && p[len - 1] == '/') { + len--; + *flags |= EXC_FLAG_MUSTBEDIR; + } + for (i = 0; i < len; i++) { + if (p[i] == '/') + break; + } + if (i == len) + *flags |= EXC_FLAG_NODIR; + *nowildcardlen = simple_length(p); + /* + * we should have excluded the trailing slash from 'p' too, + * but that's one more allocation. Instead just make sure + * nowildcardlen does not exceed real patternlen + */ + if (*nowildcardlen > len) + *nowildcardlen = len; + if (*p == '*' && no_wildcard(p + 1)) + *flags |= EXC_FLAG_ENDSWITH; + *pattern = p; + *patternlen = len; +} + void add_exclude(const char *string, const char *base, int baselen, struct exclude_list *which) { struct exclude *x; - size_t len; - int to_exclude = 1; - int flags = 0; + int patternlen; + int flags; + int nowildcardlen; - if (*string == '!') { - to_exclude = 0; - string++; - } - len = strlen(string); - if (len && string[len - 1] == '/') { + parse_exclude_pattern(&string, &patternlen, &flags, &nowildcardlen); + if (flags & EXC_FLAG_MUSTBEDIR) { char *s; - x = xmalloc(sizeof(*x) + len); + x = xmalloc(sizeof(*x) + patternlen + 1); s = (char *)(x+1); - memcpy(s, string, len - 1); - s[len - 1] = '\0'; - string = s; + memcpy(s, string, patternlen); + s[patternlen] = '\0'; x->pattern = s; - flags = EXC_FLAG_MUSTBEDIR; } else { x = xmalloc(sizeof(*x)); x->pattern = string; } - x->to_exclude = to_exclude; - x->patternlen = strlen(string); + x->patternlen = patternlen; + x->nowildcardlen = nowildcardlen; x->base = base; x->baselen = baselen; x->flags = flags; - if (!strchr(string, '/')) - x->flags |= EXC_FLAG_NODIR; - x->nowildcardlen = simple_length(string); - if (*string == '*' && no_wildcard(string+1)) - x->flags |= EXC_FLAG_ENDSWITH; ALLOC_GROW(which->excludes, which->nr + 1, which->alloc); which->excludes[which->nr++] = x; } @@ -505,6 +532,72 @@ static void prep_exclude(struct dir_struct *dir, const char *base, int baselen) dir->basebuf[baselen] = '\0'; } +int match_basename(const char *basename, int basenamelen, + const char *pattern, int prefix, int patternlen, + int flags) +{ + if (prefix == patternlen) { + if (!strcmp_icase(pattern, basename)) + return 1; + } else if (flags & EXC_FLAG_ENDSWITH) { + if (patternlen - 1 <= basenamelen && + !strcmp_icase(pattern + 1, + basename + basenamelen - patternlen + 1)) + return 1; + } else { + if (fnmatch_icase(pattern, basename, 0) == 0) + return 1; + } + return 0; +} + +int match_pathname(const char *pathname, int pathlen, + const char *base, int baselen, + const char *pattern, int prefix, int patternlen, + int flags) +{ + const char *name; + int namelen; + + /* + * match with FNM_PATHNAME; the pattern has base implicitly + * in front of it. + */ + if (*pattern == '/') { + pattern++; + prefix--; + } + + /* + * baselen does not count the trailing slash. base[] may or + * may not end with a trailing slash though. + */ + if (pathlen < baselen + 1 || + (baselen && pathname[baselen] != '/') || + strncmp_icase(pathname, base, baselen)) + return 0; + + namelen = baselen ? 
pathlen - baselen - 1 : pathlen; + name = pathname + pathlen - namelen; + + if (prefix) { + /* + * if the non-wildcard part is longer than the + * remaining pathname, surely it cannot match. + */ + if (prefix > namelen) + return 0; + + if (strncmp_icase(pattern, name, prefix)) + return 0; + pattern += prefix; + name += prefix; + namelen -= prefix; + } + + return fnmatch_icase(pattern, name, FNM_PATHNAME) == 0; +} + /* Scan the list and let the last match determine the fate. * Return 1 for exclude, 0 for include and -1 for undecided. */ @@ -519,9 +612,9 @@ int excluded_from_list(const char *pathname, for (i = el->nr - 1; 0 <= i; i--) { struct exclude *x = el->excludes[i]; - const char *name, *exclude = x->pattern; - int to_exclude = x->to_exclude; - int namelen, prefix = x->nowildcardlen; + const char *exclude = x->pattern; + int to_exclude = x->flags & EXC_FLAG_NEGATIVE ? 0 : 1; + int prefix = x->nowildcardlen; if (x->flags & EXC_FLAG_MUSTBEDIR) { if (*dtype == DT_UNKNOWN) @@ -531,51 +624,18 @@ int excluded_from_list(const char *pathname, } if (x->flags & EXC_FLAG_NODIR) { - /* match basename */ - if (prefix == x->patternlen) { - if (!strcmp_icase(exclude, basename)) - return to_exclude; - } else if (x->flags & EXC_FLAG_ENDSWITH) { - if (x->patternlen - 1 <= pathlen && - !strcmp_icase(exclude + 1, pathname + pathlen - x->patternlen + 1)) - return to_exclude; - } else { - if (fnmatch_icase(exclude, basename, 0) == 0) - return to_exclude; - } - continue; - } - - /* match with FNM_PATHNAME: - * exclude has base (baselen long) implicitly in front of it. - */ - if (*exclude == '/') { - exclude++; - prefix--; - } - - if (pathlen < x->baselen || - (x->baselen && pathname[x->baselen-1] != '/') || - strncmp_icase(pathname, x->base, x->baselen)) + if (match_basename(basename, + pathlen - (basename - pathname), + exclude, prefix, x->patternlen, + x->flags)) + return to_exclude; continue; - - namelen = x->baselen ? pathlen - x->baselen : pathlen; - name = pathname + pathlen - namelen; - - /* if the non-wildcard part is longer than the - remaining pathname, surely it cannot match */ - if (prefix > namelen) - continue; - - if (prefix) { - if (strncmp_icase(exclude, name, prefix)) - continue; - exclude += prefix; - name += prefix; - namelen -= prefix; } - if (!namelen || !fnmatch_icase(exclude, name, FNM_PATHNAME)) + assert(x->baselen == 0 || x->base[x->baselen - 1] == '/'); + if (match_pathname(pathname, pathlen, + x->base, x->baselen ? x->baselen - 1 : 0, + exclude, prefix, x->patternlen, x->flags)) return to_exclude; } return -1; /* undecided */ @@ -11,6 +11,7 @@ struct dir_entry { #define EXC_FLAG_NODIR 1 #define EXC_FLAG_ENDSWITH 4 #define EXC_FLAG_MUSTBEDIR 8 +#define EXC_FLAG_NEGATIVE 16 struct exclude_list { int nr; @@ -21,7 +22,6 @@ struct exclude_list { int nowildcardlen; const char *base; int baselen; - int to_exclude; int flags; } **excludes; }; @@ -81,6 +81,16 @@ extern int excluded_from_list(const char *pathname, int pathlen, const char *bas struct dir_entry *dir_add_ignored(struct dir_struct *dir, const char *pathname, int len); /* + * these implement the matching logic for dir.c:excluded_from_list and + * attr.c:path_matches() + */ +extern int match_basename(const char *, int, + const char *, int, int, int); +extern int match_pathname(const char *, int, + const char *, int, + const char *, int, int, int); + +/* * The excluded() API is meant for callers that check each level of leading * directory hierarchies with excluded() to avoid recursing into excluded * directories. 
Callers that do not do so should use this API instead. @@ -97,6 +107,7 @@ extern int path_excluded(struct path_exclude_check *, const char *, int namelen, extern int add_excludes_from_file_to_list(const char *fname, const char *base, int baselen, char **buf_p, struct exclude_list *which, int check_index); extern void add_excludes_from_file(struct dir_struct *, const char *fname); +extern void parse_exclude_pattern(const char **string, int *patternlen, int *flags, int *nowildcardlen); extern void add_exclude(const char *string, const char *base, int baselen, struct exclude_list *which); extern void free_excludes(struct exclude_list *el); diff --git a/fetch-pack.c b/fetch-pack.c new file mode 100644 index 0000000000..099ff4ddff --- /dev/null +++ b/fetch-pack.c @@ -0,0 +1,951 @@ +#include "cache.h" +#include "refs.h" +#include "pkt-line.h" +#include "commit.h" +#include "tag.h" +#include "exec_cmd.h" +#include "pack.h" +#include "sideband.h" +#include "fetch-pack.h" +#include "remote.h" +#include "run-command.h" +#include "transport.h" +#include "version.h" + +static int transfer_unpack_limit = -1; +static int fetch_unpack_limit = -1; +static int unpack_limit = 100; +static int prefer_ofs_delta = 1; +static int no_done; +static int fetch_fsck_objects = -1; +static int transfer_fsck_objects = -1; +static int agent_supported; + +#define COMPLETE (1U << 0) +#define COMMON (1U << 1) +#define COMMON_REF (1U << 2) +#define SEEN (1U << 3) +#define POPPED (1U << 4) + +static int marked; + +/* + * After sending this many "have"s if we do not get any new ACK , we + * give up traversing our history. + */ +#define MAX_IN_VAIN 256 + +static struct commit_list *rev_list; +static int non_common_revs, multi_ack, use_sideband; + +static void rev_list_push(struct commit *commit, int mark) +{ + if (!(commit->object.flags & mark)) { + commit->object.flags |= mark; + + if (!(commit->object.parsed)) + if (parse_commit(commit)) + return; + + commit_list_insert_by_date(commit, &rev_list); + + if (!(commit->object.flags & COMMON)) + non_common_revs++; + } +} + +static int rev_list_insert_ref(const char *refname, const unsigned char *sha1, int flag, void *cb_data) +{ + struct object *o = deref_tag(parse_object(sha1), refname, 0); + + if (o && o->type == OBJ_COMMIT) + rev_list_push((struct commit *)o, SEEN); + + return 0; +} + +static int clear_marks(const char *refname, const unsigned char *sha1, int flag, void *cb_data) +{ + struct object *o = deref_tag(parse_object(sha1), refname, 0); + + if (o && o->type == OBJ_COMMIT) + clear_commit_marks((struct commit *)o, + COMMON | COMMON_REF | SEEN | POPPED); + return 0; +} + +/* + This function marks a rev and its ancestors as common. + In some cases, it is desirable to mark only the ancestors (for example + when only the server does not yet know that they are common). +*/ + +static void mark_common(struct commit *commit, + int ancestors_only, int dont_parse) +{ + if (commit != NULL && !(commit->object.flags & COMMON)) { + struct object *o = (struct object *)commit; + + if (!ancestors_only) + o->flags |= COMMON; + + if (!(o->flags & SEEN)) + rev_list_push(commit, SEEN); + else { + struct commit_list *parents; + + if (!ancestors_only && !(o->flags & POPPED)) + non_common_revs--; + if (!o->parsed && !dont_parse) + if (parse_commit(commit)) + return; + + for (parents = commit->parents; + parents; + parents = parents->next) + mark_common(parents->item, 0, dont_parse); + } + } +} + +/* + Get the next rev to send, ignoring the common. 
+*/ + +static const unsigned char *get_rev(void) +{ + struct commit *commit = NULL; + + while (commit == NULL) { + unsigned int mark; + struct commit_list *parents; + + if (rev_list == NULL || non_common_revs == 0) + return NULL; + + commit = rev_list->item; + if (!commit->object.parsed) + parse_commit(commit); + parents = commit->parents; + + commit->object.flags |= POPPED; + if (!(commit->object.flags & COMMON)) + non_common_revs--; + + if (commit->object.flags & COMMON) { + /* do not send "have", and ignore ancestors */ + commit = NULL; + mark = COMMON | SEEN; + } else if (commit->object.flags & COMMON_REF) + /* send "have", and ignore ancestors */ + mark = COMMON | SEEN; + else + /* send "have", also for its ancestors */ + mark = SEEN; + + while (parents) { + if (!(parents->item->object.flags & SEEN)) + rev_list_push(parents->item, mark); + if (mark & COMMON) + mark_common(parents->item, 1, 0); + parents = parents->next; + } + + rev_list = rev_list->next; + } + + return commit->object.sha1; +} + +enum ack_type { + NAK = 0, + ACK, + ACK_continue, + ACK_common, + ACK_ready +}; + +static void consume_shallow_list(struct fetch_pack_args *args, int fd) +{ + if (args->stateless_rpc && args->depth > 0) { + /* If we sent a depth we will get back "duplicate" + * shallow and unshallow commands every time there + * is a block of have lines exchanged. + */ + char line[1000]; + while (packet_read_line(fd, line, sizeof(line))) { + if (!prefixcmp(line, "shallow ")) + continue; + if (!prefixcmp(line, "unshallow ")) + continue; + die("git fetch-pack: expected shallow list"); + } + } +} + +struct write_shallow_data { + struct strbuf *out; + int use_pack_protocol; + int count; +}; + +static int write_one_shallow(const struct commit_graft *graft, void *cb_data) +{ + struct write_shallow_data *data = cb_data; + const char *hex = sha1_to_hex(graft->sha1); + data->count++; + if (data->use_pack_protocol) + packet_buf_write(data->out, "shallow %s", hex); + else { + strbuf_addstr(data->out, hex); + strbuf_addch(data->out, '\n'); + } + return 0; +} + +static int write_shallow_commits(struct strbuf *out, int use_pack_protocol) +{ + struct write_shallow_data data; + data.out = out; + data.use_pack_protocol = use_pack_protocol; + data.count = 0; + for_each_commit_graft(write_one_shallow, &data); + return data.count; +} + +static enum ack_type get_ack(int fd, unsigned char *result_sha1) +{ + static char line[1000]; + int len = packet_read_line(fd, line, sizeof(line)); + + if (!len) + die("git fetch-pack: expected ACK/NAK, got EOF"); + if (line[len-1] == '\n') + line[--len] = 0; + if (!strcmp(line, "NAK")) + return NAK; + if (!prefixcmp(line, "ACK ")) { + if (!get_sha1_hex(line+4, result_sha1)) { + if (strstr(line+45, "continue")) + return ACK_continue; + if (strstr(line+45, "common")) + return ACK_common; + if (strstr(line+45, "ready")) + return ACK_ready; + return ACK; + } + } + die("git fetch_pack: expected ACK/NAK, got '%s'", line); +} + +static void send_request(struct fetch_pack_args *args, + int fd, struct strbuf *buf) +{ + if (args->stateless_rpc) { + send_sideband(fd, -1, buf->buf, buf->len, LARGE_PACKET_MAX); + packet_flush(fd); + } else + safe_write(fd, buf->buf, buf->len); +} + +static void insert_one_alternate_ref(const struct ref *ref, void *unused) +{ + rev_list_insert_ref(NULL, ref->old_sha1, 0, NULL); +} + +#define INITIAL_FLUSH 16 +#define PIPESAFE_FLUSH 32 +#define LARGE_FLUSH 1024 + +static int next_flush(struct fetch_pack_args *args, int count) +{ + int flush_limit = args->stateless_rpc ? 
LARGE_FLUSH : PIPESAFE_FLUSH; + + if (count < flush_limit) + count <<= 1; + else + count += flush_limit; + return count; +} + +static int find_common(struct fetch_pack_args *args, + int fd[2], unsigned char *result_sha1, + struct ref *refs) +{ + int fetching; + int count = 0, flushes = 0, flush_at = INITIAL_FLUSH, retval; + const unsigned char *sha1; + unsigned in_vain = 0; + int got_continue = 0; + int got_ready = 0; + struct strbuf req_buf = STRBUF_INIT; + size_t state_len = 0; + + if (args->stateless_rpc && multi_ack == 1) + die("--stateless-rpc requires multi_ack_detailed"); + if (marked) + for_each_ref(clear_marks, NULL); + marked = 1; + + for_each_ref(rev_list_insert_ref, NULL); + for_each_alternate_ref(insert_one_alternate_ref, NULL); + + fetching = 0; + for ( ; refs ; refs = refs->next) { + unsigned char *remote = refs->old_sha1; + const char *remote_hex; + struct object *o; + + /* + * If that object is complete (i.e. it is an ancestor of a + * local ref), we tell them we have it but do not have to + * tell them about its ancestors, which they already know + * about. + * + * We use lookup_object here because we are only + * interested in the case we *know* the object is + * reachable and we have already scanned it. + */ + if (((o = lookup_object(remote)) != NULL) && + (o->flags & COMPLETE)) { + continue; + } + + remote_hex = sha1_to_hex(remote); + if (!fetching) { + struct strbuf c = STRBUF_INIT; + if (multi_ack == 2) strbuf_addstr(&c, " multi_ack_detailed"); + if (multi_ack == 1) strbuf_addstr(&c, " multi_ack"); + if (no_done) strbuf_addstr(&c, " no-done"); + if (use_sideband == 2) strbuf_addstr(&c, " side-band-64k"); + if (use_sideband == 1) strbuf_addstr(&c, " side-band"); + if (args->use_thin_pack) strbuf_addstr(&c, " thin-pack"); + if (args->no_progress) strbuf_addstr(&c, " no-progress"); + if (args->include_tag) strbuf_addstr(&c, " include-tag"); + if (prefer_ofs_delta) strbuf_addstr(&c, " ofs-delta"); + if (agent_supported) strbuf_addf(&c, " agent=%s", + git_user_agent_sanitized()); + packet_buf_write(&req_buf, "want %s%s\n", remote_hex, c.buf); + strbuf_release(&c); + } else + packet_buf_write(&req_buf, "want %s\n", remote_hex); + fetching++; + } + + if (!fetching) { + strbuf_release(&req_buf); + packet_flush(fd[1]); + return 1; + } + + if (is_repository_shallow()) + write_shallow_commits(&req_buf, 1); + if (args->depth > 0) + packet_buf_write(&req_buf, "deepen %d", args->depth); + packet_buf_flush(&req_buf); + state_len = req_buf.len; + + if (args->depth > 0) { + char line[1024]; + unsigned char sha1[20]; + + send_request(args, fd[1], &req_buf); + while (packet_read_line(fd[0], line, sizeof(line))) { + if (!prefixcmp(line, "shallow ")) { + if (get_sha1_hex(line + 8, sha1)) + die("invalid shallow line: %s", line); + register_shallow(sha1); + continue; + } + if (!prefixcmp(line, "unshallow ")) { + if (get_sha1_hex(line + 10, sha1)) + die("invalid unshallow line: %s", line); + if (!lookup_object(sha1)) + die("object not found: %s", line); + /* make sure that it is parsed as shallow */ + if (!parse_object(sha1)) + die("error in object: %s", line); + if (unregister_shallow(sha1)) + die("no shallow found: %s", line); + continue; + } + die("expected shallow/unshallow, got %s", line); + } + } else if (!args->stateless_rpc) + send_request(args, fd[1], &req_buf); + + if (!args->stateless_rpc) { + /* If we aren't using the stateless-rpc interface + * we don't need to retain the headers. 
+ */ + strbuf_setlen(&req_buf, 0); + state_len = 0; + } + + flushes = 0; + retval = -1; + while ((sha1 = get_rev())) { + packet_buf_write(&req_buf, "have %s\n", sha1_to_hex(sha1)); + if (args->verbose) + fprintf(stderr, "have %s\n", sha1_to_hex(sha1)); + in_vain++; + if (flush_at <= ++count) { + int ack; + + packet_buf_flush(&req_buf); + send_request(args, fd[1], &req_buf); + strbuf_setlen(&req_buf, state_len); + flushes++; + flush_at = next_flush(args, count); + + /* + * We keep one window "ahead" of the other side, and + * will wait for an ACK only on the next one + */ + if (!args->stateless_rpc && count == INITIAL_FLUSH) + continue; + + consume_shallow_list(args, fd[0]); + do { + ack = get_ack(fd[0], result_sha1); + if (args->verbose && ack) + fprintf(stderr, "got ack %d %s\n", ack, + sha1_to_hex(result_sha1)); + switch (ack) { + case ACK: + flushes = 0; + multi_ack = 0; + retval = 0; + goto done; + case ACK_common: + case ACK_ready: + case ACK_continue: { + struct commit *commit = + lookup_commit(result_sha1); + if (!commit) + die("invalid commit %s", sha1_to_hex(result_sha1)); + if (args->stateless_rpc + && ack == ACK_common + && !(commit->object.flags & COMMON)) { + /* We need to replay the have for this object + * on the next RPC request so the peer knows + * it is in common with us. + */ + const char *hex = sha1_to_hex(result_sha1); + packet_buf_write(&req_buf, "have %s\n", hex); + state_len = req_buf.len; + } + mark_common(commit, 0, 1); + retval = 0; + in_vain = 0; + got_continue = 1; + if (ack == ACK_ready) { + rev_list = NULL; + got_ready = 1; + } + break; + } + } + } while (ack); + flushes--; + if (got_continue && MAX_IN_VAIN < in_vain) { + if (args->verbose) + fprintf(stderr, "giving up\n"); + break; /* give up */ + } + } + } +done: + if (!got_ready || !no_done) { + packet_buf_write(&req_buf, "done\n"); + send_request(args, fd[1], &req_buf); + } + if (args->verbose) + fprintf(stderr, "done\n"); + if (retval != 0) { + multi_ack = 0; + flushes++; + } + strbuf_release(&req_buf); + + consume_shallow_list(args, fd[0]); + while (flushes || multi_ack) { + int ack = get_ack(fd[0], result_sha1); + if (ack) { + if (args->verbose) + fprintf(stderr, "got ack (%d) %s\n", ack, + sha1_to_hex(result_sha1)); + if (ack == ACK) + return 0; + multi_ack = 1; + continue; + } + flushes--; + } + /* it is no error to fetch into a completely empty repo */ + return count ? 
retval : 0; +} + +static struct commit_list *complete; + +static int mark_complete(const char *refname, const unsigned char *sha1, int flag, void *cb_data) +{ + struct object *o = parse_object(sha1); + + while (o && o->type == OBJ_TAG) { + struct tag *t = (struct tag *) o; + if (!t->tagged) + break; /* broken repository */ + o->flags |= COMPLETE; + o = parse_object(t->tagged->sha1); + } + if (o && o->type == OBJ_COMMIT) { + struct commit *commit = (struct commit *)o; + if (!(commit->object.flags & COMPLETE)) { + commit->object.flags |= COMPLETE; + commit_list_insert_by_date(commit, &complete); + } + } + return 0; +} + +static void mark_recent_complete_commits(struct fetch_pack_args *args, + unsigned long cutoff) +{ + while (complete && cutoff <= complete->item->date) { + if (args->verbose) + fprintf(stderr, "Marking %s as complete\n", + sha1_to_hex(complete->item->object.sha1)); + pop_most_recent_commit(&complete, COMPLETE); + } +} + +static int non_matching_ref(struct string_list_item *item, void *unused) +{ + if (item->util) { + item->util = NULL; + return 0; + } + else + return 1; +} + +static void filter_refs(struct fetch_pack_args *args, + struct ref **refs, struct string_list *sought) +{ + struct ref *newlist = NULL; + struct ref **newtail = &newlist; + struct ref *ref, *next; + int sought_pos; + + sought_pos = 0; + for (ref = *refs; ref; ref = next) { + int keep = 0; + next = ref->next; + if (!memcmp(ref->name, "refs/", 5) && + check_refname_format(ref->name + 5, 0)) + ; /* trash */ + else { + while (sought_pos < sought->nr) { + int cmp = strcmp(ref->name, sought->items[sought_pos].string); + if (cmp < 0) + break; /* definitely do not have it */ + else if (cmp == 0) { + keep = 1; /* definitely have it */ + sought->items[sought_pos++].util = "matched"; + break; + } + else + sought_pos++; /* might have it; keep looking */ + } + } + + if (! keep && args->fetch_all && + (!args->depth || prefixcmp(ref->name, "refs/tags/"))) + keep = 1; + + if (keep) { + *newtail = ref; + ref->next = NULL; + newtail = &ref->next; + } else { + free(ref); + } + } + + filter_string_list(sought, 0, non_matching_ref, NULL); + *refs = newlist; +} + +static void mark_alternate_complete(const struct ref *ref, void *unused) +{ + mark_complete(NULL, ref->old_sha1, 0, NULL); +} + +static int everything_local(struct fetch_pack_args *args, + struct ref **refs, struct string_list *sought) +{ + struct ref *ref; + int retval; + unsigned long cutoff = 0; + + save_commit_buffer = 0; + + for (ref = *refs; ref; ref = ref->next) { + struct object *o; + + o = parse_object(ref->old_sha1); + if (!o) + continue; + + /* We already have it -- which may mean that we were + * in sync with the other side at some time after + * that (it is OK if we guess wrong here). + */ + if (o->type == OBJ_COMMIT) { + struct commit *commit = (struct commit *)o; + if (!cutoff || cutoff < commit->date) + cutoff = commit->date; + } + } + + if (!args->depth) { + for_each_ref(mark_complete, NULL); + for_each_alternate_ref(mark_alternate_complete, NULL); + if (cutoff) + mark_recent_complete_commits(args, cutoff); + } + + /* + * Mark all complete remote refs as common refs. + * Don't mark them common yet; the server has to be told so first. 
+ */ + for (ref = *refs; ref; ref = ref->next) { + struct object *o = deref_tag(lookup_object(ref->old_sha1), + NULL, 0); + + if (!o || o->type != OBJ_COMMIT || !(o->flags & COMPLETE)) + continue; + + if (!(o->flags & SEEN)) { + rev_list_push((struct commit *)o, COMMON_REF | SEEN); + + mark_common((struct commit *)o, 1, 1); + } + } + + filter_refs(args, refs, sought); + + for (retval = 1, ref = *refs; ref ; ref = ref->next) { + const unsigned char *remote = ref->old_sha1; + unsigned char local[20]; + struct object *o; + + o = lookup_object(remote); + if (!o || !(o->flags & COMPLETE)) { + retval = 0; + if (!args->verbose) + continue; + fprintf(stderr, + "want %s (%s)\n", sha1_to_hex(remote), + ref->name); + continue; + } + + hashcpy(ref->new_sha1, local); + if (!args->verbose) + continue; + fprintf(stderr, + "already have %s (%s)\n", sha1_to_hex(remote), + ref->name); + } + return retval; +} + +static int sideband_demux(int in, int out, void *data) +{ + int *xd = data; + + int ret = recv_sideband("fetch-pack", xd[0], out); + close(out); + return ret; +} + +static int get_pack(struct fetch_pack_args *args, + int xd[2], char **pack_lockfile) +{ + struct async demux; + const char *argv[20]; + char keep_arg[256]; + char hdr_arg[256]; + const char **av; + int do_keep = args->keep_pack; + struct child_process cmd; + + memset(&demux, 0, sizeof(demux)); + if (use_sideband) { + /* xd[] is talking with upload-pack; subprocess reads from + * xd[0], spits out band#2 to stderr, and feeds us band#1 + * through demux->out. + */ + demux.proc = sideband_demux; + demux.data = xd; + demux.out = -1; + if (start_async(&demux)) + die("fetch-pack: unable to fork off sideband" + " demultiplexer"); + } + else + demux.out = xd[0]; + + memset(&cmd, 0, sizeof(cmd)); + cmd.argv = argv; + av = argv; + *hdr_arg = 0; + if (!args->keep_pack && unpack_limit) { + struct pack_header header; + + if (read_pack_header(demux.out, &header)) + die("protocol error: bad pack header"); + snprintf(hdr_arg, sizeof(hdr_arg), + "--pack_header=%"PRIu32",%"PRIu32, + ntohl(header.hdr_version), ntohl(header.hdr_entries)); + if (ntohl(header.hdr_entries) < unpack_limit) + do_keep = 0; + else + do_keep = 1; + } + + if (do_keep) { + if (pack_lockfile) + cmd.out = -1; + *av++ = "index-pack"; + *av++ = "--stdin"; + if (!args->quiet && !args->no_progress) + *av++ = "-v"; + if (args->use_thin_pack) + *av++ = "--fix-thin"; + if (args->lock_pack || unpack_limit) { + int s = sprintf(keep_arg, + "--keep=fetch-pack %"PRIuMAX " on ", (uintmax_t) getpid()); + if (gethostname(keep_arg + s, sizeof(keep_arg) - s)) + strcpy(keep_arg + s, "localhost"); + *av++ = keep_arg; + } + } + else { + *av++ = "unpack-objects"; + if (args->quiet || args->no_progress) + *av++ = "-q"; + } + if (*hdr_arg) + *av++ = hdr_arg; + if (fetch_fsck_objects >= 0 + ? fetch_fsck_objects + : transfer_fsck_objects >= 0 + ? 
transfer_fsck_objects + : 0) + *av++ = "--strict"; + *av++ = NULL; + + cmd.in = demux.out; + cmd.git_cmd = 1; + if (start_command(&cmd)) + die("fetch-pack: unable to fork off %s", argv[0]); + if (do_keep && pack_lockfile) { + *pack_lockfile = index_pack_lockfile(cmd.out); + close(cmd.out); + } + + if (finish_command(&cmd)) + die("%s failed", argv[0]); + if (use_sideband && finish_async(&demux)) + die("error in sideband demultiplexer"); + return 0; +} + +static struct ref *do_fetch_pack(struct fetch_pack_args *args, + int fd[2], + const struct ref *orig_ref, + struct string_list *sought, + char **pack_lockfile) +{ + struct ref *ref = copy_ref_list(orig_ref); + unsigned char sha1[20]; + const char *agent_feature; + int agent_len; + + sort_ref_list(&ref, ref_compare_name); + + if (is_repository_shallow() && !server_supports("shallow")) + die("Server does not support shallow clients"); + if (server_supports("multi_ack_detailed")) { + if (args->verbose) + fprintf(stderr, "Server supports multi_ack_detailed\n"); + multi_ack = 2; + if (server_supports("no-done")) { + if (args->verbose) + fprintf(stderr, "Server supports no-done\n"); + if (args->stateless_rpc) + no_done = 1; + } + } + else if (server_supports("multi_ack")) { + if (args->verbose) + fprintf(stderr, "Server supports multi_ack\n"); + multi_ack = 1; + } + if (server_supports("side-band-64k")) { + if (args->verbose) + fprintf(stderr, "Server supports side-band-64k\n"); + use_sideband = 2; + } + else if (server_supports("side-band")) { + if (args->verbose) + fprintf(stderr, "Server supports side-band\n"); + use_sideband = 1; + } + if (!server_supports("thin-pack")) + args->use_thin_pack = 0; + if (!server_supports("no-progress")) + args->no_progress = 0; + if (!server_supports("include-tag")) + args->include_tag = 0; + if (server_supports("ofs-delta")) { + if (args->verbose) + fprintf(stderr, "Server supports ofs-delta\n"); + } else + prefer_ofs_delta = 0; + + if ((agent_feature = server_feature_value("agent", &agent_len))) { + agent_supported = 1; + if (args->verbose && agent_len) + fprintf(stderr, "Server version is %.*s\n", + agent_len, agent_feature); + } + + if (everything_local(args, &ref, sought)) { + packet_flush(fd[1]); + goto all_done; + } + if (find_common(args, fd, sha1, ref) < 0) + if (!args->keep_pack) + /* When cloning, it is not unusual to have + * no common commit. 
+ */ + warning("no common commits"); + + if (args->stateless_rpc) + packet_flush(fd[1]); + if (get_pack(args, fd, pack_lockfile)) + die("git fetch-pack: fetch failed."); + + all_done: + return ref; +} + +static int fetch_pack_config(const char *var, const char *value, void *cb) +{ + if (strcmp(var, "fetch.unpacklimit") == 0) { + fetch_unpack_limit = git_config_int(var, value); + return 0; + } + + if (strcmp(var, "transfer.unpacklimit") == 0) { + transfer_unpack_limit = git_config_int(var, value); + return 0; + } + + if (strcmp(var, "repack.usedeltabaseoffset") == 0) { + prefer_ofs_delta = git_config_bool(var, value); + return 0; + } + + if (!strcmp(var, "fetch.fsckobjects")) { + fetch_fsck_objects = git_config_bool(var, value); + return 0; + } + + if (!strcmp(var, "transfer.fsckobjects")) { + transfer_fsck_objects = git_config_bool(var, value); + return 0; + } + + return git_default_config(var, value, cb); +} + +static struct lock_file lock; + +static void fetch_pack_setup(void) +{ + static int did_setup; + if (did_setup) + return; + git_config(fetch_pack_config, NULL); + if (0 <= transfer_unpack_limit) + unpack_limit = transfer_unpack_limit; + else if (0 <= fetch_unpack_limit) + unpack_limit = fetch_unpack_limit; + did_setup = 1; +} + +struct ref *fetch_pack(struct fetch_pack_args *args, + int fd[], struct child_process *conn, + const struct ref *ref, + const char *dest, + struct string_list *sought, + char **pack_lockfile) +{ + struct stat st; + struct ref *ref_cpy; + + fetch_pack_setup(); + if (args->depth > 0) { + if (stat(git_path("shallow"), &st)) + st.st_mtime = 0; + } + + if (sought->nr) { + sort_string_list(sought); + string_list_remove_duplicates(sought, 0); + } + + if (!ref) { + packet_flush(fd[1]); + die("no matching remote head"); + } + ref_cpy = do_fetch_pack(args, fd, ref, sought, pack_lockfile); + + if (args->depth > 0) { + struct cache_time mtime; + struct strbuf sb = STRBUF_INIT; + char *shallow = git_path("shallow"); + int fd; + + mtime.sec = st.st_mtime; + mtime.nsec = ST_MTIME_NSEC(st); + if (stat(shallow, &st)) { + if (mtime.sec) + die("shallow file was removed during fetch"); + } else if (st.st_mtime != mtime.sec +#ifdef USE_NSEC + || ST_MTIME_NSEC(st) != mtime.nsec +#endif + ) + die("shallow file was changed during fetch"); + + fd = hold_lock_file_for_update(&lock, shallow, + LOCK_DIE_ON_ERROR); + if (!write_shallow_commits(&sb, 0) + || write_in_full(fd, sb.buf, sb.len) != sb.len) { + unlink_or_warn(shallow); + rollback_lock_file(&lock); + } else { + commit_lock_file(&lock); + } + strbuf_release(&sb); + } + + reprepare_packed_git(); + return ref_cpy; +} diff --git a/git-compat-util.h b/git-compat-util.h index 2fbf1fd8b1..2e79b8a2f3 100644 --- a/git-compat-util.h +++ b/git-compat-util.h @@ -506,6 +506,7 @@ extern const char tolower_trans_tbl[256]; #undef isdigit #undef isalpha #undef isalnum +#undef isprint #undef islower #undef isupper #undef tolower @@ -523,6 +524,7 @@ extern unsigned char sane_ctype[256]; #define isdigit(x) sane_istest(x,GIT_DIGIT) #define isalpha(x) sane_istest(x,GIT_ALPHA) #define isalnum(x) sane_istest(x,GIT_ALPHA | GIT_DIGIT) +#define isprint(x) ((x) >= 0x20 && (x) <= 0x7e) #define islower(x) sane_iscase(x, 1) #define isupper(x) sane_iscase(x, 0) #define is_glob_special(x) sane_istest(x,GIT_GLOB_SPECIAL) diff --git a/git-cvsimport.perl b/git-cvsimport.perl index 8032f23202..0a31ebd820 100755 --- a/git-cvsimport.perl +++ b/git-cvsimport.perl @@ -24,14 +24,14 @@ use File::Basename qw(basename dirname); use Time::Local; use IO::Socket; use 
IO::Pipe; -use POSIX qw(strftime dup2 ENOENT); +use POSIX qw(strftime tzset dup2 ENOENT); use IPC::Open2; $SIG{'PIPE'}="IGNORE"; -$ENV{'TZ'}="UTC"; +set_timezone('UTC'); our ($opt_h,$opt_o,$opt_v,$opt_k,$opt_u,$opt_d,$opt_p,$opt_C,$opt_z,$opt_i,$opt_P, $opt_s,$opt_m,@opt_M,$opt_A,$opt_S,$opt_L, $opt_a, $opt_r, $opt_R); -my (%conv_author_name, %conv_author_email); +my (%conv_author_name, %conv_author_email, %conv_author_tz); sub usage(;$) { my $msg = shift; @@ -59,6 +59,14 @@ sub read_author_info($) { $conv_author_name{$user} = $2; $conv_author_email{$user} = $3; } + # or with an optional timezone: + # spawn=Simon Pawn <spawn@frog-pond.org> America/Chicago + elsif (m/^(\S+?)\s*=\s*(.+?)\s*<(.+)>\s*(\S+?)\s*$/) { + $user = $1; + $conv_author_name{$user} = $2; + $conv_author_email{$user} = $3; + $conv_author_tz{$user} = $4; + } # However, we also read from CVSROOT/users format # to ease migration. elsif (/^(\w+):(['"]?)(.+?)\2\s*$/) { @@ -84,11 +92,22 @@ sub write_author_info($) { die("Failed to open $file for writing: $!"); foreach (keys %conv_author_name) { - print $f "$_=$conv_author_name{$_} <$conv_author_email{$_}>\n"; + print $f "$_=$conv_author_name{$_} <$conv_author_email{$_}>"; + print $f " $conv_author_tz{$_}" if ($conv_author_tz{$_}); + print $f "\n"; } close ($f); } +# Versions of perl before 5.10.0 may not automatically check $TZ each +# time localtime is run (most platforms will do so only the first time). +# We can work around this by using tzset() to update the internal +# variable whenever we change the environment. +sub set_timezone { + $ENV{TZ} = shift; + tzset(); +} + # convert getopts specs for use by git config my %longmap = ( 'A:' => 'authors-file', @@ -795,7 +814,7 @@ sub write_tree () { return $tree; } -my ($patchset,$date,$author_name,$author_email,$branch,$ancestor,$tag,$logmsg); +my ($patchset,$date,$author_name,$author_email,$author_tz,$branch,$ancestor,$tag,$logmsg); my (@old,@new,@skipped,%ignorebranch,@commit_revisions); # commits that cvsps cannot place anywhere... @@ -844,7 +863,9 @@ sub commit { } } - my $commit_date = strftime("+0000 %Y-%m-%d %H:%M:%S",gmtime($date)); + set_timezone($author_tz); + my $commit_date = strftime("%s %z", localtime($date)); + set_timezone('UTC'); $ENV{GIT_AUTHOR_NAME} = $author_name; $ENV{GIT_AUTHOR_EMAIL} = $author_email; $ENV{GIT_AUTHOR_DATE} = $commit_date; @@ -945,12 +966,14 @@ while (<CVS>) { } $state=3; } elsif ($state == 3 and s/^Author:\s+//) { + $author_tz = "UTC"; s/\s+$//; if (/^(.*?)\s+<(.*)>/) { ($author_name, $author_email) = ($1, $2); } elsif ($conv_author_name{$_}) { $author_name = $conv_author_name{$_}; $author_email = $conv_author_email{$_}; + $author_tz = $conv_author_tz{$_} if ($conv_author_tz{$_}); } else { $author_name = $author_email = $_; } diff --git a/git-cvsserver.perl b/git-cvsserver.perl index b8eddabc94..c5ebfa0636 100755 --- a/git-cvsserver.perl +++ b/git-cvsserver.perl @@ -51,6 +51,10 @@ $| = 1; #### Definition and mappings of functions #### +# NOTE: Despite the existence of req_CATCHALL and req_EMPTY unimplemented +# requests, this list is incomplete. It is missing many rarer/optional +# requests. Perhaps some clients require a claim of support for +# these specific requests for main functionality to work? 
my $methods = { 'Root' => \&req_Root, 'Valid-responses' => \&req_Validresponses, @@ -80,7 +84,6 @@ my $methods = { 'noop' => \&req_EMPTY, 'annotate' => \&req_annotate, 'Global_option' => \&req_Globaloption, - #'annotate' => \&req_CATCHALL, }; ############################################## @@ -499,7 +502,7 @@ sub req_Entry #$log->debug("req_Entry : $data"); - my @data = split(/\//, $data); + my @data = split(/\//, $data, -1); $state->{entries}{$state->{directory}.$data[1]} = { revision => $data[2], @@ -540,8 +543,6 @@ sub req_add my $updater = GITCVS::updater->new($state->{CVSROOT}, $state->{module}, $log); $updater->update(); - argsfromdir($updater); - my $addcount = 0; foreach my $filename ( @{$state->{args}} ) @@ -551,10 +552,10 @@ sub req_add my $meta = $updater->getmeta($filename); my $wrev = revparse($filename); - if ($wrev && $meta && ($wrev < 0)) + if ($wrev && $meta && ($wrev=~/^-/)) { # previously removed file, add back - $log->info("added file $filename was previously removed, send 1.$meta->{revision}"); + $log->info("added file $filename was previously removed, send $meta->{revision}"); print "MT +updated\n"; print "MT text U \n"; @@ -571,8 +572,8 @@ sub req_add # this is an "entries" line my $kopts = kopts_from_path($filename,"sha1",$meta->{filehash}); - $log->debug("/$filepart/1.$meta->{revision}//$kopts/"); - print "/$filepart/1.$meta->{revision}//$kopts/\n"; + $log->debug("/$filepart/$meta->{revision}//$kopts/"); + print "/$filepart/$meta->{revision}//$kopts/\n"; # permissions $log->debug("SEND : u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}"); print "u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}\n"; @@ -680,13 +681,13 @@ sub req_remove next; } - if ( defined($wrev) and $wrev < 0 ) + if ( defined($wrev) and ($wrev=~/^-/) ) { print "E cvs remove: file `$filename' already scheduled for removal\n"; next; } - unless ( $wrev == $meta->{revision} ) + unless ( $wrev eq $meta->{revision} ) { # TODO : not sure if the format of this message is quite correct. print "E cvs remove: Up to date check failed for `$filename'\n"; @@ -701,7 +702,7 @@ sub req_remove print "Checked-in $dirpart\n"; print "$filename\n"; my $kopts = kopts_from_path($filename,"sha1",$meta->{filehash}); - print "/$filepart/-1.$wrev//$kopts/\n"; + print "/$filepart/-$wrev//$kopts/\n"; $rmcount++; } @@ -992,7 +993,7 @@ sub req_co # this is an "entries" line my $kopts = kopts_from_path($fullName,"sha1",$git->{filehash}); - print "/$git->{name}/1.$git->{revision}//$kopts/\n"; + print "/$git->{name}/$git->{revision}//$kopts/\n"; # permissions print "u=$git->{mode},g=$git->{mode},o=$git->{mode}\n"; @@ -1080,7 +1081,7 @@ sub req_update } my $meta; - if ( defined($state->{opt}{r}) and $state->{opt}{r} =~ /^1\.(\d+)/ ) + if ( defined($state->{opt}{r}) and $state->{opt}{r} =~ /^(1\.\d+)$/ ) { $meta = $updater->getmeta($filename, $1); } else { @@ -1102,7 +1103,7 @@ sub req_update { $meta = { name => $filename, - revision => 0, + revision => '0', filehash => 'added' }; } @@ -1112,7 +1113,7 @@ sub req_update my $wrev = revparse($filename); # If the working copy is an old revision, lets get that version too for comparison. 
- if ( defined($wrev) and $wrev != $meta->{revision} ) + if ( defined($wrev) and $wrev ne $meta->{revision} ) { $oldmeta = $updater->getmeta($filename, $wrev); } @@ -1123,7 +1124,7 @@ sub req_update # and the working copy is unmodified _and_ the user hasn't specified -C next if ( defined ( $wrev ) and defined($meta->{revision}) - and $wrev == $meta->{revision} + and $wrev eq $meta->{revision} and $state->{entries}{$filename}{unchanged} and not exists ( $state->{opt}{C} ) ); @@ -1131,7 +1132,7 @@ sub req_update # but the working copy is modified, tell the client it's modified if ( defined ( $wrev ) and defined($meta->{revision}) - and $wrev == $meta->{revision} + and $wrev eq $meta->{revision} and defined($state->{entries}{$filename}{modified_hash}) and not exists ( $state->{opt}{C} ) ) { @@ -1144,6 +1145,10 @@ sub req_update if ( $meta->{filehash} eq "deleted" ) { + # TODO: If it has been modified in the sandbox, error out + # with the appropriate message, rather than deleting a modified + # file. + my ( $filepart, $dirpart ) = filenamesplit($filename,1); $log->info("Removing '$filename' from working copy (no longer in the repo)"); @@ -1161,7 +1166,7 @@ sub req_update { # normal update, just send the new revision (either U=Update, # or A=Add, or R=Remove) - if ( defined($wrev) && $wrev < 0 ) + if ( defined($wrev) && ($wrev=~/^-/) ) { $log->info("Tell the client the file is scheduled for removal"); print "MT text R \n"; @@ -1169,7 +1174,8 @@ sub req_update print "MT newline\n"; next; } - elsif ( (!defined($wrev) || $wrev == 0) && (!defined($meta->{revision}) || $meta->{revision} == 0) ) + elsif ( (!defined($wrev) || $wrev eq '0') && + (!defined($meta->{revision}) || $meta->{revision} eq '0') ) { $log->info("Tell the client the file is scheduled for addition"); print "MT text A \n"; @@ -1179,7 +1185,7 @@ sub req_update } else { - $log->info("Updating '$filename' to ".$meta->{revision}); + $log->info("UpdatingX3 '$filename' to ".$meta->{revision}); print "MT +updated\n"; print "MT text U \n"; print "MT fname $filename\n"; @@ -1211,8 +1217,8 @@ sub req_update # this is an "entries" line my $kopts = kopts_from_path($filename,"sha1",$meta->{filehash}); - $log->debug("/$filepart/1.$meta->{revision}//$kopts/"); - print "/$filepart/1.$meta->{revision}//$kopts/\n"; + $log->debug("/$filepart/$meta->{revision}//$kopts/"); + print "/$filepart/$meta->{revision}//$kopts/\n"; # permissions $log->debug("SEND : u=$meta->{mode},g=$meta->{mode},o=$meta->{mode}"); @@ -1222,7 +1228,6 @@ sub req_update transmitfile($meta->{filehash}); } } else { - $log->info("Updating '$filename'"); my ( $filepart, $dirpart ) = filenamesplit($meta->{name},1); my $mergeDir = setupTmpDir(); @@ -1237,7 +1242,7 @@ sub req_update # we need to merge with the local changes ( M=successful merge, C=conflict merge ) $log->info("Merging $file_local, $file_old, $file_new"); - print "M Merging differences between 1.$oldmeta->{revision} and 1.$meta->{revision} into $filename\n"; + print "M Merging differences between $oldmeta->{revision} and $meta->{revision} into $filename\n"; $log->debug("Temporary directory for merge is $mergeDir"); @@ -1260,8 +1265,8 @@ sub req_update print $state->{CVSROOT} . 
"/$state->{module}/$filename\n"; my $kopts = kopts_from_path("$dirpart/$filepart", "file",$mergedFile); - $log->debug("/$filepart/1.$meta->{revision}//$kopts/"); - print "/$filepart/1.$meta->{revision}//$kopts/\n"; + $log->debug("/$filepart/$meta->{revision}//$kopts/"); + print "/$filepart/$meta->{revision}//$kopts/\n"; } } elsif ( $return == 1 ) @@ -1277,7 +1282,7 @@ sub req_update print $state->{CVSROOT} . "/$state->{module}/$filename\n"; my $kopts = kopts_from_path("$dirpart/$filepart", "file",$mergedFile); - print "/$filepart/1.$meta->{revision}/+/$kopts/\n"; + print "/$filepart/$meta->{revision}/+/$kopts/\n"; } } else @@ -1380,11 +1385,12 @@ sub req_ci my $addflag = 0; my $rmflag = 0; - $rmflag = 1 if ( defined($wrev) and $wrev < 0 ); + $rmflag = 1 if ( defined($wrev) and ($wrev=~/^-/) ); $addflag = 1 unless ( -e $filename ); # Do up to date checking - unless ( $addflag or $wrev == $meta->{revision} or ( $rmflag and -$wrev == $meta->{revision} ) ) + unless ( $addflag or $wrev eq $meta->{revision} or + ( $rmflag and $wrev eq "-$meta->{revision}" ) ) { # fail everything if an up to date check fails print "error 1 Up to date check failed for $filename\n"; @@ -1421,7 +1427,7 @@ sub req_ci $log->info("Adding file '$filename'"); system("git", "update-index", "--add", $filename); } else { - $log->info("Updating file '$filename'"); + $log->info("UpdatingX2 file '$filename'"); system("git", "update-index", $filename); } } @@ -1512,7 +1518,7 @@ sub req_ci my $meta = $updater->getmeta($filename); unless (defined $meta->{revision}) { - $meta->{revision} = 1; + $meta->{revision} = "1.1"; } my ( $filepart, $dirpart ) = filenamesplit($filename, 1); @@ -1522,19 +1528,19 @@ sub req_ci print "M $state->{CVSROOT}/$state->{module}/$filename,v <-- $dirpart$filepart\n"; if ( defined $meta->{filehash} && $meta->{filehash} eq "deleted" ) { - print "M new revision: delete; previous revision: 1.$oldmeta{$filename}{revision}\n"; + print "M new revision: delete; previous revision: $oldmeta{$filename}{revision}\n"; print "Remove-entry $dirpart\n"; print "$filename\n"; } else { - if ($meta->{revision} == 1) { + if ($meta->{revision} eq "1.1") { print "M initial revision: 1.1\n"; } else { - print "M new revision: 1.$meta->{revision}; previous revision: 1.$oldmeta{$filename}{revision}\n"; + print "M new revision: $meta->{revision}; previous revision: $oldmeta{$filename}{revision}\n"; } print "Checked-in $dirpart\n"; print "$filename\n"; my $kopts = kopts_from_path($filename,"sha1",$meta->{filehash}); - print "/$filepart/1.$meta->{revision}//$kopts/\n"; + print "/$filepart/$meta->{revision}//$kopts/\n"; } } @@ -1552,10 +1558,12 @@ sub req_status #$log->debug("status state : " . Dumper($state)); # Grab a handle to the SQLite db and do any necessary updates - my $updater = GITCVS::updater->new($state->{CVSROOT}, $state->{module}, $log); + my $updater; + $updater = GITCVS::updater->new($state->{CVSROOT}, $state->{module}, $log); $updater->update(); - # if no files were specified, we need to work out what files we should be providing status on ... + # if no files were specified, we need to work out what files we should + # be providing status on ... argsfromdir($updater); # foreach file specified on the command line ... 
@@ -1563,67 +1571,135 @@ sub req_status { $filename = filecleanup($filename); - next if exists($state->{opt}{l}) && index($filename, '/', length($state->{prependdir})) >= 0; + if ( exists($state->{opt}{l}) && + index($filename, '/', length($state->{prependdir})) >= 0 ) + { + next; + } my $meta = $updater->getmeta($filename); my $oldmeta = $meta; my $wrev = revparse($filename); - # If the working copy is an old revision, lets get that version too for comparison. - if ( defined($wrev) and $wrev != $meta->{revision} ) + # If the working copy is an old revision, lets get that + # version too for comparison. + if ( defined($wrev) and $wrev ne $meta->{revision} ) { $oldmeta = $updater->getmeta($filename, $wrev); } # TODO : All possible statuses aren't yet implemented my $status; - # Files are up to date if the working copy and repo copy have the same revision, and the working copy is unmodified - $status = "Up-to-date" if ( defined ( $wrev ) and defined($meta->{revision}) and $wrev == $meta->{revision} - and - ( ( $state->{entries}{$filename}{unchanged} and ( not defined ( $state->{entries}{$filename}{conflict} ) or $state->{entries}{$filename}{conflict} !~ /^\+=/ ) ) - or ( defined($state->{entries}{$filename}{modified_hash}) and $state->{entries}{$filename}{modified_hash} eq $meta->{filehash} ) ) - ); - - # Need checkout if the working copy has an older revision than the repo copy, and the working copy is unmodified - $status ||= "Needs Checkout" if ( defined ( $wrev ) and defined ( $meta->{revision} ) and $meta->{revision} > $wrev - and - ( $state->{entries}{$filename}{unchanged} - or ( defined($state->{entries}{$filename}{modified_hash}) and $state->{entries}{$filename}{modified_hash} eq $oldmeta->{filehash} ) ) - ); - - # Need checkout if it exists in the repo but doesn't have a working copy - $status ||= "Needs Checkout" if ( not defined ( $wrev ) and defined ( $meta->{revision} ) ); - - # Locally modified if working copy and repo copy have the same revision but there are local changes - $status ||= "Locally Modified" if ( defined ( $wrev ) and defined($meta->{revision}) and $wrev == $meta->{revision} and $state->{entries}{$filename}{modified_filename} ); - - # Needs Merge if working copy revision is less than repo copy and there are local changes - $status ||= "Needs Merge" if ( defined ( $wrev ) and defined ( $meta->{revision} ) and $meta->{revision} > $wrev and $state->{entries}{$filename}{modified_filename} ); - - $status ||= "Locally Added" if ( defined ( $state->{entries}{$filename}{revision} ) and not defined ( $meta->{revision} ) ); - $status ||= "Locally Removed" if ( defined ( $wrev ) and defined ( $meta->{revision} ) and -$wrev == $meta->{revision} ); - $status ||= "Unresolved Conflict" if ( defined ( $state->{entries}{$filename}{conflict} ) and $state->{entries}{$filename}{conflict} =~ /^\+=/ ); - $status ||= "File had conflicts on merge" if ( 0 ); + # Files are up to date if the working copy and repo copy have + # the same revision, and the working copy is unmodified + if ( defined ( $wrev ) and defined($meta->{revision}) and + $wrev eq $meta->{revision} and + ( ( $state->{entries}{$filename}{unchanged} and + ( not defined ( $state->{entries}{$filename}{conflict} ) or + $state->{entries}{$filename}{conflict} !~ /^\+=/ ) ) or + ( defined($state->{entries}{$filename}{modified_hash}) and + $state->{entries}{$filename}{modified_hash} eq + $meta->{filehash} ) ) ) + { + $status = "Up-to-date" + } + + # Need checkout if the working copy has a different (usually + # older) revision 
than the repo copy, and the working copy is + # unmodified + if ( defined ( $wrev ) and defined ( $meta->{revision} ) and + $meta->{revision} ne $wrev and + ( $state->{entries}{$filename}{unchanged} or + ( defined($state->{entries}{$filename}{modified_hash}) and + $state->{entries}{$filename}{modified_hash} eq + $oldmeta->{filehash} ) ) ) + { + $status ||= "Needs Checkout"; + } + + # Need checkout if it exists in the repo but doesn't have a working + # copy + if ( not defined ( $wrev ) and defined ( $meta->{revision} ) ) + { + $status ||= "Needs Checkout"; + } + + # Locally modified if working copy and repo copy have the + # same revision but there are local changes + if ( defined ( $wrev ) and defined($meta->{revision}) and + $wrev eq $meta->{revision} and + $state->{entries}{$filename}{modified_filename} ) + { + $status ||= "Locally Modified"; + } + + # Needs Merge if working copy revision is different + # (usually older) than repo copy and there are local changes + if ( defined ( $wrev ) and defined ( $meta->{revision} ) and + $meta->{revision} ne $wrev and + $state->{entries}{$filename}{modified_filename} ) + { + $status ||= "Needs Merge"; + } + + if ( defined ( $state->{entries}{$filename}{revision} ) and + not defined ( $meta->{revision} ) ) + { + $status ||= "Locally Added"; + } + if ( defined ( $wrev ) and defined ( $meta->{revision} ) and + $wrev eq "-$meta->{revision}" ) + { + $status ||= "Locally Removed"; + } + if ( defined ( $state->{entries}{$filename}{conflict} ) and + $state->{entries}{$filename}{conflict} =~ /^\+=/ ) + { + $status ||= "Unresolved Conflict"; + } + if ( 0 ) + { + $status ||= "File had conflicts on merge"; + } $status ||= "Unknown"; my ($filepart) = filenamesplit($filename); - print "M ===================================================================\n"; + print "M =======" . ( "=" x 60 ) . "\n"; print "M File: $filepart\tStatus: $status\n"; if ( defined($state->{entries}{$filename}{revision}) ) { - print "M Working revision:\t" . $state->{entries}{$filename}{revision} . "\n"; + print "M Working revision:\t" . + $state->{entries}{$filename}{revision} . "\n"; } else { print "M Working revision:\tNo entry for $filename\n"; } if ( defined($meta->{revision}) ) { - print "M Repository revision:\t1." . $meta->{revision} . "\t$state->{CVSROOT}/$state->{module}/$filename,v\n"; - print "M Sticky Tag:\t\t(none)\n"; - print "M Sticky Date:\t\t(none)\n"; - print "M Sticky Options:\t\t(none)\n"; + print "M Repository revision:\t" . + $meta->{revision} . + "\t$state->{CVSROOT}/$state->{module}/$filename,v\n"; + my($tagOrDate)=$state->{entries}{$filename}{tag_or_date}; + my($tag)=($tagOrDate=~m/^T(.+)$/); + if( !defined($tag) ) + { + $tag="(none)"; + } + print "M Sticky Tag:\t\t$tag\n"; + my($date)=($tagOrDate=~m/^D(.+)$/); + if( !defined($date) ) + { + $date="(none)"; + } + print "M Sticky Date:\t\t$date\n"; + my($options)=$state->{entries}{$filename}{options}; + if( $options eq "" ) + { + $options="(none)"; + } + print "M Sticky Options:\t\t$options\n"; } else { print "M Repository revision:\tNo revision control file\n"; } @@ -1651,16 +1727,17 @@ sub req_diff $revision1 = $state->{opt}{r}; } - $revision1 =~ s/^1\.// if ( defined ( $revision1 ) ); - $revision2 =~ s/^1\.// if ( defined ( $revision2 ) ); - - $log->debug("Diffing revisions " . ( defined($revision1) ? $revision1 : "[NULL]" ) . " and " . ( defined($revision2) ? $revision2 : "[NULL]" ) ); + $log->debug("Diffing revisions " . + ( defined($revision1) ? $revision1 : "[NULL]" ) . + " and " . 
( defined($revision2) ? $revision2 : "[NULL]" ) ); # Grab a handle to the SQLite db and do any necessary updates - my $updater = GITCVS::updater->new($state->{CVSROOT}, $state->{module}, $log); + my $updater; + $updater = GITCVS::updater->new($state->{CVSROOT}, $state->{module}, $log); $updater->update(); - # if no files were specified, we need to work out what files we should be providing status on ... + # if no files were specified, we need to work out what files we should + # be providing status on ... argsfromdir($updater); # foreach file specified on the command line ... @@ -1682,7 +1759,7 @@ sub req_diff $meta1 = $updater->getmeta($filename, $revision1); unless ( defined ( $meta1 ) and $meta1->{filehash} ne "deleted" ) { - print "E File $filename at revision 1.$revision1 doesn't exist\n"; + print "E File $filename at revision $revision1 doesn't exist\n"; next; } transmitfile($meta1->{filehash}, { targetfile => $file1 }); @@ -1703,7 +1780,7 @@ sub req_diff unless ( defined ( $meta2 ) and $meta2->{filehash} ne "deleted" ) { - print "E File $filename at revision 1.$revision2 doesn't exist\n"; + print "E File $filename at revision $revision2 doesn't exist\n"; next; } @@ -1715,7 +1792,8 @@ sub req_diff $file2 = $state->{entries}{$filename}{modified_filename}; } - # if we have been given -r, and we don't have a $file2 yet, lets get one + # if we have been given -r, and we don't have a $file2 yet, lets + # get one if ( defined ( $revision1 ) and not defined ( $file2 ) ) { ( undef, $file2 ) = tempfile( DIR => $TEMP_DIR, OPEN => 0 ); @@ -1726,21 +1804,37 @@ sub req_diff # We need to have retrieved something useful next unless ( defined ( $meta1 ) ); - # Files to date if the working copy and repo copy have the same revision, and the working copy is unmodified - next if ( not defined ( $meta2 ) and $wrev == $meta1->{revision} - and - ( ( $state->{entries}{$filename}{unchanged} and ( not defined ( $state->{entries}{$filename}{conflict} ) or $state->{entries}{$filename}{conflict} !~ /^\+=/ ) ) - or ( defined($state->{entries}{$filename}{modified_hash}) and $state->{entries}{$filename}{modified_hash} eq $meta1->{filehash} ) ) - ); + # Files to date if the working copy and repo copy have the same + # revision, and the working copy is unmodified + if ( not defined ( $meta2 ) and $wrev eq $meta1->{revision} and + ( ( $state->{entries}{$filename}{unchanged} and + ( not defined ( $state->{entries}{$filename}{conflict} ) or + $state->{entries}{$filename}{conflict} !~ /^\+=/ ) ) or + ( defined($state->{entries}{$filename}{modified_hash}) and + $state->{entries}{$filename}{modified_hash} eq + $meta1->{filehash} ) ) ) + { + next; + } # Apparently we only show diffs for locally modified files - next unless ( defined($meta2) or defined ( $state->{entries}{$filename}{modified_filename} ) ); + unless ( defined($meta2) or + defined ( $state->{entries}{$filename}{modified_filename} ) ) + { + next; + } print "M Index: $filename\n"; - print "M ===================================================================\n"; + print "M =======" . ( "=" x 60 ) . 
"\n"; print "M RCS file: $state->{CVSROOT}/$state->{module}/$filename,v\n"; - print "M retrieving revision 1.$meta1->{revision}\n" if ( defined ( $meta1 ) ); - print "M retrieving revision 1.$meta2->{revision}\n" if ( defined ( $meta2 ) ); + if ( defined ( $meta1 ) ) + { + print "M retrieving revision $meta1->{revision}\n" + } + if ( defined ( $meta2 ) ) + { + print "M retrieving revision $meta2->{revision}\n" + } print "M diff "; foreach my $opt ( keys %{$state->{opt}} ) { @@ -1752,18 +1846,27 @@ sub req_diff } } else { print "-$opt "; - print "$state->{opt}{$opt} " if ( defined ( $state->{opt}{$opt} ) ); + if ( defined ( $state->{opt}{$opt} ) ) + { + print "$state->{opt}{$opt} " + } } } print "$filename\n"; - $log->info("Diffing $filename -r $meta1->{revision} -r " . ( $meta2->{revision} or "workingcopy" )); + $log->info("Diffing $filename -r $meta1->{revision} -r " . + ( $meta2->{revision} or "workingcopy" )); ( $fh, $filediff ) = tempfile ( DIR => $TEMP_DIR ); if ( exists $state->{opt}{u} ) { - system("diff -u -L '$filename revision 1.$meta1->{revision}' -L '$filename " . ( defined($meta2->{revision}) ? "revision 1.$meta2->{revision}" : "working copy" ) . "' $file1 $file2 > $filediff"); + system("diff -u -L '$filename revision $meta1->{revision}'" . + " -L '$filename " . + ( defined($meta2->{revision}) ? + "revision $meta2->{revision}" : + "working copy" ) . + "' $file1 $file2 > $filediff" ); } else { system("diff $file1 $file2 > $filediff"); } @@ -1787,22 +1890,19 @@ sub req_log $log->debug("req_log : " . ( defined($data) ? $data : "[NULL]" )); #$log->debug("log state : " . Dumper($state)); - my ( $minrev, $maxrev ); - if ( defined ( $state->{opt}{r} ) and $state->{opt}{r} =~ /([\d.]+)?(::?)([\d.]+)?/ ) + my ( $revFilter ); + if ( defined ( $state->{opt}{r} ) ) { - my $control = $2; - $minrev = $1; - $maxrev = $3; - $minrev =~ s/^1\.// if ( defined ( $minrev ) ); - $maxrev =~ s/^1\.// if ( defined ( $maxrev ) ); - $minrev++ if ( defined($minrev) and $control eq "::" ); + $revFilter = $state->{opt}{r}; } # Grab a handle to the SQLite db and do any necessary updates - my $updater = GITCVS::updater->new($state->{CVSROOT}, $state->{module}, $log); + my $updater; + $updater = GITCVS::updater->new($state->{CVSROOT}, $state->{module}, $log); $updater->update(); - # if no files were specified, we need to work out what files we should be providing status on ... + # if no files were specified, we need to work out what files we + # should be providing status on ... argsfromdir($updater); # foreach file specified on the command line ... 
@@ -1812,53 +1912,46 @@ sub req_log my $headmeta = $updater->getmeta($filename); - my $revisions = $updater->getlog($filename); - my $totalrevisions = scalar(@$revisions); - - if ( defined ( $minrev ) ) - { - $log->debug("Removing revisions less than $minrev"); - while ( scalar(@$revisions) > 0 and $revisions->[-1]{revision} < $minrev ) - { - pop @$revisions; - } - } - if ( defined ( $maxrev ) ) - { - $log->debug("Removing revisions greater than $maxrev"); - while ( scalar(@$revisions) > 0 and $revisions->[0]{revision} > $maxrev ) - { - shift @$revisions; - } - } + my ($revisions,$totalrevisions) = $updater->getlog($filename, + $revFilter); next unless ( scalar(@$revisions) ); print "M \n"; print "M RCS file: $state->{CVSROOT}/$state->{module}/$filename,v\n"; print "M Working file: $filename\n"; - print "M head: 1.$headmeta->{revision}\n"; + print "M head: $headmeta->{revision}\n"; print "M branch:\n"; print "M locks: strict\n"; print "M access list:\n"; print "M symbolic names:\n"; print "M keyword substitution: kv\n"; - print "M total revisions: $totalrevisions;\tselected revisions: " . scalar(@$revisions) . "\n"; + print "M total revisions: $totalrevisions;\tselected revisions: " . + scalar(@$revisions) . "\n"; print "M description:\n"; foreach my $revision ( @$revisions ) { print "M ----------------------------\n"; - print "M revision 1.$revision->{revision}\n"; + print "M revision $revision->{revision}\n"; # reformat the date for log output - $revision->{modified} = sprintf('%04d/%02d/%02d %s', $3, $DATE_LIST->{$2}, $1, $4 ) if ( $revision->{modified} =~ /(\d+)\s+(\w+)\s+(\d+)\s+(\S+)/ and defined($DATE_LIST->{$2}) ); + if ( $revision->{modified} =~ /(\d+)\s+(\w+)\s+(\d+)\s+(\S+)/ and + defined($DATE_LIST->{$2}) ) + { + $revision->{modified} = sprintf('%04d/%02d/%02d %s', + $3, $DATE_LIST->{$2}, $1, $4 ); + } $revision->{author} = cvs_author($revision->{author}); - print "M date: $revision->{modified}; author: $revision->{author}; state: " . ( $revision->{filehash} eq "deleted" ? "dead" : "Exp" ) . "; lines: +2 -3\n"; - my $commitmessage = $updater->commitmessage($revision->{commithash}); + print "M date: $revision->{modified};" . + " author: $revision->{author}; state: " . + ( $revision->{filehash} eq "deleted" ? "dead" : "Exp" ) . + "; lines: +2 -3\n"; + my $commitmessage; + $commitmessage = $updater->commitmessage($revision->{commithash}); $commitmessage =~ s/^/M /mg; print $commitmessage . "\n"; } - print "M =============================================================================\n"; + print "M =======" . ( "=" x 70 ) . "\n"; } print "ok\n"; @@ -1964,7 +2057,7 @@ sub req_annotate $metadata->{$commithash}{author} = cvs_author($metadata->{$commithash}{author}); $metadata->{$commithash}{modified} = sprintf("%02d-%s-%02d", $1, $2, $3) if ( $metadata->{$commithash}{modified} =~ /^(\d+)\s(\w+)\s\d\d(\d\d)/ ); } - printf("M 1.%-5d (%-8s %10s): %s\n", + printf("M %-7s (%-8s %10s): %s\n", $metadata->{$commithash}{revision}, $metadata->{$commithash}{author}, $metadata->{$commithash}{modified}, @@ -2085,7 +2178,7 @@ sub argsfromdir # push added files foreach my $file (keys %{$state->{entries}}) { if ( exists $state->{entries}{$file}{revision} && - $state->{entries}{$file}{revision} == 0 ) + $state->{entries}{$file}{revision} eq '0' ) { push @gethead, { name => $file, filehash => 'added' }; } @@ -2129,16 +2222,15 @@ sub statecleanup $state->{entries} = {}; } +# Return working directory CVS revision "1.X" out +# of the the working directory "entries" state, for the given filename. 
+# This is prefixed with a dash if the file is scheduled for removal +# when it is committed. sub revparse { my $filename = shift; - return undef unless ( defined ( $state->{entries}{$filename}{revision} ) ); - - return $1 if ( $state->{entries}{$filename}{revision} =~ /^1\.(\d+)/ ); - return -$1 if ( $state->{entries}{$filename}{revision} =~ /^-1\.(\d+)/ ); - - return undef; + return $state->{entries}{$filename}{revision}; } # This method takes a file hash and does a CVS "file transfer". Its @@ -2444,42 +2536,14 @@ sub kopts_from_path } elsif( ($cfg->{gitcvs}{allbinary} =~ /^\s*guess\s*$/i) ) { - if( $srcType eq "sha1Or-k" && - !defined($name) ) + if( is_binary($srcType,$name) ) { - my ($ret)=$state->{entries}{$path}{options}; - if( !defined($ret) ) - { - $ret=$state->{opt}{k}; - if(defined($ret)) - { - $ret="-k$ret"; - } - else - { - $ret=""; - } - } - if( ! ($ret=~/^(|-kb|-kkv|-kkvl|-kk|-ko|-kv)$/) ) - { - print "E Bad -k option\n"; - $log->warn("Bad -k option: $ret"); - die "Error: Bad -k option: $ret\n"; - } - - return $ret; + $log->debug("... as binary"); + return "-kb"; } else { - if( is_binary($srcType,$name) ) - { - $log->debug("... as binary"); - return "-kb"; - } - else - { - $log->debug("... as text"); - } + $log->debug("... as text"); } } } @@ -2586,7 +2650,7 @@ sub open_blob_or_die die "Unable to open file $name: $!\n"; } } - elsif( $srcType eq "sha1" || $srcType eq "sha1Or-k" ) + elsif( $srcType eq "sha1" ) { unless ( defined ( $name ) and $name =~ /^[a-zA-Z0-9]{40}$/ ) { @@ -2929,6 +2993,16 @@ sub new } # Construct the revision table if required + # The revision table stores an entry for each file, each time that file + # changes. + # numberOfRecords = O( numCommits * averageNumChangedFilesPerCommit ) + # This is not sufficient to support "-r {commithash}" for any + # files except files that were modified by that commit (also, + # some places in the code ignore/effectively strip out -r in + # some cases, before it gets passed to getmeta()). + # The "filehash" field typically has a git blob hash, but can also + # be set to "dead" to indicate that the given version of the file + # should not exist in the sandbox. unless ( $self->{tables}{$self->tablename("revision")} ) { my $tablename = $self->tablename("revision"); @@ -2956,6 +3030,15 @@ sub new } # Construct the head table if required + # The head table (along with the "last_commit" entry in the property + # table) is the persisted working state of the "sub update" subroutine. + # All of it's data is read entirely first, and completely recreated + # last, every time "sub update" runs. + # This is also used by "sub getmeta" when it is asked for the latest + # version of a file (as opposed to some specific version). + # Another way of thinking about it is as a single slice out of + # "revisions", giving just the most recent revision information for + # each file. unless ( $self->{tables}{$self->tablename("head")} ) { my $tablename = $self->tablename("head"); @@ -2978,6 +3061,7 @@ sub new } # Construct the properties table if required + # - "last_commit" - Used by "sub update". unless ( $self->{tables}{$self->tablename("properties")} ) { my $tablename = $self->tablename("properties"); @@ -2990,6 +3074,11 @@ sub new } # Construct the commitmsgs table if required + # The commitmsgs table is only used for merge commits, since + # "sub update" will only keep one branch of parents. Shortlogs + # for ignored commits (i.e. 
not on the chosen branch) will be used + # to construct a replacement "collapsed" merge commit message, + # which will be stored in this table. See also "sub commitmessage". unless ( $self->{tables}{$self->tablename("commitmsgs")} ) { my $tablename = $self->tablename("commitmsgs"); @@ -3021,6 +3110,14 @@ sub tablename =head2 update +Bring the database up to date with the latest changes from +the git repository. + +Internal working state is read out of the "head" table and the +"last_commit" property, then it updates "revisions" based on that, and +finally it writes the new internal state back to the "head" table +so it can be used as a starting point the next time update is called. + =cut sub update { @@ -3116,7 +3213,7 @@ sub update my $commitcount = 0; # Load the head table into $head (for cached lookups during the update process) - foreach my $file ( @{$self->gethead()} ) + foreach my $file ( @{$self->gethead(1)} ) { $head->{$file->{name}} = $file; } @@ -3134,17 +3231,18 @@ sub update next; } elsif (@{$commit->{parents}} > 1) { # it is a merge commit, for each parent that is - # not $lastpicked, see if we can get a log + # not $lastpicked (not given a CVS revision number), + # see if we can get a log # from the merge-base to that parent to put it # in the message as a merge summary. my @parents = @{$commit->{parents}}; foreach my $parent (@parents) { - # git-merge-base can potentially (but rarely) throw - # several candidate merge bases. let's assume - # that the first one is the best one. if ($parent eq $lastpicked) { next; } + # git-merge-base can potentially (but rarely) throw + # several candidate merge bases. let's assume + # that the first one is the best one. my $base = eval { safe_pipe_capture('git', 'merge-base', $lastpicked, $parent); @@ -3426,19 +3524,6 @@ sub insert_head $insert_head->execute($name, $revision, $filehash, $commithash, $modified, $author, $mode); } -sub _headrev -{ - my $self = shift; - my $filename = shift; - my $tablename = $self->tablename("head"); - - my $db_query = $self->{dbh}->prepare_cached("SELECT filehash, revision, mode FROM $tablename WHERE name=?",{},1); - $db_query->execute($filename); - my ( $hash, $revision, $mode ) = $db_query->fetchrow_array; - - return ( $hash, $revision, $mode ); -} - sub _get_prop { my $self = shift; @@ -3478,6 +3563,7 @@ sub _set_prop sub gethead { my $self = shift; + my $intRev = shift; my $tablename = $self->tablename("head"); return $self->{gethead_cache} if ( defined ( $self->{gethead_cache} ) ); @@ -3488,6 +3574,10 @@ sub gethead my $tree = []; while ( my $file = $db_query->fetchrow_hashref ) { + if(!$intRev) + { + $file->{revision} = "1.$file->{revision}" + } push @$tree, $file; } @@ -3498,24 +3588,57 @@ sub gethead =head2 getlog +See also gethistorydense(). + =cut sub getlog { my $self = shift; my $filename = shift; + my $revFilter = shift; + my $tablename = $self->tablename("revision"); + # Filters: + # TODO: date, state, or by specific logins filters? + # TODO: Handle comma-separated list of revFilter items, each item + # can be a range [only case currently handled] or individual + # rev or branch or "branch.". + # TODO: Adjust $db_query WHERE clause based on revFilter, instead of + # manually filtering the results of the query? 
+ my ( $minrev, $maxrev ); + if( defined($revFilter) and + $state->{opt}{r} =~ /^(1.(\d+))?(::?)(1.(\d.+))?$/ ) + { + my $control = $3; + $minrev = $2; + $maxrev = $5; + $minrev++ if ( defined($minrev) and $control eq "::" ); + } + my $db_query = $self->{dbh}->prepare_cached("SELECT name, filehash, author, mode, revision, modified, commithash FROM $tablename WHERE name=? ORDER BY revision DESC",{},1); $db_query->execute($filename); + my $totalRevs=0; my $tree = []; while ( my $file = $db_query->fetchrow_hashref ) { + $totalRevs++; + if( defined($minrev) and $file->{revision} < $minrev ) + { + next; + } + if( defined($maxrev) and $file->{revision} > $maxrev ) + { + next; + } + + $file->{revision} = "1." . $file->{revision}; push @$tree, $file; } - return $tree; + return ($tree,$totalRevs); } =head2 getmeta @@ -3534,10 +3657,11 @@ sub getmeta my $tablename_head = $self->tablename("head"); my $db_query; - if ( defined($revision) and $revision =~ /^\d+$/ ) + if ( defined($revision) and $revision =~ /^1\.(\d+)$/ ) { + my ($intRev) = $1; $db_query = $self->{dbh}->prepare_cached("SELECT * FROM $tablename_rev WHERE name=? AND revision=?",{},1); - $db_query->execute($filename, $revision); + $db_query->execute($filename, $intRev); } elsif ( defined($revision) and $revision =~ /^[a-zA-Z0-9]{40}$/ ) { @@ -3548,7 +3672,12 @@ sub getmeta $db_query->execute($filename); } - return $db_query->fetchrow_hashref; + my $meta = $db_query->fetchrow_hashref; + if($meta) + { + $meta->{revision} = "1.$meta->{revision}"; + } + return $meta; } =head2 commitmessage @@ -3583,25 +3712,6 @@ sub commitmessage return $message; } -=head2 gethistory - -This function takes a filename (with path) argument and returns an arrayofarrays -containing revision,filehash,commithash ordered by revision descending - -=cut -sub gethistory -{ - my $self = shift; - my $filename = shift; - my $tablename = $self->tablename("revision"); - - my $db_query; - $db_query = $self->{dbh}->prepare_cached("SELECT revision, filehash, commithash FROM $tablename WHERE name=? ORDER BY revision DESC",{},1); - $db_query->execute($filename); - - return $db_query->fetchall_arrayref; -} - =head2 gethistorydense This function takes a filename (with path) argument and returns an arrayofarrays @@ -3611,6 +3721,8 @@ This version of gethistory skips deleted entries -- so it is useful for annotate The 'dense' part is a reference to a '--dense' option available for git-rev-list and other git tools that depend on it. +See also getlog(). + =cut sub gethistorydense { @@ -3622,7 +3734,15 @@ sub gethistorydense $db_query = $self->{dbh}->prepare_cached("SELECT revision, filehash, commithash FROM $tablename WHERE name=? AND filehash!='deleted' ORDER BY revision DESC",{},1); $db_query->execute($filename); - return $db_query->fetchall_arrayref; + my $result = $db_query->fetchall_arrayref; + + my $i; + for($i=0 ; $i<scalar(@$result) ; $i++) + { + $result->[$i][0]="1." . $result->[$i][0]; + } + + return $result; } =head2 in_array() diff --git a/git-pull.sh b/git-pull.sh index 2a10047eb7..266e682f6c 100755 --- a/git-pull.sh +++ b/git-pull.sh @@ -200,6 +200,7 @@ test true = "$rebase" && { require_clean_work_tree "pull with rebase" "Please commit or stash them." fi oldremoteref= && + test -n "$curr_branch" && . 
git-parse-remote && remoteref="$(get_remote_merge_branch "$@" 2>/dev/null)" && oldremoteref="$(git rev-parse -q --verify "$remoteref")" && diff --git a/git-send-email.perl b/git-send-email.perl index aea66a0d47..5a7c29db93 100755 --- a/git-send-email.perl +++ b/git-send-email.perl @@ -56,6 +56,7 @@ git send-email [options] <file | directory | rev-list options > --in-reply-to <str> * Email "In-Reply-To:" --annotate * Review each patch that will be sent in an editor. --compose * Open an editor for introduction. + --compose-encoding <str> * Encoding to assume for introduction. --8bit-encoding <str> * Encoding to assume 8bit mails if undeclared Sending: @@ -198,6 +199,7 @@ my ($identity, $aliasfiletype, @alias_files, $smtp_domain); my ($validate, $confirm); my (@suppress_cc); my ($auto_8bit_encoding); +my ($compose_encoding); my ($debug_net_smtp) = 0; # Net::SMTP, see send_message() @@ -231,6 +233,7 @@ my %config_settings = ( "confirm" => \$confirm, "from" => \$sender, "assume8bitencoding" => \$auto_8bit_encoding, + "composeencoding" => \$compose_encoding, ); my %config_path_settings = ( @@ -315,6 +318,7 @@ my $rc = GetOptions("h" => \$help, "validate!" => \$validate, "format-patch!" => \$format_patch, "8bit-encoding=s" => \$auto_8bit_encoding, + "compose-encoding=s" => \$compose_encoding, "force" => \$force, ); @@ -632,6 +636,9 @@ EOT my $need_8bit_cte = file_has_nonascii($compose_filename); my $in_body = 0; my $summary_empty = 1; + if (!defined $compose_encoding) { + $compose_encoding = "UTF-8"; + } while(<$c>) { next if m/^GIT:/; if ($in_body) { @@ -641,7 +648,7 @@ EOT if ($need_8bit_cte) { print $c2 "MIME-Version: 1.0\n", "Content-Type: text/plain; ", - "charset=UTF-8\n", + "charset=$compose_encoding\n", "Content-Transfer-Encoding: 8bit\n"; } } elsif (/^MIME-Version:/i) { @@ -650,9 +657,7 @@ EOT $initial_subject = $1; my $subject = $initial_subject; $_ = "Subject: " . - ($subject =~ /[^[:ascii:]]/ ? - quote_rfc2047($subject) : - $subject) . + quote_subject($subject, $compose_encoding) . "\n"; } elsif (/^In-Reply-To:\s*(.+)\s*$/i) { $initial_reply_to = $1; @@ -900,6 +905,22 @@ sub is_rfc2047_quoted { $s =~ m/^(?:"[[:ascii:]]*"|=\?$token\?$token\?$encoded_text\?=)$/o; } +sub subject_needs_rfc2047_quoting { + my $s = shift; + + return ($s =~ /[^[:ascii:]]/) || ($s =~ /=\?/); +} + +sub quote_subject { + local $subject = shift; + my $encoding = shift || 'UTF-8'; + + if (subject_needs_rfc2047_quoting($subject)) { + return quote_rfc2047($subject, $encoding); + } + return $subject; +} + # use the simplest quoting being able to handle the recipient sub sanitize_address { my ($recipient) = @_; @@ -1321,7 +1342,7 @@ foreach my $t (@files) { } if ($broken_encoding{$t} && !is_rfc2047_quoted($subject)) { - $subject = quote_rfc2047($subject, $auto_8bit_encoding); + $subject = quote_subject($subject, $auto_8bit_encoding); } if (defined $author and $author ne $sender) { diff --git a/git-submodule.sh b/git-submodule.sh index 9e1e1f4410..0522c3871a 100755 --- a/git-submodule.sh +++ b/git-submodule.sh @@ -11,7 +11,7 @@ USAGE="[--quiet] add [-b branch] [-f|--force] [--name <name>] [--reference <repo or: $dashless [--quiet] update [--init] [-N|--no-fetch] [-f|--force] [--rebase] [--reference <repository>] [--merge] [--recursive] [--] [<path>...] or: $dashless [--quiet] summary [--cached|--files] [--summary-limit <n>] [commit] [--] [<path>...] 
or: $dashless [--quiet] foreach [--recursive] <command> - or: $dashless [--quiet] sync [--] [<path>...]" + or: $dashless [--quiet] sync [--recursive] [--] [<path>...]" OPTIONS_SPEC= . git-sh-setup . git-sh-i18n @@ -270,7 +270,6 @@ cmd_add() ;; --reference=*) reference="$1" - shift ;; --name) case "$2" in '') usage ;; esac @@ -951,7 +950,6 @@ cmd_summary() { cmd_status() { # parse $args after "submodule ... status". - orig_flags= while test $# -ne 0 do case "$1" in @@ -975,7 +973,6 @@ cmd_status() break ;; esac - orig_flags="$orig_flags $(git rev-parse --sq-quote "$1")" shift done @@ -1015,7 +1012,7 @@ cmd_status() prefix="$displaypath/" clear_local_git_env cd "$sm_path" && - eval cmd_status "$orig_args" + eval cmd_status ) || die "$(eval_gettext "Failed to recurse into submodule path '\$sm_path'")" fi @@ -1035,6 +1032,10 @@ cmd_sync() GIT_QUIET=1 shift ;; + --recursive) + recursive=1 + shift + ;; --) shift break @@ -1076,7 +1077,7 @@ cmd_sync() if git config "submodule.$name.url" >/dev/null 2>/dev/null then - say "$(eval_gettext "Synchronizing submodule url for '\$name'")" + say "$(eval_gettext "Synchronizing submodule url for '\$prefix\$sm_path'")" git config submodule."$name".url "$super_config_url" if test -e "$sm_path"/.git @@ -1086,6 +1087,12 @@ cmd_sync() cd "$sm_path" remote=$(get_default_remote) git config remote."$remote".url "$sub_origin_url" + + if test -n "$recursive" + then + prefix="$prefix$sm_path/" + eval cmd_sync + fi ) fi fi @@ -17,39 +17,6 @@ const char git_more_info_string[] = static struct startup_info git_startup_info; static int use_pager = -1; -struct pager_config { - const char *cmd; - int want; - char *value; -}; - -static int pager_command_config(const char *var, const char *value, void *data) -{ - struct pager_config *c = data; - if (!prefixcmp(var, "pager.") && !strcmp(var + 6, c->cmd)) { - int b = git_config_maybe_bool(var, value); - if (b >= 0) - c->want = b; - else { - c->want = 1; - c->value = xstrdup(value); - } - } - return 0; -} - -/* returns 0 for "no pager", 1 for "use pager", and -1 for "not specified" */ -int check_pager_config(const char *cmd) -{ - struct pager_config c; - c.cmd = cmd; - c.want = -1; - c.value = NULL; - git_config(pager_command_config, &c); - if (c.value) - pager_program = c.value; - return c.want; -} static void commit_pager_choice(void) { switch (use_pager) { @@ -631,6 +631,18 @@ void run_active_slot(struct active_request_slot *slot) FD_ZERO(&excfds); curl_multi_fdset(curlm, &readfds, &writefds, &excfds, &max_fd); + /* + * It can happen that curl_multi_timeout returns a pathologically + * long timeout when curl_multi_fdset returns no file descriptors + * to read. See commit message for more details. + */ + if (max_fd < 0 && + (select_timeout.tv_sec > 0 || + select_timeout.tv_usec > 50000)) { + select_timeout.tv_sec = 0; + select_timeout.tv_usec = 50000; + } + select(max_fd+1, &readfds, &writefds, &excfds, &select_timeout); } } @@ -118,7 +118,7 @@ static char *parse_name_and_email(char *buffer, char **name, while (isspace(*nstart) && nstart < left) ++nstart; nend = left-1; - while (isspace(*nend) && nend > nstart) + while (nend > nstart && isspace(*nend)) --nend; *name = (nstart < nend ? 
nstart : NULL); diff --git a/merge-recursive.h b/merge-recursive.h index 58f3435e9e..9e090a3470 100644 --- a/merge-recursive.h +++ b/merge-recursive.h @@ -59,9 +59,4 @@ struct tree *write_tree_from_memory(struct merge_options *o); int parse_merge_opt(struct merge_options *out, const char *s); -/* builtin/merge.c */ -int try_merge_command(const char *strategy, size_t xopts_nr, - const char **xopts, struct commit_list *common, - const char *head_arg, struct commit_list *remotes); - #endif diff --git a/merge.c b/merge.c new file mode 100644 index 0000000000..70f1000fcb --- /dev/null +++ b/merge.c @@ -0,0 +1,112 @@ +#include "cache.h" +#include "commit.h" +#include "run-command.h" +#include "resolve-undo.h" +#include "tree-walk.h" +#include "unpack-trees.h" +#include "dir.h" + +static const char *merge_argument(struct commit *commit) +{ + if (commit) + return sha1_to_hex(commit->object.sha1); + else + return EMPTY_TREE_SHA1_HEX; +} + +int try_merge_command(const char *strategy, size_t xopts_nr, + const char **xopts, struct commit_list *common, + const char *head_arg, struct commit_list *remotes) +{ + const char **args; + int i = 0, x = 0, ret; + struct commit_list *j; + struct strbuf buf = STRBUF_INIT; + + args = xmalloc((4 + xopts_nr + commit_list_count(common) + + commit_list_count(remotes)) * sizeof(char *)); + strbuf_addf(&buf, "merge-%s", strategy); + args[i++] = buf.buf; + for (x = 0; x < xopts_nr; x++) { + char *s = xmalloc(strlen(xopts[x])+2+1); + strcpy(s, "--"); + strcpy(s+2, xopts[x]); + args[i++] = s; + } + for (j = common; j; j = j->next) + args[i++] = xstrdup(merge_argument(j->item)); + args[i++] = "--"; + args[i++] = head_arg; + for (j = remotes; j; j = j->next) + args[i++] = xstrdup(merge_argument(j->item)); + args[i] = NULL; + ret = run_command_v_opt(args, RUN_GIT_CMD); + strbuf_release(&buf); + i = 1; + for (x = 0; x < xopts_nr; x++) + free((void *)args[i++]); + for (j = common; j; j = j->next) + free((void *)args[i++]); + i += 2; + for (j = remotes; j; j = j->next) + free((void *)args[i++]); + free(args); + discard_cache(); + if (read_cache() < 0) + die(_("failed to read the cache")); + resolve_undo_clear(); + + return ret; +} + +int checkout_fast_forward(const unsigned char *head, + const unsigned char *remote, + int overwrite_ignore) +{ + struct tree *trees[MAX_UNPACK_TREES]; + struct unpack_trees_options opts; + struct tree_desc t[MAX_UNPACK_TREES]; + int i, fd, nr_trees = 0; + struct dir_struct dir; + struct lock_file *lock_file = xcalloc(1, sizeof(struct lock_file)); + + refresh_cache(REFRESH_QUIET); + + fd = hold_locked_index(lock_file, 1); + + memset(&trees, 0, sizeof(trees)); + memset(&opts, 0, sizeof(opts)); + memset(&t, 0, sizeof(t)); + if (overwrite_ignore) { + memset(&dir, 0, sizeof(dir)); + dir.flags |= DIR_SHOW_IGNORED; + setup_standard_excludes(&dir); + opts.dir = &dir; + } + + opts.head_idx = 1; + opts.src_index = &the_index; + opts.dst_index = &the_index; + opts.update = 1; + opts.verbose_update = 1; + opts.merge = 1; + opts.fn = twoway_merge; + setup_unpack_trees_porcelain(&opts, "merge"); + + trees[nr_trees] = parse_tree_indirect(head); + if (!trees[nr_trees++]) + return -1; + trees[nr_trees] = parse_tree_indirect(remote); + if (!trees[nr_trees++]) + return -1; + for (i = 0; i < nr_trees; i++) { + parse_tree(trees[i]); + init_tree_desc(t+i, trees[i]->buffer, trees[i]->size); + } + if (unpack_trees(nr_trees, t, &opts)) + return -1; + if (write_cache(fd, active_cache, active_nr) || + commit_locked_index(lock_file)) + die(_("unable to write new index file")); 
+ return 0; +} @@ -1231,7 +1231,7 @@ static void format_note(struct notes_tree *t, const unsigned char *object_sha1, } if (output_encoding && *output_encoding && - strcmp(utf8, output_encoding)) { + !is_encoding_utf8(output_encoding)) { char *reencoded = reencode_string(msg, output_encoding, utf8); if (reencoded) { free(msg); @@ -6,6 +6,12 @@ #define DEFAULT_PAGER "less" #endif +struct pager_config { + const char *cmd; + int want; + char *value; +}; + /* * This is split up from the rest of git so that we can do * something different on Windows. @@ -141,3 +147,31 @@ int decimal_width(int number) i *= 10; return width; } + +static int pager_command_config(const char *var, const char *value, void *data) +{ + struct pager_config *c = data; + if (!prefixcmp(var, "pager.") && !strcmp(var + 6, c->cmd)) { + int b = git_config_maybe_bool(var, value); + if (b >= 0) + c->want = b; + else { + c->want = 1; + c->value = xstrdup(value); + } + } + return 0; +} + +/* returns 0 for "no pager", 1 for "use pager", and -1 for "not specified" */ +int check_pager_config(const char *cmd) +{ + struct pager_config c; + c.cmd = cmd; + c.want = -1; + c.value = NULL; + git_config(pager_command_config, &c); + if (c.value) + pager_program = c.value; + return c.want; +} @@ -231,7 +231,7 @@ static int is_rfc822_special(char ch) } } -static int has_rfc822_specials(const char *s, int len) +static int needs_rfc822_quoting(const char *s, int len) { int i; for (i = 0; i < len; i++) @@ -240,6 +240,17 @@ static int has_rfc822_specials(const char *s, int len) return 0; } +static int last_line_length(struct strbuf *sb) +{ + int i; + + /* How many bytes are already used on the last line? */ + for (i = sb->len - 1; i >= 0; i--) + if (sb->buf[i] == '\n') + break; + return sb->len - (i + 1); +} + static void add_rfc822_quoted(struct strbuf *out, const char *s, int len) { int i; @@ -261,57 +272,110 @@ static void add_rfc822_quoted(struct strbuf *out, const char *s, int len) strbuf_addch(out, '"'); } -static int is_rfc2047_special(char ch) +enum rfc2047_type { + RFC2047_SUBJECT, + RFC2047_ADDRESS, +}; + +static int is_rfc2047_special(char ch, enum rfc2047_type type) { - return (non_ascii(ch) || (ch == '=') || (ch == '?') || (ch == '_')); + /* + * rfc2047, section 4.2: + * + * 8-bit values which correspond to printable ASCII characters other + * than "=", "?", and "_" (underscore), MAY be represented as those + * characters. (But see section 5 for restrictions.) In + * particular, SPACE and TAB MUST NOT be represented as themselves + * within encoded words. + */ + + /* + * rule out non-ASCII characters and non-printable characters (the + * non-ASCII check should be redundant as isprint() is not localized + * and only knows about ASCII, but be defensive about that) + */ + if (non_ascii(ch) || !isprint(ch)) + return 1; + + /* + * rule out special printable characters (' ' should be the only + * whitespace character considered printable, but be defensive and use + * isspace()) + */ + if (isspace(ch) || ch == '=' || ch == '?' || ch == '_') + return 1; + + /* + * rfc2047, section 5.3: + * + * As a replacement for a 'word' entity within a 'phrase', for example, + * one that precedes an address in a From, To, or Cc header. 
The ABNF + * definition for 'phrase' from RFC 822 thus becomes: + * + * phrase = 1*( encoded-word / word ) + * + * In this case the set of characters that may be used in a "Q"-encoded + * 'encoded-word' is restricted to: <upper and lower case ASCII + * letters, decimal digits, "!", "*", "+", "-", "/", "=", and "_" + * (underscore, ASCII 95.)>. An 'encoded-word' that appears within a + * 'phrase' MUST be separated from any adjacent 'word', 'text' or + * 'special' by 'linear-white-space'. + */ + + if (type != RFC2047_ADDRESS) + return 0; + + /* '=' and '_' are special cases and have been checked above */ + return !(isalnum(ch) || ch == '!' || ch == '*' || ch == '+' || ch == '-' || ch == '/'); } -static void add_rfc2047(struct strbuf *sb, const char *line, int len, - const char *encoding) +static int needs_rfc2047_encoding(const char *line, int len, + enum rfc2047_type type) { - static const int max_length = 78; /* per rfc2822 */ int i; - int line_len; - - /* How many bytes are already used on the current line? */ - for (i = sb->len - 1; i >= 0; i--) - if (sb->buf[i] == '\n') - break; - line_len = sb->len - (i+1); for (i = 0; i < len; i++) { int ch = line[i]; if (non_ascii(ch) || ch == '\n') - goto needquote; + return 1; if ((i + 1 < len) && (ch == '=' && line[i+1] == '?')) - goto needquote; + return 1; } - strbuf_add_wrapped_bytes(sb, line, len, 0, 1, max_length - line_len); - return; -needquote: + return 0; +} + +static void add_rfc2047(struct strbuf *sb, const char *line, int len, + const char *encoding, enum rfc2047_type type) +{ + static const int max_encoded_length = 76; /* per rfc2047 */ + int i; + int line_len = last_line_length(sb); + strbuf_grow(sb, len * 3 + strlen(encoding) + 100); strbuf_addf(sb, "=?%s?q?", encoding); line_len += strlen(encoding) + 5; /* 5 for =??q? */ for (i = 0; i < len; i++) { unsigned ch = line[i] & 0xFF; + int is_special = is_rfc2047_special(ch, type); + + /* + * According to RFC 2047, we could encode the special character + * ' ' (space) with '_' (underscore) for readability. But many + * programs do not understand this and just leave the + * underscore in place. Thus, we do nothing special here, which + * causes ' ' to be encoded as '=20', avoiding this problem. + */ - if (line_len >= max_length - 2) { + if (line_len + 2 + (is_special ? 3 : 1) > max_encoded_length) { strbuf_addf(sb, "?=\n =?%s?q?", encoding); line_len = strlen(encoding) + 5 + 1; /* =??q? plus SP */ } - /* - * We encode ' ' using '=20' even though rfc2047 - * allows using '_' for readability. Unfortunately, - * many programs do not understand this and just - * leave the underscore in place. 
- */ - if (is_rfc2047_special(ch) || ch == ' ' || ch == '\n') { + if (is_special) { strbuf_addf(sb, "=%02X", ch); line_len += 3; - } - else { + } else { strbuf_addch(sb, ch); line_len++; } @@ -323,6 +387,7 @@ void pp_user_info(const struct pretty_print_context *pp, const char *what, struct strbuf *sb, const char *line, const char *encoding) { + int max_length = 78; /* per rfc2822 */ char *date; int namelen; unsigned long time; @@ -340,25 +405,27 @@ void pp_user_info(const struct pretty_print_context *pp, if (pp->fmt == CMIT_FMT_EMAIL) { char *name_tail = strchr(line, '<'); int display_name_length; - int final_line; if (!name_tail) return; while (line < name_tail && isspace(name_tail[-1])) name_tail--; display_name_length = name_tail - line; strbuf_addstr(sb, "From: "); - if (!has_rfc822_specials(line, display_name_length)) { - add_rfc2047(sb, line, display_name_length, encoding); - } else { + if (needs_rfc2047_encoding(line, display_name_length, RFC2047_ADDRESS)) { + add_rfc2047(sb, line, display_name_length, + encoding, RFC2047_ADDRESS); + max_length = 76; /* per rfc2047 */ + } else if (needs_rfc822_quoting(line, display_name_length)) { struct strbuf quoted = STRBUF_INIT; add_rfc822_quoted("ed, line, display_name_length); - add_rfc2047(sb, quoted.buf, quoted.len, encoding); + strbuf_add_wrapped_bytes(sb, quoted.buf, quoted.len, + -6, 1, max_length); strbuf_release("ed); + } else { + strbuf_add_wrapped_bytes(sb, line, display_name_length, + -6, 1, max_length); } - for (final_line = 0; final_line < sb->len; final_line++) - if (sb->buf[sb->len - final_line - 1] == '\n') - break; - if (namelen - display_name_length + final_line > 78) { + if (namelen - display_name_length + last_line_length(sb) > max_length) { strbuf_addch(sb, '\n'); if (!isspace(name_tail[0])) strbuf_addch(sb, ' '); @@ -504,7 +571,7 @@ char *logmsg_reencode(const struct commit *commit, return NULL; encoding = get_header(commit, "encoding"); use_encoding = encoding ? 
encoding : utf8; - if (!strcmp(use_encoding, output_encoding)) + if (same_encoding(use_encoding, output_encoding)) if (encoding) /* we'll strip encoding header later */ out = xstrdup(commit->buffer); else @@ -1278,6 +1345,7 @@ void pp_title_line(const struct pretty_print_context *pp, const char *encoding, int need_8bit_cte) { + static const int max_length = 78; /* per rfc2047 */ struct strbuf title; strbuf_init(&title, 80); @@ -1287,7 +1355,12 @@ void pp_title_line(const struct pretty_print_context *pp, strbuf_grow(sb, title.len + 1024); if (pp->subject) { strbuf_addstr(sb, pp->subject); - add_rfc2047(sb, title.buf, title.len, encoding); + if (needs_rfc2047_encoding(title.buf, title.len, RFC2047_SUBJECT)) + add_rfc2047(sb, title.buf, title.len, + encoding, RFC2047_SUBJECT); + else + strbuf_add_wrapped_bytes(sb, title.buf, title.len, + -last_line_length(sb), 1, max_length); } else { strbuf_addbuf(sb, &title); } @@ -1762,26 +1762,18 @@ int delete_ref(const char *refname, const unsigned char *sha1, int delopt) struct ref_lock *lock; int err, i = 0, ret = 0, flag = 0; - lock = lock_ref_sha1_basic(refname, sha1, 0, &flag); + lock = lock_ref_sha1_basic(refname, sha1, delopt, &flag); if (!lock) return 1; if (!(flag & REF_ISPACKED) || flag & REF_ISSYMREF) { /* loose */ - const char *path; - - if (!(delopt & REF_NODEREF)) { - i = strlen(lock->lk->filename) - 5; /* .lock */ - lock->lk->filename[i] = 0; - path = lock->lk->filename; - } else { - path = git_path("%s", refname); - } - err = unlink_or_warn(path); + i = strlen(lock->lk->filename) - 5; /* .lock */ + lock->lk->filename[i] = 0; + err = unlink_or_warn(lock->lk->filename); if (err && errno != ENOENT) ret = 1; - if (!(delopt & REF_NODEREF)) - lock->lk->filename[i] = '.'; + lock->lk->filename[i] = '.'; } /* removing the loose one could have resurrected an earlier * packed one. Also, if it was not loose we need to repack @@ -1458,8 +1458,8 @@ int get_fetch_map(const struct ref *remote_refs, for (rmp = &ref_map; *rmp; ) { if ((*rmp)->peer_ref) { - if (check_refname_format((*rmp)->peer_ref->name + 5, - REFNAME_ALLOW_ONELEVEL)) { + if (prefixcmp((*rmp)->peer_ref->name, "refs/") || + check_refname_format((*rmp)->peer_ref->name, 0)) { struct ref *ignore = *rmp; error("* Ignoring funny ref '%s' locally", (*rmp)->peer_ref->name); diff --git a/send-pack.c b/send-pack.c new file mode 100644 index 0000000000..f50dfd9f48 --- /dev/null +++ b/send-pack.c @@ -0,0 +1,344 @@ +#include "builtin.h" +#include "commit.h" +#include "refs.h" +#include "pkt-line.h" +#include "sideband.h" +#include "run-command.h" +#include "remote.h" +#include "send-pack.h" +#include "quote.h" +#include "transport.h" +#include "version.h" + +static int feed_object(const unsigned char *sha1, int fd, int negative) +{ + char buf[42]; + + if (negative && !has_sha1_file(sha1)) + return 1; + + memcpy(buf + negative, sha1_to_hex(sha1), 40); + if (negative) + buf[0] = '^'; + buf[40 + negative] = '\n'; + return write_or_whine(fd, buf, 41 + negative, "send-pack: send refs"); +} + +/* + * Make a pack stream and spit it out into file descriptor fd + */ +static int pack_objects(int fd, struct ref *refs, struct extra_have_objects *extra, struct send_pack_args *args) +{ + /* + * The child becomes pack-objects --revs; we feed + * the revision parameters to it via its stdin and + * let its stdout go back to the other end. 
+ */ + const char *argv[] = { + "pack-objects", + "--all-progress-implied", + "--revs", + "--stdout", + NULL, + NULL, + NULL, + NULL, + NULL, + }; + struct child_process po; + int i; + + i = 4; + if (args->use_thin_pack) + argv[i++] = "--thin"; + if (args->use_ofs_delta) + argv[i++] = "--delta-base-offset"; + if (args->quiet || !args->progress) + argv[i++] = "-q"; + if (args->progress) + argv[i++] = "--progress"; + memset(&po, 0, sizeof(po)); + po.argv = argv; + po.in = -1; + po.out = args->stateless_rpc ? -1 : fd; + po.git_cmd = 1; + if (start_command(&po)) + die_errno("git pack-objects failed"); + + /* + * We feed the pack-objects we just spawned with revision + * parameters by writing to the pipe. + */ + for (i = 0; i < extra->nr; i++) + if (!feed_object(extra->array[i], po.in, 1)) + break; + + while (refs) { + if (!is_null_sha1(refs->old_sha1) && + !feed_object(refs->old_sha1, po.in, 1)) + break; + if (!is_null_sha1(refs->new_sha1) && + !feed_object(refs->new_sha1, po.in, 0)) + break; + refs = refs->next; + } + + close(po.in); + + if (args->stateless_rpc) { + char *buf = xmalloc(LARGE_PACKET_MAX); + while (1) { + ssize_t n = xread(po.out, buf, LARGE_PACKET_MAX); + if (n <= 0) + break; + send_sideband(fd, -1, buf, n, LARGE_PACKET_MAX); + } + free(buf); + close(po.out); + po.out = -1; + } + + if (finish_command(&po)) + return -1; + return 0; +} + +static int receive_status(int in, struct ref *refs) +{ + struct ref *hint; + char line[1000]; + int ret = 0; + int len = packet_read_line(in, line, sizeof(line)); + if (len < 10 || memcmp(line, "unpack ", 7)) + return error("did not receive remote status"); + if (memcmp(line, "unpack ok\n", 10)) { + char *p = line + strlen(line) - 1; + if (*p == '\n') + *p = '\0'; + error("unpack failed: %s", line + 7); + ret = -1; + } + hint = NULL; + while (1) { + char *refname; + char *msg; + len = packet_read_line(in, line, sizeof(line)); + if (!len) + break; + if (len < 3 || + (memcmp(line, "ok ", 3) && memcmp(line, "ng ", 3))) { + fprintf(stderr, "protocol error: %s\n", line); + ret = -1; + break; + } + + line[strlen(line)-1] = '\0'; + refname = line + 3; + msg = strchr(refname, ' '); + if (msg) + *msg++ = '\0'; + + /* first try searching at our hint, falling back to all refs */ + if (hint) + hint = find_ref_by_name(hint, refname); + if (!hint) + hint = find_ref_by_name(refs, refname); + if (!hint) { + warning("remote reported status on unknown ref: %s", + refname); + continue; + } + if (hint->status != REF_STATUS_EXPECTING_REPORT) { + warning("remote reported status on unexpected ref: %s", + refname); + continue; + } + + if (line[0] == 'o' && line[1] == 'k') + hint->status = REF_STATUS_OK; + else { + hint->status = REF_STATUS_REMOTE_REJECT; + ret = -1; + } + if (msg) + hint->remote_status = xstrdup(msg); + /* start our next search from the next ref */ + hint = hint->next; + } + return ret; +} + +static int sideband_demux(int in, int out, void *data) +{ + int *fd = data, ret; +#ifdef NO_PTHREADS + close(fd[1]); +#endif + ret = recv_sideband("send-pack", fd[0], out); + close(out); + return ret; +} + +int send_pack(struct send_pack_args *args, + int fd[], struct child_process *conn, + struct ref *remote_refs, + struct extra_have_objects *extra_have) +{ + int in = fd[0]; + int out = fd[1]; + struct strbuf req_buf = STRBUF_INIT; + struct ref *ref; + int new_refs; + int allow_deleting_refs = 0; + int status_report = 0; + int use_sideband = 0; + int quiet_supported = 0; + int agent_supported = 0; + unsigned cmds_sent = 0; + int ret; + struct async demux; + + /* 
Does the other end support the reporting? */ + if (server_supports("report-status")) + status_report = 1; + if (server_supports("delete-refs")) + allow_deleting_refs = 1; + if (server_supports("ofs-delta")) + args->use_ofs_delta = 1; + if (server_supports("side-band-64k")) + use_sideband = 1; + if (server_supports("quiet")) + quiet_supported = 1; + if (server_supports("agent")) + agent_supported = 1; + + if (!remote_refs) { + fprintf(stderr, "No refs in common and none specified; doing nothing.\n" + "Perhaps you should specify a branch such as 'master'.\n"); + return 0; + } + + /* + * Finally, tell the other end! + */ + new_refs = 0; + for (ref = remote_refs; ref; ref = ref->next) { + if (!ref->peer_ref && !args->send_mirror) + continue; + + /* Check for statuses set by set_ref_status_for_push() */ + switch (ref->status) { + case REF_STATUS_REJECT_NONFASTFORWARD: + case REF_STATUS_UPTODATE: + continue; + default: + ; /* do nothing */ + } + + if (ref->deletion && !allow_deleting_refs) { + ref->status = REF_STATUS_REJECT_NODELETE; + continue; + } + + if (!ref->deletion) + new_refs++; + + if (args->dry_run) { + ref->status = REF_STATUS_OK; + } else { + char *old_hex = sha1_to_hex(ref->old_sha1); + char *new_hex = sha1_to_hex(ref->new_sha1); + int quiet = quiet_supported && (args->quiet || !args->progress); + + if (!cmds_sent && (status_report || use_sideband || + quiet || agent_supported)) { + packet_buf_write(&req_buf, + "%s %s %s%c%s%s%s%s%s", + old_hex, new_hex, ref->name, 0, + status_report ? " report-status" : "", + use_sideband ? " side-band-64k" : "", + quiet ? " quiet" : "", + agent_supported ? " agent=" : "", + agent_supported ? git_user_agent_sanitized() : "" + ); + } + else + packet_buf_write(&req_buf, "%s %s %s", + old_hex, new_hex, ref->name); + ref->status = status_report ? 
+ REF_STATUS_EXPECTING_REPORT : + REF_STATUS_OK; + cmds_sent++; + } + } + + if (args->stateless_rpc) { + if (!args->dry_run && cmds_sent) { + packet_buf_flush(&req_buf); + send_sideband(out, -1, req_buf.buf, req_buf.len, LARGE_PACKET_MAX); + } + } else { + safe_write(out, req_buf.buf, req_buf.len); + packet_flush(out); + } + strbuf_release(&req_buf); + + if (use_sideband && cmds_sent) { + memset(&demux, 0, sizeof(demux)); + demux.proc = sideband_demux; + demux.data = fd; + demux.out = -1; + if (start_async(&demux)) + die("send-pack: unable to fork off sideband demultiplexer"); + in = demux.out; + } + + if (new_refs && cmds_sent) { + if (pack_objects(out, remote_refs, extra_have, args) < 0) { + for (ref = remote_refs; ref; ref = ref->next) + ref->status = REF_STATUS_NONE; + if (args->stateless_rpc) + close(out); + if (git_connection_is_socket(conn)) + shutdown(fd[0], SHUT_WR); + if (use_sideband) + finish_async(&demux); + return -1; + } + } + if (args->stateless_rpc && cmds_sent) + packet_flush(out); + + if (status_report && cmds_sent) + ret = receive_status(in, remote_refs); + else + ret = 0; + if (args->stateless_rpc) + packet_flush(out); + + if (use_sideband && cmds_sent) { + if (finish_async(&demux)) { + error("error in sideband demultiplexer"); + ret = -1; + } + close(demux.out); + } + + if (ret < 0) + return ret; + + if (args->porcelain) + return 0; + + for (ref = remote_refs; ref; ref = ref->next) { + switch (ref->status) { + case REF_STATUS_NONE: + case REF_STATUS_UPTODATE: + case REF_STATUS_OK: + break; + default: + return -1; + } + } + return 0; +} diff --git a/sequencer.c b/sequencer.c index e3723d2095..22604902aa 100644 --- a/sequencer.c +++ b/sequencer.c @@ -60,7 +60,7 @@ static int get_message(struct commit *commit, struct commit_message *out) out->reencoded_message = NULL; out->message = commit->buffer; - if (strcmp(encoding, git_commit_encoding)) + if (same_encoding(encoding, git_commit_encoding)) out->reencoded_message = reencode_string(commit->buffer, git_commit_encoding, encoding); if (out->reencoded_message) @@ -191,7 +191,7 @@ static int fast_forward_to(const unsigned char *to, const unsigned char *from) struct ref_lock *ref_lock; read_cache(); - if (checkout_fast_forward(from, to)) + if (checkout_fast_forward(from, to, 1)) exit(1); /* the callee should have complained already */ ref_lock = lock_any_ref_for_update("HEAD", from, 0); return write_ref_sha1(ref_lock, to, "cherry-pick"); diff --git a/t/t0003-attributes.sh b/t/t0003-attributes.sh index febc45c9cc..807b8b88e2 100755 --- a/t/t0003-attributes.sh +++ b/t/t0003-attributes.sh @@ -196,6 +196,16 @@ test_expect_success 'root subdir attribute test' ' attr_check subdir/a/i unspecified ' +test_expect_success 'negative patterns' ' + echo "!f test=bar" >.gitattributes && + test_must_fail git check-attr test -- f +' + +test_expect_success 'patterns starting with exclamation' ' + echo "\!f test=foo" >.gitattributes && + attr_check "!f" foo +' + test_expect_success 'setup bare' ' git clone --bare . 
bare.git && cd bare.git diff --git a/t/t1401-symbolic-ref.sh b/t/t1401-symbolic-ref.sh index 2c96551ed0..36378b0e3f 100755 --- a/t/t1401-symbolic-ref.sh +++ b/t/t1401-symbolic-ref.sh @@ -33,4 +33,34 @@ test_expect_success 'symbolic-ref refuses bare sha1' ' ' reset_to_sane +test_expect_success 'symbolic-ref deletes HEAD' ' + git symbolic-ref -d HEAD && + test_path_is_file .git/refs/heads/foo && + test_path_is_missing .git/HEAD +' +reset_to_sane + +test_expect_success 'symbolic-ref deletes dangling HEAD' ' + git symbolic-ref HEAD refs/heads/missing && + git symbolic-ref -d HEAD && + test_path_is_missing .git/refs/heads/missing && + test_path_is_missing .git/HEAD +' +reset_to_sane + +test_expect_success 'symbolic-ref fails to delete missing FOO' ' + echo "fatal: Cannot delete FOO, not a symbolic ref" >expect && + test_must_fail git symbolic-ref -d FOO >actual 2>&1 && + test_cmp expect actual +' +reset_to_sane + +test_expect_success 'symbolic-ref fails to delete real ref' ' + echo "fatal: Cannot delete refs/heads/foo, not a symbolic ref" >expect && + test_must_fail git symbolic-ref -d refs/heads/foo >actual 2>&1 && + test_path_is_file .git/refs/heads/foo && + test_cmp expect actual +' +reset_to_sane + test_done diff --git a/t/t3001-ls-files-others-exclude.sh b/t/t3001-ls-files-others-exclude.sh index c8fe978267..dc2f0458fd 100755 --- a/t/t3001-ls-files-others-exclude.sh +++ b/t/t3001-ls-files-others-exclude.sh @@ -214,4 +214,10 @@ test_expect_success 'subdirectory ignore (l1)' ' test_cmp expect actual ' +test_expect_success 'pattern matches prefix completely' ' + : >expect && + git ls-files -i -o --exclude "/three/a.3[abc]" >actual && + test_cmp expect actual +' + test_done diff --git a/t/t4014-format-patch.sh b/t/t4014-format-patch.sh index 959aa26ef5..ad9f69e607 100755 --- a/t/t4014-format-patch.sh +++ b/t/t4014-format-patch.sh @@ -110,73 +110,107 @@ test_expect_success 'replay did not screw up the log message' ' test_expect_success 'extra headers' ' - git config format.headers "To: R. E. Cipient <rcipient@example.com> + git config format.headers "To: R E Cipient <rcipient@example.com> " && - git config --add format.headers "Cc: S. E. Cipient <scipient@example.com> + git config --add format.headers "Cc: S E Cipient <scipient@example.com> " && git format-patch --stdout master..side > patch2 && sed -e "/^\$/q" patch2 > hdrs2 && - grep "^To: R. E. Cipient <rcipient@example.com>\$" hdrs2 && - grep "^Cc: S. E. Cipient <scipient@example.com>\$" hdrs2 + grep "^To: R E Cipient <rcipient@example.com>\$" hdrs2 && + grep "^Cc: S E Cipient <scipient@example.com>\$" hdrs2 ' test_expect_success 'extra headers without newlines' ' - git config --replace-all format.headers "To: R. E. Cipient <rcipient@example.com>" && - git config --add format.headers "Cc: S. E. Cipient <scipient@example.com>" && + git config --replace-all format.headers "To: R E Cipient <rcipient@example.com>" && + git config --add format.headers "Cc: S E Cipient <scipient@example.com>" && git format-patch --stdout master..side >patch3 && sed -e "/^\$/q" patch3 > hdrs3 && - grep "^To: R. E. Cipient <rcipient@example.com>\$" hdrs3 && - grep "^Cc: S. E. Cipient <scipient@example.com>\$" hdrs3 + grep "^To: R E Cipient <rcipient@example.com>\$" hdrs3 && + grep "^Cc: S E Cipient <scipient@example.com>\$" hdrs3 ' test_expect_success 'extra headers with multiple To:s' ' - git config --replace-all format.headers "To: R. E. Cipient <rcipient@example.com>" && - git config --add format.headers "To: S. E. 
Cipient <scipient@example.com>" && + git config --replace-all format.headers "To: R E Cipient <rcipient@example.com>" && + git config --add format.headers "To: S E Cipient <scipient@example.com>" && git format-patch --stdout master..side > patch4 && sed -e "/^\$/q" patch4 > hdrs4 && - grep "^To: R. E. Cipient <rcipient@example.com>,\$" hdrs4 && - grep "^ *S. E. Cipient <scipient@example.com>\$" hdrs4 + grep "^To: R E Cipient <rcipient@example.com>,\$" hdrs4 && + grep "^ *S E Cipient <scipient@example.com>\$" hdrs4 ' -test_expect_success 'additional command line cc' ' +test_expect_success 'additional command line cc (ascii)' ' - git config --replace-all format.headers "Cc: R. E. Cipient <rcipient@example.com>" && + git config --replace-all format.headers "Cc: R E Cipient <rcipient@example.com>" && + git format-patch --cc="S E Cipient <scipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch5 && + grep "^Cc: R E Cipient <rcipient@example.com>,\$" patch5 && + grep "^ *S E Cipient <scipient@example.com>\$" patch5 +' + +test_expect_failure 'additional command line cc (rfc822)' ' + + git config --replace-all format.headers "Cc: R E Cipient <rcipient@example.com>" && git format-patch --cc="S. E. Cipient <scipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch5 && - grep "^Cc: R. E. Cipient <rcipient@example.com>,\$" patch5 && - grep "^ *S. E. Cipient <scipient@example.com>\$" patch5 + grep "^Cc: R E Cipient <rcipient@example.com>,\$" patch5 && + grep "^ *"S. E. Cipient" <scipient@example.com>\$" patch5 ' test_expect_success 'command line headers' ' git config --unset-all format.headers && - git format-patch --add-header="Cc: R. E. Cipient <rcipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch6 && - grep "^Cc: R. E. Cipient <rcipient@example.com>\$" patch6 + git format-patch --add-header="Cc: R E Cipient <rcipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch6 && + grep "^Cc: R E Cipient <rcipient@example.com>\$" patch6 ' test_expect_success 'configuration headers and command line headers' ' - git config --replace-all format.headers "Cc: R. E. Cipient <rcipient@example.com>" && - git format-patch --add-header="Cc: S. E. Cipient <scipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch7 && - grep "^Cc: R. E. Cipient <rcipient@example.com>,\$" patch7 && - grep "^ *S. E. Cipient <scipient@example.com>\$" patch7 + git config --replace-all format.headers "Cc: R E Cipient <rcipient@example.com>" && + git format-patch --add-header="Cc: S E Cipient <scipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch7 && + grep "^Cc: R E Cipient <rcipient@example.com>,\$" patch7 && + grep "^ *S E Cipient <scipient@example.com>\$" patch7 ' -test_expect_success 'command line To: header' ' +test_expect_success 'command line To: header (ascii)' ' git config --unset-all format.headers && + git format-patch --to="R E Cipient <rcipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch8 && + grep "^To: R E Cipient <rcipient@example.com>\$" patch8 +' + +test_expect_failure 'command line To: header (rfc822)' ' + git format-patch --to="R. E. Cipient <rcipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch8 && - grep "^To: R. E. Cipient <rcipient@example.com>\$" patch8 + grep "^To: "R. E. 
Cipient" <rcipient@example.com>\$" patch8 ' -test_expect_success 'configuration To: header' ' +test_expect_failure 'command line To: header (rfc2047)' ' + + git format-patch --to="R Ä Cipient <rcipient@example.com>" --stdout master..side | sed -e "/^\$/q" >patch8 && + grep "^To: =?UTF-8?q?R=20=C3=84=20Cipient?= <rcipient@example.com>\$" patch8 +' + +test_expect_success 'configuration To: header (ascii)' ' + + git config format.to "R E Cipient <rcipient@example.com>" && + git format-patch --stdout master..side | sed -e "/^\$/q" >patch9 && + grep "^To: R E Cipient <rcipient@example.com>\$" patch9 +' + +test_expect_failure 'configuration To: header (rfc822)' ' git config format.to "R. E. Cipient <rcipient@example.com>" && git format-patch --stdout master..side | sed -e "/^\$/q" >patch9 && - grep "^To: R. E. Cipient <rcipient@example.com>\$" patch9 + grep "^To: "R. E. Cipient" <rcipient@example.com>\$" patch9 +' + +test_expect_failure 'configuration To: header (rfc2047)' ' + + git config format.to "R Ä Cipient <rcipient@example.com>" && + git format-patch --stdout master..side | sed -e "/^\$/q" >patch9 && + grep "^To: =?UTF-8?q?R=20=C3=84=20Cipient?= <rcipient@example.com>\$" patch9 ' # check_patch <patch>: Verify that <patch> looks like a half-sane @@ -190,11 +224,11 @@ check_patch () { test_expect_success '--no-to overrides config.to' ' git config --replace-all format.to \ - "R. E. Cipient <rcipient@example.com>" && + "R E Cipient <rcipient@example.com>" && git format-patch --no-to --stdout master..side | sed -e "/^\$/q" >patch10 && check_patch patch10 && - ! grep "^To: R. E. Cipient <rcipient@example.com>\$" patch10 + ! grep "^To: R E Cipient <rcipient@example.com>\$" patch10 ' test_expect_success '--no-to and --to replaces config.to' ' @@ -212,21 +246,21 @@ test_expect_success '--no-to and --to replaces config.to' ' test_expect_success '--no-cc overrides config.cc' ' git config --replace-all format.cc \ - "C. E. Cipient <rcipient@example.com>" && + "C E Cipient <rcipient@example.com>" && git format-patch --no-cc --stdout master..side | sed -e "/^\$/q" >patch12 && check_patch patch12 && - ! grep "^Cc: C. E. Cipient <rcipient@example.com>\$" patch12 + ! grep "^Cc: C E Cipient <rcipient@example.com>\$" patch12 ' test_expect_success '--no-add-header overrides config.headers' ' git config --replace-all format.headers \ - "Header1: B. E. Cipient <rcipient@example.com>" && + "Header1: B E Cipient <rcipient@example.com>" && git format-patch --no-add-header --stdout master..side | sed -e "/^\$/q" >patch13 && check_patch patch13 && - ! grep "^Header1: B. E. Cipient <rcipient@example.com>\$" patch13 + ! 
grep "^Header1: B E Cipient <rcipient@example.com>\$" patch13 ' test_expect_success 'multiple files' ' @@ -752,16 +786,14 @@ M64=$M8$M8$M8$M8$M8$M8$M8$M8 M512=$M64$M64$M64$M64$M64$M64$M64$M64 cat >expect <<'EOF' Subject: [PATCH] foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo - bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar - foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo - bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar - foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo - bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar - foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo - bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar - foo bar foo bar foo bar foo bar + bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar + foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo + bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar + foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo + bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar + foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar foo bar EOF -test_expect_success 'format-patch wraps extremely long headers (ascii)' ' +test_expect_success 'format-patch wraps extremely long subject (ascii)' ' echo content >>file && git add file && git commit -m "$M512" && @@ -774,30 +806,31 @@ M8="föö bar " M64=$M8$M8$M8$M8$M8$M8$M8$M8 M512=$M64$M64$M64$M64$M64$M64$M64$M64 cat >expect <<'EOF' -Subject: [PATCH] =?UTF-8?q?f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= - =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar?= +Subject: [PATCH] =?UTF-8?q?f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f?= + =?UTF-8?q?=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar?= + =?UTF-8?q?=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20?= + 
=?UTF-8?q?bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6?= + =?UTF-8?q?=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3?= + =?UTF-8?q?=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= + =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3?= + =?UTF-8?q?=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f?= + =?UTF-8?q?=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar?= + =?UTF-8?q?=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20?= + =?UTF-8?q?bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6?= + =?UTF-8?q?=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3?= + =?UTF-8?q?=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= + =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3?= + =?UTF-8?q?=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f?= + =?UTF-8?q?=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar?= + =?UTF-8?q?=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20?= + =?UTF-8?q?bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6?= + =?UTF-8?q?=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3?= + =?UTF-8?q?=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6?= + =?UTF-8?q?=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3?= + =?UTF-8?q?=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar=20f?= + =?UTF-8?q?=C3=B6=C3=B6=20bar=20f=C3=B6=C3=B6=20bar?= EOF -test_expect_success 'format-patch wraps extremely long headers (rfc2047)' ' +test_expect_success 'format-patch wraps extremely long subject (rfc2047)' ' rm -rf patches/ && echo content >>file && git add file && @@ -807,52 +840,80 @@ test_expect_success 'format-patch wraps extremely long headers (rfc2047)' ' test_cmp expect subject ' -M8="foo_bar_" -M64=$M8$M8$M8$M8$M8$M8$M8$M8 -cat >expect <<EOF -From: $M64 - <foobar@foo.bar> -EOF -test_expect_success 'format-patch wraps non-quotable headers' ' - rm -rf patches/ && - echo content >>file && - git add file && - git commit -mfoo --author "$M64 <foobar@foo.bar>" && - git format-patch --stdout -1 >patch && - sed -n "/^From: /p; /^ /p; /^$/q" <patch >from && - test_cmp expect from -' - check_author() { echo content >>file && git add file && GIT_AUTHOR_NAME=$1 git commit -m author-check && git format-patch --stdout -1 >patch && - grep ^From: patch >actual && + sed -n "/^From: /p; /^ /p; /^$/q" <patch >actual && test_cmp expect actual } cat >expect <<'EOF' From: "Foo B. Bar" <author@example.com> EOF -test_expect_success 'format-patch quotes dot in headers' ' +test_expect_success 'format-patch quotes dot in from-headers' ' check_author "Foo B. Bar" ' cat >expect <<'EOF' From: "Foo \"The Baz\" Bar" <author@example.com> EOF -test_expect_success 'format-patch quotes double-quote in headers' ' +test_expect_success 'format-patch quotes double-quote in from-headers' ' check_author "Foo \"The Baz\" Bar" ' cat >expect <<'EOF' -From: =?UTF-8?q?"F=C3=B6o=20B.=20Bar"?= <author@example.com> +From: =?UTF-8?q?F=C3=B6o=20Bar?= <author@example.com> +EOF +test_expect_success 'format-patch uses rfc2047-encoded from-headers when necessary' ' + check_author "Föo Bar" +' + +cat >expect <<'EOF' +From: =?UTF-8?q?F=C3=B6o=20B=2E=20Bar?= <author@example.com> EOF -test_expect_success 'rfc2047-encoded headers also double-quote 822 specials' ' +test_expect_success 'rfc2047-encoded from-headers leave no rfc822 specials' ' check_author "Föo B. 
Bar" ' +cat >expect <<EOF +From: foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_ + <author@example.com> +EOF +test_expect_success 'format-patch wraps moderately long from-header (ascii)' ' + check_author "foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_foo_bar_" +' + +cat >expect <<'EOF' +From: Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar + Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo + Bar Foo Bar Foo Bar Foo Bar <author@example.com> +EOF +test_expect_success 'format-patch wraps extremely long from-header (ascii)' ' + check_author "Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar" +' + +cat >expect <<'EOF' +From: "Foo.Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar + Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo + Bar Foo Bar Foo Bar Foo Bar" <author@example.com> +EOF +test_expect_success 'format-patch wraps extremely long from-header (rfc822)' ' + check_author "Foo.Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar" +' + +cat >expect <<'EOF' +From: =?UTF-8?q?Fo=C3=B6=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo?= + =?UTF-8?q?=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20?= + =?UTF-8?q?Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar?= + =?UTF-8?q?=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20Foo=20Bar=20?= + =?UTF-8?q?Foo=20Bar=20Foo=20Bar?= <author@example.com> +EOF +test_expect_success 'format-patch wraps extremely long from-header (rfc2047)' ' + check_author "Foö Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar Foo Bar" +' + cat >expect <<'EOF' Subject: header with . in it EOF diff --git a/t/t4030-diff-textconv.sh b/t/t4030-diff-textconv.sh index eebb1eed8b..461d27ac20 100755 --- a/t/t4030-diff-textconv.sh +++ b/t/t4030-diff-textconv.sh @@ -84,6 +84,18 @@ test_expect_success 'status -v produces text' ' git reset --soft HEAD@{1} ' +test_expect_success 'grep-diff (-G) operates on textconv data (add)' ' + echo one >expect && + git log --root --format=%s -G0 >actual && + test_cmp expect actual +' + +test_expect_success 'grep-diff (-G) operates on textconv data (modification)' ' + echo two >expect && + git log --root --format=%s -G1 >actual && + test_cmp expect actual +' + cat >expect.stat <<'EOF' file | Bin 2 -> 4 bytes 1 file changed, 0 insertions(+), 0 deletions(-) diff --git a/t/t4202-log.sh b/t/t4202-log.sh index e6537abe1d..a343bf6c62 100755 --- a/t/t4202-log.sh +++ b/t/t4202-log.sh @@ -72,9 +72,9 @@ cat > expect << EOF commit. EOF -test_expect_success 'format %w(12,1,2)' ' +test_expect_success 'format %w(11,1,2)' ' - git log -2 --format="%w(12,1,2)This is the %s commit." > actual && + git log -2 --format="%w(11,1,2)This is the %s commit." > actual && test_cmp expect actual ' diff --git a/t/t7403-submodule-sync.sh b/t/t7403-submodule-sync.sh index 524d5c1b21..94e26c47ea 100755 --- a/t/t7403-submodule-sync.sh +++ b/t/t7403-submodule-sync.sh @@ -17,18 +17,25 @@ test_expect_success setup ' git commit -m upstream && git clone . 
super && git clone super submodule && + (cd submodule && + git submodule add ../submodule sub-submodule && + test_tick && + git commit -m "sub-submodule" + ) && (cd super && git submodule add ../submodule submodule && test_tick && git commit -m "submodule" ) && git clone super super-clone && - (cd super-clone && git submodule update --init) && + (cd super-clone && git submodule update --init --recursive) && git clone super empty-clone && (cd empty-clone && git submodule init) && git clone super top-only-clone && git clone super relative-clone && - (cd relative-clone && git submodule update --init) + (cd relative-clone && git submodule update --init --recursive) && + git clone super recursive-clone && + (cd recursive-clone && git submodule update --init --recursive) ' test_expect_success 'change submodule' ' @@ -46,6 +53,11 @@ test_expect_success 'change submodule url' ' git pull ) && mv submodule moved-submodule && + (cd moved-submodule && + git config -f .gitmodules submodule.sub-submodule.url ../moved-submodule && + test_tick && + git commit -a -m moved-sub-submodule + ) && (cd super && git config -f .gitmodules submodule.submodule.url ../moved-submodule && test_tick && @@ -61,6 +73,9 @@ test_expect_success '"git submodule sync" should update submodule URLs' ' test -d "$(cd super-clone/submodule && git config remote.origin.url )" && + test ! -d "$(cd super-clone/submodule/sub-submodule && + git config remote.origin.url + )" && (cd super-clone/submodule && git checkout master && git pull @@ -70,6 +85,25 @@ test_expect_success '"git submodule sync" should update submodule URLs' ' ) ' +test_expect_success '"git submodule sync --recursive" should update all submodule URLs' ' + (cd super-clone && + (cd submodule && + git pull --no-recurse-submodules + ) && + git submodule sync --recursive + ) && + test -d "$(cd super-clone/submodule && + git config remote.origin.url + )" && + test -d "$(cd super-clone/submodule/sub-submodule && + git config remote.origin.url + )" && + (cd super-clone/submodule/sub-submodule && + git checkout master && + git pull + ) +' + test_expect_success '"git submodule sync" should update known submodule URLs' ' (cd empty-clone && git pull && @@ -107,6 +141,23 @@ test_expect_success '"git submodule sync" handles origin URL of the form foo/bar #actual foo/submodule test "$(git config remote.origin.url)" = "../foo/submodule" ) + (cd submodule/sub-submodule && + test "$(git config remote.origin.url)" != "../../foo/submodule" + ) + ) +' + +test_expect_success '"git submodule sync --recursive" propagates changes in origin' ' + (cd recursive-clone && + git remote set-url origin foo/bar && + git submodule sync --recursive && + (cd submodule && + #actual foo/submodule + test "$(git config remote.origin.url)" = "../foo/submodule" + ) + (cd submodule/sub-submodule && + test "$(git config remote.origin.url)" = "../../foo/submodule" + ) ) ' diff --git a/t/t7407-submodule-foreach.sh b/t/t7407-submodule-foreach.sh index 9b69fe2e14..107b4b7c45 100755 --- a/t/t7407-submodule-foreach.sh +++ b/t/t7407-submodule-foreach.sh @@ -226,14 +226,14 @@ test_expect_success 'test "status --recursive"' ' test_cmp expect actual ' -sed -e "/nested1 /s/.*/+$nested1sha1 nested1 (file2~1)/;/sub[1-3]/d" < expect > expect2 +sed -e "/nested2 /s/.*/+$nested2sha1 nested1\/nested2 (file2~1)/;/sub[1-3]/d" < expect > expect2 mv -f expect2 expect test_expect_success 'ensure "status --cached --recursive" preserves the --cached flag' ' ( cd clone3 && ( - cd nested1 && + cd nested1/nested2 && test_commit file2 ) && 
git submodule status --cached --recursive -- nested1 > ../actual diff --git a/t/t9001-send-email.sh b/t/t9001-send-email.sh index 035122808b..6c6af7d13f 100755 --- a/t/t9001-send-email.sh +++ b/t/t9001-send-email.sh @@ -854,6 +854,75 @@ test_expect_success $PREREQ 'utf8 author is correctly passed on' ' grep "^From: Füñný Nâmé <odd_?=mail@example.com>" msgtxt1 ' +test_expect_success $PREREQ 'sendemail.composeencoding works' ' + clean_fake_sendmail && + git config sendemail.composeencoding iso-8859-1 && + (echo "#!$SHELL_PATH" && + echo "echo utf8 body: àéìöú >>\"\$1\"" + ) >fake-editor-utf8 && + chmod +x fake-editor-utf8 && + GIT_EDITOR="\"$(pwd)/fake-editor-utf8\"" \ + git send-email \ + --compose --subject foo \ + --from="Example <nobody@example.com>" \ + --to=nobody@example.com \ + --smtp-server="$(pwd)/fake.sendmail" \ + $patches && + grep "^utf8 body" msgtxt1 && + grep "^Content-Type: text/plain; charset=iso-8859-1" msgtxt1 +' + +test_expect_success $PREREQ '--compose-encoding works' ' + clean_fake_sendmail && + (echo "#!$SHELL_PATH" && + echo "echo utf8 body: àéìöú >>\"\$1\"" + ) >fake-editor-utf8 && + chmod +x fake-editor-utf8 && + GIT_EDITOR="\"$(pwd)/fake-editor-utf8\"" \ + git send-email \ + --compose-encoding iso-8859-1 \ + --compose --subject foo \ + --from="Example <nobody@example.com>" \ + --to=nobody@example.com \ + --smtp-server="$(pwd)/fake.sendmail" \ + $patches && + grep "^utf8 body" msgtxt1 && + grep "^Content-Type: text/plain; charset=iso-8859-1" msgtxt1 +' + +test_expect_success $PREREQ '--compose-encoding overrides sendemail.composeencoding' ' + clean_fake_sendmail && + git config sendemail.composeencoding iso-8859-1 && + (echo "#!$SHELL_PATH" && + echo "echo utf8 body: àéìöú >>\"\$1\"" + ) >fake-editor-utf8 && + chmod +x fake-editor-utf8 && + GIT_EDITOR="\"$(pwd)/fake-editor-utf8\"" \ + git send-email \ + --compose-encoding iso-8859-2 \ + --compose --subject foo \ + --from="Example <nobody@example.com>" \ + --to=nobody@example.com \ + --smtp-server="$(pwd)/fake.sendmail" \ + $patches && + grep "^utf8 body" msgtxt1 && + grep "^Content-Type: text/plain; charset=iso-8859-2" msgtxt1 +' + +test_expect_success $PREREQ '--compose-encoding adds correct MIME for subject' ' + clean_fake_sendmail && + GIT_EDITOR="\"$(pwd)/fake-editor\"" \ + git send-email \ + --compose-encoding iso-8859-2 \ + --compose --subject utf8-sübjëct \ + --from="Example <nobody@example.com>" \ + --to=nobody@example.com \ + --smtp-server="$(pwd)/fake.sendmail" \ + $patches && + grep "^fake edit" msgtxt1 && + grep "^Subject: =?iso-8859-2?q?utf8-s=C3=BCbj=C3=ABct?=" msgtxt1 +' + test_expect_success $PREREQ 'detects ambiguous reference/file conflict' ' echo master > master && git add master && @@ -1074,6 +1143,23 @@ EOF ' test_expect_success $PREREQ 'setup expect' ' +cat >expected <<EOF +Subject: subject goes here +EOF +' + +test_expect_success $PREREQ 'ASCII subject is not RFC2047 quoted' ' + clean_fake_sendmail && + echo bogus | + git send-email --from=author@example.com --to=nobody@example.com \ + --smtp-server="$(pwd)/fake.sendmail" \ + --8bit-encoding=UTF-8 \ + email-using-8bit >stdout && + grep "Subject" msgtxt1 >actual && + test_cmp expected actual +' + +test_expect_success $PREREQ 'setup expect' ' cat >content-type-decl <<EOF MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 diff --git a/t/t9200-git-cvsexportcommit.sh b/t/t9200-git-cvsexportcommit.sh index b59be9a894..69934b2e77 100755 --- a/t/t9200-git-cvsexportcommit.sh +++ b/t/t9200-git-cvsexportcommit.sh @@ -19,7 +19,7 @@ then test_done 
fi -CVSROOT=$PWD/cvsroot +CVSROOT=$PWD/tmpcvsroot CVSWORK=$PWD/cvswork GIT_DIR=$PWD/.git export CVSROOT CVSWORK GIT_DIR diff --git a/t/t9400-git-cvsserver-server.sh b/t/t9400-git-cvsserver-server.sh index 806623e885..9502f2438a 100755 --- a/t/t9400-git-cvsserver-server.sh +++ b/t/t9400-git-cvsserver-server.sh @@ -400,7 +400,7 @@ cat >expected.C <<EOF Line 0 ======= LINE 0 ->>>>>>> merge.3 +>>>>>>> merge.1.3 EOF for i in 1 2 3 4 5 6 7 8 @@ -505,6 +505,76 @@ test_expect_success 'cvs co -c (shows module database)' ' ' #------------ +# CVS LOG +#------------ + +# Known issues with git-cvsserver current log output: +# - Hard coded "lines: +2 -3" placeholder, instead of real numbers. +# - CVS normally does not internally add a blank first line +# nor a last line with nothing but a space to log messages. +# - The latest cvs 1.12.x server sends +0000 timezone (with some hidden "MT" +# tagging in the protocol), and if cvs 1.12.x client sees the MT tags, +# it converts to local time zone. git-cvsserver doesn't do the +0000 +# or the MT tags... +# - The latest 1.12.x releases add a "commitid:" field on to the end of the +# "date:" line (after "lines:"). Maybe we could stick git's commit id +# in it? Or does CVS expect a certain number of bits (too few for +# a full sha1)? +# +# Given the above, expect the following test to break if git-cvsserver's +# log output is improved. The test is just to ensure it doesn't +# accidentally get worse. + +sed -e 's/^x//' -e 's/SP$/ /' > "$WORKDIR/expect" <<EOF +x +xRCS file: $WORKDIR/gitcvs.git/master/merge,v +xWorking file: merge +xhead: 1.4 +xbranch: +xlocks: strict +xaccess list: +xsymbolic names: +xkeyword substitution: kv +xtotal revisions: 4; selected revisions: 4 +xdescription: +x---------------------------- +xrevision 1.4 +xdate: __DATE__; author: author; state: Exp; lines: +2 -3 +x +xMerge test (no-op) +xSP +x---------------------------- +xrevision 1.3 +xdate: __DATE__; author: author; state: Exp; lines: +2 -3 +x +xMerge test (conflict) +xSP +x---------------------------- +xrevision 1.2 +xdate: __DATE__; author: author; state: Exp; lines: +2 -3 +x +xMerge test (merge) +xSP +x---------------------------- +xrevision 1.1 +xdate: __DATE__; author: author; state: Exp; lines: +2 -3 +x +xMerge test (pre-merge) +xSP +x============================================================================= +EOF +expectStat="$?" + +cd "$WORKDIR" +test_expect_success 'cvs log' ' + cd cvswork && + test x"$expectStat" = x"0" && + GIT_CONFIG="$git_config" cvs log merge >../out && + sed -e "s%2[0-9][0-9][0-9]/[01][0-9]/[0-3][0-9] [0-2][0-9]:[0-5][0-9]:[0-5][0-9]%__DATE__%" ../out > ../actual && + test_cmp ../expect ../actual +' + +#------------ # CVS ANNOTATE #------------ diff --git a/t/t9401-git-cvsserver-crlf.sh b/t/t9401-git-cvsserver-crlf.sh index ff6d6fb473..1c5bc84fa7 100755 --- a/t/t9401-git-cvsserver-crlf.sh +++ b/t/t9401-git-cvsserver-crlf.sh @@ -38,6 +38,25 @@ not_present() { fi } +check_status_options() { + (cd "$1" && + GIT_CONFIG="$git_config" cvs -Q status "$2" > "${WORKDIR}/status.out" 2>&1 + ) + if [ x"$?" != x"0" ] ; then + echo "Error from cvs status: $1 $2" >> "${WORKDIR}/marked.log" + return 1; + fi + got="$(sed -n -e 's/^[ ]*Sticky Options:[ ]*//p' "${WORKDIR}/status.out")" + expect="$3" + if [ x"$expect" = x"" ] ; then + expect="(none)" + fi + test x"$got" = x"$expect" + stat=$? + echo "cvs status: $1 $2 $stat '$3' '$got'" >> "${WORKDIR}/marked.log" + return $stat +} + cvs >/dev/null 2>&1 if test $? 
-ne 1 then @@ -233,6 +252,22 @@ test_expect_success 'cvs co another copy (guess)' ' marked_as cvswork2/subdir newfile.c "" ' +test_expect_success 'cvs status - sticky options' ' + check_status_options cvswork2 textfile.c "" && + check_status_options cvswork2 binfile.bin -kb && + check_status_options cvswork2 .gitattributes "" && + check_status_options cvswork2 mixedUp.c -kb && + check_status_options cvswork2 multiline.c -kb && + check_status_options cvswork2 multilineTxt.c "" && + check_status_options cvswork2/subdir withCr.bin -kb && + check_status_options cvswork2 subdir/withCr.bin -kb && + check_status_options cvswork2/subdir file.h "" && + check_status_options cvswork2 subdir/file.h "" && + check_status_options cvswork2/subdir unspecified.other "" && + check_status_options cvswork2/subdir newfile.bin "" && + check_status_options cvswork2/subdir newfile.c "" +' + test_expect_success 'add text (guess)' ' (cd cvswork && echo "simpleText" > simpleText.c && diff --git a/t/t9604-cvsimport-timestamps.sh b/t/t9604-cvsimport-timestamps.sh new file mode 100755 index 0000000000..1fd51423ee --- /dev/null +++ b/t/t9604-cvsimport-timestamps.sh @@ -0,0 +1,71 @@ +#!/bin/sh + +test_description='git cvsimport timestamps' +. ./lib-cvs.sh + +setup_cvs_test_repository t9604 + +test_expect_success 'check timestamps are UTC (TZ=CST6CDT)' ' + + TZ=CST6CDT git cvsimport -p"-x" -C module-1 module && + git cvsimport -p"-x" -C module-1 module && + ( + cd module-1 && + git log --format="%s %ai" + ) >actual-1 && + cat >expect-1 <<-EOF && + Rev 16 2006-10-29 07:00:01 +0000 + Rev 15 2006-10-29 06:59:59 +0000 + Rev 14 2006-04-02 08:00:01 +0000 + Rev 13 2006-04-02 07:59:59 +0000 + Rev 12 2005-12-01 00:00:00 +0000 + Rev 11 2005-11-01 00:00:00 +0000 + Rev 10 2005-10-01 00:00:00 +0000 + Rev 9 2005-09-01 00:00:00 +0000 + Rev 8 2005-08-01 00:00:00 +0000 + Rev 7 2005-07-01 00:00:00 +0000 + Rev 6 2005-06-01 00:00:00 +0000 + Rev 5 2005-05-01 00:00:00 +0000 + Rev 4 2005-04-01 00:00:00 +0000 + Rev 3 2005-03-01 00:00:00 +0000 + Rev 2 2005-02-01 00:00:00 +0000 + Rev 1 2005-01-01 00:00:00 +0000 + EOF + test_cmp actual-1 expect-1 +' + +test_expect_success 'check timestamps with author-specific timezones' ' + + cat >cvs-authors <<-EOF && + user1=User One <user1@domain.org> + user2=User Two <user2@domain.org> CST6CDT + user3=User Three <user3@domain.org> EST5EDT + user4=User Four <user4@domain.org> MST7MDT + EOF + git cvsimport -p"-x" -A cvs-authors -C module-2 module && + ( + cd module-2 && + git log --format="%s %ai %an" + ) >actual-2 && + cat >expect-2 <<-EOF && + Rev 16 2006-10-29 01:00:01 -0600 User Two + Rev 15 2006-10-29 01:59:59 -0500 User Two + Rev 14 2006-04-02 03:00:01 -0500 User Two + Rev 13 2006-04-02 01:59:59 -0600 User Two + Rev 12 2005-11-30 17:00:00 -0700 User Four + Rev 11 2005-10-31 19:00:00 -0500 User Three + Rev 10 2005-09-30 19:00:00 -0500 User Two + Rev 9 2005-09-01 00:00:00 +0000 User One + Rev 8 2005-07-31 18:00:00 -0600 User Four + Rev 7 2005-06-30 20:00:00 -0400 User Three + Rev 6 2005-05-31 19:00:00 -0500 User Two + Rev 5 2005-05-01 00:00:00 +0000 User One + Rev 4 2005-03-31 17:00:00 -0700 User Four + Rev 3 2005-02-28 19:00:00 -0500 User Three + Rev 2 2005-01-31 18:00:00 -0600 User Two + Rev 1 2005-01-01 00:00:00 +0000 User One + EOF + test_cmp actual-2 expect-2 +' + +test_done diff --git a/t/t9604/cvsroot/.gitattributes b/t/t9604/cvsroot/.gitattributes new file mode 100644 index 0000000000..562b12e16e --- /dev/null +++ b/t/t9604/cvsroot/.gitattributes @@ -0,0 +1 @@ +* -whitespace diff --git 
a/t/t9604/cvsroot/CVSROOT/.gitignore b/t/t9604/cvsroot/CVSROOT/.gitignore new file mode 100644 index 0000000000..3bb9b34173 --- /dev/null +++ b/t/t9604/cvsroot/CVSROOT/.gitignore @@ -0,0 +1,2 @@ +history +val-tags diff --git a/t/t9604/cvsroot/module/a,v b/t/t9604/cvsroot/module/a,v new file mode 100644 index 0000000000..80658c3c8d --- /dev/null +++ b/t/t9604/cvsroot/module/a,v @@ -0,0 +1,264 @@ +head 1.16; +access; +symbols; +locks; strict; +comment @# @; + + +1.16 +date 2006.10.29.07.00.01; author user2; state Exp; +branches; +next 1.15; + +1.15 +date 2006.10.29.06.59.59; author user2; state Exp; +branches; +next 1.14; + +1.14 +date 2006.04.02.08.00.01; author user2; state Exp; +branches; +next 1.13; + +1.13 +date 2006.04.02.07.59.59; author user2; state Exp; +branches; +next 1.12; + +1.12 +date 2005.12.01.00.00.00; author user4; state Exp; +branches; +next 1.11; + +1.11 +date 2005.11.01.00.00.00; author user3; state Exp; +branches; +next 1.10; + +1.10 +date 2005.10.01.00.00.00; author user2; state Exp; +branches; +next 1.9; + +1.9 +date 2005.09.01.00.00.00; author user1; state Exp; +branches; +next 1.8; + +1.8 +date 2005.08.01.00.00.00; author user4; state Exp; +branches; +next 1.7; + +1.7 +date 2005.07.01.00.00.00; author user3; state Exp; +branches; +next 1.6; + +1.6 +date 2005.06.01.00.00.00; author user2; state Exp; +branches; +next 1.5; + +1.5 +date 2005.05.01.00.00.00; author user1; state Exp; +branches; +next 1.4; + +1.4 +date 2005.04.01.00.00.00; author user4; state Exp; +branches; +next 1.3; + +1.3 +date 2005.03.01.00.00.00; author user3; state Exp; +branches; +next 1.2; + +1.2 +date 2005.02.01.00.00.00; author user2; state Exp; +branches; +next 1.1; + +1.1 +date 2005.01.01.00.00.00; author user1; state Exp; +branches; +next ; + + +desc +@@ + + +1.16 +log +@Rev 16 +@ +text +@Rev 16 +@ + + +1.15 +log +@Rev 15 +@ +text +@d1 1 +a1 1 +Rev 15 +@ + + +1.14 +log +@Rev 14 +@ +text +@d1 1 +a1 1 +Rev 14 +@ + + +1.13 +log +@Rev 13 +@ +text +@d1 1 +a1 1 +Rev 13 +@ + + +1.12 +log +@Rev 12 +@ +text +@d1 1 +a1 1 +Rev 12 +@ + + +1.11 +log +@Rev 11 +@ +text +@d1 1 +a1 1 +Rev 11 +@ + + +1.10 +log +@Rev 10 +@ +text +@d1 1 +a1 1 +Rev 10 +@ + + +1.9 +log +@Rev 9 +@ +text +@d1 1 +a1 1 +Rev 9 +@ + + +1.8 +log +@Rev 8 +@ +text +@d1 1 +a1 1 +Rev 8 +@ + + +1.7 +log +@Rev 7 +@ +text +@d1 1 +a1 1 +Rev 7 +@ + + +1.6 +log +@Rev 6 +@ +text +@d1 1 +a1 1 +Rev 6 +@ + + +1.5 +log +@Rev 5 +@ +text +@d1 1 +a1 1 +Rev 5 +@ + + +1.4 +log +@Rev 4 +@ +text +@d1 1 +a1 1 +Rev 4 +@ + + +1.3 +log +@Rev 3 +@ +text +@d1 1 +a1 1 +Rev 3 +@ + + +1.2 +log +@Rev 2 +@ +text +@d1 1 +a1 1 +Rev 2 +@ + + +1.1 +log +@Rev 1 +@ +text +@d1 1 +a1 1 +Rev 1 +@ diff --git a/t/t9902-completion.sh b/t/t9902-completion.sh index cbd0fb66f9..8fa025f9d4 100755 --- a/t/t9902-completion.sh +++ b/t/t9902-completion.sh @@ -288,4 +288,9 @@ test_expect_failure 'complete tree filename with metacharacters' ' EOF ' +test_expect_success 'send-email' ' + test_completion "git send-email --cov" "--cover-letter " && + test_completion "git send-email ma" "master " +' + test_done diff --git a/transport.h b/transport.h index 3b21c4abe6..4a61c0c3f2 100644 --- a/transport.h +++ b/transport.h @@ -175,4 +175,9 @@ void transport_print_push_status(const char *dest, struct ref *refs, typedef void alternate_ref_fn(const struct ref *, void *); extern void for_each_alternate_ref(alternate_ref_fn, void *); +struct send_pack_args; +extern int send_pack(struct send_pack_args *args, + int fd[], struct child_process *conn, + struct ref *remote_refs, + struct extra_have_objects 
*extra_have); #endif diff --git a/tree-walk.c b/tree-walk.c index 492c7cd744..3f54c02d76 100644 --- a/tree-walk.c +++ b/tree-walk.c @@ -490,11 +490,11 @@ int get_tree_entry(const unsigned char *tree_sha1, const char *name, unsigned ch static int match_entry(const struct name_entry *entry, int pathlen, const char *match, int matchlen, - int *never_interesting) + enum interesting *never_interesting) { int m = -1; /* signals that we haven't called strncmp() */ - if (*never_interesting) { + if (*never_interesting != entry_not_interesting) { /* * We have not seen any match that sorts later * than the current path. @@ -522,7 +522,7 @@ static int match_entry(const struct name_entry *entry, int pathlen, * the variable to -1 and that is what will be * returned, allowing the caller to terminate early. */ - *never_interesting = 0; + *never_interesting = entry_not_interesting; } if (pathlen > matchlen) @@ -584,7 +584,7 @@ enum interesting tree_entry_interesting(const struct name_entry *entry, { int i; int pathlen, baselen = base->len - base_offset; - int never_interesting = ps->has_wildcard ? + enum interesting never_interesting = ps->has_wildcard ? entry_not_interesting : all_entries_not_interesting; if (!ps->nr) { @@ -353,7 +353,7 @@ retry: c = *text; if (!c || isspace(c)) { - if (w < width || !space) { + if (w <= width || !space) { const char *start = bol; if (!c && text == start) return w; @@ -423,6 +423,13 @@ int is_encoding_utf8(const char *name) return 0; } +int same_encoding(const char *src, const char *dst) +{ + if (is_encoding_utf8(src) && is_encoding_utf8(dst)) + return 1; + return !strcasecmp(src, dst); +} + /* * Given a buffer and its encoding, return it re-encoded * with iconv. If the conversion fails, returns NULL. @@ -7,6 +7,7 @@ int utf8_width(const char **start, size_t *remainder_p); int utf8_strwidth(const char *string); int is_utf8(const char *text); int is_encoding_utf8(const char *name); +int same_encoding(const char *, const char *); int strbuf_add_wrapped_text(struct strbuf *buf, const char *text, int indent, int indent2, int width); |
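
The same_encoding() helper added to utf8.c and declared in utf8.h just above compares two encoding names, treating every spelling of UTF-8 as equivalent and otherwise falling back to a case-insensitive name comparison, presumably so callers can skip an iconv round-trip when the source and target encodings already agree. A self-contained sketch of that logic follows; the is_utf8_name() stand-in below only approximates git's is_encoding_utf8(), and treating a NULL name as UTF-8 is an assumption of this sketch.

#include <stdio.h>
#include <strings.h>	/* strcasecmp() */

/*
 * Stand-in for git's is_encoding_utf8(): assume a missing name or any
 * case variant of "utf-8"/"utf8" means UTF-8.
 */
static int is_utf8_name(const char *name)
{
	if (!name)
		return 1;
	return !strcasecmp(name, "utf-8") || !strcasecmp(name, "utf8");
}

/* Same shape as the same_encoding() added to utf8.c in the hunk above. */
static int same_encoding(const char *src, const char *dst)
{
	if (is_utf8_name(src) && is_utf8_name(dst))
		return 1;
	return !strcasecmp(src, dst);
}

int main(void)
{
	printf("%d\n", same_encoding("UTF-8", "utf8"));		/* 1: both UTF-8 */
	printf("%d\n", same_encoding("ISO-8859-1", "iso-8859-1"));	/* 1: same name */
	printf("%d\n", same_encoding("ISO-8859-1", "UTF-8"));		/* 0: different */
	return 0;
}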
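
Several of the new t4014 expectations earlier in the patch check RFC 2047 "Q" encoding of non-ASCII headers, for example "To: =?UTF-8?q?R=20=C3=84=20Cipient?= <rcipient@example.com>". The sketch below shows the encoding itself for a short value; it deliberately ignores the 76-column folding across multiple encoded words that the long-subject and long-from-header expectations exercise, and the exact set of bytes escaped here is an approximation, not git's actual rfc2047 encoder.

#include <stdio.h>

/*
 * Print one unfolded RFC 2047 "Q"-encoded word for a UTF-8 string.
 * Bytes outside printable ASCII, plus space and the specials '=', '?'
 * and '_', are written as =XX; everything else is copied literally.
 */
static void q_encode(const char *s)
{
	printf("=?UTF-8?q?");
	for (; *s; s++) {
		unsigned char c = (unsigned char)*s;
		if (c <= 32 || c > 126 || c == '=' || c == '?' || c == '_')
			printf("=%02X", c);
		else
			putchar(c);
	}
	printf("?=\n");
}

int main(void)
{
	/*
	 * "R Ä Cipient" prints =?UTF-8?q?R=20=C3=84=20Cipient?= ,
	 * matching the rfc2047 To: header expectation in t4014.
	 */
	q_encode("R \xC3\x84 Cipient");
	return 0;
}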
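
The new t9604 test maps CVS authors to time zones via the -A cvs-authors file and then expects commit dates shifted into each author's zone, e.g. "Rev 2 2005-01-31 18:00:00 -0600 User Two" for a CVS timestamp of 2005.02.01.00.00.00 (UTC). A small sketch of that conversion using the TZ environment variable, which is the mechanism the POSIX zone names in the authors file suggest; the epoch value below is derived by hand from the test data and is an assumption of this sketch.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/*
 * Render one UTC timestamp in a named POSIX time zone, the way the
 * t9604 expectations above shift CVS dates into each author's zone.
 */
static void show_in_zone(time_t utc, const char *tz)
{
	char buf[64];

	setenv("TZ", tz, 1);	/* e.g. "CST6CDT" from the cvs-authors file */
	tzset();
	strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S %z", localtime(&utc));
	printf("%s (%s)\n", buf, tz);
}

int main(void)
{
	/* 2005-02-01 00:00:00 UTC, the CVS date of "Rev 2" in t9604 */
	time_t rev2 = 1107216000;

	show_in_zone(rev2, "UTC");	/* 2005-02-01 00:00:00 +0000 */
	show_in_zone(rev2, "CST6CDT");	/* 2005-01-31 18:00:00 -0600 */
	return 0;
}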