path: root/builtin
Age         Commit message                                        (Author; files changed, lines -/+)
2014-12-12  expire_reflog(): it's not an each_ref_fn anymore  (Michael Haggerty; 1 file, -5/+5)
Prior to v1.5.4~14, expire_reflog() had to be an each_ref_fn because it was passed to for_each_reflog(). Since then, there has been no reason for it to implement the each_ref_fn interface. So...

* Remove the "unused" parameter (which took the place of "flags", but was really unused).

* Declare the last parameter to be (struct cmd_reflog_expire_cb *) rather than (void *).

Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

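For context, a sketch of the difference (the declarations follow the description above and the ref API of that era; treat them as an approximation, not the exact builtin/reflog.c code):

    /* each_ref_fn, as required by for_each_reflog() and friends */
    typedef int each_ref_fn(const char *refname, const unsigned char *sha1,
                            int flags, void *cb_data);

    /* after this patch the function can use its natural signature instead
       of carrying an unused placeholder and a void pointer */
    static int expire_reflog(const char *refname, const unsigned char *sha1,
                             struct cmd_reflog_expire_cb *cb);
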
2014-11-06  Merge branch 'jk/fetch-reflog-df-conflict'  (Junio C Hamano; 1 file, -1/+1)
Corner-case bugfixes for "git fetch" around reflog handling.

* jk/fetch-reflog-df-conflict:
  ignore stale directories when checking reflog existence
  fetch: load all default config at startup

2014-11-04  fetch: load all default config at startup  (Jeff King; 1 file, -1/+1)
When we start the git-fetch program, we call git_config to load all config, but our callback only processes the fetch.prune option; we do not chain to git_default_config at all. This means that we may not load some core configuration which will have an effect. For instance, we do not load core.logAllRefUpdates, which impacts whether or not we create reflogs in a bare repository.

Note that I said "may" above. It gets even more exciting. If we have to transfer actual objects as part of the fetch, then we call fetch_pack as part of the same process. That function loads its own config, which does chain to git_default_config, impacting global variables which are used by the rest of fetch. But if the fetch is a pure ref update (e.g., a new ref which is a copy of an old one), we skip fetch_pack entirely. So we get inconsistent results depending on whether or not we have actual objects to transfer!

Let's just load the core config at the start of fetch, so we know we have it (we may also load it again as part of fetch_pack, but that's OK; it's designed to be idempotent). Our tests check both cases (with and without a pack). We also check similar behavior for push for good measure, but it already works as expected.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

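The fix boils down to falling through to git_default_config in the config callback. A minimal sketch of that pattern, assuming git's internal headers; the exact option handling in builtin/fetch.c may differ in detail:

    static int fetch_prune_config = -1;

    static int git_fetch_config(const char *k, const char *v, void *cb)
    {
            if (!strcmp(k, "fetch.prune")) {
                    fetch_prune_config = git_config_bool(k, v);
                    return 0;
            }
            /* fall through so core.* settings (e.g. core.logAllRefUpdates)
               are picked up too */
            return git_default_config(k, v, cb);
    }

    /* in cmd_fetch(): git_config(git_fetch_config, NULL); */
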
2014-10-31  Merge branch 'jc/push-cert'  (Junio C Hamano; 1 file, -2/+2)
* jc/push-cert:
  receive-pack: avoid minor leak in case start_async() fails

2014-10-29  Merge branch 'jk/prune-mtime'  (Junio C Hamano; 7 files, -199/+157)
Tighten the logic to decide that an unreachable cruft is sufficiently old by covering corner cases such as an ancient object becoming reachable and then going unreachable again, in which case its retention period should be prolonged.

* jk/prune-mtime: (28 commits)
  drop add_object_array_with_mode
  revision: remove definition of unused 'add_object' function
  pack-objects: double-check options before discarding objects
  repack: pack objects mentioned by the index
  pack-objects: use argv_array
  reachable: use revision machinery's --indexed-objects code
  rev-list: add --indexed-objects option
  rev-list: document --reflog option
  t5516: test pushing a tag of an otherwise unreferenced blob
  traverse_commit_list: support pending blobs/trees with paths
  make add_object_array_with_context interface more sane
  write_sha1_file: freshen existing objects
  pack-objects: match prune logic for discarding objects
  pack-objects: refactor unpack-unreachable expiration check
  prune: keep objects reachable from recent objects
  sha1_file: add for_each iterators for loose and packed objects
  count-objects: use for_each_loose_file_in_objdir
  count-objects: do not use xsize_t when counting object size
  prune-packed: use for_each_loose_file_in_objdir
  reachable: mark index blobs as SEEN
  ...

2014-10-28  receive-pack: avoid minor leak in case start_async() fails  (René Scharfe; 1 file, -2/+2)
If the asynchronous start of copy_to_sideband() fails, then any env_array entries added to struct child_process proc by prepare_push_cert_sha1() are leaked. Call the latter function only after start_async() succeeded so that the allocated entries are cleaned up automatically by start_command() or finish_command().

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-24  Merge branch 'jc/push-cert'  (Junio C Hamano; 1 file, -1/+12)
* jc/push-cert:
  push: heed user.signingkey for signed pushes

2014-10-24  Merge branch 'eb/no-pthreads'  (Junio C Hamano; 3 files, -7/+7)
Allow us to build with the NO_PTHREADS=NoThanks compilation option.

* eb/no-pthreads:
  Handle atexit list internaly for unthreaded builds
  pack-objects: set number of threads before checking and warning
  index-pack: fix compilation with NO_PTHREADS

2014-10-24  Merge branch 'rs/run-command-env-array'  (Junio C Hamano; 1 file, -9/+14)
Add managed "env" array to child_process to clarify the lifetime rules.

* rs/run-command-env-array:
  use env_array member of struct child_process
  run-command: add env_array, an optional argv_array for env

2014-10-24  Merge branch 'jk/pack-objects-no-bitmap-when-splitting'  (Junio C Hamano; 1 file, -0/+1)
Splitting pack-objects output into multiple packs is incompatible with the use of reachability bitmaps.

* jk/pack-objects-no-bitmap-when-splitting:
  pack-objects: turn off bitmaps when we split packs

2014-10-24  push: heed user.signingkey for signed pushes  (Michael J Gruber; 1 file, -1/+12)
push --signed promises to take user.signingkey as the signing key but fails to read the config. Make it do so.

Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-21  Merge branch 'rs/ref-transaction'  (Junio C Hamano; 19 files, -57/+97)
The API to update refs has been restructured to allow introducing true transactional updates later. We would even allow storing refs in backends other than the traditional filesystem-based one.

* rs/ref-transaction: (25 commits)
  ref_transaction_commit: bail out on failure to remove a ref
  lockfile: remove unable_to_lock_error
  refs.c: do not permit err == NULL
  remote rm/prune: print a message when writing packed-refs fails
  for-each-ref: skip and warn about broken ref names
  refs.c: allow listing and deleting badly named refs
  test: put tests for handling of bad ref names in one place
  packed-ref cache: forbid dot-components in refnames
  branch -d: simplify by using RESOLVE_REF_READING
  branch -d: avoid repeated symref resolution
  reflog test: test interaction with detached HEAD
  refs.c: change resolve_ref_unsafe reading argument to be a flags field
  refs.c: make write_ref_sha1 static
  fetch.c: change s_update_ref to use a ref transaction
  refs.c: ref_transaction_commit: distinguish name conflicts from other errors
  refs.c: pass a list of names to skip to is_refname_available
  refs.c: call lock_ref_sha1_basic directly from commit
  refs.c: refuse to lock badly named refs in lock_ref_sha1_basic
  rename_ref: don't ask read_ref_full where the ref came from
  refs.c: pass the ref log message to _create/delete/update instead of _commit
  ...

2014-10-20  Merge branch 'cc/interpret-trailers'  (Junio C Hamano; 1 file, -0/+44)
A new filter to programmatically edit the tail end of the commit log messages.

* cc/interpret-trailers:
  Documentation: add documentation for 'git interpret-trailers'
  trailer: add tests for commands in config file
  trailer: execute command from 'trailer.<name>.command'
  trailer: add tests for "git interpret-trailers"
  trailer: add interpret-trailers command
  trailer: put all the processing together and print
  trailer: parse trailers from file or stdin
  trailer: process command line trailer arguments
  trailer: read and process config information
  trailer: process trailers from input message and arguments
  trailer: add data structures and basic functions

2014-10-20  Merge branch 'jn/parse-config-slot'  (Junio C Hamano; 6 files, -30/+31)
Code cleanup.

* jn/parse-config-slot:
  color_parse: do not mention variable name in error message
  pass config slots as pointers instead of offsets

2014-10-20  Merge branch 'rs/receive-pack-argv-leak-fix'  (Junio C Hamano; 1 file, -10/+8)
* rs/receive-pack-argv-leak-fix:
  receive-pack: plug minor memory leak in unpack()

2014-10-19  Handle atexit list internaly for unthreaded builds  (Etienne Buira; 1 file, -5/+0)
Wrap atexit() calls on unthreaded builds to handle the callback list internally.

This is needed because on unthreaded builds, asyncs inherit the parent's atexit() list, which gets run as soon as the async exit()s (and again at the end of the async's parent process). That led to temporary files being removed too early.

Also remove a by-atexit-callback guard against this kind of issue in clone.c, as this patch makes it redundant.

Fixes test 5537 (temporary shallow file vanished before unpack-objects could open it).

Also remove an unused variable in shallow.c.

Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Andreas Schwab <schwab@linux-m68k.org>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Etienne Buira <etienne.buira@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

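A simplified sketch of the idea (an internal handler list that an async "child" can simply discard); the names and the fixed-size array here are illustrative, not the exact run-command.c implementation:

    #include <stdlib.h>

    #define MAX_ATEXIT 32
    static void (*atexit_handlers[MAX_ATEXIT])(void);
    static int atexit_nr;

    static void run_atexit_handlers(void)
    {
            while (atexit_nr)
                    atexit_handlers[--atexit_nr]();
    }

    int git_atexit(void (*handler)(void))
    {
            if (atexit_nr >= MAX_ATEXIT)
                    return -1;
            if (!atexit_nr && atexit(&run_atexit_handlers))
                    return -1;      /* register the dispatcher once */
            atexit_handlers[atexit_nr++] = handler;
            return 0;
    }

    /* The async "child" resets atexit_nr to 0, so the parent's cleanup
       callbacks (e.g. temporary-file removal) do not run when it exit()s. */
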
2014-10-19  use env_array member of struct child_process  (René Scharfe; 1 file, -9/+14)
Convert users of struct child_process to using the managed env_array for specifying environment variables instead of supplying an array on the stack or bringing their own argv_array. This shortens and simplifies the code and ensures automatically that the allocated memory is freed after use.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

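The pattern looks roughly like this (a sketch against the run-command API of that era; the function name and the EXAMPLE_VAR variable are made up for illustration):

    static void show_example(const char **argv, const char *value)
    {
            struct child_process proc;

            memset(&proc, 0, sizeof(proc));
            proc.argv = argv;
            /* entries pushed onto env_array are freed for us by
               start_command()/finish_command() */
            argv_array_pushf(&proc.env_array, "EXAMPLE_VAR=%s", value);
            if (run_command(&proc))
                    die("child failed");
    }
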
2014-10-19  pack-objects: turn off bitmaps when we split packs  (Jeff King; 1 file, -0/+1)
If pack.packSizeLimit is set, we may split the pack data across multiple packfiles. This means we cannot generate .bitmap files, as they require that all of the reachable objects are in the same pack.

We check that condition when we are generating the list of objects to pack (and disable bitmaps if we are not packing everything), but we forgot to update it when we notice that we need to split (which doesn't happen until the actual write phase).

The resulting bitmaps are quite bogus (they mention entries that do not exist in the pack!) and can cause a fetch or push to send insufficient objects.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-19  pack-objects: double-check options before discarding objects  (Jeff King; 1 file, -0/+2)
When we are given an expiration time like --unpack-unreachable=2.weeks.ago, we avoid writing out old, unreachable loose objects entirely, under the assumption that running "prune" would simply delete them immediately anyway.

However, this is only valid if we computed the same set of reachable objects as prune would. In practice, this is the case, because only git-repack uses the --unpack-unreachable option with an expiration, and it always feeds as many objects into the pack as possible. But we can double-check at runtime just to be sure.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-19  repack: pack objects mentioned by the index  (Jeff King; 2 files, -0/+9)
When we pack all objects, we use only the objects reachable from references and reflogs. This misses any objects which are reachable from the index, but not yet referenced.

By itself this isn't a big deal; the objects can remain loose until they are actually used in a commit. However, it does create a problem when we drop packed but unreachable objects. We try to optimize out the writing of objects that we will immediately prune, which means we must follow the same rules as prune in determining what is reachable. And prune uses the index for this purpose.

This is rather uncommon in practice, as objects in the index would not usually have been packed in the first place. But it could happen in a sequence like:

  1. You make a commit on a branch that references blob X.

  2. You repack, moving X into the pack.

  3. You delete the branch (and its reflog), so that X is unreferenced.

  4. You "git add" blob X so that it is now referenced only by the index.

  5. You repack again with git-gc. The pack-objects we invoke will see that X is neither referenced nor recent and not bother loosening it.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-19  pack-objects: use argv_array  (Jeff King; 1 file, -10/+10)
This saves us from having to bump the rp_av count when we add new traversal options.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

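For readers unfamiliar with argv_array, the switch replaces a fixed-size array plus a hand-maintained count with a growable list. Roughly (a sketch assuming git's internal headers, not the exact pack-objects hunk):

    static void build_traversal_args(struct rev_info *revs)
    {
            struct argv_array rp = ARGV_ARRAY_INIT;

            argv_array_push(&rp, "pack-objects");
            argv_array_push(&rp, "--objects-edge");
            /* adding another traversal option later is a one-line push,
               with no fixed-size rp_av[] or hand-counted rp_ac to keep in sync */
            setup_revisions(rp.argc, rp.argv, revs, NULL);
            /* the array is a small one-time allocation; freeing it here would
               need care, since the parsed options may still point into it */
    }
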
2014-10-16  Merge branch 'po/everyday-doc'  (Junio C Hamano; 1 file, -0/+1)
"git help everyday" to show the Everyday Git document.

* po/everyday-doc:
  doc: add 'everyday' to 'git help'
  doc: Makefile regularise OBSOLETE_HTML list building
  doc: modernise everyday.txt wording and format in man page style

2014-10-16  make add_object_array_with_context interface more sane  (Jeff King; 1 file, -4/+4)
When you resolve a sha1, you can optionally keep any context found during the resolution, including the path and mode of a tree entry (e.g., when looking up "HEAD:subdir/file.c"). The add_object_array_with_context function lets you then attach that context to an entry in a list.

Unfortunately, the interface for doing so is horrible. The object_context structure is large and most object_array users do not use it. Therefore we keep a pointer to the structure to avoid burdening other users too much. But that means when we do use it that we must allocate the struct ourselves. And the struct contains a fixed PATH_MAX-sized buffer, which makes this wholly unsuitable for any large arrays.

We can observe that there is only a single user of the "with_context" variant: builtin/grep.c. And in that use case, the only element we care about is the path. We can therefore store only the path as a pointer (the context's mode field was redundant with the object_array_entry itself, and nobody actually cared about the surrounding tree). This still requires a strdup of the pathname, but at least we are only consuming the minimum amount of memory for each string.

We can also handle the copying ourselves in add_object_array_*, and free it as appropriate in object_array_release_entry.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

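Conceptually, the entry goes from carrying a pointer to a large context struct to owning just a copied path. A rough sketch of the resulting shape (field and struct names here follow the description above, not necessarily the exact git structs):

    struct object;                  /* opaque here */

    struct array_entry {
            struct object *item;
            const char *name;
            unsigned mode;
            char *path;             /* strdup()'d when added, freed on release */
    };
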
2014-10-16  pack-objects: match prune logic for discarding objects  (Jeff King; 1 file, -0/+39)
A recent commit taught git-prune to keep non-recent objects that are reachable from recent ones. However, pack-objects, when loosening unreachable objects, tries to optimize out the write in the case that the object will be immediately pruned. It now gets this wrong, since its rule does not reflect the new prune code (and this can be seen by running t6501 with a strategically placed repack).

Let's teach pack-objects similar logic.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-16  pack-objects: refactor unpack-unreachable expiration check  (Jeff King; 1 file, -5/+12)
When we are loosening unreachable packed objects, we do not bother to process objects that would simply be pruned immediately anyway. The "would be pruned" check is a simple comparison, but is about to get more complicated. Let's pull it out into a separate function.

Note that this is slightly less efficient than the original, which avoided even opening old packs, since no object in them could pass the current check, which cares only about the pack mtime. But the new rules will depend on the exact object, so we need to perform the check even for old packs.

Note also that we fix a minor buglet when the pack mtime is exactly the same as the expiration time. The prune code considers that worth pruning, whereas our check here considered it worth keeping. This wasn't a big deal. Besides being unlikely to happen, the result was simply that the object was loosened and then pruned, missing the optimization. Still, we can easily fix it while we are here.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-16  prune: keep objects reachable from recent objects  (Jeff King; 2 files, -2/+2)
Our current strategy with prune is that an object falls into one of three categories:

  1. Reachable (from ref tips, reflogs, index, etc).

  2. Not reachable, but recent (based on the --expire time).

  3. Not reachable and not recent.

We keep objects from (1) and (2), but prune objects in (3). The point of (2) is that these objects may be part of an in-progress operation that has not yet updated any refs.

However, it is not always the case that objects for an in-progress operation will have a recent mtime. For example, the object database may have an old copy of a blob (from an abandoned operation, a branch that was deleted, etc). If we create a new tree that points to it, a simultaneous prune will leave our tree, but delete the blob. Referencing that tree with a commit will then work (we check that the tree is in the object database, but not that all of its referred objects are), as will mentioning the commit in a ref. But the resulting repo is corrupt; we are missing the blob reachable from a ref.

One way to solve this is to be more thorough when referencing a sha1: make sure that not only do we have that sha1, but that we have objects it refers to, and so forth recursively. The problem is that this is very expensive. Creating a parent link would require traversing the entire object graph!

Instead, this patch pushes the extra work onto prune, which runs less frequently (and has to look at the whole object graph anyway). It creates a new category of objects: objects which are not recent, but which are reachable from a recent object. We do not prune these objects, just like the reachable and recent ones.

This lets us avoid the recursive check above, because if we have an object, even if it is unreachable, we should have its referent. We can make a simple inductive argument that with this patch, this property holds (that there are no objects with missing referents in the repository):

  0. When we have no objects, we have nothing to refer or be referred to, so the property holds.

  1. If we add objects to the repository, their direct referents must generally exist (e.g., if you create a tree, the blobs it references must exist; if you create a commit to point at the tree, the tree must exist). This is already the case before this patch. And it is not 100% foolproof (you can make bogus objects using `git hash-object`, for example), but it should be the case for normal usage. Therefore for any sequence of object additions, the property will continue to hold.

  2. If we remove objects from the repository, then we will not remove a child object (like a blob) if an object that refers to it is being kept. That is the part implemented by this patch.

Note, however, that our reachability check and the actual pruning are not atomic. So it _is_ still possible to violate the property (e.g., an object becomes referenced just as we are deleting it). This patch is shooting for eliminating problems where the mtimes of dependent objects differ by hours or days, and one is dropped without the other. It does nothing to help with short races.

Naively, the simplest way to implement this would be to add all recent objects as tips to the reachability traversal. However, this does not perform well. In a recently-packed repository, all reachable objects will also be recent, and therefore we have to look at each object twice. This patch instead performs the reachability traversal, then follows up with a second traversal for recent objects, skipping any that have already been marked.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

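A toy model of the two-pass marking described in the last paragraph; the types and helpers below are illustrative only, not git's actual object or reachability machinery:

    #include <time.h>

    struct obj {
            struct obj **refs;      /* objects this object points at */
            int nr_refs;
            time_t mtime;
            int keep;
    };

    static void mark(struct obj *o)
    {
            int i;

            if (o->keep)
                    return;         /* already visited: pass 2 skips marked objects */
            o->keep = 1;
            for (i = 0; i < o->nr_refs; i++)
                    mark(o->refs[i]);
    }

    static void decide_what_to_keep(struct obj **all, int nr,
                                    struct obj **tips, int nr_tips, time_t expire)
    {
            int i;

            for (i = 0; i < nr_tips; i++)           /* pass 1: refs, reflogs, index */
                    mark(tips[i]);
            for (i = 0; i < nr; i++)                /* pass 2: recent objects as extra tips */
                    if (!all[i]->keep && all[i]->mtime > expire)
                            mark(all[i]);
            /* objects still unmarked are old, unreachable, and not referenced
               by anything recent, so they are safe to prune */
    }
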
2014-10-16  count-objects: use for_each_loose_file_in_objdir  (Jeff King; 1 file, -71/+30)
This drops our line count considerably, and should make things more readable by keeping the counting logic separate from the traversal.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-16  count-objects: do not use xsize_t when counting object size  (Jeff King; 1 file, -1/+1)
The point of xsize_t is to safely cast an off_t into a size_t (because we are about to mmap). But in count-objects, we are summing the sizes in an off_t. Using xsize_t means that count-objects could fail on a 32-bit system with a 4G object (not likely, as other parts of git would fail, but we should at least be correct here).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-16  prune-packed: use for_each_loose_file_in_objdir  (Jeff King; 1 file, -46/+23)
This saves us from manually traversing the directory structure ourselves.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-16  prune: factor out loose-object directory traversal  (Jeff King; 1 file, -61/+26)
Prune has to walk $GIT_DIR/objects/?? in order to find the set of loose objects to prune. Other parts of the code (e.g., count-objects) want to do the same. Let's factor it out into a reusable for_each-style function.

Note that this is not quite a straight code movement. The original code had strange behavior when it found a file of the form "[0-9a-f]{2}/.{38}" that did _not_ contain all hex digits. It executed a "break" from the loop, meaning that we stopped pruning in that directory (but still pruned other directories!). This was probably a bug; we do not want to process the file as an object, but we should keep going otherwise (and that is how the new code handles it).

We are also a little more careful with loose object directories which fail to open. The original code silently ignored any failures, but the new code will complain about any problems besides ENOENT.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

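The resulting helper is callback-driven; usage looks roughly like this (a sketch of the shape of the API as described here; the exact parameter list in sha1_file.c may differ, and count_loose/count_loose_objects are made-up names):

    static int count_loose(const unsigned char *sha1, const char *path, void *data)
    {
            (*(unsigned long *)data)++;
            return 0;       /* a non-zero return stops the iteration */
    }

    static unsigned long count_loose_objects(void)
    {
            unsigned long n = 0;

            /* object callback, cruft callback, subdir callback, user data */
            for_each_loose_file_in_objdir(get_object_directory(),
                                          count_loose, NULL, NULL, &n);
            return n;
    }
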
2014-10-15  remote rm/prune: print a message when writing packed-refs fails  (Ronnie Sahlberg; 1 file, -4/+11)
Until v2.1.0-rc0~22^2~11 (refs.c: add an err argument to repack_without_refs, 2014-06-20), repack_without_refs forgot to provide an error message when commit_packed_refs fails. Even today, it only provides a message for callers that pass a non-NULL err parameter.

Internal callers in refs.c pass non-NULL err, but "git remote" does not. That means that "git remote rm" and "git remote prune" can fail without printing a message about why. Fix them by passing in a non-NULL err parameter and printing the returned message.

This is the last caller of a ref handling function passing err == NULL. A later patch can drop support for err == NULL, avoiding such problems in the future.

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-15  for-each-ref: skip and warn about broken ref names  (Ronnie Sahlberg; 1 file, -0/+5)
Print a warning message for any bad ref names we find in the repo and skip them so callers don't have to deal with parsing them.

It might be useful in the future to have a flag where we would not skip these refs for those callers that want to and are prepared (for example by using a --format argument with %0 as a delimiter after the ref name).

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-15  refs.c: allow listing and deleting badly named refs  (Ronnie Sahlberg; 1 file, -4/+5)
We currently do not handle badly named refs well:

  $ cp .git/refs/heads/master .git/refs/heads/master.....@\*@\\.
  $ git branch
  fatal: Reference has invalid format: 'refs/heads/master.....@*@\.'
  $ git branch -D master.....@\*@\\.
  error: branch 'master.....@*@\.' not found.

Users cannot recover from a badly named ref without manually finding and deleting the loose ref file or appropriate line in packed-refs. Making that easier will make it easier to tweak the ref naming rules in the future, for example to forbid shell metacharacters like '`' and '"', without putting people in a state that is hard to get out of.

So allow "branch --list" to show these refs and allow "branch -d/-D" and "update-ref -d" to delete them. Other commands (for example to rename refs) will continue to not handle these refs but can be changed in later patches.

Details:

In resolving functions, refuse to resolve refs that don't pass the git-check-ref-format(1) check unless the new RESOLVE_REF_ALLOW_BAD_NAME flag is passed. Even with RESOLVE_REF_ALLOW_BAD_NAME, refuse to resolve refs that escape the refs/ directory and do not match the pattern [A-Z_]* (think "HEAD" and "MERGE_HEAD").

In locking functions, refuse to act on badly named refs unless they are being deleted and either are in the refs/ directory or match [A-Z_]*.

Just like other invalid refs, flag resolved, badly named refs with the REF_ISBROKEN flag, treat them as resolving to null_sha1, and skip them in all iteration functions except for for_each_rawref.

Flag badly named refs (but not symrefs pointing to badly named refs) with a REF_BAD_NAME flag to make it easier for future callers to notice and handle them specially. For example, in a later patch for-each-ref will use this flag to detect refs whose names can confuse callers parsing for-each-ref output.

In the transaction API, refuse to create or update badly named refs, but allow deleting them (unless they try to escape refs/ and don't match [A-Z_]*).

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-15  branch -d: simplify by using RESOLVE_REF_READING  (Ronnie Sahlberg; 1 file, -3/+4)
When "git branch -d" reads the branch it is about to delete, it used to avoid passing the RESOLVE_REF_READING ('treat missing ref as error') flag because a symref pointing to a nonexistent ref would show up as missing instead of as something that could be deleted. To check if a ref is actually missing, we then check:

* is it a symref?
* if not, did it resolve to null_sha1?

Now we pass RESOLVE_REF_NO_RECURSE and the correct information is returned for a symref even when it points to a missing ref. Simplify by relying on RESOLVE_REF_READING.

No functional change intended.

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-15  branch -d: avoid repeated symref resolution  (Jonathan Nieder; 1 file, -1/+2)
If a repository gets in a broken state with too much symref nesting, it cannot be repaired with "git branch -d":

  $ git symbolic-ref refs/heads/nonsense refs/heads/nonsense
  $ git branch -d nonsense
  error: branch 'nonsense' not found.

Worse, "git update-ref --no-deref -d" doesn't work for such repairs either:

  $ git update-ref -d refs/heads/nonsense
  error: unable to resolve reference refs/heads/nonsense: Too many levels of symbolic links

Fix both by teaching resolve_ref_unsafe a new RESOLVE_REF_NO_RECURSE flag and passing it when appropriate. Callers can still read the value of a symref (for example to print a message about it) with that flag set: resolve_ref_unsafe will resolve one level of symrefs and stop there.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-15  refs.c: change resolve_ref_unsafe reading argument to be a flags field  (Ronnie Sahlberg; 15 files, -24/+32)
resolve_ref_unsafe takes a boolean argument for reading (a nonexistent ref resolves successfully for writing but not for reading). Change this to be a flags field instead, and pass the new constant RESOLVE_REF_READING when we want this behaviour.

While at it, swap two of the arguments in the function to put output arguments at the end. As a nice side effect, this ensures that we can catch callers that were unaware of the new API so they can be audited.

Give the wrapper functions resolve_refdup and read_ref_full the same treatment for consistency.

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

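In rough terms the prototype changes like this (reconstructed from the description above; the actual declaration in refs.h may differ in detail):

    /*
     * before: a boolean "reading" argument, output argument in the middle
     *
     *   const char *resolve_ref_unsafe(const char *refname,
     *                                  unsigned char *sha1,
     *                                  int reading, int *flags);
     */

    /* after: a resolve_flags bitfield, output arguments moved to the end;
       callers that used to pass reading=1 now pass RESOLVE_REF_READING */
    const char *resolve_ref_unsafe(const char *refname, int resolve_flags,
                                   unsigned char *sha1, int *flags);
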
2014-10-15  fetch.c: change s_update_ref to use a ref transaction  (Ronnie Sahlberg; 1 file, -10/+24)
Change s_update_ref to use a ref transaction for the ref update.

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

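A transaction-based update follows the begin/update/commit pattern. A sketch of its shape (the argument lists of the transaction API were still in flux during this series, so treat the exact signatures as an approximation; the helper name is made up):

    static int update_one_ref_sketch(struct ref *ref, const char *msg)
    {
            struct strbuf err = STRBUF_INIT;
            struct ref_transaction *transaction;
            int ret = 0;

            transaction = ref_transaction_begin(&err);
            if (!transaction ||
                ref_transaction_update(transaction, ref->name,
                                       ref->new_sha1, ref->old_sha1,
                                       0, 1 /* have_old */, msg, &err) ||
                ref_transaction_commit(transaction, &err)) {
                    error("%s", err.buf);
                    ret = -1;
            }
            ref_transaction_free(transaction);
            strbuf_release(&err);
            return ret;
    }
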
2014-10-15  refs.c: pass the ref log message to _create/delete/update instead of _commit  (Ronnie Sahlberg; 5 files, -14/+17)
Change the ref transaction API so that we pass the reflog message to the create/delete/update functions instead of to ref_transaction_commit. This allows different reflog messages for each ref update in a multi-ref transaction.

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-14  color_parse: do not mention variable name in error message  (Jeff King; 5 files, -11/+13)
Originally the color-parsing function was used only for config variables. It made sense to pass the variable name so that the die() message could be something like:

  $ git -c color.branch.plain=bogus branch
  fatal: bad color value 'bogus' for variable 'color.branch.plain'

These days we call it in other contexts, and the resulting error messages are a little confusing:

  $ git log --pretty='%C(bogus)'
  fatal: bad color value 'bogus' for variable '--pretty format'

  $ git config --get-color foo.bar bogus
  fatal: bad color value 'bogus' for variable 'command line'

This patch teaches color_parse to complain only about the value, and then return an error code. Config callers can then propagate that up to the config parser, which mentions the variable name. Other callers can provide a custom message. After this patch these three cases now look like:

  $ git -c color.branch.plain=bogus branch
  error: invalid color value: bogus
  fatal: unable to parse 'color.branch.plain' from command-line config

  $ git log --pretty='%C(bogus)'
  error: invalid color value: bogus
  fatal: unable to parse --pretty format

  $ git config --get-color foo.bar bogus
  error: invalid color value: bogus
  fatal: unable to parse default color value

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

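For a config caller, the new error-code contract looks roughly like this (a sketch assuming git's color.h and config helpers; example_config, color.example.plain and plain_color are made-up names, not real git code):

    static char plain_color[COLOR_MAXLEN];

    static int example_config(const char *var, const char *value, void *cb)
    {
            if (!strcmp(var, "color.example.plain")) {
                    if (!value)
                            return config_error_nonbool(var);
                    /* color_parse() now prints "error: invalid color value: ..."
                       and returns -1; propagating that lets the config parser
                       name the offending variable in its own fatal message */
                    return color_parse(value, plain_color);
            }
            return git_default_config(var, value, cb);
    }
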
2014-10-14  pass config slots as pointers instead of offsets  (Jonathan Nieder; 3 files, -19/+18)
Many config-parsing helpers, like parse_branch_color_slot, take the name of a config variable and an offset to the "slot" name (e.g., "color.branch.plain" is passed along with "13" to effectively pass "plain").

This is leftover from the time that these functions would die() on error, and would want the full variable name for error reporting. These days they do not use the full variable name at all.

Passing a single pointer to the slot name is more natural, and lets us more easily adjust the callers to use skip_prefix to avoid manually writing offset numbers.

This is effectively a continuation of 9e1a5eb, which did the same for parse_diff_color_slot. This patch covers all of the remaining similar constructs.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

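Schematically, the helper now sees only the slot name, and the caller uses skip_prefix() instead of a hard-coded offset like 13 for "color.branch." (parse_slot and handle_color_var below are made-up names, shown only to illustrate the calling convention):

    static int parse_slot(const char *slot)
    {
            if (!strcmp(slot, "plain"))
                    return 0;
            if (!strcmp(slot, "current"))
                    return 1;
            return -1;
    }

    static int handle_color_var(const char *var)
    {
            const char *slot_name;

            if (skip_prefix(var, "color.branch.", &slot_name))
                    return parse_slot(slot_name);
            return -1;
    }
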
2014-10-14  Merge branch 'rs/more-uses-of-skip-prefix'  (Junio C Hamano; 9 files, -45/+43)
* rs/more-uses-of-skip-prefix:
  use skip_prefix() to avoid more magic numbers

2014-10-14  Merge branch 'rs/mailsplit'  (Junio C Hamano; 1 file, -1/+0)
* rs/mailsplit:
  mailsplit: remove unnecessary unlink(2) call

2014-10-14  Merge branch 'mh/lockfile'  (Junio C Hamano; 17 files, -24/+34)
The lockfile API and its users have been cleaned up.

* mh/lockfile: (38 commits)
  lockfile.h: extract new header file for the functions in lockfile.c
  hold_locked_index(): move from lockfile.c to read-cache.c
  hold_lock_file_for_append(): restore errno before returning
  get_locked_file_path(): new function
  lockfile.c: rename static functions
  lockfile: rename LOCK_NODEREF to LOCK_NO_DEREF
  commit_lock_file_to(): refactor a helper out of commit_lock_file()
  trim_last_path_component(): replace last_path_elm()
  resolve_symlink(): take a strbuf parameter
  resolve_symlink(): use a strbuf for internal scratch space
  lockfile: change lock_file::filename into a strbuf
  commit_lock_file(): use a strbuf to manage temporary space
  try_merge_strategy(): use a statically-allocated lock_file object
  try_merge_strategy(): remove redundant lock_file allocation
  struct lock_file: declare some fields volatile
  lockfile: avoid transitory invalid states
  git_config_set_multivar_in_file(): avoid call to rollback_lock_file()
  dump_marks(): remove a redundant call to rollback_lock_file()
  api-lockfile: document edge cases
  commit_lock_file(): rollback lock file on failure to rename
  ...

2014-10-13  trailer: add interpret-trailers command  (Christian Couder; 1 file, -0/+44)
This patch adds the "git interpret-trailers" command. This command uses the previously added process_trailers() function in trailer.c.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-13  pack-objects: set number of threads before checking and warning  (Junio C Hamano; 1 file, -2/+4)
Under a NO_PTHREADS build, we warn when delta_search_threads is not set to 1, because that is the only sensible value on a single-threaded build.

However, the auto-detection that kicks in when that variable is set to 0 (e.g. there is no configuration variable or command line option, or an explicit --threads=0 is given from the command line to override the pack.threads configuration to force auto-detection) was not done before the condition to issue this warning was tested.

Move the auto-detection code and place it at an appropriate spot.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

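The ordering fix is essentially this hunk (a sketch close to, but not necessarily identical with, the builtin/pack-objects.c code):

    if (!delta_search_threads)      /* --threads=0 or unset: autodetect */
            delta_search_threads = online_cpus();

    #ifdef NO_PTHREADS
    if (delta_search_threads != 1)
            warning("no threads support, ignoring --threads");
    #endif
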
2014-10-13  index-pack: fix compilation with NO_PTHREADS  (Etienne Buira; 1 file, -0/+3)
type_cas_lock/unlock() should be defined as no-ops for a NO_PTHREADS build, just like all the other locking primitives.

Signed-off-by: Etienne Buira <etienne.buira@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

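In index-pack the locking helpers compile away without pthreads; the fix adds the missing pair, roughly (a sketch of the NO_PTHREADS side only):

    #ifdef NO_PTHREADS
    /* the threaded definitions live elsewhere; without pthreads these
       simply expand to nothing, like the other lock macros */
    #define type_cas_lock()
    #define type_cas_unlock()
    #endif
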
2014-10-13  receive-pack: plug minor memory leak in unpack()  (René Scharfe; 1 file, -10/+8)
The argv_array used in unpack() is never freed. Instead of adding explicit calls to argv_array_clear(), use the args member of struct child_process and let run_command() and friends clean up for us.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-10  doc: add 'everyday' to 'git help'  (Philip Oakley; 1 file, -0/+1)
The "Everyday GIT With 20 Commands Or So" document is not accessible via the Git help system.

Move everyday.txt to giteveryday.txt so that "git help everyday" works, and create a new placeholder file everyday.html to refer people who follow existing URLs to the updated location.

giteveryday.txt now formats well with AsciiDoc as a man page, with the content refreshed in a more modern command style.

Add 'everyday' to the help --guides list and update git(1) and 5 other links to giteveryday.

Signed-off-by: Philip Oakley <philipoakley@iee.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-10-08  Merge branch 'jc/push-cert'  (Junio C Hamano; 3 files, -40/+358)
Allow "git push" requests to be signed, so that it can be verified and audited, using the GPG signature of the person who pushed, that the tips of branches at a public repository really point at the commits the pusher wanted to, without having to "trust" the server.

* jc/push-cert: (24 commits)
  receive-pack::hmac_sha1(): copy the entire SHA-1 hash out
  signed push: allow stale nonce in stateless mode
  signed push: teach smart-HTTP to pass "git push --signed" around
  signed push: fortify against replay attacks
  signed push: add "pushee" header to push certificate
  signed push: remove duplicated protocol info
  send-pack: send feature request on push-cert packet
  receive-pack: GPG-validate push certificates
  push: the beginning of "git push --signed"
  pack-protocol doc: typofix for PKT-LINE
  gpg-interface: move parse_signature() to where it should be
  gpg-interface: move parse_gpg_output() to where it should be
  send-pack: clarify that cmds_sent is a boolean
  send-pack: refactor inspecting and resetting status and sending commands
  send-pack: rename "new_refs" to "need_pack_data"
  receive-pack: factor out capability string generation
  send-pack: factor out capability string generation
  send-pack: always send capabilities
  send-pack: refactor decision to send update per ref
  send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher
  ...

2014-10-07  use skip_prefix() to avoid more magic numbers  (René Scharfe; 9 files, -45/+43)
Continue where ae021d87 (use skip_prefix to avoid magic numbers) left off and use skip_prefix() in more places for determining the lengths of prefix strings to avoid using dependent constants and other indirect methods.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

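The general shape of the conversion, shown on a hypothetical helper (tag_shortname is not a real git function; the "before" form is kept as a comment for comparison):

    static const char *tag_shortname(const char *refname)
    {
            const char *name;

            /* before: if (starts_with(refname, "refs/tags/"))
             *                 name = refname + 10;    10 == strlen("refs/tags/") */
            if (skip_prefix(refname, "refs/tags/", &name))
                    return name;    /* skip_prefix() applied the length for us */
            return refname;
    }
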