path: root/shallow.c
2017-05-08  shallow: convert shallow registration functions to object_id  (brian m. carlson; 1 file, -6/+6)

Convert register_shallow and unregister_shallow to take struct object_id. register_shallow is a caller of lookup_commit, which we will convert later. It doesn't make sense for the registration and unregistration functions to have incompatible interfaces, so convert them both.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-31  Rename sha1_array to oid_array  (brian m. carlson; 1 file, -6/+6)

Since this structure handles an array of object IDs, rename it to struct oid_array. Also rename the accessor functions and the initialization constant.

This commit was produced mechanically by providing non-Documentation files to the following Perl one-liners:

    perl -pi -E 's/struct sha1_array/struct oid_array/g'
    perl -pi -E 's/\bsha1_array_/oid_array_/g'
    perl -pi -E 's/SHA1_ARRAY_INIT/OID_ARRAY_INIT/g'

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-28  sha1-array: convert internal storage for struct sha1_array to object_id  (brian m. carlson; 1 file, -13/+13)

Make the internal storage for struct sha1_array use an array of struct object_id internally. Update the users of this struct which inspect its internals.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-21  Merge branch 'nd/shallow-fixup'  (Junio C Hamano; 1 file, -19/+20)

Code cleanup in shallow boundary computation.

* nd/shallow-fixup:
  shallow.c: remove useless code
  shallow.c: bit manipulation tweaks
  shallow.c: avoid theoretical pointer wrap-around
  shallow.c: make paint_alloc slightly more robust
  shallow.c: stop abusing COMMIT_SLAB_SIZE for paint_info's memory pools
  shallow.c: rename fields in paint_info to better express their purposes
2016-12-07  shallow.c: remove useless code  (Nguyễn Thái Ngọc Duy; 1 file, -4/+0)

Some context before we talk about the removed code.

This paint_down() is part of step 6 of 58babff (shallow.c: the 8 steps to select new commits for .git/shallow - 2013-12-05). When we fetch from a shallow repository, we need to know whether any of the new/updated refs needs new "shallow commits" in .git/shallow (because we don't have enough history of those refs), and which ones.

The question at step 6 is: what (new) shallow commits are required in order to maintain reachability throughout the repository _without_ cutting our history short? To answer it, we mark all commits reachable from existing refs with UNINTERESTING ("rev-list --not --all"), mark shallow commits with BOTTOM, then for each new/updated ref, walk through the commit graph until we hit either UNINTERESTING or BOTTOM, marking the ref on each commit as we walk. After all the walking is done, we check the new shallow commits. If no new ref has been marked on a new shallow commit, we know all new/updated refs are reachable using just our history and .git/shallow; the shallow commit in question is not needed and can be thrown away.

So, the code. The loop here (to walk through commits) is basically:

    1.  get one commit from the queue
    2.  ignore it if it's SEEN or UNINTERESTING
    3.  mark it
    4.  go through all its parents and...
    5a. mark each parent if it has never been marked before
    5b. put it back in the queue

What we do in this patch is drop step 5a because it is not necessary. The commit being marked at 5a is put back on the queue and will be marked at step 3 on the next iteration. The only case it will not be marked is when it is already marked UNINTERESTING (5a does not check this), and such a commit is ignored at step 2 anyway. But we don't care about ref marks on UNINTERESTING commits. We care about the marks on _shallow commits_ that are not reachable from our current history (and having UNINTERESTING on a commit means it is reachable). So it's fine for an UNINTERESTING commit not to be ref-marked.

Reported-by: Rasmus Villemoes <rv@rasmusvillemoes.dk>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
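[Editorial sketch] The walk above is easier to see in code. Below is a minimal, self-contained sketch of that loop using toy types (a fixed-size stack, a two-parent commit struct, ref ids below 32) rather than git's real commit and prio_queue structures; it only illustrates why the extra marking at step 5a is redundant.

    #include <stddef.h>

    #define SEEN          (1u << 0)
    #define UNINTERESTING (1u << 1)

    struct toy_commit {
        unsigned flags;
        unsigned refmask;              /* bit i set: ref i reaches this commit */
        struct toy_commit *parents[2]; /* at most two parents, for brevity */
    };

    static void toy_paint_down(struct toy_commit *start, unsigned id)
    {
        struct toy_commit *stack[256];
        size_t nr = 0;

        stack[nr++] = start;
        while (nr) {
            struct toy_commit *c = stack[--nr];    /* 1. take a commit       */
            size_t i;

            if (c->flags & (SEEN | UNINTERESTING)) /* 2. skip                */
                continue;
            c->flags |= SEEN;
            c->refmask |= 1u << id;                /* 3. mark it             */

            for (i = 0; i < 2; i++) {              /* 4. its parents...      */
                struct toy_commit *p = c->parents[i];
                if (p && nr < 256)
                    stack[nr++] = p;               /* 5b. queue; no 5a mark  */
            }
        }
    }

A parent may be pushed more than once before it is popped, but step 2 skips it the second time, so dropping the mark at 5a changes nothing observable for the shallow check.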
2016-12-07  shallow.c: bit manipulation tweaks  (Rasmus Villemoes; 1 file, -4/+4)

First of all, 1 << 31 is technically undefined behaviour, so let's just use an unsigned literal.

If i is 'signed int' and gcc doesn't know that i is positive, gcc generates code to compute the C99-mandated values of "i / 32" and "i % 32", which is a lot more complicated than a simple shift and mask. The only caller of paint_down actually passes an "unsigned int" value, but the prototype of paint_down causes a (completely well-defined) conversion to signed int, and gcc has no way of knowing that the converted value is non-negative. Just make the id parameter unsigned.

In update_refstatus, the change in generated code is much smaller, presumably because gcc is smart enough to see that i starts at 0 and is only incremented, so it is allowed (because signed overflow is undefined behaviour) to assume that i is always non-negative. But let's help less smart compilers generate good code anyway.

Signed-off-by: Rasmus Villemoes <rv@rasmusvillemoes.dk>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
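[Editorial sketch] As an illustration only (the function and parameter names here are made up, not the exact ones in shallow.c), the before/after pattern looks like this:

    /* Before: '1 << 31' is undefined when id % 32 == 0, and with a signed
     * 'id' the compiler must emit full C99 division/modulo code. */
    static void set_ref_bit_signed(unsigned int *bitmap, int id)
    {
        bitmap[id / 32] |= 1 << (31 - id % 32);
    }

    /* After: unsigned literal and unsigned index; both / and % become a
     * plain shift and mask. */
    static void set_ref_bit_unsigned(unsigned int *bitmap, unsigned int id)
    {
        bitmap[id / 32] |= 1u << (31 - id % 32);
    }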
2016-12-07  shallow.c: avoid theoretical pointer wrap-around  (Rasmus Villemoes; 1 file, -1/+1)

The expression info->free + size is technically undefined behaviour in exactly the case we want to test for. Moreover, the compiler is likely to translate the expression to

    (unsigned long)info->free + size > (unsigned long)info->end

where there's at least a theoretical chance that the LHS could wrap around 0, giving a false negative. The check might as well be written using pointer subtraction, avoiding these issues.

Signed-off-by: Rasmus Villemoes <rv@rasmusvillemoes.dk>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-07  shallow.c: make paint_alloc slightly more robust  (Nguyễn Thái Ngọc Duy; 1 file, -0/+3)

paint_alloc() allocates a big block of memory and splits it into smaller, fixed-size chunks of memory whenever it's called. Each chunk contains enough bits to represent all "new refs" [1] in a fetch from a shallow repository.

We do not check whether the new "big block" is smaller than the requested memory chunk, though. If that happens, we'll happily pass back a memory region smaller than expected, which will lead to problems eventually.

A normal fetch may add/update a dozen new refs. Let's stay on the "reasonably extreme" side and say we need 16k refs (or bits, from paint_alloc's perspective). Each chunk of memory would then be 2k, much smaller than the memory pool (512k). So, normally, the under-allocation situation should never happen. A bad guy, however, could make a fetch that adds more than 4m new/updated refs, which would result in a memory chunk larger than the pool size. Check this case and abort.

[1] Details are in the commit message of 58babff (shallow.c: the 8 steps to select new commits for .git/shallow - 2013-12-05), step 6.

Noticed-by: Rasmus Villemoes <rv@rasmusvillemoes.dk>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
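[Editorial sketch] The shape of the added guard, with made-up names and a hypothetical pool size; the real constant, structure fields and error message live in shallow.c:

    #include "git-compat-util.h"

    #define POOL_SIZE (512 * 1024) /* hypothetical pool block size */

    /* One chunk of 'nr_bits' bits must fit inside one pool block; otherwise
     * a hostile fetch with millions of new refs would make us hand out a
     * chunk smaller than requested. */
    static void check_chunk_fits(unsigned nr_bits)
    {
        unsigned size = (nr_bits + 7) / 8;

        if (size > POOL_SIZE)
            die("BUG: pool size too small for %u bits in paint_alloc()", nr_bits);
    }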
2016-12-07  shallow.c: stop abusing COMMIT_SLAB_SIZE for paint_info's memory pools  (Nguyễn Thái Ngọc Duy; 1 file, -2/+4)

We need to allocate a "big" block of memory in paint_alloc(). The exact size does not really matter. But the pool size has no relation with commit-slab. Stop using that macro here.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-12-07  shallow.c: rename fields in paint_info to better express their purposes  (Nguyễn Thái Ngọc Duy; 1 file, -9/+9)

paint_alloc() is basically malloc(), tuned for allocating a fixed number of bits on every call, without worrying about freeing any individual allocation since all will be freed at the end. It does this by allocating a big block of memory every time it runs out of "free memory". "slab" is a poor choice of name, at least poorer than "pool".

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-31  Merge branch 'ls/filter-process'  (Junio C Hamano; 1 file, -1/+1)

The smudge/clean filter API expects an external process to be spawned to filter the contents for each path that has a filter defined. A new type of "process" filter API has been added to allow the first request to run the filter for a path to spawn a single process, and all filtering needs are served by this single process for multiple paths, reducing the process creation overhead.

* ls/filter-process:
  contrib/long-running-filter: add long running filter example
  convert: add filter.<driver>.process option
  convert: prepare filter.<driver>.process option
  convert: make apply_filter() adhere to standard Git error handling
  pkt-line: add functions to read/write flush terminated packet streams
  pkt-line: add packet_write_gently()
  pkt-line: add packet_flush_gently()
  pkt-line: add packet_write_fmt_gently()
  pkt-line: extract set_packet_header()
  pkt-line: rename packet_write() to packet_write_fmt()
  run-command: add clean_on_exit_handler
  run-command: move check_pipe() from write_or_die to run_command
  convert: modernize tests
  convert: quote filter names in error messages
2016-10-17  pkt-line: rename packet_write() to packet_write_fmt()  (Lars Schneider; 1 file, -1/+1)

packet_write() should be called packet_write_fmt() because it is a printf-like function that takes a format string as first parameter. packet_write_fmt() should be used for text strings only. Arbitrary binary data should use a new packet_write() function that is introduced in a subsequent patch.

Suggested-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-10  Merge branch 'nd/shallow-deepen'  (Junio C Hamano; 1 file, -0/+78)

The existing "git fetch --depth=<n>" option was hard to use correctly when making the history of an existing shallow clone deeper. A new option, "--deepen=<n>", has been added to make this easier to use. "git clone" also learned "--shallow-since=<date>" and "--shallow-exclude=<tag>" options to make it easier to specify "I am interested only in the recent N months worth of history" and "Give me only the history since that version".

* nd/shallow-deepen: (27 commits)
  fetch, upload-pack: --deepen=N extends shallow boundary by N commits
  upload-pack: add get_reachable_list()
  upload-pack: split check_unreachable() in two, prep for get_reachable_list()
  t5500, t5539: tests for shallow depth excluding a ref
  clone: define shallow clone boundary with --shallow-exclude
  fetch: define shallow boundary with --shallow-exclude
  upload-pack: support define shallow boundary by excluding revisions
  refs: add expand_ref()
  t5500, t5539: tests for shallow depth since a specific date
  clone: define shallow clone boundary based on time with --shallow-since
  fetch: define shallow boundary with --shallow-since
  upload-pack: add deepen-since to cut shallow repos based on time
  shallow.c: implement a generic shallow boundary finder based on rev-list
  fetch-pack: use a separate flag for fetch in deepening mode
  fetch-pack.c: mark strings for translating
  fetch-pack: use a common function for verbose printing
  fetch-pack: use skip_prefix() instead of starts_with()
  upload-pack: move rev-list code out of check_non_tip()
  upload-pack: make check_non_tip() clean things up on error
  upload-pack: tighten number parsing at "deepen" lines
  ...
2016-08-01  pass constants as first argument to st_mult()  (René Scharfe; 1 file, -1/+1)

The result of st_mult() is the same no matter the order of its arguments. It invokes the macro unsigned_mult_overflows(), which divides the second parameter by the first one. Pass constants first to allow that division to be done already at compile time.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
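[Editorial sketch] A hypothetical call site showing the ordering; the actual line changed in shallow.c may differ, while st_mult() and xmalloc() are git's existing helpers:

    #include "git-compat-util.h"

    static uint32_t *alloc_bitmap(size_t nr_words)
    {
        /* constant first: the overflow check's division by the first
         * argument can be folded at compile time */
        return xmalloc(st_mult(sizeof(uint32_t), nr_words));
        /* rather than: xmalloc(st_mult(nr_words, sizeof(uint32_t))) */
    }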
2016-06-13  shallow.c: implement a generic shallow boundary finder based on rev-list  (Nguyễn Thái Ngọc Duy; 1 file, -0/+78)

Instead of a custom commit walker like get_shallow_commits(), this new function uses rev-list to mark NOT_SHALLOW on all reachable commits, except borders. The definition of "reachable" is to be defined by the protocol later. This makes it more flexible to define the shallow boundary.

The way we find the border is to paint all reachable commits NOT_SHALLOW. Any of them that "touch" commits without the NOT_SHALLOW flag are considered shallow (e.g. zero parents via the grafting mechanism). Shallow commits and their true parents are all marked SHALLOW. Then NOT_SHALLOW is removed from shallow commits at the end.

There is an interesting observation. With a generic walker, we can produce all kinds of shallow cutting. In the following graph, every commit but "x" is reachable. "b" is a parent of "a".

     x -- a -- o
         /    /
    x -- c -- b -- o

After this function is run, "a" and "c" are both considered shallow commits. After grafting occurs at the client side, what we see is

          a -- o
              /
     c -- b -- o

Notice that because of grafting, "a" has zero parents, so "b" is no longer a parent of "a". This is unfortunate and may be solved in two ways. The first is to change the way shallow grafting works and keep the "a -- b" connection if "b" exists and always ends at shallow commits (iow, no loose ends). This is hard to detect, or at least not cheap to do. The second way is to mark one "x" as a shallow commit instead of "a" and produce this graph at the client side:

     x -- a -- o
         /    /
      c -- b -- o

More commits, but simpler grafting rules.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-22  use st_add and st_mult for allocation size computation  (Jeff King; 1 file, -1/+1)

If our size computation overflows size_t, we may allocate a much smaller buffer than we expected and overflow it. It's probably impossible to trigger an overflow in most of these sites in practice, but it is easy enough to convert their additions and multiplications into overflow-checking variants. This may be fixing real bugs, and it makes auditing the code easier.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-02-22  convert trivial cases to ALLOC_ARRAY  (Jeff King; 1 file, -3/+3)

Each of these cases can be converted to use ALLOC_ARRAY or REALLOC_ARRAY, which has two advantages:

  1. It automatically checks the array-size multiplication for overflow.

  2. It always uses sizeof(*array) for the element-size, so that it can never go out of sync with the declared type of the array.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
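[Editorial sketch] The conversion pattern, with made-up variable names; ALLOC_ARRAY itself is the macro from git-compat-util.h, but the specific call sites in shallow.c may differ:

    #include "git-compat-util.h"

    static uint32_t *make_bitmap(size_t nr)
    {
        uint32_t *bitmap;

        /* before: bitmap = xmalloc(nr * sizeof(uint32_t)); */
        ALLOC_ARRAY(bitmap, nr); /* overflow-checked, uses sizeof(*bitmap) */
        return bitmap;
    }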
2015-11-20  Remove get_object_hash.  (brian m. carlson; 1 file, -1/+1)

Convert all instances of get_object_hash to use an appropriate reference to the hash member of the oid member of struct object. This provides no functional change, as it is essentially a macro substitution.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
2015-11-20  Convert struct object to object_id  (brian m. carlson; 1 file, -2/+2)

struct object is one of the major data structures dealing with object IDs. Convert it to use struct object_id instead of an unsigned char array. Convert get_object_hash to refer to the new member as well.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
2015-11-20  Add several uses of get_object_hash.  (brian m. carlson; 1 file, -1/+1)

Convert most instances where the sha1 member of struct object is dereferenced to use get_object_hash. Most instances that are passed to functions that have versions taking struct object_id, such as get_sha1_hex/get_oid_hex, or instances that can be trivially converted to use struct object_id instead, are not converted.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>
2015-10-30  Merge branch 'rs/pop-commit'  (Junio C Hamano; 1 file, -5/+1)

Code simplification.

* rs/pop-commit:
  use pop_commit() for consuming the first entry of a struct commit_list
2015-10-29  Merge branch 'tk/sigchain-unnecessary-post-tempfile'  (Junio C Hamano; 1 file, -1/+0)

Remove no-longer used #include.

* tk/sigchain-unnecessary-post-tempfile:
  shallow: remove unused #include "sigchain.h"
  read-cache: remove unused #include "sigchain.h"
  diff: remove unused #include "sigchain.h"
  credential-cache--daemon: remove unused #include "sigchain.h"
2015-10-26  use pop_commit() for consuming the first entry of a struct commit_list  (René Scharfe; 1 file, -5/+1)

Instead of open-coding the function pop_commit() just call it. This makes the intent clearer and reduces code size.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
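[Editorial sketch] The simplification pattern, shown with a made-up wrapper function; pop_commit() is the existing helper from commit.h:

    #include "commit.h"

    static struct commit *take_first(struct commit_list **list)
    {
        /* before (open-coded):
         *	struct commit *commit = (*list)->item;
         *	struct commit_list *next = (*list)->next;
         *	free(*list);
         *	*list = next;
         *	return commit;
         */
        return pop_commit(list); /* does the same in one call */
    }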
2015-10-22  shallow: remove unused #include "sigchain.h"  (Tobias Klauser; 1 file, -1/+0)

After switching to use the tempfile module in commit 6e122b44 (setup_temporary_shallow(): use tempfile module), no declarations from sigchain.h are used in shallow.c anymore. Thus, remove the #include.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-25  Merge branch 'mh/tempfile'  (Junio C Hamano; 1 file, -31/+10)

The "lockfile" API has been rebuilt on top of a new "tempfile" API.

* mh/tempfile:
  credential-cache--daemon: use tempfile module
  credential-cache--daemon: delete socket from main()
  gc: use tempfile module to handle gc.pid file
  lock_repo_for_gc(): compute the path to "gc.pid" only once
  diff: use tempfile module
  setup_temporary_shallow(): use tempfile module
  write_shared_index(): use tempfile module
  register_tempfile(): new function to handle an existing temporary file
  tempfile: add several functions for creating temporary files
  prepare_tempfile_object(): new function, extracted from create_tempfile()
  tempfile: a new module for handling temporary files
  commit_lock_file(): use get_locked_file_path()
  lockfile: add accessor get_lock_file_path()
  lockfile: add accessors get_lock_file_fd() and get_lock_file_fp()
  create_bundle(): duplicate file descriptor to avoid closing it twice
  lockfile: move documentation to lockfile.h and lockfile.c
2015-08-10  memoize common git-path "constant" files  (Jeff King; 1 file, -5/+5)

One of the most common uses of git_path() is to pass a constant, like git_path("MERGE_MSG"). This has two drawbacks:

  1. The return value is a static buffer, and the lifetime is dependent on other calls to git_path, etc.

  2. There's no compile-time checking of the pathname. This is OK for a one-off (after all, we have to spell it correctly at least once), but many of these constant strings appear throughout the code.

This patch introduces a series of functions to "memoize" these strings, which are essentially globals for the lifetime of the program. We compute the value once, take ownership of the buffer, and return the cached value for subsequent calls. cache.h provides a helper macro for defining these functions as one-liners, and defines a few common ones for global use.

Using a macro is a little bit gross, but it does nicely document the purpose of the functions. If we need to touch them all later (e.g., because we learned how to change the git_dir variable at runtime, and need to invalidate all of the stored values), it will be much easier to have the complete list.

Note that the shared-global functions have separate, manual declarations. We could do something clever with the macros (e.g., expand it to a declaration in some places, and a declaration _and_ a definition in path.c). But there aren't that many, and it's probably better to stay away from too-magical macros.

Likewise, if we abandon the C preprocessor in favor of generating these with a script, we could get much fancier. E.g., normalizing "FOO/BAR-BAZ" into "git_path_foo_bar_baz". But the small amount of saved typing is probably not worth the resulting confusion to readers who want to grep for the function's definition.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
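[Editorial sketch] The memoization described above boils down to a macro along these lines; this is a sketch of the idea, and the real helper in cache.h may differ in detail:

    #include "cache.h"

    /* Define a function that computes the path once and caches it. */
    #define GIT_PATH_FUNC(func, filename) \
        const char *func(void) \
        { \
            static char *ret; \
            if (!ret) \
                ret = git_pathdup(filename); \
            return ret; \
        }

    /* usage: define once, then call git_path_merge_msg() from anywhere */
    GIT_PATH_FUNC(git_path_merge_msg, "MERGE_MSG")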
2015-08-10  setup_temporary_shallow(): use tempfile module  (Michael Haggerty; 1 file, -28/+7)

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-10  lockfile: add accessor get_lock_file_path()  (Michael Haggerty; 1 file, -3/+3)

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25  shallow: rewrite functions to take object_id arguments  (Michael Haggerty; 1 file, -18/+11)

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25  each_ref_fn: change to take an object_id parameter  (Michael Haggerty; 1 file, -6/+13)

Change typedef each_ref_fn to take a "const struct object_id *oid" parameter instead of "const unsigned char *sha1".

To aid this transition, implement an adapter that can be used to wrap old-style functions matching the old typedef (which is now called "each_ref_sha1_fn"), and make such functions callable via the new interface. This requires the old function and its cb_data to be wrapped in a "struct each_ref_fn_sha1_adapter", and that object to be used as the cb_data for an adapter function, each_ref_fn_adapter().

This is an enormous diff, but most of it consists of simple, mechanical changes to the sites that call any of the "for_each_ref" family of functions. Subsequent to this change, the call sites can be rewritten one by one to use the new interface.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
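[Editorial sketch] A sketch of that adapter, using the names mentioned above; the real definitions lived in git's refs code and may differ in detail:

    #include "cache.h"

    /* old-style callback signature, taking a raw sha1 */
    typedef int each_ref_sha1_fn(const char *refname, const unsigned char *sha1,
                                 int flags, void *cb_data);

    struct each_ref_fn_sha1_adapter {
        each_ref_sha1_fn *fn;
        void *cb_data;
    };

    /* matches the new each_ref_fn signature and forwards to the old one */
    static int each_ref_fn_adapter(const char *refname,
                                   const struct object_id *oid,
                                   int flags, void *cb_data)
    {
        struct each_ref_fn_sha1_adapter *data = cb_data;

        return data->fn(refname, oid->hash, flags, data->cb_data);
    }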
2015-05-05  Merge branch 'bc/object-id'  (Junio C Hamano; 1 file, -4/+4)

Identify parts of the code that know too much about the fact that we use the SHA-1 hash to name our objects, and (1) use symbolic constants instead of hardcoded 20 as the byte count and/or (2) use struct object_id instead of unsigned char [20] for object names.

* bc/object-id:
  apply: convert threeway_stage to object_id
  patch-id: convert to use struct object_id
  commit: convert parts to struct object_id
  diff: convert struct combine_diff_path to object_id
  bulk-checkin.c: convert to use struct object_id
  zip: use GIT_SHA1_HEXSZ for trailers
  archive.c: convert to use struct object_id
  bisect.c: convert leaf functions to use struct object_id
  define utility functions for object IDs
  define a structure for object IDs
2015-03-13  commit: convert parts to struct object_id  (brian m. carlson; 1 file, -4/+4)

Convert struct commit_graft and necessary local parts of commit.c. Also, convert several constants based on the hex length of an SHA-1 to use GIT_SHA1_HEXSZ, and move several magic constants into variables for readability.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-02-24  Merge branch 'jk/blame-commit-label' into maint  (Junio C Hamano; 1 file, -1/+1)

"git blame HEAD -- missing" failed to correctly say "HEAD" when it tried to say "No such path 'missing' in HEAD".

* jk/blame-commit-label:
  blame.c: fix garbled error message
  use xstrdup_or_null to replace ternary conditionals
  builtin/commit.c: use xstrdup_or_null instead of envdup
  builtin/apply.c: use xstrdup_or_null instead of null_strdup
  git-compat-util: add xstrdup_or_null helper
2015-02-11  Merge branch 'jc/unused-symbols'  (Junio C Hamano; 1 file, -1/+1)

Mark file-local symbols as "static", and drop functions that nobody uses.

* jc/unused-symbols:
  shallow.c: make check_shallow_file_for_update() static
  remote.c: make clear_cas_option() static
  urlmatch.c: make match_urls() static
  revision.c: make save_parents() and free_saved_parents() static
  line-log.c: make line_log_data_init() static
  pack-bitmap.c: make pack_bitmap_filename() static
  prompt.c: remove git_getpass() nobody uses
  http.c: make finish_active_slot() and handle_curl_result() static
2015-02-11  Merge branch 'jk/blame-commit-label'  (Junio C Hamano; 1 file, -1/+1)

"git blame HEAD -- missing" failed to correctly say "HEAD" when it tried to say "No such path 'missing' in HEAD".

* jk/blame-commit-label:
  blame.c: fix garbled error message
  use xstrdup_or_null to replace ternary conditionals
  builtin/commit.c: use xstrdup_or_null instead of envdup
  builtin/apply.c: use xstrdup_or_null instead of null_strdup
  git-compat-util: add xstrdup_or_null helper
2015-01-15  shallow.c: make check_shallow_file_for_update() static  (Junio C Hamano; 1 file, -1/+1)

No external callers exist.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-13  use xstrdup_or_null to replace ternary conditionals  (Jeff King; 1 file, -1/+1)

This replaces "x ? xstrdup(x) : NULL" with xstrdup_or_null(x). The change is fairly mechanical, with the exception of resolve_refdup, which can eliminate a temporary variable.

There are still a few hits grepping for "?.*xstrdup", but these are of slightly different forms and cannot be converted (e.g., "x ? xstrdup(x->foo) : NULL").

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
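[Editorial sketch] The helper is small enough to show inline; this sketch matches the semantics described above, though the real definition lives in git-compat-util.h:

    #include <stdlib.h>

    extern char *xstrdup(const char *str); /* git's dying strdup wrapper */

    static inline char *xstrdup_or_null(const char *str)
    {
        return str ? xstrdup(str) : NULL;
    }

    /* before: ref = name ? xstrdup(name) : NULL;
     * after:  ref = xstrdup_or_null(name);
     */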
2014-10-24  Merge branch 'eb/no-pthreads'  (Junio C Hamano; 1 file, -5/+2)

Allow us to build with the NO_PTHREADS=NoThanks compilation option.

* eb/no-pthreads:
  Handle atexit list internaly for unthreaded builds
  pack-objects: set number of threads before checking and warning
  index-pack: fix compilation with NO_PTHREADS
2014-10-19  Handle atexit list internaly for unthreaded builds  (Etienne Buira; 1 file, -5/+2)

Wrap atexit() calls on unthreaded builds to handle the callback list internally.

This is needed because on unthreaded builds, asyncs inherit the parent's atexit() list, which gets run as soon as the async exit()s (and again at the end of the async's parent process). That led to temporary files being removed too early.

Also remove a by-atexit-callback guard against this kind of issue in clone.c, as this patch makes it redundant.

Fixes test 5537 (temporary shallow file vanished before unpack-objects could open it).

BTW, remove an unused variable in shallow.c.

Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Andreas Schwab <schwab@linux-m68k.org>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Etienne Buira <etienne.buira@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-10-01  lockfile.h: extract new header file for the functions in lockfile.c  (Michael Haggerty; 1 file, -0/+1)

Move the interface declaration for the functions in lockfile.c from cache.h to a new file, lockfile.h. Add #includes where necessary (and remove some redundant includes of cache.h by files that already include builtin.h).

Move the documentation of the lock_file state diagram from lockfile.c to the new header file.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-10-01  lockfile: change lock_file::filename into a strbuf  (Michael Haggerty; 1 file, -3/+3)

For now, we still make sure to allocate at least PATH_MAX characters for the strbuf because resolve_symlink() doesn't know how to expand the space for its return value. (That will be fixed in a moment.)

Another alternative would be to just use a strbuf as scratch space in lock_file() but then store a pointer to the naked string in struct lock_file. But lock_file objects are often reused. By reusing the same strbuf, we can avoid having to reallocate the string most times when a lock_file object is reused.

Helped-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-18  use REALLOC_ARRAY for changing the allocation size of arrays  (René Scharfe; 1 file, -2/+1)

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-13  trace: improve trace performance  (Karsten Blees; 1 file, -5/+5)

The trace API currently rechecks the environment variable and reopens the trace file on every API call. This has the ugly side effect that errors (e.g. file cannot be opened, or the user specified a relative path) are also reported on every call. Performance can be improved by about a factor of three by remembering the environment state and keeping the file open.

Replace the 'const char *key' parameter in the API with a pointer to a 'struct trace_key' that bundles the environment variable name with additional, trace-internal state. Change the call sites of these APIs to use a static 'struct trace_key' instead of a string constant.

In trace.c::get_trace_fd(), save and reuse the file descriptor in 'struct trace_key'.

Add a 'trace_disable()' API, so that packet_trace() can cleanly disable tracing when it encounters packed data (instead of using unsetenv()).

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
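[Editorial sketch] The call-site pattern described above looks roughly like this; TRACE_KEY_INIT and trace_printf_key are git's trace API, but the GIT_TRACE_SHALLOW key and the function shown are made up for illustration:

    #include "cache.h"
    #include "trace.h"

    /* one static key per trace category; reads GIT_TRACE_SHALLOW once */
    static struct trace_key trace_shallow = TRACE_KEY_INIT(SHALLOW);

    static void note_shallow_write(const char *path)
    {
        trace_printf_key(&trace_shallow, "shallow: writing %s\n", path);
    }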
2014-03-17  shallow: verify shallow file after taking lock  (Jeff King; 1 file, -2/+2)

Before writing the shallow file, we stat() the existing file to make sure it has not been updated since our operation began. However, we do not do so under a lock, so there is a possible race:

  1. Process A takes the lock.

  2. Process B calls check_shallow_file_for_update and finds no update.

  3. Process A commits the lockfile.

  4. Process B takes the lock, then overwrites process A's changes.

We can fix this by doing our check while we hold the lock.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-27  shallow: automatically clean up shallow tempfiles  (Jeff King; 1 file, -7/+34)

We sometimes write tempfiles of the form "shallow_XXXXXX" during fetch/push operations with shallow repositories. Under normal circumstances, we clean up the result when we are done. However, we do not take steps to clean up after ourselves when we exit due to die() or signal death. This patch teaches the tempfile creation code to register handlers to clean up after ourselves.

To handle this, we change the ownership semantics of the filename returned by setup_temporary_shallow. It now keeps a copy of the filename itself, and returns only a const pointer to it.

We can also do away with explicit tempfile removal in the callers. They all exit not long after finishing with the file, so they can rely on the auto-cleanup, simplifying the code.

Note that we keep things simple and maintain only a single filename to be cleaned. This is sufficient for the current caller, but we future-proof it with a die("BUG").

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-27  shallow: use stat_validity to check for up-to-date file  (Jeff King; 1 file, -17/+7)

When we are about to write the shallow file, we check that it has not changed since we last read it. Instead of hand-rolling this, we can use stat_validity. This is built around the index stat-check, so it is more robust than just checking the mtime, as we do now (it uses the same check as we do for index files).

The new code also handles the case of a shallow file appearing unexpectedly. With the current code, two simultaneous processes making us shallow (e.g., two "git fetch --depth=1" running at the same time in a non-shallow repository) can race to overwrite each other.

As a bonus, we also remove a race in determining the stat information of what we read (we stat and then open, leaving a race window; instead we should open and then fstat the descriptor).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
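[Editorial sketch] The stat_validity pattern described above, sketched with made-up function names; stat_validity_update() and stat_validity_check() are git's existing helpers, but the exact usage in shallow.c may differ:

    #include "cache.h"

    static struct stat_validity shallow_stat;

    /* after reading the shallow file from 'fd': remember its stat data */
    static void remember_shallow_state(int fd)
    {
        stat_validity_update(&shallow_stat, fd);
    }

    /* before writing: fail if the file changed (or appeared) since we read it */
    static void check_shallow_unchanged(void)
    {
        if (!stat_validity_check(&shallow_stat, git_path("shallow")))
            die("shallow file has changed since we read it");
    }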
2014-01-17  Merge branch 'nd/shallow-clone'  (Junio C Hamano; 1 file, -7/+462)

Fetching from a shallow-cloned repository used to be forbidden, primarily because the codepaths involved were not carefully vetted and we did not bother supporting such usage. This attempts to allow object transfer out of a shallow-cloned repository in a controlled way (i.e. the receiver becomes a shallow repository with truncated history).

* nd/shallow-clone: (31 commits)
  t5537: fix incorrect expectation in test case 10
  shallow: remove unused code
  send-pack.c: mark a file-local function static
  git-clone.txt: remove shallow clone limitations
  prune: clean .git/shallow after pruning objects
  clone: use git protocol for cloning shallow repo locally
  send-pack: support pushing from a shallow clone via http
  receive-pack: support pushing to a shallow clone via http
  smart-http: support shallow fetch/clone
  remote-curl: pass ref SHA-1 to fetch-pack as well
  send-pack: support pushing to a shallow clone
  receive-pack: allow pushes that update .git/shallow
  connected.c: add new variant that runs with --shallow-file
  add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
  receive/send-pack: support pushing from a shallow clone
  receive-pack: reorder some code in unpack()
  fetch: add --update-shallow to accept refs that update .git/shallow
  upload-pack: make sure deepening preserves shallow roots
  fetch: support fetching from a shallow repository
  clone: support remote shallow repository
  ...
2014-01-06  shallow: remove unused code  (Ramsay Jones; 1 file, -16/+0)

Commit 58babfff ("shallow.c: the 8 steps to select new commits for .git/shallow", 05-12-2013) added a function to implement step 5 of the quoted eight steps, namely 'remove_nonexistent_ours_in_pack()'. This function implements an optional optimization step in the new shallow commit selection algorithm. However, this function has no callers. (The commented out call sites would need to change, in order to provide information required by the function.)

Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Acked-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10  prune: clean .git/shallow after pruning objects  (Nguyễn Thái Ngọc Duy; 1 file, -2/+53)

This patch teaches "prune" to remove shallow roots that are no longer reachable from any refs (e.g. when the relevant refs are removed).

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10  receive-pack: allow pushes that update .git/shallow  (Nguyễn Thái Ngọc Duy; 1 file, -0/+23)

The basic 8 steps to update .git/shallow do not fully apply here because the user may choose to accept just a few refs (while fetch always accepts all refs). The steps are modified a bit:

  1-6. same as before. After calling assign_shallow_commits_to_refs at step 6, each shallow commit has a bitmap that marks all refs that require it.

  7. mark all "ours" shallow commits that are reachable from any refs. We will need to do the original step 7 on them later.

  8. go over all shallow commit bitmaps, mark refs that require new shallow commits.

  9. set up a strict temporary shallow file to plug all the holes, even if it may cut some of our history short. This file is used by all hooks. The hooks could use --shallow-file=$GIT_DIR/shallow to overcome this and reach everything in the current repo.

  10. go over the new refs one by one. For each ref, do the reachability test if it needs a shallow commit on the list from step 7. Remove it if it's reachable from our refs. Gather all required shallow commits, run check_everything_connected() with the new ref, then install them to .git/shallow.

This mode is disabled by default and can be turned on with receive.shallowupdate.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>