path: root/builtin/pack-objects.c
2018-01-30  sha1_file: convert force_object_loose to object_id  (Patryk Obara, 1 file, -1/+1)

Convert the definition and declaration of force_object_loose to struct object_id and adjust usage of this function.

Signed-off-by: Patryk Obara <patryk.obara@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
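The conversion shows up directly in the function's prototype; before and after, as a close approximation of what sha1_file.c declares around this change:

    /* before: bare SHA-1 buffer */
    int force_object_loose(const unsigned char *sha1, time_t mtime);

    /* after: type-safe object_id wrapper */
    int force_object_loose(const struct object_id *oid, time_t mtime);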
2017-12-27  Merge branch 'jh/object-filtering'  (Junio C Hamano, 1 file, -1/+63)

In preparation for implementing narrow/partial clone, the object walking machinery has been taught a way to tell it to "filter" some objects from enumeration.

* jh/object-filtering:
  rev-list: support --no-filter argument
  list-objects-filter-options: support --no-filter
  list-objects-filter-options: fix 'keword' typo in comment
  pack-objects: add list-objects filtering
  rev-list: add list-objects filtering support
  list-objects: filter objects in traverse_commit_list
  oidset: add iterator methods to oidset
  oidmap: add oidmap iterator methods
  dir: allow exclusions from blob in addition to file
2017-11-22  pack-objects: add list-objects filtering  (Jeff Hostetler, 1 file, -1/+63)

Teach pack-objects to use the filtering provided by the traverse_commit_list_filtered() interface to omit unwanted objects from the resulting packfile. Filtering requires the use of the "--stdout" option.

Add t5317 test.

In the future, we will introduce a "partial clone" mechanism wherein an object in a repo, obtained from a remote, may reference a missing object that can be dynamically fetched from that remote once needed. This "partial clone" mechanism will have a way, sometimes slow, of determining if a missing link is one of the links expected to be produced by this mechanism.

This patch introduces handling of missing objects to help debugging and development of the "partial clone" mechanism, and once the mechanism is implemented, for a power user to perform operations that are missing-object aware without incurring the cost of checking if a missing link is expected.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16  refs: convert peel_ref to struct object_id  (brian m. carlson, 1 file, -2/+2)

Convert peel_ref (and its corresponding backend) to struct object_id. This transformation was done with an update to the declaration, definition, comments, and test helper and the following semantic patch:

    @@
    expression E1, E2;
    @@
    - peel_ref(E1, E2.hash)
    + peel_ref(E1, &E2)

    @@
    expression E1, E2;
    @@
    - peel_ref(E1, E2->hash)
    + peel_ref(E1, E2)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16  builtin/pack-objects: convert to struct object_id  (brian m. carlson, 1 file, -67/+68)

This is one of the last unconverted callers to peel_ref. While we're fixing that, convert the rest of the file, since it will need to be converted at some point anyway.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16  pack-bitmap: convert traverse_bitmap_commit_list to object_id  (brian m. carlson, 1 file, -4/+4)

Convert traverse_bitmap_commit_list and the callbacks it takes to use a pointer to struct object_id.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-10  cleanup: fix possible overflow errors in binary search  (Derrick Stolee, 1 file, -1/+1)

A common mistake when writing binary search is to allow possible integer overflow by using the simple average:

    mid = (min + max) / 2;

Instead, use the overflow-safe version:

    mid = min + (max - min) / 2;

This translation is safe since the operation occurs inside a loop conditioned on "min < max". The included changes were found using the following git grep:

    git grep '/ *2;' '*.c'

Making this cleanup will prevent future review friction when a new binary search is constructed based on existing code.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
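To see the safe midpoint in context, here is a minimal, self-contained sketch of a binary search using it; the function and array are hypothetical, but the midpoint computation is the one the commit mandates:

    #include <stddef.h>

    /* Hypothetical index lookup using the overflow-safe midpoint. */
    static int find_index(const int *arr, size_t nr, int key)
    {
            size_t lo = 0, hi = nr;

            while (lo < hi) {
                    /* "(lo + hi) / 2" can overflow; this form cannot */
                    size_t mid = lo + (hi - lo) / 2;

                    if (arr[mid] == key)
                            return (int)mid;
                    if (arr[mid] < key)
                            lo = mid + 1;
                    else
                            hi = mid;
            }
            return -1; /* not found */
    }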
2017-09-29  Merge branch 'rj/no-sign-compare'  (Junio C Hamano, 1 file, -2/+2)

Many codepaths have been updated to squelch -Wsign-compare warnings.

* rj/no-sign-compare:
  ALLOC_GROW: avoid -Wsign-compare warnings
  cache.h: hex2chr() - avoid -Wsign-compare warnings
  commit-slab.h: avoid -Wsign-compare warnings
  git-compat-util.h: xsize_t() - avoid -Wsign-compare warnings
2017-09-22  ALLOC_GROW: avoid -Wsign-compare warnings  (Ramsay Jones, 1 file, -2/+2)

Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14  pack: make packed_git_mru global a value instead of a pointer  (Jonathan Nieder, 1 file, -2/+2)

The MRU cache that keeps track of recently used packs is represented using two global variables:

    struct mru packed_git_mru_storage;
    struct mru *packed_git_mru = &packed_git_mru_storage;

Callers never assign to the packed_git_mru pointer, though, so we can simplify by eliminating it and using &packed_git_mru_storage (renamed to &packed_git_mru) directly. This variable is only used by the packfile subsystem, making this a relatively uninvasive change (and any new unadapted callers would trigger a compile error).

Noticed while moving these globals to the object_store struct.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
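In other words, the change collapses the pair above into a single value; a sketch of the post-image (mru_mark is recalled from git's mru API of that era, so treat the call as an assumption):

    /* after: one global value; the storage/pointer pair is gone */
    struct mru packed_git_mru;

    /* call sites pass &packed_git_mru where they used to pass the
     * pointer variable, e.g. when promoting a pack to the front: */
    mru_mark(&packed_git_mru, p);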
2017-09-10  Merge branch 'ma/ts-cleanups'  (Junio C Hamano, 1 file, -0/+6)

Assorted bugfixes and clean-ups.

* ma/ts-cleanups:
  ThreadSanitizer: add suppressions
  strbuf_setlen: don't write to strbuf_slopbuf
  pack-objects: take lock before accessing `remaining`
  convert: always initialize attr_action in convert_attrs
2017-08-23  pack: move open_pack_index(), parse_pack_index()  (Jonathan Tan, 1 file, -0/+1)

alloc_packed_git() in packfile.c is duplicated from sha1_file.c. In a subsequent commit, alloc_packed_git() will be removed from sha1_file.c.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-08-23  Merge branch 'rs/pack-objects-pbase-cleanup' into maint  (Junio C Hamano, 1 file, -1/+1)

Code clean-up.

* rs/pack-objects-pbase-cleanup:
  pack-objects: remove unnecessary NULL check
2017-08-23  pack-objects: take lock before accessing `remaining`  (Martin Ågren, 1 file, -0/+6)

When checking the conditional of "while (me->remaining)", we did not hold the lock. Calling find_deltas would still be safe, since it checks "remaining" (after taking the lock) and is able to handle all values. In fact, this could (currently) not trigger any bug: a bug could happen if `remaining` transitioning from zero to non-zero races with the evaluation of the while-condition, but these are always separated by the data_ready-mechanism.

Make sure we have the lock when we read `remaining`. This does mean we release it just so that find_deltas can take it immediately again. We could tweak the contract so that the lock should be taken before calling find_deltas, but let's defer that until someone can actually show that "unlock+lock" has a measurable negative impact.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
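The resulting shape of the worker loop is roughly the sketch below; the mutex and field names are illustrative stand-ins (the real code uses pack-objects' progress-lock helpers and a richer thread_params), not the exact patch:

    #include <pthread.h>

    struct thread_params {
            unsigned remaining;  /* work items left; guarded by 'lock' */
            /* ... */
    };

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void find_deltas(struct thread_params *me); /* locks internally */

    static void *threaded_find_deltas(void *arg)
    {
            struct thread_params *me = arg;

            pthread_mutex_lock(&lock);
            while (me->remaining) {           /* now read under the lock */
                    pthread_mutex_unlock(&lock);
                    find_deltas(me);          /* re-takes the lock itself */
                    pthread_mutex_lock(&lock);
            }
            pthread_mutex_unlock(&lock);
            return NULL;
    }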
2017-08-11  Merge branch 'rs/pack-objects-pbase-cleanup'  (Junio C Hamano, 1 file, -1/+1)

Code clean-up.

* rs/pack-objects-pbase-cleanup:
  pack-objects: remove unnecessary NULL check
2017-07-20  pack-objects: remove unnecessary NULL check  (René Scharfe, 1 file, -1/+1)

If done_pbase_paths is NULL then done_pbase_paths_num must be zero and done_pbase_path_pos() returns -1 without accessing the array, so the check is not necessary.

If the invariant was violated then the check would make sure we keep on going and allocate the necessary amount of memory in the next ALLOC_GROW call. That sounds nice, but all array entries except for one would contain garbage data. If the invariant was violated without the check we'd get a segfault in done_pbase_path_pos(), i.e. an observable crash, alerting us of the presence of a bug.

Currently there is no such bug: Only the functions check_pbase_path() and cleanup_preferred_base() change pointer and counter, and both make sure to keep them in sync. Get rid of the check anyway to allow us to see if later changes introduce such a defect, and to simplify the code.

Detected by Coverity Scan.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-17  use MOVE_ARRAY  (René Scharfe, 1 file, -3/+2)

Simplify the code for moving members inside of an array and make it more robust by using the helper macro MOVE_ARRAY. It calculates the size based on the specified number of elements for us and supports NULL pointers when that number is zero. Raw memmove(3) calls with NULL can cause the compiler to (over-eagerly) optimize out later NULL checks.

This patch was generated with contrib/coccinelle/array.cocci and spatch (Coccinelle).

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
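A before/after sketch of the pattern, using a hypothetical entries/pos/nr trio (the macro itself lives in git-compat-util.h):

    /* before: element size computed by hand at every call site */
    memmove(entries + pos, entries + pos + 1,
            (nr - pos - 1) * sizeof(*entries));

    /* after: size derived from the destination's element type, and a
     * zero count with NULL pointers is handled gracefully */
    MOVE_ARRAY(entries + pos, entries + pos + 1, nr - pos - 1);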
2017-06-24  Merge branch 'ab/free-and-null'  (Junio C Hamano, 1 file, -8/+4)

A common pattern to free a piece of memory and assign NULL to the pointer that used to point at it has been replaced with a new FREE_AND_NULL() macro.

* ab/free-and-null:
  *.[ch] refactoring: make use of the FREE_AND_NULL() macro
  coccinelle: make use of the "expression" FREE_AND_NULL() rule
  coccinelle: add a rule to make "expression" code use FREE_AND_NULL()
  coccinelle: make use of the "type" FREE_AND_NULL() rule
  coccinelle: add a rule to make "type" code use FREE_AND_NULL()
  git-compat-util: add a FREE_AND_NULL() wrapper around free(ptr); ptr = NULL
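The macro in question is a one-liner; to the best of my reading of git-compat-util.h it is defined as follows, turning the two-statement idiom into a single call:

    /* free and then clear the pointer, usable as one statement */
    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

    /* before */
    free(entry->delta);
    entry->delta = NULL;

    /* after */
    FREE_AND_NULL(entry->delta);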
2017-06-24  Merge branch 'bw/config-h'  (Junio C Hamano, 1 file, -0/+1)

Fix configuration codepath to pay proper attention to commondir that is used in multi-worktree situation, and isolate config API into its own header file.

* bw/config-h:
  config: don't implicitly use gitdir or commondir
  config: respect commondir
  setup: teach discover_git_directory to respect the commondir
  config: don't include config.h by default
  config: remove git_config_iter
  config: create config.h
2017-06-16  coccinelle: make use of the "type" FREE_AND_NULL() rule  (Ævar Arnfjörð Bjarmason, 1 file, -8/+4)

Apply the result of the just-added coccinelle rule. This manually excludes a few occurrences, mostly things that resulted in many FREE_AND_NULL() on one line, that'll be manually fixed in a subsequent change.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-15  config: don't include config.h by default  (Brandon Williams, 1 file, -0/+1)

Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-04  Merge branch 'jk/disable-pack-reuse-when-broken' into maint  (Junio C Hamano, 1 file, -1/+5)

"pack-objects" can stream a slice of an existing packfile out when the pack bitmap can tell that the reachable objects are all needed in the output, without inspecting individual objects. This strategy however would not work well when "--local" and other options are in use, and needs to be disabled.

* jk/disable-pack-reuse-when-broken:
  t5310: fix "; do" style
  pack-objects: disable pack reuse for object-selection options
2017-06-02  Merge branch 'ab/grep-preparatory-cleanup'  (Junio C Hamano, 1 file, -1/+3)

The internal implementation of "git grep" has seen some clean-up.

* ab/grep-preparatory-cleanup: (31 commits)
  grep: assert that threading is enabled when calling grep_{lock,unlock}
  grep: given --threads with NO_PTHREADS=YesPlease, warn
  pack-objects: fix buggy warning about threads
  pack-objects & index-pack: add test for --threads warning
  test-lib: add a PTHREADS prerequisite
  grep: move is_fixed() earlier to avoid forward declaration
  grep: change internal *pcre* variable & function names to be *pcre1*
  grep: change the internal PCRE macro names to be PCRE1
  grep: factor test for \0 in grep patterns into a function
  grep: remove redundant regflags assignments
  grep: catch a missing enum in switch statement
  perf: add a comparison test of log --grep regex engines with -F
  perf: add a comparison test of log --grep regex engines
  perf: add a comparison test of grep regex engines with -F
  perf: add a comparison test of grep regex engines
  perf: emit progress output when unpacking & building
  perf: add a GIT_PERF_MAKE_COMMAND for when *_MAKE_OPTS won't do
  grep: add tests to fix blind spots with \0 patterns
  grep: prepare for testing binary regexes containing rx metacharacters
  grep: add a test helper function for less verbose -f \0 tests
  ...
2017-05-29  Merge branch 'jk/disable-pack-reuse-when-broken'  (Junio C Hamano, 1 file, -1/+5)

"pack-objects" can stream a slice of an existing packfile out when the pack bitmap can tell that the reachable objects are all needed in the output, without inspecting individual objects. This strategy however would not work well when "--local" and other options are in use, and needs to be disabled.

* jk/disable-pack-reuse-when-broken:
  t5310: fix "; do" style
  pack-objects: disable pack reuse for object-selection options
2017-05-29  Merge branch 'bc/object-id'  (Junio C Hamano, 1 file, -30/+41)

Conversion from uchar[20] to struct object_id continues.

* bc/object-id: (53 commits)
  object: convert parse_object* to take struct object_id
  tree: convert parse_tree_indirect to struct object_id
  sequencer: convert do_recursive_merge to struct object_id
  diff-lib: convert do_diff_cache to struct object_id
  builtin/ls-tree: convert to struct object_id
  merge: convert checkout_fast_forward to struct object_id
  sequencer: convert fast_forward_to to struct object_id
  builtin/ls-files: convert overlay_tree_on_cache to object_id
  builtin/read-tree: convert to struct object_id
  sha1_name: convert internals of peel_onion to object_id
  upload-pack: convert remaining parse_object callers to object_id
  revision: convert remaining parse_object callers to object_id
  revision: rename add_pending_sha1 to add_pending_oid
  http-push: convert process_ls_object and descendants to object_id
  refs/files-backend: convert many internals to struct object_id
  refs: convert struct ref_update to use struct object_id
  ref-filter: convert some static functions to struct object_id
  Convert struct ref_array_item to struct object_id
  Convert the verify_pack callback to struct object_id
  Convert lookup_tag to struct object_id
  ...
2017-05-26  pack-objects: fix buggy warning about threads  (Ævar Arnfjörð Bjarmason, 1 file, -1/+3)

Fix a buggy warning about threads under NO_PTHREADS=YesPlease. Due to re-using the delta_search_threads variable for both the state of the "pack.threads" config & the --threads option, setting "pack.threads" but not supplying --threads would trigger the warning for both "pack.threads" & --threads.

Solve this bug by resetting the delta_search_threads variable in git_pack_config(); it might then be set by --threads again and be subsequently warned about, as the test I'm changing here asserts.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-09  pack-objects: disable pack reuse for object-selection options  (Jeff King, 1 file, -1/+5)

If certain options like --honor-pack-keep, --local, or --incremental are used with pack-objects, then we need to feed each potential object to want_object_in_pack() to see if it should be filtered out. But when the bitmap reuse_packfile optimization is in effect, we do not call that function at all, and in fact skip adding the objects to the to_pack list entirely. This means we have a bug: for certain requests we will silently ignore those options and include objects in that pack that should not be there.

The problem has been present since the inception of the pack-reuse code in 6b8fda2db (pack-objects: use bitmaps when packing objects, 2013-12-21), but it was unlikely to come up in practice. These options are generally used for on-disk packing, not transfer packs (which go to stdout), but we've never allowed pack reuse for non-stdout packs (until 645c432d6, we did not even use bitmaps, which the reuse optimization relies on; after that, we explicitly turned it off when not packing to stdout).

We can fix this by just disabling the reuse_packfile optimization when the options are in use. In theory we could teach the pack-reuse code to satisfy these checks, but it's not worth the complexity. The purpose of the optimization is to keep the amount of per-object work we do to a minimum. But these options inherently require us to search for other copies of each object, drowning out any benefit of the pack-reuse optimization. But note that the optimizations from 56dfeb626 (pack-objects: compute local/ignore_pack_keep early, 2016-07-29) happen before pack-reuse, meaning that specifying "--honor-pack-keep" in a repository with no .keep files can still follow the fast path.

There are tests in t5310 that check these options with bitmaps and --stdout, but they didn't catch the bug, and it's hard to adapt them to do so. One problem is that they don't use --delta-base-offset; without that option, we always disable the reuse optimization entirely. It would be fine to add it in (it actually makes the test more realistic), but that still isn't quite enough.

The other problem is that the reuse code is very picky; it only kicks in when it can reuse most of a pack, starting from the first byte. So we'd have to start from a fully repacked and bitmapped state to trigger it. But the tests for these options use a much more subtle state; they want to be sure that the want_object_in_pack() code is allowing some objects but not others. Doing a full repack runs counter to that.

So this patch adds new tests at the end of the script which create the fully-packed state and make sure that each option is not fooled by a reusable pack.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08  Convert lookup_tag to struct object_id  (brian m. carlson, 1 file, -1/+1)

Convert lookup_tag to take a pointer to struct object_id.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08  pack: convert struct pack_idx_entry to struct object_id  (brian m. carlson, 1 file, -26/+37)

Convert struct pack_idx_entry to use struct object_id by changing the definition and applying the following semantic patch, plus the standard object_id transforms:

    @@
    struct pack_idx_entry E1;
    @@
    - E1.sha1
    + E1.oid.hash

    @@
    struct pack_idx_entry *E1;
    @@
    - E1->sha1
    + E1->oid.hash

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08  shallow: convert shallow registration functions to object_id  (brian m. carlson, 1 file, -3/+3)

Convert register_shallow and unregister_shallow to take struct object_id. register_shallow is a caller of lookup_commit, which we will convert later. It doesn't make sense for the registration and unregistration functions to have incompatible interfaces, so convert them both.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-27  timestamp_t: a new data type for timestamps  (Johannes Schindelin, 1 file, -2/+2)

Git's source code assumes that unsigned long is at least as precise as time_t. Which is incorrect, and causes a lot of problems, in particular where unsigned long is only 32-bit (notably on Windows, even in 64-bit versions).

So let's just use a more appropriate data type instead. In preparation for this, we introduce the new `timestamp_t` data type.

By necessity, this is a very, very large patch, as it has to replace all timestamps' data type in one go. As we will use a data type that is not necessarily identical to `time_t`, we need to be very careful to use `time_t` whenever we interact with the system functions, and `timestamp_t` everywhere else.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
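The core of the new type is small; git-compat-util.h defines it along the following lines (reconstructed from memory, so treat it as a close approximation rather than a verbatim quote):

    /* wide enough for any timestamp, independent of unsigned long */
    typedef uintmax_t timestamp_t;
    #define PRItime PRIuMAX
    #define parse_timestamp strtoumax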
2017-04-19  Merge branch 'bc/object-id'  (Junio C Hamano, 1 file, -12/+12)

Conversion from unsigned char [40] to struct object_id continues.

* bc/object-id:
  Documentation: update and rename api-sha1-array.txt
  Rename sha1_array to oid_array
  Convert sha1_array_for_each_unique and for_each_abbrev to object_id
  Convert sha1_array_lookup to take struct object_id
  Convert remaining callers of sha1_array_lookup to object_id
  Make sha1_array_append take a struct object_id *
  sha1-array: convert internal storage for struct sha1_array to object_id
  builtin/pull: convert to struct object_id
  submodule: convert check_for_new_submodule_commits to object_id
  sha1_name: convert disambiguate_hint_fn to take object_id
  sha1_name: convert struct disambiguate_state to object_id
  test-sha1-array: convert most code to struct object_id
  parse-options-cb: convert sha1_array_append caller to struct object_id
  fsck: convert init_skiplist to struct object_id
  builtin/receive-pack: convert portions to struct object_id
  builtin/pull: convert portions to struct object_id
  builtin/diff: convert to struct object_id
  Convert GIT_SHA1_RAWSZ used for allocation to GIT_MAX_RAWSZ
  Convert GIT_SHA1_HEXSZ used for allocation to GIT_MAX_HEXSZ
  Define new hash-size constants for allocating memory
2017-03-31  Rename sha1_array to oid_array  (brian m. carlson, 1 file, -5/+5)

Since this structure handles an array of object IDs, rename it to struct oid_array. Also rename the accessor functions and the initialization constant.

This commit was produced mechanically by providing non-Documentation files to the following Perl one-liners:

    perl -pi -E 's/struct sha1_array/struct oid_array/g'
    perl -pi -E 's/\bsha1_array_/oid_array_/g'
    perl -pi -E 's/SHA1_ARRAY_INIT/OID_ARRAY_INIT/g'

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-31  Convert sha1_array_lookup to take struct object_id  (brian m. carlson, 1 file, -1/+1)

Convert this function by changing the declaration and definition and applying the following semantic patch to update the callers:

    @@
    expression E1, E2;
    @@
    - sha1_array_lookup(E1, E2.hash)
    + sha1_array_lookup(E1, &E2)

    @@
    expression E1, E2;
    @@
    - sha1_array_lookup(E1, E2->hash)
    + sha1_array_lookup(E1, E2)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-31  Convert remaining callers of sha1_array_lookup to object_id  (brian m. carlson, 1 file, -8/+8)

There are a very small number of callers which don't already use struct object_id. Convert them.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-31  Make sha1_array_append take a struct object_id *  (brian m. carlson, 1 file, -2/+2)

Convert the callers to pass struct object_id by changing the function declaration and definition and applying the following semantic patch:

    @@
    expression E1, E2;
    @@
    - sha1_array_append(E1, E2.hash)
    + sha1_array_append(E1, &E2)

    @@
    expression E1, E2;
    @@
    - sha1_array_append(E1, E2->hash)
    + sha1_array_append(E1, E2)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-28  Merge branch 'jk/fast-import-cleanup'  (Junio C Hamano, 1 file, -4/+8)

Code clean-up.

* jk/fast-import-cleanup:
  pack.h: define largest possible encoded object size
  encode_in_pack_object_header: respect output buffer length
  fast-import: use xsnprintf for formatting headers
  fast-import: use xsnprintf for writing sha1s
2017-03-24  pack.h: define largest possible encoded object size  (Jeff King, 1 file, -2/+4)

Several callers use fixed buffers for storing the pack object header, and they've picked 10 as a magic number. This is reasonable, since it handles objects up to 2^67. But let's give them a constant so it's clear that the number isn't pulled out of thin air.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
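If memory serves, pack.h gains something along these lines, which callers then use to size their header buffers:

    /*
     * The largest size of a pack object header: a 10-byte varint can
     * describe an object of up to 2^67 bytes.
     */
    #define MAX_PACK_OBJECT_HEADER 10

    /* typical caller */
    unsigned char header[MAX_PACK_OBJECT_HEADER];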
2017-03-24  encode_in_pack_object_header: respect output buffer length  (Jeff King, 1 file, -2/+4)

The encode_in_pack_object_header() function writes a variable-length header to an output buffer, but it doesn't actually know how long the buffer is. At first glance, this looks like it might be possible to overflow.

In practice, this is probably impossible. The smallest buffer we use is 10 bytes, which would hold the header for an object up to 2^67 bytes. Obviously we're not likely to see such an object, but we might worry that an object could lie about its size (causing us to overflow before we realize it does not actually have that many bytes). But the argument is passed as a uintmax_t. Even on systems that have __int128 available, uintmax_t is typically restricted to 64-bit by the ABI. So it's unlikely that a system exists where this could be exploited.

Still, it's easy enough to use a normal out/len pair and make sure we don't write too far. That protects the hypothetical 128-bit system, makes it harder for callers to accidentally specify a too-small buffer, and makes the resulting code easier to audit.

Note that the one caller in fast-import tried to catch such a case, but did so _after_ the call (at which point we'd have already overflowed!). This check can now go away.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
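For reference, the encoding being guarded here packs the object type and size into a varint: the first byte carries the type in bits 4-6 plus the low four size bits, and each continuation byte carries seven more size bits. A bounded sketch with illustrative names (the real function dies on overflow rather than returning an error):

    #include <stddef.h>
    #include <stdint.h>

    /* Returns bytes written, or -1 if 'len' is too small. */
    static int encode_pack_header(unsigned char *out, size_t len,
                                  unsigned type, uintmax_t size)
    {
            unsigned char c;
            size_t n = 0;

            /* first byte: type in bits 4-6, low 4 bits of the size */
            c = (type << 4) | (size & 0x0f);
            size >>= 4;
            while (size) {
                    if (n + 1 >= len)
                            return -1;      /* would overrun the buffer */
                    out[n++] = c | 0x80;    /* set the continuation bit */
                    c = size & 0x7f;
                    size >>= 7;
            }
            if (n >= len)
                    return -1;
            out[n++] = c;
            return (int)n;
    }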
2017-03-17  Merge branch 'bc/object-id'  (Junio C Hamano, 1 file, -3/+3)

"uchar [40]" to "struct object_id" conversion continues.

* bc/object-id:
  wt-status: convert to struct object_id
  builtin/merge-base: convert to struct object_id
  Convert object iteration callbacks to struct object_id
  sha1_file: introduce an nth_packed_object_oid function
  refs: simplify parsing of reflog entries
  refs: convert each_reflog_ent_fn to struct object_id
  reflog-walk: convert struct reflog_info to struct object_id
  builtin/replace: convert to struct object_id
  Convert remaining callers of resolve_refdup to object_id
  builtin/merge: convert to struct object_id
  builtin/clone: convert to struct object_id
  builtin/branch: convert to struct object_id
  builtin/grep: convert to struct object_id
  builtin/fmt-merge-message: convert to struct object_id
  builtin/fast-export: convert to struct object_id
  builtin/describe: convert to struct object_id
  builtin/diff-tree: convert to struct object_id
  builtin/commit: convert to struct object_id
  hex: introduce parse_oid_hex
2017-02-27  Merge branch 'bw/attr'  (Junio C Hamano, 1 file, -14/+5)

The gitattributes machinery is being taught to work better in a multi-threaded environment.

* bw/attr: (27 commits)
  attr: reformat git_attr_set_direction() function
  attr: push the bare repo check into read_attr()
  attr: store attribute stack in attr_check structure
  attr: tighten const correctness with git_attr and match_attr
  attr: remove maybe-real, maybe-macro from git_attr
  attr: eliminate global check_all_attr array
  attr: use hashmap for attribute dictionary
  attr: change validity check for attribute names to use positive logic
  attr: pass struct attr_check to collect_some_attrs
  attr: retire git_check_attrs() API
  attr: convert git_check_attrs() callers to use the new API
  attr: convert git_all_attrs() to use "struct attr_check"
  attr: (re)introduce git_check_attr() and struct attr_check
  attr: rename function and struct related to checking attributes
  attr.c: outline the future plans by heavily commenting
  Documentation: fix a typo
  attr.c: add push_stack() helper
  attr: support quoting pathname patterns in C style
  attr.c: plug small leak in parse_attr_line()
  attr.c: tighten constness around "git_attr" structure
  ...
2017-02-27  Merge branch 'jk/delta-chain-limit'  (Junio C Hamano, 1 file, -23/+110)

"git repack --depth=<n>" for a long time busted the specified depth when reusing deltas from existing packs. This has been corrected.

* jk/delta-chain-limit:
  pack-objects: convert recursion to iteration in break_delta_chain()
  pack-objects: enforce --depth limit in reused deltas
2017-02-22  Convert object iteration callbacks to struct object_id  (brian m. carlson, 1 file, -3/+3)

Convert each_loose_object_fn and each_packed_object_fn to take a pointer to struct object_id. Update the various callbacks. Convert several 40-based constants to use GIT_SHA1_HEXSZ.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
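After the conversion, the two callback types look approximately like this (reconstructed from the description above, so the exact parameter lists may differ slightly):

    typedef int each_loose_object_fn(const struct object_id *oid,
                                     const char *path,
                                     void *data);

    typedef int each_packed_object_fn(const struct object_id *oid,
                                      struct packed_git *pack,
                                      uint32_t pos,
                                      void *data);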
2017-02-01  attr: convert git_check_attrs() callers to use the new API  (Junio C Hamano, 1 file, -14/+5)

The remaining callers are all simple "I have N attributes I am interested in. I'll ask about them with various paths one by one."

After this step, no caller to git_check_attrs() remains. After removing it, we can extend the "struct attr_check" structure with data that can be used in optimizing the query for the specific N attributes it contains.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-01  attr: rename function and struct related to checking attributes  (Junio C Hamano, 1 file, -3/+3)

The traditional API to check attributes is to prepare an N-element array of "struct git_attr_check" and pass N and the array to the function "git_check_attr()" as arguments.

In preparation to revamp the API to pass a single structure, in which these N elements are held, rename the type used for these individual array elements to "struct attr_check_item" and rename the function to "git_check_attrs()".

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-01-27  pack-objects: convert recursion to iteration in break_delta_chain()  (Jeff King, 1 file, -30/+99)

The break_delta_chain() function is recursive over the depth of a given delta chain, which can lead to possibly running out of stack space. Normally delta depth is quite small, but if there _is_ a pathological case, this is where we would find and fix it, so we should be more careful.

We can do it without recursion at all, but there's a little bit of cleverness needed to do so. It's easiest to explain by covering the less-clever strategies first.

The obvious thing to try is just keeping our own stack on the heap. Whenever we would recurse, push the new entry onto the stack and loop instead. But this gets tricky; when we see an ACTIVE entry, we need to care if we just pushed it (in which case it's a cycle) or if we just popped it (in which case we dealt with its bases, and now we need to clear the ACTIVE flag and compute its depth). You can hack around that in various ways, like keeping a "just pushed" flag, but the logic gets muddled.

However, we can observe that we do all of our pushes first, and then all of our pops afterwards. In other words, we can do this in two passes. First dig down to the base, stopping when we see a cycle, and pushing each item onto our stack. Then pop the stack elements, clearing the ACTIVE flag and computing the depth for each.

This works, and is reasonably elegant. However, why do we need the stack for the second pass? We can just walk the delta pointers again. There's one complication. Popping the stack went over our list in reverse, so we could compute the depth of each entry by incrementing the depth of its base, which we will have just computed. To go forward in the second pass, we have to compute the total depth on the way down, and then assign it as we go.

This patch implements this final strategy, because it not only keeps the memory off the stack, but it eliminates it entirely. Credit for the cleverness in that approach goes to Michael Haggerty; bugs are mine.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
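To make the two-pass idea concrete, here is a small, self-contained illustration of the final strategy on a simplified chain; the struct and fields are hypothetical stand-ins for object_entry, and the cycle handling is omitted:

    struct node {
            struct node *base;  /* delta base, or NULL for a full object */
            unsigned depth;     /* valid once 'done' is set */
            int done;           /* depth already computed? */
    };

    static void compute_chain_depths(struct node *start)
    {
            struct node *cur;
            unsigned k = 0, depth;

            if (!start || start->done)
                    return;

            /* Pass 1: dig down to the base (or to an already-computed
             * node), counting how many nodes we cross. */
            for (cur = start; cur && !cur->done; cur = cur->base)
                    k++;

            /* Depth of 'start': nodes below it, plus whatever an
             * already-computed subchain contributes. */
            depth = (cur ? cur->depth + 1 : 0) + k - 1;

            /* Pass 2: walk the same pointers again, assigning depth on
             * the way down instead of popping a stack. */
            for (cur = start; cur && !cur->done; cur = cur->base) {
                    cur->depth = depth--;
                    cur->done = 1;
            }
    }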
2017-01-27  pack-objects: enforce --depth limit in reused deltas  (Jeff King, 1 file, -0/+18)

Since 898b14c (pack-objects: rework check_delta_limit usage, 2007-04-16), we check the delta depth limit only when figuring out whether we should make a new delta. We don't consider it at all when reusing deltas, which means that packing once with --depth=250, and then again with --depth=50, the second pack may still contain chains larger than 50.

This is generally considered a feature, as the results of earlier high-depth repacks are carried forward, used for serving fetches, etc. However, since we started using cross-pack deltas in c9af708b1 (pack-objects: use mru list when iterating over packs, 2016-08-11), we are no longer bounded by the length of an existing delta chain in a single pack.

Here's one particular pathological case: a sequence of N packs, each with 2 objects, the base of which is stored as a delta in a previous pack. If we chain all the deltas together, we have a cycle of length N. We break the cycle, but the tip delta is still at depth N-1.

This is less unlikely than it might sound. See the included test for a reconstruction based on real-world actions. I ran into such a case in the wild, where a client was rapidly sending packs, and we had accumulated 10,000 before doing a server-side repack. The pack that "git repack" tried to generate had a very deep chain, which caused pack-objects to run out of stack space in the recursive write_one().

This patch bounds the length of delta chains in the output pack based on --depth, regardless of whether they are caused by cross-pack deltas or existed in the input packs. This fixes the problem, but does have two possible downsides:

  1. High-depth aggressive repacks followed by "normal" repacks will throw away the high-depth chains. In the long run this is probably OK; investigation showed that high-depth repacks aren't actually beneficial, and we dropped the aggressive depth default to match the normal case in 07e7dbf0d (gc: default aggressive depth to 50, 2016-08-11).

  2. If you really do want to store high-depth deltas on disk, they may be discarded and new delta computed when serving a fetch, unless you set pack.depth to match your high-depth size.

The implementation uses the existing search for delta cycles. That lets us compute the depth of any node based on the depth of its base, because we know the base is DFS_DONE by the time we look at it (modulo any cycles in the graph, but we know there cannot be any because we break them as we see them).

There is some subtlety worth mentioning, though. We record the depth of each object as we compute it. It might seem like we could save the per-object storage space by just keeping track of the depth of our traversal (i.e., have break_delta_chains() report how deep it went). But we may visit an object through multiple delta paths, and on subsequent paths we want to know its depth immediately, without having to walk back down to its final base (doing so would make our graph walk quadratic rather than linear).

Likewise, one could try to record the depth not from the base, but from our starting point (i.e., start recursion_depth at 0, and pass "recursion_depth + 1" to each invocation of break_delta_chains()). And then when recursion_depth gets too big, we know that we must cut the delta chain. But that technique is wrong if we do not visit the nodes in topological order. In a chain A->B->C, if we visit "C", then "B", then "A", we will never recurse deeper than 1 link (because we see at each node that we have already visited it).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-11-15  compression: unify pack.compression configuration parsing  (Junio C Hamano, 1 file, -14/+0)

There are three codepaths that use a variable whose name is pack_compression_level to affect how objects and deltas sent to a packfile are compressed. Unlike zlib_compression_level that controls the loose object compression, however, this variable was static to each of these codepaths. Two of them read the pack.compression configuration variable, using core.compression as the default, and one of them also allowed overriding it from the command line. The other codepath in bulk-checkin did not pay any attention to the configuration.

Unify the configuration parsing to git_default_config(), where we implement the parsing of core.loosecompression and core.compression and make the former override the latter, by moving code to parse pack.compression and also allow core.compression to give default to this variable.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
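A rough sketch of the precedence this establishes (pack.compression wins, core.compression supplies the default), following git's config-callback conventions of that era; apart from git_config_int(), die(), and the zlib constants, the names here are assumptions, and the real code resolves the default more carefully after all files are parsed:

    #include <string.h>
    #include <zlib.h>

    /* die() and git_config_int() come from git's own headers */

    static int pack_compression_level = Z_DEFAULT_COMPRESSION;
    static int pack_compression_seen;

    static int parse_compression(const char *var, const char *value)
    {
            int level = git_config_int(var, value);

            if (level == -1)
                    return Z_DEFAULT_COMPRESSION;
            if (level < 0 || level > Z_BEST_COMPRESSION)
                    die("bad %s compression level %d", var, level);
            return level;
    }

    int compression_config(const char *var, const char *value)
    {
            if (!strcmp(var, "pack.compression")) {
                    pack_compression_level = parse_compression(var, value);
                    pack_compression_seen = 1;
            } else if (!strcmp(var, "core.compression")) {
                    /* a default only; explicit pack.compression wins */
                    if (!pack_compression_seen)
                            pack_compression_level =
                                    parse_compression(var, value);
            }
            return 0;
    }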
2016-10-25  sha1_file: rename git_open_noatime() to git_open()  (Lars Schneider, 1 file, -1/+1)

This function is meant to be used when reading from files in the object store, and the original objective was to avoid smudging atime of loose object files too often, hence its name. Because we'll be extending its role in the next commit to also arrange for the file descriptors it returns to be auto-closed in child processes, rename it to lose the "noatime" part that is too specific.

Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-10  Merge branch 'jk/pack-objects-optim-mru'  (Junio C Hamano, 1 file, -2/+90)

"git pack-objects" in a repository with many packfiles used to spend a lot of time looking for/at objects in them; the accesses to the packfiles are now optimized by checking the most-recently-used packfile first.

* jk/pack-objects-optim-mru:
  pack-objects: use mru list when iterating over packs
  pack-objects: break delta cycles before delta-search phase
  sha1_file: make packed_object_info public
  provide an initializer for "struct object_info"