path: root/builtin/pack-objects.c
2021-09-20  Merge branch 'tb/pack-finalize-ordering'  (Junio C Hamano; 1 file, -4/+11)
The order in which various files that make up a single (conceptual) packfile has been reevaluated and straightened up. This matters in correctness, as an incomplete set of files must not be shown to a running Git.

* tb/pack-finalize-ordering:
  pack-objects: rename .idx files into place after .bitmap files
  pack-write: split up finish_tmp_packfile() function
  builtin/index-pack.c: move `.idx` files into place last
  index-pack: refactor renaming in final()
  builtin/repack.c: move `.idx` files into place last
  pack-write.c: rename `.idx` files after `*.rev`
  pack-write: refactor renaming in finish_tmp_packfile()
  bulk-checkin.c: store checksum directly
  pack.h: line-wrap the definition of finish_tmp_packfile()
2021-09-20  Merge branch 'tb/multi-pack-bitmaps'  (Junio C Hamano; 1 file, -1/+7)
The reachability bitmap file used to be generated only for a single pack, but now we've learned to generate bitmaps for history that span across multiple packfiles.

* tb/multi-pack-bitmaps: (29 commits)
  pack-bitmap: drop bitmap_index argument from try_partial_reuse()
  pack-bitmap: drop repository argument from prepare_midx_bitmap_git()
  p5326: perf tests for MIDX bitmaps
  p5310: extract full and partial bitmap tests
  midx: respect 'GIT_TEST_MULTI_PACK_INDEX_WRITE_BITMAP'
  t7700: update to work with MIDX bitmap test knob
  t5319: don't write MIDX bitmaps in t5319
  t5310: disable GIT_TEST_MULTI_PACK_INDEX_WRITE_BITMAP
  t0410: disable GIT_TEST_MULTI_PACK_INDEX_WRITE_BITMAP
  t5326: test multi-pack bitmap behavior
  t/helper/test-read-midx.c: add --checksum mode
  t5310: move some tests to lib-bitmap.sh
  pack-bitmap: write multi-pack bitmaps
  pack-bitmap: read multi-pack bitmaps
  pack-bitmap.c: avoid redundant calls to try_partial_reuse
  pack-bitmap.c: introduce 'bitmap_is_preferred_refname()'
  pack-bitmap.c: introduce 'nth_bitmap_object_oid()'
  pack-bitmap.c: introduce 'bitmap_num_objects()'
  midx: avoid opening multiple MIDXs when writing
  midx: close linked MIDXs, avoid leaking memory
  ...
2021-09-09  pack-objects: rename .idx files into place after .bitmap files  (Ævar Arnfjörð Bjarmason; 1 file, -1/+2)
In preceding commits the race of renaming .idx files in place before .rev files and other auxiliary files was fixed in pack-write.c's finish_tmp_packfile(), builtin/repack.c's "struct exts", and builtin/index-pack.c's final(). As noted in the change to pack-write.c we left in place the issue of writing *.bitmap files after the *.idx; let's fix that issue.

See 7cc8f971085 (pack-objects: implement bitmap writing, 2013-12-21) for commentary at the time when *.bitmap was implemented about how those files are written out; nothing in that commit contradicts what's being done here.

Note that this commit and preceding ones only close any race condition with *.idx files being written before their auxiliary files if we're optimistic about our lack of fsync()-ing in this area not tripping us over. See the thread at [1] for a rabbit hole of various discussions about filesystem races in the face of doing and not doing fsync() (and if doing fsync(), not doing it properly). We may want to fsync the containing directory once after renaming the *.idx file into place, but that is outside the scope of this series.

1. https://lore.kernel.org/git/8735qgkvv1.fsf@evledraar.gmail.com/

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-09  pack-write: split up finish_tmp_packfile() function  (Ævar Arnfjörð Bjarmason; 1 file, -2/+5)
Split up the finish_tmp_packfile() function and use the split-up version in pack-objects.c in preparation for moving the step of renaming the *.idx file later as part of a function change. Since the only other caller of finish_tmp_packfile() was in bulk-checkin.c, and it won't be needing a change to its *.idx renaming, provide a thin wrapper for the old function as a static function in that file. If other callers end up needing the simpler version it could be moved back to "pack-write.c" and "pack.h". Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-09-09  pack-write: refactor renaming in finish_tmp_packfile()  (Ævar Arnfjörð Bjarmason; 1 file, -2/+5)
Refactor the renaming in finish_tmp_packfile() into a helper function. The callers are now expected to pass a "name_buffer" ending in "pack-OID." instead of the previous "pack-"; we then append "pack", "idx" or "rev" to it.

By doing the strbuf_setlen() in rename_tmp_packfile() we reuse the buffer and avoid the repeated allocations we'd get if that function had its own temporary "struct strbuf".

This approach of reusing the buffer does make the last user in pack-objects.c's write_pack_file() slightly awkward, since we needlessly do a strbuf_setlen() before calling strbuf_release() for consistency. In subsequent changes we'll move that bitmap writing code around, so let's not skip the strbuf_setlen() now.

The previous strbuf_reset() idiom originated with 5889271114a (finish_tmp_packfile(): use strbuf for pathname construction, 2014-03-03), which in turn was a minimal adjustment of pre-strbuf code added in 0e990530ae (finish_tmp_packfile(): a helper function, 2011-10-28).

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
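A minimal sketch of the helper shape this entry describes, with illustrative names rather than the exact pack-write.c code:

    static void rename_tmp_packfile_sketch(struct strbuf *name_buffer,
                                           const char *source,
                                           const char *ext)
    {
            size_t prefix_len = name_buffer->len;   /* buffer ends in "pack-OID." */

            strbuf_addstr(name_buffer, ext);        /* "pack", "idx" or "rev" */
            if (rename(source, name_buffer->buf))
                    die_errno("unable to rename temporary file to '%s'",
                              name_buffer->buf);
            strbuf_setlen(name_buffer, prefix_len); /* reuse the buffer for the next suffix */
    }

Each suffix reuses the same strbuf, which is the allocation-saving behavior described above.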
2021-09-01  pack-bitmap: read multi-pack bitmaps  (Taylor Blau; 1 file, -0/+5)
This prepares the code in pack-bitmap to interpret the new multi-pack bitmaps described in Documentation/technical/bitmap-format.txt, which mostly involves converting bit positions to accommodate looking them up in a MIDX. Note that there are currently no writers who write multi-pack bitmaps, and that this will be implemented in the subsequent commit. Note also that get_midx_checksum() and get_midx_filename() are made non-static so they can be called from pack-bitmap.c. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-08-29  builtin/pack-objects.c: remove duplicate hash lookup  (Taylor Blau; 1 file, -10/+1)
In the original code from 08cdfb1337 (pack-objects --keep-unreachable, 2007-09-16), we add each object to the packing list with type `obj->type`, where `obj` comes from `lookup_unknown_object()`. Unless we had already looked up and parsed the object, this will be `OBJ_NONE`. That's fine, since oe_set_type() sets the type_valid bit to '0', and we determine the real type later on. So the only thing we need from the object lookup is access to the `flags` field so that we can mark that we've added the object with `OBJECT_ADDED` to avoid adding it again (we can just pass `OBJ_NONE` directly instead of grabbing it from the object). But add_object_entry() already rejects duplicates! This has been the behavior since 7a979d99ba (Thin pack - create packfile with missing delta base., 2006-02-19), but 08cdfb1337 didn't take advantage of it. Moreover, to do the OBJECT_ADDED check, we have to do a hash lookup in `obj_hash`. So we can drop the lookup_unknown_object() call completely, *and* the OBJECT_ADDED flag, too, since the spot we're touching here is the only location that checks it. In the end, we perform the same number of hash lookups, but with the added bonus that we don't waste memory allocating an OBJ_NONE object (if we were traversing, we'd need it eventually, but the whole point of this code path is not to traverse). Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-08-29  builtin/pack-objects.c: simplify add_objects_in_unpacked_packs()  (Taylor Blau; 1 file, -68/+16)
This function is used to implement `pack-objects`'s `--keep-unreachable` option, but can be simplified in a couple of ways (a sketch of the resulting callback shape follows below):

  - add_objects_in_unpacked_packs() iterates over all packs (and then all packed objects) itself, but could use for_each_packed_object() instead, since the missing flags necessary were added in the previous commit

  - objects are added to an in_pack array which stores (off_t, object) tuples, and then sorted in offset order, when we could iterate objects in offset order instead.

    There is a slight behavior change here: before, we would have added objects in sorted offset order among _all_ packs. Handing objects to create_object_entry() in pack order for each pack (instead of feeding objects from all packs simultaneously, ordered by their offsets relative to different packs) is much more reasonable, if different than how the code currently works.

  - objects in a single pack are iterated in index order and searched for in order to discover their offsets, which is much less efficient than using the on-disk reverse index

Simplify the function by addressing each of the above and moving the core of the loop into a callback function that we then pass to for_each_packed_object() instead of open-coding the latter function ourselves.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
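A rough sketch of that callback shape; the flag combination and the add_object_entry() signature are assumptions about builtin/pack-objects.c at the time, not a quotation of the patch:

    static int add_object_in_unpacked_pack(const struct object_id *oid,
                                           struct packed_git *pack,
                                           uint32_t pos,
                                           void *data)
    {
            /* the real type is resolved later; OBJ_NONE leaves type_valid unset */
            add_object_entry(oid, OBJ_NONE, "", 0);
            return 0;
    }

    static void add_objects_in_unpacked_packs_sketch(void)
    {
            if (for_each_packed_object(add_object_in_unpacked_pack, NULL,
                                       FOR_EACH_OBJECT_PACK_ORDER |
                                       FOR_EACH_OBJECT_LOCAL_ONLY))
                    die(_("cannot open pack index"));
    }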
2021-08-24  pack-bitmap-write.c: gracefully fail to write non-closed bitmaps  (Taylor Blau; 1 file, -1/+2)
The set of objects covered by a bitmap must be closed under reachability, since it must be the case that there is a valid bit position assigned for every possible reachable object (otherwise the bitmaps would be incomplete). Pack bitmaps are never written from 'git repack' unless repacking all-into-one, and so we never write non-closed bitmaps (except in the case of partial clones where we aren't guaranteed to have all objects). But multi-pack bitmaps change this, since it isn't known whether the set of objects in the MIDX is closed under reachability until walking them. Plumb through a bit that is set when a reachable object isn't found. As soon as a reachable object isn't found in the set of objects to include in the bitmap, bitmap_writer_build() knows that the set is not closed, and so it now fails gracefully. A test is added in t0410 to trigger a bitmap write without full reachability closure by removing local copies of some reachable objects from a promisor remote. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-07-09  pack-objects: fix segfault in --stdin-packs option  (Ævar Arnfjörð Bjarmason; 1 file, -3/+20)
Fix a segfault in the --stdin-packs option added in 339bce27f4f (builtin/pack-objects.c: add '--stdin-packs' option, 2021-02-22). The read_packs_list_from_stdin() function didn't check that the lines it was reading were valid packs, and thus when doing the QSORT() with pack_mtime_cmp() we'd have a NULL "util" field. The "util" field is used to associate the names of included/excluded packs with the packed_git structs they correspond to. The logic error was in assuming that we could iterate all packs and annotate the excluded and included packs we got, as opposed to checking the lines we got on stdin. There was a check for excluded packs, but included packs were simply assumed to be valid. As noted in the test we'll not report the first bad line, but whatever line sorted first according to the string-list.c API. In this case I think that's fine. There was further discussion of alternate approaches in [1]. Even though we're being lazy let's assert the line we do expect to get in the test, since whoever changes this code in the future might miss this case, and would want to update the test and comments. 1. http://lore.kernel.org/git/YND3h2l10PlnSNGJ@nand.local Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-27  Merge branch 'ab/pack-linkage-fix'  (Junio C Hamano; 1 file, -7/+167)
"ld" on Solaris fails to link some test helpers, which has been worked around by reshuffling the inline function definitions from a header file to a source file that is the only user of them. * ab/pack-linkage-fix: pack-objects: move static inline from a header to the sole consumer
2021-05-27  pack-objects: move static inline from a header to the sole consumer  (Ævar Arnfjörð Bjarmason; 1 file, -7/+167)
Move the code that is only used in builtin/pack-objects.c out of pack-objects.h. This fixes an issue where Solaris's SunCC hasn't been able to compile git since 483fa7f42d9 (t/helper/test-bitmap.c: initial commit, 2021-03-31).

The real origin of that issue is that in 898eba5e630 (pack-objects: refer to delta objects by index instead of pointer, 2018-04-14) utility functions only needed by builtin/pack-objects.c were added to pack-objects.h. Since then the header has been used in a few other places, but 483fa7f42d9 was the first time it was used by a test helper.

Since Solaris is stricter about linking and the oe_get_size_slow() function lives in builtin/pack-objects.c the build started failing with:

    Undefined                       first referenced
     symbol                             in file
    oe_get_size_slow                    t/helper/test-bitmap.o
    ld: fatal: symbol referencing errors. No output written to t/helper/test-tool

On other platforms this is presumably OK because the compiler and/or linker detects that the "static inline" functions that reference oe_get_size_slow() aren't used.

Let's solve this by moving the relevant code from pack-objects.h to builtin/pack-objects.c. This is almost entirely a code-only move, but because of the early macro definitions in that file referencing some of these inline functions we need to move the definition of "static struct packing_data to_pack" earlier, and declare these inline functions above the macros.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-11  Merge branch 'jk/pack-objects-negative-options-fix'  (Junio C Hamano; 1 file, -0/+4)
Options to "git pack-objects" that take numeric values like --window and --depth should not accept negative values; the input validation has been tightened.

* jk/pack-objects-negative-options-fix:
  pack-objects: clamp negative depth to 0
  t5316: check behavior of pack-objects --depth=0
  pack-objects: clamp negative window size to 0
  t5300: check that we produced expected number of deltas
  t5300: modernize basic tests
2021-05-10  Merge branch 'rs/repack-without-loosening-promised-objects'  (Junio C Hamano; 1 file, -1/+7)
"git repack -A -d" in a partial clone unnecessarily loosened objects in promisor pack. * rs/repack-without-loosening-promised-objects: repack: avoid loosening promisor objects in partial clones
2021-05-10  Merge branch 'bc/hash-transition-interop-part-1'  (Junio C Hamano; 1 file, -10/+10)
SHA-256 transition.

* bc/hash-transition-interop-part-1:
  hex: print objects using the hash algorithm member
  hex: default to the_hash_algo on zero algorithm value
  builtin/pack-objects: avoid using struct object_id for pack hash
  commit-graph: don't store file hashes as struct object_id
  builtin/show-index: set the algorithm for object IDs
  hash: provide per-algorithm null OIDs
  hash: set, copy, and use algo field in struct object_id
  builtin/pack-redundant: avoid casting buffers to struct object_id
  Use the final_oid_fn to finalize hashing of object IDs
  hash: add a function to finalize object IDs
  http-push: set algorithm when reading object ID
  Always use oidread to read into struct object_id
  hash: add an algo member to struct object_id
2021-05-07  Merge branch 'ps/rev-list-object-type-filter'  (Junio C Hamano; 1 file, -1/+1)
"git rev-list" learns the "--filter=object:type=<type>" option, which can be used to exclude objects of the given kind from the packfile generated by pack-objects.

* ps/rev-list-object-type-filter:
  rev-list: allow filtering of provided items
  pack-bitmap: implement combined filter
  pack-bitmap: implement object type filter
  list-objects: implement object type filter
  list-objects: support filtering by tag and commit
  list-objects: move tag processing into its own function
  revision: mark commit parents as NOT_USER_GIVEN
  uploadpack.txt: document implication of `uploadpackfilter.allow`
2021-05-03  pack-objects: clamp negative depth to 0  (Jeff King; 1 file, -0/+2)
A negative delta depth makes no sense, and the code is not prepared to handle it. If passed "--depth=-1" on the command line, then this line from break_delta_chains():

    cur->depth = (total_depth--) % (depth + 1);

triggers a divide-by-zero. This is undefined behavior according to the C standard, but on POSIX systems results in SIGFPE killing the process.

This is certainly one way to inform the user that the command was invalid, but it's a bit friendlier to just treat it as "don't allow any deltas", which we already do for --depth=0.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-03  pack-objects: clamp negative window size to 0  (Jeff King; 1 file, -0/+2)
A negative window size makes no sense, and the code in find_deltas() is not prepared to handle it. If you pass "-1", for example, we end up generating a 0-length array of "struct unpacked", but our loop assumes it has at least one entry in it (and we end up reading garbage memory).

We could complain to the user about this, but it's more forgiving to just clamp it to 0, which means "do not find any deltas at all". The 0-case is already tested earlier in the script, so we'll make sure this does the same thing.

Reported-by: Yiyuan guo <yguoaz@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
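The two entries above apply the same fix pattern to --depth and --window; a condensed sketch, with illustrative names (the real checks live in the option and config handling of builtin/pack-objects.c):

    static void clamp_delta_options(int *depth, int *window)
    {
            if (*depth < 0)
                    *depth = 0;   /* --depth=-1 now behaves like --depth=0: no deltas */
            if (*window < 0)
                    *window = 0;  /* --window=-1 likewise finds no deltas at all */
    }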
2021-04-30  Merge branch 'jk/promisor-optim'  (Junio C Hamano; 1 file, -1/+1)
Handling of "promisor packs" that allows certain objects to be missing and lazily retrievable has been optimized (a bit).

* jk/promisor-optim:
  revision: avoid parsing with --exclude-promisor-objects
  lookup_unknown_object(): take a repository argument
  is_promisor_object(): free tree buffer after parsing
2021-04-28  repack: avoid loosening promisor objects in partial clones  (Rafael Silva; 1 file, -1/+7)
When `git repack -A -d` is run in a partial clone, `pack-objects` is invoked twice: once to repack all promisor objects, and once to repack all non-promisor objects. The latter `pack-objects` invocation is with --exclude-promisor-objects and --unpack-unreachable, which loosens all objects unused during this invocation. Unfortunately, this includes promisor objects.

Because the -d argument to `git repack` subsequently deletes all loose objects that are also in packs, these just-loosened promisor objects will be immediately deleted. However, this extra disk churn is unnecessary in the first place. For example, in a newly-cloned partial repo that filters all blob objects (e.g. `--filter=blob:none`), `repack` ends up unpacking all trees and commits into the filesystem because every object, in this particular case, is a promisor object. Depending on the repo size, this increases the disk usage considerably: in my copy of linux.git, the object directory peaked at 26GB of additional disk usage.

In order to avoid this extra disk churn, pass the names of the promisor packfiles as --keep-pack arguments to the second invocation of `pack-objects`. This informs `pack-objects` that the promisor objects are already in a safe packfile and, therefore, do not need to be loosened.

For testing, we need to validate whether any object was loosened. However, the "evidence" (loosened objects) is deleted during the process, which prevents us from inspecting the object directory. Instead, let's teach `pack-objects` to count loosened objects and emit the count via trace2, thus allowing us to inspect the debug events after the process is finished. This new event is used in the added regression test.

Lastly, add a new perf test to evaluate the performance impact made by these changes (tested on git.git):

    Test          HEAD^                 HEAD
    ----------------------------------------------------------
    5600.3: gc    134.38(41.93+90.95)   7.80(6.72+1.35) -94.2%

For a bigger repository, such as linux.git, the improvement is even bigger:

    Test          HEAD^                     HEAD
    -------------------------------------------------------------------
    5600.3: gc    6833.00(918.07+3162.74)   268.79(227.02+39.18) -96.1%

These improvements are particularly big because every object in the newly-cloned partial repository is a promisor object.

Reported-by: SZEDER Gábor <szeder.dev@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Helped-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Rafael Silva <rafaeloliveira.cs@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
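A sketch of the trace2 counter mentioned above. trace2_data_intmax() is the real trace2 API; the counter name and the event key used here are illustrative, not the exact key emitted by `git pack-objects`:

    static uint32_t loosened_objects_nr;

    static void count_loosened_object(void)
    {
            loosened_objects_nr++;      /* bump once per object written loose */
    }

    static void report_loosened_objects(void)
    {
            /* shows up as a trace2 "data" event that tests can grep for */
            trace2_data_intmax("pack-objects", the_repository,
                               "loosened_unused_packed_objects",
                               loosened_objects_nr);
    }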
2021-04-27  builtin/pack-objects: avoid using struct object_id for pack hash  (brian m. carlson; 1 file, -10/+10)
We use struct object_id for the names of objects. It isn't intended to be used for other hash values that don't name objects such as the pack hash. Because struct object_id will soon need to have its algorithm member set, using it in this code path would mean that we didn't set that member, only the hash member, which would result in a crash. For both of these reasons, switch to using an unsigned char array of size GIT_MAX_RAWSZ. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-20  Merge branch 'jk/pack-objects-bitmap-progress-fix'  (Junio C Hamano; 1 file, -1/+2)
When "git pack-objects" makes a literal copy of a part of existing packfile using the reachability bitmaps, its update to the progress meter was broken. * jk/pack-objects-bitmap-progress-fix: pack-objects: update "nr_seen" progress based on pack-reused count
2021-04-19  rev-list: allow filtering of provided items  (Patrick Steinhardt; 1 file, -1/+1)
When providing an object filter, it is currently impossible to also filter provided items. E.g. when executing `git rev-list HEAD`, the commit this reference points to will be treated as user-provided and is thus excluded from the filtering mechanism.

This makes it harder than necessary to properly use the new `--filter=object:type` filter given that even if the user wants to only see blobs, he'll still see commits of provided references.

Improve this by introducing a new `--filter-provided-objects` option to the git-rev-list(1) command. If given, then all user-provided references will be subject to filtering.

Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-13  lookup_unknown_object(): take a repository argument  (Jeff King; 1 file, -1/+1)
All of the other lookup_foo() functions take a repository argument, but lookup_unknown_object() was never converted, and it uses the_repository internally. Let's fix that. We could leave a wrapper that uses the_repository, but there aren't that many calls, so we'll just convert them all. I looked briefly at each site to see if we had a repository struct (besides the_repository) we could pass, but none of them do (so this conversion to pass the_repository is a pure noop in each case, though it does take us one step closer to eventually getting rid of the_repository). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-12  pack-objects: update "nr_seen" progress based on pack-reused count  (Jeff King; 1 file, -1/+2)
When serving a clone or fetch with bitmaps, after deciding which objects need to be sent our "pack reuse" mechanism kicks in: we try to send more-or-less verbatim a bunch of objects from the beginning of the bitmapped packfile without even adding them to the to_pack.objects array. After deciding which objects will be in the "reused" portion, we update nr_result to account for those, and then trigger display_progress() to show the user (who is undoubtedly dazzled that we managed to enumerate so many objects so quickly). But then something confusing happens: the "Enumerating objects" progress meter jumps _backwards_, counting up from zero the number of objects we actually add into to_pack.objects. This worked correctly once upon a time, but was broken in 5af050437a (pack-objects: show some progress when counting kept objects, 2018-04-15), when the latter half of that progress meter switched to using a separate nr_seen counter, rather than nr_result. Nobody noticed for two reasons: - prior to the pack-reuse fixes from a14aebeac3 (Merge branch 'jk/packfile-reuse-cleanup', 2020-02-14), the reuse code almost never kicked in anyway - the output looks _kind of_ correct. The "backwards" moment is hard to catch, because we overwrite the old progress number with the new one, and the larger number is displayed only for a second. So unless you look at that exact second, you just see the much smaller value, counting up to the number of non-reused objects (though of course if you catch it in stderr, or look at GIT_TRACE_PACKET from a server with bitmaps, you can see both values). This smaller output isn't wrong per se, but isn't counting what we ever intended to. We should give the user the whole number of objects we considered (which, as per 5af050437a's original purpose, is already _not_ a count of what goes into to_pack.objects). The follow-on "Counting objects" meter shows the actual number of objects we feed into that array. We can easily fix this by bumping (and showing) nr_seen for the pack-reused objects. When the included test is run without this patch, the second pack-objects invocation produces "Enumerating objects: 1" to show the one loose object, even though the resulting pack has hundreds of objects in it. With it, we jump to "Enumerating objects: 674" after deciding on reuse, and then "675" when we add in the loose object. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-31  builtin/pack-objects.c: respect 'pack.preferBitmapTips'  (Taylor Blau; 1 file, -0/+34)
When writing a new pack with a bitmap, it is sometimes convenient to indicate some reference prefixes which should receive priority when selecting which commits to receive bitmaps.

A truly motivated caller could accomplish this by setting 'pack.islandCore' (since all commits in the core island are similarly marked as preferred), but this requires callers to opt into using delta islands, which they may or may not want to do.

Introduce a new multi-valued configuration, 'pack.preferBitmapTips', to allow callers to specify a list of reference prefixes. All references which have a prefix contained in 'pack.preferBitmapTips' will mark their tips as "preferred" in the same way as commits are marked as preferred for selection by 'pack.islandCore'.

The choice of the verb "prefer" is intentional: marking the NEEDS_BITMAP flag on an object does *not* guarantee that that object will receive a bitmap. It merely guarantees that that commit will receive a bitmap over any *other* commit in the same window by bitmap_writer_select_commits().

The test this patch adds reflects this quirk, too. It only tests that a commit (which didn't receive bitmaps by default) is selected for bitmaps after changing the value of 'pack.preferBitmapTips' to include it. Other commits may lose their bitmaps as a byproduct of how the selection process works (bitmap_writer_select_commits() ignores the remainder of a window after seeing a commit with the NEEDS_BITMAP flag).

This configuration will aid in selecting important references for multi-pack bitmaps, since they do not respect the same pack.islandCore configuration. (They could, but doing so may be confusing, since it is packs--not bitmaps--which are influenced by the delta-islands configuration).

In a fork network repository (one which lists all forks of a given repository as remotes), for example, it is useful to set pack.preferBitmapTips to 'refs/remotes/<root>/heads' and 'refs/remotes/<root>/tags', where '<root>' is an opaque identifier referring to the repository which is at the base of the fork chain.

Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-30  Merge branch 'hx/pack-objects-chunk-comment'  (Junio C Hamano; 1 file, -2/+2)
Comment update. * hx/pack-objects-chunk-comment: pack-objects: fix comment of reused_chunk.difference
2021-03-24  Merge branch 'tb/geometric-repack'  (Junio C Hamano; 1 file, -53/+277)
"git repack" so far has been only capable of repacking everything under the sun into a single pack (or split by size). A cleverer strategy to reduce the cost of repacking a repository has been introduced.

* tb/geometric-repack:
  builtin/pack-objects.c: ignore missing links with --stdin-packs
  builtin/repack.c: reword comment around pack-objects flags
  builtin/repack.c: be more conservative with unsigned overflows
  builtin/repack.c: assign pack split later
  t7703: test --geometric repack with loose objects
  builtin/repack.c: do not repack single packs with --geometric
  builtin/repack.c: add '--geometric' option
  packfile: add kept-pack cache for find_kept_pack_entry()
  builtin/pack-objects.c: rewrite honor-pack-keep logic
  p5303: measure time to repack with keep
  p5303: add missing &&-chains
  builtin/pack-objects.c: add '--stdin-packs' option
  revision: learn '--no-kept-objects'
  packfile: introduce 'find_kept_pack_entry()'
2021-03-24  pack-objects: fix comment of reused_chunk.difference  (Han Xin; 1 file, -2/+2)
As record_reused_object(offset, offset - hashfile_total(out)) shows, reused_chunk.difference should be the offset in the original packfile minus the offset in the generated packfile. The comment, however, described it the opposite way.

Signed-off-by: Han Xin <hanxin.hx@alibaba-inc.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-19  builtin/pack-objects.c: ignore missing links with --stdin-packs  (Taylor Blau; 1 file, -0/+1)
When 'git pack-objects --stdin-packs' encounters a commit in a pack, it marks it as a starting point of a best-effort reachability traversal that is used to populate the name-hash of the objects listed in the given packs. The traversal expects that it should be able to walk the ancestors of all commits in a pack without issue. Ordinarily this is the case, but it is possible to have missing parents from an unreachable part of the repository. In that case, we'd consider any missing objects in the unreachable portion of the graph to be junk.

This should be handled gracefully: since the traversal is best-effort (i.e., we don't strictly need to fill in all of the name-hash fields), we should simply ignore any missing links.

This patch does that (by setting the 'ignore_missing_links' bit on the rev_info struct), and ensures we don't regress in the future by adding a test which demonstrates this case.

It is a little over-eager, since it will also ignore missing links in reachable parts of the packs (which would indicate a corrupted repository), but '--stdin-packs' is explicitly *not* about reachability. So this step isn't making anything worse for a repository which contains packs missing reachable objects (since we never drop objects with '--stdin-packs').

Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-03-13  use CALLOC_ARRAY  (René Scharfe; 1 file, -4/+4)
Add and apply a semantic patch for converting code that open-codes CALLOC_ARRAY to use it instead. It shortens the code and infers the element size automatically. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
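The shape of the conversion the semantic patch applies, shown on a made-up struct; CALLOC_ARRAY is the macro from git-compat-util.h:

    struct example_entry { int n; };

    static struct example_entry *alloc_entries(size_t nr)
    {
            struct example_entry *entries;

            /* before: entries = xcalloc(nr, sizeof(*entries)); */
            CALLOC_ARRAY(entries, nr);  /* element size inferred from 'entries' */
            return entries;
    }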
2021-02-22  builtin/pack-objects.c: rewrite honor-pack-keep logic  (Jeff King; 1 file, -51/+76)
Now that we have find_kept_pack_entry(), we don't have to manually keep hunting through every pack to find a possible "kept" duplicate of the object. This should be faster, assuming only a portion of your total packs are actually kept.

Note that we have to re-order the logic a bit here; we can deal with the disqualifying situations first (e.g., finding the object in a non-local pack with --local), then "kept" situation(s), and then just fall back to other "--local" conditions.

Here are the results from p5303 (measurements again taken on the kernel):

    Test                               HEAD^                  HEAD
    -----------------------------------------------------------------------------------------------
    5303.5: repack (1)                 57.26(54.59+10.84)     57.34(54.66+10.88) +0.1%
    5303.6: repack with kept (1)       57.33(54.80+10.51)     57.38(54.83+10.49) +0.1%
    5303.11: repack (50)               71.54(88.57+4.84)      71.70(88.99+4.74) +0.2%
    5303.12: repack with kept (50)     85.12(102.05+4.94)     72.58(89.61+4.78) -14.7%
    5303.17: repack (1000)             216.87(490.79+14.57)   217.19(491.72+14.25) +0.1%
    5303.18: repack with kept (1000)   665.63(938.87+15.76)   246.12(520.07+14.93) -63.0%

and the --stdin-packs timings:

    5303.7: repack with --stdin-packs (1)       0.01(0.01+0.00)        0.00(0.00+0.00) -100.0%
    5303.13: repack with --stdin-packs (50)     3.53(12.07+0.24)       3.43(11.75+0.24) -2.8%
    5303.19: repack with --stdin-packs (1000)   195.83(371.82+8.10)    130.50(307.15+7.66) -33.4%

So our repack with an empty .keep pack is roughly as fast as one without a .keep pack up to 50 packs. But the --stdin-packs case scales a little better, too. Notably, it is faster than a repack of the same size and a kept pack. It looks at fewer objects, of course, but the penalty for looking at many packs isn't as costly.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-02-22  builtin/pack-objects.c: add '--stdin-packs' option  (Taylor Blau; 1 file, -2/+200)
In an upcoming commit, 'git repack' will want to create a pack comprised of all of the objects in some packs (the included packs) excluding any objects in some other packs (the excluded packs).

This caller could iterate those packs itself and feed the objects it finds to 'git pack-objects' directly over stdin, but this approach has a few downsides:

  - It requires every caller that wants to drive 'git pack-objects' in this way to implement pack iteration themselves. This forces the caller to think about details like what order objects are fed to pack-objects, which callers would likely rather not do.

  - If the set of objects in included packs is large, it requires sending a lot of data over a pipe, which is inefficient.

  - The caller is forced to keep track of the excluded objects, too, and make sure that it doesn't send any objects that appear in both included and excluded packs.

But the biggest downside is the lack of a reachability traversal. Because the caller passes in a list of objects directly, those objects don't get a namehash assigned to them, which can have a negative impact on the delta selection process, causing 'git pack-objects' to fail to find good deltas even when they exist.

The caller could formulate a reachability traversal themselves, but the only way to drive 'git pack-objects' in this way is to do a full traversal, and then remove objects in the excluded packs after the traversal is complete. This can be detrimental to callers who care about performance, especially in repositories with many objects.

Introduce 'git pack-objects --stdin-packs' which remedies these four concerns.

'git pack-objects --stdin-packs' expects a list of pack names on stdin, where 'pack-xyz.pack' denotes that pack as included, and '^pack-xyz.pack' denotes it as excluded. The resulting pack includes all objects that are present in at least one included pack, and aren't present in any excluded pack.

To address the delta selection problem, 'git pack-objects --stdin-packs' works as follows. First, it assembles a list of objects that it is going to pack, as above. Then, a reachability traversal is started, whose tips are any commits mentioned in included packs. Upon visiting an object, we find its corresponding object_entry in the to_pack list, and set its namehash parameter appropriately.

To avoid the traversal visiting more objects than it needs to, the traversal is halted upon encountering an object which can be found in an excluded pack (by marking the excluded packs as kept in-core, and passing --no-kept-objects=in-core to the revision machinery). This can cause the traversal to halt early, for example if an object in an included pack is an ancestor of ones in excluded packs. But stopping early is OK, since filling in the namehash fields of objects in the to_pack list is only additive (i.e., having it helps the delta selection process, but leaving it blank doesn't impact the correctness of the resulting pack).

Even still, it is unlikely that this hurts us much in practice, since the 'git repack --geometric' caller (which is introduced in a later commit) marks small packs as included, and large ones as excluded. During ordinary use, the small packs usually represent pushes after a large repack, and so are unlikely to be ancestors of objects that already exist in the repository.

(I found it convenient while developing this patch to have 'git pack-objects' report the number of objects which were visited and got their namehash fields filled in during traversal. This is also included in the below patch via trace2 data lines).

Suggested-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-02-12  Merge branch 'tb/pack-revindex-on-disk'  (Junio C Hamano; 1 file, -0/+9)
Introduce an on-disk file to record revindex for packdata, which traditionally was always created on the fly and only in-core.

* tb/pack-revindex-on-disk:
  t5325: check both on-disk and in-memory reverse index
  pack-revindex: ensure that on-disk reverse indexes are given precedence
  t: support GIT_TEST_WRITE_REV_INDEX
  t: prepare for GIT_TEST_WRITE_REV_INDEX
  Documentation/config/pack.txt: advertise 'pack.writeReverseIndex'
  builtin/pack-objects.c: respect 'pack.writeReverseIndex'
  builtin/index-pack.c: write reverse indexes
  builtin/index-pack.c: allow stripping arbitrary extensions
  pack-write.c: prepare to write 'pack-*.rev' files
  packfile: prepare for the existence of '*.rev' files
2021-02-05  Merge branch 'jv/pack-objects-narrower-ref-iteration'  (Junio C Hamano; 1 file, -5/+3)
The "pack-objects" command needs to iterate over all the tags when automatic tag following is enabled, but it actually iterated over all refs and then discarded everything outside "refs/tags/" hierarchy, which was quite wasteful. * jv/pack-objects-narrower-ref-iteration: builtin/pack-objects.c: avoid iterating all refs
2021-02-03  Merge branch 'jk/peel-iterated-oid'  (Junio C Hamano; 1 file, -2/+2)
The peel_ref() API has been replaced with peel_iterated_oid(). * jk/peel-iterated-oid: refs: switch peel_ref() to peel_iterated_oid()
2021-01-25  t: support GIT_TEST_WRITE_REV_INDEX  (Taylor Blau; 1 file, -0/+2)
Add a new option that unconditionally enables the pack.writeReverseIndex setting in order to run the whole test suite in a mode that generates on-disk reverse indexes. Additionally, enable this mode in the second run of tests under linux-gcc in 'ci/run-build-and-tests.sh'. Once on-disk reverse indexes are proven out over several releases, we can change the default value of that configuration to 'true', and drop this patch. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-25  builtin/pack-objects.c: respect 'pack.writeReverseIndex'  (Taylor Blau; 1 file, -0/+7)
Now that we have an implementation that can write the new reverse index format, enable writing a .rev file in 'git pack-objects' by consulting the pack.writeReverseIndex configuration variable. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-22  builtin/pack-objects.c: avoid iterating all refs  (Jacob Vosmaer; 1 file, -5/+3)
In git-pack-objects, we iterate over all the tags if the --include-tag option is passed on the command line. For some reason this uses for_each_ref, which is expensive if the repo has many refs. We should use for_each_tag_ref instead. Because the add_ref_tag callback will now only visit tags, we simplified it a bit.

The motivation for this change is that we observed performance issues with a repository on gitlab.com that has 500,000 refs but only 2,000 tags. The fetch traffic on that repo is dominated by CI, and when we changed CI to fetch with 'git fetch --no-tags' we saw a dramatic change in the CPU profile of git-pack-objects. This led us to this particular ref walk. More details in: https://gitlab.com/gitlab-com/gl-infra/scalability/-/issues/746#note_483546598

Signed-off-by: Jacob Vosmaer <jacob@gitlab.com>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
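A sketch of the narrower iteration; the callback body here is illustrative (the real add_ref_tag() also checks that the peeled tag target actually made it into the pack before adding the tag chain):

    static int add_ref_tag_sketch(const char *refname, const struct object_id *oid,
                                  int flag, void *cb_data)
    {
            struct object_id peeled;

            /* every ref seen here is already under refs/tags/ */
            if (!peel_iterated_oid(oid, &peeled))
                    add_object_entry(oid, OBJ_TAG, NULL, 0); /* illustrative */
            return 0;
    }

    static void include_tags_sketch(void)
    {
            /* before: for_each_ref() walked every ref and skipped non-tags */
            for_each_tag_ref(add_ref_tag_sketch, NULL);
    }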
2021-01-21  refs: switch peel_ref() to peel_iterated_oid()  (Jeff King; 1 file, -2/+2)
The peel_ref() interface is confusing and error-prone:

  - it's typically used by ref iteration callbacks that have both a refname and oid. But since they pass only the refname, we may load the ref value from the filesystem again. This is inefficient, but also means we are open to a race if somebody simultaneously updates the ref. E.g., this:

        int some_ref_cb(const char *refname, const struct object_id *oid, ...)
        {
                if (!peel_ref(refname, &peeled))
                        printf("%s peels to %s",
                               oid_to_hex(oid), oid_to_hex(&peeled));
        }

    could print nonsense. It is correct to say "refname peels to..." (you may see the "before" value or the "after" value, either of which is consistent), but mentioning both oids may be mixing before/after values.

    Worse, whether this is possible depends on whether the optimization to read from the current iterator value kicks in. So it is actually not possible with:

        for_each_ref(some_ref_cb);

    but it _is_ possible with:

        head_ref(some_ref_cb);

    which does not use the iterator mechanism (though in practice, HEAD should never peel to anything, so this may not be triggerable).

  - it must take a fully-qualified refname for the read_ref_full() code path to work. Yet we routinely pass it partial refnames from callbacks to for_each_tag_ref(), etc. This happens to work when iterating because there we do not call read_ref_full() at all, and only use the passed refname to check if it is the same as the iterator. But the requirements for the function parameters are quite unclear.

Instead of taking a refname, let's instead take an oid. That fixes both problems. It's a little funny for a "ref" function not to involve refs at all. The key thing is that it's optimizing under the hood based on having access to the ref iterator. So let's change the name to make it clear why you'd want this function versus just peel_object().

There are two other directions I considered but rejected:

  - we could pass the peel information into the each_ref_fn callback. However, we don't know if the caller actually wants it or not. For packed-refs, providing it is essentially free. But for loose refs, we actually have to peel the object, which would be wasteful in most cases. We could likewise pass in a flag to the callback indicating whether the peeled information is known, but that complicates those callbacks, as they then have to decide whether to manually peel themselves. Plus it requires changing the interface of every callback, whether they care about peeling or not, and there are many of them.

  - we could make a function to return the peeled value of the current iterated ref (computing it if necessary), and BUG() otherwise. I.e.:

        int peel_current_iterated_ref(struct object_id *out);

    Each of the current callers is an each_ref_fn callback, so they'd mostly be happy. But:

      - we use those callbacks with functions like head_ref(), which do not use the iteration code. So we'd need to handle the fallback case there, anyway.

      - it's possible that a caller would want to call into generic code that sometimes is used during iteration and sometimes not. This encapsulates the logic to do the fast thing when possible, and fallback when necessary.

The implementation is mostly obvious, but I want to call out a few things in the patch:

  - the test-tool coverage for peel_ref() is now meaningless, as it all collapses to a single peel_object() call (arguably they were pretty uninteresting before; the tricky part of that function is the fast-path we see during iteration, but these calls didn't trigger that). I've just dropped it entirely, though note that some other tests relied on the tags we created; I've moved that creation to the tests where it matters.

  - we no longer need to take a ref_store parameter, since we'd never look up a ref now. We do still rely on a global "current iterator" variable which _could_ be kept per-ref-store. But in practice this is only useful if there are multiple recursive iterations, at which point the more appropriate solution is probably a stack of iterators. No caller used the actual ref-store parameter anyway (they all call the wrapper that passes the_repository).

  - the original only kicked in the optimization when the "refname" pointer matched (i.e., not string comparison). We do likewise with the "oid" parameter here, but fall back to doing an actual oideq() call. This in theory lets us kick in the optimization more often, though in practice no current caller cares. It should never be wrong, though (peeling is a property of an object, so two refs pointing to the same object would peel identically).

  - the original took care not to touch the peeled out-parameter unless we found something to put in it. But no caller cares about this, and anyway, it is enforced by peel_object() itself (and even in the optimized iterator case, that's where we eventually end up). We can shorten the code and avoid an extra copy by just passing the out-parameter through the stack.

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
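The example callback above, reworked against the new interface as a sketch (assuming the two-argument peel_iterated_oid() this topic introduces, which takes the oid the iterator already handed us, so no second ref lookup can race with an update):

    static int some_ref_cb(const char *refname, const struct object_id *oid,
                           int flags, void *cb_data)
    {
            struct object_id peeled;

            if (!peel_iterated_oid(oid, &peeled))
                    printf("%s peels to %s\n",
                           oid_to_hex(oid), oid_to_hex(&peeled));
            return 0;
    }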
2021-01-13  check_object(): convert to new revindex API  (Taylor Blau; 1 file, -4/+4)
Replace direct accesses to the revindex with calls to 'offset_to_pack_pos()' and 'pack_pos_to_index()'. Since this caller already had some error checking (it can jump to the 'give_up' label if it encounters an error), we can easily check whether or not the provided offset points to an object in the given pack. This error checking existed prior to this patch, too, since the caller checks whether the return value from 'find_pack_revindex()' was NULL or not. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
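A sketch of the conversion pattern used in this and the following revindex entries, assuming the offset_to_pack_pos()/pack_pos_to_index() API from pack-revindex.h; error handling is condensed:

    static int lookup_by_offset(struct packed_git *p, off_t ofs, uint32_t *index)
    {
            uint32_t pos;

            if (offset_to_pack_pos(p, ofs, &pos) < 0)
                    return -1;                  /* offset points at no object: give up */
            *index = pack_pos_to_index(p, pos); /* position in the .idx file */
            return 0;
    }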
2021-01-13  write_reused_pack_verbatim(): convert to new revindex API  (Taylor Blau; 1 file, -1/+1)
Replace a direct access to the revindex array with 'pack_pos_to_offset()'. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-13  write_reused_pack_one(): convert to new revindex API  (Taylor Blau; 1 file, -4/+10)
Replace direct revindex accesses with calls to 'pack_pos_to_offset()' and 'pack_pos_to_index()'. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-01-13  write_reuse_object(): convert to new revindex API  (Taylor Blau; 1 file, -4/+9)
First replace 'find_pack_revindex()' with its replacement 'offset_to_pack_pos()'. This prevents any bogus OFS_DELTA that may make its way through until 'write_reuse_object()' from causing a bad memory read (if 'revidx' is 'NULL') Next, replace a direct access of '->nr' with the wrapper function 'pack_pos_to_index()'. Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-08  pack-bitmap-write: ignore BITMAP_FLAG_REUSE  (Jeff King; 1 file, -1/+0)
The on-disk bitmap format has a flag to mark a bitmap to be "reused". This is a rather curious feature, and works like this:

  - a run of pack-objects would decide to mark the last 80% of the bitmaps it generates with the reuse flag

  - the next time we generate bitmaps, we'd see those reuse flags from the last run, and mark those commits as special:

      - we'd be more likely to select those commits to get bitmaps in the new output

      - when generating the bitmap for a selected commit, we'd reuse the old bitmap as-is (rearranging the bits to match the new pack, of course)

However, neither of these behaviors particularly makes sense. Just because a commit happened to be bitmapped last time does not make it a good candidate for having a bitmap this time. In particular, we may choose bitmaps based on how recent they are in history, or whether a ref tip points to them, and those things will change. We're better off re-considering fresh which commits are good candidates.

Reusing the existing bitmap _is_ a reasonable thing to do to save computation. But only reusing exact bitmaps is a weak form of this. If we have an old bitmap for A and now want a new bitmap for its child, we should be able to compute that bitmap by looking only at the trees and commits that are new to the child. But this code would consider only exact reuse (which is perhaps why it was eager to select those commits in the first place).

Furthermore, the recent switch to the reverse-edge algorithm for generating bitmaps dropped this optimization entirely (and yet still performs better).

So let's do a few cleanups:

  - drop the whole "reusing bitmaps" phase of generating bitmaps. It's not helping anything, and is mostly unused code (or worse, code that is using CPU but not doing anything useful)

  - drop the use of the on-disk reuse flag to select commits to bitmap

  - stop setting the on-disk reuse flag in bitmaps we generate (since nothing respects it anymore)

We will keep a few innards of the reuse code, which will help us implement a more capable version of the "reuse" optimization:

  - simplify rebuild_existing_bitmaps() into a function that only builds the mapping of bits between the old and new orders, but doesn't actually convert any bitmaps

  - make rebuild_bitmap() public; we'll call it lazily to convert bitmaps as we traverse (using the mapping created above)

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-19  Merge branch 'jc/object-names-are-not-sha-1'  (Junio C Hamano; 1 file, -1/+1)
A few end-user facing messages have been updated to be hash-algorithm agnostic. * jc/object-names-are-not-sha-1: messages: avoid SHA-1 in end-user facing messages
2020-08-14  messages: avoid SHA-1 in end-user facing messages  (Junio C Hamano; 1 file, -1/+1)
There are still a handful of mentions of SHA-1 where we meant the (hexadecimal) object names in end-user facing messages. Rewrite them.

I was hoping that this could mostly be s/SHA-1/object name/, but a few messages needed rephrasing to keep the result readable.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-13  Merge branch 'jt/has_object'  (Junio C Hamano; 1 file, -2/+2)
A new helper function has_object() has been introduced to make it easier to mark object existence checks that do and don't want to trigger lazy fetches, and a few such checks are converted using it.

* jt/has_object:
  fsck: do not lazy fetch known non-promisor object
  pack-objects: no fetch when allow-{any,promisor}
  apply: do not lazy fetch when applying binary
  sha1-file: introduce no-lazy-fetch has_object()
2020-08-10  Merge branch 'jk/strvec'  (Junio C Hamano; 1 file, -13/+13)
The argv_array API is useful for not just managing argv but any "vector" (NULL-terminated array) of strings, and has seen adoption to a certain degree. It has been renamed to "strvec" to reduce the barrier to adoption.

* jk/strvec:
  strvec: rename struct fields
  strvec: drop argv_array compatibility layer
  strvec: update documention to avoid argv_array
  strvec: fix indentation in renamed calls
  strvec: convert remaining callers away from argv_array name
  strvec: convert more callers away from argv_array name
  strvec: convert builtin/ callers away from argv_array name
  quote: rename sq_dequote_to_argv_array to mention strvec
  strvec: rename files from argv-array to strvec
  argv-array: rename to strvec
  argv-array: use size_t for count and alloc
2020-08-06  pack-objects: no fetch when allow-{any,promisor}  (Jonathan Tan; 1 file, -2/+2)
The options --missing=allow-{any,promisor} were introduced in caf3827e2f ("rev-list: add list-objects filtering support", 2017-11-22) with the following note in the commit message:

    This patch introduces handling of missing objects to help debugging
    and development of the "partial clone" mechanism, and once the
    mechanism is implemented, for a power user to perform operations
    that are missing-object aware without incurring the cost of checking
    if a missing link is expected.

The idea that these options are missing-object aware (and thus do not need to lazily fetch objects, unlike unaware commands that assume that all objects are present) is assumed in later commits such as 07ef3c6604 ("fetch test: use more robust test for filtered objects", 2020-01-15).

However, the current implementations of these options use has_object_file(), which indeed lazily fetches missing objects. Teach these implementations not to do so. Also, update the documentation of these options to be clearer.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
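A sketch of the distinction this entry describes, assuming the has_object() signature introduced by the jt/has_object topic (repository, oid, flags); treat it as a best-effort reconstruction rather than a quotation of the patch:

    static int is_present_without_fetching(const struct object_id *oid)
    {
            /*
             * has_object() only consults local object storage; unlike
             * has_object_file(), it will not trigger a lazy fetch from a
             * promisor remote, which is what --missing=allow-{any,promisor}
             * want to avoid.
             */
            return has_object(the_repository, oid, 0);
    }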