path: root/builtin-pack-objects.c
2009-04-18  Merge branch 'lt/pack-object-memuse'  (Junio C Hamano, 1 file, -6/+12)
* lt/pack-object-memuse:
  show_object(): push path_name() call further down
  process_{tree,blob}: show objects without buffering

Conflicts:
  builtin-pack-objects.c
  builtin-rev-list.c
  list-objects.c
  list-objects.h
  upload-pack.c
2009-04-12  show_object(): push path_name() call further down  (Linus Torvalds, 1 file, -1/+3)
In particular, pushing the "path_name()" call _into_ the show() function would seem to allow:

  - more clarity into who "owns" the name (ie now when we free the name in the show_object callback, it's because we generated it ourselves by calling path_name())

  - not calling path_name() at all, either because we don't care about the name in the first place, or because we are actually happy walking the linked list of "struct name_path *" and the last component.

Now, I didn't do that latter optimization, because it would require some more coding, but especially looking at "builtin-pack-objects.c", we really don't even want the whole pathname; we really would be better off with the list of path components.

Why? We use that name for two things:

  - add_preferred_base_object(), which actually _wants_ to traverse the path, and now does it by looking for '/' characters!

  - name_hash(), which only cares about the last 16 characters of a name, so again, generating the full name seems to be just unnecessary work.

Anyway, so I didn't look any closer at those things, but it did convince me that the "show_object()" calling convention was crazy, and we're actually better off doing _less_ in list-objects.c, and giving people access to the internal data structures so that they can decide whether they want to generate a path-name or not.

This patch does that, and then for people who did use the name (even if they might do something more clever in the future), it just does the straightforward "name = path_name(path, component); .. free(name);" thing.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
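A stand-alone sketch of the ownership rule described above, using deliberately simplified stand-ins for git's structures and helpers (struct name_path, path_name() and show_object() here are modeled on the names in the message, not the exact declarations at this revision): the callback builds the full path from the component list itself, and since it allocated the string, it also frees it.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-ins; error handling omitted.  The point is the
     * ownership rule: the callback built the string, so it frees it. */
    struct name_path {
            struct name_path *up;       /* enclosing directory, or NULL */
            const char *elem;           /* one path component */
    };

    static char *path_name(const struct name_path *path, const char *component)
    {
            size_t len = strlen(component) + 1;
            const struct name_path *p;
            char *name, *dst;

            for (p = path; p; p = p->up)
                    len += strlen(p->elem) + 1;         /* room for "elem/" */

            name = malloc(len);
            dst = name + len - strlen(component) - 1;
            strcpy(dst, component);                     /* fill back to front */
            for (p = path; p; p = p->up) {
                    size_t n = strlen(p->elem);
                    dst -= n + 1;
                    memcpy(dst, p->elem, n);
                    dst[n] = '/';
            }
            return name;
    }

    static void show_object(const struct name_path *path, const char *component)
    {
            char *name = path_name(path, component);    /* we allocated it... */
            printf("%s\n", name);
            free(name);                                 /* ...so we free it here */
    }

    int main(void)
    {
            struct name_path dir = { NULL, "builtin" };
            show_object(&dir, "pack-objects.c");        /* prints builtin/pack-objects.c */
            return 0;
    }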
2009-04-12  process_{tree,blob}: show objects without buffering  (Linus Torvalds, 1 file, -6/+10)
Here's a less trivial thing, and slightly more dubious one.

I was looking at that "struct object_array objects", and wondering why we do that. I have honestly totally forgotten. Why not just call the "show()" function as we encounter the objects? Rather than add the objects to the object_array, and then at the very end going through the array and doing a 'show' on all, just do things more incrementally.

Now, there are possible downsides to this:

  - the "buffer using object_array" _can_ in theory result in at least better I-cache usage (two tight loops rather than one more spread-out one). I don't think this is a real issue, but in theory..

  - this _does_ change the order of the objects printed. Instead of doing a "process_tree(revs, commit->tree, &objects, NULL, "");" in the loop over the commits (which puts all the root trees _first_ in the object list), this patch just adds them to the list of pending objects, and then we'll traverse them in that order (and thus show each root tree object together with the objects we discover under it).

    I _think_ the new ordering actually makes more sense, but the object ordering is actually a subtle thing when it comes to packing efficiency, so any change in order is going to have implications for packing. Good or bad, I dunno.

  - There may be some reason why we did it that odd way with the object array, that I have simply forgotten.

Anyway, now that we don't buffer up the objects before showing them, that may actually result in lower memory usage during that whole traverse_commit_list() phase.

This is seriously not very deeply tested. It makes sense to me, it seems to pass all the tests, it looks ok, but...

Does anybody remember why we did that "object_array" thing? It used to be an "object_list" a long long time ago, but got changed into the array due to better memory usage patterns (those linked lists of objects are horrible from a memory allocation standpoint). But I wonder why we didn't do this back then. Maybe there's a reason for it.

Or maybe there _used_ to be a reason, and no longer is.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-12  Merge branch 'cc/bisect-filter'  (Junio C Hamano, 1 file, -3/+3)
* cc/bisect-filter: (21 commits)
  rev-list: add "int bisect_show_flags" in "struct rev_list_info"
  rev-list: remove last static vars used in "show_commit"
  list-objects: add "void *data" parameter to show functions
  bisect--helper: string output variables together with "&&"
  rev-list: pass "int flags" as last argument of "show_bisect_vars"
  t6030: test bisecting with paths
  bisect: use "bisect--helper" and remove "filter_skipped" function
  bisect: implement "read_bisect_paths" to read paths in "$GIT_DIR/BISECT_NAMES"
  bisect--helper: implement "git bisect--helper"
  bisect: use the new generic "sha1_pos" function to lookup sha1
  rev-list: call new "filter_skip" function
  patch-ids: use the new generic "sha1_pos" function to lookup sha1
  sha1-lookup: add new "sha1_pos" function to efficiently lookup sha1
  rev-list: pass "revs" to "show_bisect_vars"
  rev-list: make "show_bisect_vars" non static
  rev-list: move code to show bisect vars into its own function
  rev-list: move bisect related code into its own file
  rev-list: make "bisect_list" variable local to "cmd_rev_list"
  refs: add "for_each_ref_in" function to refactor "for_each_*_ref" functions
  quote: add "sq_dequote_to_argv" to put unwrapped args in an argv array
  ...
2009-04-12  Merge branch 'maint'  (Junio C Hamano, 1 file, -0/+2)
* maint:
  GIT 1.6.2.3
  State the effect of filter-branch on graft explicitly
  process_{tree,blob}: Remove useless xstrdup calls

Conflicts:
  GIT-VERSION-GEN
2009-04-12  Merge branch 'maint-1.6.1' into maint  (Junio C Hamano, 1 file, -0/+2)
* maint-1.6.1:
  State the effect of filter-branch on graft explicitly
  process_{tree,blob}: Remove useless xstrdup calls
2009-04-12  Merge branch 'maint-1.6.0' into maint-1.6.1  (Junio C Hamano, 1 file, -0/+2)
* maint-1.6.0:
  State the effect of filter-branch on graft explicitly
  process_{tree,blob}: Remove useless xstrdup calls
2009-04-12  process_{tree,blob}: Remove useless xstrdup calls  (Linus Torvalds, 1 file, -0/+2)
On Wed, 8 Apr 2009, Björn Steinbrink wrote:
>
> The name of the processed object was duplicated for passing it to
> add_object(), but that already calls path_name, which allocates a new
> string anyway. So the memory allocated by the xstrdup calls just went
> nowhere, leaking memory.

Ack, ack. There's another easy 5% or so for the built-in object walker: once we've created the hash from the name, the name isn't interesting any more, and so something trivial like this can help a bit. Does it matter? Probably not on its own. But a few more memory saving tricks and it might all make a difference.

    Linus

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-11  Update delta compression message to be less misleading  (Dan McGee, 1 file, -1/+1)
In the case of a small repository, pack-objects is smart enough to not start more threads than necessary. However, the output to the user always reports the value of the pack.threads configuration and not the real number of threads to be used.

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Acked-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-04-08  Merge branch 'jc/maint-1.6.0-keep-pack' into maint  (Junio C Hamano, 1 file, -12/+27)
* jc/maint-1.6.0-keep-pack:
  pack-objects: don't loosen objects available in alternate or kept packs
  t7700: demonstrate repack flaw which may loosen objects unnecessarily
  Remove --kept-pack-only option and associated infrastructure
  pack-objects: only repack or loosen objects residing in "local" packs
  git-repack.sh: don't use --kept-pack-only option to pack-objects
  t7700-repack: add two new tests demonstrating repacking flaws
  is_kept_pack(): final clean-up
  Simplify is_kept_pack()
  Consolidate ignore_packed logic more
  has_sha1_kept_pack(): take "struct rev_info"
  has_sha1_pack(): refactor "pretend these packs do not exist" interface
  git-repack: resist stray environment variable

Conflicts:
  t/t7700-repack.sh
2009-04-07  list-objects: add "void *data" parameter to show functions  (Christian Couder, 1 file, -3/+3)
The goal of this patch is to get rid of the "static struct rev_info revs" static variable in "builtin-rev-list.c".

To do that, we need to pass the revs to the "show_commit" function in "builtin-rev-list.c", and this in turn means that the "traverse_commit_list" function in "list-objects.c" must be passed function pointers to functions with two parameters instead of one. So we have to change all the callers and all the functions passed to "traverse_commit_list".

Anyway, this makes the code cleaner and more generic, so it should be a good thing in the long run.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
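A compilable sketch of the calling-convention change, using stand-in types rather than the real declarations in list-objects.h (which this log does not show): the extra void * lets the caller thread its own context through the traversal instead of relying on a file-scope static.

    #include <stdio.h>

    struct commit;                          /* opaque stand-in for the real type */

    /* before: callbacks had no way to reach per-traversal state without
     * a static variable in builtin-rev-list.c */
    typedef void (*show_commit_fn_old)(struct commit *commit);

    /* after: an opaque pointer is threaded through traverse_commit_list(),
     * so the caller can hand its own context (e.g. a struct rev_info *)
     * to every callback invocation */
    typedef void (*show_commit_fn_new)(struct commit *commit, void *data);

    static void count_commit(struct commit *commit, void *data)
    {
            (void)commit;
            (*(unsigned long *)data)++;     /* context arrives via the new parameter */
    }

    int main(void)
    {
            unsigned long seen = 0;
            show_commit_fn_new show = count_commit;

            show(NULL, &seen);              /* stand-in for one traversal callback */
            printf("%lu commit(s) shown\n", seen);
            return 0;
    }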
2009-04-01  Merge branch 'jc/maint-1.6.0-keep-pack'  (Junio C Hamano, 1 file, -4/+27)
* jc/maint-1.6.0-keep-pack:
  pack-objects: don't loosen objects available in alternate or kept packs
  t7700: demonstrate repack flaw which may loosen objects unnecessarily
  Remove --kept-pack-only option and associated infrastructure
  pack-objects: only repack or loosen objects residing in "local" packs
  git-repack.sh: don't use --kept-pack-only option to pack-objects
  t7700-repack: add two new tests demonstrating repacking flaws

Conflicts:
  t/t7700-repack.sh
2009-03-24  Merge branch 'maint'  (Junio C Hamano, 1 file, -1/+1)
* maint:
  Increase the size of the die/warning buffer to avoid truncation
  close_sha1_file(): make it easier to diagnose errors
  avoid possible overflow in delta size filtering computation
2009-03-24  Merge branch 'maint-1.6.1' into maint  (Junio C Hamano, 1 file, -1/+1)
* maint-1.6.1:
  close_sha1_file(): make it easier to diagnose errors
  avoid possible overflow in delta size filtering computation
2009-03-24  Merge branch 'maint-1.6.0' into maint-1.6.1  (Junio C Hamano, 1 file, -1/+1)
* maint-1.6.0:
  close_sha1_file(): make it easier to diagnose errors
  avoid possible overflow in delta size filtering computation
2009-03-24  avoid possible overflow in delta size filtering computation  (Nicolas Pitre, 1 file, -1/+1)
On a 32-bit system, the maximum possible size for an object is less than 4GB, while 64-bit systems may cope with larger objects. Due to this limitation, variables holding object sizes are using an unsigned long type (32 bits on 32-bit systems, or 64 bits on 64-bit systems).

When large objects are encountered, and/or people play with large delta depth values, it is possible for the maximum allowed delta size computation to overflow, especially on a 32-bit system. When this occurs, surviving result bits may represent a value much smaller than what it is supposed to be, or even zero. This prevents some objects from being deltified although they do get deltified when a smaller depth limit is used. Fix this by always performing a 64-bit multiplication.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
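A self-contained illustration of the overflow; the variable names and the exact formula in builtin-pack-objects.c are assumed and simplified here, the cast-to-64-bit idea is the point. On a platform where unsigned long is 32 bits wide, the uncast product wraps around:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* made-up numbers, roughly the shape of a size-times-remaining-depth cap */
            unsigned long max_size  = 3UL * 1024 * 1024 * 1024;    /* ~3GB, fits in 32 bits */
            unsigned long max_depth = 50, src_depth = 10;

            /* with a 32-bit unsigned long, this ~120GB product wraps around,
             * shrinking the resulting cap to almost nothing (here: zero) */
            unsigned long overflowing = max_size * (max_depth - src_depth) / max_depth;

            /* the fix: force the multiplication into 64 bits first */
            unsigned long safe = (uint64_t)max_size * (max_depth - src_depth) / max_depth;

            printf("possibly wrapped: %lu\nalways correct:   %lu\n", overflowing, safe);
            return 0;
    }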
2009-03-21  Merge branch 'jc/maint-1.6.0-pack-directory' into maint-1.6.1  (Junio C Hamano, 1 file, -3/+2)
* jc/maint-1.6.0-pack-directory:
  Fix odb_mkstemp() on AIX
  Make sure objects/pack exists before creating a new pack

Conflicts:
  wrapper.c
2009-03-21  pack-objects: don't loosen objects available in alternate or kept packs  (Brandon Casey, 1 file, -1/+25)
If pack-objects is called with the --unpack-unreachable option then it will unpack (i.e. loosen) all unreferenced objects from local not-kept packs, including those that also exist in packs residing in an alternate object database or a locally kept pack. The only user of this option is git-repack. In this case, repack will follow the call to pack-objects with a call to prune-packed, which will delete these newly loosened objects, making the act of loosening a waste of time.

The unnecessary loosening can be avoided by checking whether an object exists in a non-local pack or a locally kept pack before loosening it.

This fixes the 'local packed unreachable obs that exist in alternate ODB are not loosened' test in t7700.

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-20  Remove --kept-pack-only option and associated infrastructure  (Brandon Casey, 1 file, -1/+0)
This option to pack-objects/rev-list was created to improve the -A and -a options of repack. It was found to be lacking in that it did not provide the ability to differentiate between local and non-local kept packs, and found to be unnecessary since objects residing in local kept packs can be filtered out by the --honor-pack-keep option.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-20  pack-objects: only repack or loosen objects residing in "local" packs  (Brandon Casey, 1 file, -2/+2)
These two features were invented for use by repack when it is about to delete the local packs that have been made redundant. The packs accessible through alternates are not deleted by repack, so the objects contained in them are still accessible after the local packs are deleted. They do not need to be repacked into the new pack or loosened. In the case of loosening, they would immediately be deleted by the subsequent prune-packed that repack calls anyway.

This fixes the test 'packed unreachable obs in alternate ODB are not loosened' in t7700.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-03-11  Merge branch 'jc/maint-1.6.0-keep-pack'  (Junio C Hamano, 1 file, -11/+3)
* jc/maint-1.6.0-keep-pack:
  is_kept_pack(): final clean-up
  Simplify is_kept_pack()
  Consolidate ignore_packed logic more
  has_sha1_kept_pack(): take "struct rev_info"
  has_sha1_pack(): refactor "pretend these packs do not exist" interface
  git-repack: resist stray environment variable
2009-02-28  is_kept_pack(): final clean-up  (Junio C Hamano, 1 file, -2/+2)
Now that is_kept_pack() is just a member lookup into a structure, we can write it as such.

Also rewrite the sole caller of has_sha1_kept_pack() to switch on the criteria the callee uses (namely, revs->kept_pack_only) between calling has_sha1_kept_pack() and has_sha1_pack(), so that these two callees do not have to take a pointer to struct rev_info as an argument.

This removes the header file dependency issue temporarily introduced by the earlier commit, so we revert the changes associated with that as well.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-28  Simplify is_kept_pack()  (Junio C Hamano, 1 file, -3/+3)
This removes the --unpacked=<packfile> parameter from the revision parser, and rewrites its use in git-repack to pass a single --kept-pack-only option instead.

The new --kept-pack-only option means just that. When this option is given, is_kept_pack(), which used to say "not on the --unpacked=<packfile> list", now says "the packfile has a corresponding .keep file".

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-02-28  Consolidate ignore_packed logic more  (Junio C Hamano, 1 file, -10/+2)
This refactors three loops that check if a given packfile is on the ignore_packed list into a function is_kept_pack(). The function returns false for a pack on the list, and true for a pack not on the list, because this list is solely used by "git repack" to pass the list of packfiles that do not have corresponding .keep files, i.e. a packfile not on the list is "kept".

Signed-off-by: Junio C Hamano <gitster@pobox.com>
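A rough sketch of the consolidated check, under the simplifying assumptions that packs are identified by plain name strings and matched with strcmp (the real code works on struct packed_git entries and matches names less literally):

    #include <string.h>

    /* The list holds packs *without* a .keep file, so absence from the
     * list means "kept". */
    int is_kept_pack_sketch(const char *pack_name, const char **ignore_packed)
    {
            const char **p;

            for (p = ignore_packed; p && *p; p++)
                    if (!strcmp(pack_name, *p))
                            return 0;       /* listed => no .keep file => not kept */
            return 1;                       /* not listed => kept */
    }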
2009-02-25  Merge branch 'jc/maint-1.6.0-pack-directory'  (Junio C Hamano, 1 file, -3/+2)
* jc/maint-1.6.0-pack-directory:
  Make sure objects/pack exists before creating a new pack
2009-02-25  Make sure objects/pack exists before creating a new pack  (Junio C Hamano, 1 file, -3/+2)
In a repository created with git older than f49fb35 (git-init-db: create "pack" subdirectory under objects, 2005-06-27), the objects/pack/ directory is not created upon initialization. That was OK because subdirectories are created as needed inside directories init-db creates, and back then, packfiles were a recent invention.

After the said commit, new codepaths started relying on the presence of the objects/pack/ directory in the repository. This was exacerbated by 8b4eb6b (Do not perform cross-directory renames when creating packs, 2008-09-22), which moved the location where temporary pack files are created from the objects/ directory to the objects/pack/ directory, because moving the temporary file to the final location was done carefully with lazy leading directory creation.

Many packfile related operations in such an old repository can fail mysteriously because of this.

This commit introduces two helper functions to make things work better:

  - odb_mkstemp() is a specialized version of mkstemp() to refactor the code and teach it to create leading directories as needed;

  - odb_pack_keep() refactors the code to create a ".keep" file while creating leading directories as needed.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
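A minimal POSIX sketch of the odb_mkstemp() idea (the function name follows the commit message; the real helper also deals with modes, error reporting, and path construction): make sure the leading directory exists before handing the template to mkstemp().

    #include <errno.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    int odb_mkstemp_sketch(char *template_path, const char *leading_dir)
    {
            /* create e.g. "objects/pack" if an old repository lacks it */
            if (mkdir(leading_dir, 0777) < 0 && errno != EEXIST)
                    return -1;
            return mkstemp(template_path);  /* template ends in "XXXXXX" */
    }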
2009-02-05  Merge branch 'lt/maint-wrap-zlib' into maint  (Junio C Hamano, 1 file, -3/+3)
* lt/maint-wrap-zlib:
  Wrap inflate and other zlib routines for better error reporting

Conflicts:
  http-push.c
  http-walker.c
  sha1_file.c
2009-01-21  Merge branch 'lt/maint-wrap-zlib'  (Junio C Hamano, 1 file, -3/+3)
* lt/maint-wrap-zlib:
  Wrap inflate and other zlib routines for better error reporting

Conflicts:
  http-push.c
  http-walker.c
  sha1_file.c
2009-01-11  Wrap inflate and other zlib routines for better error reporting  (Linus Torvalds, 1 file, -3/+3)
R. Tyler Ballance reported a mysterious transient repository corruption; after much digging, it turns out that we were not catching and reporting memory allocation errors from some calls we make to zlib.

This one _just_ wraps things; it doesn't do the "retry on low memory error" part, at least not yet. That is an independent issue from the reporting. Some of the errors are expected and passed back to the caller, but for now we die when zlib reports that it failed to allocate memory.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
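A hedged sketch of the wrapping idea in plain zlib terms (git's actual wrappers have their own names and policies, which this log does not show): turn a silent Z_MEM_ERROR into a loud, fatal report while still passing expected conditions back to the caller.

    #include <stdio.h>
    #include <stdlib.h>
    #include <zlib.h>

    static void die_zlib(const char *what, int status, z_streamp s)
    {
            fprintf(stderr, "fatal: %s returned %d%s%s\n", what, status,
                    s->msg ? ": " : "", s->msg ? s->msg : "");
            exit(128);
    }

    int checked_inflate(z_streamp strm, int flush)
    {
            int status = inflate(strm, flush);

            switch (status) {
            case Z_OK:
            case Z_STREAM_END:
            case Z_BUF_ERROR:       /* "need more input/output": caller's business */
                    return status;
            case Z_MEM_ERROR:       /* the silent killer described above */
            default:
                    die_zlib("inflate", status, strm);
            }
            return status;          /* not reached */
    }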
2008-12-13  pack-objects: don't use too many threads with few objects  (Nicolas Pitre, 1 file, -0/+4)
If there are few objects to deltify, they might be split amongst threads so that there are simply no other objects left to delta against within the same thread. Let's use the same 2*window threshold as used for the final load balancing to decide whether extra threads are allowed to be created.

This fixes the benign t5300 test failure.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Tested-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
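An assumed simplification of the rule being described (the real thread-partitioning logic in builtin-pack-objects.c is more involved and is not shown in this log): keep reducing the thread count while a thread's share would fall below 2*window objects.

    int usable_threads(unsigned long nr_objects, int requested, int window)
    {
            /* a thread with fewer than 2*window objects has too little
             * to delta against, so it is not worth creating */
            while (requested > 1 &&
                   nr_objects / (unsigned long)requested < 2UL * (unsigned long)window)
                    requested--;
            return requested;
    }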
2008-12-12  autodetect number of CPUs by default when using threads  (Nicolas Pitre, 1 file, -1/+4)
... and display the actual number of threads used when locally repacking. A remote server still won't tell you how many threads it uses during a fetch though.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
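A POSIX-only sketch of the autodetection side of this (git's own helper also covers Windows and systems without _SC_NPROCESSORS_ONLN, none of which is shown here):

    #include <unistd.h>

    int autodetected_threads(void)
    {
            long n = sysconf(_SC_NPROCESSORS_ONLN);     /* CPUs currently online */
            return n > 0 ? (int)n : 1;                  /* fall back to one thread */
    }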
2008-12-11  Merge branch 'maint'  (Junio C Hamano, 1 file, -0/+1)
* maint:
  fsck: reduce stack footprint
  make sure packs to be replaced are closed beforehand
2008-12-10  make sure packs to be replaced are closed beforehand  (Nicolas Pitre, 1 file, -0/+1)
Especially on Windows, where an opened file cannot be replaced, make sure pack-objects always closes packs it is about to replace. Even on non-Windows systems, this avoids potentially bad results if objects were ever read from the new pack file using an offset from the old index.

This should fix t5303 on Windows.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Tested-by: Johannes Sixt <j6t@kdbg.org> (MinGW)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-12-02  Merge branch 'bc/maint-keep-pack' into maint  (Junio C Hamano, 1 file, -0/+10)
* bc/maint-keep-pack:
  repack: only unpack-unreachable if we are deleting redundant packs
  t7700: test that 'repack -a' packs alternate packed objects
  pack-objects: extend --local to mean ignore non-local loose objects too
  sha1_file.c: split has_loose_object() into local and non-local counterparts
  t7700: demonstrate mishandling of loose objects in an alternate ODB
  builtin-gc.c: use new pack_keep bitfield to detect .keep file existence
  repack: do not fall back to incremental repacking with [-a|-A]
  repack: don't repack local objects in packs with .keep file
  pack-objects: new option --honor-pack-keep
  packed_git: convert pack_local flag into a bitfield and add pack_keep
  t7700: demonstrate mishandling of objects in packs with a .keep file
2008-11-12  Merge branch 'np/pack-safer'  (Junio C Hamano, 1 file, -19/+58)
* np/pack-safer:
  t5303: fix printf format string for portability
  t5303: work around printf breakage in dash
  pack-objects: don't leak pack window reference when splitting packs
  extend test coverage for latest pack corruption resilience improvements
  pack-objects: allow "fixing" a corrupted pack without a full repack
  make find_pack_revindex() aware of the nasty world
  make check_object() resilient to pack corruptions
  make packed_object_info() resilient to pack corruptions
  make unpack_object_header() non fatal
  better validation on delta base object offsets
  close another possibility for propagating pack corruption
2008-11-12  Merge branch 'bc/maint-keep-pack'  (Junio C Hamano, 1 file, -0/+10)
* bc/maint-keep-pack:
  t7700: test that 'repack -a' packs alternate packed objects
  pack-objects: extend --local to mean ignore non-local loose objects too
  sha1_file.c: split has_loose_object() into local and non-local counterparts
  t7700: demonstrate mishandling of loose objects in an alternate ODB
  builtin-gc.c: use new pack_keep bitfield to detect .keep file existence
  repack: do not fall back to incremental repacking with [-a|-A]
  repack: don't repack local objects in packs with .keep file
  pack-objects: new option --honor-pack-keep
  packed_git: convert pack_local flag into a bitfield and add pack_keep
  t7700: demonstrate mishandling of objects in packs with a .keep file
2008-11-12  Merge branch 'maint'  (Junio C Hamano, 1 file, -2/+10)
* maint:
  Start 1.6.0.5 cycle
  Fix pack.packSizeLimit and --max-pack-size handling
  checkout: Fix "initial checkout" detection
  Remove the period after the git-check-attr summary

Conflicts:
  RelNotes
2008-11-12  Fix pack.packSizeLimit and --max-pack-size handling  (Nicolas Pitre, 1 file, -2/+10)
If the limit was sufficiently low, having a single object written could bust the limit (by design), but that caused the remaining allowed size to go negative for subsequent objects, which for an unsigned variable is a rather huge limit.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
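A self-contained illustration of the wrap-around (variable names are made up; only the unsigned-arithmetic pitfall and the clamping idea are the point):

    #include <stdio.h>

    int main(void)
    {
            unsigned long limit   = 1024;   /* pretend the pack size limit is tiny */
            unsigned long written = 4096;   /* a single object may exceed it by design */

            /* the bug: unsigned arithmetic cannot "go negative", so the
             * remaining budget wraps around to an enormous value */
            unsigned long remaining_buggy = limit - written;

            /* the idea of the fix: notice the overshoot and stop treating
             * the wrapped value as available space */
            unsigned long remaining_fixed = written >= limit ? 0 : limit - written;

            printf("wrapped remaining: %lu\nclamped remaining: %lu\n",
                   remaining_buggy, remaining_fixed);
            return 0;
    }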
2008-11-12  pack-objects: extend --local to mean ignore non-local loose objects too  (Brandon Casey, 1 file, -0/+3)
With this patch, --local means pack only local objects that are not already packed.

Additionally, this fixes t7700 testing whether loose objects in an alternate object database are repacked.

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-12  pack-objects: new option --honor-pack-keep  (Brandon Casey, 1 file, -0/+7)
This adds a new option to pack-objects which causes it to ignore an object that appears in a local pack that has a .keep file, even if it was specified for packing.

This option will be used by the porcelain repack.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02  pack-objects: don't leak pack window reference when splitting packs  (Nicolas Pitre, 1 file, -3/+9)
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02  pack-objects: allow "fixing" a corrupted pack without a full repack  (Nicolas Pitre, 1 file, -9/+19)
When the pack data to be reused is found to be bad, let's fall back to full object access through the generic path, which has its own strategies to find alternate object sources in that case.

This allows for "fixing" a corrupted pack simply by copying either another pack containing the object(s) found to be bad, or the loose object itself, into the object store and launching a repack without the need for -f.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02  make find_pack_revindex() aware of the nasty world  (Nicolas Pitre, 1 file, -0/+2)
It currently calls die() whenever a given offset is not found, thinking that such a thing should never happen. But this offset may come from a corrupted pack, which _could_ happen, and therefore not be found. Callers should deal with this possibility gracefully instead.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
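A sketch of the pattern in this change, with a simplified entry type standing in for git's revindex (the exact structure is not shown in this log): report "not found" with NULL and leave the decision to the caller instead of die()ing.

    #include <stddef.h>

    struct revindex_entry_sketch {          /* simplified stand-in */
            unsigned long offset;
            unsigned nr;
    };

    struct revindex_entry_sketch *
    find_pack_revindex_sketch(struct revindex_entry_sketch *index,
                              unsigned num, unsigned long ofs)
    {
            unsigned lo = 0, hi = num;

            while (lo < hi) {
                    unsigned mi = lo + (hi - lo) / 2;

                    if (index[mi].offset == ofs)
                            return &index[mi];
                    if (index[mi].offset < ofs)
                            lo = mi + 1;
                    else
                            hi = mi;
            }
            return NULL;    /* previously this case was a die() call */
    }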
2008-11-02  make check_object() resilient to pack corruptions  (Nicolas Pitre, 1 file, -6/+17)
The check_object() function tries to get away with the least amount of pack access possible when it already has partial information on a given object, rather than calling the more costly packed_object_info(). When things don't look right, it should just give up and fall back to packed_object_info() directly instead of die()'ing.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02  make unpack_object_header() non fatal  (Nicolas Pitre, 1 file, -1/+1)
It is possible to have pack corruption in the object header. Currently unpack_object_header() simply calls die() on such corruption instead of letting the caller deal with it gracefully.

So let's have unpack_object_header() return an error instead, and find a better name for unpack_object_header_gently() in that context. All callers of unpack_object_header() are ready for it.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-11-02  better validation on delta base object offsets  (Nicolas Pitre, 1 file, -2/+2)
In one case, it was possible to have a bad offset equal to 0 effectively pointing a delta onto itself and crashing git after too many recursions. In the other cases, a negative offset could result due to off_t being signed. Catch those.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
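A minimal sketch of the two checks described above, using assumed variable names (delta_offset is where the OBJ_OFS_DELTA object itself starts in the pack, relative is the encoded distance back to its base); the real code also validates against the pack boundaries:

    #include <sys/types.h>          /* off_t */

    int base_offset_ok(off_t delta_offset, off_t relative)
    {
            off_t base_offset = delta_offset - relative;

            if (relative <= 0)      /* 0 would point the delta at itself */
                    return 0;
            if (base_offset <= 0)   /* off_t is signed: catch a negative result */
                    return 0;
            return 1;
    }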
2008-11-02  close another possibility for propagating pack corruption  (Nicolas Pitre, 1 file, -0/+10)
Abstract
--------

With index v2 we have a per object CRC to allow quick and safe reuse of pack data when repacking. This, however, doesn't currently prevent a stealth corruption from being propagated into a new pack when _not_ reusing pack data, as demonstrated by the modification to t5302 included here.

The Context
-----------

The Git database is all checksummed with SHA1 hashes. Any kind of corruption can be confirmed by verifying this per object hash against the corresponding data. However this can be costly to perform systematically and therefore this check is often not performed at run time when accessing the object database.

First, the loose object format is entirely compressed with zlib, which already provides a CRC verification of its own when inflating data. Any disk corruption would be caught already in this case.

Then, packed objects are also compressed with zlib, but only for their actual payload. The object headers and delta base references are not deflated for obvious performance reasons, however this leaves them vulnerable to potentially undetected disk corruptions. Object types are often validated against the expected type when they're requested, and the deflated size must always match the size recorded in the object header, so those cases are pretty much covered as well.

Where corruptions could go unnoticed is in the delta base reference. Of course, in the OBJ_REF_DELTA case, the odds for a SHA1 reference to get corrupted so it actually matches the SHA1 of another object with the same size (the delta header stores the expected size of the base object to apply against) are virtually zero. In the OBJ_OFS_DELTA case, the reference is a pack offset which would have to match the start boundary of a different base object but still with the same size, and although this is relatively much more "probable" than in the OBJ_REF_DELTA case, the probability is also about zero in absolute terms. Still, the possibility exists as demonstrated in t5302 and is certainly greater than a SHA1 collision, especially in the OBJ_OFS_DELTA case which is now the default when repacking.

Again, repacking by reusing existing pack data is OK since the per object CRC provided by index v2 guards against any such corruptions. What t5302 failed to test is a full repack in such a case.

The Solution
------------

As unlikely as this kind of stealth corruption can be in practice, it certainly isn't acceptable to propagate it into a freshly created pack. But, because this is so unlikely, we don't want to pay the run time cost associated with extra validation checks all the time either.

Furthermore, consequences of such corruption in anything but repacking should be rather visible, and even if it could be quite unpleasant, it still has far less severe consequences than actively creating bad packs.

So the best compromise is to check the packed object CRC when unpacking objects, and only during the compression/writing phase of a repack, and only when not streaming the result. The cost of this is minimal (less than 1% CPU time), and visible only with a full repack.

Someone with a stats background could provide an objective evaluation of this, but I suspect that it's bad RAM that has more potential for data corruptions at this point, even in those cases where this extra check is not performed. Still, it is best to prevent a known hole for corruption when recreating object data into a new pack.

What about the streamed pack case? Well, any client receiving a pack must always consider that pack as untrusted and perform full validation anyway, hence no such stealth corruption could be propagated to remote repositories already. It is therefore pointless to do local validation in that case.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
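A rough sketch of the guard, assuming the index v2 CRC is zlib's crc32 computed over the object's in-pack bytes (how and where the real check is hooked into the writing phase is not shown in this log): before re-encoding an object read from an existing pack during a full repack, recompute the CRC and compare it with the value stored in the .idx file.

    #include <stdint.h>
    #include <zlib.h>

    int packed_object_crc_ok(const unsigned char *in_pack_bytes, unsigned int len,
                             uint32_t recorded_crc)
    {
            uint32_t crc = (uint32_t)crc32(0L, Z_NULL, 0);

            crc = (uint32_t)crc32(crc, in_pack_bytes, len);
            return crc == recorded_crc;     /* mismatch => refuse to propagate */
    }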
2008-11-02  Merge branch 'maint'  (Junio C Hamano, 1 file, -1/+2)
* maint:
  Start 1.6.0.4 cycle
  add instructions on how to send patches to the mailing list with Gmail
  Documentation/gitattributes: Add subsection header for each attribute
  git send-email: avoid leaking directory file descriptors.
  send-pack: do not send out single-level refs such as refs/stash
  fix overlapping memcpy in normalize_absolute_path
  pack-objects: avoid reading uninitialized data
  correct cache_entry allocation

Conflicts:
  RelNotes
2008-11-01  pack-objects: avoid reading uninitialized data  (Jeff King, 1 file, -1/+2)
In the main loop of find_deltas, we do:

    struct object_entry *entry = *list++;
    ...
    if (!*list_size)
            ...
            break

Because we look at and increment *list _before_ the check of list_size, in the very last iteration of the loop we will look at uninitialized data, and increment the pointer beyond one past the end of the allocated space. Since we don't actually do anything with the data until after the check, this is not a problem in practice.

But since it technically violates the C standard, and because it provokes a spurious valgrind warning, let's just move the initialization of entry to a safe place.

This fixes valgrind errors in t5300, t5301, t5302, t5303, and t9400.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
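A simplified, compilable sketch of the reordering (types, locking, and the actual delta search are stripped down here): read the entry only after the "anything left?" check, so the final iteration never touches the slot one past the live part of the list.

    struct object_entry_sketch { int filled_in; };      /* simplified stand-in */

    void consume_list_sketch(struct object_entry_sketch **list, unsigned *list_size)
    {
            for (;;) {
                    struct object_entry_sketch *entry;

                    if (!*list_size)
                            break;          /* nothing left for this thread */

                    entry = *list++;        /* now known to be a live entry */
                    (*list_size)--;
                    (void)entry;            /* the real loop deltifies entry here */
            }
    }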
2008-10-08  Replace xmalloc/memset(0) pairs with xcalloc  (Brandon Casey, 1 file, -3/+1)
Many call sites immediately initialize allocated memory with zero after calling xmalloc. A single call to xcalloc can replace this two-call sequence.

Signed-off-by: Brandon Casey <casey@nrlssc.navy.mil>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
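The same transformation in plain libc terms (git's xmalloc/xcalloc wrappers additionally die() on allocation failure, which is omitted here; the struct is a made-up placeholder):

    #include <stdlib.h>
    #include <string.h>

    struct entry_stub { void *delta; unsigned long size; };

    /* before: allocate, then clear in a separate step */
    void *alloc_zeroed_before(size_t n)
    {
            void *p = malloc(n * sizeof(struct entry_stub));
            if (p)
                    memset(p, 0, n * sizeof(struct entry_stub));
            return p;
    }

    /* after: one call that returns already-zeroed memory */
    void *alloc_zeroed_after(size_t n)
    {
            return calloc(n, sizeof(struct entry_stub));
    }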