path: root/builtin/pack-objects.c
Age | Commit message | Author | Files | Lines

2014-03-31 | comments: fix misuses of "nor" | Justin Lebar | 1 | -1/+1

Signed-off-by: Justin Lebar <jlebar@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-12-26 | do not pretend sha1write returns errors | Jeff King | 1 | -2/+0

The sha1write function returns an int, but it will always be "0". The failure-prone parts of the function happen in the "flush" callback, which cannot pass an error back to us. So we just end up calling die() during the flush.

Let's just drop the return value altogether, as it only confuses callers into thinking that it might be useful.

Only one call site actually checked the return value. We can drop that check, since it just led to a die() anyway.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

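A minimal sketch of the resulting interface (the prototypes are reconstructed from the description above, so treat them as illustrative rather than the literal patch):

    /* before: a return value existed, but it was always 0 */
    int sha1write(struct sha1file *f, const void *buf, unsigned int count);

    /* after: failures die() inside the flush callback, so there is nothing to return */
    void sha1write(struct sha1file *f, const void *buf, unsigned int count);

    /* callers simply stop checking:
     *     if (sha1write(f, buf, len)) die(...);   becomes
     *     sha1write(f, buf, len);
     */
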
2013-12-05 | replace {pre,suf}fixcmp() with {starts,ends}_with() | Christian Couder | 1 | -1/+1

Leaving only the function definitions and declarations so that any new topic in flight can still make use of the old functions, replace existing uses of the prefixcmp() and suffixcmp() with new API functions.

The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c | grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

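The mechanical substitution works because the return conventions are inverted: prefixcmp() follows the strcmp() convention and returns 0 on a match, while starts_with() returns non-zero on a match. A self-contained illustration of the two conventions (these bodies are illustrative, not git's actual implementations):

    #include <string.h>

    /* old convention: 0 means "str does start with prefix" */
    static int prefixcmp(const char *str, const char *prefix)
    {
            return strncmp(str, prefix, strlen(prefix));
    }

    /* new convention: non-zero means "str does start with prefix" */
    static int starts_with(const char *str, const char *prefix)
    {
            return !strncmp(str, prefix, strlen(prefix));
    }

    /*
     * Hence the mapping applied by the perl script above:
     *     !prefixcmp(a, b)  ->   starts_with(a, b)
     *      prefixcmp(a, b)  ->  !starts_with(a, b)
     */
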
2013-10-23 | Merge branch 'jc/pack-objects' | Junio C Hamano | 1 | -11/+12

* jc/pack-objects:
  pack-objects: shrink struct object_entry

2013-09-20 | Merge branch 'nd/fetch-into-shallow' | Junio C Hamano | 1 | -1/+1

When there is insufficient overlap between old and new history during a fetch into a shallow repository, we unnecessarily sent objects the sending side knows the receiving end has.

* nd/fetch-into-shallow:
  Add testcase for needless objects during a shallow fetch
  list-objects: mark more commits as edges in mark_edges_uninteresting
  list-objects: reduce one argument in mark_edges_uninteresting
  upload-pack: delegate rev walking in shallow fetch to pack-objects
  shallow: add setup_temporary_shallow()
  shallow: only add shallow graft points to new shallow file
  move setup_alternate_shallow and write_shallow_commits to shallow.c

2013-08-28 | list-objects: reduce one argument in mark_edges_uninteresting | Nguyễn Thái Ngọc Duy | 1 | -1/+1

mark_edges_uninteresting() is always called with this form:

    mark_edges_uninteresting(revs->commits, revs, ...);

Remove the first argument and let mark_edges_uninteresting figure that out by itself. It helps answer the question "are this commit list and revs related in any way?" when looking at the mark_edges_uninteresting implementation.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

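In prototype form, the change described above looks roughly like this (signatures reconstructed from the description; show_edge_fn is the callback type list-objects already used):

    /* before: the caller passed the commit list and revs separately */
    void mark_edges_uninteresting(struct commit_list *list,
                                  struct rev_info *revs,
                                  show_edge_fn show_edge);

    /* after: the function walks revs->commits itself */
    void mark_edges_uninteresting(struct rev_info *revs,
                                  show_edge_fn show_edge);
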
2013-08-02 | Don't close pack fd when free'ing pack windows | Brandon Casey | 1 | -1/+1

Now that close_one_pack() has been introduced to handle file descriptor pressure, it is not strictly necessary to close the pack file descriptor in unuse_one_window() when we're under memory pressure.

Jeff King provided a justification for leaving the pack file open:

    If you close packfile descriptors, you can run into racy situations
    where somebody else is repacking and deleting packs, and they go away
    while you are trying to access them.

    If you keep a descriptor open, you're fine; they last to the end of
    the process. If you don't, then they disappear from under you.

    For normal object access, this isn't that big a deal; we just rescan
    the packs and retry. But if you are packing yourself (e.g., because
    you are a pack-objects started by upload-pack for a clone or fetch),
    it's much harder to recover (and we print some warnings).

Let's do so (or uh, not do so).

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-02-04 | pack-objects: shrink struct object_entry | Junio C Hamano | 1 | -11/+12

Turn some boolean fields into bitfields and use uint32_t for the name hash. This shrinks the size of the structure from 128 bytes to 120 bytes.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

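The kind of packing being described, sketched with a few representative flags (a compilable illustration, not the full struct object_entry from builtin/pack-objects.c):

    #include <stdint.h>

    struct object_entry_sketch {
            uint32_t hash;              /* name hash shrunk to 32 bits */
            unsigned type:3;            /* an object type fits in 3 bits */
            unsigned in_pack_type:3;
            unsigned preferred_base:1;  /* former full-width booleans ... */
            unsigned no_try_delta:1;
            unsigned tagged:1;
            unsigned filled:1;          /* ... become one-bit flags */
    };
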
2012-10-25 | Merge branch 'jk/peel-ref' | Jeff King | 1 | -1/+0

Speeds up "git upload-pack" (what is invoked by "git fetch" on the other side of the connection) by reducing the cost to advertise the branches and tags that are available in the repository.

* jk/peel-ref:
  upload-pack: use peel_ref for ref advertisements
  peel_ref: check object type before loading
  peel_ref: do not return a null sha1
  peel_ref: use faster deref_tag_noverify

2012-10-04 | peel_ref: do not return a null sha1 | Jeff King | 1 | -1/+0

The idea of the peel_ref function is to dereference tag objects recursively until we hit a non-tag, and return the sha1. Conceptually, it should return 0 if it is successful (and fill in the sha1), or -1 if there was nothing to peel.

However, the current behavior is much more confusing. For a regular loose ref, the behavior is as described above. But there is an optimization to reuse the peeled-ref value for a ref that came from a packed-refs file. If we have such a ref, we return its peeled value, even if that peeled value is null (indicating that we know the ref definitely does _not_ peel).

It might seem like such information is useful to the caller, who would then know not to bother loading and trying to peel the object. Except that they should not bother loading and trying to peel the object _anyway_, because that fallback is already handled by peel_ref. In other words, the whole point of calling this function is that it handles those details internally, and you either get a sha1, or you know that it is not peel-able.

This patch catches the null sha1 case internally and converts it into a -1 return value (i.e., there is nothing to peel). This simplifies callers, which do not need to bother checking themselves.

Two callers are worth noting:

  - in pack-objects, a comment indicates that there is a difference between non-peelable tags and unannotated tags. But that is not the case (before or after this patch). Whether you get a null sha1 has to do with internal details of how peel_ref operated.

  - in show-ref, if peel_ref returns a failure, the caller tries to decide whether to try peeling manually based on whether the REF_ISPACKED flag is set. But this doesn't make any sense. If the flag is set, that does not necessarily mean the ref came from a packed-refs file with the "peeled" extension. But it doesn't matter, because even if it didn't, there's no point in trying to peel it ourselves, as peel_ref would already have done so. In other words, the fallback peeling is guaranteed to fail.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

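For callers, the simplified contract is just "0 means the peeled sha1 was filled in, -1 means there is nothing to peel"; roughly (the prototype is the one peel_ref had at the time, and the caller below is a made-up illustration, not code from the patch):

    /* int peel_ref(const char *refname, unsigned char *sha1); */

    unsigned char peeled[20];

    if (!peel_ref(ref->name, peeled)) {
            /* ref ultimately points at the object in peeled[] */
    } else {
            /* nothing to peel; no manual fallback needed */
    }
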
2012-08-20 | i18n: pack-objects: mark parseopt strings for translation | Nguyễn Thái Ngọc Duy | 1 | -32/+32

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-07-22 | Merge branch 'jc/sha1-name-more' | Junio C Hamano | 1 | -1/+1

Teaches the object name parser things like a "git describe" output is always a commit object, "A" in "git log A" must be a committish, and "A" and "B" in "git log A...B" both must be committish, etc., to prolong the lifetime of abbreviated object names.

* jc/sha1-name-more: (27 commits)
  t1512: match the "other" object names
  t1512: ignore whitespaces in wc -l output
  rev-parse --disambiguate=<prefix>
  rev-parse: A and B in "rev-parse A..B" refer to committish
  reset: the command takes committish
  commit-tree: the command wants a tree and commits
  apply: --build-fake-ancestor expects blobs
  sha1_name.c: add support for disambiguating other types
  revision.c: the "log" family, except for "show", takes committish
  revision.c: allow handle_revision_arg() to take other flags
  sha1_name.c: introduce get_sha1_committish()
  sha1_name.c: teach lookup context to get_sha1_with_context()
  sha1_name.c: many short names can only be committish
  sha1_name.c: get_sha1_1() takes lookup flags
  sha1_name.c: get_describe_name() by definition groks only commits
  sha1_name.c: teach get_short_sha1() a commit-only option
  sha1_name.c: allow get_short_sha1() to take other flags
  get_sha1(): fix error status regression
  sha1_name.c: restructure disambiguation of short names
  sha1_name.c: correct misnamed "canonical" and "res"
  ...

2012-07-09 | revision.c: allow handle_revision_arg() to take other flags | Junio C Hamano | 1 | -1/+1

The existing "cant_be_filename" that tells the function that the caller knows the arg is not a path (hence it does not have to be checked for absence of the file whose name matches it) is made into a bit in the flag word.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-05-29 | pack-objects: use streaming interface for reading large loose blobs | Nguyễn Thái Ngọc Duy | 1 | -6/+67

git usually streams large blobs directly to packs. But there are cases where git can create large loose blobs (unpack-objects or hash-object over a pipe). Or they can come from other git implementations. core.bigfilethreshold can also be lowered and introduce a new wave of large loose blobs.

Use the streaming interface to read/compress/write these blobs in one go. Fall back to the normal way if somehow the streaming interface cannot be used.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

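In outline, the streaming path looks something like the fragment below (open_istream/read_istream/close_istream are git's streaming API; the deflate-into-pack step and most error handling are elided, so this is a sketch rather than the actual function):

    struct git_istream *st = open_istream(sha1, &type, &size, NULL);
    if (!st)
            return 0;       /* caller falls back to the in-core path */

    while (size > 0) {
            char ibuf[16384];
            ssize_t readlen = read_istream(st, ibuf, sizeof(ibuf));
            if (readlen < 0)
                    die("unable to read %s", sha1_to_hex(sha1));
            /* deflate this chunk and sha1write() it into the new pack */
            size -= readlen;
    }
    close_istream(st);
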
2012-05-18 | pack-objects: refactor write_object() into helper functions | Nguyễn Thái Ngọc Duy | 1 | -150/+172

The function first decides if we want to copy data taken from an existing pack verbatim or we want to encode the data ourselves for the packfile we are creating, and then carries out the decision.

Separate the latter phase into two helper functions, one for the case the data is reused, the other for the case the data is produced anew. A little twist is that it can later turn out that we cannot reuse the data after we initially decide to do so; in such a case, the "reuse" helper makes a call to the "generate" helper.

It is easier to follow than the current fallback code that uses "goto" inside a single large function.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

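Schematically, the new shape is roughly the following (the helper names mirror the "reuse" and "generate" roles described above; signatures and the decision logic are heavily simplified, so read this as an outline rather than the patch):

    /* "generate": encode the object data ourselves */
    static unsigned long write_no_reuse_object(struct sha1file *f,
                                               struct object_entry *entry)
    {
            /* read, optionally deltify, deflate, sha1write ... */
            return 0;       /* bytes written */
    }

    /* "reuse": copy from the existing pack, or punt to "generate" */
    static unsigned long write_reuse_object(struct sha1file *f,
                                            struct object_entry *entry)
    {
            int reuse_turned_out_bad = 0;   /* placeholder for the real checks */
            if (reuse_turned_out_bad)
                    return write_no_reuse_object(f, entry);
            /* copy the existing pack data verbatim ... */
            return 0;
    }

    static unsigned long write_object(struct sha1file *f,
                                      struct object_entry *entry)
    {
            int to_reuse = 0;               /* decision logic elided */
            return to_reuse ? write_reuse_object(f, entry)
                            : write_no_reuse_object(f, entry);
    }
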
2012-05-18 | pack-objects, streaming: turn "xx >= big_file_threshold" to ".. > .." | Nguyễn Thái Ngọc Duy | 1 | -1/+1

This is because all other places do "xx > big_file_threshold".

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-04-11 | gc: do not explode objects which will be immediately pruned | Jeff King | 1 | -2/+23

When we pack everything into one big pack with "git repack -Ad", any unreferenced objects in to-be-deleted packs are exploded into loose objects, with the intent that they will be examined and possibly cleaned up by the next run of "git prune".

Since the exploded objects will receive the mtime of the pack from which they come, if the source pack is old, those loose objects will end up pruned immediately. In that case, it is much more efficient to skip the exploding step entirely for these objects.

This patch teaches pack-objects to receive the expiration information and avoid writing these objects out. It also teaches "git gc" to pass the value of gc.pruneexpire to repack (which in turn learns to pass it along to pack-objects) so that this optimization happens automatically during "git gc" and "git gc --auto".

Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-02-26 | pack-objects: Fix compilation with NO_PTHREADS | Michał Kiedrowicz | 1 | -1/+1

It looks like commit 99fb6e04 (pack-objects: convert to use parse_options(), 2012-02-01) moved the #ifdef NO_PTHREADS block around but did not notice that the 'arg' variable is no longer available.

Signed-off-by: Michał Kiedrowicz <michal.kiedrowicz@gmail.com>
Acked-by: Nguyen Thai Ngoc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-02-01 | pack-objects: convert to use parse_options() | Nguyễn Thái Ngọc Duy | 1 | -176/+139

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-02-01 | pack-objects: remove bogus comment | Nguyễn Thái Ngọc Duy | 1 | -14/+1

The comment was introduced in b5d97e6 (pack-objects: run rev-list equivalent internally. - 2006-09-04), stating that

    git pack-objects [options] base-name <refs...>

is acceptable and refs should be passed into rev-list. But that's not true. All arguments after base-name are ignored.

Remove the comment and reject this syntax (i.e. no more arguments after base name).

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-02-01 | pack-objects: do not accept "--index-version=version," | Nguyễn Thái Ngọc Duy | 1 | -1/+1

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-01-12 | Merge branch 'maint' | Junio C Hamano | 1 | -2/+7

* maint:
  Update draft release notes to 1.7.8.4
  Update draft release notes to 1.7.7.6
  Update draft release notes to 1.7.6.6
  thin-pack: try harder to use preferred base objects as base

2012-01-12 | Merge branch 'maint-1.7.7' into maint | Junio C Hamano | 1 | -2/+7

* maint-1.7.7:
  Update draft release notes to 1.7.7.6
  Update draft release notes to 1.7.6.6
  thin-pack: try harder to use preferred base objects as base

2012-01-12 | Merge branch 'maint-1.7.6' into maint-1.7.7 | Junio C Hamano | 1 | -2/+7

* maint-1.7.6:
  Update draft release notes to 1.7.6.6
  thin-pack: try harder to use preferred base objects as base

2012-01-12 | thin-pack: try harder to use preferred base objects as base | Jeff King | 1 | -2/+7

When creating a pack using objects that reside in existing packs, we try to avoid recomputing futile delta between an object (trg) and a candidate for its base object (src) if they are stored in the same packfile, and trg is not recorded as a delta already. This heuristic makes sense because it is likely that we tried to express trg as a delta based on src but it did not produce a good delta when we created the existing pack.

As the pack heuristics prefer producing delta to remove data, and Linus's law dictates that the size of a file grows over time, we tend to record the newest version of the file as inflated, and older ones as delta against it. When creating a thin-pack to transfer recent history, it is likely that we will try to send an object that is recorded in full, as it is newer. But the heuristic to avoid recomputing futile delta effectively forbids us from attempting to express such an object as a delta based on another object. Sending an object in full is often more expensive than sending a suboptimal delta based on other objects, and it is even more so if we could use an object we know the receiving end already has (i.e. a preferred base object) as the delta base.

Tweak the recomputation avoidance logic, so that we do not punt on computing delta against a preferred base object.

The effect of this change can be seen on two simulated upload-pack workloads. The first is based on 44 reflog entries from my git.git origin/master reflog, and represents the packs that kernel.org sent me git updates for the past month or two. The second workload represents much larger fetches, going from git's v1.0.0 tag to v1.1.0, then v1.1.0 to v1.2.0, and so on.

The table below shows the average generated pack size and the average CPU time consumed for each dataset, both before and after the patch:

           dataset |  reflog |    tags
      ---------------------------------
           before  |  53358  | 2750977
    size   after   |  32398  | 2668479
           change  |   -39%  |     -3%
      ---------------------------------
           before  |   0.18  |    1.12
    CPU    after   |   0.18  |    1.15
           change  |    +0%  |     +3%

This patch makes a much bigger difference for packs with a shorter slice of history (since its effect is seen at the boundaries of the pack), though it has some benefit even for larger packs.

Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

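The tweak is essentially one extra test in the "skip recomputation" condition inside try_delta(); roughly (reconstructed from the description above, not the literal diff):

    /*
     * Do not bother recomputing a delta when both objects sit in the
     * same existing pack and trg is stored there in full -- unless src
     * is a preferred base object, in which case a delta against it is
     * still worth attempting.
     */
    if (trg_entry->in_pack &&
        trg_entry->in_pack == src_entry->in_pack &&
        !src_entry->preferred_base &&               /* the new exception */
        trg_entry->in_pack_type != OBJ_REF_DELTA &&
        trg_entry->in_pack_type != OBJ_OFS_DELTA)
            return 0;
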
2011-12-16 | Merge branch 'jc/stream-to-pack' | Junio C Hamano | 1 | -44/+18

* jc/stream-to-pack:
  bulk-checkin: replace fast-import based implementation
  csum-file: introduce sha1file_checkpoint
  finish_tmp_packfile(): a helper function
  create_tmp_packfile(): a helper function
  write_pack_header(): a helper function

Conflicts:
  pack.h

2011-12-13 | Merge branch 'jc/maint-pack-object-cycle' into maint | Junio C Hamano | 1 | -12/+43

* jc/maint-pack-object-cycle:
  pack-object: tolerate broken packs that have duplicated objects

Conflicts:
  builtin/pack-objects.c

2011-12-13 | Merge branch 'nd/misc-cleanups' into maint | Junio C Hamano | 1 | -1/+1

* nd/misc-cleanups:
  unpack_object_header_buffer(): clear the size field upon error
  tree_entry_interesting: make use of local pointer "item"
  tree_entry_interesting(): give meaningful names to return values
  read_directory_recursive: reduce one indentation level
  get_tree_entry(): do not call find_tree_entry() on an empty tree
  tree-walk.c: do not leak internal structure in tree_entry_len()

2011-12-05 | Merge branch 'jc/maint-pack-object-cycle' | Junio C Hamano | 1 | -12/+43

* jc/maint-pack-object-cycle:
  pack-object: tolerate broken packs that have duplicated objects

Conflicts:
  builtin/pack-objects.c

2011-12-05 | Merge branch 'nd/misc-cleanups' | Junio C Hamano | 1 | -1/+1

* nd/misc-cleanups:
  unpack_object_header_buffer(): clear the size field upon error
  tree_entry_interesting: make use of local pointer "item"
  tree_entry_interesting(): give meaningful names to return values
  read_directory_recursive: reduce one indentation level
  get_tree_entry(): do not call find_tree_entry() on an empty tree
  tree-walk.c: do not leak internal structure in tree_entry_len()

2011-12-01 | bulk-checkin: replace fast-import based implementation | Junio C Hamano | 1 | -5/+1

This extends the earlier approach to stream a large file directly from the filesystem to its own packfile, and allows "git add" to send large files directly into a single pack. Older code used to spawn fast-import, but the new bulk-checkin API replaces it.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-11-16 | pack-object: tolerate broken packs that have duplicated objects | Junio C Hamano | 1 | -12/+43

When --reuse-delta is in effect (which is the default), and an existing pack in the repository has the same object registered twice (e.g. one copy in a non-delta format and the other copy in a delta against some other object), an attempt to repack the repository can result in a cyclic delta dependency, causing the write_one() function to recurse into itself indefinitely.

Detect such a case and break the loopy dependency by writing out an object that is involved in such a loop in the non-delta format.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

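One way to picture the fix: write_one() marks the entry it is currently emitting, and if the chain of delta bases ever leads back to a marked entry, that chain is a loop and the offending object is written in full instead. A simplified sketch of that idea (the state names and fields are illustrative, not the actual code in builtin/pack-objects.c):

    enum write_state { WRITE_NOT_STARTED, WRITE_IN_PROGRESS, WRITE_DONE };

    static void write_one(struct object_entry *e)
    {
            if (e->state == WRITE_IN_PROGRESS) {
                    /*
                     * Following delta bases brought us back to an entry we
                     * are already writing: a cycle.  Break it by dropping
                     * the delta so this object goes out in full.
                     */
                    e->delta = NULL;
                    return;
            }
            if (e->state == WRITE_DONE)
                    return;
            e->state = WRITE_IN_PROGRESS;
            if (e->delta)
                    write_one(e->delta);    /* emit the base first */
            /* ... emit e into the pack, as a delta only if e->delta survived ... */
            e->state = WRITE_DONE;
    }
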
2011-11-01 | Merge branch 'dm/pack-objects-update' | Junio C Hamano | 1 | -19/+55

* dm/pack-objects-update:
  pack-objects: don't traverse objects unnecessarily
  pack-objects: rewrite add_descendants_to_write_order() iteratively
  pack-objects: use unsigned int for counter and offset values
  pack-objects: mark add_to_write_order() as inline

2011-10-28 | finish_tmp_packfile(): a helper function | Junio C Hamano | 1 | -23/+10

Factor a small piece of logic out of the private write_pack_file() function in builtin/pack-objects.c.

This changes the order of finishing multi-pack generation slightly. The code used to

  - adjust shared perm of temporary packfile
  - rename temporary packfile to the final name
  - update mtime of the packfile under the final name
  - adjust shared perm of temporary idxfile
  - rename temporary idxfile to the final name

but because the helper does not want to do the mtime thing, the updated code does that step first and then all the rest.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-28 | create_tmp_packfile(): a helper function | Junio C Hamano | 1 | -9/+3

Factor a small piece of logic out of the private write_pack_file() function in builtin/pack-objects.c.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-28 | write_pack_header(): a helper function | Junio C Hamano | 1 | -6/+3

Factor a small piece of logic out of the private write_pack_file() function in builtin/pack-objects.c.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-27 | tree-walk.c: do not leak internal structure in tree_entry_len() | Nguyễn Thái Ngọc Duy | 1 | -1/+1

tree_entry_len() does not simply take two random arguments and return a tree length. The two pointers must point to a tree item structure, or struct name_entry. Passing random pointers will return an incorrect value.

Force callers to pass struct name_entry instead of two pointers (in the hope that they don't manually construct struct name_entry themselves).

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

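In prototype form, the change is roughly (reconstructed from the description):

    /* before: any two pointers could be passed, meaningfully or not */
    static inline int tree_entry_len(const char *name, const unsigned char *sha1);

    /* after: only a real tree-walk entry can be passed */
    static inline int tree_entry_len(const struct name_entry *ne);
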
2011-10-21 | Merge branch 'jk/maint-pack-objects-compete-with-delete' | Junio C Hamano | 1 | -0/+4

* jk/maint-pack-objects-compete-with-delete:
  downgrade "packfile cannot be accessed" errors to warnings
  pack-objects: protect against disappearing packs

2011-10-20 | pack-objects: don't traverse objects unnecessarily | Dan McGee | 1 | -6/+12

This brings back some of the performance lost in optimizing recency order inside pack objects. We were doing extreme amounts of object re-traversal: for the 2.14 million objects in the Linux kernel repository, we were calling add_to_write_order() over 1.03 billion times (a 0.2% hit rate, making 99.8% of these calls extraneous).

Two optimizations take place here: we can start our objects array iteration from a known point where we left off before we started trying to find our tags, and we don't need to do the deep dives required by add_family_to_write_order() if the object has already been marked as filled.

These two optimizations bring some pretty spectacular results via `perf stat`:

    task-clock:            83373 ms  -->           43800 ms  (50% faster)
    cycles:      221,633,461,676     --> 116,307,209,986     (47% fewer)
    instructions: 149,299,179,939    --> 122,998,800,184     (18% fewer)

Helped-by: Ramsay Jones (format string fix in "die" message)
Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-18 | pack-objects: rewrite add_descendants_to_write_order() iteratively | Dan McGee | 1 | -7/+37

This removes the need to call this function recursively, shrinking the code size slightly and netting a small performance increase.

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-18 | pack-objects: use unsigned int for counter and offset values | Dan McGee | 1 | -6/+6

This is done in some of the new pack layout code introduced in commit 1b4bb16b9ec331c. It more closely matches the unsigned nr_objects global that these variables are based on and bounded by.

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-18 | pack-objects: mark add_to_write_order() as inline | Dan McGee | 1 | -1/+1

This function is a whole 26 bytes when compiled on x86_64, but is currently invoked over 1.037 billion times when running pack-objects on the Linux kernel git repository. This is hitting the point where micro-optimizations do make a difference, and inlining it only increases the object file size by 38 bytes.

As reported by perf, this dropped task-clock from 84183 to 83373 ms, and total cycles from 223.5 billion to 221.6 billion. Not astronomical, but worth getting for adding one word.

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-14 | downgrade "packfile cannot be accessed" errors to warnings | Jeff King | 1 | -1/+1

These can happen if another process simultaneously prunes a pack. But that is not usually an error condition, because a properly-running prune should have repacked the object into a new pack. So we will notice that the pack has disappeared unexpectedly, print a message, try other packs (possibly after re-scanning the list of packs), and find it in the new pack.

Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-10-14 | pack-objects: protect against disappearing packs | Jeff King | 1 | -0/+4

It's possible that while pack-objects is running, a simultaneously running prune process might delete a pack that we are interested in. Because we load the pack indices early on, we know that the pack contains our item, but by the time we try to open and map it, it is gone.

Since c715f78, we already protect against this in the normal object access code path, but pack-objects accesses the packs at a lower level. In the normal access path, we call find_pack_entry, which will call find_pack_entry_one on each pack index, which does the actual lookup. If it gets a hit, we will actually open and verify the validity of the matching packfile (using c715f78's is_pack_valid). If we can't open it, we'll issue a warning and pretend that we didn't find it, causing us to go on to the next pack (or on to loose objects).

Furthermore, we will cache the descriptor to the opened packfile. Which means that later, when we actually try to access the object, we are likely to still have that packfile opened, and won't care if it has been unlinked from the filesystem.

Notice the "likely" above. If there is another pack access in the interim, and we run out of descriptors, we could close the pack. And then a later attempt to access the closed pack could fail (we'll try to re-open it, of course, but it may have been deleted). In practice, this doesn't happen because we tend to look up items and then access them immediately.

Pack-objects does not follow this code path. Instead, it accesses the packs at a much lower level, using find_pack_entry_one directly. This means we skip the is_pack_valid check, and may end up with the name of a packfile, but no open descriptor.

We can add the same is_pack_valid check here. Unfortunately, the access patterns of pack-objects are not quite as nice for keeping lookup and object access together. We look up each object as we find out about it, and only later, when writing the packfile, do we necessarily access it. Which means that the opened packfile may be closed in the interim.

In practice, however, adding this check still has value, for three reasons.

  1. If you have a reasonable number of packs and/or a reasonable file descriptor limit, you can keep all of your packs open simultaneously. If this is the case, then the race is impossible to trigger.

  2. Even if you can't keep all packs open at once, you may end up keeping the deleted one open (i.e., you may get lucky).

  3. The race window is shortened. You may notice early that the pack is gone, and not try to access it. Triggering the problem without this check means deleting the pack any time after we read the list of index files, but before we access the looked-up objects. Triggering it with this check means deleting the pack after we do a lookup (and successfully access the packfile), but before we access the object. Which is a smaller window.

Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

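The added protection amounts to validating the pack right after the low-level lookup, along these lines (is_pack_valid() is the helper introduced in c715f78; the surrounding loop is paraphrased, not the literal code):

    struct packed_git *p;

    for (p = packed_git; p; p = p->next) {
            off_t offset = find_pack_entry_one(sha1, p);
            if (!offset)
                    continue;
            if (!is_pack_valid(p)) {
                    /*
                     * The packfile vanished (e.g. a concurrent prune);
                     * pretend we did not find it here and keep looking.
                     */
                    warning("packfile %s cannot be accessed", p->pack_name);
                    continue;
            }
            /* remember p/offset as this object's reuse source */
            break;
    }
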
2011-10-05 | Merge branch 'jc/fetch-verify' | Junio C Hamano | 1 | -1/+3

* jc/fetch-verify:
  fetch: verify we have everything we need before updating our ref
  rev-list --verify-object
  list-objects: pass callback data to show_objects()

2011-09-01 | list-objects: pass callback data to show_objects() | Junio C Hamano | 1 | -1/+3

The traverse_commit_list() API takes two callback functions, one to show commit objects, and the other to show other kinds of objects. Even though the former has a callback data parameter, so that the callback does not have to rely on global state, the latter does not.

Give the show_objects() callback the same callback data parameter.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

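Roughly, the object callback gains the same void pointer the commit callback already had (the parameter lists below are reconstructed from that era's API and should be read as approximate):

    /* before: no way to hand caller context to the object callback */
    void (*show_object)(struct object *obj,
                        const struct name_path *path,
                        const char *component);

    /* after: the callback receives the caller-supplied data pointer */
    void (*show_object)(struct object *obj,
                        const struct name_path *path,
                        const char *component,
                        void *cb_data);
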
2011-08-17 | Merge branch 'mh/check-attr-relative' | Junio C Hamano | 1 | -1/+1

* mh/check-attr-relative: (29 commits)
  test-path-utils: Add subcommand "prefix_path"
  test-path-utils: Add subcommand "absolute_path"
  git-check-attr: Normalize paths
  git-check-attr: Demonstrate problems with relative paths
  git-check-attr: Demonstrate problems with unnormalized paths
  git-check-attr: test that no output is written to stderr
  Rename git_checkattr() to git_check_attr()
  git-check-attr: Fix command-line handling to match docs
  git-check-attr: Drive two tests using the same raw data
  git-check-attr: Add an --all option to show all attributes
  git-check-attr: Error out if no pathnames are specified
  git-check-attr: Process command-line args more systematically
  git-check-attr: Handle each error separately
  git-check-attr: Extract a function error_with_usage()
  git-check-attr: Introduce a new variable
  git-check-attr: Extract a function output_attr()
  Allow querying all attributes on a file
  Remove redundant check
  Remove redundant call to bootstrap_attr_stack()
  Extract a function collect_all_attrs()
  ...

2011-08-05 | Merge branch 'jc/pack-order-tweak' | Junio C Hamano | 1 | -1/+137

* jc/pack-order-tweak:
  pack-objects: optimize "recency order"
  core: log offset pack data accesses happened

2011-08-04 | Rename git_checkattr() to git_check_attr() | Michael Haggerty | 1 | -1/+1

Suggested by: Junio Hamano <gitster@pobox.com>
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2011-07-19 | Merge branch 'jc/index-pack' | Junio C Hamano | 1 | -9/+11

* jc/index-pack:
  verify-pack: use index-pack --verify
  index-pack: show histogram when emulating "verify-pack -v"
  index-pack: start learning to emulate "verify-pack -v"
  index-pack: a miniscule refactor
  index-pack --verify: read anomalous offsets from v2 idx file
  write_idx_file: need_large_offset() helper function
  index-pack: --verify
  write_idx_file: introduce a struct to hold idx customization options
  index-pack: group the delta-base array entries also by type

Conflicts:
  builtin/verify-pack.c
  cache.h
  sha1_file.c