path: root/sha1_file.c

2013-09-17  Merge branch 'jk/has-sha1-file-retry-packed'  (Junio C Hamano; 1 file, -1/+4)

When an object is not found after checking the packfiles and then the
loose object directory, read_sha1_file() re-checks the packfiles to
prevent racing with a concurrent repacker; teach the same logic to
has_sha1_file().

* jk/has-sha1-file-retry-packed:
  has_sha1_file: re-check pack directory before giving up

2013-09-04  Merge branch 'bc/unuse-packfile'  (Junio C Hamano; 1 file, -15/+85)

Handle memory pressure and file descriptor pressure separately when
deciding to release pack windows to honor resource limits.

* bc/unuse-packfile:
  Don't close pack fd when free'ing pack windows
  sha1_file: introduce close_one_pack() to close packs on fd pressure

2013-08-30  has_sha1_file: re-check pack directory before giving up  (Jeff King; 1 file, -1/+4)

When we read a sha1 file, we first look for a packed version, then a
loose version, and then re-check the pack directory again before
concluding that we cannot find it. This lets us handle a process that
is writing to the repository simultaneously (e.g., receive-pack writing
a new pack followed by a ref update, or git-repack packing existing
loose objects into a new pack).

However, we do not do the same trick with has_sha1_file; we only check
the packed objects once, followed by loose objects. This means that we
might incorrectly report that we do not have an object, even though we
could find it if we simply re-checked the pack directory.

By itself, this is usually not a big deal. The other process is running
simultaneously, so we may run has_sha1_file before it writes, anyway.
It is a race whether we see the object or not. However, we may also see
other things the writing process has done (like updating refs); and in
that case, we must be able to also see the new objects.

For example, imagine we are doing a for_each_ref iteration, and
somebody simultaneously pushes. Receive-pack may write the pack and
update a ref after we have examined the objects/pack directory, but
before the iteration gets to the updated ref. When we do finally see
the updated ref, for_each_ref will call has_sha1_file to check whether
the ref is broken. If has_sha1_file returns the wrong answer, we
erroneously will think that the ref is broken.

For a normal iteration without DO_FOR_EACH_INCLUDE_BROKEN, this means
that the caller does not see the ref at all (neither the old nor the
new value). So not only will we fail to see the new value of the ref
(which is acceptable, since we are running simultaneously with the
writer, and we might well read the ref before the writer commits its
write), but we will not see the old value either. For programs that act
on reachability like pack-objects or prune, this can cause data loss,
as we may see the objects referenced by the original ref value as
dangling (and either omit them from the pack, or delete them via
prune).

There's no test included here, because the success case is two
processes running simultaneously forever. But you can replicate the
issue with:

    # base.sh
    # run this in one terminal; it creates and pushes
    # repeatedly to a repository
    git init parent &&
    (cd parent &&
     # create a base commit that will trigger us looking at
     # the objects/pack directory before we hit the updated ref
     echo content >file &&
     git add file &&
     git commit -m base &&
     # set the unpack limit abnormally low, which
     # lets us simulate full-size pushes using tiny ones
     git config receive.unpackLimit 1
    ) &&
    git clone parent child &&
    cd child &&
    n=0 &&
    while true; do
      echo $n >file &&
      git add file &&
      git commit -m $n &&
      git push origin HEAD:refs/remotes/child/master &&
      n=$(($n + 1))
    done

    # fsck.sh
    # now run this simultaneously in another terminal; it
    # repeatedly fscks, looking for us to consider the
    # newly-pushed ref broken. We cannot use for-each-ref
    # here, as it uses DO_FOR_EACH_INCLUDE_BROKEN, which
    # skips the has_sha1_file check (and if it wants
    # more information on the object, it will actually read
    # the object, which does the proper two-step lookup)
    cd parent &&
    while true; do
      broken=`git fsck 2>&1 | grep remotes/child`
      if test -n "$broken"; then
        echo $broken
        exit 1
      fi
    done

Without this patch, the fsck loop fails within a few seconds (and
almost instantly if the test repository actually has a large number of
refs). With it, the two can run indefinitely.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

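In outline, the fixed lookup reads like the sketch below.
find_pack_entry() and reprepare_packed_git() are sha1_file.c internals
of this era, while has_loose_object() and the trimmed struct are
stand-ins for illustration only:

    /* trimmed stand-in; the real struct carries an offset and a pack */
    struct pack_entry { long offset; };

    int find_pack_entry(const unsigned char *sha1, struct pack_entry *e);
    int has_loose_object(const unsigned char *sha1);   /* hypothetical */
    void reprepare_packed_git(void);

    int has_sha1_file(const unsigned char *sha1)
    {
        struct pack_entry e;

        if (find_pack_entry(sha1, &e))
            return 1;                /* usual case: it is in a pack */
        if (has_loose_object(sha1))
            return 1;                /* or it is a loose object */
        /*
         * Neither worked; a concurrent repack may have moved the
         * object into a new pack between our two checks, so re-read
         * the pack directory and give the packs one more chance.
         */
        reprepare_packed_git();
        return find_pack_entry(sha1, &e);
    }
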
2013-08-02  Don't close pack fd when free'ing pack windows  (Brandon Casey; 1 file, -14/+7)

Now that close_one_pack() has been introduced to handle file descriptor
pressure, it is not strictly necessary to close the pack file
descriptor in unuse_one_window() when we're under memory pressure.

Jeff King provided a justification for leaving the pack file open:

    If you close packfile descriptors, you can run into racy situations
    where somebody else is repacking and deleting packs, and they go
    away while you are trying to access them. If you keep a descriptor
    open, you're fine; they last to the end of the process. If you
    don't, then they disappear from under you.

    For normal object access, this isn't that big a deal; we just
    rescan the packs and retry. But if you are packing yourself (e.g.,
    because you are a pack-objects started by upload-pack for a clone
    or fetch), it's much harder to recover (and we print some
    warnings).

Let's do so (or uh, not do so).

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-08-02  sha1_file: introduce close_one_pack() to close packs on fd pressure  (Brandon Casey; 1 file, -1/+78)

When the number of open packs exceeds pack_max_fds, unuse_one_window()
is called repeatedly to attempt to release the least-recently-used pack
windows, which, as a side-effect, will also close a pack file after
closing its last open window. If a pack file has been opened, but no
windows have been allocated into it, it will never be selected by
unuse_one_window() and hence its file descriptor will not be closed.
When this happens, git may exceed the number of file descriptors
permitted by the system.

This latter situation can occur in show-ref or receive-pack during ref
advertisement. During ref advertisement, receive-pack will iterate over
every ref in the repository and advertise it to the client after
ensuring that the ref exists in the local repository. If the ref is
located inside a pack, then the pack is opened to ensure that it
exists, but since the object is not actually read from the pack, no
mmap windows are allocated. When the number of open packs exceeds
pack_max_fds, unuse_one_window() will not be able to find any windows
to free and will not be able to close any packs. Once the per-process
file descriptor limit is exceeded, receive-pack will produce a warning,
not an error, for each pack it cannot open, and will then most likely
fail with an error to spawn rev-list or index-pack like:

    error: cannot create standard input pipe for rev-list: Too many open files
    error: Could not run 'git rev-list'

This may also occur during upload-pack when refs are packed (in the
packed-refs file) and the number of packs that must be opened to verify
that these packed refs exist exceeds the file descriptor limit. If the
refs are loose, then upload-pack will read each ref from the object
database (if the object is in a pack, allocating one or more mmap
windows for it) in order to peel tags and advertise the underlying
object. But when the refs are packed and peeled, upload-pack will use
the peeled sha1 in the packed-refs file and will not need to read from
the pack files, so no mmap windows will be allocated and just like with
receive-pack, unuse_one_window() will never select these opened packs
to close.

When we have file descriptor pressure, we just need to find an open
pack to close. We can leave the existing mmap windows open. If
additional windows need to be mapped into the pack file, it will be
reopened when necessary. If the pack file has been rewritten in the
mean time, open_packed_git_1() should notice when it compares the file
size or the pack's sha1 checksum to what was previously read from the
pack index, and reject it.

Let's introduce a new function close_one_pack() designed specifically
for this purpose to search for and close the least-recently-used pack,
where LRU is defined as (in order of preference):

* pack with oldest mtime and no allocated mmap windows
* pack with the least-recently-used windows, i.e. the pack with the
  oldest most-recently-used window, where none of the windows are in
  use
* pack with the least-recently-used windows

Signed-off-by: Brandon Casey <drafnel@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

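A rough sketch of a close_one_pack()-style scan under the preference
order above; every struct field and helper name here is an illustrative
stand-in, not the actual sha1_file.c code:

    #include <stddef.h>
    #include <time.h>

    struct packed_git {
        struct packed_git *next;
        int pack_fd;              /* -1 when already closed */
        time_t mtime;
        unsigned windows_total;   /* windows mapped into this pack */
        unsigned windows_inuse;   /* windows currently pinned */
        time_t last_window_used;  /* timestamp of the MRU window */
    };

    static struct packed_git *find_pack_to_close(struct packed_git *packs)
    {
        struct packed_git *p, *best = NULL;

        for (p = packs; p; p = p->next) {
            if (p->pack_fd < 0)
                continue;                    /* fd already closed */
            if (!best) {
                best = p;
                continue;
            }
            /* first choice: a pack with no windows, oldest mtime */
            if (!p->windows_total || !best->windows_total) {
                if (!p->windows_total &&
                    (best->windows_total || p->mtime < best->mtime))
                    best = p;
                continue;
            }
            /* then prefer unpinned packs with the oldest MRU window */
            if (p->windows_inuse && !best->windows_inuse)
                continue;
            if ((!p->windows_inuse && best->windows_inuse) ||
                p->last_window_used < best->last_window_used)
                best = p;
        }
        return best;    /* caller closes best->pack_fd */
    }
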
2013-07-24  Merge branch 'jk/cat-file-batch-optim'  (Junio C Hamano; 1 file, -60/+119)

If somebody wants to only know the on-disk footprint of an object
without having to know its type or payload size, we can bypass a lot of
code to cheaply learn it.

* jk/cat-file-batch-optim:
  Fix some sparse warnings
  sha1_object_info_extended: pass object_info to helpers
  sha1_object_info_extended: make type calculation optional
  packed_object_info: make type lookup optional
  packed_object_info: hoist delta type resolution to helper
  sha1_loose_object_info: make type lookup optional
  sha1_object_info_extended: rename "status" to "type"
  cat-file: disable object/refname ambiguity check for batch mode

2013-07-18  Fix some sparse warnings  (Ramsay Jones; 1 file, -1/+1)

Sparse issues some "Using plain integer as NULL pointer" warnings. Each
warning relates to the use of a '{0}' initialiser expression in the
declaration of a 'struct object_info'. The first field of this
structure has pointer type. Thus, in order to suppress these warnings,
we replace the initialiser expression with '{NULL}'.

Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

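For illustration, a trimmed-down reproduction of the pattern in
question (the real struct has more fields):

    /* abbreviated; the real struct carries more query fields */
    struct object_info {
        int *typep;
        unsigned long *sizep;
    };

    struct object_info oi_bad = {0};     /* sparse: "Using plain integer
                                            as NULL pointer" */
    struct object_info oi_good = {NULL}; /* same zero value, no warning */
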
2013-07-18  Merge branch 'jk/in-pack-size-measurement'  (Junio C Hamano; 1 file, -5/+17)

"git cat-file --batch-check=<format>" is added, primarily to allow
on-disk footprint of objects in packfiles (often they are a lot smaller
than their true size, when expressed as deltas) to be reported.

* jk/in-pack-size-measurement:
  pack-revindex: radix-sort the revindex
  pack-revindex: use unsigned to store number of objects
  cat-file: split --batch input lines on whitespace
  cat-file: add %(objectsize:disk) format atom
  cat-file: add --batch-check=<format>
  cat-file: refactor --batch option parsing
  cat-file: teach --batch to stream blob objects
  t1006: modernize output comparisons
  teach sha1_object_info_extended a "disk_size" query
  zero-initialize object_info structs

2013-07-12  sha1_object_info_extended: pass object_info to helpers  (Jeff King; 1 file, -27/+22)

We take in a "struct object_info" which contains pointers to storage
for items the caller cares about. But then rather than pass the whole
object to the low-level loose/packed helper functions, we pass the
individual pointers.

Let's pass the whole struct instead, which will make adding more items
later easier.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-07-12  sha1_object_info_extended: make type calculation optional  (Jeff King; 1 file, -7/+13)

Each caller of sha1_object_info_extended sets up an object_info struct
to tell the function which elements of the object it wants to get.
Until now, getting the type of the object has always been required (and
it is returned via the return type rather than a pointer in
object_info).

This can involve actually opening a loose object file to determine its
type, or following delta chains to determine a packed file's base type.
These effects produce a measurable slow-down when doing a "cat-file
--batch-check" that does not include %(objecttype).

This patch adds a "typep" query to struct object_info, so that it can
be optionally queried just like size and disk_size. As a result, the
return type of the function is no longer the object type, but rather
0/-1 for success/error.

As there are only three callers total, we just fix up each caller
rather than keep a compatibility wrapper:

1. The simpler sha1_object_info wrapper continues to always ask for and
   return the type field.

2. The istream_source function wants to know the type, and so always
   asks for it.

3. The cat-file batch code asks for the type only when %(objecttype) is
   part of the format string.

On linux.git, the best-of-five for running:

    $ git rev-list --objects --all >objects
    $ time git cat-file --batch-check='%(objectsize:disk)'

on a fully packed repository goes from:

    real    0m8.680s
    user    0m8.160s
    sys     0m0.512s

to:

    real    0m7.205s
    user    0m6.580s
    sys     0m0.608s

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

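A hypothetical caller-side sketch of the reworked interface; the
declarations are abbreviated stand-ins, but the shape of the call
matches the description above:

    /* abbreviated stand-in declarations for illustration */
    struct object_info {
        int *typep;
        unsigned long *sizep;
        unsigned long *disk_sizep;
    };
    int sha1_object_info_extended(const unsigned char *sha1,
                                  struct object_info *oi);

    int print_type(const unsigned char *sha1)
    {
        struct object_info oi = {NULL};
        int type;

        oi.typep = &type;                        /* ask for the type only */
        if (sha1_object_info_extended(sha1, &oi) < 0)
            return -1;                           /* 0/-1, not the type */
        /* sizep and disk_sizep were left NULL, so neither is computed */
        return type;
    }
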
2013-07-12  packed_object_info: make type lookup optional  (Jeff King; 1 file, -9/+16)

Currently, packed_object_info can save some work by not calculating the
size or disk_size of the object if the caller is not interested.
However, it always calculates the true object type, whether the caller
cares or not, and only optionally returns the easy-to-get
"representation type".

Let's swap these types. The function will now return the representation
type (or OBJ_BAD on failure), and will only optionally fill in the true
type.

There should be no behavior change yet, as the only caller,
sha1_object_info_extended, will always feed it a type pointer.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-07-12  packed_object_info: hoist delta type resolution to helper  (Jeff King; 1 file, -40/+53)

To calculate the type of a packed object, we must walk down its delta
chain until we hit a true base object with a real type. Most of the
code in packed_object_info is for handling this case.

Let's hoist it out into a separate helper function, which will make it
easier to make the type-lookup optional in the future (and keep our
indentation level sane).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-07-12  sha1_loose_object_info: make type lookup optional  (Jeff King; 1 file, -5/+43)

Until recently, the only items to request from
sha1_object_info_extended were type and size. This meant that we always
had to open a loose object file to determine one or the other. But with
the addition of the disk_size query, it's possible that we can fulfill
the query without even opening the object file at all. However, since
the function interface always returns the type, we have no way of
knowing whether the caller cares about it or not.

This patch only modifies sha1_loose_object_info to make type lookup
optional using an out-parameter, similar to the way the size is handled
(and the return value is "0" or "-1" for success or error,
respectively).

There should be no functional change yet, though, as
sha1_object_info_extended, the only caller, will always ask for a type.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-07-12  sha1_object_info_extended: rename "status" to "type"  (Jeff King; 1 file, -10/+10)

The value we get from each low-level object_info function (e.g., loose,
packed) is actually the object type (or -1 for error). Let's explicitly
call it "type", which will make further refactorings easier to read.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-07-07  teach sha1_object_info_extended a "disk_size" query  (Jeff King; 1 file, -4/+16)

Using sha1_object_info_extended, a caller can find out the type of an
object, its size, and information about where it is stored. In addition
to the object's "true" size, it can also be useful to know the size
that the object takes on disk (e.g., to generate statistics about which
refs consume space).

This patch adds a "disk_sizep" field to "struct object_info", and fills
it in during sha1_object_info_extended if it is non-NULL.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

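A sketch of the kind of query this enables, with the same abbreviated
stand-in declarations as in the earlier sketch; only disk_sizep is set,
so neither the type nor the true size needs to be computed:

    /* abbreviated stand-in declarations for illustration */
    struct object_info {
        int *typep;
        unsigned long *sizep;
        unsigned long *disk_sizep;
    };
    int sha1_object_info_extended(const unsigned char *sha1,
                                  struct object_info *oi);

    /* on-disk footprint only: for a deltified packed object this is
       the stored delta's size, usually far smaller than the true size */
    long disk_footprint(const unsigned char *sha1)
    {
        struct object_info oi = {NULL};
        unsigned long disk_size;

        oi.disk_sizep = &disk_size;
        if (sha1_object_info_extended(sha1, &oi) < 0)
            return -1;
        return (long)disk_size;
    }
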
2013-07-07  zero-initialize object_info structs  (Jeff King; 1 file, -1/+1)

The sha1_object_info_extended function expects the caller to provide a
"struct object_info" which contains pointers to "query" items that will
be filled in. The purpose of providing pointers rather than storing the
response directly in the struct is so that callers can choose not to
incur the expense in finding particular fields that they do not care
about.

Right now the only query item is "sizep", and all callers set it
explicitly to choose whether or not to query it; they can then leave
the rest of the struct uninitialized.

However, as we add new query items, each caller will have to be updated
to explicitly turn off the new ones (by setting them to NULL). Instead,
let's teach each caller to zero-initialize the struct, so that they do
not have to learn about each new query item added.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-06-23  Merge branch 'jk/unpack-entry-fallback-to-another'  (Junio C Hamano; 1 file, -1/+10)

* jk/unpack-entry-fallback-to-another:
  unpack_entry: do not die when we fail to apply a delta
  t5303: drop "count=1" from corruption dd

2013-06-20  Merge branch 'nd/traces'  (Junio C Hamano; 1 file, -2/+12)

* nd/traces:
  git.txt: document GIT_TRACE_PACKET
  core: use env variable instead of config var to turn on logging pack access

2013-06-14  unpack_entry: do not die when we fail to apply a delta  (Jeff King; 1 file, -1/+10)

When we try to load an object from disk and fail, our general strategy
is to see if we can get it from somewhere else (e.g., a loose object).
That lets users fix corruption problems by copying known-good versions
of objects into the object database.

We already handle the case where we were not able to read the delta
from disk. However, when we find that the delta we read does not apply,
we simply die. This case is harder to trigger, as corruption in the
delta data itself would trigger a crc error from zlib. However, a
corruption that pointed us at the wrong delta base might cause it.

We can do the same "fail and try to find the object elsewhere" trick
instead of dying. This not only gives us a chance to recover, but also
puts us on code paths that will alert the user to the problem (with the
current message, they do not even know which sha1 caused the problem).

Note that unlike some other pack corruptions, we do not recover
automatically from this case when doing a repack. There is nothing
apparently wrong with the delta, as it points to a valid, accessible
object, and we realize the error only when the resulting size does not
match up. And in theory, one could even have a case where the corrupted
size is the same, and the problem would only be noticed by recomputing
the sha1. We can get around this by recomputing the deltas with
--no-reuse-delta, which our test does (and this is probably good advice
for anyone recovering from pack corruption).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-06-11  Merge branch 'tr/sha1-file-silence-loose-object-info-under-prune-race'  (Junio C Hamano; 1 file, -1/+1)

* tr/sha1-file-silence-loose-object-info-under-prune-race:
  sha1_file: silence sha1_loose_object_info

2013-06-09  core: use env variable instead of config var to turn on logging pack access  (Nguyễn Thái Ngọc Duy; 1 file, -2/+12)

5f44324 (core: log offset pack data accesses happened - 2011-07-06)
provides a way to observe pack access patterns via a config switch.
Setting an environment variable looks more obvious than a config var,
especially when you just need to _observe_, and is more in line with
other tracing knobs we have. Document it as it may be useful for remote
troubleshooting.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-06-03  sha1_file: silence sha1_loose_object_info  (Thomas Rast; 1 file, -1/+1)

sha1_object_info() returns -1 (OBJ_BAD) if it cannot find the object
for some reason, which suggests that it wants the _caller_ to report
this error. However, part of its work happens in
sha1_loose_object_info, which _does_ report errors itself. This is
doubly strange because:

* packed_object_info(), which is the other half of the duo, does _not_
  report this.

* In the event that an object is packed and pruned while
  sha1_object_info_extended() goes looking for it, we would erroneously
  show the error -- even though the code of the latter function
  purports to handle this case gracefully.

* A caller might invoke sha1_object_info() to find the type of an
  object even if that object is not known to exist.

Silence this error. The others remain untouched as a corrupt object is
a much more grave error than it merely being absent.

Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-06-03  sha1_file: trivial style cleanup  (Felipe Contreras; 1 file, -1/+1)

Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-05-03  Merge branch 'tr/unpack-entry-use-after-free-fix'  (Junio C Hamano; 1 file, -1/+0)

* tr/unpack-entry-use-after-free-fix:
  unpack_entry: avoid freeing objects in base cache

2013-04-30  unpack_entry: avoid freeing objects in base cache  (Thomas Rast; 1 file, -1/+0)

In the !delta_data error path of unpack_entry(), we run free(base).
This became a window for use-after-free() in abe601b (sha1_file: remove
recursion in unpack_entry, 2013-03-27), as follows:

Before abe601b, we got the 'base' from cache_or_unpack_entry(..., 0);
keep_cache=0 tells it to also remove that entry. So the 'base' is at
this point not cached, and freeing it in the error path is the right
thing.

After abe601b, the structure changed: we use a three-phase approach
where phase 1 finds the innermost base or a base that is already in the
cache. In phase 3 we therefore know that all bases we unpack are not
part of the delta cache yet. (Observe that we pop from the cache in
phase 1, so this is also true for the very first base.) So we make no
further attempts to look up the bases in the cache, and just call
add_delta_base_cache() on every base object we have assembled.

But the !delta_data error path remained unchanged, and now calls free()
on a base that has already been entered in the cache. This means that
there is a use-after-free if we later use the same base again.

So remove that free(); we are still going to use that data.

Reported-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-04-18  Merge branch 'tr/packed-object-info-wo-recursion'  (Junio C Hamano; 1 file, -145/+266)

Attempts to reduce the stack footprint of sha1_object_info() and
unpack_entry() codepaths.

* tr/packed-object-info-wo-recursion:
  sha1_file: remove recursion in unpack_entry
  Refactor parts of in_delta_base_cache/cache_or_unpack_entry
  sha1_file: remove recursion in packed_object_info

2013-04-03  Merge branch 'jk/check-corrupt-objects-carefully'  (Junio C Hamano; 1 file, -0/+4)

Have the streaming interface and other codepaths more carefully examine
for corrupt objects.

* jk/check-corrupt-objects-carefully:
  clone: leave repo in place after checkout errors
  clone: run check_everything_connected
  clone: die on errors from unpack_trees
  add tests for cloning corrupted repositories
  streaming_write_entry: propagate streaming errors
  add test for streaming corrupt blobs
  avoid infinite loop in read_istream_loose
  read_istream_filtered: propagate read error from upstream
  check_sha1_signature: check return value from read_istream
  stream_blob_to_fd: detect errors reading from stream

2013-04-02  Merge branch 'sw/safe-create-leading-dir-race'  (Junio C Hamano; 1 file, -2/+7)

* sw/safe-create-leading-dir-race:
  safe_create_leading_directories: fix race that could give a false negative

2013-03-27  check_sha1_signature: check return value from read_istream  (Jeff King; 1 file, -0/+4)

It's possible for read_istream to return an error, in which case we
just end up in an infinite loop (aside from EOF, we do not even look at
the result, but just feed it straight into our running hash).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

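The shape of the fix, as a sketch; read_istream() and git_SHA1_Update()
follow the era's streaming and hashing interfaces, but the stub
declarations here are assumptions for illustration:

    #include <stddef.h>
    #include <sys/types.h>

    /* stand-ins for git's streaming and hashing interfaces */
    struct git_istream;
    typedef struct { int dummy; } git_SHA_CTX;
    ssize_t read_istream(struct git_istream *st, void *buf, size_t sz);
    void git_SHA1_Update(git_SHA_CTX *c, const void *buf, size_t len);

    static int hash_stream(git_SHA_CTX *c, struct git_istream *st)
    {
        char buf[1024 * 16];
        ssize_t readlen;

        for (;;) {
            readlen = read_istream(st, buf, sizeof(buf));
            if (readlen < 0)
                return -1;   /* the added check: propagate stream errors */
            if (!readlen)
                break;       /* EOF: stream exhausted */
            git_SHA1_Update(c, buf, readlen);
        }
        return 0;
    }

Without the negative-return check, a persistent stream error would spin
forever, feeding garbage into the running hash.
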
2013-03-27  sha1_file: remove recursion in unpack_entry  (Thomas Rast; 1 file, -81/+150)

Similar to the recursion in packed_object_info(), this leads to
problems on stack-space-constrained systems in the presence of long
delta chains.

We proceed in three phases:

1. Dig through the delta chain, saving each delta object's offsets and
   size on an ad-hoc stack.

2. Unpack the base object at the bottom.

3. Unpack and apply the deltas from the stack.

Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

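In outline, and with every helper and type below being a hypothetical
stand-in for the real bookkeeping, the three phases look like:

    #include <stddef.h>

    /* all types and helpers here are illustrative stand-ins */
    struct packed_git;
    struct delta_stack { long off[64]; size_t nr; };  /* growth elided */

    int   is_delta(struct packed_git *p, long off);
    int   in_delta_base_cache(struct packed_git *p, long off);
    long  delta_base_offset(struct packed_git *p, long off);
    void *unpack_base(struct packed_git *p, long off);
    void *unpack_delta_and_apply(struct packed_git *p, long off, void *base);

    void *unpack_entry_sketch(struct packed_git *p, long obj_offset)
    {
        struct delta_stack st = { {0}, 0 };
        void *data;

        /* phase 1: walk down the chain, remembering each delta's
           offset, stopping early if a base is already cached */
        while (is_delta(p, obj_offset) &&
               !in_delta_base_cache(p, obj_offset)) {
            st.off[st.nr++] = obj_offset;
            obj_offset = delta_base_offset(p, obj_offset);
        }

        /* phase 2: unpack the bottom-most base object */
        data = unpack_base(p, obj_offset);

        /* phase 3: re-apply the saved deltas, innermost first */
        while (st.nr)
            data = unpack_delta_and_apply(p, st.off[--st.nr], data);

        return data;
    }
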
2013-03-27  Refactor parts of in_delta_base_cache/cache_or_unpack_entry  (Thomas Rast; 1 file, -13/+32)

The delta base cache lookup and test were shared. Refactor them; we'll
need both parts again. Also, we'll use the clearing routine later.

Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-03-26  safe_create_leading_directories: fix race that could give a false negative  (Steven Walter; 1 file, -2/+7)

If two processes are racing to create the same directory tree, they
will both see that the directory doesn't exist, both try to mkdir(),
and one of them will fail. This is okay, as we only care that the
directory gets created. So, we add a check for EEXIST from mkdir, and
continue when the directory exists, taking the same codepath as the
case where the earlier stat() succeeds and finds a directory.

Signed-off-by: Steven Walter <stevenrwalter@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

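A minimal, self-contained sketch of the race-tolerant step this
describes; create_dir_racy() is a hypothetical name, not the git
function:

    #include <sys/stat.h>
    #include <sys/types.h>
    #include <errno.h>

    static int create_dir_racy(const char *path)
    {
        struct stat st;

        if (!mkdir(path, 0777))
            return 0;                    /* we created it */
        if (errno != EEXIST)
            return -1;                   /* real failure */
        if (stat(path, &st) || !S_ISDIR(st.st_mode))
            return -1;                   /* exists, but not a directory */
        return 0;                        /* somebody else won the race */
    }
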
2013-03-25  sha1_file: remove recursion in packed_object_info  (Thomas Rast; 1 file, -51/+84)

packed_object_info() and packed_delta_info() were mutually recursive.
The former would handle ordinary types and defer deltas to the latter;
the latter would use the former to resolve the delta base. This
arrangement, however, leads to trouble with threaded index-pack and
long delta chains on platforms where thread stacks are small, as
happened on OS X (512kB thread stacks by default) with the chromium
repo.

The task of the two functions is not all that hard to describe without
any recursion, however. It proceeds in three steps:

- determine the representation type and size, based on the outermost
  object (delta or not)
- follow through the delta chain, if any
- determine the object type from what is found at the end of the delta
  chain

The only complication stems from the error recovery. If parsing fails
at any step, we want to mark that object (within the pack) as bad and
try getting the corresponding SHA1 from elsewhere. If that also fails,
we want to repeat this process back up the delta chain until we find a
reasonable solution or conclude that there is no way to reconstruct the
object. (This is conveniently checked by t5303.)

To achieve that within the pack, we keep track of the entire delta
chain in a stack. When things go sour, we process that stack from the
top, marking entries as bad and attempting to re-resolve by sha1. To
avoid excessive malloc(), the stack starts out with a small
stack-allocated array. The choice of 64 is based on the default of
pack.depth, which is 50, in the hope that it covers "most" delta chains
without any need for malloc().

It's much harder to make the actual re-resolving by sha1 nonrecursive,
so we skip that. If you can't afford *that* recursion, your corruption
problems are more serious than your stack size problems.

Reported-by: Stefan Zager <szager@google.com>
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

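The small-array-first stack mentioned above follows a common pattern; a
sketch under assumed names, with allocation-failure handling elided:

    #include <stdlib.h>
    #include <string.h>

    #define SMALL_CHAIN 64   /* covers the default pack.depth of 50 */

    struct chain_entry { long offset; };  /* real entry has more fields */

    struct chain_stack {
        struct chain_entry small[SMALL_CHAIN];
        struct chain_entry *items;   /* points at small[] until overflow */
        size_t nr, alloc;
    };

    static void chain_init(struct chain_stack *s)
    {
        s->items = s->small;
        s->nr = 0;
        s->alloc = SMALL_CHAIN;
    }

    static void chain_push(struct chain_stack *s, struct chain_entry e)
    {
        if (s->nr == s->alloc) {     /* deep chain: fall back to heap */
            s->alloc *= 2;
            if (s->items == s->small) {
                s->items = malloc(s->alloc * sizeof(*s->items));
                memcpy(s->items, s->small, sizeof(s->small));
            } else {
                s->items = realloc(s->items, s->alloc * sizeof(*s->items));
            }
        }
        s->items[s->nr++] = e;
    }

Most chains never touch the heap, so the common case costs no malloc()
at all; only pathologically deep chains pay for the copy and realloc.
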
2013-02-15  count-objects: report garbage files in pack directory too  (Nguyễn Thái Ngọc Duy; 1 file, -1/+82)

prepare_packed_git_one() is modified to allow count-objects to hook in
a report function, so we don't need to duplicate the pack searching
logic in count-objects.c. When report_pack_garbage is NULL, the
overhead is insignificant.

The garbage is reported with warning() instead of error() in the packed
garbage case because it's not an error to have garbage. Loose garbage
is still reported as errors and will be converted to warnings later.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-02-13  sha1_file: reorder code in prepare_packed_git_one()  (Nguyễn Thái Ngọc Duy; 1 file, -16/+14)

The current loop does

    while (...) {
        if (it is not an .idx file)
            continue;
        process .idx file;
    }

and is reordered to

    while (...) {
        if (it is an .idx file) {
            process .idx file;
        }
    }

This makes it easier to add new extension file processing.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-11-08  link_alt_odb_entries(): take (char *, len) rather than two pointers  (Michael Haggerty; 1 file, -5/+5)

Change link_alt_odb_entries() to take the length of the "alt" parameter
rather than a pointer to the end of the "alt" string. This is the more
common calling convention and simplifies the code a tiny bit.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jeff King <peff@peff.net>

2012-11-08  link_alt_odb_entries(): use string_list_split_in_place()  (Michael Haggerty; 1 file, -24/+18)

Change link_alt_odb_entry() to take a NUL-terminated string instead of
(char *, len). Use string_list_split_in_place() rather than inline code
in link_alt_odb_entries().

This approach saves some code and also avoids the (probably harmless)
error of passing a non-NUL-terminated string to is_absolute_path().

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jeff King <peff@peff.net>

2012-08-24  sha1_file.c: introduce get_max_fd_limit() helper  (Joachim Schmitz; 1 file, -7/+19)

Not all platforms have getrlimit(), but there are other ways to see the
maximum number of files that a process can have open. If getrlimit() is
unavailable, fall back to sysconf(_SC_OPEN_MAX) if available, and
otherwise use OPEN_MAX from <limits.h>.

Signed-off-by: Joachim Schmitz <jojo@schmitz-digital.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

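A sketch of that fallback chain (it assumes <sys/resource.h> exists; in
reality the preprocessor guards would come from the build system's
feature tests):

    #include <unistd.h>
    #include <limits.h>
    #include <sys/resource.h>

    static unsigned int get_max_fd_limit(void)
    {
    #ifdef RLIMIT_NOFILE
        {
            struct rlimit lim;
            if (!getrlimit(RLIMIT_NOFILE, &lim))
                return lim.rlim_cur;     /* preferred: ask the kernel */
        }
    #endif
    #ifdef _SC_OPEN_MAX
        {
            long open_max = sysconf(_SC_OPEN_MAX);
            if (open_max > 0)
                return open_max;         /* POSIX fallback */
        }
    #endif
    #ifdef OPEN_MAX
        return OPEN_MAX;                 /* compile-time constant */
    #else
        return 1;                        /* last resort */
    #endif
    }
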
2012-07-30  Merge branch 'hv/link-alt-odb-entry'  (Junio C Hamano; 1 file, -1/+1)

The code to avoid a mistaken attempt to add the object directory itself
as its own alternate could read beyond the end of a string while
comparing.

* hv/link-alt-odb-entry:
  link_alt_odb_entry: fix read over array bounds reported by valgrind

2012-07-29  link_alt_odb_entry: fix read over array bounds reported by valgrind  (Heiko Voigt; 1 file, -1/+1)

pfxlen can be longer than the path in objdir when relative_base
contains the path to git's object directory. Here we are interested in
checking if ent->base[] (the part that corresponds to .git/objects) is
the same string as objdir, and the code NUL-terminated ent->base[] to

    LEADING PATH\0XX/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\0

in preparation for this "duplicate check" step (before we return from
the function, the first NUL is turned into '/' so that we can fill XX
when probing for loose objects). All we need to do is to compare the
string with the path to our object directory.

Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-05-23  Merge branch 'hv/submodule-alt-odb'  (Junio C Hamano; 1 file, -2/+1)

When peeking into object stores of submodules, the code forgot that
they might borrow objects from alternate object stores on their own.

By Heiko Voigt
* hv/submodule-alt-odb:
  teach add_submodule_odb() to look for alternates

2012-05-14  teach add_submodule_odb() to look for alternates  (Heiko Voigt; 1 file, -2/+1)

Since we allow linking other object databases when loading a
submodule's database, we should also load possible alternates.

Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-04-30  remove blank filename in error message  (Pete Wyckoff; 1 file, -1/+1)

When write_loose_object() finds that it is unable to create a temporary
file, it complains, for instance:

    unable to create temporary sha1 filename : Too many open files

That extra space was supposed to be the name of the file, and will be
an empty string if the git_mkstemps_mode() fails. The name of the
temporary file is unimportant; delete it.

Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-04-30  remove superfluous newlines in error messages  (Pete Wyckoff; 1 file, -3/+3)

The error handling routines add a newline. Remove the duplicate ones in
error messages.

Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-03-07  parse_object: avoid putting whole blob in core  (Nguyễn Thái Ngọc Duy; 1 file, -2/+40)

Traditionally, all the callers of check_sha1_signature() first called
read_sha1_file() to prepare the whole object data in core, and called
this function. The function is used to revalidate that what we read
from the object database actually matches the object name we used to
ask for the data from the object database.

Update the API to allow callers to pass NULL as the object data, and
have the function read and hash the object data using the streaming
API to recompute the object name, without having to hold everything in
core at the same time.

This is most useful in parse_object() that parses a blob object,
because this caller does not have to keep the actual blob data around
in memory after a "struct blob" is returned.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-03-04  Merge branch 'jk/maint-avoid-streaming-filtered-contents' into maint  (Junio C Hamano; 1 file, -5/+9)

* jk/maint-avoid-streaming-filtered-contents:
  do not stream large files to pack when filters are in use
  teach dry-run convert_to_git not to require a src buffer
  teach convert_to_git a "dry run" mode

2012-02-26  Merge branch 'jk/maint-avoid-streaming-filtered-contents'  (Junio C Hamano; 1 file, -5/+9)

* jk/maint-avoid-streaming-filtered-contents:
  do not stream large files to pack when filters are in use
  teach dry-run convert_to_git not to require a src buffer
  teach convert_to_git a "dry run" mode

2012-02-24  do not stream large files to pack when filters are in use  (Jeff King; 1 file, -5/+9)

Because git's object format requires us to specify the number of bytes
in the object in its header, we must know the size before streaming a
blob into the object database. This is not a problem when adding a
regular file, as we can get the size from stat(). However, when filters
are in use (such as autocrlf, or the ident, filter, or eol
gitattributes), we have no idea what the ultimate size will be.

The current code just punts on the whole issue and ignores filter
configuration entirely for files larger than core.bigfilethreshold.
This can generate confusing results if you use filters for large binary
files, as the filter will suddenly stop working as the file goes over a
certain size. Rather than try to handle unknown input sizes with
streaming, this patch just turns off the streaming optimization when
filters are in use.

This has a slight performance regression in a very specific case: if
you have autocrlf on, but no gitattributes, a large binary file will
avoid the streaming code path because we don't know beforehand whether
it will need conversion or not. But if you are handling large binary
files, you should be marking them as such via attributes (or at least
not using autocrlf, and instead marking your text files as such). And
the flip side is that if you have a large _non_-binary file, there is a
correctness improvement; before we did not apply the conversion at all.

The first half of the new t1051 script covers these failures on input.
The second half tests the matching output code paths. These already
work correctly, and do not need any adjustment.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2012-02-21  Merge branch 'nd/find-pack-entry-recent-cache-invalidation' into maint  (Junio C Hamano; 1 file, -38/+46)

* nd/find-pack-entry-recent-cache-invalidation:
  find_pack_entry(): do not keep packed_git pointer locally
  sha1_file.c: move the core logic of find_pack_entry() into fill_pack_entry()

2012-02-16  Merge branch 'mm/empty-loose-error-message' into maint  (Junio C Hamano; 1 file, -0/+5)

* mm/empty-loose-error-message:
  fsck: give accurate error message on empty loose object files