path: root/builtin/index-pack.c
2016-03-04  Merge branch 'jk/pack-idx-corruption-safety'  (Junio C Hamano, 1 file, -0/+1)
The code to read the pack data using the offsets stored in the pack idx file has been made to check the validity of the data in the idx more carefully.

* jk/pack-idx-corruption-safety:
  sha1_file.c: mark strings for translation
  use_pack: handle signed off_t overflow
  nth_packed_object_offset: bounds-check extended offset
  t5313: test bounds-checks of corrupted/malicious pack/idx files
2016-02-25  nth_packed_object_offset: bounds-check extended offset  (Jeff King, 1 file, -0/+1)
If a pack .idx file has a corrupted offset for an object, we may try to access an offset in the .idx or .pack file that is larger than the file's size. For the .pack case, we have use_pack() to protect us, which realizes the access is out of bounds. But if the corrupted value asks us to look in the .idx file's secondary 64-bit offset table, we blindly add it to the mmap'd index data and access arbitrary memory. We can fix this with a simple bounds-check compared to the size we found when we opened the .idx file. Note that there's similar code in index-pack that is triggered only during "index-pack --verify". To support both, we pull the bounds-check into a separate function, which dies when it sees a corrupted file. It would be nice if we could return an error, so that the pack code could try to find a good copy of the object elsewhere. Currently nth_packed_object_offset doesn't have any way to return an error, but it could probably use "0" as a sentinel value (since no object can start there). This is the minimal fix, and we can improve the resilience later on top. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
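A minimal sketch of the kind of bounds check described above (the function name and die() message are illustrative, not necessarily the exact ones in the patch):

    /*
     * Illustrative only: verify that a pointer derived from an extended-offset
     * entry actually lies within the mmap'd .idx data before dereferencing it.
     */
    static void check_pack_index_ptr(const struct packed_git *p, const void *vptr)
    {
        const unsigned char *ptr = vptr;
        const unsigned char *start = p->index_data;
        const unsigned char *end = start + p->index_size;

        if (ptr < start || ptr >= end)
            die("offset beyond end of packfile index for %s", p->pack_name);
    }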
2016-02-22  use st_add and st_mult for allocation size computation  (Jeff King, 1 file, -2/+2)
If our size computation overflows size_t, we may allocate a much smaller buffer than we expected and overflow it. It's probably impossible to trigger an overflow in most of these sites in practice, but it is easy enough to convert their additions and multiplications into overflow-checking variants. This may be fixing real bugs, and it makes auditing the code easier.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
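For context, a simplified illustration of the checked-arithmetic idea; the real st_add()/st_mult() helpers live in git's compat headers and die on overflow, so this sketch only mirrors their behaviour:

    /* simplified sketch of the overflow-checking multiplication */
    static inline size_t checked_mult(size_t a, size_t b)
    {
        if (b && a > SIZE_MAX / b)
            die("size_t overflow when computing allocation size");
        return a * b;
    }

    /* a call site then reads, e.g.: buf = xmalloc(st_mult(nr, sizeof(*buf))); */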
2016-02-22  convert trivial cases to ALLOC_ARRAY  (Jeff King, 1 file, -2/+2)
Each of these cases can be converted to use ALLOC_ARRAY or REALLOC_ARRAY, which has two advantages:

  1. It automatically checks the array-size multiplication for overflow.

  2. It always uses sizeof(*array) for the element-size, so that it can never go out of sync with the declared type of the array.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
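A sketch of what such macros look like (simplified; the real definitions live in git's headers and build on the checked st_mult() arithmetic):

    /* simplified illustration of the macros named above */
    #define ALLOC_ARRAY(x, alloc)   (x) = xmalloc(st_mult(sizeof(*(x)), (alloc)))
    #define REALLOC_ARRAY(x, alloc) (x) = xrealloc((x), st_mult(sizeof(*(x)), (alloc)))

    /* a converted call site then reads: ALLOC_ARRAY(objects, nr_objects); */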
2015-11-20  Remove get_object_hash.  (brian m. carlson, 1 file, -1/+1)
Convert all instances of get_object_hash to use an appropriate reference to the hash member of the oid member of struct object. This provides no functional change, as it is essentially a macro substitution. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-11-20  Convert struct object to object_id  (brian m. carlson, 1 file, -4/+4)
struct object is one of the major data structures dealing with object IDs. Convert it to use struct object_id instead of an unsigned char array. Convert get_object_hash to refer to the new member as well. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-11-20  Add several uses of get_object_hash.  (brian m. carlson, 1 file, -1/+1)
Convert most instances where the sha1 member of struct object is dereferenced to use get_object_hash. Most instances that are passed to functions that have versions taking struct object_id, such as get_sha1_hex/get_oid_hex, or instances that can be trivially converted to use struct object_id instead, are not converted. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-09-25  use xsnprintf for generating git object headers  (Jeff King, 1 file, -1/+1)
We generally use 32-byte buffers to format git's "type size" header fields. These should not generally overflow unless you can produce some truly gigantic objects (and our types come from our internal array of constant strings). But it is a good idea to use xsnprintf to make sure this is the case.

Note that we slightly modify the interface to write_sha1_file_prepare, which now uses "hdrlen" as an "in" parameter as well as an "out" (on the way in it stores the allocated size of the header, and on the way out it returns the ultimate size of the header).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
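Roughly, the pattern being switched to looks like this (a sketch; the exact format string and call site in index-pack may differ):

    char hdr[32];
    int hdrlen;

    /* xsnprintf() dies on truncation instead of silently clipping the header */
    hdrlen = xsnprintf(hdr, sizeof(hdr), "%s %lu", typename(type), size) + 1;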
2015-08-19  Merge branch 'jc/finalize-temp-file'  (Junio C Hamano, 1 file, -2/+2)
Long overdue micro clean-up.

* jc/finalize-temp-file:
  sha1_file.c: rename move_temp_to_file() to finalize_object_file()
2015-08-10  sha1_file.c: rename move_temp_to_file() to finalize_object_file()  (Junio C Hamano, 1 file, -2/+2)
Since 5a688fe4 ("core.sharedrepository = 0mode" should set, not loosen, 2009-03-25), we kept reminding ourselves:

    NEEDSWORK: this should be renamed to finalize_temp_file() as
    "moving" is only a part of what it does, when no patch between
    master to pu changes the call sites of this function.

without doing anything about it. Let's do so.

The purpose of this function was not to move but to finalize. The detail of the primary implementation of finalizing was to link the temporary file to its final name and then to unlink, which wasn't even "moving". The alternative implementation did "move" by calling rename(2), which is a fun tangent.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-03  Merge branch 'js/fsck-opt'  (Junio C Hamano, 1 file, -4/+9)
Allow ignoring fsck errors on a specific set of known-to-be-bad objects, and also tweaking the warning level of various kinds of non-critical breakages reported.

* js/fsck-opt:
  fsck: support ignoring objects in `git fsck` via fsck.skiplist
  fsck: git receive-pack: support excluding objects from fsck'ing
  fsck: introduce `git fsck --connectivity-only`
  fsck: support demoting errors to warnings
  fsck: document the new receive.fsck.<msg-id> options
  fsck: allow upgrading fsck warnings to errors
  fsck: optionally ignore specific fsck issues completely
  fsck: disallow demoting grave fsck errors to warnings
  fsck: add a simple test for receive.fsck.<msg-id>
  fsck: make fsck_tag() warn-friendly
  fsck: handle multiple authors in commits specially
  fsck: make fsck_commit() warn-friendly
  fsck: make fsck_ident() warn-friendly
  fsck: report the ID of the error/warning
  fsck (receive-pack): allow demoting errors to warnings
  fsck: offer a function to demote fsck errors to warnings
  fsck: provide a function to parse fsck message IDs
  fsck: introduce identifiers for fsck messages
  fsck: introduce fsck options
2015-07-27  Merge branch 'jk/index-pack-reduce-recheck' into maint  (Junio C Hamano, 1 file, -1/+1)
Disable "have we lost a race with competing repack?" check while receiving a huge object transfer that runs index-pack.

* jk/index-pack-reduce-recheck:
  index-pack: avoid excessive re-reading of pack directory
2015-07-09  Merge branch 'jc/fix-alloc-sortbuf-in-index-pack'  (Junio C Hamano, 1 file, -8/+8)
A hotfix for what is in 2.5-rc but not in 2.4.

* jc/fix-alloc-sortbuf-in-index-pack:
  index-pack: fix allocation of sorted_by_pos array
2015-07-04  index-pack: fix allocation of sorted_by_pos array  (Junio C Hamano, 1 file, -8/+8)
When c6458e60 (index-pack: kill union delta_base to save memory, 2015-04-18) attempted to reduce the memory footprint of index-pack, one of the key things it did was to keep track of ref-deltas and ofs-deltas separately. In fix_unresolved_deltas(), however, it forgot that it now wants to look only at ref-deltas in one place. The code allocated an array for nr_unresolved, which is the sum of the number of ref- and ofs-deltas minus nr_resolved, and which may be larger or smaller than the number of ref-deltas. Depending on nr_resolved, this was either under- or over-allocating.

Also, the old code before this change had to use 'i' and 'n' because some of the things we saw in the (old) deltas[] array scanned with 'i' would not make it into the sorted_by_pos[] array in the old world order; but now, because only ref-deltas live in the separate ref_deltas[] array, the two indices increment in lockstep and we no longer need separate variables.

And most importantly, we shouldn't pass the nr_unresolved parameter, as this number does not play a role in the working of this helper function.

Helped-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
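A sketch of the corrected allocation (illustrative; variable names follow the commit message rather than the exact patch, and nr_ref_deltas is an assumed counter name):

    /* before (sketch): sized by nr_unresolved, which counts ofs-deltas too */
    /* sorted_by_pos = xmalloc(nr_unresolved * sizeof(*sorted_by_pos)); */

    /* after (sketch): size the scratch array by the ref-deltas it will hold */
    sorted_by_pos = xmalloc(nr_ref_deltas * sizeof(*sorted_by_pos));
    for (i = 0; i < nr_ref_deltas; i++)
        sorted_by_pos[i] = &ref_deltas[i];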
2015-06-24  Merge branch 'jk/index-pack-reduce-recheck'  (Junio C Hamano, 1 file, -1/+1)
Disable "have we lost a race with competing repack?" check while receiving a huge object transfer that runs index-pack.

* jk/index-pack-reduce-recheck:
  index-pack: avoid excessive re-reading of pack directory
2015-06-23  fsck (receive-pack): allow demoting errors to warnings  (Johannes Schindelin, 1 file, -0/+4)
For example, missing emails in commit and tag objects can be demoted to mere warnings with

    git config receive.fsck.missingemail=warn

The value is actually a comma-separated list. In case the same key is listed in multiple receive.fsck.<msg-id> lines in the config, the latter configuration wins (this can happen for example when both $HOME/.gitconfig and .git/config contain message type settings).

As git receive-pack does not actually perform the checks, it hands off the setting to index-pack or unpack-objects in the form of an optional argument to the --strict option.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
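A hedged illustration of the configuration behaviour described above (the msg-id shown is the one from the commit message; other ids are documented in the fsck documentation of that era):

    # demote one specific check to a warning on the receiving repository
    $ git config receive.fsck.missingemail warn

    # if the same key appears in several config files, the later one wins,
    # e.g. a setting in .git/config overrides one in $HOME/.gitconfig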
2015-06-22  fsck: introduce fsck options  (Johannes Schindelin, 1 file, -4/+5)
Just like the diff machinery, we are about to introduce more settings; therefore it makes sense to carry them around as a (pointer to a) struct containing all of them. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-06-16  Merge branch 'nd/slim-index-pack-memory-usage'  (Junio C Hamano, 1 file, -2/+6)
An earlier optimization broke index-pack for a large object transfer; this fixes it before the breakage hits any released version.

* nd/slim-index-pack-memory-usage:
  index-pack: fix truncation of off_t in comparison
2015-06-09  index-pack: avoid excessive re-reading of pack directory  (Jeff King, 1 file, -1/+1)
Since 45e8a74 (has_sha1_file: re-check pack directory before giving up, 2013-08-30), we spend extra effort for has_sha1_file to give the right answer when somebody else is repacking. Usually this effort does not matter, because after finding that the object does not exist, the next step is usually to die(). However, some code paths make a large number of has_sha1_file checks which are _not_ expected to return 1. The collision test in index-pack.c is such a case. On a local system, this can cause a performance slowdown of around 5%. But on a system with high-latency system calls (like NFS), it can be much worse. This patch introduces a "quick" flag to has_sha1_file which callers can use when they would prefer high performance at the cost of false negatives during repacks. There may be other code paths that can use this, but the index-pack one is the most obviously critical, so we'll start with switching that one. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
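A rough sketch of the "quick" idea described above, with illustrative names; object_exists_in_known_packs() is a hypothetical stand-in for the real lookups, and the flag name is an assumption rather than the exact interface added by the patch:

    #define HAS_SHA1_QUICK 0x1

    static int has_sha1_file_flags(const unsigned char *sha1, int flags)
    {
        if (object_exists_in_known_packs(sha1))
            return 1;
        if (flags & HAS_SHA1_QUICK)
            return 0;               /* tolerate a false negative during repacks */
        reprepare_packed_git();     /* costly re-scan of the pack directory */
        return object_exists_in_known_packs(sha1);
    }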
2015-06-04  index-pack: fix truncation of off_t in comparison  (Jeff King, 1 file, -2/+6)
Commit c6458e6 (index-pack: kill union delta_base to save memory, 2015-04-18) refactored the comparison functions used in sorting and binary searching our delta list. The resulting code does something like:

    int cmp_offsets(off_t a, off_t b)
    {
        return a - b;
    }

This works most of the time, but produces nonsensical results when the difference between the two offsets is larger than what can be stored in an "int". This can lead to unresolved deltas if the packsize is larger than 2G (even on 64-bit systems, an int is still typically 32 bits):

    $ git clone git://github.com/mozilla/gecko-dev
    Cloning into 'gecko-dev'...
    remote: Counting objects: 4800161, done.
    remote: Compressing objects: 100% (178/178), done.
    remote: Total 4800161 (delta 88), reused 0 (delta 0), pack-reused 4799978
    Receiving objects: 100% (4800161/4800161), 2.21 GiB | 3.26 MiB/s, done.
    Resolving deltas:  99% (3808820/3811944), completed with 0 local objects.
    fatal: pack has 3124 unresolved deltas
    fatal: index-pack failed

We can fix it by doing direct comparisons between the offsets and returning constants; the callers only care about the sign of the comparison, not the magnitude.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
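A sketch of the sign-only comparison described in the last paragraph (the function and field names are illustrative, not necessarily the exact ones in the patch):

    /* illustrative: compare without ever forming a possibly-truncated difference */
    static int cmp_ofs_delta_entry(const void *a_, const void *b_)
    {
        const struct ofs_delta_entry *a = a_;
        const struct ofs_delta_entry *b = b_;

        return a->offset < b->offset ? -1 :
               a->offset > b->offset ?  1 :
               0;
    }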
2015-05-11  Merge branch 'nd/slim-index-pack-memory-usage'  (Junio C Hamano, 1 file, -111/+179)
Memory usage of "git index-pack" has been trimmed by tens of per-cent.

* nd/slim-index-pack-memory-usage:
  index-pack: kill union delta_base to save memory
  index-pack: reduce object_entry size to save memory
2015-04-18  index-pack: kill union delta_base to save memory  (Nguyễn Thái Ngọc Duy, 1 file, -100/+160)
Once we know the number of objects in the input pack, we allocate an array of nr_objects of struct delta_entry. On x86-64, this struct is 32 bytes long. The union delta_base, which is part of struct delta_entry, provides enough space to store either an ofs-delta (8 bytes) or a ref-delta (20 bytes).

Because ofs-delta encoding is more efficient space-wise and more performant at runtime than ref-delta encoding, Git packers try to use ofs-delta whenever possible, and it is expected that objects encoded as ref-delta are a minority. In the best clone case where no ref-delta object is present, we waste (20-8) * nr_objects bytes because of this union. That's about 38MB out of 100MB for deltas[] with 3.4M objects, or 38%. deltas[] would be around 62MB without the waste.

This patch attempts to eliminate that. The deltas[] array is split into two: one for ofs-deltas and one for ref-deltas. Many functions are also duplicated because of this split. With this patch, the ofs_deltas[] array takes 51MB. ref_deltas[] should remain unallocated in the clone case (0 bytes); this array grows as we see ref-deltas. We save about half in this case, or 25% of total bookkeeping.

The saving is more than the calculation above because some padding in the old delta_entry struct is removed. ofs_delta_entry is 16 bytes, including the 4 bytes of padding. That's 13MB for padding, but packing the struct could break platforms that do not support unaligned access. If someone on 32-bit is really low on memory and only deals with packs smaller than 2G, using 32-bit off_t would eliminate the padding and save 27MB on top.

A note about ofs_deltas allocation: we could use the ref_deltas memory allocation strategy for ofs_deltas, but that probably just adds more overhead on top. ofs-deltas are generally the majority (1/2 to 2/3) in any pack. Incremental realloc may lead to too many memcpy calls, and if we preallocate, say, 1/2 or 2/3 of nr_objects initially, the growth rate of ALLOC_GROW() could make this array larger than nr_objects, wasting more memory.

Brought-up-by: Matthew Sporleder <msporleder@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
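A sketch of the split bookkeeping this describes (simplified; the field and array names are illustrative rather than copied from the patch):

    /* illustrative: one small entry type per delta encoding */
    struct ofs_delta_entry {
        off_t offset;           /* where the base lives in the pack */
        int obj_no;             /* index of the delta object itself */
    };

    struct ref_delta_entry {
        unsigned char sha1[20]; /* base named by object id */
        int obj_no;
    };

    /* two arrays instead of one array of 32-byte union-carrying entries */
    static struct ofs_delta_entry *ofs_deltas;
    static struct ref_delta_entry *ref_deltas;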
2015-03-17  Merge branch 'rs/deflate-init-cleanup'  (Junio C Hamano, 1 file, -1/+0)
Code simplification.

* rs/deflate-init-cleanup:
  zlib: initialize git_zstream in git_deflate_init{,_gzip,_raw}
2015-03-05  zlib: initialize git_zstream in git_deflate_init{,_gzip,_raw}  (René Scharfe, 1 file, -1/+0)
Clear the git_zstream variable at the start of git_deflate_init() etc. so that callers don't have to do that. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-02-27  index-pack: reduce object_entry size to save memory  (Nguyễn Thái Ngọc Duy, 1 file, -11/+19)
For each object in the input pack, we need one struct object_entry. On x86-64, this struct is 64 bytes long. However:

  - The 8 bytes for delta_depth and base_object_no are only useful when show_stat is set, and it's never set unless someone is debugging.

  - The three fields hdr_size, type and real_type take 4 bytes each even though they never use more than 4 bits.

By moving delta_depth and base_object_no out of struct object_entry and making the other 3 fields one byte long instead of 4, we shrink this struct by 25%. On a 3.4M object repo (*) that's about 53MB. The saving is less impressive compared to index-pack memory use for basic bookkeeping (**), about 16%.

(*) linux-2.6.git already has 4M objects as of v3.19-rc7, so this is not an unrealistic number of objects that we have to deal with.

(**) 3.4M * (sizeof(object_entry) + sizeof(delta_entry)) = 311MB

Brought-up-by: Matthew Sporleder <msporleder@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
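A sketch of the slimmer layout described above (simplified; the real struct also embeds pack-index bookkeeping that is omitted here):

    /* illustrative: narrow fields that never need more than a few bits */
    struct object_entry {
        /* ... offset, size, and pack-index members omitted ... */
        unsigned char hdr_size;   /* was a 4-byte int */
        signed char type;         /* object type from the pack header */
        signed char real_type;    /* type after delta resolution */
    };

    /* delta_depth and base_object_no move to a separate, optional array
       that only needs to exist when the show_stat debugging flag is set */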
2014-12-22  Merge branch 'js/fsck-tag-validation'  (Junio C Hamano, 1 file, -2/+2)
New tag object format validation added in 2.2 showed garbage after a tagname it reported in its error message.

* js/fsck-tag-validation:
  index-pack: terminate object buffers with NUL
  fsck: properly bound "invalid tag name" error message
2014-12-09  index-pack: terminate object buffers with NUL  (Duy Nguyen, 1 file, -2/+2)
We have some tricky checks in fsck that rely on a side effect of require_end_of_header(), and would otherwise easily run outside non-NUL-terminated buffers. This is a bit brittle, so let's make sure that only NUL-terminated buffers are passed around to begin with. Jeff "Peff" King contributed the detailed analysis which call paths are involved and pointed out that we also have to patch the get_data() function in unpack-objects.c, which is what Johannes "Dscho" Schindelin implemented. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Analyzed-by: Jeff King <peff@peff.net> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
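The underlying pattern is simply to over-allocate by one byte and terminate the buffer; a sketch of that idea (illustrative, not the exact hunk from the patch):

    /* illustrative: make the inflated object buffer safe to treat as a string */
    void *buf = xmallocz(size);     /* allocates size + 1 bytes and NUL-terminates */

    /* or, spelled out without the helper: */
    buf = xmalloc(size + 1);
    ((char *)buf)[size] = '\0';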
2014-10-13  index-pack: fix compilation with NO_PTHREADS  (Etienne Buira, 1 file, -0/+3)
type_cas_lock/unlock() should be defined as no-ops for the NO_PTHREADS build, just like all the other locking primitives. Signed-off-by: Etienne Buira <etienne.buira@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
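A sketch of the kind of conditional no-op definitions this adds (illustrative; the real code may use inline functions rather than macros):

    #ifndef NO_PTHREADS
    static pthread_mutex_t type_cas_mutex = PTHREAD_MUTEX_INITIALIZER;
    #define type_cas_lock()    pthread_mutex_lock(&type_cas_mutex)
    #define type_cas_unlock()  pthread_mutex_unlock(&type_cas_mutex)
    #else
    /* single-threaded build: the locking primitives compile away to nothing */
    #define type_cas_lock()
    #define type_cas_unlock()
    #endif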
2014-09-26  Merge branch 'rs/realloc-array'  (Junio C Hamano, 1 file, -3/+1)
Code cleanup.

* rs/realloc-array:
  use REALLOC_ARRAY for changing the allocation size of arrays
  add macro REALLOC_ARRAY
2014-09-26  Merge branch 'js/fsck-tag-validation'  (Junio C Hamano, 1 file, -1/+2)
Teach "git fsck" to inspect the contents of annotated tag objects.

* js/fsck-tag-validation:
  Make sure that index-pack --strict checks tag objects
  Add regression tests for stricter tag fsck'ing
  fsck: check tag objects' headers
  Make sure fsck_commit_buffer() does not run out of the buffer
  fsck_object(): allow passing object data separately from the object itself
  Refactor type_from_string() to allow continuing after detecting an error
2014-09-19  Merge branch 'jk/index-pack-threading-races'  (Junio C Hamano, 1 file, -2/+31)
When receiving an invalid pack stream that records the same object twice, multiple threads got confused due to a race. We should reject or correct such a stream upon receiving, but that will be a larger change.

* jk/index-pack-threading-races:
  index-pack: fix race condition with duplicate bases
2014-09-18  use REALLOC_ARRAY for changing the allocation size of arrays  (René Scharfe, 1 file, -3/+1)
Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-10  fsck_object(): allow passing object data separately from the object itself  (Johannes Schindelin, 1 file, -1/+2)
When fsck'ing an incoming pack, we need to fsck objects that cannot be read via read_sha1_file() because they are not local yet (and might even be rejected if transfer.fsckobjects is set to 'true'). For commits, there is a hack in place: we basically cache commit objects' buffers anyway, but the same is not true, say, for tag objects. By refactoring fsck_object() to take the object buffer and size as optional arguments -- optional, because we still fall back to the previous method to look at the cached commit objects if the caller passes NULL -- we prepare the machinery for the upcoming handling of tag objects. The assumption that such buffers are inherently NUL terminated is now wrong, of course, hence we pass the size of the buffer so that we can add a sanity check later, to prevent running past the end of the buffer. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-29  index-pack: fix race condition with duplicate bases  (Jeff King, 1 file, -2/+31)
When we are resolving deltas in an indexed pack, we do it by first selecting a potential base (either one stored in full in the pack, or one created by resolving another delta), and then resolving any deltas that use that base. When we resolve a particular delta, we flip its "real_type" field from OBJ_{REF,OFS}_DELTA to whatever the real type is.

We assume that traversing the objects this way will visit each delta only once. This is correct for most packs; we visit the delta only when we process its base, and each object (and thus each base) appears only once. However, if a base object appears multiple times in the pack, we will try to resolve any deltas based on it once for each instance.

We can detect this case by noting that a delta we are about to resolve has already had its real_type field flipped, and we already do so with an assert(). However, if multiple threads are in use, we may race with another thread on comparing and flipping the field. We need to synchronize the access.

The right mechanism for doing this is a compare-and-swap (we atomically "claim" the delta for our own and find out whether our claim was successful). We can implement this in C by using a pthread mutex to protect the operation. This is not the fastest way of doing a compare-and-swap; many processors provide instructions for this, and gcc and other compilers provide builtins to access them. However, some experiments showed that lock contention does not cause a significant slowdown here. Adding c-a-s support for many compilers would increase the maintenance burden (and we would still end up including the pthread version as a fallback).

Note that we only need to touch the OBJ_REF_DELTA codepath here. An OBJ_OFS_DELTA object points to its base using an offset, and therefore has only one base, even if another copy of that base object appears in the pack (we do still touch it briefly because the setting of real_type is factored out of resolve_data).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
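A sketch of the mutex-protected compare-and-swap described above (close in spirit to the patch, but simplified and with no claim to match it line for line):

    static pthread_mutex_t type_cas_mutex = PTHREAD_MUTEX_INITIALIZER;

    /*
     * Atomically: if *type is still 'want', claim it by writing 'set'.
     * Returns the value seen, so the caller knows whether its claim won.
     */
    static enum object_type compare_and_swap_type(signed char *type,
                                                  enum object_type want,
                                                  enum object_type set)
    {
        enum object_type old;

        pthread_mutex_lock(&type_cas_mutex);
        old = *type;
        if (old == want)
            *type = set;
        pthread_mutex_unlock(&type_cas_mutex);

        return old;
    }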
2014-07-21  Merge branch 'maint'  (Junio C Hamano, 1 file, -2/+1)
* maint:
  use xmemdupz() to allocate copies of strings given by start and length
  use xcalloc() to allocate zero-initialized memory
2014-07-21  use xcalloc() to allocate zero-initialized memory  (René Scharfe, 1 file, -2/+1)
Use xcalloc() instead of xmalloc() followed by memset() to allocate and zero out memory because it's shorter and avoids duplicating the function parameters. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
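A sketch of the before/after shape of such a conversion (the variable name is illustrative):

    /* before: allocate, then clear, repeating the size expression twice */
    obj_list = xmalloc(nr * sizeof(*obj_list));
    memset(obj_list, 0, nr * sizeof(*obj_list));

    /* after: one call, already zero-initialized */
    obj_list = xcalloc(nr, sizeof(*obj_list));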
2014-07-16  Merge branch 'jk/strip-suffix'  (Junio C Hamano, 1 file, -15/+14)
* jk/strip-suffix:
  prepare_packed_git_one: refactor duplicate-pack check
  verify-pack: use strbuf_strip_suffix
  strbuf: implement strbuf_strip_suffix
  index-pack: use strip_suffix to avoid magic numbers
  use strip_suffix instead of ends_with in simple cases
  replace has_extension with ends_with
  implement ends_with via strip_suffix
  add strip_suffix function
  sha1_file: replace PATH_MAX buffer with strbuf in prepare_packed_git_one()
2014-07-16  Merge branch 'jk/commit-buffer-length' into maint  (Junio C Hamano, 1 file, -1/+2)
A handful of code paths had to read the commit object more than once when showing header fields that are usually not parsed. The internal data structure to keep track of the contents of the commit object has been updated to reduce the need for this double-reading, and to allow the caller to find the length of the object.

* jk/commit-buffer-length:
  reuse cached commit buffer when parsing signatures
  commit: record buffer length in cache
  commit: convert commit->buffer to a slab
  commit-slab: provide a static initializer
  use get_commit_buffer everywhere
  convert logmsg_reencode to get_commit_buffer
  use get_commit_buffer to avoid duplicate code
  use get_cached_commit_buffer where appropriate
  provide helpers to access the commit buffer
  provide a helper to set the commit buffer
  provide a helper to free commit buffer
  sequencer: use logmsg_reencode in get_message
  logmsg_reencode: return const buffer
  do not create "struct commit" with xcalloc
  commit: push commit_index update into alloc_commit_node
  alloc: include any-object allocations in alloc_report
  replace dangerous uses of strbuf_attach
  commit_tree: take a pointer/len pair rather than a const strbuf
2014-07-02  Merge branch 'jk/commit-buffer-length'  (Junio C Hamano, 1 file, -1/+2)
Move "commit->buffer" out of the in-core commit object and keep track of their lengths. Use this to optimize the code paths to validate GPG signatures in commit objects.

* jk/commit-buffer-length:
  reuse cached commit buffer when parsing signatures
  commit: record buffer length in cache
  commit: convert commit->buffer to a slab
  commit-slab: provide a static initializer
  use get_commit_buffer everywhere
  convert logmsg_reencode to get_commit_buffer
  use get_commit_buffer to avoid duplicate code
  use get_cached_commit_buffer where appropriate
  provide helpers to access the commit buffer
  provide a helper to set the commit buffer
  provide a helper to free commit buffer
  sequencer: use logmsg_reencode in get_message
  logmsg_reencode: return const buffer
  do not create "struct commit" with xcalloc
  commit: push commit_index update into alloc_commit_node
  alloc: include any-object allocations in alloc_report
  replace dangerous uses of strbuf_attach
  commit_tree: take a pointer/len pair rather than a const strbuf
2014-06-30  index-pack: use strip_suffix to avoid magic numbers  (Jeff King, 1 file, -15/+14)
We also switch to using strbufs, which lets us avoid the potentially dangerous combination of a manual malloc followed by a strcpy. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
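For context, a sketch of how strip_suffix is typically used in place of a hard-coded suffix length (the call site is illustrative, not taken from the patch):

    size_t len;

    /* instead of checking ends_with() and then doing arithmetic with the
       literal length of ".pack", let strip_suffix() report the prefix length */
    if (strip_suffix(arg, ".pack", &len))
        printf("base name is %.*s\n", (int)len, arg);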
2014-06-30  replace has_extension with ends_with  (Jeff King, 1 file, -2/+2)
These two are almost the same function, with the exception that has_extension only matches if there is content before the suffix. So ends_with(".exe", ".exe") is true, but has_extension would not be. This distinction does not matter to any of the callers, though, and we can just replace uses of has_extension with ends_with. We prefer the "ends_with" name because it is more generic, and there is nothing about the function that requires it to be used for file extensions. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-25  Merge branch 'ym/fix-opportunistic-index-update-race' into maint  (Junio C Hamano, 1 file, -1/+1)
"git status", even though it is a read-only operation, tries to update the index with refreshed lstat(2) info to optimize future accesses to the working tree opportunistically, but this could race with a "read-write" operation that modifies the index while it is running. Detect such a race and avoid overwriting the index.

* ym/fix-opportunistic-index-update-race:
  read-cache.c: verify index file before we opportunistically update it
  wrapper.c: add xpread() similar to xread()
2014-06-25  Merge branch 'jk/index-pack-report-missing' into maint  (Junio C Hamano, 1 file, -2/+7)
The error reporting from "git index-pack" has been improved to distinguish missing objects from type errors.

* jk/index-pack-report-missing:
  index-pack: distinguish missing objects from type errors
2014-06-25  Merge branch 'nd/index-pack-one-fd-per-thread' into maint  (Junio C Hamano, 1 file, -10/+17)
We used to disable threaded "git index-pack" on platforms without thread-safe pread(); use a different workaround for such platforms to allow threaded "git index-pack".

* nd/index-pack-one-fd-per-thread:
  index-pack: work around thread-unsafe pread()
2014-06-13  commit: record buffer length in cache  (Jeff King, 1 file, -1/+1)
Most callsites which use the commit buffer try to use the cached version attached to the commit, rather than re-reading from disk. Unfortunately, that interface provides only a pointer to the NUL-terminated buffer, with no indication of the original length. For the most part, this doesn't matter. People do not put NULs in their commit messages, and the log code is happy to treat it all as a NUL-terminated string. However, some code paths do care. For example, when checking signatures, we want to be very careful that we verify all the bytes to avoid malicious trickery. This patch just adds an optional "size" out-pointer to get_commit_buffer and friends. The existing callers all pass NULL (there did not seem to be any obvious sites where we could avoid an immediate strlen() call, though perhaps with some further refactoring we could). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
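A sketch of the optional out-parameter this adds (usage is illustrative; verify_bytes() is a hypothetical stand-in, and most callers simply pass NULL for the size):

    unsigned long size;
    const void *buf = get_commit_buffer(commit, &size);

    /* 'size' is the real buffer length, so a signature check can look at
       every byte instead of trusting a NUL terminator */
    verify_bytes(buf, size);

    unuse_commit_buffer(commit, buf);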
2014-06-13  provide a helper to free commit buffer  (Jeff King, 1 file, -1/+2)
This converts two lines into one at each caller. But more importantly, it abstracts the concept of freeing the buffer, which will make it easier to change later. Note that we also need to provide a "detach" mechanism for a tricky case in index-pack. We are passed a buffer for the object generated by processing the incoming pack. If we are not using --strict, we just calculate the sha1 on that buffer and return, leaving the caller to free it. But if we are using --strict, we actually attach that buffer to an object, pass the object to the fsck functions, and then detach the buffer from the object again (so that the caller can free it as usual). In this case, we don't want to free the buffer ourselves, but just make sure it is no longer associated with the commit. Note that we are making the assumption here that the attach/detach process does not impact the buffer at all (e.g., it is never reallocated or modified). That holds true now, and we have no plans to change that. However, as we abstract the commit_buffer code, this dependency becomes less obvious. So when we detach, let's also make sure that we get back the same buffer that we gave to the commit_buffer code. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-06  Merge branch 'jk/index-pack-report-missing'  (Junio C Hamano, 1 file, -2/+7)
* jk/index-pack-report-missing:
  index-pack: distinguish missing objects from type errors
2014-06-03  Merge branch 'nd/index-pack-one-fd-per-thread'  (Junio C Hamano, 1 file, -10/+17)
Enable threaded index-pack on platforms whose pread() emulation is not thread-safe.

* nd/index-pack-one-fd-per-thread:
  index-pack: work around thread-unsafe pread()
2014-06-03  Merge branch 'ym/fix-opportunistic-index-update-race'  (Junio C Hamano, 1 file, -1/+1)
Read-only operations such as "git status" that internally refresh the index write out the refreshed index to the disk to optimize future accesses to the working tree, but this could race with a "read-write" operation that modifies the index while it is running. Detect such a race and avoid overwriting the index.

Duy raised a good point that we may need to do the same for the normal writeout codepath, not just the "opportunistic" update codepath. While that is true, nobody sane would be running two simultaneous operations that are clearly write-oriented competing with each other against the same index file. So in that sense this can be done as a less urgent follow-up for this topic.

* ym/fix-opportunistic-index-update-race:
  read-cache.c: verify index file before we opportunistically update it
  wrapper.c: add xpread() similar to xread()
2014-05-12  index-pack: distinguish missing objects from type errors  (Jeff King, 1 file, -2/+7)
When we fetch a pack that does not contain an object we expected to receive, we get an error like:

    $ git init --bare tmp.git && cd tmp.git
    $ git fetch ../parent.git
    [...]
    error: Could not read 964953ec7bcc0245cb1d0db4095455edd21a2f2e
    fatal: Failed to traverse parents of commit b8247b40caf6704fe52736cdece6d6aae87471aa
    error: ../parent.git did not send all necessary objects

This comes from the check_everything_connected rev-list. If we try cloning the same repo (rather than a fetch), we end up using index-pack's --check-self-contained-and-connected option instead, which produces output like:

    $ git clone --no-local --bare parent.git tmp.git
    [...]
    fatal: object of unexpected type
    fatal: index-pack failed

Not only is the sha1 missing, but it's a misleading message. There's no type problem, but rather a missing object problem; we don't notice the difference because we simply compare OBJ_BAD != OBJ_BLOB. Let's provide a different message for this case:

    $ git clone --no-local --bare parent.git tmp.git
    fatal: did not receive expected object 6b00a8c61ed379d5f925a72c1987c9c52129d364
    fatal: index-pack failed

While we're at it, let's also improve a true type mismatch error to look like

    fatal: object 6b00a8c61ed379d5f925a72c1987c9c52129d364: expected type blob, got tree

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
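A sketch of the distinction being drawn in the error path (simplified; variable names and the exact conditions are illustrative, not the patch itself):

    /* illustrative: report "missing" and "wrong type" as different failures */
    if (type < 0)
        die("did not receive expected object %s", sha1_to_hex(sha1));
    if (type != expected_type)
        die("object %s: expected type %s, got %s",
            sha1_to_hex(sha1),
            typename(expected_type), typename(type));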