Age | Commit message | Author | Files | Lines |
|
* tr/sha1-file-silence-loose-object-info-under-prune-race:
sha1_file: silence sha1_loose_object_info
|
|
sha1_object_info() returns -1 (OBJ_BAD) if it cannot find the object
for some reason, which suggests that it wants the _caller_ to report
this error. However, part of its work happens in
sha1_loose_object_info, which _does_ report errors itself. This is
doubly strange because:
* packed_object_info(), which is the other half of the duo, does _not_
report this.
* In the event that an object is packed and pruned while
sha1_object_info_extended() goes looking for it, we would
erroneously show the error -- even though the code of the latter
function purports to handle this case gracefully.
* A caller might invoke sha1_object_info() to find the type of an
object even if that object is not known to exist.
Silence this error. The others remain untouched, as a corrupt object
is a much graver error than one that is merely absent.
Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* tr/unpack-entry-use-after-free-fix:
unpack_entry: avoid freeing objects in base cache
|
|
In the !delta_data error path of unpack_entry(), we run free(base).
This became a window for a use-after-free in abe601b (sha1_file:
remove recursion in unpack_entry, 2013-03-27), as follows:
Before abe601b, we got the 'base' from cache_or_unpack_entry(..., 0);
keep_cache=0 tells it to also remove that entry. So the 'base' is at
this point not cached, and freeing it in the error path is the right
thing.
After abe601b, the structure changed: we use a three-phase approach
where phase 1 finds the innermost base or a base that is already in
the cache. In phase 3 we therefore know that all bases we unpack are
not part of the delta cache yet. (Observe that we pop from the cache
in phase 1, so this is also true for the very first base.) So we make
no further attempts to look up the bases in the cache, and just call
add_delta_base_cache() on every base object we have assembled.
But the !delta_data error path remained unchanged, and now calls
free() on a base that has already been entered in the cache. This
means that there is a use-after-free if we later use the same base
again.
So remove that free(); we are still going to use that data.
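To make the ownership rule concrete, here is a minimal self-contained sketch (none of these names are git's; the cache and helpers are hypothetical stand-ins): once a buffer has been handed to a cache that may serve it out again, an error path must not free() it.

    #include <stdlib.h>
    #include <string.h>

    static void *cached;                  /* stand-in for the delta base cache */

    static void add_to_cache(void *data)  /* hypothetical helper */
    {
            cached = data;
    }

    static void *unpack_with_base(void *base, const void *delta)
    {
            add_to_cache(base);           /* the cache now owns 'base' */
            if (!delta)
                    /* BAD: free(base) here would leave a dangling pointer
                     * in the cache; just report the failure instead. */
                    return NULL;
            /* ... apply the delta on top of 'base' ... */
            return base;
    }

    int main(void)
    {
            char *base = strdup("base object data");
            unpack_with_base(base, NULL); /* error path: base stays cached */
            free(cached);                 /* freed exactly once, by the owner */
            return 0;
    }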
Reported-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Attempts to reduce the stack footprint of the sha1_object_info()
and unpack_entry() codepaths.
* tr/packed-object-info-wo-recursion:
sha1_file: remove recursion in unpack_entry
Refactor parts of in_delta_base_cache/cache_or_unpack_entry
sha1_file: remove recursion in packed_object_info
|
|
Have the streaming interface and other codepaths check more
carefully for corrupt objects.
* jk/check-corrupt-objects-carefully:
clone: leave repo in place after checkout errors
clone: run check_everything_connected
clone: die on errors from unpack_trees
add tests for cloning corrupted repositories
streaming_write_entry: propagate streaming errors
add test for streaming corrupt blobs
avoid infinite loop in read_istream_loose
read_istream_filtered: propagate read error from upstream
check_sha1_signature: check return value from read_istream
stream_blob_to_fd: detect errors reading from stream
|
|
* sw/safe-create-leading-dir-race:
safe_create_leading_directories: fix race that could give a false negative
|
|
It's possible for read_istream to return an error, in which
case we just end up in an infinite loop (aside from EOF, we
do not even look at the result, but just feed it straight
into our running hash).
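The fix boils down to checking the read result before using it; as a generic, self-contained sketch of the loop shape (this is not git's streaming API):

    #include <unistd.h>

    /* Returns 0 on success, -1 on a read error; loop shape only. */
    static int consume_fd(int fd)
    {
            char buf[4096];
            for (;;) {
                    ssize_t n = read(fd, buf, sizeof(buf));
                    if (n < 0)
                            return -1;   /* propagate the error instead of looping */
                    if (n == 0)
                            break;       /* EOF */
                    /* ... feed buf[0..n) into the running hash ... */
            }
            return 0;
    }

    int main(void)
    {
            return consume_fd(0) ? 1 : 0; /* e.g. feed data on stdin */
    }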
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Similar to the recursion in packed_object_info(), this leads to
problems on stack-space-constrained systems in the presence of long
delta chains.
We proceed in three phases:
1. Dig through the delta chain, saving each delta object's offsets and
size on an ad-hoc stack.
2. Unpack the base object at the bottom.
3. Unpack and apply the deltas from the stack.
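The shape of those phases can be illustrated with a self-contained toy (everything here is a stand-in: the "deltas" merely append one byte to their base, and the fixed 64-entry array mirrors the ad-hoc stack described above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct entry {
            struct entry *base;   /* NULL for a non-delta object */
            char delta;           /* toy "delta": one byte appended to the base */
            const char *payload;  /* only meaningful when base == NULL */
    };

    static char *unpack(struct entry *e)
    {
            struct entry *stack[64];      /* the ad-hoc delta stack */
            int depth = 0;
            char *data;

            /* Phase 1: dig through the delta chain, saving each delta. */
            while (e->base && depth < 64) {
                    stack[depth++] = e;
                    e = e->base;
            }

            /* Phase 2: unpack the base object at the bottom. */
            data = strdup(e->payload);

            /* Phase 3: apply the deltas from the stack, innermost first. */
            while (depth--) {
                    size_t len = strlen(data);
                    data = realloc(data, len + 2);
                    data[len] = stack[depth]->delta;
                    data[len + 1] = '\0';
            }
            return data;
    }

    int main(void)
    {
            struct entry base = { NULL, 0, "abc" };
            struct entry d1 = { &base, 'd', NULL };
            struct entry d2 = { &d1, 'e', NULL };
            char *result = unpack(&d2);
            printf("%s\n", result);       /* prints "abcde" */
            free(result);
            return 0;
    }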
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The delta base cache lookup and test were shared. Refactor them;
we'll need both parts again. Also, we'll use the clearing routine
later.
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If two processes are racing to create the same directory tree, they
will both see that the directory doesn't exist, both try to mkdir(),
and one of them will fail. This is okay, as we only care that the
directory gets created. So, we add a check for EEXIST from mkdir,
and continue when the directory exists, taking the same codepath as
the case where the earlier stat() succeeds and finds a directory.
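In outline, the fix amounts to the following (a stripped-down sketch, not the actual safe_create_leading_directories() code):

    #include <errno.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    /* Create 'path' if needed; losing the mkdir() race to another process
     * is fine as long as a directory ends up existing there. */
    static int ensure_dir(const char *path)
    {
            struct stat st;

            if (!stat(path, &st))
                    return S_ISDIR(st.st_mode) ? 0 : -1;
            if (!mkdir(path, 0777))
                    return 0;
            if (errno == EEXIST && !stat(path, &st) && S_ISDIR(st.st_mode))
                    return 0;             /* someone else created it first */
            return -1;
    }

    int main(void)
    {
            return ensure_dir("some-dir") ? 1 : 0;
    }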
Signed-off-by: Steven Walter <stevenrwalter@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
packed_object_info() and packed_delta_info() were mutually recursive.
The former would handle ordinary types and defer deltas to the latter;
the latter would use the former to resolve the delta base.
This arrangement, however, leads to trouble with threaded index-pack
and long delta chains on platforms where thread stacks are small, as
happened on OS X (512kB thread stacks by default) with the chromium
repo.
The task of the two functions is not all that hard to describe without
any recursion, however. It proceeds in three steps:
- determine the representation type and size, based on the outermost
object (delta or not)
- follow through the delta chain, if any
- determine the object type from what is found at the end of the delta
chain
The only complication stems from the error recovery. If parsing fails
at any step, we want to mark that object (within the pack) as bad and
try getting the corresponding SHA1 from elsewhere. If that also
fails, we want to repeat this process back up the delta chain until we
find a reasonable solution or conclude that there is no way to
reconstruct the object. (This is conveniently checked by t5303.)
To achieve that within the pack, we keep track of the entire delta
chain in a stack. When things go sour, we process that stack from the
top, marking entries as bad and attempting to re-resolve by sha1. To
avoid excessive malloc(), the stack starts out with a small
stack-allocated array. The choice of 64 is based on the default of
pack.depth, which is 50, in the hope that it covers "most" delta
chains without any need for malloc().
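That "small stack-allocated array first, malloc() only for deeper chains" idea looks roughly like this (a generic sketch; the element type, the constant 64, and the omitted allocation-failure handling are placeholders):

    #include <stdlib.h>
    #include <string.h>

    struct frame { long offset; unsigned long size; };

    int main(void)
    {
            struct frame small[64];          /* covers "most" chains without malloc() */
            struct frame *stack = small;
            size_t alloc = 64, nr = 0;
            long i;

            for (i = 0; i < 200; i++) {      /* pretend the chain is 200 deep */
                    if (nr == alloc) {       /* spill to the heap only when needed */
                            alloc *= 2;
                            if (stack == small) {
                                    stack = malloc(alloc * sizeof(*stack));
                                    memcpy(stack, small, sizeof(small));
                            } else {
                                    stack = realloc(stack, alloc * sizeof(*stack));
                            }
                    }
                    stack[nr].offset = i;
                    stack[nr].size = 0;
                    nr++;
            }

            if (stack != small)
                    free(stack);
            return 0;
    }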
It's much harder to make the actual re-resolving by sha1 nonrecursive,
so we skip that. If you can't afford *that* recursion, your
corruption problems are more serious than your stack size problems.
Reported-by: Stefan Zager <szager@google.com>
Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
prepare_packed_git_one() is modified to allow count-objects to hook in a
report function, so we don't need to duplicate the pack-searching
logic in count-objects.c. When report_pack_garbage is NULL, the
overhead is insignificant.
The garbage is reported with warning() instead of error() in the
packed-garbage case, because it's not an error to have garbage. Loose
garbage is still reported as an error and will be converted to a
warning later.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The current loop does
    while (...) {
        if (it is not an .idx file)
            continue;
        process .idx file;
    }
and is reordered to
    while (...) {
        if (it is an .idx file) {
            process .idx file;
        }
    }
This makes it easier to add new extension file processing.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Change link_alt_odb_entries() to take the length of the "alt"
parameter rather than a pointer to the end of the "alt" string. This
is the more common calling convention and simplifies the code a tiny
bit.
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jeff King <peff@peff.net>
|
|
Change link_alt_odb_entry() to take a NUL-terminated string instead of
(char *, len). Use string_list_split_in_place() rather than inline
code in link_alt_odb_entries().
This approach saves some code and also avoids the (probably harmless)
error of passing a non-NUL-terminated string to is_absolute_path().
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jeff King <peff@peff.net>
|
|
Not all platforms have getrlimit(), but there are other ways to see
the maximum number of files that a process can have open. If
getrlimit() is unavailable, fall back to sysconf(_SC_OPEN_MAX) if
available, and otherwise use OPEN_MAX from <limits.h>.
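A hedged sketch of that fallback chain (the preprocessor guards here are illustrative only and do not match git's actual build knobs):

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>
    #ifndef NO_GETRLIMIT                    /* illustrative guard */
    #include <sys/resource.h>
    #endif

    static unsigned int max_open_files(void)
    {
    #ifndef NO_GETRLIMIT
            struct rlimit lim;
            if (!getrlimit(RLIMIT_NOFILE, &lim))
                    return lim.rlim_cur;
    #endif
    #ifdef _SC_OPEN_MAX
            {
                    long open_max = sysconf(_SC_OPEN_MAX);
                    if (open_max > 0)
                            return open_max;
            }
    #endif
    #ifdef OPEN_MAX
            return OPEN_MAX;
    #else
            return 1;                       /* last resort */
    #endif
    }

    int main(void)
    {
            printf("%u\n", max_open_files());
            return 0;
    }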
Signed-off-by: Joachim Schmitz <jojo@schmitz-digital.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The code to avoid a mistaken attempt to add the object directory
itself as its own alternate could read beyond the end of a string
during comparison.
* hv/link-alt-odb-entry:
link_alt_odb_entry: fix read over array bounds reported by valgrind
|
|
pfxlen can be longer than the path in objdir when relative_base
contains the path to git's object directory. Here we are interested
in checking if ent->base[] (the part that corresponds to .git/objects)
is the same string as objdir, and the code NUL-terminated ent->base[]
to
LEADING PATH\0XX/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\0
in preparation for this "duplicate check" step (before we return
from the function, the first NUL is turned into '/' so that we can
fill XX when probing for loose objects). All we need to do is to
compare the string with the path to our object directory.
Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When peeking into object stores of submodules, the code forgot that they
might borrow objects from alternate object stores on their own.
By Heiko Voigt
* hv/submodule-alt-odb:
teach add_submodule_odb() to look for alternates
|
|
Since we allow linking other object databases when loading a submodule's
database, we should also load possible alternates.
Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When write_loose_object() finds that it is unable to
create a temporary file, it complains, for instance:
unable to create temporary sha1 filename : Too many open files
That extra space was supposed to be the name of the file,
and will be an empty string if git_mkstemps_mode() fails.
The name of the temporary file is unimportant; delete it.
Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The error handling routines add a newline. Remove
the duplicate ones in error messages.
Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Traditionally, all the callers of check_sha1_signature() first
called read_sha1_file() to prepare the whole object data in core,
and called this function. The function is used to revalidate that what
we read from the object database actually matches the object name we
used to ask for the data from the object database.
Update the API to allow callers to pass NULL as the object data, and
have the function read and hash the object data using streaming API
to recompute the object name, without having to hold everything in
core at the same time. This is most useful in parse_object() that
parses a blob object, because this caller does not have to keep the
actual blob data around in memory after a "struct blob" is returned.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* jk/maint-avoid-streaming-filtered-contents:
do not stream large files to pack when filters are in use
teach dry-run convert_to_git not to require a src buffer
teach convert_to_git a "dry run" mode
|
|
* jk/maint-avoid-streaming-filtered-contents:
do not stream large files to pack when filters are in use
teach dry-run convert_to_git not to require a src buffer
teach convert_to_git a "dry run" mode
|
|
Because git's object format requires us to specify the
number of bytes in the object in its header, we must know
the size before streaming a blob into the object database.
This is not a problem when adding a regular file, as we can
get the size from stat(). However, when filters are in use
(such as autocrlf, or the ident, filter, or eol
gitattributes), we have no idea what the ultimate size will
be.
The current code just punts on the whole issue and ignores
filter configuration entirely for files larger than
core.bigfilethreshold. This can generate confusing results
if you use filters for large binary files, as the filter
will suddenly stop working as the file goes over a certain
size. Rather than try to handle unknown input sizes with
streaming, this patch just turns off the streaming
optimization when filters are in use.
This has a slight performance regression in a very specific
case: if you have autocrlf on, but no gitattributes, a large
binary file will avoid the streaming code path because we
don't know beforehand whether it will need conversion or
not. But if you are handling large binary files, you should
be marking them as such via attributes (or at least not
using autocrlf, and instead marking your text files as
such). And the flip side is that if you have a large
_non_-binary file, there is a correctness improvement;
before we did not apply the conversion at all.
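For example (the paths and patterns below are illustrative), a .gitattributes along these lines keeps large binary assets away from text conversion regardless of core.autocrlf, and declares the real text files explicitly:

    # treat these large assets as binary: no text conversion, no diffs
    *.bin           binary
    assets/*.dat    -text -diff
    # and mark genuine text files explicitly rather than relying on autocrlf
    *.c             text
    *.h             text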
The first half of the new t1051 script covers these failures
on input. The second half tests the matching output code
paths. These already work correctly, and do not need any
adjustment.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* nd/find-pack-entry-recent-cache-invalidation:
find_pack_entry(): do not keep packed_git pointer locally
sha1_file.c: move the core logic of find_pack_entry() into fill_pack_entry()
|
|
* mm/empty-loose-error-message:
fsck: give accurate error message on empty loose object files
|
|
* nd/find-pack-entry-recent-cache-invalidation:
find_pack_entry(): do not keep packed_git pointer locally
sha1_file.c: move the core logic of find_pack_entry() into fill_pack_entry()
|
|
* mm/empty-loose-error-message:
fsck: give accurate error message on empty loose object files
|
|
Since 3ba7a065527a (A loose object is not corrupt if it
cannot be read due to EMFILE), "git fsck" on a repository with an empty
loose object file complains with the error message
fatal: failed to read object <sha1>: Invalid argument
This comes from a failure of mmap on this empty file, which sets errno to
EINVAL. Instead of calling xmmap on an empty file, we display a clean error
message ourselves, and return a NULL pointer. The new message is
error: object file .git/objects/09/<rest-of-sha1> is empty
fatal: loose object <sha1> (stored in .git/objects/09/<rest-of-sha1>) is corrupt
The second line was already there before the regression in 3ba7a065527a,
and the first is an additional message that should help the user
diagnose the problem.
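In outline, the check looks like this standalone sketch (the names and the exact error wording approximate the description above; this is not the actual git function):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Map a loose object file; report an empty file cleanly instead of
     * letting mmap() fail with EINVAL. */
    static void *map_object_file(const char *path, size_t *size)
    {
            struct stat st;
            void *map;
            int fd = open(path, O_RDONLY);

            if (fd < 0)
                    return NULL;
            if (fstat(fd, &st)) {
                    close(fd);
                    return NULL;
            }
            if (!st.st_size) {
                    fprintf(stderr, "error: object file %s is empty\n", path);
                    close(fd);
                    return NULL;
            }
            map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
            close(fd);
            if (map == MAP_FAILED)
                    return NULL;
            *size = st.st_size;
            return map;
    }

    int main(int argc, char **argv)
    {
            size_t size;
            return (argc > 1 && map_object_file(argv[1], &size)) ? 0 : 1;
    }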
Signed-off-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit f7c22cc (always start looking up objects in the last used pack
first - 2007-05-30) introduced a static packed_git* pointer as an
optimization. The kept pointer, however, may become invalid if
free_pack_by_name() happens to free that particular pack.
The current code base does not access packs after calling
free_pack_by_name(), so it should not be a problem. Anyway, move the
pointer out so that free_pack_by_name() can reset it, to avoid running
into trouble in the future.
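The hazard and the fix follow a familiar pattern, sketched here with placeholder names (this is not the actual packed_git code): a module-level "last used" pointer must be cleared by whatever code frees the object it points at.

    #include <stdlib.h>

    struct pack { int fd; };                /* placeholder for struct packed_git */

    static struct pack *last_found_pack;    /* the cached "last used pack" */

    static void free_pack(struct pack *p)
    {
            if (last_found_pack == p)
                    last_found_pack = NULL; /* invalidate the cache before freeing */
            free(p);
    }

    int main(void)
    {
            struct pack *p = calloc(1, sizeof(*p));
            last_found_pack = p;            /* remember the last hit as an optimization */
            free_pack(p);                   /* the cache is reset, no dangling pointer */
            return last_found_pack != NULL;
    }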
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The new helper function implements the logic to find the offset for the
object in one pack and fill a pack_entry structure. The next patch will
restructure the loop and will call the helper from two places.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
On Solaris the system headers define the "tmpfile" name, which'll
cause Git compiled with Sun Studio 12 Update 1 to whine about us
redefining the name:
"pack-write.c", line 76: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
"sha1_file.c", line 2455: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
"fast-import.c", line 858: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
"builtin/index-pack.c", line 175: warning: name redefined by pragma redefine_extname declared static: tmpfile (E_PRAGMA_REDEFINE_STATIC)
Just renaming the "tmpfile" variable to "tmp_file" in the relevant
places is the easiest way to fix this.
Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* jc/stream-to-pack:
bulk-checkin: replace fast-import based implementation
csum-file: introduce sha1file_checkpoint
finish_tmp_packfile(): a helper function
create_tmp_packfile(): a helper function
write_pack_header(): a helper function
Conflicts:
pack.h
|
|
* nd/misc-cleanups:
unpack_object_header_buffer(): clear the size field upon error
tree_entry_interesting: make use of local pointer "item"
tree_entry_interesting(): give meaningful names to return values
read_directory_recursive: reduce one indentation level
get_tree_entry(): do not call find_tree_entry() on an empty tree
tree-walk.c: do not leak internal structure in tree_entry_len()
|
|
* nd/misc-cleanups:
unpack_object_header_buffer(): clear the size field upon error
tree_entry_interesting: make use of local pointer "item"
tree_entry_interesting(): give meaningful names to return values
read_directory_recursive: reduce one indentation level
get_tree_entry(): do not call find_tree_entry() on an empty tree
tree-walk.c: do not leak internal structure in tree_entry_len()
|
|
This extends the earlier approach to stream a large file directly from the
filesystem to its own packfile, and allows "git add" to send large files
directly into a single pack. Older code used to spawn fast-import, but the
new bulk-checkin API replaces it.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The callers do not use the returned size when the function says
it did not use any bytes and sets the type to OBJ_BAD, so this
should not matter in practice, but it is good code hygiene anyway.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* jk/maint-pack-objects-compete-with-delete:
downgrade "packfile cannot be accessed" errors to warnings
pack-objects: protect against disappearing packs
|
|
These can happen if another process simultaneously prunes a
pack. But that is not usually an error condition, because a
properly-running prune should have repacked the object into
a new pack. So we will notice that the pack has disappeared
unexpectedly, print a message, try other packs (possibly
after re-scanning the list of packs), and find it in the new
pack.
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It's possible that while pack-objects is running, a
simultaneously running prune process might delete a pack
that we are interested in. Because we load the pack indices
early on, we know that the pack contains our item, but by
the time we try to open and map it, it is gone.
Since c715f78, we already protect against this in the normal
object access code path, but pack-objects accesses the packs
at a lower level. In the normal access path, we call
find_pack_entry, which will call find_pack_entry_one on each
pack index, which does the actual lookup. If it gets a hit,
we will actually open and verify the validity of the
matching packfile (using c715f78's is_pack_valid). If we
can't open it, we'll issue a warning and pretend that we
didn't find it, causing us to go on to the next pack (or on
to loose objects).
Furthermore, we will cache the descriptor to the opened
packfile. Which means that later, when we actually try to
access the object, we are likely to still have that packfile
opened, and won't care if it has been unlinked from the
filesystem.
Notice the "likely" above. If there is another pack access
in the interim, and we run out of descriptors, we could
close the pack. And then a later attempt to access the
closed pack could fail (we'll try to re-open it, of course,
but it may have been deleted). In practice, this doesn't
happen because we tend to look up items and then access them
immediately.
Pack-objects does not follow this code path. Instead, it
accesses the packs at a much lower level, using
find_pack_entry_one directly. This means we skip the
is_pack_valid check, and may end up with the name of a
packfile, but no open descriptor.
We can add the same is_pack_valid check here. Unfortunately,
the access patterns of pack-objects are not quite as nice
for keeping lookup and object access together. We look up
each object as we find out about it, and only later, when
writing the packfile, do we necessarily access it. Which
means that the opened packfile may be closed in the interim.
In practice, however, adding this check still has value, for
three reasons.
1. If you have a reasonable number of packs and/or a
reasonable file descriptor limit, you can keep all of
your packs open simultaneously. If this is the case,
then the race is impossible to trigger.
2. Even if you can't keep all packs open at once, you
may end up keeping the deleted one open (i.e., you may
get lucky).
3. The race window is shortened. You may notice early that
the pack is gone, and not try to access it. Triggering
the problem without this check means deleting the pack
any time after we read the list of index files, but
before we access the looked-up objects. Triggering it
with this check means deleting the pack after we do a lookup
(and successfully access
the packfile), but before we access the object. Which
is a smaller window.
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* wh/normalize-alt-odb-path:
sha1_file: normalize alt_odb path before comparing and storing
|
|
When it needs to compare and add an alt object path to the
alt_odb_list, we normalize this path first, since comparing
normalized paths makes it easy to get a correct result.
Use strbuf to replace some string operations, since it is cleaner and
safer.
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Hui Wang <Hui.Wang@windriver.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* jc/maint-clone-alternates:
clone: clone from a repository with relative alternates
clone: allow more than one --reference
Conflicts:
builtin/clone.c
|
|
* rt/zlib-smaller-window:
test: consolidate definition of $LF
Tolerate zlib deflation with window size < 32Kb
|
|
Cloning from a local repository blindly copies or hardlinks all the files
under the objects/ hierarchy. This results in two issues:
- If the repository cloned has an "objects/info/alternates" file, and the
  command line of clone specifies --reference, the ones specified on the
  command line get overwritten by the copy from the original repository.
- An entry in an "objects/info/alternates" file can specify the object
  stores it borrows objects from as a path relative to the "objects/"
  directory (see the example below). When cloning a repository with such
  an alternates file, if the new repository is not sitting next to the
  original repository, such relative paths need to be adjusted so that
  they can be used in the new repository.
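For example, an objects/info/alternates copied verbatim from the original repository might contain a relative entry like the following (layout hypothetical); it only resolves correctly from the original repository's objects/ directory, which is why the clone has to adjust it:

    # objects/info/alternates: a path relative to the objects/ directory
    # that contains this file
    ../../shared-base.git/objects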
This updates add_to_alternates_file() to take the path to the alternate
object store, including the "/objects" part at the end (earlier, it was
taking the path to $GIT_DIR and was adding "/objects" itself), as it is
technically possible to specify in the objects/info/alternates file the
path of a directory whose name does not end with "/objects".
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|