|
Update various codepaths to avoid manually-counted malloc().
* jk/tighten-alloc: (22 commits)
ewah: convert to REALLOC_ARRAY, etc
convert ewah/bitmap code to use xmalloc
diff_populate_gitlink: use a strbuf
transport_anonymize_url: use xstrfmt
git-compat-util: drop mempcpy compat code
sequencer: simplify memory allocation of get_message
test-path-utils: fix normalize_path_copy output buffer size
fetch-pack: simplify add_sought_entry
fast-import: simplify allocation in start_packfile
write_untracked_extension: use FLEX_ALLOC helper
prepare_{git,shell}_cmd: use argv_array
use st_add and st_mult for allocation size computation
convert trivial cases to FLEX_ARRAY macros
use xmallocz to avoid size arithmetic
convert trivial cases to ALLOC_ARRAY
convert manual allocations to argv_array
argv-array: add detach function
add helpers for allocating flex-array structs
harden REALLOC_ARRAY and xcalloc against size_t overflow
tree-diff: catch integer overflow in combine_diff_path allocation
...
|
|
REALLOC_ARRAY inherently involves a multiplication which can
overflow size_t, resulting in a much smaller buffer than we
think we've allocated. We can easily harden it by using
st_mult() to check for overflow. Likewise, we can add
ALLOC_ARRAY to do the same thing for xmalloc calls.
xcalloc() should already be fine, because it takes the two
factors separately, assuming the system calloc actually
checks for overflow. However, before we even hit the system
calloc(), we do our memory_limit_check, which involves a
multiplication. Let's check for overflow ourselves so that
this limit cannot be bypassed.
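For illustration, the hardened helpers boil down to something like this sketch (simplified from the description above; the exact in-tree code may differ):

    /* die unless a * b is representable in a size_t (note: a == 0 is fine) */
    static inline size_t st_mult(size_t a, size_t b)
    {
        if (a && b > SIZE_MAX / a)
            die("size_t overflow: %" PRIuMAX " * %" PRIuMAX,
                (uintmax_t)a, (uintmax_t)b);
        return a * b;
    }

    /* array-allocation helpers whose size computation cannot silently wrap */
    #define ALLOC_ARRAY(x, alloc)   (x) = xmalloc(st_mult(sizeof(*(x)), (alloc)))
    #define REALLOC_ARRAY(x, alloc) (x) = xrealloc((x), st_mult(sizeof(*(x)), (alloc)))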
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some codepaths used fopen(3) when opening a fixed path in $GIT_DIR
(e.g. COMMIT_EDITMSG) that is meant to be left after the command is
done. This however did not work well if the repository is set to
be shared with core.sharedRepository and the umask of the previous
user is tighter. They have been made to work better by calling
unlink(2) and retrying after fopen(3) fails with EPERM.
* js/fopen-harder:
Handle more file writes correctly in shared repos
commit: allow editing the commit message even in shared repos
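The fix boils down to a helper along the lines of this sketch (function name illustrative; the real implementation may differ):

    FILE *fopen_for_writing(const char *path)
    {
        FILE *ret = fopen(path, "w");

        if (!ret && errno == EPERM) {
            if (!unlink(path))
                ret = fopen(path, "w");
            else
                errno = EPERM;  /* report the original failure */
        }
        return ret;
    }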
|
|
Add a framework to spawn a group of processes in parallel, and use
it to run "git fetch --recurse-submodules" in parallel.
Rerolled and this seems to be a lot cleaner. The merge of the
earlier one to 'next' has been reverted.
* sb/submodule-parallel-fetch:
submodules: allow parallel fetching, add tests and documentation
fetch_populated_submodules: use new parallel job processing
run-command: add an asynchronous parallel child processor
sigchain: add command to pop all common signals
strbuf: add strbuf_read_once to read without blocking
xread: poll on non blocking fds
submodule.c: write "Fetching submodule <foo>" to stderr
|
|
It was pointed out by Yaroslav Halchenko that the file containing the
commit message is writable only by the owner, which means that we have
to rewrite it from scratch in a shared repository.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When getpwuid() on the system returned NULL (e.g. the user is not
in the /etc/passwd file or other uid-to-name mappings), the
codepath that finds out who the user is in order to record it in the
reflog barfed and died. Loosen the check in this codepath, which
already accepts a questionable ident string (e.g. one whose host part
of the e-mail address is obviously bogus), and in general when the
fmt_ident() function operates in non-strict mode.
* jk/ident-loosen-getpwuid:
ident: loosen getpwuid error in non-strict mode
ident: keep a flag for bogus default_email
ident: make xgetpwuid_self() a static local helper
|
|
The man page of read(2) says:
EAGAIN The file descriptor fd refers to a file other than a socket
and has been marked nonblocking (O_NONBLOCK), and the read
would block.
EAGAIN or EWOULDBLOCK
The file descriptor fd refers to a socket and has been marked
nonblocking (O_NONBLOCK), and the read would block. POSIX.1-2001
allows either error to be returned for this case, and does not
require these constants to have the same value, so a portable
application should check for both possibilities.
If we get an EAGAIN or EWOULDBLOCK, the fd must have O_NONBLOCK set.
As the intent of xread() is to read as much as possible, either until
the fd reaches EOF or an actual error occurs, we can go easy on the
feeder of the fd by not spinning the whole time, but instead waiting
for it politely rather than busy-waiting.
We should not care if the call to poll failed, as we're in an infinite
loop and can only get out with the correct read().
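Roughly, the resulting loop looks like this (sketch; simplified):

    /* sketch; needs <poll.h>, <errno.h>, <unistd.h>; clamping to MAX_IO_SIZE omitted */
    ssize_t xread(int fd, void *buf, size_t len)
    {
        while (1) {
            ssize_t nr = read(fd, buf, len);
            if (nr < 0) {
                if (errno == EINTR)
                    continue;
                if (errno == EAGAIN || errno == EWOULDBLOCK) {
                    struct pollfd pfd = { .fd = fd, .events = POLLIN };
                    /* ignore poll() errors; the next read() decides */
                    poll(&pfd, 1, -1);
                    continue;
                }
            }
            return nr;
        }
    }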
Signed-off-by: Stefan Beller <sbeller@google.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This function is defined in wrapper.c, but nobody besides
ident.c uses it. And nobody is likely to in the future,
either, as anything that cares about the user's name should
be going through the ident code.
Moving it here is a cleanup of the global namespace, but it
will also enable further cleanups inside ident.c.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There are a number of places in the code where we call
sprintf(), with the assumption that the output will fit into
the buffer. In many cases this is true (e.g., formatting a
number into a large buffer), but it is hard to tell
immediately from looking at the code. It would be nice if we
had some run-time check to make sure that our assumption is
correct (and to communicate to readers of the code that we
are not blindly calling sprintf, but have actually thought
about this case).
This patch introduces xsnprintf, which behaves just like
snprintf, except that it dies whenever the output is
truncated. This acts as a sort of assert() for these cases,
which can help find places where the assumption is violated
(as opposed to truncating and proceeding, which may just
silently give a wrong answer).
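A sketch of the helper (simplified; exact messages may differ):

    int xsnprintf(char *dst, size_t max, const char *fmt, ...)
    {
        va_list ap;
        int len;

        va_start(ap, fmt);
        len = vsnprintf(dst, max, fmt, ap);
        va_end(ap);

        if (len < 0)
            die("BUG: your snprintf is broken");
        if ((size_t)len >= max)
            die("BUG: attempt to snprintf into too-small buffer");
        return len;
    }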
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
All existing callers of this function use it to produce a text file
or an empty file, and a new callsite that mimics them must end its
payload with a LF. If it forgets to do so, the resulting file will
end with an incomplete line.
Teach write_file_v() to complete the incomplete line, if one exists,
so that the callers do not have to.
With this, the caller-side fix in builtin/am.c becomes unnecessary.
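The gist of the change, as a sketch (the real helper formats the payload into a strbuf before writing it out; the function name here is illustrative):

    /* hypothetical sketch: the payload has already been formatted into sb */
    static void complete_final_line(struct strbuf *sb)
    {
        if (sb->len && sb->buf[sb->len - 1] != '\n')
            strbuf_addch(sb, '\n');
    }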
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
All callers except three passed 1 for the "fatal" parameter to ask
this function to die upon error, but to a casual reader of the code
it was not at all obvious what that 1 meant. Instead, split the
function into two based on a common write_file_v() that takes the
flag, introduce write_file_gently() as a new way to attempt creating
a file without dying on error, and make those three callers call it.
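Sketched, the resulting pair of entry points would look roughly like this (simplified; assumes the write_file_v() signature described above):

    int write_file(const char *path, const char *fmt, ...)
    {
        va_list params;
        int status;

        va_start(params, fmt);
        status = write_file_v(path, 1, fmt, params);    /* 1: die on error */
        va_end(params);
        return status;
    }

    int write_file_gently(const char *path, const char *fmt, ...)
    {
        va_list params;
        int status;

        va_start(params, fmt);
        status = write_file_v(path, 0, fmt, params);    /* 0: report error to the caller */
        va_end(params);
        return status;
    }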
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A common usage pattern of fopen() is to check if it succeeded, and die()
if it failed:
FILE *fp = fopen(path, "w");
if (!fp)
die_errno(_("could not open '%s' for writing"), path);
Implement a wrapper function xfopen() for the above, so that we can save
a few lines of code and make the die() messages consistent.
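A sketch of the wrapper (simplified; the real version may also retry on EINTR and handle update modes like "r+"):

    FILE *xfopen(const char *path, const char *mode)
    {
        FILE *fp = fopen(path, mode);

        if (!fp) {
            if (*mode == 'w' || *mode == 'a')
                die_errno(_("could not open '%s' for writing"), path);
            die_errno(_("could not open '%s' for reading"), path);
        }
        return fp;
    }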
Helped-by: Jeff King <peff@peff.net>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A common usage pattern of open() is to check if it was successful, and
die() if it was not:
int fd = open(path, O_WRONLY | O_CREAT, 0777);
if (fd < 0)
die_errno(_("Could not open '%s' for writing."), path);
Implement a wrapper function xopen() that does the above so that we can
save a few lines of code, and make the die() messages consistent.
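A sketch of the wrapper (simplified; the real version tailors the message to the access mode and may retry on EINTR):

    int xopen(const char *path, int oflag, ...)
    {
        mode_t mode = 0;
        va_list ap;
        int fd;

        va_start(ap, oflag);
        if (oflag & O_CREAT)
            mode = va_arg(ap, int); /* the mode argument only matters with O_CREAT */
        va_end(ap);

        fd = open(path, oflag, mode);
        if (fd < 0)
            die_errno(_("could not open '%s'"), path);
        return fd;
    }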
Helped-by: Torsten Bögershausen <tboegi@web.de>
Helped-by: Jeff King <peff@peff.net>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We want to use the new function elsewhere in a moment.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A replacement for contrib/workdir/git-new-workdir that does not
rely on symbolic links and makes sharing of objects and refs safer
by making the borrowee and borrowers aware of each other.
* nd/multiple-work-trees: (41 commits)
prune --worktrees: fix expire vs worktree existence condition
t1501: fix test with split index
t2026: fix broken &&-chain
t2026 needs precondition SANITY
git-checkout.txt: a note about multiple checkout support for submodules
checkout: add --ignore-other-worktrees
checkout: pass whole struct to parse_branchname_arg instead of individual flags
git-common-dir: make "modules/" per-working-directory directory
checkout: do not fail if target is an empty directory
t2025: add a test to make sure grafts is working from a linked checkout
checkout: don't require a work tree when checking out into a new one
git_path(): keep "info/sparse-checkout" per work-tree
count-objects: report unused files in $GIT_DIR/worktrees/...
gc: support prune --worktrees
gc: factor out gc.pruneexpire parsing code
gc: style change -- no SP before closing parenthesis
checkout: clean up half-prepared directories in --to mode
checkout: reject if the branch is already checked out elsewhere
prune: strategies for linked checkouts
checkout: support checking out into a new working directory
...
|
|
Our default I/O size (8 MiB) for large files was too large for some
platforms with smaller SSIZE_MAX, leading to read(2)/write(2)
failures.
* jc/max-io-size-and-ssize-max:
xread/xwrite: clip MAX_IO_SIZE to SSIZE_MAX
|
|
Since 0b6806b9 (xread, xwrite: limit size of IO to 8MB, 2013-08-20),
we chop our calls to read(2) and write(2) into chunks of
MAX_IO_SIZE bytes (8 MiB), because a large IO results in bad
latency when the program needs to be killed. This also brought our
IO below SSIZE_MAX, the limit beyond which POSIX allows read(2) and
write(2) to fail, on OS X, where the problem was originally reported.
However, there are other systems that define SSIZE_MAX smaller than
our default, and feeding 8 MiB to the underlying read(2)/write(2)
would fail there. Make sure we clip our calls to the lower limit as
well.
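The clipping boils down to something like this sketch:

    /* sketch matching the description; the actual header may differ slightly */
    #define MAX_IO_SIZE_DEFAULT (8 * 1024 * 1024)
    #if defined(SSIZE_MAX) && (SSIZE_MAX < MAX_IO_SIZE_DEFAULT)
    # define MAX_IO_SIZE SSIZE_MAX
    #else
    # define MAX_IO_SIZE MAX_IO_SIZE_DEFAULT
    #endif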
Reported-by: Joachim Schmitz <jojo@schmitz-digital.de>
Helped-by: Torsten Bögershausen <tboegi@web.de>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This behaves like unlink_or_warn except that on failure it writes the message
to its 'err' argument, which the caller can display in an appropriate way or
ignore.
Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Simplify the function warn_if_unremovable() slightly, and also change
its behaviour slightly: if we failed to remove the object because the
object does not exist, we can still return success to the caller, since
none of the callers depend on "fail if the file did not exist".
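Roughly (sketch; a fuller version would also preserve errno across the warning() call):

    static int warn_if_unremovable(const char *op, const char *file, int rc)
    {
        if (!rc || errno == ENOENT)
            return 0;   /* removed, or it was not there to begin with */
        warning("unable to %s %s: %s", op, file, strerror(errno));
        return rc;
    }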
Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When running a required clean filter, we do not have to mmap the
original before feeding the filter. Instead, stream the file
contents directly to the filter and process its output.
* sp/stream-clean-filter:
sha1_file: don't convert off_t to size_t too early to avoid potential die()
convert: stream from fd to required clean filter to reduce used address space
copy_fd(): do not close the input file descriptor
mmap_limit: introduce GIT_MMAP_LIMIT to allow testing expected mmap size
memory_limit: use git_env_ulong() to parse GIT_ALLOC_LIMIT
config.c: add git_env_ulong() to parse environment variable
convert: drop arguments other than 'path' from would_convert_to_git()
|
|
Teach a few codepaths to punt (instead of dying) when large blobs
that would not fit in core are involved in the operation.
* nd/large-blobs:
diff: shortcut for diff'ing two binary SHA-1 objects
diff --stat: mark any file larger than core.bigfilethreshold binary
diff.c: allow to pass more flags to diff_populate_filespec
sha1_file.c: do not die failing to malloc in unpack_compressed_entry
wrapper.c: introduce gentle xmallocz that does not die()
|
|
GIT_ALLOC_LIMIT limits xmalloc()'s size, which is of type size_t.
Better to use git_env_ulong() to parse the environment variable, so
that the suffixes 'k', 'm', and 'g' can be used; and use size_t to
store the limit for consistency. The change to size_t has no direct
practical impact, because the environment variable is only meant to
be used for our own tests, and we use it to test small sizes.
The cast of size in the call to die() is changed to uintmax_t to
match the format string PRIuMAX.
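Roughly, the check then becomes (sketch; the SIZE_MAX sentinel is an illustration, not necessarily the in-tree code):

    static void memory_limit_check(size_t size)
    {
        static size_t limit;

        if (!limit) {
            limit = git_env_ulong("GIT_ALLOC_LIMIT", 0);
            if (!limit)
                limit = SIZE_MAX;   /* no limit configured */
        }
        if (size > limit)
            die("attempting to allocate %"PRIuMAX" over limit %"PRIuMAX,
                (uintmax_t)size, (uintmax_t)limit);
    }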
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Add the helper function xgetcwd(), which returns the current directory
or dies. The returned string has to be free()d after use.
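A sketch (assuming a strbuf_getcwd() helper; the actual implementation may differ):

    char *xgetcwd(void)
    {
        struct strbuf sb = STRBUF_INIT;

        if (strbuf_getcwd(&sb))
            die_errno(_("unable to get current working directory"));
        return strbuf_detach(&sb, NULL);    /* caller must free() */
    }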
Helped-by: Duy Nguyen <pclouds@gmail.com>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Before we proceed to opportunistically update the index (often done
by an otherwise read-only operation like "git status" and "git diff"
that internally refreshes the index), we must verify that the
current index file is the same as the one that we read earlier
before we took the lock on it, in order to avoid a possible race.
In the example below git-status does "opportunistic update" and
git-rebase updates the index, but the race can happen in general.
1. process A calls git-rebase (or does anything that uses the index)
2. process A applies 1st commit
3. process B calls git-status (or does anything that updates the index)
4. process B reads index
5. process A applies 2nd commit
6. process B takes the lock, then overwrites process A's changes.
7. process A applies 3rd commit
As an end result the 3rd commit will have a revert of the 2nd commit.
When process B takes the lock, it needs to make sure that the index
hasn't changed since step 4.
Signed-off-by: Yiannis Marangos <yiannis.marangos@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It is a common mistake to call read(2)/pread(2) and forget to
anticipate that they may return an error with EAGAIN/EINTR when the
system call is interrupted.
We have the xread() helper to relieve callers of read(2) from having
to worry about it; add an xpread() helper to do the same for pread(2).
Update the caller in builtin/index-pack.c and the mmap emulation
in compat/.
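A sketch of the new helper (mirrors xread(); simplified):

    ssize_t xpread(int fd, void *buf, size_t len, off_t offset)
    {
        ssize_t nr;

        if (len > MAX_IO_SIZE)
            len = MAX_IO_SIZE;
        while (1) {
            nr = pread(fd, buf, len, offset);
            if (nr < 0 && (errno == EAGAIN || errno == EINTR))
                continue;   /* retry instead of burdening every caller */
            return nr;
        }
    }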
Signed-off-by: Yiannis Marangos <yiannis.marangos@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When the NO_MKSTEMPS build variable is not set, the gitmkstemps
function is dead code. Use a preprocessor conditional to only include
the definition when needed.
Noticed by sparse. ("'gitmkstemps' was not declared. Should it be
static?")
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
Checking out 2GB or more through an external filter (see test) fails
on Mac OS X 10.8.4 (12E55) for a 64-bit executable with:
error: read from external filter cat failed
error: cannot feed the input to external filter cat
error: cat died of signal 13
error: external filter cat failed 141
error: external filter cat failed
The reason is that read() immediately returns with EINVAL when asked
to read more than 2GB. According to POSIX [1], if the value of
nbyte passed to read() is greater than SSIZE_MAX, the result is
implementation-defined. The write function has the same restriction
[2]. Since OS X still supports running 32-bit executables, the
32-bit limit (SSIZE_MAX = INT_MAX = 2GB - 1) seems to be also
imposed on 64-bit executables under certain conditions. For write,
the problem has been addressed earlier [6c642a].
Address the problem for read() and write() differently, by limiting
the size of IO chunks unconditionally on all platforms in xread() and
xwrite(). Large chunks only cause problems, such as latency when the
process needs to be killed, even if OS X were not buggy. Doing IO in
reasonably sized smaller chunks should have no negative impact on
performance.
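Concretely, the chunking is a clamp at the top of the wrapper, sketched here for xwrite() (xread() is analogous; simplified):

    ssize_t xwrite(int fd, const void *buf, size_t len)
    {
        ssize_t nr;

        if (len > MAX_IO_SIZE)
            len = MAX_IO_SIZE;  /* callers loop, so a short write is fine */
        while (1) {
            nr = write(fd, buf, len);
            if (nr < 0 && (errno == EAGAIN || errno == EINTR))
                continue;
            return nr;
        }
    }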
The compat wrapper clipped_write() introduced earlier [6c642a] is
not needed anymore. It will be reverted in a separate commit. The
new test catches read and write problems.
Note that 'git add' exits with 0 even if it prints filtering errors
to stderr. The test, therefore, checks stderr. 'git add' should
probably be changed (sometime in another commit) to exit with
nonzero if filtering fails. The test could then be changed to use
test_must_fail.
Thanks to the following people for suggestions and testing:
Johannes Sixt <j6t@kdbg.org>
John Keeping <john@keeping.me.uk>
Jonathan Nieder <jrnieder@gmail.com>
Kyle J. McKay <mackyle@gmail.com>
Linus Torvalds <torvalds@linux-foundation.org>
Torsten Bögershausen <tboegi@web.de>
[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html
[2] http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html
[6c642a] commit 6c642a878688adf46b226903858b53e2d31ac5c3
compate/clipped-write.c: large write(2) fails on Mac OS X/XNU
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Two places we did not check return value (expected to be a file
descriptor) correctly.
* tr/fd-gotcha-fixes:
run-command: dup_devnull(): guard against syscalls failing
git_mkstemps: correctly test return value of open()
|
|
open() returns -1 on failure, and indeed 0 is a possible success value
if the user closed stdin in our process. Fix the test.
Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The changes v1.7.12.1~2^2~4 (config: warn on inaccessible files,
2012-08-21) and v1.8.1.1~22^2~2 (config: treat user and xdg config
permission problems as errors, 2012-10-13) were intended to prevent
important configuration (think "[transfer] fsckobjects") from being
ignored when the configuration is unintentionally unreadable (for
example with EIO on a flaky filesystem, or with ENOMEM due to a DoS
attack). Usually ~/.gitconfig and ~/.config/git are readable by the
current user, and if they aren't then it would be easy to fix those
permissions, so the damage from adding this check should have been
minimal.
Unfortunately the access() check often trips when git is being run as
a server. A daemon (such as inetd or git-daemon) starts as "root",
creates a listening socket, and then drops privileges, meaning that
when git commands are invoked they cannot access $HOME and die with
fatal: unable to access '/root/.config/git/config': Permission denied
Any patch to fix this would have one of three problems:
1. We annoy sysadmins who need to take an extra step to handle HOME
when dropping privileges (the current behavior, or any other
proposal that they have to opt into).
2. We annoy sysadmins who want to set HOME when dropping privileges,
either by making what they want to do impossible, or making them
set an extra variable or option to accomplish what used to work
(e.g., a patch to git-daemon to set HOME when --user is passed).
3. We loosen the check, so some cases which might be noteworthy are
not caught.
This patch is of type (3).
Treat user and xdg configuration that are inaccessible due to
permissions (EACCES) as though no user configuration was provided at
all.
An alternative method would be to check if $HOME is readable, but that
would not help in cases where the user who dropped privileges had a
globally readable HOME with only .config or .gitconfig being private.
This does not change the behavior when /etc/gitconfig or .git/config
is unreadable (since those are more serious configuration errors),
nor when ~/.gitconfig or ~/.config/git is unreadable due to problems
other than permissions.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Improved-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When attempting to read the XDG-style $HOME/.config/git/config and
finding that $HOME/.config/git is a file, we gave a wrong error
message, instead of treating the case as "a custom config file does
not exist there" and moving on.
* jn/warn-on-inaccessible-loosen:
config: exit on error accessing any config file
doc: advertise GIT_CONFIG_NOSYSTEM
config: treat user and xdg config permission problems as errors
config, gitignore: failure to access with ENOTDIR is ok
|
|
Deal with a situation where .config/git is a file and we notice
.config/git/config is not readable due to ENOTDIR, not ENOENT.
* jn/warn-on-inaccessible-loosen:
config: exit on error accessing any config file
doc: advertise GIT_CONFIG_NOSYSTEM
config: treat user and xdg config permission problems as errors
config, gitignore: failure to access with ENOTDIR is ok
|
|
Some implementations of mkstemp(3) leave the given in/out buffer
truncated when they return with failure.
6cf6bb3 (Improve error messages when temporary file creation fails,
2010-12-18) attempted to show the real filename we tried to create
(but failed), and if that is not available due to such truncation,
to show the original template that was given by the caller.
But it failed to take into account that the given template could
have "directory/" in front, in which case the truncation point may
not be template[0] but somewhere else.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Git reads multiple configuration files: settings come first from the
system config file (typically /etc/gitconfig), then the xdg config
file (typically ~/.config/git/config), then the user's dotfile
(~/.gitconfig), then the repository configuration (.git/config).
Git has always used access(2) to decide whether to use each file; as
an unfortunate side effect, that means that if one of these files is
unreadable (e.g., EPERM or EIO), git skips it. So if I use
~/.gitconfig to override some settings but make a mistake and give it
the wrong permissions then I am subject to the settings the sysadmin
chose for /etc/gitconfig.
Better to error out and ask the user to correct the problem.
This only affects the user and xdg config files, since the user
presumably has enough access to fix their permissions. If the system
config file is unreadable, the best we can do is to warn about it so
the user knows to notify someone and get on with work in the meantime.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The access_or_warn() function is used to check for optional
configuration files like .gitconfig and .gitignore and warn when they
are not accessible due to a configuration issue (e.g., bad
permissions). It is not supposed to complain when a file is simply
missing.
Noticed on a system where ~/.config/git was a file --- when the new
XDG_CONFIG_HOME support looks for ~/.config/git/config it should
ignore ~/.config/git instead of printing irritating warnings:
$ git status -s
warning: unable to access '/home/jrn/.config/git/config': Not a directory
warning: unable to access '/home/jrn/.config/git/config': Not a directory
warning: unable to access '/home/jrn/.config/git/config': Not a directory
warning: unable to access '/home/jrn/.config/git/config': Not a directory
Compare v1.7.12.1~2^2 (attr:failure to open a .gitattributes file
is OK with ENOTDIR, 2012-09-13).
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The previous series introduced warnings in multiple places, but it
could become tiring to see the warning for the same path over and
over again during a single run of Git. By making just one function
responsible for issuing this warning, we could later choose to keep
track of the paths for which we have issued a warning (it would
involve a hash table of paths after running them through real_path()
or something) in order to reduce noise.
Right now we do not know if the noise reduction is necessary, but it
still would be a good code reduction/sharing anyway.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Before reading a config file, we check "!access(path, R_OK)"
to make sure that the file exists and is readable. If it's
not, then we silently ignore it.
For the case of ENOENT, this is fine, as the presence of the
file is optional. For other cases, though, it may indicate a
configuration error (e.g., not having permissions to read
the file). Let's print a warning in these cases to let the
user know.
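A sketch of the helper (simplified; exact warning text may differ):

    int access_or_warn(const char *path, int mode)
    {
        int ret = access(path, mode);

        if (ret && errno != ENOENT)
            warning("unable to access '%s': %s", path, strerror(errno));
        return ret;
    }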
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When getpwuid fails, we give a cute but cryptic message.
While it makes sense if you know that getpwuid or identity
functions are being called, this code is triggered behind
the scenes by quite a few git commands these days (e.g.,
receive-pack on a remote server might use it for a reflog;
the current message is hard to distinguish from an
authentication error). Let's switch to something that gives
a little more context.
While we're at it, we can factor out all of the
cut-and-pastes of the "you don't exist" message into a
wrapper function. Rather than provide xgetpwuid, let's make
it even more specific to just getting the passwd entry for
the current uid. That's the only way we use getpwuid anyway,
and it lets us make an even more specific error message.
The current message also fails to mention errno. While the
usual cause for getpwuid failing is that the user does not
exist, mentioning errno makes it easier to diagnose these
problems. Note that POSIX specifies that errno remain
untouched if the passwd entry does not exist (but will be
set on actual errors), whereas some systems will return
ENOENT or similar for a missing entry. We handle both cases
in our wrapper.
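A sketch of the wrapper (simplified; exact message wording may differ):

    static struct passwd *xgetpwuid_self(void)
    {
        struct passwd *pw;

        errno = 0;
        pw = getpwuid(getuid());
        if (!pw)
            die(_("unable to look up current user in the passwd file: %s"),
                errno ? strerror(errno) : _("no such user"));
        return pw;
    }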
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
New test cases list commands that should work when memory is
limited. All memory allocation functions (*) learn to reject any
allocation larger than $GIT_ALLOC_LIMIT if set.
(*) Not exactly all. Some places do not use x* functions, but
malloc/calloc directly, notably diff-delta. These code paths should
never be run on large blobs.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The read_in_full function repeatedly calls read() to fill a
buffer. If the first read() returns an error, we notify the
caller by returning the error. However, if we read some data
and then get an error on a subsequent read, we simply return
the amount of data that we did read, and the caller is
unaware of the error.
This makes the tradeoff that seeing the partial data is more
important than the fact that an error occurred. In practice,
this is generally not the case; we care more if an error
occurred, and should throw away any partial data.
I audited the current callers. In most cases, this will make
no difference at all, as they do:
if (read_in_full(fd, buf, size) != size)
error("short read");
However, it will help in a few cases:
1. In sha1_file.c:index_stream, we would fail to notice
errors in the incoming stream.
2. When reading symbolic refs in resolve_ref, we would
fail to notice errors and potentially use a truncated
ref name.
3. In various places, we will get much better error
messages. For example, callers of safe_read would
erroneously print "the remote end hung up unexpectedly"
instead of showing the read error.
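A sketch of read_in_full() with the error winning over partial data (simplified):

    ssize_t read_in_full(int fd, void *buf, size_t count)
    {
        char *p = buf;
        ssize_t total = 0;

        while (count > 0) {
            ssize_t loaded = xread(fd, p, count);
            if (loaded < 0)
                return -1;      /* report the error even if some data was read */
            if (loaded == 0)
                return total;   /* EOF */
            count -= loaded;
            p += loaded;
            total += loaded;
        }
        return total;
    }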
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Rename the make_*_path functions so it's clearer what they do, in
particular make clear what the difference between make_absolute_path
and make_nonrelative_path is by renaming them real_path and
absolute_path respectively. make_relative_path has an understandable
name and is renamed to relative_path to maintain the naming
convention.
The function calls have been replaced 1-to-1 in their usage.
Signed-off-by: Carlos Martín Nieto <cmn@elego.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* maint:
compat: helper for detecting unsigned overflow
|
|
The idiom (a + b < a) works fine for detecting that an unsigned
integer has overflowed, but a more explicit
unsigned_add_overflows(a, b)
might be easier to read.
Define such a macro, expanding roughly to ((b) > UINT_MAX - (a)).
Because the expansion uses each argument only once outside of sizeof()
expressions, it is safe to use with arguments that have side effects.
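Roughly (sketch; the real macro uses a helper to compute the maximum value of a's type instead of hard-coding UINT_MAX):

    #define bitsizeof(x)  (CHAR_BIT * sizeof(x))

    /* largest value representable in the (unsigned) type of a */
    #define maximum_unsigned_value_of_type(a) \
        (UINTMAX_MAX >> (bitsizeof(uintmax_t) - bitsizeof(a)))

    /* true if a + b would wrap around in the type of a */
    #define unsigned_add_overflows(a, b) \
        ((b) > maximum_unsigned_value_of_type(a) - (a))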
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Before, when creating a temporary file failed, a generic 'Unable to create
temporary file' message was printed. In some cases this could lead to
confusion as to which directory should be checked for correct permissions etc.
This patch adds the template for the temporary filename to the error message,
converting it to an absolute path if needed. A test verifies that the template
is indeed printed when pointing to a nonexistent or unwritable directory.
A copy of the original template is made in case mkstemp clears the template.
Signed-off-by: Arnout Engelen <arnouten@bzzt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This way, the next caller that wants to disable our memory reclamation
machinery does not have to define its own do_nothing() stub.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
As v1.7.0-rc0~43 (slim down "git show-index", 2010-01-21) explains,
use of xmalloc() brings in a dependency on zlib, the sha1 lib, and the
rest of git's object file access machinery via try_to_free_pack_memory.
That is overkill when xmalloc is just being used as a convenience
wrapper to exit when no memory is available.
So defer setting try_to_free_pack_memory as try_to_free_routine until
the first packfile is opened in add_packed_git().
After this change, a simple program using xmalloc() and no other
functions will not pull in any code from libgit.a aside from wrapper.o
and usage.o.
Improved-by: René Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|