gethostname(2) may not NUL terminate the buffer if the hostname does
not fit; unfortunately there is no easy way to see if our buffer was
too small, but at least this makes sure we will not end up using
garbage past the end of the buffer.
* dt/xgethostname-nul-termination:
xgethostname: handle long hostnames
use HOST_NAME_MAX to size buffers for gethostname(2)
|
|
If the full hostname doesn't fit in the buffer supplied to
gethostname, POSIX does not specify whether the buffer will be
null-terminated, so to be safe, we should do it ourselves. Introduce a
new function, xgethostname, which ensures that there is always a \0
at the end of the buffer.
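The wrapper amounts to little more than this (a sketch; the final
patch may differ in details):

    int xgethostname(char *buf, size_t len)
    {
            /*
             * If the full hostname doesn't fit, gethostname(2) is
             * allowed to leave the buffer without a trailing NUL,
             * so add one ourselves after a successful call.
             */
            int ret = gethostname(buf, len);
            if (!ret)
                    buf[len - 1] = 0;
            return ret;
    }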
Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The last call to the mkstemps() function was removed in commit 659488326
("wrapper.c: delete dead function git_mkstemps()", 22-04-2016). In order
to support platforms without mkstemps(), this functionality was provided,
along with a Makefile build variable (NO_MKSTEMPS), by the gitmkstemps()
function. Remove the dead code, along with the defunct build machinery.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The last caller of git_mkstemp() was removed in commit 6fec0a89
("verify_signed_buffer: use tempfile object", 16-06-2016). Since
the introduction of the 'tempfile' APIs, along with git_mkstemp_mode,
it is unlikely that new callers will materialize. Remove the dead
code.
Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
General code clean-up around a helper function to write a
single-liner to a file.
* jk/write-file:
branch: use write_file_buf instead of write_file
use write_file_buf where applicable
write_file: add format attribute
write_file: add pointer+len variant
write_file: use xopen
write_file: drop "gently" form
branch: use non-gentle write_file for branch description
am: ignore return value of write_file()
config: fix bogus fd check when setting up default config
|
|
Fix recently introduced codepaths that are involved in parallel
submodule operations, which gave up on reading too early, and
could have wasted CPU while attempting to write under a corner
case condition.
* sb/submodule-parallel-fetch:
hoist out handle_nonblock function for xread and xwrite
xwrite: poll on non-blocking FDs
xread: retry after poll on EAGAIN/EWOULDBLOCK
|
|
At least for me, this improves the readability of xread and
xwrite; hopefully allowing missing "continue" statements to
be spotted more easily.
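Hoisted out, the helper looks roughly like this (a sketch; it relies
on the usual git-compat-util.h includes for poll()):

    static int handle_nonblock(int fd, short poll_events, int err)
    {
            struct pollfd pfd;

            if (err != EAGAIN && err != EWOULDBLOCK)
                    return 0;

            pfd.fd = fd;
            pfd.events = poll_events;

            /*
             * No need to check the return value of poll() here; a
             * subsequent read()/write() will detect unrecoverable
             * errors.
             */
            poll(&pfd, 1, -1);
            return 1;
    }

so that both xread() and xwrite() can say
"if (handle_nonblock(fd, POLLIN /* or POLLOUT */, errno)) continue;"
instead of open-coding the EAGAIN/EWOULDBLOCK handling twice.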
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There are many callsites which could use write_file, but for
which it is a little awkward because they have a strbuf or
other pointer/len combo. Specifically:
1. write_file() takes a format string, so we have to use
"%s" or "%.*s", which are ugly.
2. Using any form of "%s" does not handle embedded NULs in
the output. That probably doesn't matter for our
call-sites, but it's nicer not to have to worry.
3. It's less efficient; we format into another strbuf
just to do the write. That's probably not measurably
slow for our uses, but it's simply inelegant.
We can fix this by providing a helper to write out the
formatted buffer, and just calling it from write_file().
Note that we don't do the usual "complete with a newline"
that write_file does. If the caller has their own buffer,
there's a reasonable chance they're doing something more
complicated than a single line, and they can call
strbuf_complete_line() themselves.
We could go even further and add strbuf_write_file(), but it
doesn't save much:
- write_file_buf(path, sb.buf, sb.len);
+ strbuf_write_file(&sb, path);
It would also be somewhat asymmetric with strbuf_read_file,
which actually returns errors rather than dying (and the
error handling is most of the benefit of write_file() in the
first place).
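For reference, the new helper boils down to something like this
(a sketch; error messages and flags follow the existing write_file()
conventions):

    void write_file_buf(const char *path, const char *buf, size_t len)
    {
            int fd = xopen(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);
            if (write_in_full(fd, buf, len) != len)
                    die_errno(_("could not write to %s"), path);
            if (close(fd))
                    die_errno(_("could not close %s"), path);
    }

and write_file() then formats its arguments into a strbuf and calls
write_file_buf(path, sb.buf, sb.len).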
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This simplifies the code a tiny bit, and makes the error messages
consistent with other users of xopen().
While we're here, let's also switch to using O_WRONLY. We
know we're only going to open/write/close the file, so
there's no point in asking for O_RDWR.
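The change amounts to roughly the following (illustrative; the exact
hunk may differ):

    /* before: open() plus a hand-rolled check, asking for O_RDWR */
    fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0666);
    if (fd < 0)
            die_errno(_("could not open %s for writing"), path);

    /* after: xopen() dies with a consistent message for us */
    fd = xopen(path, O_WRONLY | O_CREAT | O_TRUNC, 0666);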
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There are no callers left of write_file_gently(). Let's drop
it, as it doesn't seem likely for new callers to be added
(since its inception, the only callers who wanted the gentle
form generally just died immediately themselves, and have
since been converted).
While we're there, let's also drop the "int" return from
write_file, as it is never meaningful (in the non-gentle
form, we always either die or return 0).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
write(2) can hit the same EAGAIN/EWOULDBLOCK errors as read(2),
so busy-looping on a non-blocking FD is a waste of resources.
Currently, I do not know of a way for this to happen:
* the NonBlocking directive in systemd does not apply to stdin,
stdout, or stderr.
* xinetd provides no way to set the non-blocking flag at all
But theoretically, it's possible that a careless C10K HTTP server
could use pipe2(..., O_NONBLOCK) to set up a pipe for
git-http-backend with only the intent to use non-blocking reads,
but accidentally leave non-blocking set on the write end passed
as stdout to git-upload-pack.
Followup-to: 1079c4be0b720 ("xread: poll on non blocking fds")
Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We should continue to loop after EAGAIN/EWOULDBLOCK as the
intent of xread is to try until there is available data,
EOF, or an unrecoverable error.
Fixes: 1079c4be0b720 ("xread: poll on non blocking fds")
Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The code for warning_errno/die_errno has been refactored and a new
error_errno() reporting helper is introduced.
* nd/error-errno: (41 commits)
wrapper.c: use warning_errno()
vcs-svn: use error_errno()
upload-pack.c: use error_errno()
unpack-trees.c: use error_errno()
transport-helper.c: use error_errno()
sha1_file.c: use {error,die,warning}_errno()
server-info.c: use error_errno()
sequencer.c: use error_errno()
run-command.c: use error_errno()
rerere.c: use error_errno() and warning_errno()
reachable.c: use error_errno()
mailmap.c: use error_errno()
ident.c: use warning_errno()
http.c: use error_errno() and warning_errno()
grep.c: use error_errno()
gpg-interface.c: use error_errno()
fast-import.c: use error_errno()
entry.c: use error_errno()
editor.c: use error_errno()
diff-no-index.c: use error_errno()
...
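A typical conversion in this series looks something like the
following (hypothetical call site):

    /* before: the caller spells out strerror(errno) by hand */
    if (rename(src, dst))
            return error("unable to rename '%s': %s", src, strerror(errno));

    /* after: error_errno() appends the strerror(errno) part itself */
    if (rename(src, dst))
            return error_errno("unable to rename '%s'", src);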
|
|
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Its last call site was replaced by mks_tempfile_ts() in 284098f (diff:
use tempfile module - 2015-08-12) and there's a good chance
mks_tempfile_ts will continue to successfully handle this job. Delete
it.
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Update various codepaths to avoid manually-counted malloc().
* jk/tighten-alloc: (22 commits)
ewah: convert to REALLOC_ARRAY, etc
convert ewah/bitmap code to use xmalloc
diff_populate_gitlink: use a strbuf
transport_anonymize_url: use xstrfmt
git-compat-util: drop mempcpy compat code
sequencer: simplify memory allocation of get_message
test-path-utils: fix normalize_path_copy output buffer size
fetch-pack: simplify add_sought_entry
fast-import: simplify allocation in start_packfile
write_untracked_extension: use FLEX_ALLOC helper
prepare_{git,shell}_cmd: use argv_array
use st_add and st_mult for allocation size computation
convert trivial cases to FLEX_ARRAY macros
use xmallocz to avoid size arithmetic
convert trivial cases to ALLOC_ARRAY
convert manual allocations to argv_array
argv-array: add detach function
add helpers for allocating flex-array structs
harden REALLOC_ARRAY and xcalloc against size_t overflow
tree-diff: catch integer overflow in combine_diff_path allocation
...
|
|
REALLOC_ARRAY inherently involves a multiplication which can
overflow size_t, resulting in a much smaller buffer than we
think we've allocated. We can easily harden it by using
st_mult() to check for overflow. Likewise, we can add
ALLOC_ARRAY to do the same thing for xmalloc calls.
xcalloc() should already be fine, because it takes the two
factors separately, assuming the system calloc actually
checks for overflow. However, before we even hit the system
calloc(), we do our memory_limit_check, which involves a
multiplication. Let's check for overflow ourselves so that
this limit cannot be bypassed.
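Conceptually the hardened pieces look like this (a sketch, not the
exact macro text in the tree):

    static inline size_t st_mult(size_t a, size_t b)
    {
            /* refuse to return a wrapped-around product */
            if (a && b > SIZE_MAX / a)
                    die("size_t overflow: %"PRIuMAX" * %"PRIuMAX,
                        (uintmax_t)a, (uintmax_t)b);
            return a * b;
    }

    #define ALLOC_ARRAY(x, alloc) (x) = xmalloc(st_mult(sizeof(*(x)), (alloc)))
    #define REALLOC_ARRAY(x, alloc) (x) = xrealloc((x), st_mult(sizeof(*(x)), (alloc)))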
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some codepaths used fopen(3) when opening a fixed path in $GIT_DIR
(e.g. COMMIT_EDITMSG) that is meant to be left after the command is
done. This however did not work well if the repository is set to
be shared with core.sharedRepository and the umask of the previous
user is tighter. They have been made to work better by calling
unlink(2) and retrying after fopen(3) fails with EPERM.
* js/fopen-harder:
Handle more file writes correctly in shared repos
commit: allow editing the commit message even in shared repos
|
|
Add a framework to spawn a group of processes in parallel, and use
it to run "git fetch --recurse-submodules" in parallel.
Rerolled and this seems to be a lot cleaner. The merge of the
earlier one to 'next' has been reverted.
* sb/submodule-parallel-fetch:
submodules: allow parallel fetching, add tests and documentation
fetch_populated_submodules: use new parallel job processing
run-command: add an asynchronous parallel child processor
sigchain: add command to pop all common signals
strbuf: add strbuf_read_once to read without blocking
xread: poll on non blocking fds
submodule.c: write "Fetching submodule <foo>" to stderr
|
|
It was pointed out by Yaroslav Halchenko that the file containing the
commit message is writable only by the owner, which means that we have
to rewrite it from scratch in a shared repository.
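The workaround is a small fopen() wrapper along these lines (a
sketch; the helper is added to wrapper.c in this series):

    FILE *fopen_for_writing(const char *filename)
    {
            FILE *ret = fopen(filename, "w");

            if (!ret && errno == EPERM) {
                    /*
                     * The file exists but we cannot write to it
                     * (e.g. it was created by another user in a
                     * shared repository); remove it and create it
                     * from scratch instead.
                     */
                    if (!unlink(filename))
                            ret = fopen(filename, "w");
                    else
                            errno = EPERM;
            }
            return ret;
    }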
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When getpwuid() on the system returned NULL (e.g. the user is not
in the /etc/passwd file or other uid-to-name mappings), the
codepath to find out who the user is, in order to record it in the
reflog, barfed and died. Loosen the check in this codepath, which
already accepts a questionable ident string (e.g. the host part of
the e-mail address is obviously bogus), and in general when we run
the fmt_ident() function in non-strict mode.
* jk/ident-loosen-getpwuid:
ident: loosen getpwuid error in non-strict mode
ident: keep a flag for bogus default_email
ident: make xgetpwuid_self() a static local helper
|
|
The man page of read(2) says:
EAGAIN The file descriptor fd refers to a file other than a socket
and has been marked nonblocking (O_NONBLOCK), and the read
would block.
EAGAIN or EWOULDBLOCK
The file descriptor fd refers to a socket and has been marked
nonblocking (O_NONBLOCK), and the read would block. POSIX.1-2001
allows either error to be returned for this case, and does not
require these constants to have the same value, so a portable
application should check for both possibilities.
If we get EAGAIN or EWOULDBLOCK, the fd must have O_NONBLOCK set.
As the intent of xread is to read as much as possible, either until
the fd hits EOF or an actual error occurs, we can ease the load on
whatever is feeding the fd by waiting for it politely with poll()
instead of busy-waiting.
We should not care if the call to poll() fails, as we are in an
infinite loop and can only get out through the read() call itself.
Signed-off-by: Stefan Beller <sbeller@google.com>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This function is defined in wrapper.c, but nobody besides
ident.c uses it. And nobody is likely to in the future,
either, as anything that cares about the user's name should
be going through the ident code.
Moving it here is a cleanup of the global namespace, but it
will also enable further cleanups inside ident.c.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There are a number of places in the code where we call
sprintf(), with the assumption that the output will fit into
the buffer. In many cases this is true (e.g., formatting a
number into a large buffer), but it is hard to tell
immediately from looking at the code. It would be nice if we
had some run-time check to make sure that our assumption is
correct (and to communicate to readers of the code that we
are not blindly calling sprintf, but have actually thought
about this case).
This patch introduces xsnprintf, which behaves just like
snprintf, except that it dies whenever the output is
truncated. This acts as a sort of assert() for these cases,
which can help find places where the assumption is violated
(as opposed to truncating and proceeding, which may just
silently give a wrong answer).
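Sketched out, the wrapper is little more than (details may differ):

    int xsnprintf(char *dst, size_t max, const char *fmt, ...)
    {
            va_list ap;
            int len;

            va_start(ap, fmt);
            len = vsnprintf(dst, max, fmt, ap);
            va_end(ap);

            if (len < 0)
                    die("BUG: your snprintf is broken");
            if (len >= max)
                    die("BUG: attempt to snprintf into too-small buffer");
            return len;
    }

so that a call like xsnprintf(buf, sizeof(buf), "%u", value) both
documents and enforces the assumption that the result fits.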
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
All existing callers of this function use it to produce a text file
or an empty file, and a new callsite that mimics them must end its
payload with a LF. If it forgets to do so, the resulting file will
end with an incomplete line.
Teach write_file_v() to complete the incomplete line, if there is
one, so that the callers do not have to.
With this, the caller-side fix in builtin/am.c becomes unnecessary.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
All callers except three passed 1 for the "fatal" parameter to ask
this function to die upon error, but to a casual reader of the code,
it was not at all obvious what that 1 meant. Instead, split the
function into two based on a common write_file_v() that takes the
flag, introduce write_file_gently() as a new way to attempt creating
a file without dying on error, and convert those three callers to
call it.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A common usage pattern of fopen() is to check if it succeeded, and die()
if it failed:
FILE *fp = fopen(path, "w");
if (!fp)
die_errno(_("could not open '%s' for writing"), path);
Implement a wrapper function xfopen() for the above, so that we can save
a few lines of code and make the die() messages consistent.
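The wrapper itself is tiny (a sketch; the real version can also
tailor the message to the mode and retry on EINTR):

    FILE *xfopen(const char *path, const char *mode)
    {
            FILE *fp = fopen(path, mode);
            if (!fp)
                    die_errno(_("could not open '%s' (mode '%s')"), path, mode);
            return fp;
    }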
Helped-by: Jeff King <peff@peff.net>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A common usage pattern of open() is to check if it was successful, and
die() if it was not:
int fd = open(path, O_WRONLY | O_CREAT, 0777);
if (fd < 0)
die_errno(_("Could not open '%s' for writing."), path);
Implement a wrapper function xopen() that does the above so that we can
save a few lines of code, and make the die() messages consistent.
Helped-by: Torsten Bögershausen <tboegi@web.de>
Helped-by: Jeff King <peff@peff.net>
Helped-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We want to use the new function elsewhere in a moment.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
A replacement for contrib/workdir/git-new-workdir that does not
rely on symbolic links and makes sharing of objects and refs safer
by making the borrowee and borrowers aware of each other.
* nd/multiple-work-trees: (41 commits)
prune --worktrees: fix expire vs worktree existence condition
t1501: fix test with split index
t2026: fix broken &&-chain
t2026 needs procondition SANITY
git-checkout.txt: a note about multiple checkout support for submodules
checkout: add --ignore-other-wortrees
checkout: pass whole struct to parse_branchname_arg instead of individual flags
git-common-dir: make "modules/" per-working-directory directory
checkout: do not fail if target is an empty directory
t2025: add a test to make sure grafts is working from a linked checkout
checkout: don't require a work tree when checking out into a new one
git_path(): keep "info/sparse-checkout" per work-tree
count-objects: report unused files in $GIT_DIR/worktrees/...
gc: support prune --worktrees
gc: factor out gc.pruneexpire parsing code
gc: style change -- no SP before closing parenthesis
checkout: clean up half-prepared directories in --to mode
checkout: reject if the branch is already checked out elsewhere
prune: strategies for linked checkouts
checkout: support checking out into a new working directory
...
|
|
Our default I/O size (8 MiB) for large files was too large for some
platforms with smaller SSIZE_MAX, leading to read(2)/write(2)
failures.
* jc/max-io-size-and-ssize-max:
xread/xwrite: clip MAX_IO_SIZE to SSIZE_MAX
|
|
Since 0b6806b9 (xread, xwrite: limit size of IO to 8MB, 2013-08-20),
we chop our calls to read(2) and write(2) into chunks of at most
MAX_IO_SIZE bytes (8 MiB), because a single large IO results in bad
latency when the program needs to be killed. This also brought our
IO size below SSIZE_MAX, the limit beyond which POSIX allows read(2)
and write(2) to fail, for OS X, where the problem was originally
reported.
However, there are other systems that define SSIZE_MAX smaller than
our default, and feeding 8 MiB to underlying read(2)/write(2) would
fail. Make sure we clip our calls to the lower limit as well.
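The clipping can happen entirely at compile time, roughly like this:

    #define MAX_IO_SIZE_DEFAULT (8*1024*1024)
    #if defined(SSIZE_MAX) && (SSIZE_MAX < MAX_IO_SIZE_DEFAULT)
    #define MAX_IO_SIZE SSIZE_MAX
    #else
    #define MAX_IO_SIZE MAX_IO_SIZE_DEFAULT
    #endif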
Reported-by: Joachim Schmitz <jojo@schmitz-digital.de>
Helped-by: Torsten Bögershausen <tboegi@web.de>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This behaves like unlink_or_warn except that on failure it writes the message
to its 'err' argument, which the caller can display in an appropriate way or
ignore.
Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Reviewed-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Simplify the function warn_if_unremovable slightly. Additionally,
change its behaviour slightly: if we failed to remove the object
because it does not exist, we can still return success to the
caller, since none of the callers depend on "fail if the file did
not exist".
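After the change, the helper reads approximately (a sketch):

    static int warn_if_unremovable(const char *op, const char *file, int rc)
    {
            int err;
            if (!rc || errno == ENOENT)
                    return 0;       /* "already gone" counts as success */
            err = errno;
            warning("unable to %s %s: %s", op, file, strerror(errno));
            errno = err;
            return rc;
    }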
Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When running a required clean filter, we do not have to mmap the
original before feeding the filter. Instead, stream the file
contents directly to the filter and process its output.
* sp/stream-clean-filter:
sha1_file: don't convert off_t to size_t too early to avoid potential die()
convert: stream from fd to required clean filter to reduce used address space
copy_fd(): do not close the input file descriptor
mmap_limit: introduce GIT_MMAP_LIMIT to allow testing expected mmap size
memory_limit: use git_env_ulong() to parse GIT_ALLOC_LIMIT
config.c: add git_env_ulong() to parse environment variable
convert: drop arguments other than 'path' from would_convert_to_git()
|
|
Teach a few codepaths to punt (instead of dying) when large blobs
that would not fit in core are involved in the operation.
* nd/large-blobs:
diff: shortcut for diff'ing two binary SHA-1 objects
diff --stat: mark any file larger than core.bigfilethreshold binary
diff.c: allow to pass more flags to diff_populate_filespec
sha1_file.c: do not die failing to malloc in unpack_compressed_entry
wrapper.c: introduce gentle xmallocz that does not die()
|
|
GIT_ALLOC_LIMIT limits xmalloc()'s size, which is of type size_t.
It is better to use git_env_ulong() to parse the environment
variable, so that the suffixes 'k', 'm', and 'g' can be used; and
use size_t to
store the limit for consistency. The change to size_t has no direct
practical impact, because the environment variable is only meant to
be used for our own tests, and we use it to test small sizes.
The cast of size in the call to die() is changed to uintmax_t to
match the format string PRIuMAX.
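With that, the limit check boils down to something like (a sketch):

    static void memory_limit_check(size_t size)
    {
            static size_t limit;

            if (!limit) {
                    /* parse e.g. "512m" or "1g" once; 0 means "no limit" */
                    limit = git_env_ulong("GIT_ALLOC_LIMIT", 0);
                    if (!limit)
                            limit = SIZE_MAX;
            }
            if (size > limit)
                    die("attempting to allocate %"PRIuMAX" over limit %"PRIuMAX,
                        (uintmax_t)size, (uintmax_t)limit);
    }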
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Add the helper function xgetcwd(), which returns the current directory
or dies. The returned string has to be free()d after use.
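The helper is a thin wrapper around strbuf_getcwd() from the same
series (a sketch):

    char *xgetcwd(void)
    {
            struct strbuf sb = STRBUF_INIT;
            if (strbuf_getcwd(&sb))
                    die_errno(_("unable to get current working directory"));
            return strbuf_detach(&sb, NULL);
    }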
Helped-by: Duy Nguyen <pclouds@gmail.com>
Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Before we proceed to opportunistically update the index (often done
by an otherwise read-only operation like "git status" and "git diff"
that internally refreshes the index), we must verify that the
current index file is the same as the one that we read earlier
before we took the lock on it, in order to avoid a possible race.
In the example below git-status does "opportunistic update" and
git-rebase updates the index, but the race can happen in general.
1. process A calls git-rebase (or does anything that uses the index)
2. process A applies 1st commit
3. process B calls git-status (or does anything that updates the index)
4. process B reads index
5. process A applies 2nd commit
6. process B takes the lock, then overwrites process A's changes.
7. process A applies 3rd commit
As an end result the 3rd commit will have a revert of the 2nd commit.
When process B takes the lock, it needs to make sure that the index
hasn't changed since step 4.
Signed-off-by: Yiannis Marangos <yiannis.marangos@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
It is a common mistake to call read(2)/pread(2) and forget to
anticipate that they may return error with EAGAIN/EINTR when the
system call is interrupted.
We have xread() helper to relieve callers of read(2) from having to
worry about it; add xpread() helper to do the same for pread(2).
Update the caller in the builtin/index-pack.c and the mmap emulation
in compat/.
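The new helper mirrors xread(), retrying on the transient errors and
clipping the request size (a sketch):

    ssize_t xpread(int fd, void *buf, size_t len, off_t offset)
    {
            ssize_t nr;
            if (len > MAX_IO_SIZE)
                    len = MAX_IO_SIZE;
            while (1) {
                    nr = pread(fd, buf, len, offset);
                    if (nr < 0 && (errno == EAGAIN || errno == EINTR))
                            continue;
                    return nr;
            }
    }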
Signed-off-by: Yiannis Marangos <yiannis.marangos@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When the NO_MKSTEMPS build variable is not set, the gitmkstemps
function is dead code. Use a preprocessor conditional to only include
the definition when needed.
Noticed by sparse. ("'gitmkstemps' was not declared. Should it be
static?")
Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
Checking out 2GB or more through an external filter (see test) fails
on Mac OS X 10.8.4 (12E55) for a 64-bit executable with:
error: read from external filter cat failed
error: cannot feed the input to external filter cat
error: cat died of signal 13
error: external filter cat failed 141
error: external filter cat failed
The reason is that read() immediately returns with EINVAL when asked
to read more than 2GB. According to POSIX [1], if the value of
nbyte passed to read() is greater than SSIZE_MAX, the result is
implementation-defined. The write function has the same restriction
[2]. Since OS X still supports running 32-bit executables, the
32-bit limit (SSIZE_MAX = INT_MAX = 2GB - 1) seems to be also
imposed on 64-bit executables under certain conditions. For write,
the problem has been addressed earlier [6c642a].
Address the problem for read() and write() differently, by limiting
size of IO chunks unconditionally on all platforms in xread() and
xwrite(). Large chunks only cause problems, like causing latencies
when killing the process, even if OS X was not buggy. Doing IO in
reasonably sized smaller chunks should have no negative impact on
performance.
The compat wrapper clipped_write() introduced earlier [6c642a] is
not needed anymore. It will be reverted in a separate commit. The
new test catches read and write problems.
Note that 'git add' exits with 0 even if it prints filtering errors
to stderr. The test, therefore, checks stderr. 'git add' should
probably be changed (sometime in another commit) to exit with
nonzero if filtering fails. The test could then be changed to use
test_must_fail.
Thanks to the following people for suggestions and testing:
Johannes Sixt <j6t@kdbg.org>
John Keeping <john@keeping.me.uk>
Jonathan Nieder <jrnieder@gmail.com>
Kyle J. McKay <mackyle@gmail.com>
Linus Torvalds <torvalds@linux-foundation.org>
Torsten Bögershausen <tboegi@web.de>
[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html
[2] http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html
[6c642a] commit 6c642a878688adf46b226903858b53e2d31ac5c3
compate/clipped-write.c: large write(2) fails on Mac OS X/XNU
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Two places where we did not correctly check a return value (expected
to be a file descriptor) have been fixed.
* tr/fd-gotcha-fixes:
run-command: dup_devnull(): guard against syscalls failing
git_mkstemps: correctly test return value of open()
|
|
open() returns -1 on failure, and indeed 0 is a possible success value
if the user closed stdin in our process. Fix the test.
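The fix is a one-character change in the test (illustrative; flags
and mode as used by git_mkstemps_mode()):

    fd = open(pattern, O_CREAT | O_EXCL | O_RDWR, mode);
    if (fd >= 0)    /* was "if (fd > 0)", which wrongly rejected fd 0 */
            return fd;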
Signed-off-by: Thomas Rast <trast@inf.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The changes v1.7.12.1~2^2~4 (config: warn on inaccessible files,
2012-08-21) and v1.8.1.1~22^2~2 (config: treat user and xdg config
permission problems as errors, 2012-10-13) were intended to prevent
important configuration (think "[transfer] fsckobjects") from being
ignored when the configuration is unintentionally unreadable (for
example with EIO on a flaky filesystem, or with ENOMEM due to a DoS
attack). Usually ~/.gitconfig and ~/.config/git are readable by the
current user, and if they aren't then it would be easy to fix those
permissions, so the damage from adding this check should have been
minimal.
Unfortunately the access() check often trips when git is being run as
a server. A daemon (such as inetd or git-daemon) starts as "root",
creates a listening socket, and then drops privileges, meaning that
when git commands are invoked they cannot access $HOME and die with
fatal: unable to access '/root/.config/git/config': Permission denied
Any patch to fix this would have one of three problems:
1. We annoy sysadmins who need to take an extra step to handle HOME
when dropping privileges (the current behavior, or any other
proposal that they have to opt into).
2. We annoy sysadmins who want to set HOME when dropping privileges,
either by making what they want to do impossible, or making them
set an extra variable or option to accomplish what used to work
(e.g., a patch to git-daemon to set HOME when --user is passed).
3. We loosen the check, so some cases which might be noteworthy are
not caught.
This patch is of type (3).
Treat user and xdg configuration that are inaccessible due to
permissions (EACCES) as though no user configuration was provided at
all.
An alternative method would be to check if $HOME is readable, but that
would not help in cases where the user who dropped privileges had a
globally readable HOME with only .config or .gitconfig being private.
This does not change the behavior when /etc/gitconfig or .git/config
is unreadable (since those are more serious configuration errors),
nor when ~/.gitconfig or ~/.config/git is unreadable due to problems
other than permissions.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Improved-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When attempting to read the XDG-style $HOME/.config/git/config and
finding that $HOME/.config/git is a file, we gave a wrong error
message, instead of treating the case as "a custom config file does
not exist there" and moving on.
* jn/warn-on-inaccessible-loosen:
config: exit on error accessing any config file
doc: advertise GIT_CONFIG_NOSYSTEM
config: treat user and xdg config permission problems as errors
config, gitignore: failure to access with ENOTDIR is ok
|