path: root/run-command.c
2016-04-06  Merge branch 'sb/submodule-parallel-update'  (Junio C Hamano)  [1 file, -6/+6]
A major part of "git submodule update" has been ported to C to take advantage of the recently added framework to run download tasks in parallel.

* sb/submodule-parallel-update:
  clone: allow an explicit argument for parallel submodule clones
  submodule update: expose parallelism to the user
  submodule helper: remove double 'fatal: ' prefix
  git submodule update: have a dedicated helper for cloning
  run_processes_parallel: rename parameters for the callbacks
  run_processes_parallel: treat output of children as byte array
  submodule update: direct error message to stderr
  fetching submodules: respect `submodule.fetchJobs` config option
  submodule-config: drop check against NULL
  submodule-config: keep update strategy around
2016-03-04  Merge branch 'sb/submodule-parallel-fetch'  (Junio C Hamano)  [1 file, -21/+3]
Simplify the two callback functions that are triggered when the child process terminates to avoid misuse of the child-process structure that has already been cleaned up.

* sb/submodule-parallel-fetch:
  run-command: do not pass child process data into callbacks
2016-03-01  run_processes_parallel: rename parameters for the callbacks  (Stefan Beller)  [1 file, -2/+2]
The refs code has a similar pattern of passing around 'struct strbuf *err', which is strictly used for error reporting. This is not the case here, as the strbuf is used to accumulate all the output (whether it is error or not) for the user. Rename it to 'out'. Suggested-by: Jonathan Nieder <jrnieder@gmail.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01  run_processes_parallel: treat output of children as byte array  (Stefan Beller)  [1 file, -4/+4]
We do not want the output to be interrupted by a NUL byte, so we cannot use raw fputs. Introduce strbuf_write to avoid having long arguments in run-command.c. Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-03-01  run-command: do not pass child process data into callbacks  (Stefan Beller)  [1 file, -21/+3]
The expected way to pass data into the callback is via the customizable callback pointer. The error reporting in default_{start_failure, task_finished} is not so user friendly that we would want to encourage using the child data for such purposes. Furthermore, the struct child data is cleaned up by the run-command API before we access it in the callbacks, leading to use-after-free situations. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
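A minimal sketch of the intended pattern: per-run state travels through the opaque pp_cb cookie rather than through child data. The callback prototype below follows the commit messages in this series and is an assumption, not a copy of run-command.h:

  /*
   * Sketch only: pp_cb carries our own state; the child_process
   * structure is owned (and cleaned up) by the run-command API.
   */
  struct fetch_state {
      int failed;   /* children that did not exit successfully */
  };

  static int my_task_finished(int result, struct strbuf *out,
                              void *pp_cb, void *pp_task_cb)
  {
      struct fetch_state *state = pp_cb;   /* our data, not child data */

      if (result)
          state->failed++;
      strbuf_addf(out, "task finished with code %d\n", result);
      return 0;
  }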
2016-02-26  Merge branch 'jk/epipe-in-async'  (Junio C Hamano)  [1 file, -0/+10]
Handling of errors while writing into our internal asynchronous process has been made more robust, which reduces flakiness in our tests.

* jk/epipe-in-async:
  t5504: handle expected output from SIGPIPE death
  test_must_fail: report number of unexpected signal
  fetch-pack: ignore SIGPIPE in sideband demuxer
  write_or_die: handle EPIPE in async threads
2016-02-26  Merge branch 'jk/tighten-alloc'  (Junio C Hamano)  [1 file, -35/+25]
Update various codepaths to avoid manually-counted malloc().

* jk/tighten-alloc: (22 commits)
  ewah: convert to REALLOC_ARRAY, etc
  convert ewah/bitmap code to use xmalloc
  diff_populate_gitlink: use a strbuf
  transport_anonymize_url: use xstrfmt
  git-compat-util: drop mempcpy compat code
  sequencer: simplify memory allocation of get_message
  test-path-utils: fix normalize_path_copy output buffer size
  fetch-pack: simplify add_sought_entry
  fast-import: simplify allocation in start_packfile
  write_untracked_extension: use FLEX_ALLOC helper
  prepare_{git,shell}_cmd: use argv_array
  use st_add and st_mult for allocation size computation
  convert trivial cases to FLEX_ARRAY macros
  use xmallocz to avoid size arithmetic
  convert trivial cases to ALLOC_ARRAY
  convert manual allocations to argv_array
  argv-array: add detach function
  add helpers for allocating flex-array structs
  harden REALLOC_ARRAY and xcalloc against size_t overflow
  tree-diff: catch integer overflow in combine_diff_path allocation
  ...
2016-02-25  write_or_die: handle EPIPE in async threads  (Jeff King)  [1 file, -0/+10]
When write_or_die() sees EPIPE, it treats it specially by converting it into a SIGPIPE death. We obviously cannot ignore it, as the write has failed and the caller expects us to die. But likewise, we cannot just call die(), because printing any message at all would be a nuisance during normal operations. However, this is a problem if write_or_die() is called from a thread. Our raised signal ends up killing the whole process, when logically we just need to kill the thread (after all, if we are ignoring SIGPIPE, there is good reason to think that the main thread is expecting to handle it). Inside an async thread, the die() code already does the right thing, because we use our custom die_async() routine, which calls pthread_exit(). So ideally we would piggy-back on that, and simply call:

  die_quietly_with_code(141);

or similar. But refactoring the die code to do this is surprisingly non-trivial. The die_routines themselves handle both printing and the decision of the exit code. Every one of them would have to be modified to take new parameters for the code, and to tell us to be quiet. Instead, we can just teach write_or_die() to check for the async case and handle it specially. We do have to build an interface to abstract the async exit, but it's simple and self-contained. If we had many call-sites that wanted to do this die_quietly_with_code(), this approach wouldn't scale as well, but we don't. This is the only place where we do this weird exit trick. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
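A sketch of the shape this takes in the EPIPE path, assuming the in_async()/async_exit() helpers described in this series; the names come from the commit messages and the exact prototypes may differ from the tree:

  /*
   * Sketch, not the verbatim implementation: inside an async thread we
   * exit only the thread; otherwise we convert EPIPE into SIGPIPE death.
   * 141 is the conventional exit status for SIGPIPE (128 + 13).
   */
  static void check_pipe(int err)
  {
      if (err == EPIPE) {
          if (in_async())
              async_exit(141);
          signal(SIGPIPE, SIG_DFL);
          raise(SIGPIPE);
          /* Should never happen, but just in case: */
          exit(141);
      }
  }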
2016-02-22  prepare_{git,shell}_cmd: use argv_array  (Jeff King)  [1 file, -35/+25]
These functions transform an existing argv into one suitable for exec-ing or spawning via git or a shell. We can use an argv_array in each to avoid dealing with manual counting and allocation. This also makes the memory allocation more clear and fixes some leaks. In prepare_shell_cmd, we would sometimes allocate a new string with "$@" in it and sometimes not, meaning the caller could not correctly free it. On the non-Windows side, we are in a child process which will exec() or exit() immediately, so the leak isn't a big deal. On Windows, though, we use spawn() from the parent process, and leak a string for each shell command we run. On top of that, the Windows code did not free the allocated argv array at all (but does for the prepare_git_cmd case!). By switching both of these functions to write into an argv_array, we can consistently free the result as appropriate. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-01-20  Merge branch 'nd/clear-gitenv-upon-use-of-alias'  (Junio C Hamano)  [1 file, -1/+1]
d95138e6 (setup: set env $GIT_WORK_TREE when work tree is set, like $GIT_DIR, 2015-06-26) attempted to work around a glitch in alias handling by overwriting the GIT_WORK_TREE environment variable to affect subprocesses when set_git_work_tree() gets called, which resulted in a rather unpleasant regression to "clone" and "init". Try to address the same issue by always restoring the environment and respawning the real underlying command when handling an alias.

* nd/clear-gitenv-upon-use-of-alias:
  run-command: don't warn on SIGPIPE deaths
  git.c: make sure we do not leak GIT_* to alias scripts
  setup.c: re-fix d95138e (setup: set env $GIT_WORK_TREE when ..
  git.c: make it clear save_env() is for alias handling only
2015-12-29  run-command: don't warn on SIGPIPE deaths  (Jeff King)  [1 file, -1/+1]
When git executes a sub-command, we print a warning if the command dies due to a signal, but make an exception for "uninteresting" cases like SIGINT and SIGQUIT (since the user presumably just hit ^C). We should make a similar exception for SIGPIPE, because it's an expected and uninteresting return in most cases; it generally means the user quit the pager before git had finished generating all output. This used to be very hard to trigger in practice, because:

1. We only complain if we see a real SIGPIPE death, not the shell-induced 141 exit code. This means that anything we run via the shell does not trigger the warning, which includes most non-trivial aliases.

2. The common case for SIGPIPE is the user quitting the pager before git has finished generating all output. But if the user triggers a pager with "-p", we redirect the git wrapper's stderr to that pager, too. Since the pager is dead, it means that the message goes nowhere.

3. You can see it if you run your own pager, like "git foo | head". But that only happens if "foo" is a non-builtin (so it doesn't work with "log", for example).

However, it may become more common after 86d26f2, which teaches alias to re-exec builtins rather than running them in the same process. This case doesn't trigger (1), as we don't need a shell to run a git command. It doesn't trigger (2), because the pager is not started by the original git, but by the inner re-exec of git. And it doesn't trigger (3), because builtins are treated more like non-builtins in this case. Given how flaky this message already is (e.g., you cannot even know whether you will see it, as git optimizes out some shell invocations behind the scenes based on the contents of the command!), and that it is unlikely to ever provide useful information, let's suppress it for all cases of SIGPIPE. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-12-16  run-command: add an asynchronous parallel child processor  (Stefan Beller)  [1 file, -0/+335]
This allows running external commands in parallel with ordered output on stderr. If we run external commands in parallel we cannot pipe the output directly to our stdout/err as it would mix up. So each process's output will flow through a pipe, which we buffer. One subprocess can be directly piped to our stdout/err for low latency feedback to the user.

Example: Let's assume we have 5 submodules A,B,C,D,E and each fetch takes a different amount of time as the different submodules vary in size, then the output of fetches in sequential order might look like this:

  time -->
  output: |---A---| |-B-| |-------C-------| |-D-| |-E-|

When we schedule these submodules into maximal two parallel processes, a schedule and sample output over time may look like this:

  process 1: |---A---| |-D-| |-E-|
  process 2: |-B-| |-------C-------|
  output:    |---A---|B|---C-------|DE

So A will be perceived as it would run normally in the single child version. As B has finished by the time A is done, we can dump its whole progress buffer on stderr, such that it looks like it finished in no time. Once that is done, C is determined to be the visible child and its progress will be reported in real time. So this way of output is really good for human consumption, as it only changes the timing, not the actual output. For machine consumption the output needs to be prepared in the tasks, by either having a prefix per line or per block to indicate whose task's output is displayed, because the output order may not follow the original sequential ordering:

  |----A----| |--B--| |-C-|

will be scheduled to be all parallel:

  process 1: |----A----|
  process 2: |--B--|
  process 3: |-C-|
  output:    |----A----|CB

This happens because C finished before B did, so it will be queued for output before B. To detect when a child has finished executing, we check interleaved with other actions (such as checking the liveliness of children or starting new processes) whether the stderr pipe still exists. Once a child closed its stderr stream, we assume it is terminating very soon, and use `finish_command()` from the single external process execution interface to collect the exit status. By maintaining the strong assumption of stderr being open until the very end of a child process, we can avoid other hassle such as an implementation using `waitpid(-1)`, which is not implemented in Windows. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
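A rough usage sketch of the new interface, with callback shapes taken from this and the follow-up commits above; treat the exact prototypes and the task being run as assumptions rather than a copy of run-command.h:

  /*
   * Sketch: run at most 3 children at a time; get_next_task fills in
   * the next child_process, or returns 0 when there is nothing left.
   */
  struct task_list {
      int next, total;
  };

  static int get_next_task(struct child_process *cp, struct strbuf *out,
                           void *pp_cb, void **pp_task_cb)
  {
      struct task_list *tasks = pp_cb;

      if (tasks->next >= tasks->total)
          return 0;                       /* no more tasks */
      argv_array_pushl(&cp->args, "sleep", "1", NULL);
      strbuf_addf(out, "starting task %d\n", tasks->next++);
      return 1;                           /* cp is ready to be started */
  }

  static void run_them(void)
  {
      struct task_list tasks = { 0, 5 };
      /* NULL for start_failure/task_finished falls back to the defaults. */
      run_processes_parallel(3, get_next_task, NULL, NULL, &tasks);
  }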
2015-11-03  Merge branch 'rs/daemon-plug-child-leak'  (Junio C Hamano)  [1 file, -6/+9]
"git daemon" uses "run_command()" without "finish_command()", so it needs to release resources itself, which it forgot to do.

* rs/daemon-plug-child-leak:
  daemon: plug memory leak
  run-command: factor out child_process_clear()
2015-11-02  run-command: factor out child_process_clear()  (René Scharfe)  [1 file, -6/+9]
Avoid duplication by moving the code to release allocated memory for arguments and environment to its own function, child_process_clear(). Export it to provide a counterpart to child_process_init(). Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-10-07  Merge branch 'ti/glibc-stdio-mutex-from-signal-handler'  (Junio C Hamano)  [1 file, -8/+17]
Allocation related functions and stdio are unsafe things to call inside a signal handler, and indeed killing the pager can cause glibc to deadlock waiting on the allocation mutex as our signal handler tries to free() some data structures in wait_for_pager(). Reduce these unsafe calls.

* ti/glibc-stdio-mutex-from-signal-handler:
  pager: don't use unsafe functions in signal handlers
2015-10-05  Merge branch 'jk/async-pkt-line'  (Junio C Hamano)  [1 file, -1/+15]
The debugging infrastructure for pkt-line based communication has been improved to mark the side-band communication specifically.

* jk/async-pkt-line:
  pkt-line: show packets in async processes as "sideband"
  run-command: provide in_async query function
2015-09-04  pager: don't use unsafe functions in signal handlers  (Takashi Iwai)  [1 file, -8/+17]
Since commit a3da8821208d (pager: do wait_for_pager on signal death), we call wait_for_pager() in the pager's signal handler. A recent bug report revealed that this causes a deadlock in glibc when aborting "git log" [*1*]. When this happens, the git process is left unterminated, and it can't be killed by SIGTERM but only by SIGKILL. The problem is that the wait_for_pager() function does more than wait for the pager process's termination: it also performs cleanups and prints errors. Unfortunately, the functions that may be used in a signal handler are very limited [*2*]. In particular, malloc(), free() and their variants can't be used in a signal handler because they take a mutex internally in glibc. This was the cause of the deadlock above. Beyond the direct calls to malloc/free, many functions that call malloc/free can't be used either; strerror() is one of them. The use of fflush() and printf() in a signal handler is also bad, although it has seemed to work so far; to be on the safe side, we should avoid them, too. This patch tries to reduce the calls of such functions in signal handlers. wait_for_signal() takes a flag and avoids the unsafe calls. Also, finish_command_in_signal() is introduced for the same reason. There the free() calls are removed, and it only waits for the children without whining at errors.

[*1*] https://bugzilla.opensuse.org/show_bug.cgi?id=942297
[*2*] http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_04_03

Signed-off-by: Takashi Iwai <tiwai@suse.de> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-09-01  run-command: provide in_async query function  (Jeff King)  [1 file, -1/+15]
It's not easy for arbitrary code to find out whether it is running in an async process or not. A top-level function which is fed to start_async() can know (you just pass down an argument saying "you are async"). But that function may call other global functions, and we would not want to have to pass the information all the way through the call stack. Nor can we simply set a global variable, as those may be shared between async threads and the main thread (if the platform supports pthreads). We need pthread tricks _or_ a global variable, depending on how start_async is implemented. The callers don't have enough information to do this right, so let's provide a simple query function that does. Fortunately we can reuse the existing infrastructure to make the pthread case simple (and even simplify die_async() by using our new function). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
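A small usage sketch of the query function (the calling code is hypothetical; in_async() is the helper this commit adds):

  /*
   * Sketch: code deep in the call stack can ask whether it is running
   * inside an async thread/process and choose not to take down the
   * whole program on failure.
   */
  static void report_failure(const char *msg)
  {
      if (in_async())
          warning("%s (in async helper)", msg);
      else
          die("%s", msg);
  }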
2015-08-25  Merge branch 'jk/long-error-messages'  (Junio C Hamano)  [1 file, -15/+2]
The codepath to produce error messages had a hard-coded limit to the size of the message, primarily to avoid memory allocation while calling die().

* jk/long-error-messages:
  vreportf: avoid intermediate buffer
  vreportf: report to arbitrary filehandles
2015-08-11  vreportf: report to arbitrary filehandles  (Jeff King)  [1 file, -15/+2]
The vreportf function always goes to stderr, but run-command wants child errors to go to the parent's original stderr. To solve this, commit a5487dd duplicates the stderr fd and installs die and error handlers to direct the output appropriately (which later turned into the vwritef function). This has two downsides, though:

- we make multiple calls to write(), which contradicts the "write at once" logic from d048a96 (print warning/error/fatal messages in one shot, 2007-11-09).

- the custom handlers basically duplicate the normal handlers. They're only a few lines of code, but we should not have to repeat the magic "exit(128)", for example.

We can solve the first by using fdopen() on the duplicated descriptor. We can't pass this to vreportf, but we could introduce a new vreportf_to to handle it. However, to fix the second problem, we instead introduce a new "set_error_handle" function, which lets the normal vreportf calls output to a handle besides stderr. Thus we can get rid of our custom handlers entirely, and just ask the regular handlers to output to our new descriptor. And as vwritef has no more callers, it can just go away. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
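A sketch of how a child might route its die()/error() output to the parent's original stderr with the new hook; the wrapper name and descriptor handling are made up, and set_error_handle(FILE *) is assumed to have the shape the message describes:

  /*
   * Sketch: 'fd' is a duplicate of the parent's original stderr.
   * After this call, vreportf-based handlers write there instead of
   * whatever file descriptor 2 currently points at.
   */
  static void route_child_errors_to(int fd)
  {
      FILE *fh = fdopen(fd, "w");

      if (fh)
          set_error_handle(fh);
  }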
2015-08-10  find_hook: keep our own static buffer  (Jeff King)  [1 file, -4/+6]
The find_hook function returns the results of git_path, which is a static buffer shared by other path-related calls. Returning such a buffer is slightly dangerous, because it can be overwritten by seemingly unrelated functions. Let's at least keep our _own_ static buffer, so you can only get in trouble by calling find_hook in quick succession, which is less likely to happen and more obvious to notice. While we're at it, let's add some documentation of the function's limitations. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
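An illustration of the hazard and the suggested workaround (the caller is hypothetical; find_hook() and xstrdup() are the real helpers):

  /*
   * Hypothetical caller: find_hook() returns a static buffer, so a
   * second call may overwrite the first result. Copy the string if
   * both are needed at the same time.
   */
  static void show_two_hooks(void)
  {
      const char *p = find_hook("pre-commit");
      char *pre = p ? xstrdup(p) : NULL;           /* keep our own copy */
      const char *post = find_hook("post-commit"); /* may reuse the buffer */

      printf("pre-commit: %s\npost-commit: %s\n",
             pre ? pre : "(none)", post ? post : "(none)");
      free(pre);
  }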
2015-05-11  Merge branch 'nd/multiple-work-trees'  (Junio C Hamano)  [1 file, -2/+2]
A replacement for contrib/workdir/git-new-workdir that does not rely on symbolic links and makes sharing of objects and refs safer by making the borrowee and borrowers aware of each other.

* nd/multiple-work-trees: (41 commits)
  prune --worktrees: fix expire vs worktree existence condition
  t1501: fix test with split index
  t2026: fix broken &&-chain
  t2026 needs procondition SANITY
  git-checkout.txt: a note about multiple checkout support for submodules
  checkout: add --ignore-other-wortrees
  checkout: pass whole struct to parse_branchname_arg instead of individual flags
  git-common-dir: make "modules/" per-working-directory directory
  checkout: do not fail if target is an empty directory
  t2025: add a test to make sure grafts is working from a linked checkout
  checkout: don't require a work tree when checking out into a new one
  git_path(): keep "info/sparse-checkout" per work-tree
  count-objects: report unused files in $GIT_DIR/worktrees/...
  gc: support prune --worktrees
  gc: factor out gc.pruneexpire parsing code
  gc: style change -- no SP before closing parenthesis
  checkout: clean up half-prepared directories in --to mode
  checkout: reject if the branch is already checked out elsewhere
  prune: strategies for linked checkouts
  checkout: support checking out into a new working directory
  ...
2015-03-25  Merge branch 'jk/run-command-capture'  (Junio C Hamano)  [1 file, -1/+22]
The run-command interface was easy to abuse: one could make a pipe for us to read from the process, wait for the process to finish, and then attempt to read its output, a pattern that leads to a deadlock. Fix such uses by introducing a helper to do this correctly (i.e. read first and then wait for the process to finish), and also add code to prevent such abuse in the run-command helper.

* jk/run-command-capture:
  run-command: forbid using run_command with piped output
  trailer: use capture_command
  submodule: use capture_command
  wt-status: use capture_command
  run-command: introduce capture_command helper
  wt_status: fix signedness mismatch in strbuf_read call
  wt-status: don't flush before running "submodule status"
2015-03-22  run-command: forbid using run_command with piped output  (Jeff King)  [1 file, -1/+6]
Because run_command both spawns and wait()s for the command before returning control to the caller, any reads from the pipes we open must necessarily happen after wait() returns. This can lead to deadlock, as the child process may block on writing to us while we are blocked waiting for it to exit. Worse, it only happens when the child fills the pipe buffer, which means that the problem may come and go depending on the platform and the size of the output produced by the child. Let's detect and flag this dangerous construct so that we can catch potential bugs early in the test suite rather than having them happen in the field. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-22  run-command: introduce capture_command helper  (Jeff King)  [1 file, -0/+16]
Something as simple as reading the stdout from a command turns out to be rather hard to do right. Doing:

  cmd.out = -1;
  run_command(&cmd);
  strbuf_read(&buf, cmd.out, 0);

can result in deadlock if the child process produces a large amount of output. What happens is:

1. The parent spawns the child with its stdout connected to a pipe, of which the parent is the sole reader.

2. The parent calls wait(), blocking until the child exits.

3. The child writes to stdout. If it writes more data than the OS pipe buffer can hold, the write() call will block.

This is a deadlock; the parent is waiting for the child to exit, and the child is waiting for the parent to call read(). So we might try instead:

  start_command(&cmd);
  strbuf_read(&buf, cmd.out, 0);
  finish_command(&cmd);

But that is not quite right either. We are examining cmd.out and running finish_command whether start_command succeeded or not, which is wrong. Moreover, these snippets do not do any error handling. If our read() fails, we must make sure to still call finish_command (to reap the child process). And both snippets failed to close the cmd.out descriptor, which they must do (provided start_command succeeded). Let's introduce a run-command helper that can make this a bit simpler for callers to get right. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
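A short usage sketch of the helper (capture_command() is the function this commit introduces; the command being run is just an example):

  /*
   * Sketch: capture the stdout of "git status --porcelain" into a
   * strbuf. capture_command() reads the output before reaping the
   * child, avoiding the deadlock described above.
   */
  static int show_status(void)
  {
      struct child_process cmd = CHILD_PROCESS_INIT;
      struct strbuf buf = STRBUF_INIT;
      int ret;

      argv_array_pushl(&cmd.args, "status", "--porcelain", NULL);
      cmd.git_cmd = 1;
      ret = capture_command(&cmd, &buf, 1024);    /* 1024 is a size hint */
      if (!ret)
          printf("%s", buf.buf);
      strbuf_release(&buf);
      return ret;
  }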
2015-03-10  git-compat-util.h: move SHELL_PATH default into header  (Kyle J. McKay)  [1 file, -4/+0]
If SHELL_PATH is not defined we use "/bin/sh". However, run-command.c is not the only file that needs to use the default value so move it into a common header. Signed-off-by: Kyle J. McKay <mackyle@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-22  Merge branch 'jc/hook-cleanup'  (Junio C Hamano)  [1 file, -17/+0]
Remove unused code.

* jc/hook-cleanup:
  run-command.c: retire unused run_hook_with_custom_index()
2014-12-01  path.c: make get_pathname() call sites return const char *  (Nguyễn Thái Ngọc Duy)  [1 file, -2/+2]
Before the previous commit, get_pathname() returned an array of PATH_MAX length. Even if git_path() and similar functions do not use the whole array, a git_path() caller can, in theory. After that commit, get_pathname() may return a buffer that has just enough room for the returned string, and a git_path() caller should never write beyond that. Make git_path(), mkpath() and git_path_submodule() return a const buffer to make sure callers do not write in it at all. This could have been part of the previous commit, but the "const" conversion is too much of a distraction from the core changes in path.c. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-01  run-command.c: retire unused run_hook_with_custom_index()  (Junio C Hamano)  [1 file, -17/+0]
This was originally meant to be used to rewrite run_commit_hook() that only special cases the GIT_INDEX_FILE environment, but the run_hook_ve() refactoring done earlier made the implementation of run_commit_hook() thin and clean enough. Nobody uses this, so retire it as an unfinished clean-up made unnecessary. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-11-10  run-command: use void to declare that functions take no parameters  (René Scharfe)  [1 file, -2/+2]
Explicitly declare that git_atexit_dispatch() and git_atexit_clear() take no parameters instead of leaving their parameter list empty and thus unspecified. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-10-24  Merge branch 'eb/no-pthreads'  (Junio C Hamano)  [1 file, -0/+40]
Allow us to build with the NO_PTHREADS=NoThanks compilation option.

* eb/no-pthreads:
  Handle atexit list internaly for unthreaded builds
  pack-objects: set number of threads before checking and warning
  index-pack: fix compilation with NO_PTHREADS
2014-10-19  Handle atexit list internaly for unthreaded builds  (Etienne Buira)  [1 file, -0/+40]
Wrap atexit() calls on unthreaded builds to handle the callback list internally. This is needed because on unthreaded builds, asyncs inherit the parent's atexit() list, which gets run as soon as the async exit()s (and again at the end of the async's parent process). That led to temporary files being removed too early. Also remove a by-atexit-callback guard against this kind of issue in clone.c, as this patch makes it redundant. Fixes test 5537 (temporary shallow file vanished before unpack-objects could open it). As a side note, remove an unused variable in shallow.c. Helped-by: Duy Nguyen <pclouds@gmail.com> Helped-by: Andreas Schwab <schwab@linux-m68k.org> Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Etienne Buira <etienne.buira@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
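A minimal sketch of the idea under stated assumptions: the dispatcher/clear names come from the surrounding commits, while the fixed-size list and the rest of the body are illustrative only:

  #include <stdlib.h>

  /*
   * Sketch: keep our own handler list so a forked async child can drop
   * the handlers it inherited instead of running them when it exit()s.
   */
  typedef void (*atexit_fn)(void);

  static atexit_fn git_atexit_hdlrs[32];      /* fixed size, sketch only */
  static int git_atexit_hdlrcnt;
  static int git_atexit_installed;

  void git_atexit_dispatch(void)
  {
      int i;
      for (i = git_atexit_hdlrcnt - 1; i >= 0; i--)
          git_atexit_hdlrs[i]();              /* reverse order, like atexit() */
  }

  void git_atexit_clear(void)
  {
      git_atexit_hdlrcnt = 0;                 /* async child: forget inherited handlers */
  }

  int git_atexit(void (*handler)(void))
  {
      if (git_atexit_hdlrcnt >= 32)
          return -1;
      git_atexit_hdlrs[git_atexit_hdlrcnt++] = handler;
      if (!git_atexit_installed) {
          if (atexit(git_atexit_dispatch))    /* install dispatcher once */
              return -1;
          git_atexit_installed = 1;
      }
      return 0;
  }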
2014-10-19  run-command: add env_array, an optional argv_array for env  (René Scharfe)  [1 file, -0/+6]
Similar to args, add a struct argv_array member to struct child_process that simplifies specifying the environment for children. It is freed automatically by finish_command() or if start_command() encounters an error. Suggested-by: Jeff King <peff@peff.net> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
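A small usage sketch (the sub-command and the variable are examples; env_array is the member this commit adds):

  /*
   * Sketch: run a git sub-command with one extra environment entry.
   * finish_command() (called via run_command) frees env_array for us.
   */
  static int run_traced_status(void)
  {
      struct child_process cp = CHILD_PROCESS_INIT;

      argv_array_push(&cp.args, "status");
      argv_array_push(&cp.env_array, "GIT_TRACE=1");  /* example entry */
      cp.git_cmd = 1;
      return run_command(&cp);
  }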
2014-08-20  run-command: inline prepare_run_command_v_opt()  (René Scharfe)  [1 file, -16/+8]
Merge prepare_run_command_v_opt() and its only caller. This removes a pointer indirection and allows us to initialize the struct child_process using CHILD_PROCESS_INIT. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20  run-command: call run_command_v_opt_cd_env() instead of duplicating it  (René Scharfe)  [1 file, -3/+1]
Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20  run-command: introduce child_process_init()  (René Scharfe)  [1 file, -0/+6]
Add a helper function for initializing those struct child_process variables for which the macro CHILD_PROCESS_INIT can't be used. Suggested-by: Jeff King <peff@peff.net> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20  run-command: introduce CHILD_PROCESS_INIT  (René Scharfe)  [1 file, -2/+1]
Most struct child_process variables are cleared using memset first after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That's shorter, doesn't require a function call and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.). Helped-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
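A quick before/after sketch (the caller is hypothetical):

  static void demo_init(void)
  {
      /* Before this commit: clear the struct manually. */
      struct child_process cp_old;
      memset(&cp_old, 0, sizeof(cp_old));

      /* With this commit: static initialization, no function call. */
      struct child_process cp_new = CHILD_PROCESS_INIT;

      /* For reused or heap-allocated structs, the companion helper applies: */
      child_process_init(&cp_old);
      (void)cp_new;
  }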
2014-07-30  Merge branch 'sk/mingw-uni-fix-more'  (Junio C Hamano)  [1 file, -8/+2]
Most of these are battle-tested in msysgit and are needed to complete what has been merged to 'master' already.

* sk/mingw-uni-fix-more:
  Win32: enable color output in Windows cmd.exe
  Win32: patch Windows environment on startup
  Win32: keep the environment sorted
  Win32: use low-level memory allocation during initialization
  Win32: reduce environment array reallocations
  Win32: don't copy the environment twice when spawning child processes
  Win32: factor out environment block creation
  Win32: unify environment function names
  Win32: unify environment case-sensitivity
  Win32: fix environment memory leaks
  Win32: Unicode environment (incoming)
  Win32: Unicode environment (outgoing)
  Revert "Windows: teach getenv to do a case-sensitive search"
  tests: do not pass iso8859-1 encoded parameter
2014-07-21  Win32: don't copy the environment twice when spawning child processes  (Karsten Blees)  [1 file, -8/+2]
When spawning child processes via start_command(), the environment and all environment entries are copied twice. First by make_augmented_environ / copy_environ to merge with child_process.env. Then a second time by make_environment_block to create a sorted environment block string as required by CreateProcess. Move the merge logic to make_environment_block so that we only need to copy the environment once. This changes semantics of the env parameter: it now expects a delta (such as child_process.env) rather than a full environment. This is not a problem as the parameter is only used by start_command() (all other callers previously passed char **environ, and now pass NULL). The merge logic no longer xstrdup()s the environment strings, so do_putenv must not free them. Add a parameter to distinguish this from normal putenv. Remove the now unused make_augmented_environ / free_environ API. Signed-off-by: Karsten Blees <blees@dcon.de> Signed-off-by: Stepan Kasal <kasal@ucw.cz> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-17  run-command: use internal argv_array of struct child_process in run_hook_ve()  (René Scharfe)  [1 file, -11/+4]
Use the existing argv_array member instead of providing our own. This way we don't have to initialize or clean it up explicitly. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-15  run-command: store an optional argv_array  (Jeff King)  [1 file, -1/+8]
All child_process structs need to point to an argv. For flexibility, we do not mandate the use of a dynamic argv_array. However, because the child_process does not own the memory, this can make memory management with a separate argv_array difficult. For example, if a function calls start_command but not finish_command, the argv memory must persist. The code needs to arrange to clean up the argv_array separately after finish_command runs. As a result, some of our code in this situation just leaks the memory. To help such cases, this patch adds a built-in argv_array to the child_process, which gets cleaned up automatically (both in finish_command and when start_command fails). Callers may use it if they choose, but can continue to use the raw argv if they wish. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-03-18  commit: fix patch hunk editing with "commit -p -m"  (Benoit Pierre)  [1 file, -12/+32]
Don't change the git environment: move the GIT_EDITOR=":" override to the hook command subprocess, as is already done for GIT_INDEX_FILE. Signed-off-by: Benoit Pierre <benoit.pierre@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-31  run-command: trivial style fixes  (Felipe Contreras)  [1 file, -8/+5]
Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-22  Merge branch 'tr/fd-gotcha-fixes'  (Junio C Hamano)  [1 file, -1/+4]
Two places did not correctly check a return value that is expected to be a file descriptor.

* tr/fd-gotcha-fixes:
  run-command: dup_devnull(): guard against syscalls failing
  git_mkstemps: correctly test return value of open()
2013-07-12  run-command: dup_devnull(): guard against syscalls failing  (Thomas Rast)  [1 file, -1/+4]
dup_devnull() did not check the return values of open() and dup2(). Fix this omission. Signed-off-by: Thomas Rast <trast@inf.ethz.ch> Signed-off-by: Junio C Hamano <gitster@pobox.com>
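A sketch of the guarded version (the helper redirects a descriptor to /dev/null; the exact error messages are illustrative):

  static void dup_devnull(int to)
  {
      int fd = open("/dev/null", O_RDWR);

      if (fd < 0)
          die_errno("open /dev/null failed");
      if (dup2(fd, to) < 0)
          die_errno("dup2(%d,%d) failed", fd, to);
      close(fd);
  }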
2013-05-08  mingw: rename WIN32 cpp macro to GIT_WINDOWS_NATIVE  (Jonathan Nieder)  [1 file, -5/+5]
Throughout git, it is assumed that the WIN32 preprocessor symbol is defined on native Windows setups (mingw and msvc) and not on Cygwin. On Cygwin, most of the time git can pretend this is just another Unix machine, and Windows-specific magic is generally counterproductive. Unfortunately Cygwin *does* define the WIN32 symbol in some headers. Best to rely on a new git-specific symbol GIT_WINDOWS_NATIVE instead, defined as follows:

  #if defined(WIN32) && !defined(__CYGWIN__)
  # define GIT_WINDOWS_NATIVE
  #endif

After this change, it should be possible to drop the CYGWIN_V15_WIN32API setting without any negative effect. [rj: %s/WINDOWS_NATIVE/GIT_WINDOWS_NATIVE/g ] Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-19  Merge branch 'jk/a-thread-only-dies-once'  (Junio C Hamano)  [1 file, -0/+11]
A regression fix for the logic to detect die() handler triggering itself recursively.

* jk/a-thread-only-dies-once:
  run-command: use thread-aware die_is_recursing routine
  usage: allow pluggable die-recursion checks
2013-04-16  run-command: use thread-aware die_is_recursing routine  (Jeff King)  [1 file, -0/+11]
If we die from an async thread, we do not actually exit the program, but just kill the thread. This confuses the static counter in usage.c's default die_is_recursing function; it updates the counter once for the thread death, and then when the main program calls die() itself, it erroneously thinks we are recursing. The end result is that we print "recursion detected in die handler" instead of the real error in such a case (the easiest way to trigger this is having a remote connection hang up while running a sideband demultiplexer). This patch solves it by using a per-thread counter when the async_die function is installed; we detect recursion in each thread (including the main one), but they do not step on each other's toes. Other threaded code does not need to worry about this, as they do not install specialized die handlers; they just let a die() from a sub-thread take down the whole program. Since we are overriding the default recursion-check function, there is an interesting corner case that is not a problem, but bears some explanation. Imagine the main thread calls die(), and then in the die_routine starts an async call. We will switch to using thread-local storage, which starts at 0, for the main thread's counter, even though the original counter was actually at 1. That's OK, though, for two reasons:

1. It would miss only the first level of recursion, and would still find recursive failures inside the async helper.

2. We do not currently and are not likely to start doing anything as heavyweight as starting an async routine from within a die routine or helper function.

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
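A sketch of such a thread-aware check, assuming the pluggable die-recursion hook from the companion commit; the key handling shown here is illustrative:

  /*
   * Sketch: each thread keeps its own recursion marker in thread-local
   * storage, so a die() in an async thread does not make a later die()
   * in the main thread look recursive. The key is assumed to have been
   * created with pthread_key_create() during async setup.
   */
  static pthread_key_t async_die_counter;

  static int async_die_is_recursing(void)
  {
      void *ret = pthread_getspecific(async_die_counter);
      pthread_setspecific(async_die_counter, (void *)1);
      return ret != NULL;     /* non-NULL: die() already ran in this thread */
  }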
2013-03-21  run-command: always set failed_errno in start_command  (Jeff King)  [1 file, -2/+3]
When we fail to fork, we set the failed_errno variable to the value of errno so it is not clobbered by later syscalls. However, we do so in a conditional, and it is hard to see later under what conditions the variable has a valid value. Instead of setting it only when fork fails, let's just always set it after forking. This is more obvious for human readers (as we are no longer setting it as a side effect of a strerror call), and it is more obvious to gcc, which no longer generates a spurious -Wuninitialized warning. It also happens to match what the WIN32 half of the #ifdef does. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
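A simplified fragment showing the pattern (hypothetical wrapper; the real start_command() does far more between these lines):

  static int spawn_child(struct child_process *cmd)
  {
      int failed_errno;
      pid_t pid = fork();

      failed_errno = errno;   /* capture errno unconditionally, right after fork() */
      if (pid < 0) {
          error("cannot fork() for %s: %s",
                cmd->argv[0], strerror(failed_errno));
          return -1;
      }
      /* ... parent/child handling elided ... */
      return 0;
  }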
2013-02-07  Merge branch 'sb/run-command-fd-error-reporting'  (Junio C Hamano)  [1 file, -2/+6]
* sb/run-command-fd-error-reporting:
  run-command: be more informative about what failed