path: root/http-backend.c
2021-03-13  use CALLOC_ARRAY  (René Scharfe, 1 file, -1/+1)
Add and apply a semantic patch for converting code that open-codes CALLOC_ARRAY to use it instead. It shortens the code and infers the element size automatically. Signed-off-by: René Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
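To illustrate the kind of conversion this applies, here is a minimal sketch with a stand-in macro modelled on git's (the names and the surrounding function are illustrative, not code from http-backend.c):

    #include <stdlib.h>

    /* Stand-in for git's macro: infers the element size from the pointer. */
    #define CALLOC_ARRAY(p, n) ((p) = calloc((n), sizeof(*(p))))

    struct entry { char *name; int flags; };

    void example(size_t n)
    {
        struct entry *before = calloc(n, sizeof(struct entry)); /* open-coded */
        struct entry *after;
        CALLOC_ARRAY(after, n);                 /* shorter; element size inferred */
        free(before);
        free(after);
    }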
2020-07-28  strvec: fix indentation in renamed calls  (Jeff King, 1 file, -1/+1)
Code which split an argv_array call across multiple lines, like:

    argv_array_pushl(&args, "one argument",
                     "another argument", "and more", NULL);

was recently mechanically renamed to use strvec, which results in mis-matched indentation like:

    strvec_pushl(&args, "one argument",
                     "another argument", "and more", NULL);

Let's fix these up to align the arguments with the opening paren. I did this manually by sifting through the results of:

    git jump grep 'strvec_.*,$'

and liberally applying my editor's auto-format. Most of the changes are of the form shown above, though I also normalized a few that had originally used a single-tab indentation (rather than our usual style of aligning with the open paren). I also rewrapped a couple of obvious cases (e.g., where previously too-long lines became short enough to fit on one), but I wasn't aggressive about it. In cases broken to three or more lines, the grouping of arguments is sometimes meaningful, and it wasn't worth my time or reviewer time to ponder each case individually.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28  strvec: convert more callers away from argv_array name  (Jeff King, 1 file, -2/+2)
We eventually want to drop the argv_array name and just use strvec consistently. There's no particular reason we have to do it all at once, or care about interactions between converted and unconverted bits. Because of our preprocessor compat layer, the names are interchangeable to the compiler (so even a definition and declaration using different names is OK).

This patch converts remaining files from the first half of the alphabet, to keep the diff to a manageable size. The conversion was done purely mechanically with:

    git ls-files '*.c' '*.h' |
    xargs perl -i -pe '
        s/ARGV_ARRAY/STRVEC/g;
        s/argv_array/strvec/g;
    '

and then selectively staging files with "git add '[abcdefghjkl]*'". We'll deal with any indentation/style fallouts separately.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28  strvec: rename files from argv-array to strvec  (Jeff King, 1 file, -1/+1)
This requires updating #include lines across the code-base, but that's all fairly mechanical, and was done with:

    git ls-files '*.c' '*.h' |
    xargs perl -i -pe 's/argv-array.h/strvec.h/'

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-04-01  http-backend: allow 64-character hex names  (brian m. carlson, 1 file, -0/+3)
In an SHA-256-backed repository using the http-backend handler for dumb protocol clients, it may be necessary to access the raw packs using their full SHA-256-specified names. Allow packs and loose objects to be accessed using their full SHA-256-specified 64-character hex names. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
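The shape of such a check, as a rough sketch (a stand-in helper, not the dispatch code http-backend.c actually uses):

    #include <ctype.h>
    #include <string.h>

    /* Accept a full hex object name in either the 40-character (SHA-1)
     * or 64-character (SHA-256) form; reject anything else. */
    static int is_full_hex_name(const char *name)
    {
        size_t len = strlen(name);
        if (len != 40 && len != 64)
            return 0;
        for (size_t i = 0; i < len; i++)
            if (!isxdigit((unsigned char)name[i]))
                return 0;
        return 1;
    }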
2019-01-04  Merge branch 'mk/http-backend-kill-children-before-exit'  (Junio C Hamano, 1 file, -0/+2)
The http-backend CGI process did not correctly clean up the child processes it spawns to run upload-pack etc. when it dies itself, which has been corrected.

* mk/http-backend-kill-children-before-exit:
  http-backend: enable cleaning up forked upload/receive-pack on exit
2018-11-26  http-backend: enable cleaning up forked upload/receive-pack on exit  (Max Kirillov, 1 file, -0/+2)
If http-backend dies because of an error, any upload-pack or receive-pack it has started is neither killed nor waited for; it keeps running for some time until it exits because its stdin has been closed. That is undesirable in a production environment, and it also causes occasional failures of t5562, because the lingering processes keep act.err open and sometimes write errors to it after the next test has started using the file. Fix this by enabling cleanup of the spawned command at http-backend exit.

Reported-by: Carlo Arenas <carenas@gmail.com>
Helped-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
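A minimal POSIX sketch of the general idea of killing a spawned helper when the parent exits; git's run-command API achieves this with a flag on its child-process structure rather than literally like this, and the helper names here are illustrative:

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t helper_pid = -1;

    /* Kill and reap the helper when the CGI process exits for any reason. */
    static void kill_helper_on_exit(void)
    {
        if (helper_pid > 0) {
            kill(helper_pid, SIGTERM);
            waitpid(helper_pid, NULL, 0);
        }
    }

    static pid_t spawn_helper(char *const argv[])
    {
        pid_t pid = fork();
        if (pid < 0)
            return pid;             /* fork failed */
        if (pid == 0) {
            execvp(argv[0], argv);
            _exit(127);             /* exec failed in the child */
        }
        helper_pid = pid;
        atexit(kill_helper_on_exit);
        return pid;
    }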
2018-09-17  Merge branch 'ds/multi-pack-index'  (Junio C Hamano, 1 file, -2/+2)
When there are too many packfiles in a repository (which is not recommended), looking up an object in these would require consulting many pack .idx files; a new mechanism to have a single file that consolidates all of these .idx files is introduced.

* ds/multi-pack-index: (32 commits)
  pack-objects: consider packs in multi-pack-index
  midx: test a few commands that use get_all_packs
  treewide: use get_all_packs
  packfile: add all_packs list
  midx: fix bug that skips midx with alternates
  midx: stop reporting garbage
  midx: mark bad packed objects
  multi-pack-index: store local property
  multi-pack-index: provide more helpful usage info
  midx: clear midx on repack
  packfile: skip loading index if in multi-pack-index
  midx: prevent duplicate packfile loads
  midx: use midx in approximate_object_count
  midx: use existing midx when writing new one
  midx: use midx in abbreviation calculations
  midx: read objects from multi-pack-index
  config: create core.multiPackIndex setting
  midx: write object offsets
  midx: write object id fanout chunk
  midx: write object ids in a chunk
  ...
2018-09-10  Merge branch 'mk/http-backend-content-length'  (Junio C Hamano, 1 file, -1/+1)
The earlier attempt barfed when given a CONTENT_LENGTH that is set to an empty string. RFC 3875 is fairly clear that in this case we should not read any message body, but we've been reading through to the EOF in previous versions (which did not even pay attention to the environment variable), so keep that behaviour for now in this late update.

* mk/http-backend-content-length:
  http-backend: allow empty CONTENT_LENGTH
2018-09-07  http-backend: allow empty CONTENT_LENGTH  (Max Kirillov, 1 file, -1/+1)
According to RFC 3875, an empty environment variable is equivalent to an unset one, and for CONTENT_LENGTH that should mean a zero-length body to read. However, an unset CONTENT_LENGTH is also used with chunked transfer encoding to indicate reading until EOF. At the least, the test "large fetch-pack requests can be split across POSTs" from t5551 starts failing if an unset or empty CONTENT_LENGTH is treated as a zero-length body. So keep the existing behavior as much as possible, and add a test for the case.

Reported-By: Jelmer Vernooij <jelmer@jelmer.uk>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
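A hedged sketch of the resulting semantics, with stand-in names (the real code lives in http-backend.c and uses git's own parsing and error-reporting helpers):

    #include <stdlib.h>

    #define READ_UNTIL_EOF (-1)

    /* Unset or empty CONTENT_LENGTH keeps the historical behaviour of
     * reading the body until EOF; otherwise read exactly that many bytes. */
    static long get_content_length(void)
    {
        const char *str = getenv("CONTENT_LENGTH");
        if (!str || !*str)
            return READ_UNTIL_EOF;
        return strtol(str, NULL, 10);
    }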
2018-08-20  treewide: use get_all_packs  (Derrick Stolee, 1 file, -2/+2)
There are many places in the codebase that want to iterate over all packfiles known to Git. The purposes are wide-ranging, and those that can take advantage of the multi-pack-index already do. So, use get_all_packs() instead of get_packed_git() to be sure we are iterating over all packfiles. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-17  Merge branch 'mk/http-backend-content-length'  (Junio C Hamano, 1 file, -14/+86)
The http-backend (used for smart-http transport) used to slurp the whole input until EOF, without paying attention to CONTENT_LENGTH that is supplied in the environment and instead expecting the Web server to close the input stream. This has been fixed.

* mk/http-backend-content-length:
  t5562: avoid non-portable "export FOO=bar" construct
  http-backend: respect CONTENT_LENGTH for receive-pack
  http-backend: respect CONTENT_LENGTH as specified by rfc3875
  http-backend: cleanup writing to child process
2018-07-27  http-backend: respect CONTENT_LENGTH for receive-pack  (Max Kirillov, 1 file, -2/+30)
A push passes its input on to other commands, as described in https://public-inbox.org/git/20171129032214.GB32345@sigill.intra.peff.net/. Since it gets complicated to correctly track the data length there, instead transfer the data through the parent process and cut the pipe once the specified length has been reached. Do this only when CONTENT_LENGTH is set; otherwise pass the input directly to the forked commands.

Add tests for these cases:

* CONTENT_LENGTH is set and the script's stdin has more data, with all combinations of the variations: fetch or push, plain or compressed body, correct or truncated input.
* CONTENT_LENGTH is specified with a value which does not fit into ssize_t.

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-29  tag: add repository argument to deref_tag  (Stefan Beller, 1 file, -1/+1)
Add a repository argument to allow the callers of deref_tag to be more specific about which repository to act on. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-29  object: add repository argument to parse_object  (Stefan Beller, 1 file, -1/+1)
Add a repository argument to allow the callers of parse_object to be more specific about which repository to act on. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet. As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time. Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-11  http-backend: respect CONTENT_LENGTH as specified by rfc3875  (Max Kirillov, 1 file, -7/+47)
http-backend reads the whole input until EOF. However, RFC 3875 specifies that a script must read only as many bytes as specified by the CONTENT_LENGTH environment variable. A web server may exercise the specification by not closing the script's standard input after writing the content. In that case http-backend would hang waiting for more input. The issue is known to happen with IIS/Windows, for example.

Make http-backend read only CONTENT_LENGTH bytes, if it is defined, rather than the whole input until EOF. If the variable is not defined, keep the older behavior of reading until EOF, because it is used to support chunked transfer encoding.

This commit only fixes buffered input, which reads the whole body before processing it. Non-buffered input is going to be fixed in a subsequent commit.

Signed-off-by: Florian Manschwetus <manschwetus@cs-software-gmbh.de>
[mk: fixed trivial build failures and polished style issues]
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
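A minimal sketch of a read loop bounded by the advertised length (illustrative only; the real code uses git's read helpers and full error handling, and the function name here is a stand-in):

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Read up to bufsize bytes, but never past content_length when one
     * was given; a negative content_length means "read until EOF". */
    static ssize_t read_request_body(int fd, char *buf, size_t bufsize,
                                     ssize_t content_length)
    {
        size_t total = 0;
        while (total < bufsize) {
            size_t want = bufsize - total;
            ssize_t got;

            if (content_length >= 0 && (size_t)content_length - total < want)
                want = (size_t)content_length - total;
            if (!want)
                break;                      /* reached CONTENT_LENGTH */
            got = read(fd, buf + total, want);
            if (got < 0 && errno == EINTR)
                continue;
            if (got < 0)
                return -1;
            if (!got)
                break;                      /* EOF */
            total += got;
        }
        return total;
    }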
2018-06-11  http-backend: cleanup writing to child process  (Max Kirillov, 1 file, -5/+9)
As explained in [1], we should not assume the reason why the write failed; even if the reason is that the child has exited, we do not know why it has done so. So instead just report that the write has failed.

[1] https://public-inbox.org/git/20180604044408.GD14451@sigill.intra.peff.net/

Signed-off-by: Max Kirillov <max@max630.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-08  Merge branch 'bw/protocol-v2'  (Junio C Hamano, 1 file, -2/+6)
The beginning of the next-gen transfer protocol.

* bw/protocol-v2: (35 commits)
  remote-curl: don't request v2 when pushing
  remote-curl: implement stateless-connect command
  http: eliminate "# service" line when using protocol v2
  http: don't always add Git-Protocol header
  http: allow providing extra headers for http requests
  remote-curl: store the protocol version the server responded with
  remote-curl: create copy of the service name
  pkt-line: add packet_buf_write_len function
  transport-helper: introduce stateless-connect
  transport-helper: refactor process_connect_service
  transport-helper: remove name parameter
  connect: don't request v2 when pushing
  connect: refactor git_connect to only get the protocol version once
  fetch-pack: support shallow requests
  fetch-pack: perform a fetch using v2
  upload-pack: introduce fetch server command
  push: pass ref prefixes when pushing
  fetch: pass ref prefixes when fetching
  ls-remote: pass ref prefixes when requesting a remote's refs
  transport: convert transport_get_remote_refs to take a list of ref prefixes
  ...
2018-04-11  exec_cmd: rename to use dash in file name  (Stefan Beller, 1 file, -1/+1)
This is more consistent with the project style. The majority of Git's source files use dashes in preference to underscores in their file names. Signed-off-by: Stefan Beller <sbeller@google.com>
2018-03-26  packfile: keep prepare_packed_git() private  (Nguyễn Thái Ngọc Duy, 1 file, -1/+0)
The reason callers have to call this is to make sure either packed_git or packed_git_mru pointers are initialized since we don't do that by default. Sometimes it's hard to see this connection between where the function is called and where packed_git pointer is used (sometimes in separate functions). Keep this dependency internal because now all access to packed_git and packed_git_mru must go through get_xxx() wrappers. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-26  packfile: add repository argument to prepare_packed_git  (Stefan Beller, 1 file, -1/+1)
Add a repository argument to allow prepare_packed_git callers to be more specific about which repository to handle. See commit "sha1_file: add repository argument to link_alt_odb_entry" for an explanation of the #define trick. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-26  object-store: move packed_git and packed_git_mru to object store  (Stefan Beller, 1 file, -2/+4)
In a process with multiple repositories open, packfile accessors should be associated to a single repository and not shared globally. Move packed_git and packed_git_mru into the_repository and adjust callers to reflect this. [nd: while at there, wrap access to these two fields in get_packed_git() and get_packed_git_mru(). This allows us to lazily initialize these fields without caller doing that explicitly] Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-15  http: eliminate "# service" line when using protocol v2  (Brandon Williams, 1 file, -2/+6)
When an http info/refs request is made, requesting that protocol v2 be used, don't send a "# service" line since this line is not part of the v2 spec. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-28  Merge branch 'rs/resolve-ref-optional-result'  (Junio C Hamano, 1 file, -2/+1)
Code clean-up.

* rs/resolve-ref-optional-result:
  refs: pass NULL to resolve_ref_unsafe() if hash is not needed
  refs: pass NULL to refs_resolve_ref_unsafe() if hash is not needed
  refs: make sha1 output parameter of refs_resolve_ref_unsafe() optional
2017-09-25  Merge branch 'jk/write-in-full-fix'  (Junio C Hamano, 1 file, -2/+2)
Many codepaths did not diagnose write failures correctly when disks go full, due to their misuse of the write_in_full() helper function, which have been corrected.

* jk/write-in-full-fix:
  read_pack_header: handle signed/unsigned comparison in read result
  config: flip return value of store_write_*()
  notes-merge: use ssize_t for write_in_full() return value
  pkt-line: check write_in_full() errors against "< 0"
  convert less-trivial versions of "write_in_full() != len"
  avoid "write_in_full(fd, buf, len) != len" pattern
  get-tar-commit-id: check write_in_full() return against 0
  config: avoid "write_in_full(fd, buf, len) < len" pattern
2017-09-24  refs: pass NULL to resolve_ref_unsafe() if hash is not needed  (René Scharfe, 1 file, -2/+1)
This allows us to get rid of some write-only variables, among them seven SHA1 buffers. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14  avoid "write_in_full(fd, buf, len) != len" pattern  (Jeff King, 1 file, -2/+2)
The return value of write_in_full() is either "-1", or the requested number of bytes[1]. If we make a partial write before seeing an error, we still return -1, not a partial value. This goes back to f6aa66cb95 (write_in_full: really write in full or return error on disk full., 2007-01-11).

So checking anything except "was the return value negative" is pointless. And there are a couple of reasons not to do so:

  1. It can do a funny signed/unsigned comparison. If your "len" is unsigned (e.g., a size_t) then the compiler will promote the "-1" to its unsigned variant. This works out for "!= len" (unless you really were trying to write the maximum size_t bytes), but is a bug if you check "< len" (an example of which was fixed recently in config.c). We should avoid promoting the mental model that you need to check the length at all, so that new sites are not tempted to copy us.

  2. Checking for a negative value is shorter to type, especially when the length is an expression.

  3. Linus says so. In d34cf19b89 (Clean up write_in_full() users, 2007-01-11), right after the write_in_full() semantics were changed, he wrote:

       I really wish every "write_in_full()" user would just check against "<0" now, but this fixes the nasty and stupid ones.

Appeals to authority aside, this makes it clear that writing it this way does not have an intentional benefit. It's a historical curiosity that we never bothered to clean up (and which was undoubtedly cargo-culted into new sites).

So let's convert these obviously-correct cases (this includes write_str_in_full(), which is just a wrapper for write_in_full()).

[1] A careful reader may notice there is one way that write_in_full() can return a different value. If we ask write() to write N bytes and get a return value that is _larger_ than N, we could return a larger total. But besides the fact that this would imply a totally broken version of write(), it would already invoke undefined behavior. Our internal remaining counter is an unsigned size_t, which means that subtracting too many bytes will wrap it around to a very large number. So we'll instantly begin reading off the end of the buffer, trying to write gigabytes (or petabytes) of data.

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
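A small sketch of the preferred check, using a stand-in wrapper with the same contract described above (write everything or return -1); write_all() is not git's function, only an illustration:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Stand-in with write_in_full()'s contract: all bytes written, or -1. */
    static ssize_t write_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        size_t remaining = len;
        while (remaining) {
            ssize_t n = write(fd, p, remaining);
            if (n < 0)
                return -1;
            p += n;
            remaining -= n;
        }
        return len;
    }

    int main(void)
    {
        const char *msg = "hello\n";
        /* Preferred: check for a negative result ... */
        if (write_all(1, msg, strlen(msg)) < 0)
            perror("write");
        /* ... not "!= len", which invites signed/unsigned surprises. */
        return 0;
    }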
2017-08-23  pack: move {,re}prepare_packed_git and approximate_object_count  (Jonathan Tan, 1 file, -0/+1)
Signed-off-by: Jonathan Tan <jonathantanmy@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-24  Merge branch 'bw/config-h'  (Junio C Hamano, 1 file, -0/+1)
Fix configuration codepath to pay proper attention to commondir that is used in multi-worktree situation, and isolate config API into its own header file.

* bw/config-h:
  config: don't implicitly use gitdir or commondir
  config: respect commondir
  setup: teach discover_git_directory to respect the commondir
  config: don't include config.h by default
  config: remove git_config_iter
  config: create config.h
2017-06-15  config: don't include config.h by default  (Brandon Williams, 1 file, -0/+1)
Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-29  Merge branch 'bc/object-id'  (Junio C Hamano, 1 file, -1/+1)
Conversion from uchar[20] to struct object_id continues.

* bc/object-id: (53 commits)
  object: convert parse_object* to take struct object_id
  tree: convert parse_tree_indirect to struct object_id
  sequencer: convert do_recursive_merge to struct object_id
  diff-lib: convert do_diff_cache to struct object_id
  builtin/ls-tree: convert to struct object_id
  merge: convert checkout_fast_forward to struct object_id
  sequencer: convert fast_forward_to to struct object_id
  builtin/ls-files: convert overlay_tree_on_cache to object_id
  builtin/read-tree: convert to struct object_id
  sha1_name: convert internals of peel_onion to object_id
  upload-pack: convert remaining parse_object callers to object_id
  revision: convert remaining parse_object callers to object_id
  revision: rename add_pending_sha1 to add_pending_oid
  http-push: convert process_ls_object and descendants to object_id
  refs/files-backend: convert many internals to struct object_id
  refs: convert struct ref_update to use struct object_id
  ref-filter: convert some static functions to struct object_id
  Convert struct ref_array_item to struct object_id
  Convert the verify_pack callback to struct object_id
  Convert lookup_tag to struct object_id
  ...
2017-05-08  object: convert parse_object* to take struct object_id  (brian m. carlson, 1 file, -1/+1)
Make parse_object, parse_object_or_die, and parse_object_buffer take a pointer to struct object_id. Remove the temporary variables inserted earlier, since they are no longer necessary. Transform all of the callers using the following semantic patch:

    @@
    expression E1;
    @@
    - parse_object(E1.hash)
    + parse_object(&E1)

    @@
    expression E1;
    @@
    - parse_object(E1->hash)
    + parse_object(E1)

    @@
    expression E1, E2;
    @@
    - parse_object_or_die(E1.hash, E2)
    + parse_object_or_die(&E1, E2)

    @@
    expression E1, E2;
    @@
    - parse_object_or_die(E1->hash, E2)
    + parse_object_or_die(E1, E2)

    @@
    expression E1, E2, E3, E4, E5;
    @@
    - parse_object_buffer(E1.hash, E2, E3, E4, E5)
    + parse_object_buffer(&E1, E2, E3, E4, E5)

    @@
    expression E1, E2, E3, E4, E5;
    @@
    - parse_object_buffer(E1->hash, E2, E3, E4, E5)
    + parse_object_buffer(E1, E2, E3, E4, E5)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-27  timestamp_t: a new data type for timestamps  (Johannes Schindelin, 1 file, -2/+2)
Git's source code assumes that unsigned long is at least as precise as time_t. Which is incorrect, and causes a lot of problems, in particular where unsigned long is only 32-bit (notably on Windows, even in 64-bit versions). So let's just use a more appropriate data type instead. In preparation for this, we introduce the new `timestamp_t` data type. By necessity, this is a very, very large patch, as it has to replace all timestamps' data type in one go. As we will use a data type that is not necessarily identical to `time_t`, we need to be very careful to use `time_t` whenever we interact with the system functions, and `timestamp_t` everywhere else. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-10-17  pkt-line: rename packet_write() to packet_write_fmt()  (Lars Schneider, 1 file, -1/+1)
packet_write() should be called packet_write_fmt() because it is a printf-like function that takes a format string as first parameter. packet_write_fmt() should be used for text strings only. Arbitrary binary data should use a new packet_write() function that is introduced in a subsequent patch. Suggested-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Lars Schneider <larsxschneider@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-08-12  Merge branch 'ew/http-backend-batch-headers'  (Junio C Hamano, 1 file, -104/+116)
The http-backend (the server-side component of smart-http transport) used to trickle the HTTP header one at a time. Now these write(2)s are batched.

* ew/http-backend-batch-headers:
  http-backend: buffer headers before sending
2016-08-10  http-backend: buffer headers before sending  (Eric Wong, 1 file, -104/+116)
Avoid waking up the readers for unnecessary context switches for each line of header data being written, as all the headers are written in short succession. It is unlikely any HTTP/1.x server would want to read a CGI response one-line-at-a-time and trickle each to the client. Instead, I'd expect HTTP servers want to minimize syscall and TCP/IP framing overhead by trying to send all of its response headers in a single syscall or even combining the headers and first chunk of the body with MSG_MORE or writev. Verified by strace-ing response parsing on the CGI side. Signed-off-by: Eric Wong <e@80x24.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
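A rough sketch of the batching idea with stand-in types and names (not the actual http-backend code, which accumulates headers in git's own buffer type and flushes them together):

    #include <string.h>
    #include <unistd.h>

    /* Collect all response headers in one buffer and emit them with a
     * single write(2) instead of one write per header line. */
    struct header_buf {
        char buf[8192];
        size_t len;
    };

    static void add_header(struct header_buf *h, const char *line)
    {
        size_t n = strlen(line);
        if (h->len + n + 2 > sizeof(h->buf))
            return;                 /* sketch: real code would grow the buffer */
        memcpy(h->buf + h->len, line, n);
        h->len += n;
        h->buf[h->len++] = '\r';
        h->buf[h->len++] = '\n';
    }

    static void end_headers(struct header_buf *h, int fd)
    {
        add_header(h, "");          /* blank line terminates the header block */
        write(fd, h->buf, h->len);  /* one syscall for the whole batch */
        h->len = 0;
    }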
2016-07-06  Merge branch 'jk/common-main-2.8' into jk/common-main  (Junio C Hamano, 1 file, -4/+1)
* jk/common-main-2.8:
  mingw: declare main()'s argv as const
  common-main: call git_setup_gettext()
  common-main: call restore_sigpipe_to_default()
  common-main: call sanitize_stdfds()
  common-main: call git_extract_argv0_path()
  add an extra level of indirection to main()
2016-07-01  common-main: call git_setup_gettext()  (Jeff King, 1 file, -2/+0)
This should be part of every program, as otherwise users do not get translated error messages. However, some external commands forgot to do so (e.g., git-credential-store). This fixes them, and eliminates the repeated code in programs that did remember to use it. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-01  common-main: call git_extract_argv0_path()  (Jeff King, 1 file, -1/+0)
Every program which links against libgit.a must call this function, or risk hitting an assert() in system_path() that checks whether we have configured argv0_path (though only when RUNTIME_PREFIX is defined, so essentially only on Windows).

Looking at the diff, you can see that putting it into the common main() saves us having to do it individually in each of the external commands. But what you can't see are the cases where we _should_ have been doing so, but weren't (e.g., git-credential-store, and all of the t/helper test programs).

This has been an accident-waiting-to-happen for a long time, but wasn't triggered until recently because it involves one of those programs actually calling system_path(). That happened with git-credential-store in v2.8.0 with ae5f677 (lazily load core.sharedrepository, 2016-03-11). The program:

  - takes a lock file, which...
  - opens a tempfile, which...
  - calls adjust_shared_perm to fix permissions, which...
  - lazy-loads the config (as of ae5f677), which...
  - calls system_path() to find the location of /etc/gitconfig

On systems with RUNTIME_PREFIX, this means credential-store reliably hits that assert() and cannot be used. We never noticed in the test suite, because we set GIT_CONFIG_NOSYSTEM there, which skips the system_path() lookup entirely. But if we were to tweak git_config() to find /etc/gitconfig even when we aren't going to open it, then the test suite shows multiple failures (for credential-store, and for some other test helpers). I didn't include that tweak here because it's way too specific to this particular call to be worth carrying around what is essentially dead code.

The implementation is fairly straightforward, with one exception: there is exactly one caller (git.c) that actually cares about the result of the function, and not the side-effect of setting up argv0_path. We can accommodate that by simply replacing the value of argv[0] in the array we hand down to cmd_main().

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-07-01  add an extra level of indirection to main()  (Jeff King, 1 file, -1/+1)
There are certain startup tasks that we expect every git process to do. In some cases this is just to improve the quality of the program (e.g., setting up gettext()). In others it is a requirement for using certain functions in libgit.a (e.g., system_path() expects that you have called git_extract_argv0_path()).

Most commands are builtins and are covered by the git.c version of main(). However, there are still a few external commands that use their own main(). Each of these has to remember to include the correct startup sequence, and we are not always consistent.

Rather than just fix the inconsistencies, let's make this harder to get wrong by providing a common main() that can run this standard startup. We basically have two options to do this:

  - the compat/mingw.h file already does something like this by adding a #define that replaces the definition of main with a wrapper that calls mingw_startup(). The upside is that the code in each program doesn't need to be changed at all; it's rewritten on the fly by the preprocessor. The downside is that it may make debugging of the startup sequence a bit more confusing, as the preprocessor is quietly inserting new code.

  - the builtin functions are all of the form cmd_foo(), and git.c's main() calls them. This is much more explicit, which may make things more obvious to somebody reading the code. It's also more flexible (because of course we have to figure out _which_ cmd_foo() to call). The downside is that each of the builtins must define cmd_foo(), instead of just main().

This patch chooses the latter option, preferring the more explicit approach, even though it is more invasive. We introduce a new file common-main.c, with the "real" main. It expects to call cmd_main() from whatever other objects it is linked against. We link common-main.o against anything that links against libgit.a, since we know that such programs will need to do this setup. Note that common-main.o can't actually go inside libgit.a, as the linker would not pick up its main() function automatically (it has no callers).

The rest of the patch is just adjusting all of the various external programs (mostly in t/helper) to use cmd_main(). I've provided a global declaration for cmd_main(), which means that all of the programs also need to match its signature. In particular, many functions need to switch to "const char **" instead of "char **" for argv. This effect ripples out to a few other variables and functions, as well. This makes the patch even more invasive, but the end result is much better. We should be treating argv strings as const anyway, and now all programs conform to the same signature (which also matches the way builtins are defined).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
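A condensed sketch of the indirection described above; the startup steps are elided into a comment, and the second file is illustrative:

    /* common-main.c (sketch): the one real main(), shared by every program. */
    int cmd_main(int argc, const char **argv);

    int main(int argc, const char **argv)
    {
        /* ... common startup: argv0 path, gettext, signal/stdfd sanitizing ... */
        return cmd_main(argc, argv);
    }

    /* some-external-command.c (sketch): defines cmd_main() instead of main(). */
    int cmd_main(int argc, const char **argv)
    {
        /* command-specific work goes here */
        return 0;
    }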
2016-04-10  show_head_ref(): check the result of resolve_ref_namespace()  (Michael Haggerty, 1 file, -2/+2)
Only use the result of resolve_ref_namespace() if it is non-NULL. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: David Turner <dturner@twopensource.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-11-20  Convert struct object to object_id  (brian m. carlson, 1 file, -1/+1)
struct object is one of the major data structures dealing with object IDs. Convert it to use struct object_id instead of an unsigned char array. Convert get_object_hash to refer to the new member as well. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-08-10  prefer git_pathdup to git_path in some possibly-dangerous cases  (Jeff King, 1 file, -1/+2)
Because git_path uses a static buffer that is shared with calls to git_path, mkpath, etc, it can be dangerous to assign the result to a variable or pass it to a non-trivial function. The value may change unexpectedly due to other calls.

None of the cases changed here has a known bug, but they're worth converting away from git_path because:

  1. It's easy to use git_pathdup in these cases.

  2. They use constructs (like assignment) that make it hard to tell whether they're safe or not.

The extra malloc overhead should be trivial, as an allocation should be an order of magnitude cheaper than a system call (which we are clearly about to make, since we are constructing a filename). The real cost is that we must remember to free the result.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
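A minimal illustration of the static-buffer hazard, using stand-in functions rather than git's actual git_path/git_pathdup implementations:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Stand-in for a git_path-like helper: returns a shared static buffer. */
    static const char *repo_path(const char *name)
    {
        static char buf[256];
        snprintf(buf, sizeof(buf), ".git/%s", name);
        return buf;
    }

    int main(void)
    {
        const char *a = repo_path("HEAD");      /* dangerous: aliases buf */
        const char *b = repo_path("config");    /* overwrites it */
        printf("%s %s\n", a, b);                /* both now show ".git/config" */

        char *safe = strdup(repo_path("HEAD")); /* the _dup approach: take a copy */
        printf("%s\n", safe);
        free(safe);
        return 0;
    }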
2015-06-29  convert "enum date_mode" into a struct  (Jeff King, 1 file, -1/+1)
In preparation for adding date modes that may carry extra information beyond the mode itself, this patch converts the date_mode enum into a struct.

Most of the conversion is fairly straightforward; we pass the struct as a pointer and dereference the type field where necessary. Locations that declare a date_mode can use a "{}" constructor. However, the tricky case is where we use the enum labels as constants, like:

    show_date(t, tz, DATE_NORMAL);

Ideally we could say:

    show_date(t, tz, &{ DATE_NORMAL });

but of course C does not allow that. Likewise, we cannot cast the constant to a struct, because we need to pass an actual address. Our options are basically:

  1. Manually add a "struct date_mode d = { DATE_NORMAL }" definition to each caller, and pass "&d". This makes the callers uglier, because they sometimes do not even have their own scope (e.g., they are inside a switch statement).

  2. Provide a pre-made global "date_normal" struct that can be passed by address. We'd also need "date_rfc2822", "date_iso8601", and so forth. But at least the ugliness is defined in one place.

  3. Provide a wrapper that generates the correct struct on the fly. The big downside is that we end up pointing to a single global, which makes our wrapper non-reentrant. But show_date is already not reentrant, so it does not matter.

This patch implements 3, along with a minor macro to keep the size of the callers sane.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
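A hedged sketch of option 3 with simplified stand-in types; git's real macro and helper differ in detail:

    /* Sketch of the wrapper approach: a static struct is filled on the fly
     * and its address returned, so callers can write DATE_MODE(NORMAL). */
    enum date_mode_type { DATE_NORMAL, DATE_RELATIVE, DATE_ISO8601 };

    struct date_mode {
        enum date_mode_type type;
        /* room for extra per-mode information later, e.g. a format string */
    };

    #define DATE_MODE(t) date_mode_from_type(DATE_##t)

    static const struct date_mode *date_mode_from_type(enum date_mode_type type)
    {
        static struct date_mode mode;   /* non-reentrant, like show_date itself */
        mode.type = type;
        return &mode;
    }

    /* usage at a call site: show_date(t, tz, DATE_MODE(NORMAL)); */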
2015-06-05  Merge branch 'bc/object-id'  (Junio C Hamano, 1 file, -9/+9)
for_each_ref() callback functions were taught to name the objects not with "unsigned char sha1[20]" but with "struct object_id".

* bc/object-id: (56 commits)
  struct ref_lock: convert old_sha1 member to object_id
  warn_if_dangling_symref(): convert local variable "junk" to object_id
  each_ref_fn_adapter(): remove adapter
  rev_list_insert_ref(): remove unneeded arguments
  rev_list_insert_ref_oid(): new function, taking an object_oid
  mark_complete(): remove unneeded arguments
  mark_complete_oid(): new function, taking an object_oid
  clear_marks(): rewrite to take an object_id argument
  mark_complete(): rewrite to take an object_id argument
  send_ref(): convert local variable "peeled" to object_id
  upload-pack: rewrite functions to take object_id arguments
  find_symref(): convert local variable "unused" to object_id
  find_symref(): rewrite to take an object_id argument
  write_one_ref(): rewrite to take an object_id argument
  write_refs_to_temp_dir(): convert local variable sha1 to object_id
  submodule: rewrite to take an object_id argument
  shallow: rewrite functions to take object_id arguments
  handle_one_ref(): rewrite to take an object_id argument
  add_info_ref(): rewrite to take an object_id argument
  handle_one_reflog(): rewrite to take an object_id argument
  ...
2015-05-25  http-backend: spool ref negotiation requests to buffer  (Jeff King, 1 file, -11/+85)
When http-backend spawns "upload-pack" to do ref negotiation, it streams the http request body to upload-pack, who then streams the http response back to the client as it reads. In theory, git can go full-duplex; the client can consume our response while it is still sending the request.

In practice, however, HTTP is a half-duplex protocol. Even if our client is ready to read and write simultaneously, we may have other HTTP infrastructure in the way, including the webserver that spawns our CGI, or any intermediate proxies. In at least one documented case[1], this leads to deadlock when trying a fetch over http. What happens is basically:

  1. Apache proxies the request to the CGI, http-backend.

  2. http-backend gzip-inflates the data and sends the result to upload-pack.

  3. upload-pack acts on the data and generates output over the pipe back to Apache. Apache isn't reading because it's busy writing (step 1).

This works fine most of the time, because the upload-pack output ends up in a system pipe buffer, and Apache reads it as soon as it finishes writing. But if both the request and the response exceed the system pipe buffer size, then we deadlock (Apache blocks writing to http-backend, http-backend blocks writing to upload-pack, and upload-pack blocks writing to Apache).

We need to break the deadlock by spooling either the input or the output. In this case, it's ideal to spool the input, because Apache does not start reading either stdout _or_ stderr until we have consumed all of the input. So until we do so, we cannot even get an error message out to the client.

The solution is fairly straight-forward: we read the request body into an in-memory buffer in http-backend, freeing up Apache, and then feed the data ourselves to upload-pack. But there are a few important things to note:

  1. We limit the in-memory buffer to prevent an obvious denial-of-service attack. This is a new hard limit on requests, but it's unlikely to come into play. The default value is 10MB, which covers even the ridiculous 100,000-ref negotiation in the included test (that actually caps out just over 5MB). But it's configurable on the off chance that you don't mind spending some extra memory to make even ridiculous requests work.

  2. We must take care only to buffer when we have to. For pushes, the incoming packfile may be of arbitrary size, and we should connect the input directly to receive-pack. There's no deadlock problem here, though, because we do not produce any output until the whole packfile has been read. For upload-pack's initial ref advertisement, we similarly do not need to buffer. Even though we may generate a lot of output, there is no request body at all (i.e., it is a GET, not a POST).

[1] http://article.gmane.org/gmane.comp.version-control.git/269020

Test-adapted-from: Dennis Kaarsemaker <dennis@kaarsemaker.net>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
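A hedged sketch of the spooling step with stand-in names and trimmed error handling; the real code uses git's own buffer type, enforces the configurable limit mentioned above, and only spools for the upload-pack negotiation POST:

    #include <stdlib.h>
    #include <unistd.h>

    /* Slurp the request body into memory, up to a hard cap, so the web
     * server is freed before we start feeding upload-pack ourselves. */
    static char *spool_request_body(int fd, size_t max_buffer, size_t *out_len)
    {
        char *buf = NULL;
        size_t len = 0, alloc = 0;

        for (;;) {
            ssize_t n;
            if (len == alloc) {
                if (alloc >= max_buffer)
                    return NULL;            /* at the cap: refuse the request */
                alloc = alloc ? alloc * 2 : 8192;
                if (alloc > max_buffer)
                    alloc = max_buffer;
                buf = realloc(buf, alloc);
            }
            n = read(fd, buf + len, alloc - len);
            if (n < 0)
                return NULL;                /* sketch: real code reports the error */
            if (!n)
                break;                      /* EOF: whole body is in memory */
            len += n;
        }
        *out_len = len;
        return buf;
    }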
2015-05-25  show_head_ref(): convert local variable "unused" to object_id  (Michael Haggerty, 1 file, -2/+2)
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25  http-backend: rewrite to take an object_id argument  (Michael Haggerty, 1 file, -14/+9)
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-25  each_ref_fn: change to take an object_id parameter  (Michael Haggerty, 1 file, -2/+7)
Change typedef each_ref_fn to take a "const struct object_id *oid" parameter instead of "const unsigned char *sha1".

To aid this transition, implement an adapter that can be used to wrap old-style functions matching the old typedef (which is now called "each_ref_sha1_fn"), and make such functions callable via the new interface. This requires the old function and its cb_data to be wrapped in a "struct each_ref_fn_sha1_adapter", and that object to be used as the cb_data for an adapter function, each_ref_fn_adapter().

This is an enormous diff, but most of it consists of simple, mechanical changes to the sites that call any of the "for_each_ref" family of functions. Subsequent to this change, the call sites can be rewritten one by one to use the new interface.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
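A simplified sketch of the adapter pattern described above; the struct object_id definition and callback signatures here are reduced stand-ins, not git's real declarations:

    struct object_id { unsigned char hash[20]; };   /* 20 bytes at the time */

    typedef int each_ref_fn(const char *refname,
                            const struct object_id *oid, int flags, void *cb_data);
    typedef int each_ref_sha1_fn(const char *refname,
                                 const unsigned char *sha1, int flags, void *cb_data);

    /* Wrap an old-style function plus its cb_data ... */
    struct each_ref_fn_sha1_adapter {
        each_ref_sha1_fn *fn;
        void *cb_data;
    };

    /* ... and expose it through the new object_id-based interface. */
    static int each_ref_fn_adapter(const char *refname,
                                   const struct object_id *oid, int flags,
                                   void *cb_data)
    {
        struct each_ref_fn_sha1_adapter *a = cb_data;
        return a->fn(refname, oid->hash, flags, a->cb_data);
    }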
2015-05-15  http-backend: fix die recursion with custom handler  (Jeff King, 1 file, -5/+9)
When we die() in http-backend, we call a custom handler that writes an HTTP 500 response to stdout, then reports the error to stderr. Our routines for writing out the HTTP response may themselves die, leading to us entering die() again.

When it was originally written, that was OK; our custom handler keeps a variable to notice this and does not recurse. However, since cd163d4 (usage.c: detect recursion in die routines and bail out immediately, 2012-11-14), the main die() implementation detects recursion before we even get to our custom handler, and bails without printing anything useful.

We can handle this case by doing two things:

  1. Installing a custom die_is_recursing handler that allows us to enter up to one level of recursion. Only the first call to our custom handler will try to write out the error response. So if we die again, that is OK. If we end up dying more than that, it is a sign that we are in an infinite recursion.

  2. Reporting the error to stderr before trying to write out the HTTP response. In the current code, if we do die() trying to write out the response, we'll exit immediately from this second die(), and never get a chance to output the original error (which is almost certainly the more interesting one; the second die is just going to be along the lines of "I tried to write to stdout but it was closed").

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
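A minimal sketch of the recursion allowance from point 1; the function and counter names are illustrative, and in git this kind of predicate is installed through the die-is-recursing hook in usage.c rather than called directly:

    /* Allow one level of re-entry into die(): the first death writes the
     * HTTP 500 response; a second death while doing so is tolerated; a
     * third indicates genuine infinite recursion. */
    static int dead;

    static int die_webcgi_recursing(void)
    {
        return dead++ > 1;
    }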