path: root/remote-curl.c
2018-05-08  Merge branch 'ma/http-walker-no-partial'  (Junio C Hamano, 1 file, -3/+0)

"git http-fetch" (deprecated) had an optional and experimental "feature" to fetch only commits and/or trees, which nobody used. This has been removed.

* ma/http-walker-no-partial:
  walker: drop fields of `struct walker` which are always 1
  http-fetch: make `-a` standard behaviour

2018-05-08  Merge branch 'bw/protocol-v2'  (Junio C Hamano, 1 file, -4/+276)

The beginning of the next-gen transfer protocol.

* bw/protocol-v2: (35 commits)
  remote-curl: don't request v2 when pushing
  remote-curl: implement stateless-connect command
  http: eliminate "# service" line when using protocol v2
  http: don't always add Git-Protocol header
  http: allow providing extra headers for http requests
  remote-curl: store the protocol version the server responded with
  remote-curl: create copy of the service name
  pkt-line: add packet_buf_write_len function
  transport-helper: introduce stateless-connect
  transport-helper: refactor process_connect_service
  transport-helper: remove name parameter
  connect: don't request v2 when pushing
  connect: refactor git_connect to only get the protocol version once
  fetch-pack: support shallow requests
  fetch-pack: perform a fetch using v2
  upload-pack: introduce fetch server command
  push: pass ref prefixes when pushing
  fetch: pass ref prefixes when fetching
  ls-remote: pass ref prefixes when requesting a remote's refs
  transport: convert transport_get_remote_refs to take a list of ref prefixes
  ...

2018-04-24  walker: drop fields of `struct walker` which are always 1  (Martin Ågren, 1 file, -3/+0)

After the previous commit, both users of `struct walker` set `get_tree`, `get_history` and `get_all` to 1. Drop those fields and simplify the walker implementation accordingly.

Let's hope that any out-of-tree users will not mind this change. They should notice that the compilation fails as they try to set these fields. (If they do not set them, note that `get_http_walker()` leaves them undefined, so the behavior will have been undefined all the time.)

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-04-11  exec_cmd: rename to use dash in file name  (Stefan Beller, 1 file, -1/+1)

This is more consistent with the project style. The majority of Git's source files use dashes in preference to underscores in their file names.

Signed-off-by: Stefan Beller <sbeller@google.com>

2018-03-15  remote-curl: don't request v2 when pushing  (Brandon Williams, 1 file, -1/+10)

In order to be able to ship protocol v2 with only supporting fetch, we need clients to not issue a request to use protocol v2 when pushing (since the client currently doesn't know how to push using protocol v2). This allows a client to have protocol v2 configured in `protocol.version` and take advantage of using v2 for fetch and falling back to using v0 when pushing while v2 for push is being designed.

We could run into issues if we didn't fall back to v0 when pushing right now. This is because currently a server will ignore a request to use v2 when contacting the 'receive-pack' endpoint and fall back to using v0, but when push v2 is rolled out to servers, the 'receive-pack' endpoint will start responding using v2. So we don't want to get into a state where a client is requesting to push with v2 before they actually know how to push using v2.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-03-15  remote-curl: implement stateless-connect command  (Brandon Williams, 1 file, -1/+206)

Teach remote-curl the 'stateless-connect' command which is used to establish a stateless connection with servers which support protocol version 2. This allows remote-curl to act as a proxy, allowing the git client to communicate natively with a remote end, simply using remote-curl as a pass through to convert requests to http.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-03-15  http: eliminate "# service" line when using protocol v2  (Brandon Williams, 1 file, -0/+3)

When an http info/refs request is made, requesting that protocol v2 be used, don't send a "# service" line since this line is not part of the v2 spec.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-03-15  http: don't always add Git-Protocol header  (Brandon Williams, 1 file, -0/+33)

Instead of always sending the Git-Protocol header with the configured version with every http request, explicitly send it when discovering refs and then only send it on subsequent http requests if the server understood the version requested.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-03-15  remote-curl: store the protocol version the server responded with  (Brandon Williams, 1 file, -1/+3)

Store the protocol version the server responded with when performing discovery. This will be used in a future patch to either change the 'Git-Protocol' header sent in subsequent requests or to determine if a client needs to fall back to using a different protocol version.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-03-15  remote-curl: create copy of the service name  (Brandon Williams, 1 file, -2/+3)

Make a copy of the service name being requested instead of relying on the buffer pointed to by the passed in 'const char *' to remain unchanged.

Currently, all service names are string constants, but a subsequent patch will introduce service names from external sources.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-03-14  protocol: introduce enum protocol_version value protocol_v2  (Brandon Williams, 1 file, -0/+3)

Introduce protocol_v2, a new value for 'enum protocol_version'. Subsequent patches will fill in the implementation of protocol_v2.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-03-14  connect: discover protocol version outside of get_remote_heads  (Brandon Williams, 1 file, -2/+18)

In order to prepare for the addition of protocol_v2, push the protocol version discovery outside of 'get_remote_heads()'. This will allow for keeping the logic for processing the reference advertisement for protocol_v1 and protocol_v0 separate from the logic for protocol_v2.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-02-28  Merge branch 'jk/push-options-via-transport-fix'  (Junio C Hamano, 1 file, -1/+10)

"git push" over http transport did not unquote the push-options correctly.

* jk/push-options-via-transport-fix:
  remote-curl: unquote incoming push-options
  t5545: factor out http repository setup

2018-02-27  Merge branch 'js/packet-read-line-check-null'  (Junio C Hamano, 1 file, -0/+2)

Some low-level protocol codepaths could crash when they get an unexpected flush packet, which is now fixed.

* js/packet-read-line-check-null:
  always check for NULL return from packet_read_line()
  correct error messages for NULL packet_read_line()

2018-02-20  remote-curl: unquote incoming push-options  (Jeff King, 1 file, -1/+10)

The transport-helper protocol c-style quotes the value of any options passed to the helper via the "option <key> <value>" directive. However, remote-curl doesn't actually unquote the push-option values, meaning that we will send the quoted version to the other side (whereas git-over-ssh would send the raw value).

The pack-protocol.txt documentation defines the push-options as a series of VCHARs, which excludes most characters that would need quoting. But:

  1. You can still see the bug with a valid push-option that starts with a double-quote (since that triggers quoting).

  2. We do currently handle any non-NUL characters correctly in git-over-ssh. So even though the spec does not say that we need to handle most quoted characters, it's nice if our behavior is consistent between protocols.

There are two new tests: the "direct" one shows that this already works in the non-http case, and the http one covers this bugfix.

Reported-by: Jon Simons <jon@jonsimons.org>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

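Roughly the kind of unquoting involved, using git's c-style quote helper from quote.h. This is a sketch with an invented helper name, not the literal patch:

    #include "cache.h"
    #include "quote.h"

    /* If the helper sent a c-style quoted value, strip the quoting before
     * forwarding it; otherwise pass the raw value through unchanged. */
    static char *dequote_push_option(const char *value)
    {
            struct strbuf unquoted = STRBUF_INIT;

            if (*value != '"')
                    return xstrdup(value);
            if (unquote_c_style(&unquoted, value, NULL))
                    die("invalid quoting in push-option value: '%s'", value);
            return strbuf_detach(&unquoted, NULL);
    }
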
2018-02-08  always check for NULL return from packet_read_line()  (Jon Simons, 1 file, -0/+2)

The packet_read_line() function will die if it sees any protocol or socket errors. But it will return NULL for a flush packet; some callers which are not expecting this may dereference NULL if they get an unexpected flush. This would involve the other side breaking protocol, but we should flag the error rather than segfault.

Signed-off-by: Jon Simons <jon@jonsimons.org>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

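The pattern this enforces, as a minimal sketch inside git's tree. The caller and error message are illustrative, not the actual call sites touched by the patch:

    #include "cache.h"
    #include "pkt-line.h"

    /* Treat an unexpected flush packet (NULL return) as a protocol error
     * instead of dereferencing it. */
    static void read_one_response_line(int fd)
    {
            char *line = packet_read_line(fd, NULL);

            if (!line)
                    die("expected a response line, got a flush packet");
            /* ... use line ... */
    }
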
2017-12-08  fetch: support filters  (Jeff Hostetler, 1 file, -0/+6)

Teach fetch to support filters. This is only allowed for the remote configured in extensions.partialcloneremote.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-12-05  introduce fetch-object: fetch one promisor object  (Jonathan Tan, 1 file, -1/+13)

Introduce fetch-object, providing the ability to fetch one object from a promisor remote.

This uses fetch-pack. To do this, the transport mechanism has been updated with 2 flags, "from-promisor" to indicate that the resulting pack comes from a promisor remote (and thus should be annotated as such by index-pack), and "no-dependents" to indicate that only the objects themselves need to be fetched (but fetching additional objects is nevertheless safe).

Whenever "no-dependents" is used, fetch-pack will refrain from using any object flags, because it is most likely invoked as part of a dynamic object fetch by another Git command (which may itself use object flags). An alternative to this is to leave fetch-pack alone, and instead update the allocation of flags so that fetch-pack's flags never overlap with any others, but this will end up shrinking the number of flags available to nearly every other Git command (that is, every Git command that accesses objects), so the approach in this commit was used instead.

This will be tested in a subsequent commit.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-06-15  config: don't include config.h by default  (Brandon Williams, 1 file, -0/+1)

Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-04-23  Merge branch 'dt/http-postbuffer-can-be-large'  (Junio C Hamano, 1 file, -3/+9)

Allow the http.postbuffer configuration variable to be set to a size that can be expressed in size_t, which can be larger than ulong on some platforms.

* dt/http-postbuffer-can-be-large:
  http.postbuffer: allow full range of ssize_t values

2017-04-19  Merge branch 'bc/object-id'  (Junio C Hamano, 1 file, -2/+2)

Conversion from unsigned char [40] to struct object_id continues.

* bc/object-id:
  Documentation: update and rename api-sha1-array.txt
  Rename sha1_array to oid_array
  Convert sha1_array_for_each_unique and for_each_abbrev to object_id
  Convert sha1_array_lookup to take struct object_id
  Convert remaining callers of sha1_array_lookup to object_id
  Make sha1_array_append take a struct object_id *
  sha1-array: convert internal storage for struct sha1_array to object_id
  builtin/pull: convert to struct object_id
  submodule: convert check_for_new_submodule_commits to object_id
  sha1_name: convert disambiguate_hint_fn to take object_id
  sha1_name: convert struct disambiguate_state to object_id
  test-sha1-array: convert most code to struct object_id
  parse-options-cb: convert sha1_array_append caller to struct object_id
  fsck: convert init_skiplist to struct object_id
  builtin/receive-pack: convert portions to struct object_id
  builtin/pull: convert portions to struct object_id
  builtin/diff: convert to struct object_id
  Convert GIT_SHA1_RAWSZ used for allocation to GIT_MAX_RAWSZ
  Convert GIT_SHA1_HEXSZ used for allocation to GIT_MAX_HEXSZ
  Define new hash-size constants for allocating memory

2017-04-13  http.postbuffer: allow full range of ssize_t values  (David Turner, 1 file, -3/+9)

Unfortunately, in order to push some large repos where a server does not support chunked encoding, the http postbuffer must sometimes exceed two gigabytes. On a 64-bit system, this is OK: we just malloc a larger buffer.

This means that we need to use CURLOPT_POSTFIELDSIZE_LARGE to set the buffer size.

Signed-off-by: David Turner <dturner@twosigma.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

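A minimal sketch of the curl side of this, assuming a caller that already has the request buffer in hand (not remote-curl's actual code):

    #include <curl/curl.h>

    /* Pass the body length as a curl_off_t via the *_LARGE option so sizes
     * above 2GB are not truncated to a long. */
    static void set_post_body(CURL *curl, const char *buf, curl_off_t len)
    {
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, buf);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE_LARGE, len);
    }
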
2017-03-31  Rename sha1_array to oid_array  (brian m. carlson, 1 file, -1/+1)

Since this structure handles an array of object IDs, rename it to struct oid_array. Also rename the accessor functions and the initialization constant.

This commit was produced mechanically by providing non-Documentation files to the following Perl one-liners:

    perl -pi -E 's/struct sha1_array/struct oid_array/g'
    perl -pi -E 's/\bsha1_array_/oid_array_/g'
    perl -pi -E 's/SHA1_ARRAY_INIT/OID_ARRAY_INIT/g'

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-03-28  sha1-array: convert internal storage for struct sha1_array to object_id  (brian m. carlson, 1 file, -1/+1)

Make the internal storage for struct sha1_array use an array of struct object_id internally. Update the users of this struct which inspect its internals.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-03-22  remote-curl: allow push options  (Brandon Williams, 1 file, -0/+8)

Teach remote-curl to understand push options and to be able to convey them across HTTP.

Signed-off-by: Brandon Williams <bmwill@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-01-10  Merge branch 'dt/smart-http-detect-server-going-away'  (Junio C Hamano, 1 file, -0/+8)

When the http server gives an incomplete response to a smart-http rpc call, it could lead to the client waiting for a full response that will never come. Teach the client side to notice this condition and abort the transfer.

An improvement counterproposal has failed. cf. <20161114194049.mktpsvgdhex2f4zv@sigill.intra.peff.net>

* dt/smart-http-detect-server-going-away:
  upload-pack: optionally allow fetching any sha1
  remote-curl: don't hang when a server dies before any output

2016-12-19  Merge branch 'jk/http-walker-limit-redirect-2.9'  (Junio C Hamano, 1 file, -9/+13)

Transport with dumb http can be fooled into following foreign URLs that the end user does not intend to, especially with the server side redirects and http-alternates mechanism, which can lead to security issues. Tighten the redirection and make it more obvious to the end user when it happens.

* jk/http-walker-limit-redirect-2.9:
  http: treat http-alternates like redirects
  http: make redirects more obvious
  remote-curl: rename shadowed options variable
  http: always update the base URL for redirects
  http: simplify update_url_from_redirect

2016-12-06  http: make redirects more obvious  (Jeff King, 1 file, -0/+4)

We instruct curl to always follow HTTP redirects. This is convenient, but it creates opportunities for malicious servers to create confusing situations. For instance, imagine Alice is a git user with access to a private repository on Bob's server. Mallory runs her own server and wants to access objects from Bob's repository.

Mallory may try a few tricks that involve asking Alice to clone from her, build on top, and then push the result:

  1. Mallory may simply redirect all fetch requests to Bob's server. Git will transparently follow those redirects and fetch Bob's history, which Alice may believe she got from Mallory. The subsequent push seems like it is just feeding Mallory back her own objects, but is actually leaking Bob's objects. There is nothing in git's output to indicate that Bob's repository was involved at all. The downside (for Mallory) of this attack is that Alice will have received Bob's entire repository, and is likely to notice that when building on top of it.

  2. If Mallory happens to know the sha1 of some object X in Bob's repository, she can instead build her own history that references that object. She then runs a dumb http server, and Alice's client will fetch each object individually. When it asks for X, Mallory redirects her to Bob's server. The end result is that Alice obtains objects from Bob, but they may be buried deep in history. Alice is less likely to notice.

Both of these attacks are fairly hard to pull off. There's a social component in getting Mallory to convince Alice to work with her. Alice may be prompted for credentials in accessing Bob's repository (but not always, if she is using a credential helper that caches). Attack (1) requires a certain amount of obliviousness on Alice's part while making a new commit. Attack (2) requires that Mallory knows a sha1 in Bob's repository, that Bob's server supports dumb http, and that the object in question is loose on Bob's server.

But we can probably make things a bit more obvious without any loss of functionality. This patch does two things to that end.

First, when we encounter a whole-repo redirect during the initial ref discovery, we now inform the user on stderr, making attack (1) much more obvious.

Second, the decision to follow redirects is now configurable. The truly paranoid can set the new http.followRedirects to false to avoid any redirection entirely. But for a more practical default, we will disallow redirects only after the initial ref discovery. This is enough to thwart attacks similar to (2), while still allowing the common use of redirects at the repository level. Since c93c92f30 (http: update base URLs when we see redirects, 2013-09-28) we re-root all further requests from the redirect destination, which should generally mean that no further redirection is necessary. As an escape hatch, in case there really is a server that needs to redirect individual requests, the user can set http.followRedirects to "true" (and this can be done on a per-server basis via http.*.followRedirects config).

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-12-06  remote-curl: rename shadowed options variable  (Jeff King, 1 file, -9/+9)

The discover_refs() function has a local "options" variable to hold the http_get_options we pass to http_get_strbuf(). But this shadows the global "struct options" that holds our program-level options, which cannot be accessed from this function. Let's give the local one a more descriptive name so we can tell the two apart.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-11-18  remote-curl: don't hang when a server dies before any output  (David Turner, 1 file, -0/+8)

In the event that an HTTP server closes the connection after giving a 200 but before giving any packets, we don't want to hang forever waiting for a response that will never come. Instead, we should die immediately.

One case where this happens is when attempting to fetch a dangling object by its object name. In this case, the server dies before sending any data. Prior to this patch, fetch-pack would wait for data from the server, and remote-curl would wait for fetch-pack, causing a deadlock.

Despite this patch, there is other possible malformed input that could cause the same deadlock (e.g. a half-finished pktline, or a pktline but no trailing flush). There are a few possible solutions to this:

  1. Allowing remote-curl to tell fetch-pack about the EOF (so that fetch-pack could know that no more data is coming until it says something else). This is tricky because an out-of-band signal would be required, or the http response would have to be re-framed inside another layer of pkt-line or something.

  2. Make remote-curl understand some of the protocol. It turns out that in addition to understanding pkt-line, it would need to watch for ack/nak. This is somewhat fragile, as information about the protocol would end up in two places. Also, pkt-lines which are already at the length limit would need special handling.

Both of these solutions would require a fair amount of work, whereas this hack is easy and solves at least some of the problem.

Still to do: it would be good to give a better error message than "fatal: The remote end hung up unexpectedly".

Signed-off-by: David Turner <dturner@twosigma.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-10-10  Merge branch 'nd/shallow-deepen'  (Junio C Hamano, 1 file, -31/+49)

The existing "git fetch --depth=<n>" option was hard to use correctly when making the history of an existing shallow clone deeper. A new option, "--deepen=<n>", has been added to make this easier to use. "git clone" also learned "--shallow-since=<date>" and "--shallow-exclude=<tag>" options to make it easier to specify "I am interested only in the recent N months worth of history" and "Give me only the history since that version".

* nd/shallow-deepen: (27 commits)
  fetch, upload-pack: --deepen=N extends shallow boundary by N commits
  upload-pack: add get_reachable_list()
  upload-pack: split check_unreachable() in two, prep for get_reachable_list()
  t5500, t5539: tests for shallow depth excluding a ref
  clone: define shallow clone boundary with --shallow-exclude
  fetch: define shallow boundary with --shallow-exclude
  upload-pack: support define shallow boundary by excluding revisions
  refs: add expand_ref()
  t5500, t5539: tests for shallow depth since a specific date
  clone: define shallow clone boundary based on time with --shallow-since
  fetch: define shallow boundary with --shallow-since
  upload-pack: add deepen-since to cut shallow repos based on time
  shallow.c: implement a generic shallow boundary finder based on rev-list
  fetch-pack: use a separate flag for fetch in deepening mode
  fetch-pack.c: mark strings for translating
  fetch-pack: use a common function for verbose printing
  fetch-pack: use skip_prefix() instead of starts_with()
  upload-pack: move rev-list code out of check_non_tip()
  upload-pack: make check_non_tip() clean things up on error
  upload-pack: tighten number parsing at "deepen" lines
  ...

2016-07-06  Merge branch 'jk/common-main-2.8' into jk/common-main  (Junio C Hamano, 1 file, -4/+1)

* jk/common-main-2.8:
  mingw: declare main()'s argv as const
  common-main: call git_setup_gettext()
  common-main: call restore_sigpipe_to_default()
  common-main: call sanitize_stdfds()
  common-main: call git_extract_argv0_path()
  add an extra level of indirection to main()

2016-07-01  common-main: call git_setup_gettext()  (Jeff King, 1 file, -2/+0)

This should be part of every program, as otherwise users do not get translated error messages. However, some external commands forgot to do so (e.g., git-credential-store). This fixes them, and eliminates the repeated code in programs that did remember to use it.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-07-01  common-main: call git_extract_argv0_path()  (Jeff King, 1 file, -1/+0)

Every program which links against libgit.a must call this function, or risk hitting an assert() in system_path() that checks whether we have configured argv0_path (though only when RUNTIME_PREFIX is defined, so essentially only on Windows).

Looking at the diff, you can see that putting it into the common main() saves us having to do it individually in each of the external commands. But what you can't see are the cases where we _should_ have been doing so, but weren't (e.g., git-credential-store, and all of the t/helper test programs).

This has been an accident-waiting-to-happen for a long time, but wasn't triggered until recently because it involves one of those programs actually calling system_path(). That happened with git-credential-store in v2.8.0 with ae5f677 (lazily load core.sharedrepository, 2016-03-11). The program:

  - takes a lock file, which...
  - opens a tempfile, which...
  - calls adjust_shared_perm to fix permissions, which...
  - lazy-loads the config (as of ae5f677), which...
  - calls system_path() to find the location of /etc/gitconfig

On systems with RUNTIME_PREFIX, this means credential-store reliably hits that assert() and cannot be used. We never noticed in the test suite, because we set GIT_CONFIG_NOSYSTEM there, which skips the system_path() lookup entirely. But if we were to tweak git_config() to find /etc/gitconfig even when we aren't going to open it, then the test suite shows multiple failures (for credential-store, and for some other test helpers). I didn't include that tweak here because it's way too specific to this particular call to be worth carrying around what is essentially dead code.

The implementation is fairly straightforward, with one exception: there is exactly one caller (git.c) that actually cares about the result of the function, and not the side-effect of setting up argv0_path. We can accommodate that by simply replacing the value of argv[0] in the array we hand down to cmd_main().

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-07-01  add an extra level of indirection to main()  (Jeff King, 1 file, -1/+1)

There are certain startup tasks that we expect every git process to do. In some cases this is just to improve the quality of the program (e.g., setting up gettext()). In others it is a requirement for using certain functions in libgit.a (e.g., system_path() expects that you have called git_extract_argv0_path()).

Most commands are builtins and are covered by the git.c version of main(). However, there are still a few external commands that use their own main(). Each of these has to remember to include the correct startup sequence, and we are not always consistent.

Rather than just fix the inconsistencies, let's make this harder to get wrong by providing a common main() that can run this standard startup.

We basically have two options to do this:

  - the compat/mingw.h file already does something like this by adding a #define that replaces the definition of main with a wrapper that calls mingw_startup(). The upside is that the code in each program doesn't need to be changed at all; it's rewritten on the fly by the preprocessor. The downside is that it may make debugging of the startup sequence a bit more confusing, as the preprocessor is quietly inserting new code.

  - the builtin functions are all of the form cmd_foo(), and git.c's main() calls them. This is much more explicit, which may make things more obvious to somebody reading the code. It's also more flexible (because of course we have to figure out _which_ cmd_foo() to call). The downside is that each of the builtins must define cmd_foo(), instead of just main().

This patch chooses the latter option, preferring the more explicit approach, even though it is more invasive. We introduce a new file common-main.c, with the "real" main. It expects to call cmd_main() from whatever other objects it is linked against.

We link common-main.o against anything that links against libgit.a, since we know that such programs will need to do this setup. Note that common-main.o can't actually go inside libgit.a, as the linker would not pick up its main() function automatically (it has no callers).

The rest of the patch is just adjusting all of the various external programs (mostly in t/helper) to use cmd_main(). I've provided a global declaration for cmd_main(), which means that all of the programs also need to match its signature. In particular, many functions need to switch to "const char **" instead of "char **" for argv. This effect ripples out to a few other variables and functions, as well. This makes the patch even more invasive, but the end result is much better. We should be treating argv strings as const anyway, and now all programs conform to the same signature (which also matches the way builtins are defined).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

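The resulting shape, roughly; this is a simplified sketch of the pattern, not the verbatim common-main.c:

    #include "git-compat-util.h"
    #include "exec_cmd.h"
    #include "gettext.h"

    /* common-main.c: the one real main(), linked into every program that
     * links against libgit.a */
    int main(int argc, const char **argv)
    {
            git_setup_gettext();
            argv[0] = git_extract_argv0_path(argv[0]);
            /* ... other shared setup: SIGPIPE, stdio fd sanitizing ... */
            return cmd_main(argc, argv);
    }

    /* remote-curl.c, git.c, t/helper/*: each defines cmd_main() instead of
     * its own main() */
    int cmd_main(int argc, const char **argv)
    {
            /* program-specific entry point */
            return 0;
    }
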
2016-06-13  fetch, upload-pack: --deepen=N extends shallow boundary by N commits  (Nguyễn Thái Ngọc Duy, 1 file, -1/+13)

In git-fetch, the --depth argument is always relative to the latest remote refs. This makes it a bit difficult to cover this use case, where the user wants to make the shallow history, say 3 levels deeper. It would work if remote refs have not moved yet, but nobody can guarantee that, especially when that use case is performed a couple months after the last clone or "git fetch --depth". Also, modifying shallow boundary using --depth does not work well with clones created by --since or --not.

This patch fixes that. A new argument --deepen=<N> will add <N> more (*) parent commits to the current history regardless of where remote refs are.

Have/Want negotiation is still respected. So if remote refs move, the server will send two chunks: one between "have" and "want" and another to extend shallow history. In theory, the client could send no "want"s in order to get the second chunk only. But the protocol does not allow that. Either you send no want lines, which means ls-remote; or you have to send at least one want line that carries deep-relative to the server.

The main work was done by Dongcan Jiang. I fixed it up here and there. And of course all the bugs belong to me.

(*) We could even support --deepen=<N> where <N> is negative. In that case we can cut some history from the shallow clone. This operation (and --depth=<shorter depth>) does not require interaction with the remote side (and is more complicated to implement as a result).

Helped-by: Duy Nguyen <pclouds@gmail.com>
Helped-by: Eric Sunshine <sunshine@sunshineco.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Dongcan Jiang <dongcan.jiang@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-06-13  fetch: define shallow boundary with --shallow-exclude  (Nguyễn Thái Ngọc Duy, 1 file, -0/+9)

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-06-13  fetch: define shallow boundary with --shallow-since  (Nguyễn Thái Ngọc Duy, 1 file, -2/+9)

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-06-13  remote-curl.c: convert fetch_git() to use argv_array  (Nguyễn Thái Ngọc Duy, 1 file, -28/+18)

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-04-27  http: support sending custom HTTP headers  (Johannes Schindelin, 1 file, -2/+2)

We introduce a way to send custom HTTP headers with all requests.

This allows us, for example, to send an extra token from build agents for temporary access to private repositories. (This is the use case that triggered this patch.)

This feature can be used like this:

    git -c http.extraheader='Secret: sssh!' fetch $URL $REF

Note that `curl_easy_setopt(..., CURLOPT_HTTPHEADER, ...)` takes only a single list, overriding any previous call. This means we have to collect _all_ of the headers we want to use into a single list, and feed it to cURL in one shot. Since we already unconditionally set a "pragma" header when initializing the curl handles, we can add our new headers to that list.

For callers which override the default header list (like probe_rpc), we provide `http_copy_default_headers()` so they can do the same trick.

Big thanks to Jeff King and Junio Hamano for their outstanding help and patient reviews.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

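A rough sketch of the single-list requirement described above, in plain libcurl terms; the header values are examples and this is not http.c's actual code:

    #include <curl/curl.h>

    /* cURL keeps exactly one header list per handle, so the default headers
     * and any http.extraheader values must be collected into a single list
     * before it is handed to CURLOPT_HTTPHEADER. */
    static struct curl_slist *build_header_list(const char *extra_header)
    {
            struct curl_slist *headers = NULL;

            headers = curl_slist_append(headers, "Pragma: no-cache");
            if (extra_header)
                    headers = curl_slist_append(headers, extra_header);
            return headers;
    }

    /* usage:
     *   struct curl_slist *headers = build_header_list("Secret: sssh!");
     *   curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
     *   ... perform request(s) ...
     *   curl_slist_free_all(headers);
     */
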
2016-02-26  Merge branch 'jk/tighten-alloc'  (Junio C Hamano, 1 file, -13/+13)

Update various codepaths to avoid manually-counted malloc().

* jk/tighten-alloc: (22 commits)
  ewah: convert to REALLOC_ARRAY, etc
  convert ewah/bitmap code to use xmalloc
  diff_populate_gitlink: use a strbuf
  transport_anonymize_url: use xstrfmt
  git-compat-util: drop mempcpy compat code
  sequencer: simplify memory allocation of get_message
  test-path-utils: fix normalize_path_copy output buffer size
  fetch-pack: simplify add_sought_entry
  fast-import: simplify allocation in start_packfile
  write_untracked_extension: use FLEX_ALLOC helper
  prepare_{git,shell}_cmd: use argv_array
  use st_add and st_mult for allocation size computation
  convert trivial cases to FLEX_ARRAY macros
  use xmallocz to avoid size arithmetic
  convert trivial cases to ALLOC_ARRAY
  convert manual allocations to argv_array
  argv-array: add detach function
  add helpers for allocating flex-array structs
  harden REALLOC_ARRAY and xcalloc against size_t overflow
  tree-diff: catch integer overflow in combine_diff_path allocation
  ...

2016-02-24  Merge branch 'sp/remote-curl-ssl-strerror'  (Junio C Hamano, 1 file, -2/+14)

Help those who debug http(s) part of the system.

* sp/remote-curl-ssl-strerror:
  remote-curl: include curl_errorstr on SSL setup failures

2016-02-24  Merge branch 'ew/force-ipv4'  (Junio C Hamano, 1 file, -0/+13)

"git fetch" and friends that make network connections can now be told to only use ipv4 (or ipv6).

* ew/force-ipv4:
  connect & http: support -4 and -6 switches for remote operations

2016-02-22  convert trivial cases to ALLOC_ARRAY  (Jeff King, 1 file, -1/+2)

Each of these cases can be converted to use ALLOC_ARRAY or REALLOC_ARRAY, which has two advantages:

  1. It automatically checks the array-size multiplication for overflow.

  2. It always uses sizeof(*array) for the element-size, so that it can never go out of sync with the declared type of the array.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

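The conversion pattern looks roughly like this; the variable names are invented for illustration, not a specific call site from the patch:

    #include "git-compat-util.h"

    static const char **alloc_argv(size_t nr)
    {
            const char **argv;

            /* before: argv = xmalloc((nr + 1) * sizeof(*argv));
             *         the multiplication can silently overflow */
            ALLOC_ARRAY(argv, st_add(nr, 1));  /* checked arithmetic; element
                                                  size taken from *argv */
            return argv;
    }
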
2016-02-22  convert manual allocations to argv_array  (Jeff King, 1 file, -12/+11)

There are many manual argv allocations that predate the argv_array API. Switching to that API brings a few advantages:

  1. We no longer have to manually compute the correct final array size (so it's one less thing we can screw up).

  2. In many cases we had to make a separate pass to count, then allocate, then fill in the array. Now we can do it in one pass, making the code shorter and easier to follow.

  3. argv_array handles memory ownership for us, making it more obvious when things should be free()d and when not.

Most of these cases are pretty straightforward. In some, we switch from "run_command_v" to "run_command" which lets us directly use the argv_array embedded in "struct child_process".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

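A sketch of the kind of conversion involved, using the argv_array embedded in struct child_process; the command and options here are illustrative, not a specific converted call site:

    #include "cache.h"
    #include "argv-array.h"
    #include "run-command.h"

    /* No hand-counted array, no separate counting pass, no manual free. */
    static void run_fetch_pack(int depth)
    {
            struct child_process child = CHILD_PROCESS_INIT;

            child.git_cmd = 1;
            argv_array_push(&child.args, "fetch-pack");
            argv_array_push(&child.args, "--stateless-rpc");
            argv_array_pushf(&child.args, "--depth=%d", depth);
            if (run_command(&child))
                    die("fetch-pack failed");
    }
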
2016-02-15  remote-curl: include curl_errorstr on SSL setup failures  (Shawn Pearce, 1 file, -2/+14)

For curl error 35 (CURLE_SSL_CONNECT_ERROR) users need the additional text stored in CURLOPT_ERRORBUFFER to debug why the connection did not start. This is curl_errorstr inside of http.c, so include that in the message if it is non-empty.

Sometimes HTTP response codes aren't yet available, such as when the SSL setup fails. Don't include HTTP 0 in the message.

Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

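A minimal sketch of the idea in plain libcurl terms, not http.c's actual error plumbing:

    #include <stdio.h>
    #include <curl/curl.h>

    /* Keep curl's error text so an SSL setup failure, which happens before
     * any HTTP status exists, can still be explained to the user. */
    static void perform_or_report(CURL *curl, const char *url)
    {
            char errbuf[CURL_ERROR_SIZE] = "";
            CURLcode res;

            curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, errbuf);
            res = curl_easy_perform(curl);
            if (res == CURLE_SSL_CONNECT_ERROR && *errbuf)
                    fprintf(stderr, "unable to access '%s': %s\n", url, errbuf);
    }
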
2016-02-12  connect & http: support -4 and -6 switches for remote operations  (Eric Wong, 1 file, -0/+13)

Sometimes it is necessary to force IPv4-only or IPv6-only operation on networks where name lookups may return a non-routable address and stall remote operations.

The ssh(1) command has equivalent switches which we may pass when we run them. There may be old ssh(1) implementations out there which do not support these switches; they should report the appropriate error in that case.

rsync support is untouched for now since it is deprecated and scheduled to be removed.

Signed-off-by: Eric Wong <normalperson@yhbt.net>
Reviewed-by: Torsten Bögershausen <tboegi@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

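This is not git's exact code, but the usual way such a switch takes effect is by pinning the address family before name resolution, roughly:

    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    /* family is AF_INET for -4, AF_INET6 for -6, or AF_UNSPEC by default */
    static int resolve_host(const char *host, const char *port, int family,
                            struct addrinfo **out)
    {
            struct addrinfo hints;

            memset(&hints, 0, sizeof(hints));
            hints.ai_socktype = SOCK_STREAM;
            hints.ai_family = family;
            return getaddrinfo(host, port, &hints, out);
    }
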
2016-01-15  strbuf: introduce strbuf_getline_{lf,nul}()  (Junio C Hamano, 1 file, -3/+3)

The strbuf_getline() interface allows a byte other than LF or NUL as the line terminator, but this is only because I wrote these codepaths anticipating that there might be a value other than NUL and LF that could be useful when I introduced line_termination a long time ago. No useful caller that uses any other value has emerged.

By now, it is clear that the interface is overly broad without a good reason. Many codepaths have hardcoded preference to read either LF terminated or NUL terminated records from their input, and then call strbuf_getline() with LF or NUL as the third parameter.

This step introduces two thin wrappers around strbuf_getline(), namely, strbuf_getline_lf() and strbuf_getline_nul(), and mechanically rewrites these call sites to call either one of them. The changes contained in this patch are:

  * introduction of these two functions in strbuf.[ch]

  * mechanical conversion of all callers to strbuf_getline() with either '\n' or '\0' as the third parameter to instead call the respective thin wrapper.

After this step, output from "git grep 'strbuf_getline('" would become a lot smaller. An interim goal of this series is to make this an empty set, so that we can have strbuf_getline_crlf() take over the shorter name strbuf_getline().

Signed-off-by: Junio C Hamano <gitster@pobox.com>

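A small usage sketch of the new wrappers inside git's tree; the reading loop itself is illustrative:

    #include "cache.h"

    /* strbuf_getline_nul() is the analogue for NUL-terminated records. */
    static void read_lf_records(FILE *in)
    {
            struct strbuf line = STRBUF_INIT;

            while (strbuf_getline_lf(&line, in) != EOF) {
                    /* line.buf holds one record, terminator stripped */
            }
            strbuf_release(&line);
    }
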
2015-11-20  parse_fetch: convert to use struct object_id  (brian m. carlson, 1 file, -6/+6)

Convert the parse_fetch function to use struct object_id. Remove the strlen check as get_oid_hex will fail safely on receiving a too-short NUL-terminated string.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>

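A small illustration of why the separate strlen check becomes redundant; the function name and error text are invented, not the actual parse_fetch code:

    #include "cache.h"

    /* get_oid_hex() returns non-zero as soon as it hits a non-hex byte,
     * including the NUL of a too-short string, so no length check is
     * needed before calling it. */
    static void parse_oid_arg(const char *arg)
    {
            struct object_id oid;

            if (get_oid_hex(arg, &oid))
                    die("expected an object name, got '%s'", arg);
            /* ... use oid ... */
    }
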
2015-11-20  Convert struct ref to use object_id.  (brian m. carlson, 1 file, -5/+5)

Use struct object_id in three fields in struct ref and convert all the necessary places that use it.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Jeff King <peff@peff.net>