path: root/http.c
2015-09-28  Sync with v2.5.4  (Junio C Hamano, 1 file, -0/+18)
2015-09-28  Sync with 2.4.10  (Junio C Hamano, 1 file, -0/+18)
2015-09-28  Sync with 2.3.10  (Junio C Hamano, 1 file, -0/+18)
2015-09-25  http: limit redirection depth  (Blake Burkhart, 1 file, -0/+1)
By default, libcurl will follow circular http redirects forever. Let's put a cap on this so that somebody who can trigger an automated fetch of an arbitrary repository (e.g., for CI) cannot convince git to loop infinitely. The value chosen is 20, which is the same default that Firefox uses. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
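A minimal sketch of the kind of change described, assuming plain libcurl calls (this is only the option involved, not git's exact code):

    #include <curl/curl.h>

    /* Cap redirect-following so a circular redirect cannot loop forever. */
    static void set_redirect_limit(CURL *curl)
    {
            curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
            /* 20 matches the Firefox default mentioned above */
            curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 20L);
    }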
2015-09-25  http: limit redirection to protocol-whitelist  (Blake Burkhart, 1 file, -0/+17)
Previously, libcurl would follow redirection to any protocol for which it was compiled with support. This is desirable to allow redirection from HTTP to HTTPS. However, it would even happily follow a redirect from HTTP to SFTP, a protocol that git does not otherwise support at all. Furthermore, git's new protocol whitelisting could be bypassed by following a redirect within the remote helper, as it was only enforced at transport selection time.

This patch limits redirects within libcurl to HTTP, HTTPS, FTP and FTPS. If there is a protocol whitelist present, this list is limited to those also allowed by the whitelist. As redirection happens from within libcurl, it is impossible for an HTTP redirect to end up at a protocol implemented by another remote helper.

When the curl version git was compiled with is too old to support restrictions on protocol redirection, we warn the user if GIT_ALLOW_PROTOCOL restrictions were requested. This is a little inaccurate, as even without that variable in the environment we would still restrict SFTP, etc., and we do not warn in that case. But anything else means we would literally warn every time git accesses an http remote.

This commit includes a test, but it is not as robust as we would hope. It redirects an http request to ftp and checks that curl complained about the protocol, which means that we are relying on curl's specific error message to know what happened. Ideally we would redirect to a working ftp server and confirm that we can clone without protocol restrictions, and not with them. But we do not have a portable way of providing an ftp server, nor any other protocol that curl supports (https is the closest, but we would have to deal with certificates).

[jk: added test and version warning]

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
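For illustration, a sketch of how such a restriction can be expressed against libcurl; the function is hypothetical, and 0x071304 is simply 7.19.4 (the release that introduced CURLOPT_REDIR_PROTOCOLS) in LIBCURL_VERSION_NUM form:

    #include <curl/curl.h>

    static void restrict_redirect_protocols(CURL *curl)
    {
    #if LIBCURL_VERSION_NUM >= 0x071304
            /* redirects may only land on protocols git handles via curl */
            curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                             (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS |
                                    CURLPROTO_FTP | CURLPROTO_FTPS));
    #else
            (void)curl; /* curl too old to restrict; the caller may warn instead */
    #endif
    }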
2015-08-26  Merge branch 'ep/http-configure-ssl-version'  (Junio C Hamano, 1 file, -1/+32)
A new configuration variable http.sslVersion can be used to specify what specific version of SSL/TLS to use to make a connection. * ep/http-configure-ssl-version: http: add support for specifying the SSL version
2015-08-19  Merge branch 'jc/finalize-temp-file'  (Junio C Hamano, 1 file, -5/+5)
Long overdue micro clean-up. * jc/finalize-temp-file: sha1_file.c: rename move_temp_to_file() to finalize_object_file()
2015-08-17  http: add support for specifying the SSL version  (Elia Pinto, 1 file, -1/+32)
Teach git about a new option, "http.sslVersion", which permits one to specify the SSL version to use when negotiating SSL connections. The setting can be overridden by the GIT_SSL_VERSION environment variable. Signed-off-by: Elia Pinto <gitter.spiros@gmail.com> Helped-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
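A rough sketch of the mapping such an option implies, assuming a small lookup table (the table contents and function name are illustrative, not git's actual code):

    #include <string.h>
    #include <curl/curl.h>

    static const struct {
            const char *name;
            long curl_value;
    } ssl_versions[] = {
            { "sslv2", CURL_SSLVERSION_SSLv2 },
            { "sslv3", CURL_SSLVERSION_SSLv3 },
            { "tlsv1", CURL_SSLVERSION_TLSv1 },
    };

    /* Translate an http.sslVersion-style string into CURLOPT_SSLVERSION. */
    static int set_ssl_version(CURL *curl, const char *name)
    {
            size_t i;

            for (i = 0; i < sizeof(ssl_versions) / sizeof(ssl_versions[0]); i++) {
                    if (!strcmp(name, ssl_versions[i].name)) {
                            curl_easy_setopt(curl, CURLOPT_SSLVERSION,
                                             ssl_versions[i].curl_value);
                            return 0;
                    }
            }
            return -1; /* unknown version string; caller reports the error */
    }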
2015-08-10  sha1_file.c: rename move_temp_to_file() to finalize_object_file()  (Junio C Hamano, 1 file, -5/+5)
Since 5a688fe4 ("core.sharedrepository = 0mode" should set, not loosen, 2009-03-25), we kept reminding ourselves:

    NEEDSWORK: this should be renamed to finalize_temp_file() as "moving" is only a part of what it does, when no patch between master to pu changes the call sites of this function.

without doing anything about it. Let's do so.

The purpose of this function was not to move but to finalize. The primary implementation detail of finalizing was to link the temporary file to its final name and then to unlink, which wasn't even "moving". The alternative implementation did "move" by calling rename(2), which is a fun tangent.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-07-13  Merge branch 'et/http-proxyauth'  (Junio C Hamano, 1 file, -2/+2)
We used to ask libCURL to use the most secure authentication method available when talking to an HTTP proxy only when we were told to talk to one via configuration variables. We now ask libCURL to always use the most secure authentication method, because the user can tell libCURL to use an HTTP proxy via an environment variable without using configuration variables. * et/http-proxyauth: http: always use any proxy auth method available
2015-06-29  http: always use any proxy auth method available  (Enrique Tobis, 1 file, -2/+2)
We set CURLOPT_PROXYAUTH to use the most secure authentication method available only when the user has set configuration variables to specify a proxy. However, libcurl also supports specifying a proxy through environment variables. In that case libcurl defaults to only using the Basic proxy authentication method, because we do not use CURLOPT_PROXYAUTH. Set CURLOPT_PROXYAUTH to always use the most secure authentication method available, even when there is no git configuration telling us to use a proxy. This allows the user to use environment variables to configure a proxy that requires an authentication method different from Basic. Signed-off-by: Enrique A. Tobis <etobis@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
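The gist of the change, sketched (names are illustrative): the CURLOPT_PROXYAUTH call moves out of the configuration-only branch so it also applies when the proxy comes from the environment:

    #include <curl/curl.h>

    static void setup_proxy(CURL *curl, const char *proxy_from_config)
    {
            /* only set CURLOPT_PROXY when git configuration named a proxy;
               curl picks up http_proxy-style environment variables on its own */
            if (proxy_from_config)
                    curl_easy_setopt(curl, CURLOPT_PROXY, proxy_from_config);

            /* previously inside the "if" above; now unconditional */
            curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
    }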
2015-05-22  Merge branch 'ls/http-ssl-cipher-list'  (Junio C Hamano, 1 file, -0/+10)
Introduce the http.<url>.SSLCipherList configuration variable to tweak the list of cipher suites to be used with libcURL when talking with https:// sites. * ls/http-ssl-cipher-list: http: add support for specifying an SSL cipher list
2015-05-08  http: add support for specifying an SSL cipher list  (Lars Kellogg-Stedman, 1 file, -0/+10)
Teach git about a new option, "http.sslCipherList", which permits one to specify a list of ciphers to use when negotiating SSL connections. The setting can be overridden by the GIT_SSL_CIPHER_LIST environment variable. Signed-off-by: Lars Kellogg-Stedman <lars@redhat.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
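A small sketch of the precedence described (environment variable over configuration); the function name is made up for illustration:

    #include <stdlib.h>
    #include <curl/curl.h>

    static void set_cipher_list(CURL *curl, const char *configured)
    {
            const char *list = getenv("GIT_SSL_CIPHER_LIST");

            if (!list || !*list)
                    list = configured; /* value of http.sslCipherList, if any */
            if (list && *list)
                    curl_easy_setopt(curl, CURLOPT_SSL_CIPHER_LIST, list);
    }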
2015-03-24  http: release the memory of a http pack request as well  (Stefan Beller, 1 file, -0/+1)
The cleanup function is used in 4 places now and it's always safe to free up the memory as well. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-06  Merge branch 'ye/http-accept-language'  (Junio C Hamano, 1 file, -26/+1)
Compilation fix for a recent topic in 'master'. * ye/http-accept-language: gettext.c: move get_preferred_languages() from http.c
2015-02-26  gettext.c: move get_preferred_languages() from http.c  (Jeff King, 1 file, -25/+1)
Calling setlocale(LC_MESSAGES, ...) directly from http.c, without including <locale.h>, was causing compilation warnings. Move the helper function to gettext.c that already includes the header and where locale-related issues are handled. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-02-25  Merge branch 'tc/missing-http-proxyauth'  (Junio C Hamano, 1 file, -0/+2)
We did not check the curl library version before using the CURLOPT_PROXYAUTH feature, which may not exist. * tc/missing-http-proxyauth: http: support curl < 7.10.7
2015-02-24  Merge branch 'jk/dumb-http-idx-fetch-fix' into maint  (Junio C Hamano, 1 file, -1/+1)
A broken pack .idx file in the receiving repository prevented the dumb http transport from fetching a good copy of it from the other side. * jk/dumb-http-idx-fetch-fix: dumb-http: do not pass NULL path to parse_pack_index
2015-02-18  Merge branch 'ye/http-accept-language'  (Junio C Hamano, 1 file, -0/+147)
Using environment variable LANGUAGE and friends on the client side, HTTP-based transports now send Accept-Language when making requests. * ye/http-accept-language: http: add Accept-Language header if possible
2015-02-17  Merge branch 'jk/dumb-http-idx-fetch-fix'  (Junio C Hamano, 1 file, -1/+1)
A broken pack .idx file in the receiving repository prevented the dumb http transport from fetching a good copy of it from the other side. * jk/dumb-http-idx-fetch-fix: dumb-http: do not pass NULL path to parse_pack_index
2015-02-11  Merge branch 'jc/unused-symbols'  (Junio C Hamano, 1 file, -32/+32)
Mark file-local symbols as "static", and drop functions that nobody uses.

* jc/unused-symbols:
  shallow.c: make check_shallow_file_for_update() static
  remote.c: make clear_cas_option() static
  urlmatch.c: make match_urls() static
  revision.c: make save_parents() and free_saved_parents() static
  line-log.c: make line_log_data_init() static
  pack-bitmap.c: make pack_bitmap_filename() static
  prompt.c: remove git_getpass() nobody uses
  http.c: make finish_active_slot() and handle_curl_result() static
2015-02-03  http: support curl < 7.10.7  (Tom G. Christensen, 1 file, -0/+2)
Commit dd61399 introduced support for http proxies that require authentication, but it relies on the CURLOPT_PROXYAUTH option, which was added in curl 7.10.7. This change makes sure proxy authentication is only enabled if libcurl can support it. Signed-off-by: Tom G. Christensen <tgc@statsbiblioteket.dk> Signed-off-by: Junio C Hamano <gitster@pobox.com>
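The shape of such a guard, sketched (the function is hypothetical); 0x070a07 is 7.10.7 in LIBCURL_VERSION_NUM encoding:

    #include <curl/curl.h>

    static void maybe_enable_proxy_auth(CURL *curl)
    {
    #if LIBCURL_VERSION_NUM >= 0x070a07
            curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
    #else
            (void)curl; /* libcurl predates CURLOPT_PROXYAUTH; nothing to do */
    #endif
    }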
2015-01-28  http: add Accept-Language header if possible  (Yi EungJun, 1 file, -0/+147)
Add an Accept-Language header which indicates the user's preferred languages defined by $LANGUAGE, $LC_ALL, $LC_MESSAGES and $LANG.

Examples:
  LANGUAGE=                    -> ""
  LANGUAGE=ko:en               -> "Accept-Language: ko, en;q=0.9, *;q=0.1"
  LANGUAGE=ko LANG=en_US.UTF-8 -> "Accept-Language: ko, *;q=0.1"
  LANGUAGE=   LANG=en_US.UTF-8 -> "Accept-Language: en-US, *;q=0.1"

This gives git servers a chance to display remote error messages in the user's preferred language.

Limit the number of languages to 1,000 because the q-value must not be smaller than 0.001, and limit the length of the Accept-Language header to 4,000 bytes for some HTTP servers which cannot accept such a long header.

Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
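To illustrate the q-value scheme, a simplified formatter (hypothetical names; git's real code additionally derives the list from $LANGUAGE and friends and enforces the 1,000-language / 4,000-byte limits described above):

    #include <stdio.h>

    /* langs holds preference-ordered tags, e.g. { "ko", "en" }. */
    static void fmt_accept_language(char *buf, size_t len,
                                    const char **langs, int n)
    {
            int i, off = snprintf(buf, len, "Accept-Language: ");

            for (i = 0; i < n && off > 0 && (size_t)off < len; i++) {
                    int q = (10 - i > 1) ? 10 - i : 1; /* 1.0, 0.9, ..., floor 0.1 */

                    if (i == 0)
                            off += snprintf(buf + off, len - off, "%s", langs[i]);
                    else
                            off += snprintf(buf + off, len - off, ", %s;q=0.%d",
                                            langs[i], q);
            }
            if (n && off > 0 && (size_t)off < len)
                    snprintf(buf + off, len - off, ", *;q=0.1");
    }

With { "ko", "en" } this yields the "ko, en;q=0.9, *;q=0.1" value shown in the examples.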
2015-01-27  dumb-http: do not pass NULL path to parse_pack_index  (Jeff King, 1 file, -1/+1)
Once upon a time, dumb http always fetched .idx files directly into their final location, and then checked their validity with parse_pack_index. This was refactored in commit 750ef42 (http-fetch: Use temporary files for pack-*.idx until verified, 2010-04-19), which uses the following logic:

  1. If we have the idx already in place, see if it's valid (using parse_pack_index). If so, use it.
  2. Otherwise, fetch the .idx to a tempfile, check that, and if so move it into place.
  3. Either way, fetch the pack itself if necessary.

However, it got step 1 wrong. We pass a NULL path parameter to parse_pack_index, so an existing .idx file always looks broken. Worse, we do not treat this broken .idx as an opportunity to re-fetch, but instead return an error, ignoring the pack entirely. This can lead to a dumb-http fetch failing to retrieve the necessary objects.

This doesn't come up much in practice, because it must be a packfile that we found out about (and whose .idx we stored) during an earlier dumb-http fetch, but whose packfile we _didn't_ fetch. I.e., we did a partial clone of a repository, didn't need some packfiles, and now a followup fetch needs them.

Discovery and tests by Charles Bailey <charles@hashpling.org>.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-15  http.c: make finish_active_slot() and handle_curl_result() static  (Junio C Hamano, 1 file, -32/+32)
They used to be used directly by remote-curl.c for the smart-http protocol. But they got wrapped by run_one_slot() in beed336 (http: never use curl_easy_perform, 2014-02-18). Any future users are expected to follow that route. Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-01-07  remote-curl: fall back to Basic auth if Negotiate fails  (brian m. carlson, 1 file, -0/+10)
Apache servers using mod_auth_kerb can be configured to allow the user to authenticate either using Negotiate (using the Kerberos ticket) or Basic authentication (using the Kerberos password). Often, one will want to use Negotiate authentication if it is available, but fall back to Basic authentication if the ticket is missing or expired. However, libcurl will try very hard to use something other than Basic auth, even over HTTPS. If Basic and something else are offered, libcurl will never attempt to use Basic, even if the other option fails. Teach the HTTP client code to stop trying authentication mechanisms that don't use a password (currently Negotiate) after the first failure, since if they failed the first time, they will never succeed. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
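A sketch of the idea (variable and function names are illustrative, not git's code): start with every mechanism allowed, and once a password-less attempt has failed, mask it out so curl is permitted to try Basic:

    #include <curl/curl.h>

    static long allowed_auth = (long)CURLAUTH_ANY;

    static void negotiate_attempt_failed(CURL *curl)
    {
            /* a failed Negotiate round will not succeed on retry; drop it */
            allowed_auth &= ~(long)CURLAUTH_GSSNEGOTIATE;
            curl_easy_setopt(curl, CURLOPT_HTTPAUTH, allowed_auth);
    }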
2014-10-14  Merge branch 'da/include-compat-util-first-in-c'  (Junio C Hamano, 1 file, -0/+1)
Code clean-up. * da/include-compat-util-first-in-c: cleanups: ensure that git-compat-util.h is included first
2014-09-15  cleanups: ensure that git-compat-util.h is included first  (David Aguilar, 1 file, -0/+1)
CodingGuidelines states that the first #include in C files should be git-compat-util.h or another header file that includes it, such as cache.h or builtin.h. Signed-off-by: David Aguilar <davvid@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-11  Merge branch 'br/http-init-fix'  (Junio C Hamano, 1 file, -5/+7)
Code clean-up.

* br/http-init-fix:
  http: style fixes for curl_multi_init error check
  http.c: die if curl_*_init fails
2014-08-21  http: style fixes for curl_multi_init error check  (Jeff King, 1 file, -4/+2)
Unless there is a good reason, we should use die() rather than fprintf/exit. We can also shorten the message to match other curl init failures (and match our usual lowercase no-full-stop style). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20  run-command: introduce CHILD_PROCESS_INIT  (René Scharfe, 1 file, -2/+1)
Most struct child_process variables are cleared with memset right after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That is shorter, does not require a function call, and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT, etc.). Helped-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
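A before/after illustration of the pattern; the struct shown is a trimmed-down stand-in for git's struct child_process, and the initializer is correspondingly simplified:

    #include <string.h>

    struct child_process_demo {
            const char **argv;
            int in, out, err;
    };
    #define CHILD_PROCESS_DEMO_INIT { NULL, 0, 0, 0 }

    static void demo(void)
    {
            /* old style: declare, then wipe with a function call */
            struct child_process_demo old;
            memset(&old, 0, sizeof(old));

            /* new style: statically initialized, like STRBUF_INIT */
            struct child_process_demo cp = CHILD_PROCESS_DEMO_INIT;

            (void)old;
            (void)cp;
    }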
2014-08-18  http.c: die if curl_*_init fails  (Bernhard Reiter, 1 file, -1/+5)
Signed-off-by: Bernhard Reiter <ockham@raz.or.at> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-07-09  Merge branch 'jk/skip-prefix'  (Junio C Hamano, 1 file, -2/+1)
* jk/skip-prefix:
  http-push: refactor parsing of remote object names
  imap-send: use skip_prefix instead of using magic numbers
  use skip_prefix to avoid repeated calculations
  git: avoid magic number with skip_prefix
  fetch-pack: refactor parsing in get_ack
  fast-import: refactor parsing of spaces
  stat_opt: check extra strlen call
  daemon: use skip_prefix to avoid magic numbers
  fast-import: use skip_prefix for parsing input
  use skip_prefix to avoid repeating strings
  use skip_prefix to avoid magic numbers
  transport-helper: avoid reading past end-of-string
  fast-import: fix read of uninitialized argv memory
  apply: use skip_prefix instead of raw addition
  refactor skip_prefix to return a boolean
  avoid using skip_prefix as a boolean
  daemon: mark some strings as const
  parse_diff_color_slot: drop ofs parameter
2014-06-20  use skip_prefix to avoid repeated calculations  (Jeff King, 1 file, -2/+1)
In some cases, we use starts_with to check for a prefix, and then use an already-calculated prefix length to advance a pointer past the prefix. There are no magic numbers or duplicated strings here, but we can still make the code simpler and more obvious by using skip_prefix. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
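The flavor of the cleanup, sketched with a hypothetical caller (as refactored in this topic, git's skip_prefix returns a boolean and hands the advanced pointer back through an out-parameter):

    #include <string.h>

    static int skip_prefix(const char *str, const char *prefix, const char **out)
    {
            size_t len = strlen(prefix);

            if (strncmp(str, prefix, len))
                    return 0;
            *out = str + len;
            return 1;
    }

    static const char *header_value(const char *line)
    {
            const char *p;

            /* before: if (starts_with(line, "Content-Type: ")) p = line + 14; */
            if (skip_prefix(line, "Content-Type: ", &p))
                    return p;
            return NULL;
    }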
2014-06-17  http: fix charset detection of extract_content_type()  (Yi EungJun, 1 file, -2/+2)
extract_content_type() could not extract a charset parameter if the parameter was not the first one and there was whitespace and a following semicolon just before it. For example:

  text/plain; format=fixed ;charset=utf-8

It also could not correctly handle some other cases, such as:

  text/plain; charset=utf-8; format=fixed
  text/plain; some-param="a long value with ;semicolons;"; charset=utf-8

Thanks-to: Jeff King <peff@peff.net>
Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27  http: default text charset to iso-8859-1  (Jeff King, 1 file, -0/+3)
This is specified by RFC 2616 as the default if no "charset" parameter is given. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27  http: optionally extract charset parameter from content-type  (Jeff King, 1 file, -4/+50)
Since the previous commit, we now give a sanitized, shortened version of the content-type header to any callers who ask for it. This patch adds back a way for them to cleanly access specific parameters to the type. We could easily extract all parameters and make them available via a string_list, but:

  1. That complicates the interface and memory management.
  2. In practice, no planned callers care about anything except the charset.

This patch therefore goes with the simplest thing, and we can expand or change the interface later if it becomes necessary.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27  http: extract type/subtype portion of content-type  (Jeff King, 1 file, -3/+35)
When we get a content-type from curl, we get the whole header line, including any parameters, and without any normalization (like downcasing or whitespace) applied. If we later try to match it with strcmp() or even strcasecmp(), we may get false negatives. This could cause two visible behaviors:

  1. We might fail to recognize a smart-http server by its content-type.
  2. We might fail to relay text/plain error messages to users (especially if they contain a charset parameter).

This patch teaches the http code to extract and normalize just the type/subtype portion of the string. This is technically passing out less information to the callers, who can no longer see the parameters. But none of the current callers cares, and a future patch will add back an easier-to-use method for accessing those parameters.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
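A simplified stand-in for the behavior these two patches describe (function names are made up): keep only the lowercased type/subtype token, and look up the charset parameter separately. The naive strstr() lookup below would still be fooled by quoted parameters like the example above; it only shows the split, not the full parser.

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    static void extract_media_type(const char *raw, char *out, size_t outlen)
    {
            size_t i = 0;

            /* copy up to the first ';' or whitespace, downcasing as we go */
            while (*raw && *raw != ';' && !isspace((unsigned char)*raw) &&
                   i + 1 < outlen)
                    out[i++] = tolower((unsigned char)*raw++);
            out[i] = '\0';
    }

    static const char *find_charset_naive(const char *raw)
    {
            const char *p = strstr(raw, "charset=");
            return p ? p + strlen("charset=") : NULL;
    }

    int main(void)
    {
            const char *hdr = "Text/Plain; format=fixed ;charset=utf-8";
            char type[64];

            extract_media_type(hdr, type, sizeof(type));
            printf("type=%s charset=%s\n", type, find_charset_naive(hdr));
            return 0; /* prints: type=text/plain charset=utf-8 */
    }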
2014-03-14  Merge branch 'mh/object-code-cleanup'  (Junio C Hamano, 1 file, -1/+1)
* mh/object-code-cleanup:
  sha1_file.c: document a bunch of functions defined in the file
  sha1_file_name(): declare to return a const string
  find_pack_entry(): document last_found_pack
  replace_object: use struct members instead of an array
2014-02-24  sha1_file_name(): declare to return a const string  (Michael Haggerty, 1 file, -1/+1)
Change the return value of sha1_file_name() to (const char *). (Callers have no business mucking about here.) Change callers accordingly, deleting a few superfluous temporary variables along the way. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-18  http: never use curl_easy_perform  (Jeff King, 1 file, -9/+15)
We currently don't reuse http connections when fetching via the smart-http protocol. This is bad because the TCP handshake introduces latency, and especially because SSL connection setup may be non-trivial. We can fix it by consistently using curl's "multi" interface.

The reason is rather complicated: Our http code has two ways of being used: queuing many "slots" to be fetched in parallel, or fetching a single request in a blocking manner. The parallel code is built on curl's "multi" interface. Most of the single-request code uses http_request, which is built on top of the parallel code (we just feed it one slot, and wait until it finishes). However, one could also accomplish the single-request scheme by avoiding curl's multi interface entirely and just using curl_easy_perform. This is simpler, and is used by post_rpc in the smart-http protocol.

It does work to use the same curl handle in both contexts, as long as it is not at the same time. However, internally curl may not share all of the cached resources between both contexts. In particular, a connection formed using the "multi" code will go into a reuse pool connected to the "multi" object. Further requests using the "easy" interface will not be able to reuse that connection.

The smart http protocol does ref discovery via http_request, which uses the "multi" interface, and then follows up with the "easy" interface for its rpc calls. As a result, we make two HTTP connections rather than reusing a single one.

We could teach the ref discovery to use the "easy" interface. But it is only once we have done this discovery that we know whether the protocol will be smart or dumb. If it is dumb, then our further requests, which want to fetch objects in parallel, will not be able to reuse the same connection.

Instead, this patch switches post_rpc to build on the parallel interface, which means that we use it consistently everywhere. It's a little more complicated to use, but since we have the infrastructure already, it doesn't add any code; we can just factor out the relevant bits from http_request.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
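For a sense of what building on the "multi" interface means, here is a generic sketch of driving one request through a curl multi handle so its connection lands in that handle's reuse pool; this is the general pattern, not git's run_one_slot():

    #include <curl/curl.h>

    static CURLcode run_one(CURLM *multi, CURL *easy)
    {
            CURLcode result = CURLE_FAILED_INIT;
            CURLMsg *msg;
            int running = 1, msgs_left;

            curl_multi_add_handle(multi, easy);
            while (running) {
                    curl_multi_perform(multi, &running);
                    if (running)
                            /* curl_multi_wait needs curl >= 7.28.0 */
                            curl_multi_wait(multi, NULL, 0, 100, NULL);
            }
            while ((msg = curl_multi_info_read(multi, &msgs_left)))
                    if (msg->msg == CURLMSG_DONE && msg->easy_handle == easy)
                            result = msg->data.result;
            curl_multi_remove_handle(multi, easy);
            return result;
    }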
2013-12-17  Merge branch 'cc/starts-n-ends-with'  (Junio C Hamano, 1 file, -5/+5)
Remove a few duplicate implementations of prefix/suffix comparison functions, and rename them to starts_with and ends_with.

* cc/starts-n-ends-with:
  replace {pre,suf}fixcmp() with {starts,ends}_with()
  strbuf: introduce starts_with() and ends_with()
  builtin/remote: remove postfixcmp() and use suffixcmp() instead
  environment: normalize use of prefixcmp() by removing " != 0"
2013-12-05  replace {pre,suf}fixcmp() with {starts,ends}_with()  (Christian Couder, 1 file, -5/+5)
Leaving only the function definitions and declarations so that any new topic in flight can still make use of the old functions, replace existing uses of prefixcmp() and suffixcmp() with the new API functions. The change can be recreated by mechanically applying this:

  $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
    grep -v strbuf\\.c |
    xargs perl -pi -e '
      s|!prefixcmp\(|starts_with\(|g;
      s|prefixcmp\(|!starts_with\(|g;
      s|!suffixcmp\(|ends_with\(|g;
      s|suffixcmp\(|!ends_with\(|g;
    '

on the result of preparatory changes in this series.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05  Merge branch 'bc/http-100-continue'  (Junio C Hamano, 1 file, -0/+6)
Issue "100 Continue" responses to help use of GSS-Negotiate authentication scheme over HTTP transport when needed. * bc/http-100-continue: remote-curl: fix large pushes with GSSAPI remote-curl: pass curl slot_results back through run_slot http: return curl's AUTHAVAIL via slot_results
2013-10-31  http: return curl's AUTHAVAIL via slot_results  (Jeff King, 1 file, -0/+6)
Callers of the http code may want to know which auth types were available for the previous request. But after finishing with the curl slot, they are not supposed to look at the curl handle again. We already handle returning other information via the slot_results struct; let's add a flag to check the available auth. Note that older versions of curl did not support this, so we simply return 0 (something like "-1" would be worse, as the value is a bitflag and we might accidentally set a flag). This is sufficient for the callers planned in this series, who only trigger some optional behavior if particular bits are set, and can live with a fake "no bits" answer. Signed-off-by: Jeff King <peff@peff.net>
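A sketch of what returning this via slot_results amounts to (struct and field names here are illustrative); the version guard matches 7.10.8, where CURLINFO_HTTPAUTH_AVAIL first appeared:

    #include <curl/curl.h>

    struct slot_results_demo {
            long http_code;
            long auth_avail; /* bitmask of CURLAUTH_* methods the server offered */
    };

    static void record_results(CURL *curl, struct slot_results_demo *res)
    {
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &res->http_code);
    #if LIBCURL_VERSION_NUM >= 0x070a08
            curl_easy_getinfo(curl, CURLINFO_HTTPAUTH_AVAIL, &res->auth_avail);
    #else
            res->auth_avail = 0; /* older curl: report "no bits set" */
    #endif
    }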
2013-10-30  Merge branch 'jk/http-auth-redirects'  (Junio C Hamano, 1 file, -29/+108)
Better handle the case where the http transport gets redirected during the authorization request.

* jk/http-auth-redirects:
  http.c: Spell the null pointer as NULL
  remote-curl: rewrite base url from info/refs redirects
  remote-curl: store url as a strbuf
  remote-curl: make refs_url a strbuf
  http: update base URLs when we see redirects
  http: provide effective url to callers
  http: hoist credential request out of handle_curl_result
  http: refactor options to http_get_*
  http_request: factor out curlinfo_strbuf
  http_get_file: style fixes
2013-10-28  Merge branch 'ew/keepalive'  (Junio C Hamano, 1 file, -0/+38)
* ew/keepalive:
  http: use curl's tcp keepalive if available
  http: enable keepalive on TCP sockets
2013-10-24  http.c: Spell the null pointer as NULL  (Ramsay Jones, 1 file, -1/+1)
Commit 1bbcc224 ("http: refactor options to http_get_*", 28-09-2013) changed the type of the final 'options' argument of the http_get_file() function from an int to a 'struct http_get_options' pointer. However, it neglected to update the (single) call site. Since this call was passing '0' to that argument, it was (correctly) being interpreted as a null pointer. Change the argument to NULL. Noticed by sparse. ("Using plain integer as NULL pointer") Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Acked-by: Jeff King <peff@peff.net> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-16  http: use curl's tcp keepalive if available  (Jeff King, 1 file, -4/+20)
Commit a15d069 taught git to use curl's SOCKOPTFUNCTION hook to turn on TCP keepalives. However, modern versions of curl have a TCP_KEEPALIVE option, which can do this for us. As an added bonus, the curl code knows how to turn on keepalive for a much wider variety of platforms. The only downside to using this option is that not everybody has a new enough curl. Let's split our keepalive options into three conditionals:

  1. With curl 7.25.0 and newer, we rely on curl to do it right.
  2. With older curl that still knows SOCKOPTFUNCTION, we use the code from a15d069.
  3. Otherwise, we are out of luck, and the call is a no-op.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
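The three cases sketched in code (a generic illustration, not git's exact http.c); 0x071900 is 7.25.0, and 0x071000 is 7.16.0, where CURLOPT_SOCKOPTFUNCTION appeared:

    #include <curl/curl.h>
    #include <sys/socket.h>

    #if LIBCURL_VERSION_NUM < 0x071900 && LIBCURL_VERSION_NUM >= 0x071000
    static int sockopt_keepalive(void *data, curl_socket_t fd, curlsocktype purpose)
    {
            int on = 1;

            (void)data;
            if (purpose == CURLSOCKTYPE_IPCXN)
                    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
            return 0; /* continue with the connection either way */
    }
    #endif

    static void set_keepalive(CURL *curl)
    {
    #if LIBCURL_VERSION_NUM >= 0x071900
            curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);                   /* case 1 */
    #elif LIBCURL_VERSION_NUM >= 0x071000
            curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_keepalive);  /* case 2 */
    #else
            (void)curl;                                                          /* case 3 */
    #endif
    }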
2013-10-14  http: update base URLs when we see redirects  (Jeff King, 1 file, -0/+60)
If a caller asks the http_get_* functions to go to a particular URL and we end up elsewhere due to a redirect, the effective_url field can tell us where we went. It would be nice to remember this redirect and short-cut further requests for two reasons:

  1. It's more efficient. Otherwise we spend an extra http round-trip to the server for each subsequent request, just to get redirected.
  2. If we end up with an http 401 and are going to ask for credentials, it is to feed them to the redirect target. If the redirect is an http->https upgrade, this means our credentials may be provided on the http leg, just to end up redirected to https. And if the redirect crosses server boundaries, then curl will drop the credentials entirely as it follows the redirect.

However, it is not enough to simply record the effective URL we saw and use that for subsequent requests. We were originally fed a "base" url like:

  http://example.com/foo.git

and we want to figure out what the new base is, even though the URLs we see may be:

  original:  http://example.com/foo.git/info/refs
  effective: http://example.com/bar.git/info/refs

Subsequent requests will not be for "info/refs", but for other paths relative to the base. We must ask the caller to pass in the original base, and we must pass the redirected base back to the caller (so that it can generate more URLs from it). Furthermore, we need to feed the new base to the credential code, so that requests to credential helpers (or to the user) match the URL we will be requesting.

This patch teaches http_request_reauth to do this munging. Since it is the caller who cares about making more URLs, it seems at first glance that callers could simply check effective_url themselves and handle it. However, since we need to update the credential struct before the second re-auth request, we have to do it inside http_request_reauth.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>