path: root/http.c
Age | Commit message | Author | Files | Lines
2014-08-20 | run-command: introduce CHILD_PROCESS_INIT | René Scharfe | 1 | -2/+1
Most struct child_process variables are cleared using memset first after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That's shorter, doesn't require a function call and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.). Helped-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
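
A small fragment showing the two styles, assuming git's run-command.h (where struct child_process and CHILD_PROCESS_INIT live):

    /* Before: zero the struct by hand after declaring it. */
    struct child_process cp;
    memset(&cp, 0, sizeof(cp));
    cp.git_cmd = 1;

    /* After: initialize statically, in the spirit of STRBUF_INIT. */
    struct child_process cp2 = CHILD_PROCESS_INIT;
    cp2.git_cmd = 1;
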
2014-07-09 | Merge branch 'jk/skip-prefix' | Junio C Hamano | 1 | -2/+1
* jk/skip-prefix:
    http-push: refactor parsing of remote object names
    imap-send: use skip_prefix instead of using magic numbers
    use skip_prefix to avoid repeated calculations
    git: avoid magic number with skip_prefix
    fetch-pack: refactor parsing in get_ack
    fast-import: refactor parsing of spaces
    stat_opt: check extra strlen call
    daemon: use skip_prefix to avoid magic numbers
    fast-import: use skip_prefix for parsing input
    use skip_prefix to avoid repeating strings
    use skip_prefix to avoid magic numbers
    transport-helper: avoid reading past end-of-string
    fast-import: fix read of uninitialized argv memory
    apply: use skip_prefix instead of raw addition
    refactor skip_prefix to return a boolean
    avoid using skip_prefix as a boolean
    daemon: mark some strings as const
    parse_diff_color_slot: drop ofs parameter
2014-06-20 | use skip_prefix to avoid repeated calculations | Jeff King | 1 | -2/+1
In some cases, we use starts_with to check for a prefix, and then use an already-calculated prefix length to advance a pointer past the prefix. There are no magic numbers or duplicated strings here, but we can still make the code simpler and more obvious by using skip_prefix. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
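
A hedged sketch of the conversion pattern, assuming git's helpers: starts_with(str, prefix) and skip_prefix(str, prefix, &out), where skip_prefix() returns non-zero and advances out past the prefix (the "ERR " string is only an example):

    /* Before: check the prefix, then advance by a length computed elsewhere. */
    if (starts_with(line, "ERR "))
        die("remote error: %s", line + 4);

    /* After: skip_prefix() checks and advances in one step; no magic "4". */
    const char *msg;
    if (skip_prefix(line, "ERR ", &msg))
        die("remote error: %s", msg);
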
2014-06-17 | http: fix charset detection of extract_content_type() | Yi EungJun | 1 | -2/+2
extract_content_type() could not extract a charset parameter if the parameter was not the first one and there was whitespace followed by a semicolon just before it. For example:

    text/plain; format=fixed ;charset=utf-8

It also could not correctly handle some other cases, such as:

    text/plain; charset=utf-8; format=fixed
    text/plain; some-param="a long value with ;semicolons;"; charset=utf-8

Thanks-to: Jeff King <peff@peff.net> Signed-off-by: Yi EungJun <eungjun.yi@navercorp.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 | http: default text charset to iso-8859-1 | Jeff King | 1 | -0/+3
This is specified by RFC 2616 as the default if no "charset" parameter is given. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 | http: optionally extract charset parameter from content-type | Jeff King | 1 | -4/+50
Since the previous commit, we now give a sanitized, shortened version of the content-type header to any callers who ask for it. This patch adds back a way for them to cleanly access specific parameters to the type. We could easily extract all parameters and make them available via a string_list, but:

1. That complicates the interface and memory management.

2. In practice, no planned callers care about anything except the charset.

This patch therefore goes with the simplest thing, and we can expand or change the interface later if it becomes necessary. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-05-27 | http: extract type/subtype portion of content-type | Jeff King | 1 | -3/+35
When we get a content-type from curl, we get the whole header line, including any parameters, and without any normalization (like downcasing or whitespace) applied. If we later try to match it with strcmp() or even strcasecmp(), we may get false negatives. This could cause two visible behaviors:

1. We might fail to recognize a smart-http server by its content-type.

2. We might fail to relay text/plain error messages to users (especially if they contain a charset parameter).

This patch teaches the http code to extract and normalize just the type/subtype portion of the string. This is technically passing out less information to the callers, who can no longer see the parameters. But none of the current callers cares, and a future patch will add back an easier-to-use method for accessing those parameters. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
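
A rough standalone sketch of the kind of normalization described here (it is not git's actual extract_content_type(), which works with strbufs, but it shows the idea):

    #include <ctype.h>
    #include <stdio.h>

    /* Copy the type/subtype part of a Content-Type value into out,
     * lowercased, stopping at whitespace or the first ';' parameter. */
    static void extract_type_subtype(const char *raw, char *out, size_t outlen)
    {
        size_t i;
        for (i = 0; i + 1 < outlen && raw[i] && raw[i] != ';' &&
                    !isspace((unsigned char)raw[i]); i++)
            out[i] = tolower((unsigned char)raw[i]);
        out[i] = '\0';
    }

    int main(void)
    {
        char buf[64];
        extract_type_subtype("Text/Plain; charset=UTF-8", buf, sizeof(buf));
        printf("%s\n", buf);   /* prints "text/plain" */
        return 0;
    }
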
2014-03-14 | Merge branch 'mh/object-code-cleanup' | Junio C Hamano | 1 | -1/+1
* mh/object-code-cleanup:
    sha1_file.c: document a bunch of functions defined in the file
    sha1_file_name(): declare to return a const string
    find_pack_entry(): document last_found_pack
    replace_object: use struct members instead of an array
2014-02-24 | sha1_file_name(): declare to return a const string | Michael Haggerty | 1 | -1/+1
Change the return value of sha1_file_name() to (const char *). (Callers have no business mucking about here.) Change callers accordingly, deleting a few superfluous temporary variables along the way. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-02-18 | http: never use curl_easy_perform | Jeff King | 1 | -9/+15
We currently don't reuse http connections when fetching via the smart-http protocol. This is bad because the TCP handshake introduces latency, and especially because SSL connection setup may be non-trivial. We can fix it by consistently using curl's "multi" interface. The reason is rather complicated:

Our http code has two ways of being used: queuing many "slots" to be fetched in parallel, or fetching a single request in a blocking manner. The parallel code is built on curl's "multi" interface. Most of the single-request code uses http_request, which is built on top of the parallel code (we just feed it one slot, and wait until it finishes).

However, one could also accomplish the single-request scheme by avoiding curl's multi interface entirely and just using curl_easy_perform. This is simpler, and is used by post_rpc in the smart-http protocol.

It does work to use the same curl handle in both contexts, as long as it is not at the same time. However, internally curl may not share all of the cached resources between both contexts. In particular, a connection formed using the "multi" code will go into a reuse pool connected to the "multi" object. Further requests using the "easy" interface will not be able to reuse that connection.

The smart http protocol does ref discovery via http_request, which uses the "multi" interface, and then follows up with the "easy" interface for its rpc calls. As a result, we make two HTTP connections rather than reusing a single one.

We could teach the ref discovery to use the "easy" interface. But it is only once we have done this discovery that we know whether the protocol will be smart or dumb. If it is dumb, then our further requests, which want to fetch objects in parallel, will not be able to reuse the same connection.

Instead, this patch switches post_rpc to build on the parallel interface, which means that we use it consistently everywhere. It's a little more complicated to use, but since we have the infrastructure already, it doesn't add any code; we can just factor out the relevant bits from http_request.

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
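
For readers unfamiliar with the two interfaces, here is a small standalone libcurl sketch of running a single blocking request through the "multi" interface, so that its connection lands in the multi handle's reuse pool. It is illustrative only (git's http.c is organized around request slots, not this helper):

    #include <curl/curl.h>

    /* Drive one easy handle to completion through a multi handle. */
    static CURLcode perform_via_multi(CURLM *multi, CURL *easy)
    {
        CURLMsg *msg;
        int running, numfds, msgs_left;
        CURLcode result = CURLE_OK;

        curl_multi_add_handle(multi, easy);
        do {
            curl_multi_perform(multi, &running);
            if (running)
                curl_multi_wait(multi, NULL, 0, 1000, &numfds);
        } while (running);

        while ((msg = curl_multi_info_read(multi, &msgs_left)))
            if (msg->msg == CURLMSG_DONE && msg->easy_handle == easy)
                result = msg->data.result;
        curl_multi_remove_handle(multi, easy);
        return result;
    }

    int main(void)
    {
        CURLM *multi;
        CURL *easy;
        CURLcode res;

        curl_global_init(CURL_GLOBAL_ALL);
        multi = curl_multi_init();
        easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");

        res = perform_via_multi(multi, easy);   /* blocking, but the connection stays cached in 'multi' */

        curl_easy_cleanup(easy);
        curl_multi_cleanup(multi);
        curl_global_cleanup();
        return res != CURLE_OK;
    }
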
2013-12-17 | Merge branch 'cc/starts-n-ends-with' | Junio C Hamano | 1 | -5/+5
Remove a few duplicate implementations of prefix/suffix comparison functions, and rename them to starts_with and ends_with.

* cc/starts-n-ends-with:
    replace {pre,suf}fixcmp() with {starts,ends}_with()
    strbuf: introduce starts_with() and ends_with()
    builtin/remote: remove postfixcmp() and use suffixcmp() instead
    environment: normalize use of prefixcmp() by removing " != 0"
2013-12-05 | replace {pre,suf}fixcmp() with {starts,ends}_with() | Christian Couder | 1 | -5/+5
Leaving only the function definitions and declarations so that any new topic in flight can still make use of the old functions, replace existing uses of the prefixcmp() and suffixcmp() with new API functions. The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05 | Merge branch 'bc/http-100-continue' | Junio C Hamano | 1 | -0/+6
Issue "100 Continue" responses to help use of GSS-Negotiate authentication scheme over HTTP transport when needed.

* bc/http-100-continue:
    remote-curl: fix large pushes with GSSAPI
    remote-curl: pass curl slot_results back through run_slot
    http: return curl's AUTHAVAIL via slot_results
2013-10-31 | http: return curl's AUTHAVAIL via slot_results | Jeff King | 1 | -0/+6
Callers of the http code may want to know which auth types were available for the previous request. But after finishing with the curl slot, they are not supposed to look at the curl handle again. We already handle returning other information via the slot_results struct; let's add a flag to check the available auth. Note that older versions of curl did not support this, so we simply return 0 (something like "-1" would be worse, as the value is a bitflag and we might accidentally set a flag). This is sufficient for the callers planned in this series, who only trigger some optional behavior if particular bits are set, and can live with a fake "no bits" answer. Signed-off-by: Jeff King <peff@peff.net>
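
The curl side of this is a single getinfo call, guarded by a version check since CURLINFO_HTTPAUTH_AVAIL only exists in curl 7.10.8 and later. A hedged fragment (slot->curl and results->auth_avail follow git's naming, but treat the surrounding code as illustrative):

    #if LIBCURL_VERSION_NUM >= 0x070a08
        long auth_avail = 0;
        curl_easy_getinfo(slot->curl, CURLINFO_HTTPAUTH_AVAIL, &auth_avail);
        results->auth_avail = auth_avail;   /* bitmask of CURLAUTH_* values */
    #else
        results->auth_avail = 0;            /* older curl cannot tell us; report "no bits" */
    #endif
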
2013-10-30 | Merge branch 'jk/http-auth-redirects' | Junio C Hamano | 1 | -29/+108
Handle the case where http transport gets redirected during the authorization request better.

* jk/http-auth-redirects:
    http.c: Spell the null pointer as NULL
    remote-curl: rewrite base url from info/refs redirects
    remote-curl: store url as a strbuf
    remote-curl: make refs_url a strbuf
    http: update base URLs when we see redirects
    http: provide effective url to callers
    http: hoist credential request out of handle_curl_result
    http: refactor options to http_get_*
    http_request: factor out curlinfo_strbuf
    http_get_file: style fixes
2013-10-28 | Merge branch 'ew/keepalive' | Junio C Hamano | 1 | -0/+38
* ew/keepalive:
    http: use curl's tcp keepalive if available
    http: enable keepalive on TCP sockets
2013-10-24 | http.c: Spell the null pointer as NULL | Ramsay Jones | 1 | -1/+1
Commit 1bbcc224 ("http: refactor options to http_get_*", 28-09-2013) changed the type of the final 'options' argument of the http_get_file() function from an int to a 'struct http_get_options' pointer. However, it neglected to update the (single) call site. Since this call was passing '0' to that argument, it was (correctly) being interpreted as a null pointer. Change the argument to NULL. Noticed by sparse ("Using plain integer as NULL pointer"). Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Acked-by: Jeff King <peff@peff.net> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-16 | http: use curl's tcp keepalive if available | Jeff King | 1 | -4/+20
Commit a15d069 taught git to use curl's SOCKOPTFUNCTION hook to turn on TCP keepalives. However, modern versions of curl have a TCP_KEEPALIVE option, which can do this for us. As an added bonus, the curl code knows how to turn on keepalive for a much wider variety of platforms. The only downside to using this option is that not everybody has a new enough curl. Let's split our keepalive options into three conditionals:

1. With curl 7.25.0 and newer, we rely on curl to do it right.

2. With older curl that still knows SOCKOPTFUNCTION, we use the code from a15d069.

3. Otherwise, we are out of luck, and the call is a no-op.

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
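
Sketched from the description above: 7.25.0 corresponds to LIBCURL_VERSION_NUM 0x071900, and the sockopt callback itself is sketched under the "http: enable keepalive on TCP sockets" entry further down. The exact version cutoffs and function names here are illustrative:

    #if LIBCURL_VERSION_NUM >= 0x071000 && LIBCURL_VERSION_NUM < 0x071900
    /* Defined as in the SO_KEEPALIVE sketch below. */
    static int sockopt_callback(void *clientp, curl_socket_t fd, curlsocktype purpose);
    #endif

    static void set_curl_keepalive(CURL *c)
    {
    #if LIBCURL_VERSION_NUM >= 0x071900
        /* 1. curl 7.25.0 and newer: let curl turn on keepalive portably. */
        curl_easy_setopt(c, CURLOPT_TCP_KEEPALIVE, 1L);
    #elif LIBCURL_VERSION_NUM >= 0x071000
        /* 2. older curl that still has CURLOPT_SOCKOPTFUNCTION: do it ourselves. */
        curl_easy_setopt(c, CURLOPT_SOCKOPTFUNCTION, sockopt_callback);
    #else
        /* 3. out of luck; the call is a no-op. */
        (void)c;
    #endif
    }
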
2013-10-14 | http: update base URLs when we see redirects | Jeff King | 1 | -0/+60
If a caller asks the http_get_* functions to go to a particular URL and we end up elsewhere due to a redirect, the effective_url field can tell us where we went. It would be nice to remember this redirect and short-cut further requests for two reasons:

1. It's more efficient. Otherwise we spend an extra http round-trip to the server for each subsequent request, just to get redirected.

2. If we end up with an http 401 and are going to ask for credentials, we want to feed them to the redirect target. If the redirect is an http->https upgrade, this means our credentials may be provided on the http leg, just to end up redirected to https. And if the redirect crosses server boundaries, then curl will drop the credentials entirely as it follows the redirect.

However, it is not enough to simply record the effective URL we saw and use that for subsequent requests. We were originally fed a "base" url like:

    http://example.com/foo.git

and we want to figure out what the new base is, even though the URLs we see may be:

    original:  http://example.com/foo.git/info/refs
    effective: http://example.com/bar.git/info/refs

Subsequent requests will not be for "info/refs", but for other paths relative to the base. We must ask the caller to pass in the original base, and we must pass the redirected base back to the caller (so that it can generate more URLs from it). Furthermore, we need to feed the new base to the credential code, so that requests to credential helpers (or to the user) match the URL we will be requesting.

This patch teaches http_request_reauth to do this munging. Since it is the caller who cares about making more URLs, it seems at first glance that callers could simply check effective_url themselves and handle it. However, since we need to update the credential struct before the second re-auth request, we have to do it inside http_request_reauth.

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
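
A hedged sketch of the base-URL munging described above (function and variable names are illustrative; git's actual code works with strbufs inside http_request_reauth, and uses POSIX strndup here only for brevity):

    #include <stdlib.h>
    #include <string.h>

    /* Given the base we were configured with, the URL we asked curl for, and the
     * effective URL curl ended up at, rewrite the base to follow the redirect.
     * Returns 1 and replaces *base on success, 0 if the URLs don't line up. */
    static int update_base_from_redirect(char **base, const char *asked, const char *got)
    {
        size_t tail = strlen(asked) - strlen(*base);   /* e.g. strlen("/info/refs") */
        size_t got_len = strlen(got);

        if (!strcmp(asked, got))
            return 0;                                  /* no redirect happened */
        if (got_len <= tail || strcmp(got + got_len - tail, asked + strlen(*base)))
            return 0;                                  /* tails differ; give up */

        free(*base);
        *base = strndup(got, got_len - tail);          /* e.g. "http://example.com/bar.git" */
        return 1;
    }
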
2013-10-14 | http: provide effective url to callers | Jeff King | 1 | -0/+4
When we ask curl to access a URL, it may follow one or more redirects to reach the final location. We have no idea this has happened, as curl takes care of the details and simply returns the final content to us. The final URL that we ended up with can be accessed via CURLINFO_EFFECTIVE_URL. Let's make that optionally available to callers of http_get_*, so that they can make further decisions based on the redirection. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-10-14 | http: hoist credential request out of handle_curl_result | Jeff King | 1 | -2/+4
When we are handling a curl response code in http_request or in the remote-curl RPC code, we use the handle_curl_result helper to translate curl's response into an easy-to-use code. When we see an HTTP 401, we do one of two things:

1. If we already had a filled-in credential, we mark it as rejected, and then return HTTP_NOAUTH to indicate to the caller that we failed.

2. If we didn't, then we ask for a new credential and tell the caller HTTP_REAUTH to indicate that they may want to try again.

Rejecting in the first case makes sense; it is the natural result of the request we just made. However, prompting for more credentials in the second step does not always make sense. We do not know for sure that the caller is going to make a second request, and nor are we sure that it will be to the same URL. Logically, the prompt belongs not to the request we just finished, but to the request we are (maybe) about to make.

In practice, it is very hard to trigger any bad behavior. Currently, if we make a second request, it will always be to the same URL (even in the face of redirects, because curl handles the redirects internally). And we almost always retry on HTTP_REAUTH these days. The one exception is if we are streaming a large RPC request to the server (e.g., a pushed packfile), in which case we cannot restart. It's extremely unlikely to see a 401 response at this stage, though, as we would typically have seen it when we sent a probe request, before streaming the data.

This patch drops the automatic prompt out of case 2, and instead requires the caller to do it. This is a few extra lines of code, and the bug it fixes is unlikely to come up in practice. But it is conceptually cleaner, and paves the way for better handling of credentials across redirects.

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
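
With the prompt hoisted out, a caller that intends to retry does roughly this. A hedged fragment: http_request's real argument list differs, but credential_fill() and the global http_auth credential are git's own names:

    ret = http_request(url, &result, options);
    if (ret == HTTP_REAUTH) {
        credential_fill(&http_auth);               /* the prompt now belongs to the retry... */
        ret = http_request(url, &result, options); /* ...which we are actually about to make */
    }
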
2013-10-14 | http: enable keepalive on TCP sockets | Eric Wong | 1 | -0/+22
This is a follow up to commit e47a8583 (enable SO_KEEPALIVE for connected TCP sockets, 2011-12-06). Sockets may never receive notification of some link errors, causing "git fetch" or similar processes to hang forever. Enabling keepalive messages allows hung processes to error out after a few minutes/hours depending on the keepalive settings of the system. I noticed this problem with some non-interactive cronjobs getting hung when talking to HTTP servers. Signed-off-by: Eric Wong <normalperson@yhbt.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
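
A minimal sketch of what this change does: register a sockopt callback with curl and flip SO_KEEPALIVE on each new socket (POSIX sockets assumed; the function names are illustrative):

    #include <curl/curl.h>
    #include <sys/socket.h>

    /* curl calls this right after creating the socket, before connect(). */
    static int sockopt_callback(void *clientp, curl_socket_t fd, curlsocktype purpose)
    {
        int on = 1;
        (void)clientp; (void)purpose;
        /* Best effort: failing to enable keepalive should not abort the request. */
        setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
        return 0;   /* proceed normally */
    }

    static void enable_keepalive(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_callback);
    }
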
2013-09-30 | http: refactor options to http_get_* | Jeff King | 1 | -19/+25
Over time, the http_get_strbuf function has grown several optional parameters. We now have a bitfield with multiple boolean options, as well as an optional strbuf for returning the content-type of the response. And a future patch in this series is going to add another strbuf option. Treating these as separate arguments has a few downsides:

1. Most call sites need to add extra NULLs and 0s for the options they aren't interested in.

2. The http_get_* functions are actually wrappers around 2 layers of low-level implementation functions. We have to pass these options through individually.

3. The http_get_strbuf wrapper learned these options, but nobody bothered to do so for http_get_file, even though it is backed by the same function that does understand the options.

Let's consolidate the options into a single struct. For the common case of the default options, we'll allow callers to simply pass a NULL for the options struct. The resulting code is often a few lines longer, but it ends up being easier to read (and to change as we add new options, since we do not need to update each call site). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
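
The shape of the consolidated interface, sketched from the description above (the field and function names only approximate git's http.h of that era; treat them as illustrative):

    struct http_get_options {
        unsigned no_cache:1,
                 keep_error:1;
        struct strbuf *content_type;   /* if non-NULL, filled with the response type */
    };

    /* Call-site usage: the common case passes NULL for the options...    */
    /*     http_get_strbuf(url, &buf, NULL);                              */
    /* ...and callers with needs fill in only the fields they care about: */
    /*     struct http_get_options opts = { .no_cache = 1 };              */
    /*     http_get_strbuf(url, &buf, &opts);                             */
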
2013-09-30 | http_request: factor out curlinfo_strbuf | Jeff King | 1 | -7/+14
When we retrieve the content-type of an http response, curl gives us a pointer to internal storage, which we then copy into a strbuf. Let's factor out the get-and-copy routine, which can be used for getting other curl info. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
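
The factored-out helper is essentially a thin wrapper around curl_easy_getinfo() for string-valued infos (CURLINFO_CONTENT_TYPE here, CURLINFO_EFFECTIVE_URL in a later patch). A hedged sketch using git's strbuf API:

    static int curlinfo_strbuf(CURL *curl, CURLINFO info, struct strbuf *buf)
    {
        char *ptr = NULL;
        CURLcode ret;

        strbuf_reset(buf);
        ret = curl_easy_getinfo(curl, info, &ptr);
        if (!ret && ptr)
            strbuf_addstr(buf, ptr);   /* curl owns ptr; copy it into our buffer */
        return ret;
    }
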
2013-09-30 | http_get_file: style fixes | Jeff King | 1 | -2/+2
Besides being ugly, the extra parentheses are idiomatic for suppressing compiler warnings when we are assigning within a conditional. We aren't doing that here, and they just confuse the reader. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2013-09-09 | Merge branch 'jc/url-match' | Junio C Hamano | 1 | -3/+13
Allow section.<urlpattern>.var configuration variables to be treated as a "virtual" section.var given a URL, and use the mechanism to enhance http.* configuration variables. This is a reroll of Kyle J. McKay's work.

* jc/url-match:
    builtin/config.c: compilation fix
    config: "git config --get-urlmatch" parses section.<url>.key
    builtin/config: refactor collect_config()
    config: parse http.<url>.<variable> using urlmatch
    config: add generic callback wrapper to parse section.<url>.key
    config: add helper to normalize and match URLs
    http.c: fix parsing of http.sslCertPasswordProtected variable
2013-08-05 | config: parse http.<url>.<variable> using urlmatch | Kyle J. McKay | 1 | -1/+12
Use the urlmatch_config_entry() to wrap the underlying http_options() two-level variable parser in order to set http.<variable> to the value with the most specific URL in the configuration. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Kyle J. McKay <mackyle@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-31 | http.c: fix parsing of http.sslCertPasswordProtected variable | Junio C Hamano | 1 | -2/+1
The existing code triggers only when the configuration variable is set to true. Once the variable is set to true in a more generic configuration file (e.g. ~/.gitconfig), it cannot be overridden to false in the repository-specific one (e.g. .git/config). Signed-off-by: Junio C Hamano <gitster@pobox.com>
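
The fix amounts to assigning the parsed boolean unconditionally instead of only setting the flag when it parses to true. A hedged before/after fragment (the names follow http.c's http_options(), but treat the exact code as illustrative):

    if (!strcmp("http.sslcertpasswordprotected", var)) {
        /* Before: a later "false" could never override an earlier "true":
         *     if (git_config_bool(var, value))
         *         ssl_cert_password_required = 1;
         */
        ssl_cert_password_required = git_config_bool(var, value);   /* after: the last file read wins */
        return 0;
    }
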
2013-07-30 | http: add http.savecookies option to write out HTTP cookies | Dave Borowitz | 1 | -0/+7
HTTP servers may send Set-Cookie headers in a response and expect them to be set on subsequent requests. By default, libcurl behavior is to store such cookies in memory and reuse them across requests within a single session. However, it may also make sense, depending on the server and the cookies, to store them across sessions. Provide users an option to enable this behavior, writing cookies out to the same file specified in http.cookiefile. Signed-off-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
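
In libcurl terms, http.savecookies means pointing CURLOPT_COOKIEJAR at the same file we already read with CURLOPT_COOKIEFILE, so curl writes cookies back out when the handle is cleaned up. A hedged fragment (curl_cookie_file and curl_save_cookies mirror git's config-backed variables):

    curl_easy_setopt(result, CURLOPT_COOKIEFILE, curl_cookie_file);      /* read cookies (existing behavior) */
    if (curl_save_cookies)
        curl_easy_setopt(result, CURLOPT_COOKIEJAR, curl_cookie_file);   /* also write them out at cleanup */
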
2013-06-27 | Merge branch 'bc/http-keep-memory-given-to-curl' | Junio C Hamano | 1 | -3/+9
Older cURL wanted the piece of memory we call it with to be stable, but we updated the auth material after handing it to a call.

* bc/http-keep-memory-given-to-curl:
    http.c: don't rewrite the user:passwd string multiple times
2013-06-19 | http.c: don't rewrite the user:passwd string multiple times | Brandon Casey | 1 | -3/+9
Curl older than 7.17 (RHEL 4.X provides 7.12 and RHEL 5.X provides 7.15) requires that we manage any strings that we pass to it as pointers. So, we really shouldn't be modifying this strbuf after we have passed it to curl.

Our interaction with curl is currently safe (before or after this patch) since the pointer that is passed to curl is never invalidated; it is repeatedly rewritten with the same sequence of characters but the strbuf functions never need to allocate a larger string, so the same memory buffer is reused. This "guarantee" of safety is somewhat subtle and could be overlooked by someone who may want to add a more complex handling of the username and password.

So, let's stop modifying this strbuf after we have passed it to curl, but also leave a note to describe the assumptions that have been made about username/password lifetime and to draw attention to the code.

Signed-off-by: Brandon Casey <drafnel@gmail.com> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-19 | Merge branch 'mv/ssl-ftp-curl' | Junio C Hamano | 1 | -0/+10
Does anybody really use commit walkers over (s)ftp?

* mv/ssl-ftp-curl:
    Support FTP-over-SSL/TLS for regular FTP
2013-04-16 | http: set curl FAILONERROR each time we select a handle | Jeff King | 1 | -1/+1
Because we reuse curl handles for multiple requests, the setup of a handle happens in two stages: stable, global setup and per-request setup. The lifecycle of a handle is something like:

1. get_curl_handle; do basic global setup that will last through the whole program (e.g., setting the user agent, ssl options, etc)

2. get_active_slot; set up a per-request baseline (e.g., clearing the read/write functions, making it a GET request, etc)

3. perform the request with curl_*_perform functions

4. goto step 2 to perform another request

Breaking it down this way means we can avoid doing global setup from step (1) repeatedly, but we still finish step (2) with a predictable baseline setup that callers can rely on.

Until commit 6d052d7 (http: add HTTP_KEEP_ERROR option, 2013-04-05), setting curl's FAILONERROR option was a global setup; we never changed it. However, 6d052d7 introduced an option where some requests might turn off FAILONERROR. Later requests using the same handle would have the option unexpectedly turned off, which meant they would not notice http failures at all.

This could easily be seen in the test-suite for the "half-auth" cases of t5541 and t5551. The initial requests turned off FAILONERROR, which meant it was erroneously off for the rpc POST. That worked fine for a successful request, but meant that we failed to react properly to the HTTP 401 (instead, we treated whatever the server handed us as a successful message body).

The solution is simple: now that FAILONERROR is a per-request setting, we move it to get_active_slot to make sure it is reset for each request. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
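
The fix itself is a one-liner in the per-request baseline; a hedged fragment (the slot naming follows git's get_active_slot()):

    /* In get_active_slot(), so every request starts from the same baseline: */
    curl_easy_setopt(slot->curl, CURLOPT_FAILONERROR, 1);
    /* A request that wants HTTP_KEEP_ERROR can still turn this off afterwards. */
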
2013-04-12 | Support FTP-over-SSL/TLS for regular FTP | Modestas Vainius | 1 | -0/+10
Add a boolean http.sslTry option which allows enabling AUTH SSL/TLS and encrypted data transfers when connecting via the regular FTP protocol. Default is false since it might trigger certificate verification errors on misconfigured servers. Signed-off-by: Modestas Vainius <modestas@vainius.eu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
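
On the curl side this maps onto CURLOPT_USE_SSL with CURLUSESSL_TRY (available since curl 7.17.0). A hedged fragment, with curl_ssl_try standing in for the config-backed variable:

    #if LIBCURL_VERSION_NUM >= 0x071100
        if (curl_ssl_try)
            curl_easy_setopt(result, CURLOPT_USE_SSL, CURLUSESSL_TRY);
    #endif
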
2013-04-06 | http: drop http_error function | Jeff King | 1 | -5/+0
This function is a single-liner and is only called from one place. Just inline it, which makes the code more obvious. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 | http: re-word http error message | Jeff King | 1 | -1/+1
When we report an http error code, we say something like:

    error: The requested URL reported failure: 403 Forbidden while accessing http://example.com/repo.git

Everything between "error:" and "while" is written by curl, and the resulting sentence is hard to read (especially because there is no punctuation between curl's sentence and the remainder of ours). Instead, let's re-order this to give better flow:

    error: unable to access 'http://example.com/repo.git': The requested URL reported failure: 403 Forbidden

This is still annoyingly long, but at least reads more clearly left to right. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 | http: simplify http_error helper function | Jeff King | 1 | -7/+4
This helper function should really be a one-liner that prints an error message, but it has ended up unnecessarily complicated:

1. We call error() directly when we fail to start the curl request, so we must later avoid printing a duplicate error in http_error(). It would be much simpler in this case to just stuff the error message into our usual curl_errorstr buffer rather than printing it ourselves. This means that http_error does not even have to care about curl's exit value (the interesting part is in the errorstr buffer already).

2. We return the "ret" value passed in to us, but none of the callers actually cares about our return value. We can just drop this entirely.

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06 | http: add HTTP_KEEP_ERROR option | Jeff King | 1 | -0/+37
We currently set curl's FAILONERROR option, which means that any http failures are reported as curl errors, and the http body content from the server is thrown away. This patch introduces a new option to http_get_strbuf which specifies that the body content from a failed http response should be placed in the destination strbuf, where it can be accessed by the caller. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20 | pkt-line: move LARGE_PACKET_MAX definition from sideband | Jeff King | 1 | -0/+1
Having the packet sizes defined near the packet read/write functions makes more sense. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-06 | http_request: reset "type" strbuf before adding | Jeff King | 1 | -0/+1
Callers may pass us a strbuf which we use to record the content-type of the response. However, we simply appended to it rather than overwriting its contents, meaning that cruft in the strbuf gave us a bogus type. E.g., the multiple requests triggered by http_request could yield a type like "text/plainapplication/x-git-receive-pack-advertisement". Reported-by: Michael Schubert <mschub@elegosoft.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04 | Verify Content-Type from smart HTTP servers | Shawn Pearce | 1 | -9/+22
Before parsing a suspected smart-HTTP response verify the returned Content-Type matches the standard. This protects a client from attempting to process a payload that smells like a smart-HTTP server response. JGit has been doing this check on all responses since the dawn of time. I mistakenly failed to include it in git-core when smart HTTP was introduced. At the time I didn't know how to get the Content-Type from libcurl. I punted, meant to circle back and fix this, and just plain forgot about it. Signed-off-by: Shawn Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-01-10 | Merge branch 'rb/http-cert-cred-no-username-prompt' into maint | Junio C Hamano | 1 | -0/+1
* rb/http-cert-cred-no-username-prompt:
    http.c: Avoid username prompt for certificate credentials
2012-12-21 | http.c: Avoid username prompt for certificate credentials | Rene Bredlau | 1 | -0/+1
If sslCertPasswordProtected is set to true, do not ask for a username to decrypt the RSA key. This question is pointless; the key is only protected by a password. Internally the username is simply set to "". Signed-off-by: Rene Bredlau <git@unrelated.de> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-11-09 | Merge branch 'sz/maint-curl-multi-timeout' | Jeff King | 1 | -0/+12
Sometimes the curl_multi_timeout() function suggested a wrong timeout value when there were no file descriptors to wait on, and the http transport ended up sleeping for minutes in the select(2) system call. Detect this and reduce the wait timeout in such a case.

* sz/maint-curl-multi-timeout:
    Fix potential hang in https handshake
2012-10-29 | Merge branch 'jk/maint-http-init-not-in-result-handler' | Jeff King | 1 | -4/+2
Further clean-up to the http codepath that picks up results after the cURL library is done with one request slot.

* jk/maint-http-init-not-in-result-handler:
    http: do not set up curl auth after a 401
    remote-curl: do not call run_slot repeatedly
2012-10-19 | Fix potential hang in https handshake | Stefan Zager | 1 | -0/+12
It has been observed that curl_multi_timeout may return a very long timeout value (e.g., 294 seconds and some usec) just before curl_multi_fdset returns no file descriptors for reading. The upshot is that select() will hang for a long time -- long enough for an https handshake to be dropped. The observed behavior is that the git command will hang at the terminal and never transfer any data.

This patch is a workaround for a probable bug in libcurl. The bug only seems to manifest around a very specific set of circumstances:

    - curl version (from curl/curlver.h):

        #define LIBCURL_VERSION_NUM 0x071307

    - git-remote-https running on an ubuntu-lucid VM.

    - Connecting through squid proxy running on another VM.

Interestingly, the problem doesn't manifest if a host connects through squid proxy running on localhost; only if the proxy is on a separate VM (not sure if the squid host needs to be on a separate physical machine). That would seem to suggest that this issue is timing-sensitive.

This patch is more or less in line with a recommendation in the curl docs about how to behave when curl_multi_fdset doesn't return any file descriptors:

    http://curl.haxx.se/libcurl/c/curl_multi_fdset.html

Signed-off-by: Stefan Zager <szager@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
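
A sketch of the workaround pattern recommended there: when curl_multi_fdset() reports nothing to wait on (maxfd == -1), don't trust a long curl_multi_timeout() suggestion; wait only briefly before calling curl_multi_perform() again. This is a fragment with error handling omitted, not git's exact code:

    fd_set rfds, wfds, efds;
    int maxfd = -1;
    long timeout_ms = -1;
    struct timeval tv;

    FD_ZERO(&rfds); FD_ZERO(&wfds); FD_ZERO(&efds);
    curl_multi_fdset(multi, &rfds, &wfds, &efds, &maxfd);
    curl_multi_timeout(multi, &timeout_ms);

    if (maxfd < 0) {
        /* Nothing to select on: curl's suggested timeout may be huge
         * (hundreds of seconds); sleep only briefly and poll again. */
        timeout_ms = 50;
    }
    if (timeout_ms < 0)
        timeout_ms = 1000;

    tv.tv_sec = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;
    select(maxfd + 1, &rfds, &wfds, &efds, &tv);
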
2012-10-17 | Merge branch 'jk/maint-http-half-auth-push' into maint | Junio C Hamano | 1 | -4/+3
* jk/maint-http-half-auth-push:
    http: fix segfault in handle_curl_result
2012-10-16 | Merge branch 'jk/maint-http-half-auth-push' | Junio C Hamano | 1 | -4/+3
Fixes a regression in maint-1.7.11 (v1.7.11.7), maint (v1.7.12.1) and master (v1.8.0-rc0).

* jk/maint-http-half-auth-push:
    http: fix segfault in handle_curl_result
2012-10-12 | http: do not set up curl auth after a 401 | Jeff King | 1 | -4/+2
When we get an http 401, we prompt for credentials and put them in our global credential struct. We also feed them to the curl handle that produced the 401, with the intent that they will be used on a retry.

When the code was originally introduced in commit 42653c0, this was a necessary step. However, since dfa1725, we always feed our global credential into every curl handle when we initialize the slot with get_active_slot. So every further request already feeds the credential to curl.

Moreover, accessing the slot here is somewhat dubious. After the slot has produced a response, we don't actually control it any more. If we are using curl_multi, it may even have been re-initialized to handle a different request. It just so happens that we will reuse the curl handle within the slot in such a case, and that because we only keep one global credential, it will be the one we want. So the current code is not buggy, but it is misleading.

By cleaning it up, we can remove the slot argument entirely from handle_curl_result, making it much more obvious that slots should not be accessed after they are marked as finished.

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12 | http: fix segfault in handle_curl_result | Jeff King | 1 | -4/+3
When we create an http active_request_slot, we can set its "results" pointer back to local storage. The http code will fill in the details of how the request went, and we can access those details even after the slot has been cleaned up.

Commit 8809703 (http: factor out http error code handling) switched us from accessing our local results struct directly to accessing it via the "results" pointer of the slot. That means we're accessing the slot after it has been marked as finished, defeating the whole purpose of keeping the results storage separate.

Most of the time this doesn't matter, as finishing the slot does not actually clean up the pointer. However, when using curl's multi interface with the dumb-http revision walker, we might actually start a new request before handing control back to the original caller. In that case, we may reuse the slot, zeroing its results pointer, and leading the original caller to segfault while looking for its results inside the slot.

Instead, we need to pass a pointer to our local results storage to the handle_curl_result function, rather than relying on the pointer in the slot struct. This matches what the original code did before the refactoring (which did not use a separate function, and therefore just accessed the results struct directly).

Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>