path: root/http.c
2013-09-09  Merge branch 'jc/url-match'  (Junio C Hamano; 1 file, -3/+13)
Allow section.<urlpattern>.var configuration variables to be treated as a "virtual" section.var given a URL, and use the mechanism to enhance http.* configuration variables. This is a reroll of Kyle J. McKay's work.

* jc/url-match:
  builtin/config.c: compilation fix
  config: "git config --get-urlmatch" parses section.<url>.key
  builtin/config: refactor collect_config()
  config: parse http.<url>.<variable> using urlmatch
  config: add generic callback wrapper to parse section.<url>.key
  config: add helper to normalize and match URLs
  http.c: fix parsing of http.sslCertPasswordProtected variable
2013-08-05  config: parse http.<url>.<variable> using urlmatch  (Kyle J. McKay; 1 file, -1/+12)
Use the urlmatch_config_entry() to wrap the underlying http_options() two-level variable parser in order to set http.<variable> to the value with the most specific URL in the configuration. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Kyle J. McKay <mackyle@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
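As an editorial illustration of how such a wrapper can be wired up, here is a minimal C sketch; the struct urlmatch_config field names and the url_normalize() call are assumptions modeled on the urlmatch API named above, and http_options() stands for the existing two-level parser in http.c:

    #include "cache.h"
    #include "string-list.h"
    #include "urlmatch.h"

    /* hypothetical excerpt: resolve http.* and http.<url>.* for one URL */
    static void http_read_config(const char *url)
    {
        struct urlmatch_config config = { STRING_LIST_INIT_DUP };
        char *normalized;

        config.section = "http";            /* only consider http.* keys */
        config.key = NULL;                  /* accept any variable name */
        config.collect_fn = http_options;   /* existing two-level parser */
        config.cascade_fn = git_default_config;
        config.cb = NULL;

        normalized = url_normalize(url, &config.url);
        git_config(urlmatch_config_entry, &config);
        free(normalized);
    }

With something like this in place, a value set under [http "https://example.com"] wins over a plain http.* value whenever the URL being accessed matches https://example.com.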
2013-07-31  http.c: fix parsing of http.sslCertPasswordProtected variable  (Junio C Hamano; 1 file, -2/+1)
The existing code triggers only when the configuration variable is set to true. Once the variable is set to true in a more generic configuration file (e.g. ~/.gitconfig), it cannot be overridden to false in the repository-specific one (e.g. .git/config).

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-30  http: add http.savecookies option to write out HTTP cookies  (Dave Borowitz; 1 file, -0/+7)
HTTP servers may send Set-Cookie headers in a response and expect them to be set on subsequent requests. By default, libcurl behavior is to store such cookies in memory and reuse them across requests within a single session. However, it may also make sense, depending on the server and the cookies, to store them across sessions. Provide users an option to enable this behavior, writing cookies out to the same file specified in http.cookiefile. Signed-off-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
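To make the shape of the change concrete, a rough C sketch of the two halves such an option needs; the variable and function names here are illustrative, not the actual http.c identifiers:

    #include <string.h>
    #include <curl/curl.h>
    #include "cache.h"

    static int curl_save_cookies;          /* from http.savecookies */
    static const char *curl_cookie_file;   /* from http.cookiefile */

    /* config-callback excerpt: accept the new boolean */
    static int handle_savecookies(const char *var, const char *value)
    {
        if (!strcmp(var, "http.savecookies")) {
            curl_save_cookies = git_config_bool(var, value);
            return 0;
        }
        return 1;   /* not ours */
    }

    /* per-request setup: ask libcurl to write cookies back to the file */
    static void setup_cookie_jar(CURL *curl)
    {
        if (curl_save_cookies && curl_cookie_file)
            curl_easy_setopt(curl, CURLOPT_COOKIEJAR, curl_cookie_file);
    }

CURLOPT_COOKIEJAR is the libcurl option that writes cookies out when the handle is cleaned up, which is what lets them survive across sessions.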
2013-06-27  Merge branch 'bc/http-keep-memory-given-to-curl'  (Junio C Hamano; 1 file, -3/+9)
Older versions of cURL require that the piece of memory we call them with stay stable, but we updated the auth material after handing it to a call.

* bc/http-keep-memory-given-to-curl:
  http.c: don't rewrite the user:passwd string multiple times
2013-06-19  http.c: don't rewrite the user:passwd string multiple times  (Brandon Casey; 1 file, -3/+9)
Curl older than 7.17 (RHEL 4.X provides 7.12 and RHEL 5.X provides 7.15) requires that we manage any strings that we pass to it as pointers. So, we really shouldn't be modifying this strbuf after we have passed it to curl. Our interaction with curl is currently safe (before or after this patch) since the pointer that is passed to curl is never invalidated; it is repeatedly rewritten with the same sequence of characters but the strbuf functions never need to allocate a larger string, so the same memory buffer is reused. This "guarantee" of safety is somewhat subtle and could be overlooked by someone who may want to add a more complex handling of the username and password. So, let's stop modifying this strbuf after we have passed it to curl, but also leave a note to describe the assumptions that have been made about username/password lifetime and to draw attention to the code. Signed-off-by: Brandon Casey <drafnel@gmail.com> Acked-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
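The constraint can be captured in a short sketch: fill the "user:pass" buffer once, hand it to libcurl, and never touch it again while the handle may still read it. This is a sketch of the pattern under discussion, not the actual http.c code:

    #include <curl/curl.h>
    #include "strbuf.h"

    static struct strbuf up = STRBUF_INIT;

    static void init_curl_userpwd(CURL *curl, const char *user, const char *pass)
    {
        /*
         * libcurl older than 7.17.0 keeps the pointer we pass here, so
         * the buffer must stay valid and unmodified for the life of the
         * handle; only fill it the first time around.
         */
        if (!up.len)
            strbuf_addf(&up, "%s:%s", user, pass);
        curl_easy_setopt(curl, CURLOPT_USERPWD, up.buf);
    }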
2013-04-19  Merge branch 'mv/ssl-ftp-curl'  (Junio C Hamano; 1 file, -0/+10)
Does anybody really use commit walkers over (s)ftp?

* mv/ssl-ftp-curl:
  Support FTP-over-SSL/TLS for regular FTP
2013-04-16  http: set curl FAILONERROR each time we select a handle  (Jeff King; 1 file, -1/+1)
Because we reuse curl handles for multiple requests, the setup of a handle happens in two stages: stable, global setup and per-request setup. The lifecycle of a handle is something like:

  1. get_curl_handle; do basic global setup that will last through the whole program (e.g., setting the user agent, ssl options, etc)
  2. get_active_slot; set up a per-request baseline (e.g., clearing the read/write functions, making it a GET request, etc)
  3. perform the request with curl_*_perform functions
  4. goto step 2 to perform another request

Breaking it down this way means we can avoid doing global setup from step (1) repeatedly, but we still finish step (2) with a predictable baseline setup that callers can rely on.

Until commit 6d052d7 (http: add HTTP_KEEP_ERROR option, 2013-04-05), setting curl's FAILONERROR option was a global setup; we never changed it. However, 6d052d7 introduced an option where some requests might turn off FAILONERROR. Later requests using the same handle would have the option unexpectedly turned off, which meant they would not notice http failures at all.

This could easily be seen in the test-suite for the "half-auth" cases of t5541 and t5551. The initial requests turned off FAILONERROR, which meant it was erroneously off for the rpc POST. That worked fine for a successful request, but meant that we failed to react properly to the HTTP 401 (instead, we treated whatever the server handed us as a successful message body).

The solution is simple: now that FAILONERROR is a per-request setting, we move it to get_active_slot to make sure it is reset for each request.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
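In code terms the fix moves a single curl_easy_setopt() call from stage (1) to stage (2); a simplified sketch of the two stages, with hypothetical function names standing in for get_curl_handle and get_active_slot:

    #include <curl/curl.h>

    /* stage 1: global setup, done once per handle */
    static void global_setup(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_USERAGENT, "git/sketch");
        /* FAILONERROR no longer belongs here: it may change per request */
    }

    /* stage 2: per-request baseline, re-established before every request */
    static void per_request_setup(CURL *curl, int keep_error)
    {
        curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);
        /* reset FAILONERROR so an earlier HTTP_KEEP_ERROR request cannot
           leak its setting into this one */
        curl_easy_setopt(curl, CURLOPT_FAILONERROR, keep_error ? 0L : 1L);
    }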
2013-04-12  Support FTP-over-SSL/TLS for regular FTP  (Modestas Vainius; 1 file, -0/+10)
Add a boolean http.sslTry option which allows enabling AUTH SSL/TLS and encrypted data transfers when connecting via the regular FTP protocol. The default is false, since it might trigger certificate verification errors on misconfigured servers.

Signed-off-by: Modestas Vainius <modestas@vainius.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
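On the libcurl side this maps to the CURLOPT_USE_SSL option; a hedged sketch of applying the boolean (the version guard reflects that the CURLOPT_USE_SSL / CURLUSESSL_TRY spelling appeared in curl 7.17.0, and the variable name is illustrative):

    #include <curl/curl.h>

    static int curl_ssl_try;   /* from the http.sslTry configuration */

    static void setup_ftp_ssl(CURL *curl)
    {
    #if LIBCURL_VERSION_NUM >= 0x071100   /* 7.17.0 */
        if (curl_ssl_try)
            curl_easy_setopt(curl, CURLOPT_USE_SSL, CURLUSESSL_TRY);
    #endif
    }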
2013-04-06  http: drop http_error function  (Jeff King; 1 file, -5/+0)
This function is a single-liner and is only called from one place. Just inline it, which makes the code more obvious. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06  http: re-word http error message  (Jeff King; 1 file, -1/+1)
When we report an http error code, we say something like:

  error: The requested URL reported failure: 403 Forbidden while accessing http://example.com/repo.git

Everything between "error:" and "while" is written by curl, and the resulting sentence is hard to read (especially because there is no punctuation between curl's sentence and the remainder of ours). Instead, let's re-order this to give better flow:

  error: unable to access 'http://example.com/repo.git': The requested URL reported failure: 403 Forbidden

This is still annoyingly long, but at least reads more clearly left to right.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06  http: simplify http_error helper function  (Jeff King; 1 file, -7/+4)
This helper function should really be a one-liner that prints an error message, but it has ended up unnecessarily complicated:

  1. We call error() directly when we fail to start the curl request, so we must later avoid printing a duplicate error in http_error(). It would be much simpler in this case to just stuff the error message into our usual curl_errorstr buffer rather than printing it ourselves. This means that http_error does not even have to care about curl's exit value (the interesting part is in the errorstr buffer already).

  2. We return the "ret" value passed in to us, but none of the callers actually cares about our return value. We can just drop this entirely.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-04-06  http: add HTTP_KEEP_ERROR option  (Jeff King; 1 file, -0/+37)
We currently set curl's FAILONERROR option, which means that any http failures are reported as curl errors, and the http body content from the server is thrown away. This patch introduces a new option to http_get_strbuf which specifies that the body content from a failed http response should be placed in the destination strbuf, where it can be accessed by the caller. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20  pkt-line: move LARGE_PACKET_MAX definition from sideband  (Jeff King; 1 file, -0/+1)
Having the packet sizes defined near the packet read/write functions makes more sense. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-06  http_request: reset "type" strbuf before adding  (Jeff King; 1 file, -0/+1)
Callers may pass us a strbuf which we use to record the content-type of the response. However, we simply appended to it rather than overwriting its contents, meaning that cruft in the strbuf gave us a bogus type. E.g., the multiple requests triggered by http_request could yield a type like "text/plainapplication/x-git-receive-pack-advertisement". Reported-by: Michael Schubert <mschub@elegosoft.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
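The fix amounts to one strbuf_reset() before recording the type; a small sketch of the idea using libcurl's CURLINFO_CONTENT_TYPE (the surrounding helper is hypothetical):

    #include <curl/curl.h>
    #include "strbuf.h"

    static void record_content_type(CURL *curl, struct strbuf *type)
    {
        char *t;

        if (curl_easy_getinfo(curl, CURLINFO_CONTENT_TYPE, &t) != CURLE_OK || !t)
            return;
        strbuf_reset(type);   /* drop whatever an earlier request left behind */
        strbuf_addstr(type, t);
    }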
2013-02-04  Verify Content-Type from smart HTTP servers  (Shawn Pearce; 1 file, -9/+22)
Before parsing a suspected smart-HTTP response, verify the returned Content-Type matches the standard. This protects a client from attempting to process a payload that smells like a smart-HTTP server response but is not. JGit has been doing this check on all responses since the dawn of time. I mistakenly failed to include it in git-core when smart HTTP was introduced. At the time I didn't know how to get the Content-Type from libcurl. I punted, meant to circle back and fix this, and just plain forgot about it.

Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
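Conceptually the check compares the received Content-Type against the type the requested service advertises; a minimal sketch (the helper name and error wording are illustrative, the application/x-git-<service>-advertisement pattern is the documented smart-HTTP one):

    #include "cache.h"
    #include "strbuf.h"

    /* service is e.g. "upload-pack" or "receive-pack" */
    static void check_smart_http_type(const char *service, const struct strbuf *type)
    {
        struct strbuf expected = STRBUF_INIT;

        strbuf_addf(&expected, "application/x-git-%s-advertisement", service);
        if (strcmp(expected.buf, type->buf))
            die("invalid content-type %s; expected %s", type->buf, expected.buf);
        strbuf_release(&expected);
    }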
2013-01-10  Merge branch 'rb/http-cert-cred-no-username-prompt' into maint  (Junio C Hamano; 1 file, -0/+1)
* rb/http-cert-cred-no-username-prompt:
  http.c: Avoid username prompt for certificate credentials
2012-12-21  http.c: Avoid username prompt for certificate credentials  (Rene Bredlau; 1 file, -0/+1)
If sslCertPasswordProtected is set to true, do not ask for a username to decrypt the RSA key. This question is pointless; the key is only protected by a password. Internally the username is simply set to "".

Signed-off-by: Rene Bredlau <git@unrelated.de>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-11-09  Merge branch 'sz/maint-curl-multi-timeout'  (Jeff King; 1 file, -0/+12)
Sometimes the curl_multi_timeout() function suggested a wrong timeout value when there were no file descriptors to wait on, and the http transport ended up sleeping for minutes in the select(2) system call. Detect this and reduce the wait timeout in such a case.

* sz/maint-curl-multi-timeout:
  Fix potential hang in https handshake
2012-10-29  Merge branch 'jk/maint-http-init-not-in-result-handler'  (Jeff King; 1 file, -4/+2)
Further clean-up to the http codepath that picks up results after the cURL library is done with one request slot.

* jk/maint-http-init-not-in-result-handler:
  http: do not set up curl auth after a 401
  remote-curl: do not call run_slot repeatedly
2012-10-19  Fix potential hang in https handshake  (Stefan Zager; 1 file, -0/+12)
It has been observed that curl_multi_timeout may return a very long timeout value (e.g., 294 seconds and some usec) just before curl_multi_fdset returns no file descriptors for reading. The upshot is that select() will hang for a long time -- long enough for an https handshake to be dropped. The observed behavior is that the git command will hang at the terminal and never transfer any data.

This patch is a workaround for a probable bug in libcurl. The bug only seems to manifest around a very specific set of circumstances:

  - curl version (from curl/curlver.h): #define LIBCURL_VERSION_NUM 0x071307
  - git-remote-https running on an ubuntu-lucid VM.
  - Connecting through a squid proxy running on another VM.

Interestingly, the problem doesn't manifest if a host connects through a squid proxy running on localhost; only if the proxy is on a separate VM (not sure if the squid host needs to be on a separate physical machine). That would seem to suggest that this issue is timing-sensitive.

This patch is more or less in line with a recommendation in the curl docs about how to behave when curl_multi_fdset doesn't return any file descriptors: http://curl.haxx.se/libcurl/c/curl_multi_fdset.html

Signed-off-by: Stefan Zager <szager@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
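The workaround boils down to clamping the select() timeout whenever curl_multi_fdset() reports no descriptors to wait on; a self-contained sketch of that logic (the 50ms floor follows the advice in the curl documentation linked above, the rest is illustrative):

    #include <sys/select.h>
    #include <curl/curl.h>

    static void wait_for_curl(CURLM *curlm)
    {
        fd_set readfds, writefds, excfds;
        int maxfd = -1;
        long timeout_ms = -1;
        struct timeval tv;

        FD_ZERO(&readfds);
        FD_ZERO(&writefds);
        FD_ZERO(&excfds);

        curl_multi_timeout(curlm, &timeout_ms);
        curl_multi_fdset(curlm, &readfds, &writefds, &excfds, &maxfd);

        /*
         * If curl gives us nothing to select on, or no usable timeout,
         * do not trust a huge suggested value; poll again shortly.
         */
        if (maxfd < 0 || timeout_ms < 0)
            timeout_ms = 50;

        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;
        select(maxfd + 1, &readfds, &writefds, &excfds, &tv);
    }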
2012-10-17  Merge branch 'jk/maint-http-half-auth-push' into maint  (Junio C Hamano; 1 file, -4/+3)
* jk/maint-http-half-auth-push:
  http: fix segfault in handle_curl_result
2012-10-16  Merge branch 'jk/maint-http-half-auth-push'  (Junio C Hamano; 1 file, -4/+3)
Fixes a regression in maint-1.7.11 (v1.7.11.7), maint (v1.7.12.1) and master (v1.8.0-rc0).

* jk/maint-http-half-auth-push:
  http: fix segfault in handle_curl_result
2012-10-12  http: do not set up curl auth after a 401  (Jeff King; 1 file, -4/+2)
When we get an http 401, we prompt for credentials and put them in our global credential struct. We also feed them to the curl handle that produced the 401, with the intent that they will be used on a retry. When the code was originally introduced in commit 42653c0, this was a necessary step. However, since dfa1725, we always feed our global credential into every curl handle when we initialize the slot with get_active_slot. So every further request already feeds the credential to curl. Moreover, accessing the slot here is somewhat dubious. After the slot has produced a response, we don't actually control it any more. If we are using curl_multi, it may even have been re-initialized to handle a different request. It just so happens that we will reuse the curl handle within the slot in such a case, and that because we only keep one global credential, it will be the one we want. So the current code is not buggy, but it is misleading. By cleaning it up, we can remove the slot argument entirely from handle_curl_result, making it much more obvious that slots should not be accessed after they are marked as finished. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-10-12  http: fix segfault in handle_curl_result  (Jeff King; 1 file, -4/+3)
When we create an http active_request_slot, we can set its "results" pointer back to local storage. The http code will fill in the details of how the request went, and we can access those details even after the slot has been cleaned up. Commit 8809703 (http: factor out http error code handling) switched us from accessing our local results struct directly to accessing it via the "results" pointer of the slot. That means we're accessing the slot after it has been marked as finished, defeating the whole purpose of keeping the results storage separate. Most of the time this doesn't matter, as finishing the slot does not actually clean up the pointer. However, when using curl's multi interface with the dumb-http revision walker, we might actually start a new request before handing control back to the original caller. In that case, we may reuse the slot, zeroing its results pointer, and leading the original caller to segfault while looking for its results inside the slot. Instead, we need to pass a pointer to our local results storage to the handle_curl_result function, rather than relying on the pointer in the slot struct. This matches what the original code did before the refactoring (which did not use a separate function, and therefore just accessed the results struct directly). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-09-20  Enable info/refs gzip decompression in HTTP client  (Shawn O. Pearce; 1 file, -0/+1)
Some HTTP servers try to use gzip compression on the /info/refs request to save transfer bandwidth. Repositories with many tags may find the /info/refs request can be gzipped to be 50% of the original size due to the few but often repeated bytes used (hex SHA-1 and commonly digits in tag names). For most HTTP requests enable "Accept-Encoding: gzip" ensuring the /info/refs payload can use this encoding format. Only request gzip encoding from servers. Although deflate is supported by libcurl, most servers have standardized on gzip encoding for compression as that is what most browsers support. Asking for deflate increases request sizes by a few bytes, but is unlikely to ever be used by a server. Disable the Accept-Encoding header on probe RPCs as response bodies are supposed to be exactly 4 bytes long, "0000". The HTTP headers requesting and indicating compression use more space than the data transferred in the body. Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
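In libcurl terms this is a single option on the handle; a sketch using the CURLOPT_ENCODING spelling that was current at the time (newer libcurl calls the same option CURLOPT_ACCEPT_ENCODING):

    #include <curl/curl.h>

    static void setup_accept_encoding(CURL *curl, int is_probe_rpc)
    {
        /*
         * Ask only for gzip. The probe RPC's 4-byte "0000" body is not
         * worth the extra request and response headers, so send no
         * Accept-Encoding header there (NULL disables it).
         */
        curl_easy_setopt(curl, CURLOPT_ENCODING, is_probe_rpc ? NULL : "gzip");
    }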
2012-09-12  Merge branch 'maint-1.7.11' into maint  (Junio C Hamano; 1 file, -25/+30)
2012-09-12  Merge branch 'jk/maint-http-half-auth-push' into maint-1.7.11  (Junio C Hamano; 1 file, -23/+28)
Pushing to a smart HTTP server with recent Git fails without having the username in the URL to force authentication, if the server is configured to allow GET anonymously while requiring authentication for POST.

* jk/maint-http-half-auth-push:
  http: prompt for credentials on failed POST
  http: factor out http error code handling
  t: test http access to "half-auth" repositories
  t: test basic smart-http authentication
  t/lib-httpd: recognize */smart/* repos as smart-http
  t/lib-httpd: only route auth/dumb to dumb repos
  t5550: factor out http auth setup
  t5550: put auth-required repo in auth/dumb
2012-08-27  http: factor out http error code handling  (Jeff King; 1 file, -23/+28)
Most of our http requests go through the http_request() interface, which does some nice post-processing on the results. In particular, it handles prompting for missing credentials as well as approving and rejecting valid or invalid credentials. Unfortunately, it only handles GET requests. Making it handle POSTs would be quite complex, so let's pull result handling code into its own function so that it can be reused from the POST code paths. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-23  http.c: don't use curl_easy_strerror prior to curl-7.12.0  (Joachim Schmitz; 1 file, -0/+2)
Reverts be22d92 (http: avoid empty error messages for some curl errors, 2011-09-05) on platforms with older versions of libcURL where the function is not available. Signed-off-by: Joachim Schmitz <jojo@schmitz-digital.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
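The guard this implies looks roughly like the following; 0x070c00 is 7.12.0 in libcurl's version encoding, and the fallback message is illustrative:

    #include <stdio.h>
    #include <curl/curl.h>

    static const char *curl_error_text(CURLcode result)
    {
    #if LIBCURL_VERSION_NUM >= 0x070c00
        return curl_easy_strerror(result);
    #else
        /* pre-7.12.0 libcurl has no curl_easy_strerror() */
        static char buf[64];
        snprintf(buf, sizeof(buf), "curl error %d", (int)result);
        return buf;
    #endif
    }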
2012-06-03  http: get default user-agent from git_user_agent  (Jeff King; 1 file, -1/+2)
This means we will respect the GIT_USER_AGENT build-time configuration and run-time environment variable. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
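The change is essentially one setopt call; assuming git_user_agent() is visible via version.h, a sketch looks like:

    #include <curl/curl.h>
    #include "version.h"   /* git_user_agent() */

    static void setup_user_agent(CURL *curl)
    {
        /* honours the GIT_USER_AGENT build-time and run-time settings */
        curl_easy_setopt(curl, CURLOPT_USERAGENT, git_user_agent());
    }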
2012-04-30  remove superfluous newlines in error messages  (Pete Wyckoff; 1 file, -1/+1)
The error handling routines add a newline. Remove the duplicate ones in error messages. Signed-off-by: Pete Wyckoff <pw@padd.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-14  http: use newer curl options for setting credentials  (Jeff King; 1 file, -2/+11)
We give the username and password to curl by sticking them in a buffer of the form "user:pass" and handing the result to CURLOPT_USERPWD. Since curl 7.19.1, there is a split mechanism, where you can specify each element individually. This has the advantage that a username can contain a ":" character. It also is less code for us, since we can hand our strings over to curl directly. And since curl 7.17.0 and higher promise to copy the strings for us, we don't even have to worry about memory ownership issues.

Unfortunately, we have to keep the ugly code for old curl around, but as it is now nicely #if'd out, we can easily get rid of it when we decide that 7.19.1 is "old enough".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
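A sketch of the resulting split, with the version boundary from the commit (7.19.1 is 0x071301 in libcurl's encoding); the legacy branch keeps the old single-buffer approach:

    #include <curl/curl.h>
    #include "strbuf.h"

    static void set_curl_credentials(CURL *curl, const char *user, const char *pass)
    {
    #if LIBCURL_VERSION_NUM >= 0x071301
        /* curl 7.19.1 and newer copy these strings for us */
        curl_easy_setopt(curl, CURLOPT_USERNAME, user);
        curl_easy_setopt(curl, CURLOPT_PASSWORD, pass);
    #else
        /* older curl: fall back to a "user:pass" buffer we must keep alive
           (see the 2013-06-19 entry above for the care this needs) */
        static struct strbuf up = STRBUF_INIT;
        strbuf_reset(&up);
        strbuf_addf(&up, "%s:%s", user, pass);
        curl_easy_setopt(curl, CURLOPT_USERPWD, up.buf);
    #endif
    }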
2012-04-14  http: clean up leak in init_curl_http_auth  (Jeff King; 1 file, -3/+3)
When we have a credential to give to curl, we must copy it into a "user:pass" buffer and then hand the buffer to curl. Old versions of curl did not copy the buffer, and we were expected to keep it valid. Newer versions of curl will copy the buffer. Our solution was to use a strbuf and detach it, giving ownership of the resulting buffer to curl. However, this meant that we were leaking the buffer on newer versions of curl, since curl was just copying it and throwing away the string we passed. Furthermore, when we replaced a credential (e.g., because our original one was rejected), we were also leaking on both old and new versions of curl. This got even worse in the last patch, which started replacing the credential (and thus leaking) on every http request. Instead, let's use a static buffer to make the ownership more clear and less leaky. We already keep a static "struct credential", so we are only handling a single credential at a time, anyway. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-10  fix http auth with multiple curl handles  (Jeff King; 1 file, -0/+2)
HTTP authentication is currently handled by get_refs and fetch_ref, but not by fetch_object, fetch_pack or fetch_alternates. In the single-threaded case, this is not an issue, since get_refs is always called first. It recognizes the 401 and prompts the user for credentials, which will then be used subsequently.

If the curl multi interface is used, however, only the multi handle used by get_refs will have credentials configured. Requests made by other handles fail with an authentication error. Fix this by setting CURLOPT_USERPWD whenever a slot is requested.

Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-03-28  correct spelling: an URL -> a URL  (Jim Meyering; 1 file, -1/+1)
Signed-off-by: Jim Meyering <meyering@redhat.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-03-02  http: support proxies that require authentication  (Nelson Benitez Leon; 1 file, -1/+3)
When the proxy server specified by the http.proxy configuration or the http_proxy environment variable requires authentication, git failed to connect to the proxy, because we did not configure the cURL handle with CURLOPT_PROXYAUTH.

When a proxy is in use, and you tell git that the proxy requires authentication by having a username in the http.proxy configuration, an extra request needs to be made to the proxy to find out what authentication method it supports, as this patch uses CURLAUTH_ANY to let the library pick the most secure method supported by the proxy server. The extra round-trip adds extra latency, but relieves the user of the burden of configuring a specific authentication method. If it becomes a problem, a later patch could add a configuration option to specify what method to use, but let's start simple for the time being.

Signed-off-by: Nelson Benitez Leon <nbenitezl@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
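The curl side of this amounts to two options on the handle; a minimal sketch (the proxy URL comes from http.proxy or http_proxy and may carry user@host, which libcurl parses):

    #include <curl/curl.h>

    static void setup_proxy(CURL *curl, const char *proxy_url)
    {
        if (!proxy_url || !*proxy_url)
            return;
        curl_easy_setopt(curl, CURLOPT_PROXY, proxy_url);
        /* let libcurl negotiate the strongest scheme the proxy offers */
        curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
    }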
2011-12-19  Merge branch 'jk/maint-push-over-dav'  (Junio C Hamano; 1 file, -1/+7)
* jk/maint-push-over-dav:
  http-push: enable "proactive auth"
  t5540: test DAV push with authentication

Conflicts:
  http.c
2011-12-19  Merge branch 'jk/credentials'  (Junio C Hamano; 1 file, -88/+25)
* jk/credentials:
  t: add test harness for external credential helpers
  credentials: add "store" helper
  strbuf: add strbuf_add*_urlencode
  Makefile: unix sockets may not available on some platforms
  credentials: add "cache" helper
  docs: end-user documentation for the credential subsystem
  credential: make relevance of http path configurable
  credential: add credential.*.username
  credential: apply helper config
  http: use credential API to get passwords
  credential: add function for parsing url components
  introduce credentials API
  t5550: fix typo
  test-lib: add test_config_global variant

Conflicts:
  strbuf.c
2011-12-13  http-push: enable "proactive auth"  (Jeff King; 1 file, -1/+7)
Before commit 986bbc08, git was proactive about asking for http passwords. It assumed that if you had a username in your URL, you would also want a password, and asked for it before making any http requests.

However, this could interfere with the use of .netrc (see 986bbc08 for details). And it was also unnecessary, since the http fetching code had learned to recognize an HTTP 401 and prompt the user then.

Unfortunately, the http push-over-DAV code never learned to recognize HTTP 401, and so was broken by this change. This patch does a quick fix of re-enabling the "proactive auth" strategy only for http-push, leaving the dumb http fetch and smart-http as-is.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-12-11  http: use credential API to get passwords  (Jeff King; 1 file, -88/+25)
This patch converts the http code to use the new credential API, both for http authentication as well as for getting certificate passwords. Most of the code change is simply variable naming (the passwords are now contained inside the credential struct) or deletion of obsolete code (the credential code handles URL parsing and prompting for us).

The behavior should be the same, with one exception: the credential code will prompt with a description based on the credential components. Therefore, the old prompt of:

  Username for 'example.com':
  Password for 'example.com':

now looks like:

  Username for 'https://example.com/repo.git':
  Password for 'https://user@example.com/repo.git':

Note that we include more information in each line, specifically:

  1. We now include the protocol. While more noisy, this is an important part of knowing what you are accessing (especially if you care about http vs https).

  2. We include the username in the password prompt. This is not a big deal when you have just been prompted for it, but the username may also come from the remote's URL (and after future patches, from configuration or credential helpers). In that case, it's a nice reminder of the user for which you're giving the password.

  3. We include the path component of the URL. In many cases, the user won't care about this and it's simply noise (i.e., they'll use the same credential for a whole site). However, that is part of a larger question, which is whether path components should be part of credential context, both for prompting and for lookup by storage helpers. That issue will be addressed as a whole in a future patch.

Similarly, for unlocking certificates, we used to say:

  Certificate Password for 'example.com':

and we now say:

  Password for 'cert:///path/to/certificate':

Showing the path to the client certificate makes more sense, as that is what you are unlocking, not "example.com".

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
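For readers new to the API, the consumer side has roughly this shape; struct credential, CREDENTIAL_INIT, credential_from_url() and credential_fill() are the interfaces this series introduces, while the wrapper function itself is a hypothetical sketch:

    #include "credential.h"

    static struct credential http_auth = CREDENTIAL_INIT;

    static void fill_http_auth(const char *url)
    {
        credential_from_url(&http_auth, url);  /* protocol, host, path, user */
        credential_fill(&http_auth);           /* config, helpers, or prompt */
        /* http_auth.username / http_auth.password are now ready for curl */
    }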
2011-12-05  Merge branch 'mf/curl-select-fdset'  (Junio C Hamano; 1 file, -25/+21)
* mf/curl-select-fdset:
  http: drop "local" member from request struct
  http.c: Rely on select instead of tracking whether data was received
  http.c: Use timeout suggested by curl instead of fixed 50ms timeout
  http.c: Use curl_multi_fdset to select on curl fds instead of just sleeping
2011-11-15  http: remove unused function hex()  (Ramkumar Ramachandra; 1 file, -8/+0)
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04  http: drop "local" member from request struct  (Jeff King; 1 file, -6/+0)
This is a FILE pointer in the case that we are sending our output to a file. We originally used it to run ftell() to determine whether data had been written to our file during our last call to curl. However, as of the last patch, we no longer care about that flag anymore. All uses of this struct member are now just book-keeping that can go away. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04  http.c: Rely on select instead of tracking whether data was received  (Mika Fischer; 1 file, -15/+1)
Now that select() is used with the file descriptors of the http connections, tracking whether data was received recently (and trying to read more in that case) is no longer necessary. Instead, always call select and rely on it to return as soon as new data can be read.

Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04  http.c: Use timeout suggested by curl instead of fixed 50ms timeout  (Mika Fischer; 1 file, -2/+18)
Recent versions of curl can suggest a period of time the library user should sleep and try again, when curl is blocked on reading or writing (or connecting). Use this timeout instead of always sleeping for 50ms. Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de> Helped-by: Daniel Stenberg <daniel@haxx.se> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04  http.c: Use curl_multi_fdset to select on curl fds instead of just sleeping  (Mika Fischer; 1 file, -3/+3)
Instead of sleeping unconditionally for 50ms when no data can be read from the http connection(s), use curl_multi_fdset() to obtain the actual file descriptors of the open connections and use them in the select call. This way, the 50ms sleep is interrupted when new data arrives.

Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Daniel Stenberg <daniel@haxx.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-11-04  http: don't always prompt for password  (Stefan Naewe; 1 file, -4/+3)
When a username is already specified at the beginning of any HTTP transaction (e.g. "git push https://user@hosting.example.com/project.git" or "git ls-remote https://user@hosting.example.com/project.git"), the code interactively asks for a password before calling into the libcurl library. It is very likely that the reason the user included the username in the URL is that the user knows it would require authentication to access the resource. Asking for the password upfront would save one roundtrip to get a 401 response, getting the password and then retrying the request. This is a reasonable optimization.

HOWEVER. This is done even when $HOME/.netrc might have a corresponding entry to access the site, or the site does not require authentication to access the resource after all. But neither condition can be determined until we call into the libcurl library (we do not read and parse $HOME/.netrc ourselves). In these cases, the user is forced to respond to the password prompt, only to give a password that is not used in the HTTP transaction. If the password is in $HOME/.netrc, an empty input would later let the libcurl layer pick up the password from there, and if the resource does not require authentication, any input would be taken and then discarded without being used. It is wasteful to ask the end user for this unused information.

Reduce the confusion by not trying to optimize for this case and always incurring the roundtrip penalty. An alternative might be to document this and keep this round-trip optimization as-is.

Signed-off-by: Stefan Naewe <stefan.naewe@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-10-17  Merge branch 'jk/http-auth'  (Junio C Hamano; 1 file, -40/+53)
* jk/http-auth:
  http_init: accept separate URL parameter
  http: use hostname in credential description
  http: retry authentication failures for all http requests
  remote-curl: don't retry auth failures with dumb protocol
  improve httpd auth tests
  url: decode buffers that are not NUL-terminated
2011-10-15  http_init: accept separate URL parameter  (Jeff King; 1 file, -4/+4)
The http_init function takes a "struct remote". Part of its initialization procedure is to look at the remote's url and grab some auth-related parameters. However, using the url included in the remote is:

  - wrong; the remote-curl helper may have a separate, unrelated URL (e.g., from remote.*.pushurl). Looking at the remote's configured url is incorrect.
  - incomplete; http-fetch doesn't have a remote, so passes NULL. So http_init never gets to see the URL we are actually going to use.
  - cumbersome; http-push has a similar problem to http-fetch, but actually builds a fake remote just to pass in the URL.

Instead, let's just add a separate URL parameter to http_init, and all three callsites can pass in the appropriate information.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>