|
We give the username and password to curl by sticking them
in a buffer of the form "user:pass" and handing the result
to CURLOPT_USERPWD. Since curl 7.19.1, there is a split
mechanism, where you can specify each element individually.
This has the advantage that a username can contain a ":"
character. It also is less code for us, since we can hand
our strings over to curl directly. And since curl 7.17.0 and
higher promise to copy the strings for us, we don't even
have to worry about memory ownership issues.
Unfortunately, we have to keep the ugly code for old curl
around, but as it is now nicely #if'd out, we can easily get
rid of it when we decide that 7.19.1 is "old enough".
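For illustration, a rough sketch of the two code paths (hypothetical helper;
git's real code builds the "user:pass" string with a strbuf rather than
snprintf):

    #include <curl/curl.h>
    #include <stdio.h>

    static void set_auth_sketch(CURL *curl, const char *user, const char *pass)
    {
    #if LIBCURL_VERSION_NUM >= 0x071301
        /* curl >= 7.19.1: hand over each element; curl copies the strings */
        curl_easy_setopt(curl, CURLOPT_USERNAME, user);
        curl_easy_setopt(curl, CURLOPT_PASSWORD, pass);
    #else
        /* older curl: build "user:pass" ourselves and keep it valid */
        static char buf[2048];
        snprintf(buf, sizeof(buf), "%s:%s", user, pass);
        curl_easy_setopt(curl, CURLOPT_USERPWD, buf);
    #endif
    }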
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When we have a credential to give to curl, we must copy it
into a "user:pass" buffer and then hand the buffer to curl.
Old versions of curl did not copy the buffer, and we were
expected to keep it valid. Newer versions of curl will copy
the buffer.
Our solution was to use a strbuf and detach it, giving
ownership of the resulting buffer to curl. However, this
meant that we were leaking the buffer on newer versions of
curl, since curl was just copying it and throwing away the
string we passed. Furthermore, when we replaced a
credential (e.g., because our original one was rejected), we
were also leaking on both old and new versions of curl.
This got even worse in the last patch, which started
replacing the credential (and thus leaking) on every http
request.
Instead, let's use a static buffer to make the ownership
clearer and less leaky. We already keep a static "struct
credential", so we are only handling a single credential at
a time, anyway.
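A hedged sketch of the idea (plain snprintf stands in for git's strbuf
handling):

    #include <curl/curl.h>
    #include <stdio.h>

    static void set_userpwd_sketch(CURL *curl, const char *user, const char *pass)
    {
        static char up[2048];   /* lives forever; never handed to free() */

        snprintf(up, sizeof(up), "%s:%s", user, pass);
        /* Old curl keeps the pointer (the buffer stays valid); new curl
         * copies it (nothing leaks). Replacing the credential simply
         * overwrites the same buffer on the next call. */
        curl_easy_setopt(curl, CURLOPT_USERPWD, up);
    }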
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
HTTP authentication is currently handled by get_refs and fetch_ref, but
not by fetch_object, fetch_pack or fetch_alternates. In the
single-threaded case, this is not an issue, since get_refs is always
called first. It recognizes the 401 and prompts the user for
credentials, which are then used for subsequent requests.
If the curl multi interface is used, however, only the multi handle used
by get_refs will have credentials configured. Requests made by other
handles fail with an authentication error.
Fix this by setting CURLOPT_USERPWD whenever a slot is requested.
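A sketch of the fix, assuming a helper called from the slot-request path
(the struct and variable names are stand-ins for http.c internals):

    #include <curl/curl.h>

    struct active_request_slot { CURL *curl; };     /* stand-in */

    static const char *http_userpwd;    /* "user:pass", set once credentials are known */

    static void init_slot_auth(struct active_request_slot *slot)
    {
        /* run for every slot handed out, so handles used by the curl multi
         * interface get the credentials too, not just the one used by get_refs */
        if (http_userpwd)
            curl_easy_setopt(slot->curl, CURLOPT_USERPWD, http_userpwd);
    }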
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Jim Meyering <meyering@redhat.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When the proxy server specified by the http.proxy configuration or the
http_proxy environment variable requires authentication, git failed to
connect to the proxy, because we did not configure the cURL handle with
CURLOPT_PROXYAUTH.
When a proxy is in use, and you tell git that the proxy requires
authentication by putting a username in the http.proxy configuration, an
extra request needs to be made to the proxy to find out what
authentication method it supports, as this patch uses CURLAUTH_ANY to let
the library pick the most secure method supported by the proxy server.
The extra round-trip adds extra latency, but relieves the user from the
burden to configure a specific authentication method. If it becomes
a problem, a later patch could add a configuration option to specify what
method to use, but let's start simple for the time being.
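Roughly, the relevant setup looks like this (sketch only; the proxy URL would
come from http.proxy or $http_proxy):

    #include <curl/curl.h>

    static void init_proxy_sketch(CURL *curl, const char *proxy_url)
    {
        curl_easy_setopt(curl, CURLOPT_PROXY, proxy_url);
        /* let libcurl negotiate the most secure method the proxy offers;
         * this costs one extra round trip but needs no configuration */
        curl_easy_setopt(curl, CURLOPT_PROXYAUTH, (long)CURLAUTH_ANY);
    }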
Signed-off-by: Nelson Benitez Leon <nbenitezl@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* jk/maint-push-over-dav:
http-push: enable "proactive auth"
t5540: test DAV push with authentication
Conflicts:
http.c
|
|
* jk/credentials:
t: add test harness for external credential helpers
credentials: add "store" helper
strbuf: add strbuf_add*_urlencode
Makefile: unix sockets may not available on some platforms
credentials: add "cache" helper
docs: end-user documentation for the credential subsystem
credential: make relevance of http path configurable
credential: add credential.*.username
credential: apply helper config
http: use credential API to get passwords
credential: add function for parsing url components
introduce credentials API
t5550: fix typo
test-lib: add test_config_global variant
Conflicts:
strbuf.c
|
|
Before commit 986bbc08, git was proactive about asking for
http passwords. It assumed that if you had a username in
your URL, you would also want a password, and asked for it
before making any http requests.
However, this could interfere with the use of .netrc (see
986bbc08 for details). And it was also unnecessary, since
the http fetching code had learned to recognize an HTTP 401
and prompt the user then.
Unfortunately, the http push-over-DAV code never learned to
recognize HTTP 401, and so was broken by this change. This
patch does a quick fix of re-enabling the "proactive auth"
strategy only for http-push, leaving the dumb http fetch and
smart-http as-is.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This patch converts the http code to use the new credential
API, both for http authentication as well as for getting
certificate passwords.
Most of the code change is simply variable naming (the
passwords are now contained inside the credential struct)
or deletion of obsolete code (the credential code handles
URL parsing and prompting for us).
The behavior should be the same, with one exception: the
credential code will prompt with a description based on the
credential components. Therefore, the old prompt of:
    Username for 'example.com':
    Password for 'example.com':
now looks like:
    Username for 'https://example.com/repo.git':
    Password for 'https://user@example.com/repo.git':
Note that we include more information in each line,
specifically:
1. We now include the protocol. While more noisy, this is
an important part of knowing what you are accessing
(especially if you care about http vs https).
2. We include the username in the password prompt. This is
not a big deal when you have just been prompted for it,
but the username may also come from the remote's URL
(and after future patches, from configuration or
credential helpers). In that case, it's a nice
reminder of the user for which you're giving the
password.
3. We include the path component of the URL. In many
cases, the user won't care about this and it's simply
noise (i.e., they'll use the same credential for a
whole site). However, that is part of a larger
question, which is whether path components should be
part of credential context, both for prompting and for
lookup by storage helpers. That issue will be addressed
as a whole in a future patch.
Similarly, for unlocking certificates, we used to say:
    Certificate Password for 'example.com':
and we now say:
    Password for 'cert:///path/to/certificate':
Showing the path to the client certificate makes more sense,
as that is what you are unlocking, not "example.com".
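For reference, a hedged sketch of how a caller drives the credential API
described above (names follow git's credential.h of the time; exact
signatures may differ between versions):

    #include "credential.h"     /* git-internal header; sketch only */

    static void https_auth_sketch(const char *host, const char *path)
    {
        struct credential c = CREDENTIAL_INIT;

        c.protocol = xstrdup("https");
        c.host = xstrdup(host);
        c.path = xstrdup(path);

        credential_fill(&c);        /* consults config/helpers, prompts if needed */

        /* ... hand c.username / c.password to curl ... */

        credential_approve(&c);     /* or credential_reject(&c) after a 401 */
        credential_clear(&c);
    }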
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* mf/curl-select-fdset:
http: drop "local" member from request struct
http.c: Rely on select instead of tracking whether data was received
http.c: Use timeout suggested by curl instead of fixed 50ms timeout
http.c: Use curl_multi_fdset to select on curl fds instead of just sleeping
|
|
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This is a FILE pointer in the case that we are sending our
output to a file. We originally used it to run ftell() to
determine whether data had been written to our file during
our last call to curl. However, as of the previous patch, we no
longer care about that flag. All uses of this struct
member are now just book-keeping that can go away.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Now that select is used with the file descriptors of the http connections,
tracking whether data was received recently (and trying to read more in
that case) is no longer necessary. Instead, always call select and rely on
it to return as soon as new data can be read.
Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Recent versions of curl can suggest a period of time the library user
should sleep before trying again when curl is blocked on reading or writing
(or connecting). Use this timeout instead of always sleeping for 50ms.
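A sketch of using the suggested timeout, with the old 50ms value kept as a
fallback when curl has no suggestion:

    #include <curl/curl.h>
    #include <sys/select.h>

    static struct timeval pick_timeout(CURLM *multi)
    {
        long ms = -1;
        struct timeval tv = { 0, 50000 };   /* previous fixed 50ms */

        if (curl_multi_timeout(multi, &ms) == CURLM_OK && ms >= 0) {
            tv.tv_sec = ms / 1000;
            tv.tv_usec = (ms % 1000) * 1000;
        }
        return tv;
    }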
Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Daniel Stenberg <daniel@haxx.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Instead of sleeping unconditionally for 50ms when no data can be read
from the http connection(s), use curl_multi_fdset() to obtain the actual
file descriptors of the open connections and use them in the select call.
This way, the 50ms sleep is interrupted when new data arrives.
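The waiting step then looks roughly like this (sketch; error handling
omitted):

    #include <curl/curl.h>
    #include <sys/select.h>

    static void wait_for_activity(CURLM *multi, struct timeval timeout)
    {
        fd_set rfds, wfds, efds;
        int maxfd = -1;

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_ZERO(&efds);
        curl_multi_fdset(multi, &rfds, &wfds, &efds, &maxfd);

        if (maxfd >= 0)
            select(maxfd + 1, &rfds, &wfds, &efds, &timeout);
        /* maxfd == -1 means curl has nothing to watch yet; fall back to a
         * short sleep (or the curl_multi_timeout suggestion) in that case */
    }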
Signed-off-by: Mika Fischer <mika.fischer@zoopnet.de>
Helped-by: Daniel Stenberg <daniel@haxx.se>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When a username is already specified at the beginning of any HTTP
transaction (e.g. "git push https://user@hosting.example.com/project.git"
or "git ls-remote https://user@hosting.example.com/project.git"), the code
interactively asks for a password before calling into the libcurl library.
It is very likely that the user included the username in the
URL because they know that accessing the resource requires
authentication. Asking for the password upfront saves a round trip:
otherwise the first request would draw a 401, and only then would we
prompt for the password and retry. This is a reasonable optimization.
HOWEVER.
This is done even when $HOME/.netrc might have a corresponding entry to
access the site, or the site does not require authentication to access the
resource after all. But neither condition can be determined until we call
into the libcurl library (we do not read and parse $HOME/.netrc ourselves). In
these cases, the user is forced to respond to the password prompt, only to
give a password that is not used in the HTTP transaction. If the password
is in $HOME/.netrc, an empty input would later let the libcurl layer
pick up the password from there, and if the resource does not require
authentication, any input would be taken and then discarded without
being used. It is wasteful to ask the end user for this unused
information.
Reduce the confusion by not trying to optimize for this case, always
incurring the round-trip penalty. An alternative would be to document this
behavior and keep the round-trip optimization as-is.
Signed-off-by: Stefan Naewe <stefan.naewe@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* jk/http-auth:
http_init: accept separate URL parameter
http: use hostname in credential description
http: retry authentication failures for all http requests
remote-curl: don't retry auth failures with dumb protocol
improve httpd auth tests
url: decode buffers that are not NUL-terminated
|
|
The http_init function takes a "struct remote". Part of its
initialization procedure is to look at the remote's url and
grab some auth-related parameters. However, using the url
included in the remote is:
- wrong; the remote-curl helper may have a separate,
unrelated URL (e.g., from remote.*.pushurl), so the
remote's configured url may not be the one in use.
- incomplete; http-fetch doesn't have a remote, so it passes
NULL, and http_init never gets to see the URL we are
actually going to use.
- cumbersome; http-push has a similar problem to
http-fetch, but actually builds a fake remote just to
pass in the URL.
Instead, let's just add a separate URL parameter to
http_init, and all three callsites can pass in the
appropriate information.
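In other words (a sketch of the interface change; the exact prototype is
whatever http.h carries after this patch):

    struct remote;  /* opaque here; git's remote configuration object */

    /* The URL is now passed explicitly: remote may be NULL (http-fetch) or
     * may be configured with a different URL than the one in use (pushurl). */
    void http_init(struct remote *remote, const char *url);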
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Until now, a request for an http password looked like:
    Username:
    Password:
Now it will look like:
    Username for 'example.com':
    Password for 'example.com':
Picked-from: Jeff King <peff@peff.net>
Signed-off-by: Michael J Gruber <git@drmicha.warpmail.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* jn/maint-http-error-message:
http: avoid empty error messages for some curl errors
http: remove extra newline in error message
|
|
When asked to fetch over SSL without a valid
/etc/ssl/certs/ca-certificates.crt file, "git fetch" writes
    error: while accessing https://github.com/torvalds/linux.git/info/refs
which is a little disconcerting. Better to fall back to
curl_easy_strerror(result) when the error string is empty, like the
curl utility does:
    error: Problem with the SSL CA cert (path? access rights?) while
    accessing https://github.com/torvalds/linux.git/info/refs
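The fallback amounts to something like this (sketch; "errorstr" stands in for
the buffer registered via CURLOPT_ERRORBUFFER):

    #include <curl/curl.h>
    #include <stdio.h>

    static void report_http_error(CURLcode result, const char *errorstr,
                                  const char *url)
    {
        if (!errorstr || !*errorstr)
            errorstr = curl_easy_strerror(result);
        fprintf(stderr, "error: %s while accessing %s\n", errorstr, url);
    }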
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There is no need for a blank line between the detailed error message
and the later "fatal: HTTP request failed" notice. Keep the newline
written by error() itself and eliminate the extra one.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* rc/maint-http-wrong-free:
Makefile: some changes for http-related flag documentation
http.c: fix an invalid free()
Conflicts:
Makefile
|
|
Remove a free() on the static buffer returned by sha1_file_name().
While we're at it, replace xmalloc() calls on the structs
http_(object|pack)_request with xcalloc() so that pointers in the
structs get initialized to NULL. That way, free()'s are safe - for
example, a free() on the url string member when aborting.
This fixes an invalid free().
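A minimal sketch of the allocation change (struct abbreviated; xcalloc mapped
to calloc for illustration):

    #include <stdlib.h>

    struct http_object_request { char *url; /* ... */ };
    #define xcalloc(n, sz) calloc((n), (sz))    /* stand-in for git's helper */

    static struct http_object_request *new_object_request_sketch(void)
    {
        /* calloc instead of malloc: pointer members start out NULL, so an
         * abort path may safely call free(req->url) even before it is set */
        return xcalloc(1, sizeof(struct http_object_request));
    }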
Reported-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Commit 42653c0 (Prompt for a username when an HTTP request
401s, 2010-04-01) changed http_get_strbuf to prompt for
credentials when we receive a 401, but didn't touch
http_get_file. The latter is called only for dumb http;
while it's usually the case that people don't use
authentication on top of dumb http, there is no reason not
to allow both types of requests to use this feature.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The url_decode function needs only minor tweaks to handle
arbitrary buffers. Let's do those tweaks, which cleans up an
unreadable mess of temporary strings in http.c.
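The length-bounded decoding is roughly this (illustrative sketch; git's
actual helper is url_decode_mem() in url.c and builds its result in a
strbuf):

    #include <ctype.h>
    #include <stdlib.h>

    static char *decode_mem_sketch(const char *in, size_t len)
    {
        char *out = malloc(len + 1), *p = out;
        size_t i;

        if (!out)
            return NULL;
        for (i = 0; i < len; i++) {
            if (in[i] == '%' && i + 2 < len &&
                isxdigit((unsigned char)in[i + 1]) &&
                isxdigit((unsigned char)in[i + 2])) {
                char hex[3] = { in[i + 1], in[i + 2], '\0' };
                *p++ = (char)strtol(hex, NULL, 16);
                i += 2;
            } else {
                *p++ = in[i];
            }
        }
        *p = '\0';
        return out;
    }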
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If the config option http.cookiefile is set, pass this file to libCURL using
the CURLOPT_COOKIEFILE option. This is similar to calling curl with the -b
option. This allows git http authorization with authentication mechanisms
that use cookies, such as SAML Enhanced Client or Proxy (ECP) used by
Shibboleth.
To use SAML/ECP, the user needs to request a session cookie with their own ECP
code. See for example:
<https://wiki.shibboleth.net/confluence/display/SHIB2/ECP>
Once the cookie file has been created, it can be passed to git with, e.g.
    git config --global http.cookiefile "/home/dbrown/.curlcookies"
libCURL will then pass the appropriate session cookies to the git http server.
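Passing the configured file to libcurl is a one-liner (sketch; the value
would be read via git's config machinery):

    #include <curl/curl.h>

    static void set_cookie_file_sketch(CURL *curl, const char *cookie_file)
    {
        if (cookie_file)
            curl_easy_setopt(curl, CURLOPT_COOKIEFILE, cookie_file);
    }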
Signed-off-by: Duncan Brown <duncan.brown@ligo.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* sp/maint-clear-postfields:
http: clear POSTFIELDS when initializing a slot
|
|
Yes, these don't match perfectly with the void* first parameter of the
fread/fwrite in the standard library, but they do match the callback
signature curl expects. This is needed when a refactor passes a
curl_write_callback around, which would otherwise give incorrect
parameter warnings.
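For example, a write callback declared this way can be assigned to a
curl_write_callback pointer without casts (sketch; the real callbacks append
to a strbuf or write to a FILE):

    #include <curl/curl.h>

    static size_t fwrite_buffer_sketch(char *ptr, size_t eltsize, size_t nmemb,
                                       void *buffer_)
    {
        size_t size = eltsize * nmemb;

        (void)ptr; (void)buffer_;   /* real code appends ptr[0..size) to buffer_ */
        return size;                /* tell curl everything was consumed */
    }

    static curl_write_callback write_cb = fwrite_buffer_sketch;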
Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* sp/maint-clear-postfields:
http: clear POSTFIELDS when initializing a slot
|
|
After posting a short request using CURLOPT_POSTFIELDS, if the slot
is reused for posting a large payload, the slot ends up having both
POSTFIELDS (which now points at random garbage) and READFUNCTION,
in which case the curl library tries to use the stale POSTFIELDS.
Clear it as part of the general slot initialization in get_active_slot().
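The reset itself is trivial (sketch of what get_active_slot() now does for a
reused handle):

    #include <curl/curl.h>

    static void clear_postfields_sketch(CURL *curl)
    {
        /* a previous short POST may have left a dangling POSTFIELDS pointer
         * on this reused handle; clear it so READFUNCTION is honored instead */
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, NULL);
    }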
Heavylifting-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Acked-by: Shawn Pearce <spearce@spearce.org>
|
|
* tc/http-urls-ends-with-slash:
http-fetch: rework url handling
http-push: add trailing slash at arg-parse time, instead of later on
http-push: check path length before using it
http-push: Normalise directory names when pushing to some WebDAV servers
http-backend: use end_url_with_slash()
url: add str wrapper for end_url_with_slash()
shift end_url_with_slash() from http.[ch] to url.[ch]
t5550-http-fetch: add test for http-fetch
t5550-http-fetch: add missing '&&'
|
|
* gc/http-with-non-ascii-username-url:
Fix username and password extraction from HTTP URLs
t5550: test HTTP authentication and userinfo decoding
Conflicts:
t/lib-httpd/apache.conf
|
|
This allows non-http/curl users to access it too (e.g., http-backend.c).
Update include headers in end_url_with_slash() users too.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Change the authentication initialization to percent-decode the username
and password for HTTP URLs.
Signed-off-by: Gabriel Corona <gabriel.corona@enst-bretagne.fr>
Acked-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
For a long time (29508e1 "Isolate shared HTTP request functionality", Fri
Nov 18 11:02:58 2005), we've followed HTTP redirects with
CURLOPT_FOLLOWLOCATION.
However, when the remote HTTP server returns a redirect, the default
libcurl action is to change a POST request into a GET request while
following it, which the remote http backend does not expect.
Fix this by telling libcurl to always keep the request as type POST with
CURLOPT_POSTREDIR.
For users of libcurl older than 7.19.1, use CURLOPT_POST301 instead,
which preserves the POST method across 301 redirects only, not 302s.
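Schematically (sketch; the version cutoffs follow the commit message):

    #include <curl/curl.h>

    static void keep_post_on_redirect_sketch(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
    #if LIBCURL_VERSION_NUM >= 0x071301        /* 7.19.1: CURLOPT_POSTREDIR */
        curl_easy_setopt(curl, CURLOPT_POSTREDIR, CURL_REDIR_POST_ALL);
    #elif LIBCURL_VERSION_NUM >= 0x071101      /* 7.17.1: CURLOPT_POST301 */
        curl_easy_setopt(curl, CURLOPT_POST301, 1L);
    #endif
    }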
Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Some firewalls restrict HTTP connections based on the client's user agent. This
commit provides the user the ability to modify the user agent string via either
a new config option (http.useragent) or an environment variable
(GIT_HTTP_USER_AGENT).
Relevant documentation is added to Documentation/config.txt.
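Wiring it up is roughly as follows (sketch; the precedence shown and the
default string are assumptions, the real default being git's own
"git/<version>" agent):

    #include <curl/curl.h>
    #include <stdlib.h>

    static void set_user_agent_sketch(CURL *curl, const char *config_ua)
    {
        const char *ua = getenv("GIT_HTTP_USER_AGENT");    /* assumed to win */

        if (!ua || !*ua)
            ua = config_ua;         /* value of http.useragent, if any */
        if (!ua || !*ua)
            ua = "git/unknown";     /* placeholder for the built-in default */
        curl_easy_setopt(curl, CURLOPT_USERAGENT, ua);
    }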
Signed-off-by: Spencer E. Olson <olsonse@umich.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* sp/maint-dumb-http-pack-reidx:
http.c::new_http_pack_request: do away with the temp variable filename
http-fetch: Use temporary files for pack-*.idx until verified
http-fetch: Use index-pack rather than verify-pack to check packs
Allow parse_pack_index on temporary files
Extract verify_pack_index for reuse from verify_pack
Introduce close_pack_index to permit replacement
http.c: Remove unnecessary strdup of sha1_to_hex result
http.c: Don't store destination name in request structures
http.c: Drop useless != NULL test in finish_http_pack_request
http.c: Tiny refactoring of finish_http_pack_request
t5550-http-fetch: Use subshell for repository operations
http.c: Remove bad free of static block
|
|
* rc/maint-curl-helper:
remote-curl: ensure that URLs have a trailing slash
http: make end_url_with_slash() public
t5541-http-push: add test for URLs with trailing slash
Conflicts:
remote-curl.c
|
|
Now that the temporary variable char *filename is only used in one
place, do away with it and just call sha1_pack_name() directly.
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Verify that a downloaded pack-*.idx file is consistent and valid
as an index file before we rename it into its final destination.
This prevents a corrupt index file from later being treated as a
usable file, confusing readers.
Check that we do not have the pack index file before invoking
fetch_pack_index(); that way, we can do without the has_pack_index()
check in fetch_pack_index().
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
To ensure we don't leave a corrupt pack file positioned as though
it were a valid pack file, run index-pack on the temporary pack
before we rename it to its final name. If index-pack crashes out
when it discovers file corruption (e.g. GitHub's error HTML at the
end of the file), simply delete the temporary files to clean up.
By waiting until the pack has been validated before we move it
to its final name, we eliminate a race condition where another
concurrent reader might try to access the pack at the same time
that we are still trying to verify it is not corrupt.
Switching from verify-pack to index-pack is a change in behavior,
but it should turn out better for users. The index-pack algorithm
tries to minimize disk seeks, as well as the number of times any
given object is inflated, by organizing its work along delta chains.
The verify-pack logic does not attempt to do this, thrashing the
delta base cache and the filesystem cache.
By recreating the index file locally, we also can automatically
upgrade from a v1 pack table of contents to v2. This makes the
CRC32 data available for use during later repacks, even if the
server didn't have them on hand.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The easiest way to verify a pack index is to open it through the
standard parse_pack_index function, permitting the header check
to happen when the file is mapped. However, the dumb HTTP client
needs to verify a pack index before it is moved into its proper file
name within the objects/pack directory, to prevent a corrupt index
from being made available. So permit the caller to specify the
exact path of the index file.
For now we're still using the final destination name within the
sole call site in http.c, but eventually we will start to parse
the temporary path instead.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Most of the time the dumb HTTP transport is run without the verbose
flag set, so we only need the result of sha1_to_hex(sha1) once, to
construct the pack URL. Don't bother with an unnecessary malloc,
copy, free chain of this buffer.
If verbose is set, we'll format the SHA-1 twice now. But this
tiny extra CPU time spent is nothing compared to the slowdown that
is usually imposed by the verbose messages being sent to the tty,
and is entirely trivial compared to the latency involved with the
remote HTTP server sending something as big as a pack file.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The destination name within the object store is easily computed
on demand, reusing a static buffer held by sha1_file.c. We don't
need to copy the entire path into the request structure for
safekeeping when it can easily be reformatted after the download has
been completed.
This reduces the size of the per-request structure, and removes
yet another PATH_MAX based limit.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The test preq->packfile != NULL is always true. If packfile was
actually NULL when entering this function the ftell() above would
crash out with a SIGSEGV, resulting in never reaching this point.
Simplify the code by just removing the conditional.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Always remove the struct packed_git from the active list, even
if the rename of the temporary file fails.
While we are here, simplify the code a bit by using a common
local variable name ("p") to hold the relevant packed_git.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The filename variable here is pointing to a block of memory that
was allocated by sha1_file.c and is also held in a static variable
scoped within the sha1_pack_name() function. Doing a free() here is
returning that memory to the allocator while we might still try to
reuse it on a subsequent sha1_pack_name() invocation. That's not
acceptable, so don't free it.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When an HTTP request returns a 401, Git currently fails with a
message saying only that it got a 401, which is not very
descriptive.
Currently if a user wants to use Git over HTTP, they have to use one
URL with the username in the URL (e.g. "http://user@host.com/repo.git")
for write access and another without the username for unauthenticated
read access (unless they want to be prompted for the password each
time). However, since the HTTP servers will return a 401 if an action
requires authentication, we can prompt for username and password if we
see this, allowing us to use a single URL for both purposes.
This patch changes http_request to prompt for the username and password,
then return HTTP_REAUTH so http_get_strbuf can try again. If it gets
a 401 even when a user/pass is supplied, http_request will now return
HTTP_NOAUTH, which remote_curl can then use to display a more
intelligent error message that is less confusing.
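The control flow is roughly as follows (hedged sketch; the HTTP_* values only
loosely mirror http.h, and do_request()/prompt_for_credentials() are
hypothetical stand-ins for the real helpers):

    enum { HTTP_OK, HTTP_NOAUTH, HTTP_REAUTH };

    /* hypothetical stand-ins for the real request and prompt helpers */
    static int do_request(const char *url) { (void)url; return HTTP_OK; }
    static void prompt_for_credentials(const char *url) { (void)url; }

    static int http_get_sketch(const char *url)
    {
        for (;;) {
            int ret = do_request(url);
            if (ret != HTTP_REAUTH)
                return ret;     /* HTTP_OK, or HTTP_NOAUTH when auth already failed */
            /* got a 401 with no credentials yet: prompt, then try again */
            prompt_for_credentials(url);
        }
    }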
Signed-off-by: Scott Chacon <schacon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|