path: root/t/t5551-http-fetch.sh
2014-01-02  use distinct username/password for http auth tests  (Jeff King; 1 file, -3/+3)

The httpd server we set up to test git's http client code knows about a single account, in which both the username and password are "user@host" (the unusual use of the "@" here is to verify that we handle the character correctly when URL escaped). This means that we may miss a certain class of errors in which the username and password are mixed up internally by git. We can make our tests more robust by having distinct values for the username and password.

In addition to tweaking the server passwd file and the client URL, we must teach the "askpass" harness to accept multiple values. As a bonus, this makes the setup of some tests more obvious; when we are expecting git to ask only about the password, we can seed the username askpass response with a bogus value.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
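[A minimal sketch of the idea, not the actual harness: GIT_ASKPASS receives the prompt as its first argument, so a helper can answer the username and password questions differently. The file name, responses, and $HTTPD_URL are placeholders.]

    cat >askpass.sh <<\EOF
    #!/bin/sh
    # git passes the prompt ("Username for ..." / "Password for ...") as $1
    case "$1" in
    Username*) echo bogus-user ;;
    Password*) echo "pass@host" ;;
    esac
    EOF
    chmod +x askpass.sh
    GIT_ASKPASS="$PWD/askpass.sh" git ls-remote "$HTTPD_URL/auth/smart/repo.git"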
2013-11-04  Merge branch 'jk/wrap-perl-used-in-tests'  (Junio C Hamano; 1 file, -1/+1)

* jk/wrap-perl-used-in-tests:
  t: use perl instead of "$PERL_PATH" where applicable
  t: provide a perl() function which uses $PERL_PATH
2013-10-30  Merge branch 'jk/http-auth-redirects'  (Junio C Hamano; 1 file, -0/+11)

Better handle the case where the http transport gets redirected during the authorization request.

* jk/http-auth-redirects:
  http.c: Spell the null pointer as NULL
  remote-curl: rewrite base url from info/refs redirects
  remote-curl: store url as a strbuf
  remote-curl: make refs_url a strbuf
  http: update base URLs when we see redirects
  http: provide effective url to callers
  http: hoist credential request out of handle_curl_result
  http: refactor options to http_get_*
  http_request: factor out curlinfo_strbuf
  http_get_file: style fixes
2013-10-29  t: use perl instead of "$PERL_PATH" where applicable  (Jeff King; 1 file, -1/+1)

As of the last commit, we can use "perl" instead of "$PERL_PATH" when running tests, as the former is now a function which uses the latter. As the shorter "perl" is easier on the eyes, let's switch to using it everywhere.

This is not quite a mechanical s/$PERL_PATH/perl/ replacement, though. There are some places where we invoke perl from a script we generate on the fly, and those scripts do not have access to our internal shell functions.

The result can be double-checked by running:

    ln -s /bin/false bin-wrappers/perl
    make test

which continues to pass even after this patch.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-14  remote-curl: rewrite base url from info/refs redirects  (Jeff King; 1 file, -0/+11)

For efficiency and security reasons, an earlier commit in this series taught http_get_* to re-write the base url based on redirections we saw while making a specific request.

This commit wires that option into the info/refs request, meaning that a redirect from

    http://example.com/foo.git/info/refs

to

    https://example.com/bar.git/info/refs

will behave as if "https://example.com/bar.git" had been provided to git in the first place.

The tests bear some explanation. We introduce two new hierarchies into the httpd test config:

  1. Requests to /smart-redir-limited will work only for the initial info/refs request, but not any subsequent requests. As a result, we can confirm whether the client is re-rooting its requests after the initial contact, since otherwise it will fail (it will ask for "repo.git/git-upload-pack", which is not redirected).

  2. Requests to smart-redir-auth will redirect, and require auth after the redirection. Since we are using the redirected base for further requests, we also update the credential struct, in order not to mislead the user (or credential helpers) about which credential is needed. We can therefore check the GIT_ASKPASS prompts to make sure we are prompting for the new location.

Because we have neither multiple servers nor https support in our test setup, we can only redirect between paths, meaning we need to turn on credential.useHttpPath to see the difference.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
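[A hedged sketch of exercising the re-rooting behaviour from the client side; the repository path, clone directory, and $HTTPD_URL are placeholders for the test server setup.]

    # credential.useHttpPath makes the (path-only) redirect visible to the
    # credential code, so the askpass prompt should name the redirected URL
    git -c credential.useHttpPath=true \
        clone "$HTTPD_URL/smart-redir-auth/repo.git" redirected-clone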
2013-08-05  t5551: Remove header from curl cookie file  (Brian Gernhardt; 1 file, -4/+2)

The URL included in the header appears to vary from curl version to curl version. Since we only care about the final few lines, only test them. However, make sure the blank line after the header is still included, to make sure there are no extra cookie lines.

Signed-off-by: Brian Gernhardt <brian@gernhardtsoftware.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
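[A sketch of the comparison this implies; the file names and exact line count are illustrative, and test_cmp is the test suite's comparison helper.]

    # ignore the version-dependent curl header at the top of the cookie file,
    # but keep the blank separator line so stray extra cookie lines still fail
    tail -3 cookies.txt >cookies.tail &&
    test_cmp expect.cookies cookies.tail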
2013-07-30  http: add http.savecookies option to write out HTTP cookies  (Dave Borowitz; 1 file, -0/+18)

HTTP servers may send Set-Cookie headers in a response and expect them to be set on subsequent requests. By default, libcurl behavior is to store such cookies in memory and reuse them across requests within a single session. However, it may also make sense, depending on the server and the cookies, to store them across sessions.

Provide users an option to enable this behavior, writing cookies out to the same file specified in http.cookiefile.

Signed-off-by: Dave Borowitz <dborowitz@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
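[A minimal usage example of the option this commit adds; the URL and cookie-file path are placeholders.]

    git config http.cookiefile ~/.gitcookies
    git config http.savecookies true
    # cookies received during this fetch are written back to ~/.gitcookies
    git fetch https://example.com/project.git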
2013-06-02  Merge branch 'jc/t5551-posix-sed-bre'  (Junio C Hamano; 1 file, -2/+6)

POSIX fix for a test script.

* jc/t5551-posix-sed-bre:
  t5551: do not use unportable sed '\+'
2013-05-12  t5551: do not use unportable sed '\+'  (Junio C Hamano; 1 file, -2/+6)

The set-up step to prepare a repository with 50000 tags used a non-portable '\+' to match one-or-more. The error was not caught because the next test that uses that repository did not even bother to check if these expected tags were actually cloned to the resulting repository.

Fix the sed construct to use BRE, and update the "clone" test that wanted to test cloning from such a repository with many refs to check the resulting repository.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
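[For reference, the portability issue looks like this; the pattern shown is illustrative, not the exact one from the test.]

    # '\+' (one-or-more) is a GNU extension to basic regular expressions:
    sed -e 's/^ref-\([0-9]\+\)$/\1/'
    # portable POSIX BRE spells it with an interval expression instead:
    sed -e 's/^ref-\([0-9]\{1,\}\)$/\1/'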
2013-04-18  Merge branch 'jk/http-dumb-namespaces'  (Junio C Hamano; 1 file, -0/+24)

Allow smart-capable HTTP servers to be restricted via the GIT_NAMESPACE mechanism when talking with commit-walker clients (they already do so when talking with smart HTTP clients).

* jk/http-dumb-namespaces:
  http-backend: respect GIT_NAMESPACE with dumb clients
2013-04-18  Merge branch 'jc/push-2.0-default-to-simple' (early part)  (Junio C Hamano; 1 file, -0/+1)

Adjust our tests for the upcoming migration of the default value of the "push.default" configuration variable to "simple" from "matching".

* 'jc/push-2.0-default-to-simple' (early part):
  t5570: do not assume the "matching" push is the default
  t5551: do not assume the "matching" push is the default
  t5550: do not assume the "matching" push is the default
  t9401: do not assume the "matching" push is the default
  t9400: do not assume the "matching" push is the default
  t7406: do not assume the "matching" push is the default
  t5531: do not assume the "matching" push is the default
  t5519: do not assume the "matching" push is the default
  t5517: do not assume the "matching" push is the default
  t5516: do not assume the "matching" push is the default
  t5505: do not assume the "matching" push is the default
  t5404: do not assume the "matching" push is the default
2013-04-09  http-backend: respect GIT_NAMESPACE with dumb clients  (John Koleszar; 1 file, -0/+24)

Filter the list of refs returned via the dumb HTTP protocol according to the active namespace, consistent with other clients of the upload-pack service.

Signed-off-by: John Koleszar <jkoleszar@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
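[A rough sketch of the effect; the namespace name and repository path are placeholders.]

    # with GIT_NAMESPACE exported in the environment where http-backend runs,
    # a dumb client's ref listing is filtered to refs/namespaces/ns/*;
    # the local analogue of what such a client would see:
    GIT_NAMESPACE=ns git ls-remote /path/to/repo.git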
2013-04-03  t5551: do not assume the "matching" push is the default  (Brian Gernhardt; 1 file, -0/+1)

Signed-off-by: Brian Gernhardt <brian@gernhardtsoftware.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04  t5551: fix expected error output  (Junio C Hamano; 1 file, -2/+1)

We should probably get rid of the check of the message instead, but in the meantime this should do.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-04  Verify Content-Type from smart HTTP servers  (Shawn Pearce; 1 file, -0/+6)

Before parsing a suspected smart-HTTP response, verify the returned Content-Type matches the standard. This protects a client from attempting to process a payload that smells like a smart-HTTP server response.

JGit has been doing this check on all responses since the dawn of time. I mistakenly failed to include it in git-core when smart HTTP was introduced. At the time I didn't know how to get the Content-Type from libcurl. I punted, meant to circle back and fix this, and just plain forgot about it.

Signed-off-by: Shawn Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
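[A hedged way to look at the header being checked; $HTTPD_URL and the repository path are placeholders for the test server.]

    # a smart server's ref advertisement is expected to carry this type
    curl -s -D - -o /dev/null \
        "$HTTPD_URL/smart/repo.git/info/refs?service=git-upload-pack" |
    grep -i '^content-type: application/x-git-upload-pack-advertisement'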
2012-11-20  Merge branch 'jk/maint-http-half-auth-fetch'  (Junio C Hamano; 1 file, -0/+15)

Fixes fetch from servers that ask for auth only during the actual packing phase. This is not really a recommended configuration, but it cleans up the code at the same time.

* jk/maint-http-half-auth-fetch:
  remote-curl: retry failed requests for auth even with gzip
  remote-curl: hoist gzip buffer size to top of post_rpc
2012-10-31  remote-curl: retry failed requests for auth even with gzip  (Jeff King; 1 file, -0/+15)

Commit b81401c taught the post_rpc function to retry the http request after prompting for credentials. However, it did not handle two cases:

  1. If we have a large request, we do not retry. That's OK, since we would have sent a probe (with retry) already.

  2. If we are gzipping the request, we do not retry. That was considered OK, because the intended use was for push (e.g., listing refs is OK, but actually pushing objects is not), and we never gzip on push.

This patch teaches post_rpc to retry even a gzipped request. This has two advantages:

  1. It is possible to configure a "half-auth" state for fetching, where the set of refs and their sha1s are advertised, but one cannot actually fetch objects.

     This is not a recommended configuration, as it leaks some information about what is in the repository (e.g., an attacker can try brute-forcing possible content in your repository and checking whether it matches your branch sha1). However, it can be slightly more convenient, since a no-op fetch will not require a password at all.

  2. It future-proofs us should we decide to ever gzip more requests.

Signed-off-by: Jeff King <peff@peff.net>
2012-09-29  Merge branch 'jk/smart-http-switch'  (Junio C Hamano; 1 file, -0/+12)

Allows users to turn off smart-http when talking to dumb-only servers.

* jk/smart-http-switch:
  remote-curl: let users turn off smart http
  remote-curl: rename is_http variable
2012-09-21  remote-curl: let users turn off smart http  (Jeff King; 1 file, -0/+12)

Usually there is no need for users to specify whether an http remote is smart or dumb; the protocol is designed so that a single initial request is made, and the client can determine the server's capability from the response.

However, some misconfigured dumb-only servers may not like the initial request by a smart client, as it contains a query string. Until recently, commit 703e6e7 worked around this by making a second request. However, that commit was recently reverted due to its side effect of masking the initial request's error code.

Since git has had that workaround for several years, we don't know exactly how many such misconfigured servers are out there. The reversion of 703e6e7 assumes they are rare enough not to worry about. Still, that reversion leaves somebody who does run into such a server with no escape hatch at all. Let's give them an environment variable they can tweak to perform the "dumb" request.

This is intentionally not a documented interface. It's overly simple and is really there for debugging in case somebody does complain about git not working with their server. A real user-facing interface would entail a per-remote or per-URL config variable.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
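[For reference, and hedged since the commit message deliberately leaves the switch undocumented: the escape hatch is an environment variable read by remote-curl, believed to be GIT_SMART_HTTP. Setting it to a false value forces the dumb-style request. The URL is a placeholder.]

    # skip the smart info/refs query string and fall back to dumb http
    GIT_SMART_HTTP=0 git clone http://example.com/project.git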
2012-09-20  Enable info/refs gzip decompression in HTTP client  (Shawn O. Pearce; 1 file, -1/+2)

Some HTTP servers try to use gzip compression on the /info/refs request to save transfer bandwidth. Repositories with many tags may find the /info/refs request can be gzipped to be 50% of the original size due to the few but often repeated bytes used (hex SHA-1 and commonly digits in tag names).

For most HTTP requests, enable "Accept-Encoding: gzip", ensuring the /info/refs payload can use this encoding format.

Only request gzip encoding from servers. Although deflate is supported by libcurl, most servers have standardized on gzip encoding for compression, as that is what most browsers support. Asking for deflate increases request sizes by a few bytes, but is unlikely to ever be used by a server.

Disable the Accept-Encoding header on probe RPCs, as response bodies are supposed to be exactly 4 bytes long, "0000". The HTTP headers requesting and indicating compression use more space than the data transferred in the body.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
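[A quick way to observe the request header in question; the URL is a placeholder.]

    # GIT_CURL_VERBOSE dumps the request headers to stderr
    GIT_CURL_VERBOSE=1 git ls-remote http://example.com/project.git 2>&1 |
    grep -i 'accept-encoding: gzip'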
2012-09-12  Merge branch 'jk/maint-http-half-auth-push' into maint-1.7.11  (Junio C Hamano; 1 file, -0/+20)

Pushing to a smart HTTP server with recent Git fails without the username in the URL to force authentication, if the server is configured to allow GET anonymously while requiring authentication for POST.

* jk/maint-http-half-auth-push:
  http: prompt for credentials on failed POST
  http: factor out http error code handling
  t: test http access to "half-auth" repositories
  t: test basic smart-http authentication
  t/lib-httpd: recognize */smart/* repos as smart-http
  t/lib-httpd: only route auth/dumb to dumb repos
  t5550: factor out http auth setup
  t5550: put auth-required repo in auth/dumb
2012-08-27  t: test http access to "half-auth" repositories  (Jeff King; 1 file, -0/+9)

Some sites set up http access to repositories such that fetching is anonymous and unauthenticated, but pushing is authenticated. While there are multiple ways to do this, the technique advertised in the git-http-backend manpage is to block access to locations matching "/git-receive-pack$".

Let's emulate that advice in our test setup, which makes it clear that this advice does not actually work.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
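[The git-http-backend(1) advice referred to above is essentially an Apache fragment like the following, shown here as a sketch appended to a test httpd.conf; the auth details and file names are illustrative.]

    cat >>httpd.conf <<\EOF
    <LocationMatch "^/git/.*/git-receive-pack$">
        AuthType Basic
        AuthName "Git Access"
        AuthUserFile passwd
        Require valid-user
    </LocationMatch>
    EOF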
2012-08-27  t: test basic smart-http authentication  (Jeff King; 1 file, -0/+11)

We do not currently test authentication over smart-http at all. In theory, it should work exactly as it does for dumb http (which we do test). It does indeed work for these simple tests, but this patch lays the groundwork for more complex tests in future patches.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-08-04  tests: Introduce test_seq  (Michał Kiedrowicz; 1 file, -1/+1)

Jeff King wrote:

    The seq command is a GNU-ism, and is missing at least in older BSD releases and their derivatives, not to mention antique commercial Unixes. We already purged it in b3431bc (Don't use seq in tests, not everyone has it, 2007-05-02), but a few new instances have crept in. They went unnoticed because they are in scripts that are not run by default.

Replace them with test_seq, which is implemented with a Perl snippet (proposed by Jeff). This is better than inlining this snippet everywhere it's needed because it's easier to read and it's easier to change the implementation (e.g. to C) if we ever decide to remove Perl from the test suite.

Note that test_seq is not a complete replacement for seq(1). It just has what we need now, and in addition it makes it possible for us to do something like "test_seq a m" if we wanted to in the future.

There are also many places that do `for i in 1 2 3 ...`, but I'm not sure if it's worth converting them to test_seq. That would mean running more Perl processes.

Signed-off-by: Michał Kiedrowicz <michal.kiedrowicz@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
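[A minimal sketch of the helper's idea; the real test_seq in test-lib-functions.sh may differ in argument handling.]

    test_seq () {
        # emulate "seq FIRST LAST" with a small Perl one-liner
        perl -le 'print for $ARGV[0]..$ARGV[1]' -- "$@"
    }
    test_seq 1 5    # prints 1 through 5, one number per line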
2012-06-24  tests: enclose $PERL_PATH in double quotes  (Junio C Hamano; 1 file, -1/+1)

Otherwise it will be split at a space after "Program" when it is set to "\\Program Files\perl" or something silly like that.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
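[The failure mode, sketched; the path is illustrative.]

    PERL_PATH='/c/Program Files/perl/bin/perl'
    $PERL_PATH -e 'print "ok\n"'      # word-splits: tries to run "/c/Program"
    "$PERL_PATH" -e 'print "ok\n"'    # quoted, as the tests now do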
2012-06-12  t: Replace 'perl' by $PERL_PATH  (Vincent van Ravesteijn; 1 file, -1/+1)

GIT-BUILD-OPTIONS defines PERL_PATH to be used in the test suite. Only a few tests actually use this variable when perl is needed; the other tests just call 'perl', and it might happen that the wrong perl interpreter is used.

This becomes problematic on Windows, when the perl interpreter that is compiled and installed on the Windows system is used, because this perl interpreter might introduce some unexpected LF->CRLF conversions.

This patch makes sure that $PERL_PATH is used everywhere in the test suite and that the correct perl interpreter is used.

Signed-off-by: Vincent van Ravesteijn <vfr@lyx.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-10  remote-curl: main test case for the OS command line overflow  (Ivan Todoroski; 1 file, -0/+31)

This is the main test case for the original problem that triggered this patch series. We create a repo with 50k tags and then test whether git-clone over the smart HTTP protocol succeeds.

Note that we construct the repo in a slightly different way than the original script used to reproduce the problem. This is because the original script just created 50k tags all pointing to the same commit, so if there was a bug where remote-curl.c was not passing all the refs to fetch-pack we wouldn't know. The clone would succeed even if only one tag was passed, because all the other tags were pointing at the same SHA and would be considered present.

Instead we create a repo with 50k independent (dangling) commits and then tag each of those commits with a unique tag. This way, if one of the tags is not given to fetch-pack, later stages of the clone would complain about it. This allows us to test both that the command line overflow was fixed, as well as that it was fixed in a way that doesn't leave out any of the refs.

Signed-off-by: Ivan Todoroski <grnch@gmx.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
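[A hedged sketch of the construction; the actual test builds the repository with git fast-import for speed, and the loop below only shows the shape, with a small count and placeholder tag names.]

    tree=$(git write-tree) &&
    for i in $(test_seq 3)      # the real test uses 50000
    do
        # each iteration makes an independent root commit and tags it,
        # so a ref dropped on the way to fetch-pack breaks the clone
        commit=$(git commit-tree -m "commit $i" "$tree") &&
        git tag "many-$i" "$commit" || break
    done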
2010-09-27  smart-http: Don't change POST to GET when following redirect  (Tay Ray Chuan; 1 file, -0/+8)

For a long time (29508e1 "Isolate shared HTTP request functionality", Fri Nov 18 11:02:58 2005), we've followed HTTP redirects with CURLOPT_FOLLOWLOCATION. However, when the remote HTTP server returns a redirect, the default libcurl action is to change a POST request into a GET request while following the redirect, but the remote http backend does not expect that.

Fix this by telling libcurl to always keep the request as type POST with CURLOPT_POSTREDIR. For users of libcurl older than 7.19.1, use CURLOPT_POST301 instead, which only follows 301s instead of both 301s and 302s.

Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
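[For the curious, a hedged command-line analogue of the libcurl behaviour involved; git sets the options through libcurl directly, and the URL and payload file here are placeholders.]

    # keep the method as POST across 301/302 redirects
    curl -L --post301 --post302 \
        --data-binary @request.bin \
        http://example.com/project.git/git-upload-pack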
2010-06-25  tests: Skip tests in a way that makes sense under TAP  (Ævar Arnfjörð Bjarmason; 1 file, -1/+1)

SKIP messages are now part of the TAP plan. A TAP harness now knows why a particular test was skipped and can report that information. The non-TAP harness built into Git's test-lib did nothing special with these messages, and is unaffected by these changes.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-01-12  remote-curl: Fix Accept header for smart HTTP connections  (Shawn O. Pearce; 1 file, -1/+1)

We actually expect to see an application/x-git-upload-pack-result, but we lied and said we Accept *-response. This was a typo on my part when I was writing the code.

Fortunately the wrong Accept header had no real impact, as the deployed git-http-backend servers were not testing the Accept header before they returned their content.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-09  t5551-http-fetch: Work around broken Accept header in libcurl  (Shawn O. Pearce; 1 file, -0/+3)

Unfortunately at least one version of libcurl has a bug causing it to include "Accept: */*" in the same POST request where we have already asked for "Accept: application/x-git-upload-pack-response". This is a bug in libcurl, not in Git or our test vector. The application has explicitly asked the server for a single content type, but libcurl has mistakenly also told the server the client application will accept */*, which is any content type.

Based on the libcurl change log, this "Accept: */*" header bug may have been fixed in version 7.18.1, released March 30, 2008:

    http://curl.haxx.se/changes.html#7_18_1

Rather than require users to upgrade libcurl, we change the test vector to trim this line out of the 2nd request.

Reported-by: Tarmigan <tarmigan+git@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
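[The workaround amounts to something like this before comparing the captured requests; the file names are illustrative, and test_cmp is the test suite's comparison helper.]

    # drop the header some libcurl versions add on their own,
    # so old and new libcurl produce the same trace
    grep -v '^Accept: \*/\*' actual.headers >actual.cleaned &&
    test_cmp expect.headers actual.cleaned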
2009-11-09  t5551-http-fetch: Work around some libcurl versions  (Shawn O. Pearce; 1 file, -4/+4)

When GIT_CURL_VERBOSE is set, some versions of libcurl report their output differently than other versions do. At least one variant (version unknown but likely pre-7.18.1) reports the POST payload to stderr, and omits the blank line after each HTTP request/response.

We clip these lines out of the stderr output now before doing the compare, so we aren't surprised by this trivial difference.

Reported-by: Tarmigan <tarmigan+git@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  test smart http fetch and push  (Shawn O. Pearce; 1 file, -0/+102)

The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the simplest installation possible.

Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI.

When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary variability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked".

When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked".

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
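[A sketch of the header munging described above; the file names are illustrative.]

    # request side: keep the fact that Content-Length was used, hide its value
    sed -e 's/^\(Content-Length:\) .*/\1 xxx/' <act.requests >act.munged
    # response side: drop both framing headers, since Apache may use either
    sed -e '/^Content-Length:/d' -e '/^Transfer-Encoding:/d' \
        <act.responses >act.cleaned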