path: root/remote-curl.c

2010-02-05  Merge branch 'sp/maint-push-sideband' into sp/push-sideband  (Junio C Hamano; 1 file changed, -3/+4)

* sp/maint-push-sideband:
  receive-pack: Send hook output over side band #2
  receive-pack: Wrap status reports inside side-band-64k
  receive-pack: Refactor how capabilities are shown to the client
  send-pack: demultiplex a sideband stream with status data
  run-command: support custom fd-set in async
  run-command: Allow stderr to be a caller supplied pipe
  Update git fsck --full short description to mention packs

Conflicts:
  run-command.c

2010-02-05  run-command: support custom fd-set in async  (Erik Faye-Lund; 1 file changed, -3/+4)

This patch adds the possibility to supply a set of non-0 file descriptors
for async process communication instead of the default-created pipe.

Additionally, we now support bi-directional communication with the async
procedure, by giving the async function both read and write file
descriptors.

To retain compatibility and a similar "API feel" with start_command, we
require start_async callers to set .out = -1 to get a readable file
descriptor. If either of .in or .out is 0, we supply no file descriptor
to the async process.

[sp: Note: Erik started this patch, and a huge bulk of it is his work.
All bugs were introduced later by Shawn.]

Signed-off-by: Erik Faye-Lund <kusmabite@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

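A minimal sketch of the .out = -1 convention described above, assuming git's
run-command API as of this change; the produce() callback and the buffer
handling are illustrative, not part of the commit:

    #include <string.h>
    #include <unistd.h>
    #include "run-command.h"

    /* illustrative producer: runs asynchronously and writes into 'out' */
    static int produce(int in, int out, void *data)
    {
        const char *msg = data;
        if (write(out, msg, strlen(msg)) < 0)
            return 1;
        return 0;
    }

    static int read_from_async(void)
    {
        struct async proc;
        char buf[64];
        ssize_t n;

        memset(&proc, 0, sizeof(proc));
        proc.proc = produce;
        proc.data = "hello\n";
        proc.out = -1;              /* ask start_async() for a readable fd */

        if (start_async(&proc))
            return -1;
        n = read(proc.out, buf, sizeof(buf));
        close(proc.out);
        return finish_async(&proc) ? -1 : (int)n;
    }
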
2010-01-21  Merge branch 'maint'  (Junio C Hamano; 1 file changed, -2/+16)

* maint:
  merge-recursive: do not return NULL only to cause segfault
  retry request without query when info/refs?query fails

2010-01-21  retry request without query when info/refs?query fails  (Tay Ray Chuan; 1 file changed, -2/+16)

When "info/refs" is a static file and not behind a CGI handler, some
servers may not handle a GET request for it with a query string appended
(e.g. "?foo=bar") properly. If such a request fails, retry it without the
query string. In addition, ensure that the "smart" http protocol is not
used (a service has to be specified with "?service=<service name>" to be
conformant).

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Reported-and-tested-by: Yaroslav Halchenko <debian@onerussian.com>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2010-01-20  Merge branch 'jc/symbol-static'  (Junio C Hamano; 1 file changed, -1/+1)

* jc/symbol-static:
  date.c: mark file-local function static
  Replace parse_blob() with an explanatory comment
  symlinks.c: remove unused functions
  object.c: remove unused functions
  strbuf.c: remove unused function
  sha1_file.c: remove unused function
  mailmap.c: remove unused function
  utf8.c: mark file-local function static
  submodule.c: mark file-local function static
  quote.c: mark file-local function static
  remote-curl.c: mark file-local function static
  read-cache.c: mark file-local functions static
  parse-options.c: mark file-local function static
  entry.c: mark file-local function static
  http.c: mark file-local functions static
  pretty.c: mark file-local function static
  builtin-rev-list.c: mark file-local function static
  bisect.c: mark file-local function static

2010-01-12  Merge branch 'maint'  (Junio C Hamano; 1 file changed, -1/+1)

* maint:
  remote-curl: Fix Accept header for smart HTTP connections
  grep: -L should show empty files
  rebase--interactive: Ignore comments and blank lines in peek_next_command

2010-01-12  remote-curl: Fix Accept header for smart HTTP connections  (Shawn O. Pearce; 1 file changed, -1/+1)

We actually expect to see an application/x-git-upload-pack-result but we
lied and said we Accept *-response. This was a typo on my part when I was
writing the code.

Fortunately the wrong Accept header had no real impact, as the deployed
git-http-backend servers were not testing the Accept header before they
returned their content.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

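For reference, a sketch of the corrected header pair for the upload-pack RPC
POST, using plain libcurl calls; the helper function is illustrative:

    #include <curl/curl.h>

    static struct curl_slist *rpc_headers(void)
    {
        struct curl_slist *h = NULL;

        h = curl_slist_append(h,
            "Content-Type: application/x-git-upload-pack-request");
        /* the fix: accept the *-result type the server actually sends */
        h = curl_slist_append(h,
            "Accept: application/x-git-upload-pack-result");
        return h;                       /* pass to CURLOPT_HTTPHEADER */
    }
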
2010-01-12  remote-curl.c: mark file-local function static  (Junio C Hamano; 1 file changed, -1/+1)

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-12-01  Allow curl to rewind the RPC read buffer  (Martin Storsjö; 1 file changed, -0/+30)

When using multi-pass authentication methods, the curl library may need
to rewind the read buffers used for providing data to HTTP POST, if data
has been output before a 401 error is received.

This is needed only when the first request (when the multi-pass
authentication method isn't initialized and hasn't received its challenge
yet) for a certain curl session is a chunked HTTP POST. As long as the
current rpc read buffer is the first one, we're able to rewind without
need for additional buffering.

The curl library currently starts sending data without waiting for a
response to the Expect: 100-continue header, due to a bug in curl that
exists up to curl version 7.19.7. If the HTTP server doesn't handle
Expect: 100-continue headers properly (e.g. Lighttpd), the library has to
start sending data without knowing if the request will be successfully
authenticated. In this case, this rewinding solution is not sufficient -
the whole request will be sent before the 401 error is received.

Signed-off-by: Martin Storsjo <martin@martin.st>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

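One plausible shape for such a rewind hook, using libcurl's ioctl callback;
the rpc_state structure here is illustrative, not the real remote-curl layout:

    #include <stddef.h>
    #include <curl/curl.h>

    struct rpc_state {
        char *buf;              /* current read buffer */
        size_t len;             /* bytes available in buf */
        size_t pos;             /* bytes already handed to curl */
        int initial_buffer;     /* still serving the first buffer? */
    };

    static curlioerr rpc_ioctl(CURL *handle, int cmd, void *clientp)
    {
        struct rpc_state *rpc = clientp;

        if (cmd != CURLIOCMD_RESTARTREAD)
            return CURLIOE_UNKNOWNCMD;
        if (!rpc->initial_buffer)
            return CURLIOE_FAILRESTART; /* earlier buffers are already gone */
        rpc->pos = 0;                   /* replay the first buffer from the start */
        return CURLIOE_OK;
    }

    /* registered on the handle with:
     *   curl_easy_setopt(curl, CURLOPT_IOCTLFUNCTION, rpc_ioctl);
     *   curl_easy_setopt(curl, CURLOPT_IOCTLDATA, rpc);
     */
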
2009-11-23  remote-curl.c: fix rpc_out()  (Tay Ray Chuan; 1 file changed, -1/+1)

Remove the extraneous semicolon (';') at the end of the if statement that
allowed the code in its block to execute regardless of the condition.

This fixes pushing to a smart http backend with chunked encoding.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

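The class of bug being fixed, shown on a made-up clamp helper rather than the
real rpc_out() code:

    #include <stddef.h>

    /* buggy form: the trailing ';' ends the if, so the clamp always runs */
    static size_t clamp_buggy(size_t avail, size_t max)
    {
        if (avail > max);       /* BUG: empty statement */
            avail = max;        /* executed regardless of the condition */
        return avail;
    }

    /* corrected form */
    static size_t clamp_fixed(size_t avail, size_t max)
    {
        if (avail > max)
            avail = max;
        return avail;
    }
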
2009-11-22  Disable CURLOPT_NOBODY before enabling CURLOPT_PUT and CURLOPT_POST  (Martin Storsjö; 1 file changed, -1/+1)

This works around a bug in curl versions up to 7.19.4, where disabling
the CURLOPT_NOBODY option sets the internal state incorrectly considering
that CURLOPT_PUT was enabled earlier. The bug is discussed at
http://curl.haxx.se/bug/view.cgi?id=2727981 and is corrected in the
latest version of curl in CVS.

This bug usually has no impact on git, but may surface if using
multi-pass authentication methods.

Signed-off-by: Martin Storsjo <martin@martin.st>
Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

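A minimal sketch of the ordering the workaround relies on when a reused
handle is reconfigured for a POST (simplified; not the actual remote-curl
code):

    #include <curl/curl.h>

    static void prepare_rpc_post(CURL *curl)
    {
        curl_easy_setopt(curl, CURLOPT_NOBODY, 0L); /* clear NOBODY first ... */
        curl_easy_setopt(curl, CURLOPT_POST, 1L);   /* ... then enable POST */
    }
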
2009-11-20  Merge branch 'sp/smart-http'  (Junio C Hamano; 1 file changed, -44/+715)

* sp/smart-http: (37 commits)
  http-backend: Let gcc check the format of more printf-type functions.
  http-backend: Fix access beyond end of string.
  http-backend: Fix bad treatment of uintmax_t in Content-Length
  t5551-http-fetch: Work around broken Accept header in libcurl
  t5551-http-fetch: Work around some libcurl versions
  http-backend: Protect GIT_PROJECT_ROOT from /../ requests
  Git-aware CGI to provide dumb HTTP transport
  http-backend: Test configuration options
  http-backend: Use http.getanyfile to disable dumb HTTP serving
  test smart http fetch and push
  http tests: use /dumb/ URL prefix
  set httpd port before sourcing lib-httpd
  t5540-http-push: remove redundant fetches
  Smart HTTP fetch: gzip requests
  Smart fetch over HTTP: client side
  Smart push over HTTP: client side
  Discover refs via smart HTTP server when available
  http-backend: more explict LocationMatch
  http-backend: add example for gitweb on same URL
  http-backend: use mod_alias instead of mod_rewrite
  ...

Conflicts:
  .gitignore
  remote-curl.c

2009-11-04  Smart HTTP fetch: gzip requests  (Shawn O. Pearce; 1 file changed, -0/+50)

The upload-pack requests are mostly plain text and they compress rather
well. Deflating them with Content-Encoding: gzip can easily drop the size
of the request by 50%, reducing the amount of data to transfer as we
negotiate the common commits.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

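A hedged sketch with plain zlib of how a request body can be gzip-compressed
before being posted with a "Content-Encoding: gzip" header; buffer sizing and
error handling are simplified for illustration:

    #include <string.h>
    #include <zlib.h>

    static size_t gzip_body(const unsigned char *in, size_t in_len,
                            unsigned char *out, size_t out_len)
    {
        z_stream s;

        memset(&s, 0, sizeof(s));
        /* windowBits of 15 + 16 asks zlib to emit a gzip header/trailer */
        if (deflateInit2(&s, Z_BEST_COMPRESSION, Z_DEFLATED,
                         15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
            return 0;
        s.next_in = (unsigned char *)in;
        s.avail_in = (uInt)in_len;
        s.next_out = out;
        s.avail_out = (uInt)out_len;
        if (deflate(&s, Z_FINISH) != Z_STREAM_END) {
            deflateEnd(&s);
            return 0;
        }
        deflateEnd(&s);
        /* the caller then adds "Content-Encoding: gzip" to the POST headers */
        return out_len - s.avail_out;
    }
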
2009-11-04  Smart fetch over HTTP: client side  (Shawn O. Pearce; 1 file changed, -4/+65)

The git-remote-curl backend detects if the remote server supports the
git-upload-pack service, and if so, runs git-fetch-pack locally in a pipe
to generate the want/have commands.

The advertisements from the server that were obtained during the
discovery are passed into git-fetch-pack before the POST request starts,
permitting server capability discovery and enablement.

Common objects that are discovered are appended onto the request as have
lines and are sent again on the next request. This allows the remote side
to reinitialize its in-memory list of common objects during the next
request.

Because all requests are relatively short, below git-remote-curl's 1 MiB
buffer limit, requests will use the standard Content-Length header and be
valid HTTP/1.0 POST requests. This makes the fetch client more tolerant
of proxy servers which don't support HTTP/1.1 or the chunked transfer
encoding.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-11-04  Smart push over HTTP: client side  (Shawn O. Pearce; 1 file changed, -2/+233)

The git-remote-curl backend detects if the remote server supports the
git-receive-pack service, and if so, runs git-send-pack in a pipe to dump
the command and pack data as a single POST request.

The advertisements from the server that were obtained during the
discovery are passed into git-send-pack before the POST request starts.
This permits git-send-pack to operate largely unmodified.

For smaller packs (those under 1 MiB) a HTTP/1.0 POST with a
Content-Length is used, permitting interaction with any server. The 1 MiB
limit is arbitrary, but is sufficient to fit most deltas created by human
authors against text sources with the occasional small binary file (e.g.
a few KiB icon image). The configuration option http.postBuffer can be
used to increase (or shrink) this buffer if the default is not
sufficient.

For larger packs which cannot be spooled entirely into the helper's
memory space (due to http.postBuffer being too small), the POST request
requires HTTP/1.1 and sets "Transfer-Encoding: chunked". This permits the
client to upload an unknown amount of data in one HTTP transaction
without needing to pregenerate the entire pack file locally.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

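The buffered-versus-chunked decision, sketched with plain libcurl calls; the
setup_post() function and its post_buffer parameter are illustrative
stand-ins for the logic described above, not the real remote-curl code:

    #include <stddef.h>
    #include <curl/curl.h>

    static void setup_post(CURL *curl, struct curl_slist **headers,
                           const char *body, size_t body_len,
                           size_t post_buffer)
    {
        if (body_len <= post_buffer) {
            /* small pack: buffered HTTP/1.0 POST with Content-Length */
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
            curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)body_len);
        } else {
            /* large pack: stream via a read callback (registered
             * elsewhere) and announce chunked transfer encoding */
            curl_easy_setopt(curl, CURLOPT_POST, 1L);
            *headers = curl_slist_append(*headers,
                                         "Transfer-Encoding: chunked");
        }
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, *headers);
    }
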
2009-11-04  Discover refs via smart HTTP server when available  (Shawn O. Pearce; 1 file changed, -17/+131)

Instead of loading the cached info/refs, try to use the smart HTTP
version when the server supports it. Since the smart variant is actually
the pkt-line stream from the start of either upload-pack or receive-pack
we need to parse these through get_remote_heads, which requires a
background thread to feed its pipe.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-11-03  Allow curl helper to work without a local repository  (Daniel Barkalow; 1 file changed, -1/+4)

It's okay to use the curl helper without a local repository, so long as
you don't use "fetch". There aren't any git programs that would try to
use it, and it doesn't make sense to try it (since there's nowhere to
write the results), but we may as well be clear.

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-10-30  Move WebDAV HTTP push under remote-curl  (Shawn O. Pearce; 1 file changed, -12/+85)

The remote helper interface now supports the push capability, which can
be used to ask the implementation to push one or more specs to the remote
repository. For remote-curl we implement this by calling the existing
WebDAV based git-http-push executable.

Internally the helper interface uses the push_refs transport hook so that
the complexity of the refspec parsing and matching can be reused between
remote implementations. When possible, however, the helper protocol uses
the source ref name rather than the source SHA-1, thereby allowing the
helper to access this name if it is useful.

From Clemens Buchacher <drizzd@aon.at>:
update http tests according to remote-curl capabilities

 o Pushing packed refs is now fixed.
 o The transport helper fails if refs are already up-to-date. Add a test
   for that.
 o The transport helper will notice if refs are already up-to-date. We
   therefore need to update server info in the unpacked-refs test.
 o The transport helper will purge deleted branches automatically.
 o Use a variable ($ORIG_HEAD) instead of full SHA-1 name.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
CC: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-10-30  remote-helpers: Support custom transport options  (Shawn O. Pearce; 1 file changed, -1/+73)

Some transports, like the native pack transport implemented by
fetch-pack, support useful features like depth or include tags. These
should be exposed if the underlying helper knows how to use them.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-10-30  remote-helpers: Fetch more than one ref in a batch  (Shawn O. Pearce; 1 file changed, -11/+77)

Some network protocols (e.g. native git://) are able to fetch more than
one ref at a time and reduce the overall transfer cost by combining the
requests into a single exchange. Instead of feeding each fetch request
one at a time to the helper, feed all of them at once so the helper can
decide whether or not it should batch them.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

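A hedged sketch of the batching idea on the helper side: the main command
loop hands over the first "fetch <sha1> <refname>" line, the helper collects
the rest of the batch up to the blank line, then acts on the whole list at
once. handle_fetch_batch() and fetch_all() are hypothetical names, not the
real helper code:

    #include <stdio.h>
    #include <string.h>

    struct fetch_req { char sha1[41]; char name[256]; };

    static void handle_fetch_batch(const char *first_line)
    {
        struct fetch_req reqs[256];
        char line[512];
        int n = 0;

        strncpy(line, first_line, sizeof(line) - 1);
        line[sizeof(line) - 1] = '\0';
        do {
            if (!line[0] || line[0] == '\n')
                break;              /* blank line: the batch is complete */
            if (n < 256 && sscanf(line, "fetch %40s %255s",
                                  reqs[n].sha1, reqs[n].name) == 2)
                n++;
        } while (fgets(line, sizeof(line), stdin));

        /* fetch_all(reqs, n);  -- hypothetical: one exchange for the batch */
        printf("\n");               /* tell git the whole batch is done */
        fflush(stdout);
    }
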
2009-10-30  remote-curl: Refactor walker initialization  (Shawn O. Pearce; 1 file changed, -10/+14)

We will need the walker, url and remote in other functions as the code
grows larger to support smart HTTP. Extract this out into a set of
globals we can easily reference once configured.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-10-14  clone: Supply the right commit hash to post-checkout when -b is used  (Björn Steinbrink; 1 file changed, -0/+1)

When we use -b <branch>, we may check out something other than what the
remote's HEAD references, but we still used remote_head to supply the new
ref value to the post-checkout hook, which is wrong. So instead of using
remote_head to find the value to be passed to the post-checkout hook, we
have to use our_head_points_at, which is always correctly set up, even if
-b is not used.

This also fixes a segfault when "clone -b <branch>" is used with a remote
repo that doesn't have a valid HEAD, as in such a case remote_head is
NULL, but we still tried to access it.

Reported-by: Devin Cofer <ranguvar@archlinux.us>
Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2009-10-13  remote-curl: add missing initialization of argv0_path  (Johannes Sixt; 1 file changed, -0/+1)

All programs, in particular the stand-alone programs (non-builtins), must
call git_extract_argv0_path(argv[0]) in order to help builds that derive
the installation prefix at runtime, such as the MinGW build. Without this
call, the program segfaults (or raises an assertion failure).

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Tested-by: Michael Wookey <michaelwookey@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

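A minimal sketch of the required call at program startup for a stand-alone
(non-builtin) helper; at the time of this commit the declaration lived in
exec_cmd.h inside the git tree:

    #include "exec_cmd.h"

    int main(int argc, char **argv)
    {
        /* must run early so runtime-prefix builds can locate themselves */
        git_extract_argv0_path(argv[0]);

        /* ... the rest of the helper's setup and command loop ... */
        return 0;
    }
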
2009-08-05  Use an external program to implement fetching with curl  (Daniel Barkalow; 1 file changed, -0/+139)

Use the transport native helper mechanism to fetch by http (and ftp,
etc).

Signed-off-by: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>