2009-11-04  http-backend: Use http.getanyfile to disable dumb HTTP serving  (Shawn O. Pearce, 2 files, -6/+36)

Some repository owners may wish to enable smart HTTP, but disallow dumb content serving. Disallowing dumb serving might be because the owners want to rely upon reachability to control which objects clients may access from the repository, or they just want to encourage clients to use the more bandwidth efficient transport.

If http.getanyfile is set to false the backend CGI will return with '403 Forbidden' when an object file is accessed by a dumb client.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
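For instance, to turn dumb serving off for a repository that is already exported over smart HTTP, the owner could run the following inside the repository on the server (a minimal illustration):

    git config http.getanyfile false

Requests for object files from dumb clients are then answered with '403 Forbidden', while the smart /git-upload-pack and /git-receive-pack endpoints keep working.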
2009-11-04  test smart http fetch and push  (Shawn O. Pearce, 5 files, -2/+219)

The top level directory "/smart/" of the test Apache server is mapped through our git-http-backend CGI, but uses the same underlying repository space as the server's document root. This is the simplest installation possible.

Server logs are checked to verify the client has accessed only the smart URLs during the test. During fetch testing the headers are also logged from libcurl to ensure we are making a reasonably sane HTTP request, and getting back reasonably sane response headers from the CGI.

When validating the request headers used during smart fetch we munge away the actual Content-Length and replace it with the placeholder "xxx". This avoids unnecessary variability in the test caused by an unrelated change in the requested capabilities in the first want line of the request. However, we still want to look for and verify that Content-Length was used, because smaller payloads should be using Content-Length and not "Transfer-Encoding: chunked".

When validating the server response headers we must discard both Content-Length and Transfer-Encoding, as Apache 2 can use either format to return our response. During development of this test I observed Apache returning both forms, depending on when the processes got CPU time. If our CGI returned the pack data quickly, Apache just buffered the whole thing and returned a Content-Length. If our CGI took just a bit too long to complete, Apache flushed its buffer and instead used "Transfer-Encoding: chunked".

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
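A rough sketch of how such a mapping could look in the Apache configuration is shown below; the directives and paths are illustrative guesses, not the exact setup used by the test harness:

    SetEnv GIT_PROJECT_ROOT /var/www/htdocs
    ScriptAliasMatch "^/smart/(.*)$" /usr/libexec/git-core/git-http-backend/$1

With GIT_PROJECT_ROOT pointed at the document root, the /smart/ URLs and the plain dumb URLs resolve to the same repositories on disk.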
2009-11-04  http tests: use /dumb/ URL prefix  (Shawn O. Pearce, 3 files, -8/+13)

To clarify what part of the HTTP transport is being tested we change the URLs used by existing tests to include /dumb/ at the start, indicating they use the non-Git aware code paths.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  set httpd port before sourcing lib-httpd  (Clemens Buchacher, 1 file, -4/+3)

If LIB_HTTPD_PORT is not set already, lib-httpd will set it to the default 8111.

Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
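The behaviour amounts to the usual shell default-value idiom, roughly:

    LIB_HTTPD_PORT=${LIB_HTTPD_PORT-8111}

so a test script that wants a different port only has to export LIB_HTTPD_PORT before sourcing lib-httpd.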
2009-11-04  t5540-http-push: remove redundant fetches  (Tay Ray Chuan, 1 file, -2/+0)

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  Smart HTTP fetch: gzip requests  (Shawn O. Pearce, 1 file, -0/+50)

The upload-pack requests are mostly plain text and they compress rather well. Deflating them with Content-Encoding: gzip can easily drop the size of the request by 50%, reducing the amount of data to transfer as we negotiate the common commits.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
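A compressed RPC request might therefore carry headers along these lines (purely illustrative; the repository path and body depend on the fetch in progress):

    POST /project.git/git-upload-pack HTTP/1.1
    Content-Type: application/x-git-upload-pack-request
    Content-Encoding: gzip

The body is the gzip-compressed want/have stream, which the server side inflates before handing it to upload-pack.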
2009-11-04  Smart fetch over HTTP: client side  (Shawn O. Pearce, 3 files, -22/+160)

The git-remote-curl backend detects if the remote server supports the git-upload-pack service, and if so, runs git-fetch-pack locally in a pipe to generate the want/have commands.

The advertisements from the server that were obtained during the discovery are passed into git-fetch-pack before the POST request starts, permitting server capability discovery and enablement.

Common objects that are discovered are appended onto the request as have lines and are sent again on the next request. This allows the remote side to reinitialize its in-memory list of common objects during the next request.

Because all requests are relatively short, below git-remote-curl's 1 MiB buffer limit, requests will use the standard Content-Length header and be valid HTTP/1.0 POST requests. This makes the fetch client more tolerant of proxy servers which don't support HTTP/1.1 or the chunked transfer encoding.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  Smart push over HTTP: client side  (Shawn O. Pearce, 8 files, -15/+374)

The git-remote-curl backend detects if the remote server supports the git-receive-pack service, and if so, runs git-send-pack in a pipe to dump the command and pack data as a single POST request.

The advertisements from the server that were obtained during the discovery are passed into git-send-pack before the POST request starts. This permits git-send-pack to operate largely unmodified.

For smaller packs (those under 1 MiB) an HTTP/1.0 POST with a Content-Length is used, permitting interaction with any server. The 1 MiB limit is arbitrary, but is sufficient to fit most deltas created by human authors against text sources with the occasional small binary file (e.g. a few KiB icon image). The configuration option http.postBuffer can be used to increase (or shrink) this buffer if the default is not sufficient.

For larger packs which cannot be spooled entirely into the helper's memory space (due to http.postBuffer being too small), the POST request requires HTTP/1.1 and sets "Transfer-Encoding: chunked". This permits the client to upload an unknown amount of data in one HTTP transaction without needing to pregenerate the entire pack file locally.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
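To raise the spooling limit, for example to 10 MiB, a user could set (the value shown is arbitrary):

    git config http.postBuffer 10485760

Packs larger than the configured buffer fall back to the chunked HTTP/1.1 path described above.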
2009-11-04  Discover refs via smart HTTP server when available  (Shawn O. Pearce, 1 file, -17/+131)

Instead of loading the cached info/refs, try to use the smart HTTP version when the server supports it. Since the smart variant is actually the pkt-line stream from the start of either upload-pack or receive-pack we need to parse these through get_remote_heads, which requires a background thread to feed its pipe.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
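Concretely, the smart discovery request is an ordinary GET with a service parameter, something like (repository name illustrative):

    GET /project.git/info/refs?service=git-upload-pack HTTP/1.1

A smart server answers with a pkt-line ref advertisement; otherwise the client falls back to the cached info/refs file used by the dumb protocol.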
2009-11-04  http-backend: more explicit LocationMatch  (Mark Lodato, 1 file, -1/+1)

In the git-http-backend examples, only match git-receive-pack within /git/.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  http-backend: add example for gitweb on same URL  (Mark Lodato, 1 file, -0/+33)

In the git-http-backend documentation, add an example of how to set up gitweb and git-http-backend on the same URL by using a series of mod_alias commands.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  http-backend: use mod_alias instead of mod_rewrite  (Mark Lodato, 1 file, -7/+3)

In the git-http-backend documentation, use mod_alias exclusively, instead of using a combination of mod_alias and mod_rewrite. This makes the example slightly shorter and a bit more clear.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  http-backend: reword some documentation  (Mark Lodato, 1 file, -4/+8)

Clarify some of the git-http-backend documentation, particularly:

  * In the Description, state that smart/dumb HTTP fetch and smart HTTP push are supported, state that authenticated clients allow push, and remove the note that this is only suited for read-only updates.

  * At the start of Examples, state explicitly what URL is mapping to what location on disk.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  http-backend: add GIT_PROJECT_ROOT environment var  (Mark Lodato, 2 files, -25/+39)

Add a new environment variable, GIT_PROJECT_ROOT, to override the method of using PATH_TRANSLATED to find the git repository on disk. This makes it much easier to configure the web server, especially when the web server's DocumentRoot does not contain the git repositories, which is the usual case.

Signed-off-by: Mark Lodato <lodatom@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
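With this variable an Apache setup can keep the repositories outside the document root, for example (paths are illustrative):

    SetEnv GIT_PROJECT_ROOT /srv/git
    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

A request for /git/project.git/HEAD is then served from /srv/git/project.git/HEAD regardless of where DocumentRoot points.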
2009-11-04  Smart fetch and push over HTTP: server side  (Shawn O. Pearce, 2 files, -4/+359)

Requests for $GIT_URL/git-receive-pack and $GIT_URL/git-upload-pack are forwarded to the corresponding backend process by directly executing it and leaving stdin and stdout connected to the invoking web server. Prior to starting the backend process the HTTP response headers are sent, thereby freeing the backend from needing to know about the HTTP protocol.

Requests that are encoded with Content-Encoding: gzip are automatically inflated before being streamed into the backend. This is primarily useful for the git-upload-pack backend, which receives highly repetitive text data from clients that easily compresses to 50% of its original size.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-11-04  Add stateless RPC options to upload-pack, receive-pack  (Shawn O. Pearce, 2 files, -10/+56)

When --stateless-rpc is passed as a command line parameter to upload-pack or receive-pack the programs now assume they may perform only a single read-write cycle with stdin and stdout. This fits with the HTTP POST request processing model where a program may read the request, write a response, and must exit.

When --advertise-refs is passed as a command line parameter only the initial ref advertisement is output, and the program exits immediately. This fits with the HTTP GET request model, where no request content is received but a response must be produced.

HTTP headers and/or environment are not processed here, but instead are assumed to be handled by the program invoking either service backend.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
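An invoking web frontend would therefore drive the backend in two modes, roughly like this (repository path illustrative):

    git upload-pack --stateless-rpc --advertise-refs /srv/git/project.git   # GET: emit the ref advertisement and exit
    git upload-pack --stateless-rpc /srv/git/project.git                    # POST: one request/response cycle over stdin/stdout

The same pair of invocations applies to receive-pack for the push side.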
2009-11-04  Git-aware CGI to provide dumb HTTP transport  (Shawn O. Pearce, 4 files, -0/+396)

The git-http-backend CGI can be configured into any Apache server using ScriptAlias, such as with the following configuration:

    LoadModule cgi_module /usr/libexec/apache2/mod_cgi.so
    LoadModule alias_module /usr/libexec/apache2/mod_alias.so

    ScriptAlias /git/ /usr/libexec/git-core/git-http-backend/

Repositories are accessed via the translated PATH_INFO.

The CGI is backwards compatible with the dumb client, allowing all older HTTP clients to continue to download repositories which are managed by the CGI.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30  remote-helpers: return successfully if everything up-to-date  (Clemens Buchacher, 2 files, -1/+3)

Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30  Move WebDAV HTTP push under remote-curl  (Shawn O. Pearce, 6 files, -54/+287)

The remote helper interface now supports the push capability, which can be used to ask the implementation to push one or more specs to the remote repository. For remote-curl we implement this by calling the existing WebDAV based git-http-push executable.

Internally the helper interface uses the push_refs transport hook so that the complexity of the refspec parsing and matching can be reused between remote implementations. When possible however the helper protocol uses source ref name rather than the source SHA-1, thereby allowing the helper to access this name if it is useful.

From Clemens Buchacher <drizzd@aon.at>: update http tests according to remote-curl capabilities

  o Pushing packed refs is now fixed.

  o The transport helper fails if refs are already up-to-date. Add a test for that.

  o The transport helper will notice if refs are already up-to-date. We therefore need to update server info in the unpacked-refs test.

  o The transport helper will purge deleted branches automatically.

  o Use a variable ($ORIG_HEAD) instead of full SHA-1 name.

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Clemens Buchacher <drizzd@aon.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
CC: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
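Over the helper's stdin a push is expressed as a batch of push commands terminated by a blank line, roughly (ref names are examples):

    push refs/heads/master:refs/heads/master
    push refs/heads/topic:refs/heads/topic
    (blank line ends the batch)

The helper then reports a per-ref result such as "ok refs/heads/master" or an error line for each destination ref, following the remote helper documentation.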
2009-10-30  remote-helpers: Support custom transport options  (Shawn O. Pearce, 3 files, -2/+198)

Some transports, like the native pack transport implemented by fetch-pack, support useful features like depth or include tags. These should be exposed if the underlying helper knows how to use them.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
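In the helper protocol such features are negotiated through 'option' commands before the transfer starts; a hedged sketch of the exchange (option names follow the remote helper documentation, values are illustrative):

    option depth 32
    ok
    option followtags true
    unsupported

Each option is answered with "ok", "unsupported", or an error, so git can tell which requested features the helper will actually honour.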
2009-10-30  remote-helpers: Fetch more than one ref in a batch  (Shawn O. Pearce, 3 files, -26/+115)

Some network protocols (e.g. native git://) are able to fetch more than one ref at a time and reduce the overall transfer cost by combining the requests into a single exchange. Instead of feeding each fetch request one at a time to the helper, feed all of them at once so the helper can decide whether or not it should batch them.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
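A batched request on the helper's stdin then looks roughly like the following, with a blank line marking the end of the batch (object names reduced to placeholders):

    fetch <sha1> refs/heads/master
    fetch <sha1> refs/heads/next
    (blank line ends the batch)

The helper is free to satisfy all of the fetch commands in one network round trip before reporting completion.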
2009-10-30  fetch: Allow transport -v -v -v to set verbosity to 3  (Shawn O. Pearce, 2 files, -2/+2)

Helpers might want a higher level of verbosity than just +1 (the porcelain default setting) and +2 (-v -v). Expand the field to allow verbosity in the range -1..3.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30  remote-curl: Refactor walker initialization  (Shawn O. Pearce, 1 file, -10/+14)

We will need the walker, url and remote in other functions as the code grows larger to support smart HTTP. Extract this out into a set of globals we can easily reference once configured.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
CC: Daniel Barkalow <barkalow@iabervon.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30  Add multi_ack_detailed capability to fetch-pack/upload-pack  (Shawn O. Pearce, 2 files, -22/+50)

When multi_ack_detailed is enabled the ACK continue messages returned by the remote upload-pack are broken out to describe the different states within the peer. This permits the client to better understand the server's in-memory state.

The fetch-pack/upload-pack protocol now looks like:

NAK
---------------------------------
  Always sent in response to "done" if there was no common base selected from the "have" lines (or no have lines were sent).

  * no multi_ack or multi_ack_detailed:
    Sent when the client has sent a pkt-line flush ("0000") and the server has not yet found a common base object.

  * either multi_ack or multi_ack_detailed:
    Always sent in response to a pkt-line flush.

ACK %s
-----------------------------------
  * no multi_ack or multi_ack_detailed:
    Sent in response to "have" when the object exists on the remote side and is therefore an object in common between the peers. The argument is the SHA-1 of the common object.

  * either multi_ack or multi_ack_detailed:
    Sent in response to "done" if there are common objects. The argument is the last SHA-1 determined to be common.

ACK %s continue
-----------------------------------
  * multi_ack only:
    Sent in response to "have". The remote side wants the client to consider this object as common, and immediately stop transmitting additional "have" lines for objects that are reachable from it. The reason the client should stop is not given, but is one of the two cases below available under multi_ack_detailed.

ACK %s common
-----------------------------------
  * multi_ack_detailed only:
    Sent in response to "have". Both sides have this object. As with "ACK %s continue" above, the client should stop sending have lines for objects reachable from the argument.

ACK %s ready
-----------------------------------
  * multi_ack_detailed only:
    Sent in response to "have". The client should stop transmitting objects which are reachable from the argument, and send "done" soon to get the objects.

    If the remote side has the specified object, it should first send an "ACK %s common" message prior to sending "ACK %s ready".

    Clients may still submit additional "have" lines if there are more side branches for the client to explore that might be added to the common set and reduce the number of objects to transfer.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
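Put together, a negotiation against a multi_ack_detailed server might proceed like this (object names are placeholders):

    C: have <sha1-A>
    S: ACK <sha1-A> common
    C: have <sha1-B>
    S: ACK <sha1-B> ready
    C: done
    S: ACK <sha1-B>

After "ready" the client knows the server can already build the pack, so it stops advertising further history and asks for the objects.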
2009-10-30Move "get_ack()" back to fetch-packLibravatar Shawn O. Pearce3-22/+21
In 41cb7488 Linus moved this function to connect.c for reuse inside of the git-clone-pack command. That was 2005, but in 2006 Junio retired git-clone-pack in commit efc7fa53. Since then the only caller has been fetch-pack. Since this ACK/NAK exchange is only used by the fetch-pack/upload-pack protocol we should move it back to be a private detail of fetch-pack. Signed-off-by: Shawn O. Pearce <spearce@spearce.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30  fetch-pack: Use a strbuf to compose the want list  (Shawn O. Pearce, 3 files, -25/+39)

This change is being offered as a refactoring to make later commits in the smart HTTP series easier.

By changing the enabled capabilities to be formatted in a strbuf it is easier to add a new capability to the set of supported capabilities.

By formatting the want portion of the request into a strbuf and writing it as a whole block we can later decide to hold onto the req_buf (instead of releasing it) to recycle in stateless communications.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30  pkt-line: Make packet_read_line easier to debug  (Shawn O. Pearce, 1 file, -1/+1)

When there is an error parsing the 4 byte length component we now display it as part of the die message; this may hint as to what data was misunderstood by the application.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-30  pkt-line: Add strbuf based functions  (Shawn O. Pearce, 2 files, -12/+76)

These routines help to work with pkt-line values inside of a strbuf, permitting simple formatting of buffered network messages.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
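For reference, a pkt-line is a 4-character hexadecimal length (which counts the prefix itself) followed by the payload, so a want line and a flush packet are framed as, for example:

    0032want <40-hex object name>\n
    0000

Here "0032" is 50 bytes in hex (4 for the prefix, 5 for "want ", 40 for the object name, 1 for the newline), and "0000" is the flush packet, which carries no payload.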
2009-10-30  http-push: fix check condition on http.c::finish_http_pack_request()  (Tay Ray Chuan, 1 file, -1/+1)

Check that http.c::finish_http_pack_request() returns 0 (for success).

Signed-off-by: Tay Ray Chuan <rctay89@gmail.com>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-14  Merge branch 'pv/maint-add-p-no-exclude'  (Junio C Hamano, 2 files, -1/+15)

* pv/maint-add-p-no-exclude:
  git-add--interactive: never skip files included in index
2009-10-14  Merge branch 'maint'  (Junio C Hamano, 2 files, -0/+13)

* maint:
  sha1_file: Fix infinite loop when pack is corrupted
2009-10-14  sha1_file: Fix infinite loop when pack is corrupted  (Shawn O. Pearce, 2 files, -0/+13)

Some types of corruption to a pack may confuse the deflate stream which stores an object. In Andy's reported case a 36 byte region of the pack was overwritten, leading to what appeared to be a valid deflate stream that was trying to produce a result larger than our allocated output buffer could accept.

Z_BUF_ERROR is returned from inflate() if either the input buffer needs more input bytes, or the output buffer has run out of space. Previously we only considered the former case, as it meant we needed to move the stream's input buffer to the next window in the pack.

We now abort the loop if inflate() returns Z_BUF_ERROR without consuming the entire input buffer it was given, or has filled the entire output buffer but has not yet returned Z_STREAM_END. Either state is a clear indicator that this loop is not working as expected, and should not continue.

This problem cannot occur with loose objects as we open the entire loose object as a single buffer and treat Z_BUF_ERROR as an error.

Reported-by: Andy Isaacson <adi@hexapodia.org>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Acked-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-14  Merge branch 'maint'  (Junio C Hamano, 3 files, -2/+11)

* maint:
  change throughput display units with fast links
  clone: Supply the right commit hash to post-checkout when -b is used
  remote-curl: add missing initialization of argv0_path
2009-10-14  change throughput display units with fast links  (Nicolas Pitre, 1 file, -1/+7)

Switch to MiB/s when the connection is fast enough (i.e. on a LAN).

Signed-off-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-14  clone: Supply the right commit hash to post-checkout when -b is used  (Björn Steinbrink, 2 files, -1/+3)

When we use -b <branch>, we may checkout something else than what the remote's HEAD references, but we still used remote_head to supply the new ref value to the post-checkout hook, which is wrong. So instead of using remote_head to find the value to be passed to the post-checkout hook, we have to use our_head_points_at, which is always correctly setup, even if -b is not used.

This also fixes a segfault when "clone -b <branch>" is used with a remote repo that doesn't have a valid HEAD, as in such a case remote_head is NULL, but we still tried to access it.

Reported-by: Devin Cofer <ranguvar@archlinux.us>
Signed-off-by: Björn Steinbrink <B.Steinbrink@gmx.de>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-13  remote-curl: add missing initialization of argv0_path  (Johannes Sixt, 1 file, -0/+1)

All programs, in particular also the stand-alone programs (non-builtins) must call git_extract_argv0_path(argv[0]) in order to help builds that derive the installation prefix at runtime, such as the MinGW build. Without this call, the program segfaults (or raises an assertion failure).

Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Tested-by: Michael Wookey <michaelwookey@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-13  Merge branch 'maint'  (Junio C Hamano, 1 file, -1/+2)

* maint:
  git-stash documentation: mention default options for 'list'
2009-10-13  Merge branch 'maint-1.6.4' into maint  (Junio C Hamano, 1 file, -1/+2)

* maint-1.6.4:
  git-stash documentation: mention default options for 'list'
2009-10-12  Let --decorate show HEAD position  (Thomas Rast, 3 files, -2/+3)

'git log --graph --oneline --decorate --all' is a useful way to get a general overview of the repository state, similar to 'gitk --all'. Let it indicate the position of HEAD by loading that ref too, so that the --decorate code can see it.

Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-12  git-stash documentation: mention default options for 'list'  (Miklos Vajna, 1 file, -1/+2)

Signed-off-by: Miklos Vajna <vmiklos@frugalware.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-12  bash completion: complete refs for git-grep  (Thomas Rast, 1 file, -1/+2)

Before the --, always attempt ref completion. This helps with entering the <treeish> arguments to git-grep. As a bonus, you can work around git-grep's current lack of --all by hitting M-*, ugly as the resulting command line may be.

Strictly speaking, completing the regular expression argument (or option argument) makes no sense. However, we cannot prevent _all_ completion (it will fall back to filenames), so we dispense with any additional complication to detect whether the user still has to enter a regular expression.

Signed-off-by: Thomas Rast <trast@student.ethz.ch>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-11  diff.c: stylefix  (Felipe Contreras, 1 file, -1/+1)

Essentially; s/type* /type */ as per the coding guidelines.

Signed-off-by: Felipe Contreras <felipe.contreras@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-11  Documentation: add 'git replace' to main git manpage  (SZEDER Gábor, 1 file, -0/+1)

Signed-off-by: SZEDER Gábor <szeder@ira.uka.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-10  git-add--interactive: never skip files included in index  (Pauli Virtanen, 2 files, -1/+15)

Make "git add -p" not skip files that are in the index even if they are excluded (by .gitignore etc.). This fixes the contradictory behavior that "git status" and "git commit -a" listed such files as modified, but "git add -p FILENAME" ignored them.

Signed-off-by: Pauli Virtanen <pav@iki.fi>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-10  GIT 1.6.5  (Junio C Hamano, 3 files, -7/+6)

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-10  git-svn: hide find_parent_branch output in double quiet mode  (Simon Arlott, 1 file, -7/+12)

Hide find_parent_branch logging when -qq is specified. This eliminates more unnecessary output when run from cron, e.g.:

    Found possible branch point: http://undernet-ircu.svn.sourceforge.net/svnroot/undernet-ircu/ircu2/trunk => http://undernet-ircu.svn.sourceforge.net/svnroot/undernet-ircu/ircu2/branches/authz, 1919
    Found branch parent: (authz) ea061d76aea985dc0208d36fa5e0b2249b698557
    Following parent with do_switch
    Successfully followed parent

Acked-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-09  Documentation: clone: clarify discussion of initial branch  (Jonathan Nieder, 1 file, -2/+3)

When saying the initial branch is equal to the currently active remote branch, it is probably intended that the branch heads point to the same commit. Maybe it would be more useful to a new user to emphasize that the tree contents and history are the same.

More important, probably, is that this new branch is set up so that "git pull" merges changes from the corresponding remote branch. The next paragraph addresses that directly. What the reader needs to know to begin with is that (1) the initial branch is your own; if you do not pull, it won't get updated, and that (2) the initial branch starts out at the same commit as the upstream.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-10-09  Merge branch 'rs/maint-archive-prefix'  (Junio C Hamano, 2 files, -3/+16)

* rs/maint-archive-prefix:
  Git archive and trailing "/" in prefix
2009-10-09  Merge branch 'fc/mutt-alias'  (Junio C Hamano, 1 file, -1/+1)

* fc/mutt-alias:
  send-email: fix mutt regex for grouped aliases
2009-10-09  Merge branch 'ef/msvc-noreturn'  (Junio C Hamano, 3 files, -9/+11)

* ef/msvc-noreturn:
  add NORETURN_PTR for function pointers
  increase portability of NORETURN declarations