path: root/send-pack.c
2016-08-10  Merge branch 'rs/use-strbuf-addstr' into maint  (Junio C Hamano; 1 file, -1/+1)
* rs/use-strbuf-addstr:
  use strbuf_addstr() instead of strbuf_addf() with "%s"
  use strbuf_addstr() for adding constant strings to a strbuf
2016-08-01  use strbuf_addstr() for adding constant strings to a strbuf  (René Scharfe; 1 file, -1/+1)
Replace uses of strbuf_addf() for adding strings with more lightweight strbuf_addstr() calls. In http-push.c it becomes easier to see what's going on without having to verify that the definition of PROPFIND_ALL_REQUEST doesn't contain any format specifiers.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-06-08  send-pack: use buffered I/O to talk to pack-objects  (Jeff King; 1 file, -17/+16)
We start a pack-objects process and then write all of the positive and negative sha1s to it over a pipe. We do so by formatting each item into a fixed-size buffer and then writing each individually. This has two drawbacks:

  1. There's some manual computation of the buffer size, which is not
     immediately obvious is correct (though it is).

  2. We write() once per sha1, which means a lot more system calls
     than are necessary.

We can solve both by wrapping the pipe descriptor in a stdio handle; this is the same technique used by upload-pack when serving fetches.

Note that we can also simplify and improve the error handling here. The original detected a single write error and broke out of the loop (presumably to avoid writing the error message over and over), but never actually acted on seeing an error; we just fed truncated input and took whatever pack-objects returned. In practice, this probably didn't matter, as the likely errors would be caused by pack-objects dying (and we'd probably just die with SIGPIPE anyway). But we can easily make this simpler and more robust; the stdio handle keeps an error flag, which we can check at the end.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
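To illustrate the stdio-wrapping technique the message above describes, here is a minimal sketch in plain C (not the actual send-pack.c change; the function name and arguments are invented for the example):

    #include <stdio.h>

    /* Hypothetical: feed a list of hex object names to pack-objects over a
     * pipe, using one buffered stream instead of one write() per name. */
    static int feed_pack_objects(int pipe_fd, const char **hex, int nr)
    {
            FILE *po_in = fdopen(pipe_fd, "w"); /* wrap the pipe descriptor */
            int i, err;

            if (!po_in)
                    return -1;
            for (i = 0; i < nr; i++)
                    fprintf(po_in, "%s\n", hex[i]); /* buffered; few syscalls */
            err = ferror(po_in);                    /* one check at the end */
            if (fclose(po_in))
                    err = 1;
            return err ? -1 : 0;
    }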
2016-04-20  send-pack: isolate sigpipe in demuxer thread  (Jeff King; 1 file, -0/+1)
If we get an error from pack-objects, we may exit send_pack() early, before reading the server's status response. In such a case, we may racily see SIGPIPE from our async demuxer (which is trying to write that status back to us), and we'd prefer to continue pushing the error up the call stack, rather than taking down the whole process with signal death. This is safe to do because our demuxer just calls recv_sideband, whose data writes are all done with write_or_die(), which will notice SIGPIPE.

We do also write sideband 2 to stderr, and we would no longer die on SIGPIPE there (if it were piped in the first place, and if the piped program went away). But that's probably a good thing, as it likewise should not abort the push process at all (neither immediately by signal, nor eventually by reporting failure back to the main thread).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
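Git's async machinery handles this internally; the general idea of keeping SIGPIPE away from one thread can be sketched with plain pthreads (an illustration of the concept only, not the send-pack.c change itself):

    #include <pthread.h>
    #include <signal.h>

    /* Block SIGPIPE in the calling thread so its write()s fail with EPIPE
     * instead of killing the whole process. */
    static void block_sigpipe_in_this_thread(void)
    {
            sigset_t set;

            sigemptyset(&set);
            sigaddset(&set, SIGPIPE);
            pthread_sigmask(SIG_BLOCK, &set, NULL);
    }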
2016-04-20  send-pack: close demux pipe before finishing async process  (Jeff King; 1 file, -2/+4)
This fixes a deadlock on the client side when pushing a large number of refs from a corrupted repo. There's a reproduction script below, but let's start with a human-readable explanation.

The client side of a push goes something like this:

  1. Start an async process to demux sideband coming from the server.

  2. Run pack-objects to send the actual pack, and wait for its status
     via finish_command().

  3. If pack-objects failed, abort immediately.

  4. If pack-objects succeeded, read the per-ref status from the
     server, which is actually coming over a pipe from the demux
     process started in step 1.

We run finish_async() to wait for and clean up the demux process in two places. In step 3, if we see an error, we want it to end early. And after step 4, it should be done writing any data and we are just cleaning it up.

Let's focus on the error case first. We hand the output descriptor to the server over to pack-objects. So by the time it has returned an error to us, it has closed the descriptor and the server has gotten EOF. The server will mark all refs as failed with "unpacker error" and send us back the status for each (followed by EOF).

This status goes to the demuxer thread, which relays it over a pipe to the main thread. But the main thread never even tries reading the status. It's trying to bail because of the pack-objects error, and is waiting for the demuxer thread to finish. If there are a small number of refs, that's OK; the demuxer thread writes into the pipe buffer, sees EOF from the server, and quits. But if there are a large number of refs, it may block on write() back to the main thread, leading to a deadlock (the main thread is waiting for the demuxer to finish, the demuxer is waiting for the main thread to read).

We can break this deadlock by closing the pipe between the demuxer and the main thread before calling finish_async(). Then the demuxer gets a write() error and exits.

The non-error case usually just works, because we will have read all of the data from the other side. We do close demux.out already, but we only do so _after_ calling finish_async(). This is OK because there shouldn't be any more data coming from the server. But technically we've only read to a flush packet, and a broken or malicious server could be sending more cruft. In such a case, we would hit the same deadlock. Closing the pipe first doesn't affect the normal case, and means that for a cruft-sending server, we'll notice a write() error rather than deadlocking.

Note that when write() sees this error, we'll actually deliver SIGPIPE to the thread, which will take down the whole process (unless we're compiled with NO_PTHREADS). This isn't ideal, but it's an improvement over the status quo, which is deadlocking. And SIGPIPE handling in async threads is a bigger problem that we can deal with separately.

A simple reproduction for the error case is below. It's technically racy (we could exit the main process and take down the async thread with us before it even reads the status), though in practice it seems to fail pretty consistently.

    git init repo &&
    cd repo &&

    # make some commits; we need two so we can simulate corruption
    # in the history later.
    git commit --allow-empty -m one &&
    one=$(git rev-parse HEAD) &&
    git commit --allow-empty -m two &&
    two=$(git rev-parse HEAD) &&

    # now make a ton of refs; our goal here is to overflow the pipe buffer
    # when reporting the ref status, which will cause the demuxer to block
    # on write()
    for i in $(seq 20000); do
            echo "create refs/heads/this-is-a-really-long-branch-name-$i $two"
    done |
    git update-ref --stdin &&

    # now make a corruption in the history such that pack-objects will fail
    rm -vf .git/objects/$(echo $one | sed 's}..}&/}') &&

    # and then push the result
    git init --bare dst.git &&
    git push --mirror dst.git

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
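The fix itself is an ordering change on send_pack()'s error path; roughly, and much simplified (a sketch using git's run-command API names; the surrounding condition is invented for illustration):

    /* error path: pack-objects failed */
    if (pack_objects_failed) {              /* hypothetical condition */
            /* Close our end of the demux pipe first, so the demuxer's
             * write() fails instead of blocking forever... */
            close(demux.out);
            demux.out = -1;
            /* ...and only then wait for the async demuxer to exit. */
            if (finish_async(&demux))
                    error("error in sideband demultiplexer");
            return -1;
    }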
2015-11-20  Convert struct ref to use object_id.  (brian m. carlson; 1 file, -8/+8)
Use struct object_id in three fields in struct ref and convert all the necessary places that use it. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Jeff King <peff@peff.net>
2015-08-31  Merge branch 'db/push-sign-if-asked'  (Junio C Hamano; 1 file, -7/+36)
The client side codepaths in "git push" have been cleaned up and the user can request to perform an optional "signed push", i.e. sign only when the other end accepts signed push.

* db/push-sign-if-asked:
  push: add a config option push.gpgSign for default signed pushes
  push: support signing pushes iff the server supports it
  builtin/send-pack.c: use parse_options API
  config.c: rename git_config_maybe_bool_text and export it as git_parse_maybe_bool
  transport: remove git_transport_options.push_cert
  gitremote-helpers.txt: document pushcert option
  Documentation/git-send-pack.txt: document --signed
  Documentation/git-send-pack.txt: wrap long synopsis line
  Documentation/git-push.txt: document when --signed may fail
2015-08-19  push: support signing pushes iff the server supports it  (Dave Borowitz; 1 file, -7/+36)
Add a new flag --sign=true (or --sign=false), which means the same thing as the original --signed (or --no-signed). Give it a third value --sign=if-asked to tell push and send-pack to send a push certificate if and only if the server advertised a push cert nonce. If not, warn the user that their push may not be as secure as they thought. Signed-off-by: Dave Borowitz <dborowitz@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-05  Merge branch 'bc/object-id'  (Junio C Hamano; 1 file, -1/+1)
Identify parts of the code that know too much about our use of the SHA-1 hash to name objects, and (1) use symbolic constants instead of hardcoded 20 as byte count and/or (2) use struct object_id instead of unsigned char [20] for object names.

* bc/object-id:
  apply: convert threeway_stage to object_id
  patch-id: convert to use struct object_id
  commit: convert parts to struct object_id
  diff: convert struct combine_diff_path to object_id
  bulk-checkin.c: convert to use struct object_id
  zip: use GIT_SHA1_HEXSZ for trailers
  archive.c: convert to use struct object_id
  bisect.c: convert leaf functions to use struct object_id
  define utility functions for object IDs
  define a structure for object IDs
2015-04-20  Merge branch 'jc/push-cert'  (Junio C Hamano; 1 file, -0/+23)
The "git push --signed" protocol extension did not limit what the "nonce" that is a server-chosen string can contain or how long it can be, which was unnecessarily lax. Limit both the length and the alphabet to a reasonably small space that can still have enough entropy.

* jc/push-cert:
  push --signed: tighten what the receiving end can ask to sign
2015-04-02  Merge branch 'sb/atomic-push'  (Junio C Hamano; 1 file, -1/+1)
* sb/atomic-push:
  send-pack: unify error messages for unsupported capabilities
2015-04-02  push --signed: tighten what the receiving end can ask to sign  (Junio C Hamano; 1 file, -0/+23)
Instead of blindly trusting the receiving side to give us a sensible nonce to sign, limit the length (max 256 bytes) and the alphabet (alnum and a few selected punctuations, enough to encode in base64) that can be used in nonce. Signed-off-by: Junio C Hamano <gitster@pobox.com>
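A sketch of the kind of validation this implies, in plain C (illustrative only; the 256-byte limit comes from the message above, while the exact punctuation set and the function name are assumptions):

    #include <ctype.h>
    #include <string.h>

    /* Accept only short nonces drawn from a base64-friendly alphabet. */
    static int nonce_looks_sane(const char *nonce)
    {
            size_t i, len = strlen(nonce);

            if (len > 256)
                    return 0;
            for (i = 0; i < len; i++) {
                    char ch = nonce[i];
                    if (!isalnum((unsigned char)ch) && !strchr("-+/=.@", ch))
                            return 0;
            }
            return 1;
    }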
2015-04-02  send-pack: unify error messages for unsupported capabilities  (Ralf Thielow; 1 file, -1/+1)
If --signed is not supported, the error message names the remote "receiving end". If --atomic is not supported, the error message names the remote "server". Unify the naming to "receiving end" as we're in the context of "push". Signed-off-by: Ralf Thielow <ralf.thielow@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-03-13  commit: convert parts to struct object_id  (brian m. carlson; 1 file, -1/+1)
Convert struct commit_graft and necessary local parts of commit.c. Also, convert several constants based on the hex length of an SHA-1 to use GIT_SHA1_HEXSZ, and move several magic constants into variables for readability. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-02-11  Merge branch 'sb/atomic-push'  (Junio C Hamano; 1 file, -8/+57)
"git push" has been taught a "--atomic" option that makes a push that updates more than one ref an "all-or-none" affair.

* sb/atomic-push:
  Document receive.advertiseatomic
  t5543-atomic-push.sh: add basic tests for atomic pushes
  push.c: add an --atomic argument
  send-pack.c: add --atomic command line argument
  send-pack: rename ref_update_to_be_sent to check_to_send_update
  receive-pack.c: negotiate atomic push support
  receive-pack.c: add execute_commands_atomic function
  receive-pack.c: move transaction handling in a central place
  receive-pack.c: move iterating over all commands outside execute_commands
  receive-pack.c: die instead of error in case of possible future bug
  receive-pack.c: shorten the execute_commands loop over all commands
2015-01-07  send-pack.c: add --atomic command line argument  (Ronnie Sahlberg; 1 file, -2/+47)
This adds support to send-pack to negotiate and use atomic pushes iff the server supports it. Atomic pushes are activated by a new command line flag --atomic.

In order to do this we also need to change the semantics for send_pack() slightly. The existing send_pack() function actually doesn't send all the refs back to the server when multiple refs are involved, for example when using --all. Several of the failure modes for pushes can already be detected locally in the send_pack client based on the information from the initial server-side list of all the refs as generated by receive-pack. Any refs that we already know would fail to push are pruned from the list of refs we send to the server to update.

For atomic pushes, we thus have to deal with both failures that are detected locally and failures that are reported back from the server. In order to do so we treat all local failures as push failures too. We introduce a new status code REF_STATUS_ATOMIC_PUSH_FAILED so we can flag all refs that we would normally have tried to push to the server but did not due to local failures. This is to improve the error message back to the end user, to flag that "these refs failed to update since the atomic push operation failed."

Signed-off-by: Ronnie Sahlberg <sahlberg@google.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
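Conceptually, when any ref fails locally during an atomic push, every ref that would still have been sent gets flagged as well; a rough sketch (the struct ref fields and status constants follow git's remote.h, but this is an illustration, not the patch):

    #include "remote.h"   /* struct ref, REF_STATUS_* */

    /* Illustrative helper: an atomic push failed locally, so mark every ref
     * we were still going to send as failed by the atomic push. */
    static void mark_atomic_push_failure(struct ref *refs)
    {
            struct ref *ref;

            for (ref = refs; ref; ref = ref->next)
                    if (ref->status == REF_STATUS_EXPECTING_REPORT)
                            ref->status = REF_STATUS_ATOMIC_PUSH_FAILED;
    }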
2015-01-07  send-pack: rename ref_update_to_be_sent to check_to_send_update  (Stefan Beller; 1 file, -7/+11)
This renames ref_update_to_be_sent to check_to_send_update and inverts the meaning of the return value. Having the return value inverted we can have different values for the error codes. This is useful in a later patch when we want to know if we hit the CHECK_REF_STATUS_REJECTED case. Signed-off-by: Stefan Beller <sbeller@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-29  pack-objects: use --objects-edge-aggressive for shallow repos  (brian m. carlson; 1 file, -0/+3)
When fetching into or pushing from a shallow repository, we want to aggressively mark edges as uninteresting, since this decreases the pack size. However, aggressively marking edges can negatively affect performance on large non-shallow repositories with lots of refs. Teach pack-objects a --shallow option to indicate that we're pushing from or fetching into a shallow repository. Use --objects-edge-aggressive only for shallow repositories and otherwise use --objects-edge, which performs better in the general case. Update the callers to pass the --shallow option when they are dealing with a shallow repository. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
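In the callers this boils down to one conditional argument when assembling the pack-objects command line; a sketch assuming git's argv_array API and the is_repository_shallow() helper of that era:

    /* Only a shallow repository pays the cost of aggressive edge marking. */
    if (is_repository_shallow())
            argv_array_push(&po_args, "--shallow");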
2014-10-08  Merge branch 'jc/push-cert'  (Junio C Hamano; 1 file, -47/+154)
Allow a "git push" request to be signed, so that it can be verified and audited, using the GPG signature of the person who pushed, that the tips of branches at a public repository really point at the commits the pusher wanted them to, without having to "trust" the server.

* jc/push-cert: (24 commits)
  receive-pack::hmac_sha1(): copy the entire SHA-1 hash out
  signed push: allow stale nonce in stateless mode
  signed push: teach smart-HTTP to pass "git push --signed" around
  signed push: fortify against replay attacks
  signed push: add "pushee" header to push certificate
  signed push: remove duplicated protocol info
  send-pack: send feature request on push-cert packet
  receive-pack: GPG-validate push certificates
  push: the beginning of "git push --signed"
  pack-protocol doc: typofix for PKT-LINE
  gpg-interface: move parse_signature() to where it should be
  gpg-interface: move parse_gpg_output() to where it should be
  send-pack: clarify that cmds_sent is a boolean
  send-pack: refactor inspecting and resetting status and sending commands
  send-pack: rename "new_refs" to "need_pack_data"
  receive-pack: factor out capability string generation
  send-pack: factor out capability string generation
  send-pack: always send capabilities
  send-pack: refactor decision to send update per ref
  send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher
  ...
2014-09-17  signed push: fortify against replay attacks  (Junio C Hamano; 1 file, -4/+14)
In order to prevent a valid push certificate for pushing into a repository from getting replayed in a different push operation, send a nonce string from the receive-pack process and have the signer include it in the push certificate.

The receiving end uses an HMAC hash of the path to the repository it serves and the current timestamp, hashed with a secret seed (the secret seed does not have to be per-repository but can be defined in /etc/gitconfig), to generate the nonce, in order to ensure that a random third party cannot forge a nonce that looks like it originated from it.

The original nonce is exported as GIT_PUSH_CERT_NONCE for the hooks to examine and match against the value on the "nonce" header in the certificate to notice a replay, but the returned "nonce" header in the push certificate is also examined by receive-pack and the result is exported as GIT_PUSH_CERT_NONCE_STATUS, whose value would be "OK" if the nonce recorded in the certificate matches what we expect, so that the hooks can check more easily.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
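The nonce described above can be sketched as "<timestamp>-<hmac>"; the following illustration uses OpenSSL's HMAC() rather than git's internal SHA-1 code, and the function and buffer names are made up:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <openssl/hmac.h>

    /* Derive a nonce from the repo path, the current time and a secret
     * seed, so that a third party cannot forge one. */
    static void make_nonce(char *out, size_t outsz,
                           const char *path, const char *seed)
    {
            unsigned char mac[EVP_MAX_MD_SIZE];
            unsigned int maclen, i;
            char msg[1024];
            time_t stamp = time(NULL);

            snprintf(msg, sizeof(msg), "%s:%ld", path, (long)stamp);
            HMAC(EVP_sha1(), seed, (int)strlen(seed),
                 (const unsigned char *)msg, strlen(msg), mac, &maclen);
            snprintf(out, outsz, "%ld-", (long)stamp);
            for (i = 0; i < maclen && strlen(out) + 2 < outsz; i++)
                    snprintf(out + strlen(out), 3, "%02x", mac[i]);
    }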
2014-09-15signed push: add "pushee" header to push certificateLibravatar Junio C Hamano1-0/+5
Record the URL of the intended recipient for a push (after anonymizing it if it has authentication material) on a new "pushee URL" header. Because the networking configuration (SSH-tunnels, proxies, etc.) on the pushing user's side varies, the receiving repository may not know the single canonical URL all the pushing users would refer it as (besides, many sites allow pushing over ssh://host/path and https://host/path protocols to the same repository but with different local part of the path). So this value may not be reliably used for replay-attack prevention purposes, but this will still serve as a human readable hint to identify the repository the certificate refers to. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  signed push: remove duplicated protocol info  (Junio C Hamano; 1 file, -1/+1)
With the interim protocol, we used to send the update commands even though we already send a signed copy of the same information when push certificate is in use. Update the send-pack/receive-pack pair not to do so. The notable thing on the receive-pack side is that it makes sure that there is no command sent over the traditional protocol packet outside the push certificate. Otherwise a pusher can claim to be pushing one set of ref updates in the signed certificate while issuing commands to update unrelated refs, and such an update will evade later audits. Finally, start documenting the protocol. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  send-pack: send feature request on push-cert packet  (Junio C Hamano; 1 file, -5/+8)
We would want to update the interim protocol so that we do not send the usual update commands when the push certificate feature is in use, as the same information is in the certificate. Once that happens, the push-cert packet may become the only protocol command, but then there is no packet to put the feature request behind, like we always did. As we have prepared the receiving end that understands the push-cert feature to accept the feature request on the first protocol packet (other than "shallow ", which was an unfortunate historical mistake that has to come before everything else), we can give the feature request on the push-cert packet instead of the first update protocol packet, in preparation for the next step to actually update to the final protocol. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  push: the beginning of "git push --signed"  (Junio C Hamano; 1 file, -0/+64)
While signed tags and commits assert that the objects thusly signed came from you, who signed these objects, there is not a good way to assert that you wanted to have a particular object at the tip of a particular branch. My signing the v2.0.1 tag only means I want to call the version v2.0.1, and it does not mean I want to push it out to my 'master' branch---it is likely that I only want it in 'maint', so the signature on the object alone is insufficient.

The only assurance to you that 'maint' points at what I wanted to place there comes from your trust on the hosting site and my authentication with it, which cannot easily be audited later.

Introduce a mechanism that allows you to sign a "push certificate" (for the lack of a better name) every time you push, asserting that what object you are pushing to update which ref that used to point at what other object. Think of it as a cryptographic protection for ref updates, similar to signed tags/commits but working on an orthogonal axis.

The basic flow based on this mechanism goes like this:

  1. You push out your work with "git push --signed".

  2. The sending side learns where the remote refs are as usual,
     together with what protocol extension the receiving end supports.
     If the receiving end does not advertise the protocol extension
     "push-cert", an attempt to "git push --signed" fails.

     Otherwise, a text file, that looks like the following, is
     prepared in core:

         certificate version 0.1
         pusher Junio C Hamano <gitster@pobox.com> 1315427886 -0700

         7339ca65... 21580ecb... refs/heads/master
         3793ac56... 12850bec... refs/heads/next

     The file begins with a few header lines, which may grow as we
     gain more experience. The 'pusher' header records the name of the
     signer (the value of the user.signingkey configuration variable,
     falling back to GIT_COMMITTER_{NAME|EMAIL}) and the time of the
     certificate generation. After the header, a blank line follows,
     followed by a copy of the protocol message lines.

     Each line shows the old and the new object name at the tip of
     the ref this push tries to update, in the way identical to how
     the underlying "git push" protocol exchange tells the ref updates
     to the receiving end (by recording the "old" object name, the
     push certificate also protects against replaying). It is expected
     that new command packet types other than the old-new-refname kind
     will be included in the push certificate in the same way as they
     would appear in the plain vanilla command packets in unsigned
     pushes.

     The user then is asked to sign this push certificate using GPG,
     formatted in a way similar to how signed tag objects are signed,
     and the result is sent to the other side (i.e. receive-pack).

     In the protocol exchange, this step comes immediately before the
     sender tells what the result of the push should be, which in turn
     comes before it sends the pack data.

  3. When the receiving end sees a push certificate, the certificate
     is written out as a blob. The pre-receive hook can learn about
     the certificate by checking the GIT_PUSH_CERT environment
     variable, which, if present, tells the object name of this blob,
     and make the decision to allow or reject this push. Additionally,
     the post-receive hook can also look at the certificate, which may
     be a good place to log all the received certificates for later
     audits.

Because a push certificate carries the same information as the usual command packets in the protocol exchange, we can omit the latter when a push certificate is in use and reduce the protocol overhead. This however is not included in this patch to make it easier to review (in other words, the series at this step should never be released without the remainder of the series, as it implements an interim protocol that will be incompatible with the final one). As such, the documentation update for the protocol is left out of this step.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  send-pack: clarify that cmds_sent is a boolean  (Junio C Hamano; 1 file, -3/+4)
We use it to make sure that the feature request is sent only once on the very first request packet (ignoring the "shallow " line, which was an unfortunate mistake we cannot retroactively fix with existing receive-pack already deployed in the field) and we set it to "true" with cmds_sent++, not because we care about the actual number of updates sent but because it is merely an idiomatic way. Set it explicitly to one to clarify that the code that uses this variable only cares about its zero-ness. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  send-pack: refactor inspecting and resetting status and sending commands  (Junio C Hamano; 1 file, -19/+30)
The main loop over remote_refs list inspects the ref status to see if we need to generate pack data (i.e. a delete-only push does not need to send any additional data), resets it to "expecting the status report" state, and formats the actual update commands to be sent. Split the former two out of the main loop, as it will become conditional in later steps. Besides, we should have code that does real thing here, before the "Finally, tell the other end!" part ;-) Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15send-pack: rename "new_refs" to "need_pack_data"Libravatar Junio C Hamano1-4/+3
The variable counts how many non-deleting command is being sent, but is only checked with 0-ness to decide if we need to send the pack data. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  send-pack: factor out capability string generation  (Junio C Hamano; 1 file, -8/+13)
A run of 'var ? " var" : ""' fed to a long printf string in a deeply nested block was hard to read. Move it outside the loop and format it into a strbuf. As an added bonus, the trick to add "agent=<agent-name>" by using two conditionals is replaced by a more readable version. Signed-off-by: Junio C Hamano <gitster@pobox.com>
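The shape of the refactoring is to format the capability list once, up front, into a strbuf; roughly (a fragment-style sketch using git's strbuf API; the condition names are illustrative, not the exact patch):

    struct strbuf cap_buf = STRBUF_INIT;

    if (quiet_requested)                        /* illustrative condition */
            strbuf_addstr(&cap_buf, " quiet");
    if (agent_supported)                        /* illustrative condition */
            strbuf_addf(&cap_buf, " agent=%s", git_user_agent_sanitized());

    /* later, cap_buf.buf is appended once to the very first command packet */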
2014-09-15  send-pack: always send capabilities  (Junio C Hamano; 1 file, -3/+1)
We tried to avoid sending one extra byte, NUL and nothing behind it to signal there is no protocol capabilities being sent, on the first command packet on the wire, but it just made the code look ugly. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  send-pack: refactor decision to send update per ref  (Junio C Hamano; 1 file, -15/+21)
A new helper function ref_update_to_be_sent() decides for each ref if the update is to be sent based on the status previously set by set_ref_status_for_push() and also if this is a mirrored push. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-09-15  send-pack: move REF_STATUS_REJECT_NODELETE logic a bit higher  (Junio C Hamano; 1 file, -5/+10)
20e8b465 (refactor ref status logic for pushing, 2010-01-08) restructured the code to set status for each ref to be pushed, but did not quite go far enough. We inspect the status set earlier by set_refs_status_for_push() and then perform yet another update to the status of a ref with an otherwise OK status to be deleted to mark it with REF_STATUS_REJECT_NODELETE when the protocol tells us never to delete. Split the latter into a separate loop that comes before we enter the per-ref loop. This way we would have one less condition to check in the main loop. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-20  run-command: introduce CHILD_PROCESS_INIT  (René Scharfe; 1 file, -2/+1)
Most struct child_process variables are cleared using memset first after declaration. Provide a macro, CHILD_PROCESS_INIT, that can be used to initialize them statically instead. That's shorter, doesn't require a function call and is slightly more readable (especially given that we already have STRBUF_INIT, ARGV_ARRAY_INIT etc.). Helped-by: Johannes Sixt <j6t@kdbg.org> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
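Before and after, in the pattern the message describes (the variable name is illustrative; the macro lives in git's run-command.h):

    /* before */
    struct child_process po;
    memset(&po, 0, sizeof(po));

    /* after */
    struct child_process po = CHILD_PROCESS_INIT;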
2014-01-17  Merge branch 'nd/shallow-clone'  (Junio C Hamano; 1 file, -4/+23)
Fetching from a shallow-cloned repository used to be forbidden, primarily because the codepaths involved were not carefully vetted and we did not bother supporting such usage. This attempts to allow object transfer out of a shallow-cloned repository in a controlled way (i.e. the receiver becomes a shallow repository with truncated history).

* nd/shallow-clone: (31 commits)
  t5537: fix incorrect expectation in test case 10
  shallow: remove unused code
  send-pack.c: mark a file-local function static
  git-clone.txt: remove shallow clone limitations
  prune: clean .git/shallow after pruning objects
  clone: use git protocol for cloning shallow repo locally
  send-pack: support pushing from a shallow clone via http
  receive-pack: support pushing to a shallow clone via http
  smart-http: support shallow fetch/clone
  remote-curl: pass ref SHA-1 to fetch-pack as well
  send-pack: support pushing to a shallow clone
  receive-pack: allow pushes that update .git/shallow
  connected.c: add new variant that runs with --shallow-file
  add GIT_SHALLOW_FILE to propagate --shallow-file to subprocesses
  receive/send-pack: support pushing from a shallow clone
  receive-pack: reorder some code in unpack()
  fetch: add --update-shallow to accept refs that update .git/shallow
  upload-pack: make sure deepening preserves shallow roots
  fetch: support fetching from a shallow repository
  clone: support remote shallow repository
  ...
2014-01-06  send-pack.c: mark a file-local function static  (Ramsay Jones; 1 file, -1/+1)
Commit f2c681cf ("send-pack: support pushing from a shallow clone via http", 05-12-2013) adds the 'advertise_shallow_grafts_buf' function as an external symbol. Noticed by sparse. ("'advertise_shallow_grafts_buf' was not declared. Should it be static?") Signed-off-by: Ramsay Jones <ramsay@ramsay1.demon.co.uk> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-17  Merge branch 'cc/starts-n-ends-with'  (Junio C Hamano; 1 file, -2/+2)
Remove a few duplicate implementations of prefix/suffix comparison functions, and rename them to starts_with and ends_with.

* cc/starts-n-ends-with:
  replace {pre,suf}fixcmp() with {starts,ends}_with()
  strbuf: introduce starts_with() and ends_with()
  builtin/remote: remove postfixcmp() and use suffixcmp() instead
  environment: normalize use of prefixcmp() by removing " != 0"
2013-12-10  send-pack: support pushing from a shallow clone via http  (Nguyễn Thái Ngọc Duy; 1 file, -2/+17)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10  receive/send-pack: support pushing from a shallow clone  (Nguyễn Thái Ngọc Duy; 1 file, -0/+3)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-10  remote.h: replace struct extra_have_objects with struct sha1_array  (Nguyễn Thái Ngọc Duy; 1 file, -3/+4)
The latter can do everything the former can and is used in many more places. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-12-05  replace {pre,suf}fixcmp() with {starts,ends}_with()  (Christian Couder; 1 file, -2/+2)
Leaving only the function definitions and declarations so that any new topic in flight can still make use of the old functions, replace existing uses of the prefixcmp() and suffixcmp() with new API functions.

The change can be recreated by mechanically applying this:

    $ git grep -l -e prefixcmp -e suffixcmp -- \*.c |
      grep -v strbuf\\.c |
      xargs perl -pi -e '
        s|!prefixcmp\(|starts_with\(|g;
        s|prefixcmp\(|!starts_with\(|g;
        s|!suffixcmp\(|ends_with\(|g;
        s|suffixcmp\(|!ends_with\(|g;
      '

on the result of preparatory changes in this series.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-11-25  send-pack: don't send a thin pack to a server which doesn't support it  (Carlos Martín Nieto; 1 file, -0/+2)
Up to now git has assumed that all servers are able to fix thin packs. This is however not always the case. Document the 'no-thin' capability and prevent send-pack from generating a thin pack if the server advertises it. Signed-off-by: Carlos Martín Nieto <cmn@elego.de> Helped-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-10-30  Merge branch 'jl/pack-transfer-avoid-double-close'  (Junio C Hamano; 1 file, -0/+4)
The codepath in which send_pack() calls pack_objects() mistakenly closed the same file descriptor twice, leading to potentially closing a wrong file descriptor that was opened in the meantime.

* jl/pack-transfer-avoid-double-close:
  Clear fd after closing to avoid double-close error
2013-10-23  Clear fd after closing to avoid double-close error  (Jens Lindstrom; 1 file, -0/+4)
In send_pack(), clear the fd passed to pack_objects() by setting it to -1, since pack_objects() closes the fd (via a call to run_command()). Likewise, in get_pack(), clear the fd passed to run_command(). Not doing so risks having git_transport_push(), caller of send_pack(), closing the fd again, possibly incorrectly closing some other open file; or similarly with fetch_refs_from_pack(), indirect caller of get_pack(). Signed-off-by: Jens Lindström <jl@opera.com> Acked-by: Jeff King <peff@peff.net> Acked-by: Duy Nguyen <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
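The underlying pattern, sketched generically (the helper here is hypothetical, standing in for pack_objects()/run_command(), which close the descriptor they are given):

    /* hypothetical helper that takes ownership of fd and closes it */
    extern int run_helper_that_closes(int fd);

    static void hand_off_and_forget(int *fd)
    {
            run_helper_that_closes(*fd);
            *fd = -1;   /* never close(*fd) again on any later cleanup path */
    }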
2013-07-22  push --force-with-lease: tie it all together  (Junio C Hamano; 1 file, -0/+1)
This teaches the deepest part of the callchain for "git push" (and "git send-pack") to enforce "the old value of the ref must be this, otherwise fail this push" (aka "compare-and-swap" / "--lockref"). Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-07-08  cache.h: move remote/connect API out of it  (Junio C Hamano; 1 file, -0/+1)
The definition of "struct ref" in "cache.h", a header file so central to the system, always confused me. This structure is not about the local ref used by sha1-name API to name local objects. It is what refspecs are expanded into, after finding out what refs the other side has, to define what refs are updated after object transfer succeeds to what values. It belongs to "remote.h" together with "struct refspec". While we are at it, also move the types and functions related to the Git transport connection to a new header file connect.h Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20  pkt-line: provide a LARGE_PACKET_MAX static buffer  (Jeff King; 1 file, -4/+3)
Most of the callers of packet_read_line just read into a static 1000-byte buffer (callers which handle arbitrary binary data already use LARGE_PACKET_MAX). This works fine in practice, because:

  1. The only variable-sized data in these lines is a ref name, and
     refs tend to be a lot shorter than 1000 characters.

  2. When sending ref lines, git-core always limits itself to 1000
     byte packets.

However, the only limit given in the protocol specification in Documentation/technical/protocol-common.txt is LARGE_PACKET_MAX; the 1000 byte limit is mentioned only in pack-protocol.txt, and then only describing what we write, not as a specific limit for readers.

This patch lets us bump the 1000-byte limit to LARGE_PACKET_MAX. Even though git-core will never write a packet where this makes a difference, there are two good reasons to do this:

  1. Other git implementations may have followed protocol-common.txt
     and used a larger maximum size. We don't bump into it in practice
     because it would involve very long ref names.

  2. We may want to increase the 1000-byte limit one day. Since
     packets are transferred before any capabilities, it's difficult
     to do this in a backwards-compatible way. But if we bump the size
     of buffer the readers can handle, eventually older versions of
     git will be obsolete enough that we can justify bumping the
     writers, as well. We don't have plans to do this anytime soon,
     but there is no reason not to start the clock ticking now.

Just bumping all of the reading bufs to LARGE_PACKET_MAX would waste memory. Instead, since most readers just read into a temporary buffer anyway, let's provide a single static buffer that all callers can use. We can further wrap this detail away by having the packet_read_line wrapper just use the buffer transparently and return a pointer to the static storage. That covers most of the cases, and the remaining ones already read into their own LARGE_PACKET_MAX buffers.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20  pkt-line: teach packet_read_line to chomp newlines  (Jeff King; 1 file, -5/+1)
The packets sent during ref negotiation are all terminated by newline; even though the code to chomp these newlines is short, we end up doing it in a lot of places. This patch teaches packet_read_line to auto-chomp the trailing newline; this lets us get rid of a lot of inline chomping code.

As a result, some call-sites which are not reading line-oriented data (e.g., when reading chunks of packfiles alongside sideband) transition away from packet_read_line to the generic packet_read interface. This patch converts all of the existing callsites.

Since the function signature of packet_read_line does not change (but its behavior does), there is a possibility of new callsites being introduced in later commits, silently introducing an incompatibility. However, since a later patch in this series will change the signature, such a commit would have to be merged directly into this commit, not to the tip of the series; we can therefore ignore the issue.

This is an internal cleanup and should produce no change of behavior in the normal case. However, there is one corner case to note. Callers of packet_read_line have never been able to tell the difference between a flush packet ("0000") and an empty packet ("0004"), as both cause packet_read_line to return a length of 0. Readers treat them identically, even though Documentation/technical/protocol-common.txt says we must not; it also says that implementations should not send an empty pkt-line. By stripping out the newline before the result gets to the caller, we will now treat the newline-only packet ("0005\n") the same as an empty packet, which in turn gets treated like a flush packet.

In practice this doesn't matter, as neither empty nor newline-only packets are part of git's protocols (at least not for the line-oriented bits, and readers who are not expecting line-oriented packets will be calling packet_read directly, anyway). But even if we do decide to care about the distinction later, it is orthogonal to this patch. The right place to tighten would be to stop treating empty packets as flush packets, and this change does not make doing so any harder.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20  pkt-line: drop safe_write function  (Jeff King; 1 file, -1/+1)
This is just write_or_die by another name. The one distinction is that write_or_die will treat EPIPE specially by suppressing error messages. That's fine, as we die by SIGPIPE anyway (and in the off chance that it is disabled, write_or_die will simulate it). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-02-20  send-pack: prefer prefixcmp over memcmp in receive_status  (Jeff King; 1 file, -5/+4)
This code predates prefixcmp, so it used memcmp along with static sizes. Replacing these memcmps with prefixcmp makes the code much more readable, and the lack of static sizes will make refactoring it in future patches simpler. Note that we used to be unnecessarily liberal in parsing the "unpack" status line, and would accept "unpack ok\njunk". No version of git has ever produced that, and it violates the BNF in Documentation/technical/pack-protocol.txt. Let's take this opportunity to tighten the check by converting the prefix comparison into a strcmp. While we're in the area, let's also fix a vague error message that does not follow our usual conventions (it writes directly to stderr and does not use the "error:" prefix). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
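The tightened check amounts to a strcmp() on the first status line; an illustrative sketch in plain C (not the patch itself, which at the time used prefixcmp(); the starts_with() name arrives later in this log, and the error text is made up):

    #include <stdio.h>
    #include <string.h>

    /* The first status line from the remote must be exactly "unpack ok". */
    static int check_unpack_status(const char *line)
    {
            if (strcmp(line, "unpack ok")) {
                    fprintf(stderr, "error: remote unpack failed: %s\n", line);
                    return -1;
            }
            return 0;
    }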
2013-01-24  push: introduce REJECT_FETCH_FIRST and REJECT_NEEDS_FORCE  (Junio C Hamano; 1 file, -0/+2)
When we push to update an existing ref, if:

  * the object at the tip of the remote is not a commit; or
  * the object we are pushing is not a commit,

it won't be correct to suggest to fetch, integrate and push again, as the old and new objects will not "merge". We should explain that the push must be forced when a non-committish object is involved in such a case.

If we do not have the current object at the tip of the remote, we do not even know that object, when fetched, is something that can be merged. In such a case, suggesting to pull first just like the non-fast-forward case may not be technically correct, but in practice, most such failures are seen when you try to push your work to a branch without knowing that somebody else already pushed to update the same branch since you forked, so "pull first" would work as a suggestion most of the time. And if the object at the tip is not a commit, "pull first" will fail, without making any permanent damage. As a side effect, it also makes the error message the user will get during the next "push" attempt easier to understand, now that the user is aware that a non-commit object is involved.

In these cases, the current code already rejects such a push on the client end, but we used the same error and advice messages as the ones used when rejecting a non-fast-forward push, i.e. pull from there and integrate before pushing again.

Introduce new rejection reasons and reword the messages appropriately.

[jc: with help by Peff on message details]

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-12-02  push: require force for refs under refs/tags/  (Chris Rorvick; 1 file, -0/+1)
References are allowed to update from one commit-ish to another if the former is an ancestor of the latter. This behavior is oriented to branches which are expected to move with commits. Tag references are expected to be static in a repository, though, thus an update to something under refs/tags/ should be rejected unless the update is forced. Signed-off-by: Chris Rorvick <chris@rorvick.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
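The rule reduces to one extra condition when deciding a ref update's fate; an illustrative sketch (not the actual patch; the function and flag names are made up):

    #include <string.h>

    /* Tag refs are expected to be static: updating an existing ref under
     * refs/tags/ requires force, fast-forward or not. */
    static int tag_update_allowed(const char *refname, int forced)
    {
            if (!strncmp(refname, "refs/tags/", 10))
                    return forced;
            return 1;   /* other refs follow the usual fast-forward rules */
    }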