path: root/fetch-pack.c
2020-07-06  Merge branch 'bc/sha-256-part-2'  (Junio C Hamano; 1 file, -0/+14)

SHA-256 migration work continues.

* bc/sha-256-part-2: (44 commits)
  remote-testgit: adapt for object-format
  bundle: detect hash algorithm when reading refs
  t5300: pass --object-format to git index-pack
  t5704: send object-format capability with SHA-256
  t5703: use object-format serve option
  t5702: offer an object-format capability in the test
  t/helper: initialize the repository for test-sha1-array
  remote-curl: avoid truncating refs with ls-remote
  t1050: pass algorithm to index-pack when outside repo
  builtin/index-pack: add option to specify hash algorithm
  remote-curl: detect algorithm for dumb HTTP by size
  builtin/ls-remote: initialize repository based on fetch
  t5500: make hash independent
  serve: advertise object-format capability for protocol v2
  connect: parse v2 refs with correct hash algorithm
  connect: pass full packet reader when parsing v2 refs
  Documentation/technical: document object-format for protocol v2
  t1302: expect repo format version 1 for SHA-256
  builtin/show-index: provide options to determine hash algo
  t5302: modernize test formatting
  ...

2020-06-25  Merge branch 'jt/cdn-offload'  (Junio C Hamano; 1 file, -16/+121)

The "fetch/clone" protocol has been updated to allow the server to instruct the clients to grab pre-packaged packfile(s) in addition to the packed object data coming over the wire.

* jt/cdn-offload:
  upload-pack: fix a sparse '0 as NULL pointer' warning
  upload-pack: send part of packfile response as uri
  fetch-pack: support more than one pack lockfile
  upload-pack: refactor reading of pack-objects out
  Documentation: add Packfile URIs design doc
  Documentation: order protocol v2 sections
  http-fetch: support fetching packfiles by URL
  http-fetch: refactor into function
  http: refactor finish_http_pack_request()
  http: use --stdin when indexing dumb HTTP pack

2020-06-10  upload-pack: send part of packfile response as uri  (Jonathan Tan; 1 file, -4/+108)

Teach upload-pack to send part of its packfile response as URIs.

An administrator may configure a repository with one or more "uploadpack.blobpackfileuri" lines, each line containing an OID, a pack hash, and a URI. A client may configure fetch.uriprotocols to be a comma-separated list of protocols that it is willing to use to fetch additional packfiles - this list will be sent to the server. Whenever an object with one of those OIDs would appear in the packfile transmitted by upload-pack, the server may exclude that object, and instead send the URI. The client will then download the packs referred to by those URIs before performing the connectivity check.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

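As a rough illustration of the configuration described above (the values below are placeholders, not taken from this commit):

    # Server side: exclude this blob from the generated pack and point
    # clients at a pre-packaged packfile instead.
    # Value format: <object-hash> <pack-hash> <uri>
    [uploadpack]
        blobpackfileuri = <blob-oid> <pack-hash> https://cdn.example.com/big-blob.pack

    # Client side: protocols the client is willing to use for packfile URIs.
    [fetch]
        uriprotocols = https
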
2020-06-10  fetch-pack: support more than one pack lockfile  (Jonathan Tan; 1 file, -14/+15)

Whenever a fetch results in a packfile being downloaded, a .keep file is generated, so that the packfile can be preserved (from, say, a running "git repack") until refs are written referring to the contents of the packfile.

In a subsequent patch, a successful fetch using protocol v2 may result in more than one .keep file being generated. Therefore, teach fetch_pack() and the transport mechanism to support multiple .keep files.

Implementation notes:

- builtin/fetch-pack.c normally does not generate .keep files, and thus is unaffected by this or future changes. However, it has an undocumented "--lock-pack" feature, used by remote-curl.c when implementing the "fetch" remote helper command. In keeping with the remote helper protocol, only one "lock" line will ever be written; the rest will result in warnings to stderr. However, in practice, warnings will never be written because the remote-curl.c "fetch" is only used for protocol v0/v1 (which will not generate multiple .keep files). (Protocol v2 uses the "stateless-connect" command, not the "fetch" command.)

- connected.c has an optimization in that connectivity checks on a ref need not be done if the target object is in a pack known to be self-contained and connected. If there are multiple packfiles, this optimization can no longer be done.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-05-27  fetch-pack: parse and advertise the object-format capability  (brian m. carlson; 1 file, -0/+12)

Parse the server's object-format capability and respond accordingly, dying if there is a mismatch.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-05-27  fetch-pack: detect when the server doesn't support our hash  (brian m. carlson; 1 file, -0/+2)

Detect when the server doesn't support our hash algorithm and abort.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-05-24  stateless-connect: send response end packet  (Denton Liu; 1 file, -0/+13)

Currently, remote-curl acts as a proxy and blindly forwards packets between an HTTP server and fetch-pack. In the case of a stateless RPC connection where the connection is terminated before the transaction is complete, remote-curl will blindly forward the packets before waiting on more input from fetch-pack. Meanwhile, fetch-pack will read the transaction and continue reading, expecting more input to continue the transaction. This results in a deadlock between the two processes.

This can be seen in the following command which does not terminate:

    $ git -c protocol.version=2 clone https://github.com/git/git.git --shallow-since=20151012
    Cloning into 'git'...

whereas the v1 version does terminate as expected:

    $ git -c protocol.version=1 clone https://github.com/git/git.git --shallow-since=20151012
    Cloning into 'git'...
    fatal: the remote end hung up unexpectedly

Instead of blindly forwarding packets, make remote-curl insert a response end packet after proxying the responses from the remote server when using stateless_connect(). On the RPC client side, ensure that each response ends as described.

A separate control packet is chosen because we need to be able to differentiate between what the remote server sends and remote-curl's control packets. By ensuring in the remote-curl code that a server cannot send response end packets, we prevent a malicious server from being able to perform a denial of service attack in which they spoof a response end packet and cause the described deadlock to happen.

Reported-by: Force Charlie <charlieio@outlook.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Denton Liu <liu.denton@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

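For context (summarizing Git's protocol documentation rather than this commit itself): pkt-line framing reserves a few zero-length special packets, and the response end packet is the newest of them:

    0000    flush-pkt          end of a message
    0001    delim-pkt          separates sections within a message
    0002    response-end-pkt   end of a response for a stateless connection
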
2020-05-13  Merge branch 'tb/shallow-cleanup'  (Junio C Hamano; 1 file, -1/+2)

Code cleanup.

* tb/shallow-cleanup:
  shallow: use struct 'shallow_lock' for additional safety
  shallow.h: document '{commit,rollback}_shallow_file'
  shallow: extract a header file for shallow-related functions
  commit: make 'commit_graft_pos' non-static

2020-05-01  Merge branch 'jt/v2-fetch-nego-fix'  (Junio C Hamano; 1 file, -12/+38)

The upload-pack protocol v2 gave up too early before finding a common ancestor, resulting in a wasteful fetch from a fork of a project. This has been corrected to match the behaviour of v0 protocol.

* jt/v2-fetch-nego-fix:
  fetch-pack: in protocol v2, reset in_vain upon ACK
  fetch-pack: in protocol v2, in_vain only after ACK
  fetch-pack: return enum from process_acks()

2020-05-01  Merge branch 'tb/reset-shallow'  (Junio C Hamano; 1 file, -5/+5)

Fix in-core inconsistency after fetching into a shallow repository that broke the code to write out commit-graph.

* tb/reset-shallow:
  shallow.c: use '{commit,rollback}_shallow_file'
  t5537: use test_write_lines and indented heredocs for readability

2020-04-30  shallow: use struct 'shallow_lock' for additional safety  (Taylor Blau; 1 file, -1/+1)

In previous patches, the functions 'commit_shallow_file' and 'rollback_shallow_file' were introduced to reset the shallowness validity checks on a repository after potentially modifying '.git/shallow'.

These functions can be made safer by wrapping the 'struct lockfile *' in a new type, 'shallow_lock', so that they cannot be called with a raw lock (and potentially misused by other code that happens to possess a lockfile, but has nothing to do with shallowness).

This patch introduces that type as a thin wrapper around 'struct lockfile', and updates the two aforementioned functions and their callers to use it.

Suggested-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

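The wrapper type described above is roughly the following (an assumed shape; see shallow.h for the real definition):

    /* A lock that is known to be held over '.git/shallow', so it cannot be
     * confused with an arbitrary 'struct lock_file'. */
    struct shallow_lock {
            struct lock_file lock;
    };
    #define SHALLOW_LOCK_INIT { LOCK_INIT }
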
2020-04-30  shallow: extract a header file for shallow-related functions  (Taylor Blau; 1 file, -0/+1)

There are many functions in commit.h that are more related to shallow repositories than they are to any sort of generic commit machinery. Likely this began when there were only a few shallow-related functions, and commit.h seemed a reasonable enough place to put them.

But, now there are a good number of shallow-related functions, and placing them all in 'commit.h' doesn't make sense.

This patch extracts a 'shallow.h', which takes all of the declarations from 'commit.h' for functions which already exist in 'shallow.c'. We will bring the remaining shallow-related functions defined in 'commit.c' in a subsequent patch. For now, move only the ones that already are implemented in 'shallow.c', and update the necessary includes.

Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-04-28  fetch-pack: in protocol v2, reset in_vain upon ACK  (Jonathan Tan; 1 file, -0/+1)

In the function process_acks() in fetch-pack.c, the variable received_ack is meant to track that an ACK was received, but it was never set. This results in negotiation terminating prematurely through the in_vain counter, when the counter should have been reset upon every ACK.

Therefore, reset the in_vain counter upon every ACK.

Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-04-28  fetch-pack: in protocol v2, in_vain only after ACK  (Jonathan Tan; 1 file, -4/+9)

When fetching, Git stops negotiation when it has sent at least MAX_IN_VAIN (which is 256) "have" lines without having any of them ACK-ed. But this is supposed to trigger only after the first ACK, as pack-protocol.txt says:

    However, the 256 limit *only* turns on in the canonical client
    implementation if we have received at least one "ACK %s continue"
    during a prior round. This helps to ensure that at least one common
    ancestor is found before we give up entirely.

The code path for protocol v0 observes this, but not protocol v2, resulting in shorter negotiation rounds but significantly larger packfiles. Teach the code path for protocol v2 to check this criterion only after at least one ACK was received.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

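Putting this fix together with the previous one, the cut-off rule can be sketched as follows (the helpers are hypothetical stand-ins, not the actual fetch-pack.c loop):

    #define MAX_IN_VAIN 256

    static void negotiate(void)
    {
            int in_vain = 0, seen_ack = 0;

            while (have_more_haves()) {
                    send_have(next_have());   /* hypothetical helpers */
                    in_vain++;
                    if (got_ack()) {
                            seen_ack = 1;
                            in_vain = 0;      /* reset on every ACK */
                    }
                    if (seen_ack && in_vain >= MAX_IN_VAIN)
                            break;            /* give up only after at least one ACK */
            }
    }
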
2020-04-28  fetch-pack: return enum from process_acks()  (Jonathan Tan; 1 file, -8/+28)

process_acks() returns 0, 1, or 2, depending on whether "ready" was received and if not, whether at least one commit was found to be common. Replace these magic numbers with a documented enum.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

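The documented enum might look roughly like this (identifier names are assumed here; the actual ones in fetch-pack.c may differ):

    enum common_found {
            NO_COMMON_FOUND,   /* no "ready" and no commit ACK-ed as common */
            COMMON_FOUND,      /* at least one commit ACK-ed as common */
            READY              /* server sent "ready": a packfile follows */
    };
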
2020-04-24  shallow.c: use '{commit,rollback}_shallow_file'  (Taylor Blau; 1 file, -5/+5)

In bd0b42aed3 (fetch-pack: do not take shallow lock unnecessarily, 2019-01-10), the author noted that 'is_repository_shallow' produces visible side-effect(s) by setting 'is_shallow' and 'shallow_stat'.

This is a problem for e.g., fetching with '--update-shallow' in a shallow repository with 'fetch.writeCommitGraph' enabled, since the update to '.git/shallow' will cause Git to think that the repository isn't shallow when it is, thereby circumventing the commit-graph compatibility check.

This causes problems in shallow repositories with at least shallow refs that have at least one ancestor (since the client won't have those objects, and therefore can't take the reachability closure over commits when writing a commit-graph).

Address this by introducing thin wrappers over 'commit_lock_file' and 'rollback_lock_file' for use specifically when the lock is held over '.git/shallow'. These wrappers (appropriately called 'commit_shallow_file' and 'rollback_shallow_file') call into their respective functions in 'lockfile.h', but additionally reset validity checks used by the shallow machinery.

Replace each instance of 'commit_lock_file' and 'rollback_lock_file' with 'commit_shallow_file' and 'rollback_shallow_file' when the lock being held is over the '.git/shallow' file.

As a result, 'prune_shallow' can now only be called once (since 'check_shallow_file_for_update' will die after calling 'reset_repository_shallow'). But, this is OK since we only call 'prune_shallow' at most once per process.

Helped-by: Jonathan Tan <jonathantanmy@google.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

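A rough sketch of what one such wrapper might look like (the shape is assumed here; the real function in shallow.c differs in its locking type and error handling):

    /* Commit the lock held over '.git/shallow', then invalidate the cached
     * shallowness state so later checks re-read the file. */
    int commit_shallow_file(struct repository *r, struct lock_file *lk)
    {
            int res = commit_lock_file(lk);
            reset_repository_shallow(r);
            return res;
    }
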
2020-03-30  oid_array: rename source file from sha1-array  (Jeff King; 1 file, -1/+1)

We renamed the actual data structure in 910650d2f8 (Rename sha1_array to oid_array, 2017-03-31), but the file is still called sha1-array. Besides being slightly confusing, it makes it more annoying to grep for leftover occurrences of "sha1" in various files, because the header is included in so many places.

Let's complete the transition by renaming the source and header files (and fixing up a few comment references).

I kept the "-" in the name, as that seems to be our style; cf. fc1395f4a4 (sha1_file.c: rename to use dash in file name, 2018-04-10). We also have oidmap.h and oidset.h without any punctuation, but those are "struct oidmap" and "struct oidset" in the code. We _could_ make this "oidarray" to match, but somehow it looks uglier to me because of the length of "array" (plus it would be a very invasive patch for little gain).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-12-05  Merge branch 'ec/fetch-mark-common-refs-trace2'  (Junio C Hamano; 1 file, -1/+12)

Trace2 annotation.

* ec/fetch-mark-common-refs-trace2:
  fetch: add trace2 instrumentation

2019-12-01  Merge branch 'jt/fetch-remove-lazy-fetch-plugging'  (Junio C Hamano; 1 file, -15/+34)

"git fetch" codepath had a big "do not lazily fetch missing objects when I ask if something exists" switch. This has been corrected by marking the "does this thing exist?" calls with "if not please do not lazily fetch it" flag.

* jt/fetch-remove-lazy-fetch-plugging:
  promisor-remote: remove fetch_if_missing=0
  clone: remove fetch_if_missing=0
  fetch: remove fetch_if_missing=0

2019-11-20  fetch: add trace2 instrumentation  (Erik Chen; 1 file, -1/+12)

Add trace2 regions to fetch-pack.c to better track time spent in the various phases of a fetch:

* parsing remote refs and finding a cutoff
* marking local refs as complete
* marking complete remote refs as common

All stages could potentially be slow for repositories with many refs.

Signed-off-by: Erik Chen <erikchen@chromium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

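A trace2 region is an enter/leave pair around the phase being timed; a minimal sketch of the pattern (the label string and the timed call are illustrative, not necessarily the exact ones used in fetch-pack.c):

    trace2_region_enter("fetch-pack", "mark_complete_local_refs", the_repository);
    mark_complete_local_refs();   /* hypothetical stand-in for the timed phase */
    trace2_region_leave("fetch-pack", "mark_complete_local_refs", the_repository);
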
2019-11-13  promisor-remote: remove fetch_if_missing=0  (Jonathan Tan; 1 file, -14/+32)

Commit 6462d5eb9a ("fetch: remove fetch_if_missing=0", 2019-11-08) strove to remove the need for fetch_if_missing=0 from the fetching mechanism, so it is plausible to attempt removing fetch_if_missing=0 from the lazy-fetching mechanism in promisor-remote as well.

But doing so reveals a bug - when the server does not send an object pointed to by a tag object, an infinite loop occurs: Git attempts to fetch the missing object, which causes a dereferencing of all refs (for negotiation), which causes a lazy fetch of that missing object, and so on.

This bug is because of unnecessary use of the fetch negotiator during lazy fetching - it is not used after initialization, but it is still initialized (which causes the dereferencing of all refs).

Thus, when the negotiator is not used during fetching, refrain from initializing it. Then, remove fetch_if_missing from promisor-remote.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-11-10  Merge branch 'jt/fetch-pack-record-refs-in-the-dot-promisor'  (Junio C Hamano; 1 file, -4/+43)

Debugging support for lazy cloning has been a bit improved.

* jt/fetch-pack-record-refs-in-the-dot-promisor:
  fetch-pack: write fetched refs to .promisor

2019-11-08  fetch: remove fetch_if_missing=0  (Jonathan Tan; 1 file, -1/+2)

In fetch_pack() (and all functions it calls), pass OBJECT_INFO_SKIP_FETCH_OBJECT whenever we query an object that could be a tree or blob that we do not want to be lazy-fetched even if it is absent. Thus, the only lazy-fetches occurring for trees and blobs are when resolving deltas.

Thus, we can remove fetch_if_missing=0 from builtin/fetch.c. Remove this, and also add a test ensuring that such objects are not lazy-fetched. (We might be able to remove fetch_if_missing=0 from other places too, but I have limited myself to builtin/fetch.c in this commit because I have not written tests for the other commands yet.)

Note that commits and tags may still be lazy-fetched. I limited myself to objects that could be trees or blobs here because Git does not support creating such commit- and tag-excluding clones yet, and even if such a clone were manually created, Git does not have good support for fetching a single commit (when fetching a commit, it and all its ancestors would be sent).

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

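The flag is passed to the object-info API; a generic sketch of such a query (not a line taken from fetch-pack.c, and the caller action is hypothetical):

    /* Probe whether the object exists locally, but never trigger a lazy
     * fetch from the promisor remote if it is absent. */
    if (oid_object_info_extended(the_repository, &oid, NULL,
                                 OBJECT_INFO_SKIP_FETCH_OBJECT |
                                 OBJECT_INFO_QUICK) < 0)
            mark_ref_incomplete(ref);   /* hypothetical: treat it as missing */
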
2019-10-16  fetch-pack: write fetched refs to .promisor  (Jonathan Tan; 1 file, -4/+43)

The specification of promisor packfiles (in partial-clone.txt) states that the .promisor files that accompany packfiles do not matter (just like .keep files), so whenever a packfile is fetched from the promisor remote, Git has been writing empty .promisor files. But these files could contain more useful information. So instead of writing empty files, write the refs fetched to these files.

This makes it easier to debug issues with partial clones, as we can identify what refs (and their associated hashes) were fetched at the time the packfile was downloaded, and if necessary, compare those hashes against what the promisor remote reports now.

This is implemented by teaching fetch-pack to write its own non-empty .promisor file whenever it knows the name of the pack's lockfile. This covers the case wherein the user runs "git fetch" with an internal protocol or HTTP protocol v2 (fetch_refs_via_pack() in transport.c sets lock_pack) and with HTTP protocol v0/v1 (fetch_git() in remote-curl.c passes "--lock-pack" to "fetch-pack").

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-15  Merge branch 'js/trace2-fetch-push'  (Junio C Hamano; 1 file, -1/+12)

Dev support.

* js/trace2-fetch-push:
  transport: push codepath can take arbitrary repository
  push: add trace2 instrumentation
  fetch: add trace2 instrumentation

2019-10-11  Merge branch 'bc/object-id-part17'  (Junio C Hamano; 1 file, -6/+6)

Preparation for SHA-256 upgrade continues.

* bc/object-id-part17: (26 commits)
  midx: switch to using the_hash_algo
  builtin/show-index: replace sha1_to_hex
  rerere: replace sha1_to_hex
  builtin/receive-pack: replace sha1_to_hex
  builtin/index-pack: replace sha1_to_hex
  packfile: replace sha1_to_hex
  wt-status: convert struct wt_status to object_id
  cache: remove null_sha1
  builtin/worktree: switch null_sha1 to null_oid
  builtin/repack: write object IDs of the proper length
  pack-write: use hash_to_hex when writing checksums
  sequencer: convert to use the_hash_algo
  bisect: switch to using the_hash_algo
  sha1-lookup: switch hard-coded constants to the_hash_algo
  config: use the_hash_algo in abbrev comparison
  combine-diff: replace GIT_SHA1_HEXSZ with the_hash_algo
  bundle: switch to use the_hash_algo
  connected: switch GIT_SHA1_HEXSZ to the_hash_algo
  show-index: switch hard-coded constants to the_hash_algo
  blame: remove needless comparison with GIT_SHA1_HEXSZ
  ...

2019-10-03  fetch: add trace2 instrumentation  (Josh Steadmon; 1 file, -1/+12)

Add trace2 regions to fetch-pack.c and builtins/fetch.c to better track time spent in the various phases of a fetch:

* listing refs
* negotiation for protocol versions v0-v2
* fetching refs
* consuming refs

Signed-off-by: Josh Steadmon <steadmon@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-09-18  Merge branch 'md/list-objects-filter-combo'  (Junio C Hamano; 1 file, -13/+7)

The list-objects-filter API (used to create a sparse/lazy clone) learned to take a combined filter specification.

* md/list-objects-filter-combo:
  list-objects-filter-options: make parser void
  list-objects-filter-options: clean up use of ALLOC_GROW
  list-objects-filter-options: allow mult. --filter
  strbuf: give URL-encoding API a char predicate fn
  list-objects-filter-options: make filter_spec a string_list
  list-objects-filter-options: move error check up
  list-objects-filter: implement composite filters
  list-objects-filter-options: always supply *errbuf
  list-objects-filter: put omits set in filter struct
  list-objects-filter: encapsulate filter components

2019-08-19  fetch-pack: use parse_oid_hex  (brian m. carlson; 1 file, -6/+6)

Instead of hard-coding constants, use parse_oid_hex to compute a pointer and use it in further parsing operations.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

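For illustration, parse_oid_hex() consumes a hex object ID of whatever length the current hash algorithm uses and leaves a pointer just past it (a generic sketch, not the exact fetch-pack.c call site):

    struct object_id oid;
    const char *p;

    /* Parse a hex object ID at the start of 'arg'; on success 'p' points
     * just past the last hex digit, ready for further parsing. */
    if (parse_oid_hex(arg, &oid, &p))
            die("expected an object ID, got '%s'", arg);
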
2019-08-13  repo-settings: create feature.experimental setting  (Derrick Stolee; 1 file, -6/+5)

The 'feature.experimental' setting includes config options that are not committed to become defaults, but could use additional testing.

Update the following config settings to take new defaults, and to use the repo_settings struct if not already using them:

* 'pack.useSparse=true'
* 'fetch.negotiationAlgorithm=skipping'

In the case of fetch.negotiationAlgorithm, the existing logic would load the config option only when about to use the setting, so had a die() statement on an unknown string value. This is removed as now the config is parsed under prepare_repo_settings(). In general, this die() is probably misplaced and not valuable. A test was removed that checked this die() statement executed.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-07-09  Merge branch 'nd/fetch-capability-tweak'  (Junio C Hamano; 1 file, -24/+37)

Protocol capabilities that go over wire should never be translated, but it was incorrectly marked for translation, which has been corrected. The output of protocol capabilities for debugging has been tweaked a bit.

* nd/fetch-capability-tweak:
  fetch-pack: print server version at the top in -v -v
  fetch-pack: print all relevant supported capabilities with -v -v
  fetch-pack: move capability names out of i18n strings

2019-06-28  list-objects-filter-options: make filter_spec a string_list  (Matthew DeVore; 1 file, -13/+7)

Make the filter_spec string a string_list rather than a raw C string. The list of strings must be concatted together to make a complete filter_spec. A future patch will use this capability to build "combine:" filter specs gradually.

A strbuf would seem to be a more natural choice for this object, but it unfortunately requires initialization besides just zero'ing out the memory. This results in all container structs, and all containers of those structs, etc., also requiring initialization. Initializing them all would be more cumbersome than simply using a string_list, which behaves properly when its contents are zero'd.

For the purposes of code simplification, change behavior in how filter specs are conveyed over the protocol: do not normalize the tree:<depth> filter specs since there should be no server in existence that supports tree:# but not tree:#k etc.

Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Matthew DeVore <matvore@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-06-20  fetch-pack: print server version at the top in -v -v  (Nguyễn Thái Ngọc Duy; 1 file, -6/+7)

Before the previous patch, the server version is printed after all the "Server supports" lines. The previous one puts the version in the middle of the "Server supports" group. Instead of moving it to the bottom, I move it to the top. Version may stand out more at the top as we will have even more debug output after capabilities.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-06-20  fetch-pack: print all relevant supported capabilities with -v -v  (Nguyễn Thái Ngọc Duy; 1 file, -9/+21)

When we check if some capability is supported, we do print something in verbose mode. Some capabilities are not printed though (and it made me think it's not supported; I was more used to GIT_TRACE_PACKET) so let's print them all.

It's a bit more code. And one could argue for printing all supported capabilities the server sends us. But I think it's still valuable this way because we see the capabilities that the client cares about.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-06-20  fetch-pack: move capability names out of i18n strings  (Nguyễn Thái Ngọc Duy; 1 file, -9/+9)

This reduces the work on translators since they only have one string to translate (and I think it's still enough context to translate). It also makes sure no capability name is translated by accident.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-06-20  object: convert lookup_object() to use object_id  (Jeff King; 1 file, -6/+6)

There are no callers left of lookup_object() that aren't just passing us the "hash" member of a "struct object_id". Let's take the whole struct, which gets us closer to removing all raw sha1 variables. It also matches the existing conversions of lookup_blob(), etc.

The conversions of callers were done by hand, but they're all mechanical one-liners.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-05-30  Merge branch 'jt/clone-server-option'  (Junio C Hamano; 1 file, -1/+1)

A brown-paper-bag bugfix to a change already in 'master'.

* jt/clone-server-option:
  fetch-pack: send server options after command

2019-05-28  fetch-pack: send server options after command  (Jonathan Tan; 1 file, -1/+1)

Currently, if any server options are specified during a protocol v2 fetch, server options will be sent before "command=fetch". Write server options to the request buffer in send_fetch_request() so that the components of the request are sent in the correct order.

The protocol documentation states that the command must come first. The Git server implementation in serve.c (see process_request() in that file) tolerates any order of command and capability, which is perhaps why we haven't noticed this. This was noticed when testing against a JGit server implementation, which follows the documentation in this regard.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Acked-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

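A simplified sketch of the corrected ordering when the v2 request buffer is built (capability lines are elided and 'server_options' is assumed to be a string_list; this is not the verbatim send_fetch_request() code):

    packet_buf_write(&req_buf, "command=fetch");
    /* ... capability lines (object-format, agent, ...) go here ... */
    for (i = 0; i < server_options->nr; i++)
            packet_buf_write(&req_buf, "server-option=%s",
                             server_options->items[i].string);
    packet_buf_delim(&req_buf);   /* end of capabilities, start of arguments */
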
2019-04-25  Merge branch 'jk/fetch-reachability-error-fix'  (Junio C Hamano; 1 file, -7/+9)

Code clean-up and a fix for "git fetch" by an explicit object name (as opposed to fetching refs by name).

* jk/fetch-reachability-error-fix:
  fetch: do not consider peeled tags as advertised tips
  remote.c: make singular free_ref() public
  fetch: use free_refs()
  pkt-line: prepare buffer before handling ERR packets
  upload-pack: send ERR packet for non-tip objects
  t5530: check protocol response for "not our ref"
  t5516: drop ok=sigpipe from unreachable-want tests

2019-04-25  Merge branch 'jt/fetch-no-update-shallow-in-proto-v2'  (Junio C Hamano; 1 file, -10/+41)

Fix for protocol v2 support in "git fetch-pack" of shallow clones.

* jt/fetch-no-update-shallow-in-proto-v2:
  fetch-pack: respect --no-update-shallow in v2
  fetch-pack: call prepare_shallow_info only if v0

2019-04-25  Merge branch 'jt/fetch-pack-wanted-refs-optim'  (Junio C Hamano; 1 file, -9/+10)

Performance fix around "git fetch" that grabs many refs.

* jt/fetch-pack-wanted-refs-optim:
  fetch-pack: binary search when storing wanted-refs

2019-04-15  fetch: do not consider peeled tags as advertised tips  (Jeff King; 1 file, -3/+8)

Our filter_refs() function accidentally considers the target of a peeled tag to be advertised by the server, even though upload-pack on the server side does not consider it so. This can result in the client making a bogus fetch to the server, which will end with the server complaining "not our ref". Whereas the correct behavior is for the client to notice that the server will not allow the request and error out immediately.

So as bugs go, this is not very serious (the outcome is the same either way -- the fetch fails). But it's worth making the logic here correct and consistent with other related cases (e.g., fetching an oid that the server did not mention at all).

The crux of the issue comes from fdb69d33c4 (fetch-pack: always allow fetching of literal SHA1s, 2017-05-15). After that, the strategy of filter_refs() is basically:

- for each advertised ref, try to match it with a "sought" ref provided by the user. Skip any malformed refs (which includes peeled values like "refs/tags/foo^{}"), and place any unmatched items onto the unmatched list.

- if there are unmatched sought refs, then put all of the advertised tips into an oidset, including the unmatched ones.

- for each sought ref, see if it's in the oidset, in which case it's legal for us to ask the server for it

The problem is in the second step. Our list of unmatched refs includes the peeled refs, even though upload-pack does not allow them to be directly fetched. So the simplest fix would be to exclude them during that step. However, we can observe that the unmatched list isn't used for anything else, and is freed at the end. We can just free those malformed refs immediately. That saves us having to check each ref a second time to see if it's malformed.

Note that this code only kicks in when "strict" is in effect. I.e., if we are using the v0 protocol and uploadpack.allowReachableSHA1InWant is not in effect. With v2, all oids are allowed, and we do not bother creating or consulting the oidset at all.

To future-proof our test against the upcoming GIT_TEST_PROTOCOL_VERSION flag, we'll manually mark it as a v0-only test.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-04-15  fetch: use free_refs()  (Jeff King; 1 file, -4/+1)

There's no need for us to write this loop manually when a helper function can already do it.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-04-01  fetch-pack: binary search when storing wanted-refs  (Jonathan Tan; 1 file, -9/+10)

In do_fetch_pack_v2(), the "sought" array is sorted by name, and it is not subsequently reordered (within the function). Therefore, receive_wanted_refs() can assume that "sought" is sorted, and can thus use a binary search when storing wanted-refs retrieved from the server.

Replace the existing linear search with a binary search. This improves performance significantly when mirror cloning a repository with more than 1 million refs.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

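A rough sketch of such a lookup over a name-sorted array of struct ref pointers (a generic bsearch illustration, not the exact helper added to fetch-pack.c):

    /* Compare a refname key against one element of the sorted array. */
    static int cmp_name_to_ref(const void *name, const void *elem)
    {
            const struct ref *ref = *(const struct ref *const *)elem;
            return strcmp((const char *)name, ref->name);
    }

    /* 'sought' is a name-sorted 'struct ref **' of length 'nr_sought'. */
    struct ref **found = bsearch(refname, sought, nr_sought,
                                 sizeof(*sought), cmp_name_to_ref);
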
2019-04-01  fetch-pack: respect --no-update-shallow in v2  (Jonathan Tan; 1 file, -7/+34)

In protocol v0, when sending "shallow" lines, the server distinguishes between lines caused by the remote repo being shallow and lines caused by client-specified depth settings. Unless "--update-shallow" is specified, there is a difference in behavior: refs that reach the former "shallow" lines, but not the latter, are rejected. But in v2, the server does not, and the client treats all "shallow" lines like lines caused by client-specified depth settings.

Full restoration of v0 functionality is not possible without protocol change, but we can implement a heuristic: if we specify any depth setting, treat all "shallow" lines like lines caused by client-specified depth settings (that is, unaffected by "--no-update-shallow"), but otherwise, treat them like lines caused by the remote repo being shallow (that is, affected by "--no-update-shallow"). This restores most of v0 behavior, except in the case where a client fetches from a shallow repository with depth settings.

This patch causes a test that previously failed with GIT_TEST_PROTOCOL_VERSION=2 to pass.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-04-01  fetch-pack: call prepare_shallow_info only if v0  (Jonathan Tan; 1 file, -3/+7)

In fetch_pack(), be clearer that there is no shallow information before the fetch when v2 is used - memset the struct shallow_info to 0 instead of calling prepare_shallow_info().

This patch is in preparation for a future patch in which a v2 fetch might call prepare_shallow_info() after shallow info has been retrieved during the fetch, so I needed to ensure that prepare_shallow_info() is not called before the fetch.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-03-20  fetch_pack(): drop unused parameters  (Jeff King; 1 file, -2/+1)

We don't need the caller of fetch_pack() to pass in "dest", which is the remote URL. Since ba227857d2 (Reduce the number of connects when fetching, 2008-02-04), the caller is responsible for calling git_connect() itself, and our "dest" parameter is unused.

That commit also started passing us the resulting "conn" child_process from git_connect(). But likewise, we do not need to do anything with it. The descriptors in "fd" are enough for us, and the caller is responsible for cleaning up "conn". We can just drop both parameters.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-03-20  Merge branch 'jk/no-sigpipe-during-network-transport'  (Junio C Hamano; 1 file, -3/+6)

On platforms where "git fetch" is killed with SIGPIPE (e.g. OSX), the upload-pack that runs on the other end that hangs up after detecting an error could cause "git fetch" to die with a signal, which led to a flakey test. "git fetch" now ignores SIGPIPE during the network portion of its operation (this is not a problem as we check the return status from our write(2)s).

* jk/no-sigpipe-during-network-transport:
  fetch: ignore SIGPIPE during network operation
  fetch: avoid calling write_or_die()

2019-03-05  fetch: avoid calling write_or_die()  (Jeff King; 1 file, -3/+6)

The write_or_die() function has one quirk that a caller might not expect: when it sees EPIPE from the write() call, it translates that into a death by SIGPIPE. This doesn't change the overall behavior (the program exits either way), but it does potentially confuse test scripts looking for a non-signal exit code.

Let's switch away from using write_or_die() in a few code paths, which will give us more consistent exit codes. It also gives us the opportunity to write more descriptive error messages, since we have context that write_or_die() does not.

Note that this won't do much by itself, since we'd typically be killed by SIGPIPE before write_or_die() even gets a chance to do its thing. That will be addressed in the next patch.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

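The replacement pattern this describes might look roughly like the following (a sketch using the write_in_full() helper; the message text is illustrative):

    /* Report the failure ourselves instead of letting write_or_die()
     * translate EPIPE into death by SIGPIPE. */
    if (write_in_full(fd, buf->buf, buf->len) < 0)
            die_errno(_("unable to write request to remote"));
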
2019-02-06  Merge branch 'bc/fetch-pack-clear-alternate-shallow'  (Junio C Hamano; 1 file, -0/+5)

"git fetch" over protocol v2 that needs to make a second connection to backfill tags did not clear a variable that holds shallow repository information correctly, leading to an access of a freed piece of memory.

* bc/fetch-pack-clear-alternate-shallow:
  fetch-pack: clear alternate shallow in one more place
  fetch-pack: clear alternate shallow when complete
