Age  Commit message  Author  (Files, Lines)
2021-04-30  Merge branch 'mt/parallel-checkout-part-2'  Junio C Hamano  (12 files, -5/+1240)

The checkout machinery has been taught to perform the actual write-out of the files in parallel when able.

* mt/parallel-checkout-part-2:
  parallel-checkout: add design documentation
  parallel-checkout: support progress displaying
  parallel-checkout: add configuration options
  parallel-checkout: make it truly parallel
  unpack-trees: add basic support for parallel checkout
2021-04-30  Merge branch 'so/log-diff-merge'  Junio C Hamano  (7 files, -21/+95)

"git log" learned "--diff-merges=<style>" option, with an associated configuration variable log.diffMerges.

* so/log-diff-merge:
  doc/diff-options: document new --diff-merges features
  diff-merges: introduce log.diffMerges config variable
  diff-merges: adapt -m to enable default diff format
  diff-merges: refactor set_diff_merges()
  diff-merges: introduce --diff-merges=on
2021-04-30  Merge branch 'ds/sparse-index-protections'  Junio C Hamano  (48 files, -109/+1257)

Builds on top of the sparse-index infrastructure to mark operations that are not ready to work with the sparse index, causing them to fall back on the fully-populated index that they have always worked with.

* ds/sparse-index-protections: (47 commits)
  name-hash: use expand_to_path()
  sparse-index: expand_to_path()
  name-hash: don't add directories to name_hash
  revision: ensure full index
  resolve-undo: ensure full index
  read-cache: ensure full index
  pathspec: ensure full index
  merge-recursive: ensure full index
  entry: ensure full index
  dir: ensure full index
  update-index: ensure full index
  stash: ensure full index
  rm: ensure full index
  merge-index: ensure full index
  ls-files: ensure full index
  grep: ensure full index
  fsck: ensure full index
  difftool: ensure full index
  commit: ensure full index
  checkout: ensure full index
  ...
2021-04-30  Merge branch 'ds/maintenance-prefetch-fix'  Junio C Hamano  (6 files, -40/+134)

The prefetch task in "git maintenance" assumed that "git fetch" from any remote would fetch all its local branches, which would fetch too much if the user is interested in only a subset of branches there.

* ds/maintenance-prefetch-fix:
  maintenance: respect remote.*.skipFetchAll
  maintenance: use 'git fetch --prefetch'
  fetch: add --prefetch option
  maintenance: simplify prefetch logic
2021-04-30  Merge branch 'ow/push-quiet-set-upstream'  Junio C Hamano  (2 files, -5/+12)

"git push --quiet --set-upstream" was not quiet when setting the upstream branch configuration, which has been corrected.

* ow/push-quiet-set-upstream:
  transport: respect verbosity when setting upstream
2021-04-30  Merge branch 'mt/pkt-write-errors'  Junio C Hamano  (1 file, -7/+24)

When packet_write() fails, we gave an extra error message unnecessarily, which has been corrected.

* mt/pkt-write-errors:
  pkt-line: do not report packet write errors twice
2021-04-30  Merge branch 'jk/promisor-optim'  Junio C Hamano  (12 files, -15/+27)

Handling of "promisor packs", which allows certain objects to be missing and lazily retrievable, has been optimized (a bit).

* jk/promisor-optim:
  revision: avoid parsing with --exclude-promisor-objects
  lookup_unknown_object(): take a repository argument
  is_promisor_object(): free tree buffer after parsing
2021-04-20  The twelfth batch  Junio C Hamano  (1 file, -0/+19)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-20  Merge branch 'js/access-nul-emulation-on-windows'  Junio C Hamano  (1 file, -0/+2)

Portability fix.

* js/access-nul-emulation-on-windows:
  msvc: avoid calling `access("NUL", flags)`
2021-04-20  Merge branch 'sg/bugreport-fixes'  Junio C Hamano  (1 file, -2/+2)

The dependencies for config-list.h and command-list.h were broken when the former was split out of the latter, which has been corrected.

* sg/bugreport-fixes:
  Makefile: add missing dependencies of 'config-list.h'
2021-04-20  Merge branch 'jc/doc-do-not-capitalize-clarification'  Junio C Hamano  (2 files, -5/+13)

Doc update for developers.

* jc/doc-do-not-capitalize-clarification:
  doc: clarify "do not capitalize the first word" rule
2021-04-20  Merge branch 'ab/usage-error-docs'  Junio C Hamano  (3 files, -15/+14)

Documentation updates, with unrelated comment updates, too.

* ab/usage-error-docs:
  api docs: document that BUG() emits a trace2 error event
  api docs: document BUG() in api-error-handling.txt
  usage.c: don't copy/paste the same comment three times
2021-04-20  Merge branch 'ab/detox-gettext-tests'  Junio C Hamano  (3 files, -13/+6)

Test clean-up.

* ab/detox-gettext-tests:
  tests: remove all uses of test_i18cmp
2021-04-20  Merge branch 'jt/fetch-pack-request-fix'  Junio C Hamano  (1 file, -1/+1)

* jt/fetch-pack-request-fix:
  fetch-pack: buffer object-format with other args
2021-04-20  Merge branch 'hn/reftable-tables-doc-update'  Junio C Hamano  (1 file, -2/+7)

Doc update.

* hn/reftable-tables-doc-update:
  reftable: document an alternate cleanup method on Windows
2021-04-20  Merge branch 'jk/pack-objects-bitmap-progress-fix'  Junio C Hamano  (2 files, -1/+25)

When "git pack-objects" makes a literal copy of a part of an existing packfile using the reachability bitmaps, its update to the progress meter was broken.

* jk/pack-objects-bitmap-progress-fix:
  pack-objects: update "nr_seen" progress based on pack-reused count
2021-04-20  Merge branch 'ab/userdiff-tests'  Junio C Hamano  (9 files, -109/+213)

A bit of code clean-up and a lot of test clean-up around the userdiff area.

* ab/userdiff-tests:
  blame tests: simplify userdiff driver test
  blame tests: don't rely on t/t4018/ directory
  userdiff: remove support for "broken" tests
  userdiff tests: list builtin drivers via test-tool
  userdiff tests: explicitly test "default" pattern
  userdiff: add and use for_each_userdiff_driver()
  userdiff style: normalize pascal regex declaration
  userdiff style: declare patterns with consistent style
  userdiff style: re-order drivers in alphabetical order
2021-04-20  Merge branch 'ar/userdiff-scheme'  Junio C Hamano  (18 files, -0/+101)

Userdiff patterns for "Scheme" have been added.

* ar/userdiff-scheme:
  userdiff: add support for Scheme
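A minimal way to try the new driver in a repository; this sketch assumes the builtin driver is exposed under the name "scheme" and that '*.scm' is the extension in use:

    echo '*.scm diff=scheme' >> .gitattributes   # assumes the driver name is 'scheme'
    git diff                                     # hunk headers now anchor on Scheme definitions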
2021-04-19  parallel-checkout: add design documentation  Matheus Tavares  (2 files, -0/+271)

Co-authored-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-19  parallel-checkout: support progress displaying  Matheus Tavares  (3 files, -7/+43)

Original-patch-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-19  parallel-checkout: add configuration options  Matheus Tavares  (4 files, -10/+54)
Make parallel checkout configurable by introducing two new settings: checkout.workers and checkout.thresholdForParallelism. The first defines the number of workers (where one means sequential checkout), and the second defines the minimum number of entries required to attempt parallel checkout (a configuration example follows this entry).

To decide the default value for checkout.workers, the parallel version was benchmarked during three operations in the linux repo, with cold cache: cloning v5.8, checking out v5.8 from v2.6.15 (checkout I) and checking out v5.8 from v5.7 (checkout II). The four tables below show the mean run times and standard deviations for 5 runs in: a local file system on SSD, a local file system on HDD, a Linux NFS server, and Amazon EFS (all on Linux). Each parallel checkout test was executed with the number of workers that brings the best overall results in that environment.

Local SSD:
             Sequential             10 workers            Speedup
Clone        8.805 s ± 0.043 s      3.564 s ± 0.041 s     2.47 ± 0.03
Checkout I   9.678 s ± 0.057 s      4.486 s ± 0.050 s     2.16 ± 0.03
Checkout II  5.034 s ± 0.072 s      3.021 s ± 0.038 s     1.67 ± 0.03

Local HDD:
             Sequential             10 workers            Speedup
Clone        32.288 s ± 0.580 s     30.724 s ± 0.522 s    1.05 ± 0.03
Checkout I   54.172 s ± 7.119 s     54.429 s ± 6.738 s    1.00 ± 0.18
Checkout II  40.465 s ± 2.402 s     38.682 s ± 1.365 s    1.05 ± 0.07

Linux NFS server (v4.1, on EBS, single availability zone):
             Sequential             32 workers            Speedup
Clone        240.368 s ± 6.347 s    57.349 s ± 0.870 s    4.19 ± 0.13
Checkout I   242.862 s ± 2.215 s    58.700 s ± 0.904 s    4.14 ± 0.07
Checkout II  65.751 s ± 1.577 s     23.820 s ± 0.407 s    2.76 ± 0.08

EFS (v4.1, replicated over multiple availability zones):
             Sequential             32 workers            Speedup
Clone        922.321 s ± 2.274 s    210.453 s ± 3.412 s   4.38 ± 0.07
Checkout I   1011.300 s ± 7.346 s   297.828 s ± 0.964 s   3.40 ± 0.03
Checkout II  294.104 s ± 1.836 s    126.017 s ± 1.190 s   2.33 ± 0.03

The above benchmarks show that parallel checkout is most effective on repositories located on an SSD or over a distributed file system. For local file systems on spinning disks, and/or older machines, the parallelism does not always bring good performance. For this reason, the default value for checkout.workers is one, a.k.a. sequential checkout.

To decide the default value for checkout.thresholdForParallelism, another benchmark was executed in the "Local SSD" setup, where parallel checkout proved to be beneficial. This time, we compared the runtime of `git checkout -f`, with and without parallelism, after randomly removing an increasing number of files from the Linux working tree. The "sequential fallback" column below corresponds to the executions where checkout.workers was 10 but checkout.thresholdForParallelism was equal to the number of to-be-updated files plus one (so that we end up writing sequentially). Each test case was sampled 15 times, and each sample had a randomly different set of files removed. Here are the results:

             sequential fallback   10 workers            speedup
10   files   772.3 ms ± 12.6 ms    769.0 ms ± 13.6 ms    1.00 ± 0.02
20   files   780.5 ms ± 15.8 ms    775.2 ms ±  9.2 ms    1.01 ± 0.02
50   files   806.2 ms ± 13.8 ms    767.4 ms ±  8.5 ms    1.05 ± 0.02
100  files   833.7 ms ± 21.4 ms    750.5 ms ± 16.8 ms    1.11 ± 0.04
200  files   897.6 ms ± 30.9 ms    730.5 ms ± 14.7 ms    1.23 ± 0.05
500  files   1035.4 ms ± 48.0 ms   677.1 ms ± 22.3 ms    1.53 ± 0.09
1000 files   1244.6 ms ± 35.6 ms   654.0 ms ± 38.3 ms    1.90 ± 0.12
2000 files   1488.8 ms ± 53.4 ms   658.8 ms ± 23.8 ms    2.26 ± 0.12

From the above numbers, 100 files seems to be a reasonable default value for the threshold setting.

Note: up to 1000 files, we observe a drop in the execution time of the parallel code with an increase in the number of files. This is a rather odd behavior, but it was observed in multiple repetitions. Above 1000 files, the execution time increases with the number of files, as one would expect.

About the test environments: the local SSD tests were executed on an i7-7700HQ (4 cores with hyper-threading) running Manjaro Linux. The local HDD tests were executed on an Intel(R) Xeon(R) E3-1230 (also 4 cores with hyper-threading), with a Seagate Barracuda 7200.14 SATA 3.1 HDD, running Debian. The NFS and EFS tests were executed on an Amazon EC2 c5n.xlarge instance, with 4 vCPUs. The Linux NFS server was running on an m6g.large instance with 2 vCPUs and a 1 TB EBS GP2 volume. Before each timing, the linux repository was removed (or checked out back to its previous state), and `sync && sysctl vm.drop_caches=3` was executed.

Co-authored-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
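As promised above, a concrete sketch of the two settings; the values merely mirror the benchmark setup and the chosen threshold default, not a universal recommendation:

    git config checkout.workers 10                    # 1 (the default) means sequential checkout
    git config checkout.thresholdForParallelism 100   # below this many entries, stay sequential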
2021-04-19  parallel-checkout: make it truly parallel  Matheus Tavares  (7 files, -27/+496)

Use multiple worker processes to distribute the queued entries and call write_pc_item() in parallel for them. The items are distributed uniformly in contiguous chunks. This minimizes the chances of two workers writing to the same directory simultaneously, which could affect performance due to lock contention in the kernel. Work stealing (or any other form of re-distribution) is not implemented yet.

The protocol between the main process and the workers is quite simple. They exchange binary messages packed in pkt-line format, and use PKT-FLUSH to mark the end of input (from both sides). The main process starts the communication by sending N pkt-lines, each corresponding to an item that needs to be written. These packets contain all the necessary information to load, smudge, and write the blob associated with each item. Then it waits for the worker to send back N pkt-lines containing the results for each item. The resulting packet must contain: the identification number of the item that it refers to, the status of the operation, and the lstat() data gathered after writing the file (iff the operation was successful).

For now, checkout always uses a hardcoded value of 2 workers, only to demonstrate that the parallel checkout framework correctly divides and writes the queued entries. The next patch will add user configurations and define a more reasonable default, based on tests with the said settings.

Co-authored-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Co-authored-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-19  unpack-trees: add basic support for parallel checkout  Matheus Tavares  (5 files, -3/+418)

This new interface allows us to enqueue some of the entries being checked out, to later uncompress them, apply in-process filters, and write out the files in parallel. For now, the parallel checkout machinery is enabled by default and there is no user configuration, but run_parallel_checkout() just writes the queued entries in sequence (without spawning additional workers). The next patch will actually implement the parallelism and, later, we will make it configurable.

Note that, to avoid potential data races, not all entries are eligible for parallel checkout. Also, paths that collide on disk (e.g. case-sensitive paths in case-insensitive file systems) are detected by the parallel checkout code and skipped, so that they can be safely handled sequentially later. The collision detection works like the following:

- If the collision was at basename (e.g. 'a/b' and 'a/B'), the framework detects it by looking for EEXIST and EISDIR errors after an open(O_CREAT | O_EXCL) failure.

- If the collision was at dirname (e.g. 'a/b' and 'A'), it is detected at the has_dirs_only_path() check, which is done for the leading path of each item in the parallel checkout queue.

Both verifications rely on the fact that, before enqueueing an entry for parallel checkout, checkout_entry() makes sure that there is no file at the entry's path and that its leading components are all real directories. So, any later change in these conditions indicates that there was a collision (either between two parallel-eligible entries or between an eligible and an ineligible one).

After all parallel-eligible entries have been processed, the collided (and thus, skipped) entries are sequentially fed to checkout_entry() again. This is similar to the way the current code deals with collisions, overwriting the previously checked out entries with the subsequent ones. The only difference is that, since we no longer create the files in the same order that they appear on the index, we are not able to determine which of the colliding entries will survive on disk (for the classic code, it is always the last entry).

Co-authored-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Co-authored-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-16  doc/diff-options: document new --diff-merges features  Sergey Organov  (1 file, -4/+11)

Document changes in -m and --diff-merges=m semantics, as well as the new --diff-merges=on option.

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-16  diff-merges: introduce log.diffMerges config variable  Sergey Organov  (6 files, -0/+46)

The new log.diffMerges configuration variable sets the format that --diff-merges=on will be using. The default is "separate".

t4013: add the following tests for the log.diffMerges config:

* Test that wrong values are denied.
* Test that the value of log.diffMerges properly affects both --diff-merges=on and -m.

t9902: fix completion tests for log.d* to match log.diffMerges.

Add documentation for log.diffMerges.

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
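A sketch of how the variable and the option interact; this assumes "first-parent" is among the values the config accepts, as it is for --diff-merges itself:

    git config log.diffMerges first-parent
    git log -p --diff-merges=on   # merge commits now shown with first-parent diffs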
2021-04-16  diff-merges: adapt -m to enable default diff format  Sergey Organov  (1 file, -4/+4)

Let the -m option (and --diff-merges=m) enable the default format instead of "separate", to be able to tune it with the log.diffMerges option.

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-16  diff-merges: refactor set_diff_merges()  Sergey Organov  (1 file, -15/+21)

Split set_diff_merges() into separate parsing and execution functions, the former to be reused for parsing of configuration values later in the patch series.

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-16  diff-merges: introduce --diff-merges=on  Sergey Organov  (2 files, -0/+15)

Introduce the notion of a default diff format for merges, and the option "on" to select it. The default format is "separate" and can't yet be changed, so effectively "on" is just a synonym for "separate" for now. Add a corresponding test to t4013.

This is in preparation for introducing the log.diffMerges configuration option that will let --diff-merges=on be configured to any supported format.

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
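At this point in the series, the two invocations below behave identically; only with the later log.diffMerges patch does "on" become tunable:

    git log -p --diff-merges=on
    git log -p --diff-merges=separate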
2021-04-16  The eleventh (aka "ort") batch  Junio C Hamano  (1 file, -0/+6)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-16  Merge branch 'ah/merge-ort-ubsan-fix'  Junio C Hamano  (1 file, -14/+6)

Code clean-up for merge-ort backend.

* ah/merge-ort-ubsan-fix:
  merge-ort: only do pointer arithmetic for non-empty lists
2021-04-16  Merge branch 'en/ort-readiness'  Junio C Hamano  (16 files, -38/+446)

Plug the ort merge backend throughout the rest of the system, and start testing it as a replacement for the recursive backend.

* en/ort-readiness:
  Add testing with merge-ort merge strategy
  t6423: mark remaining expected failure under merge-ort as such
  Revert "merge-ort: ignore the directory rename split conflict for now"
  merge-recursive: add a bunch of FIXME comments documenting known bugs
  merge-ort: write $GIT_DIR/AUTO_MERGE whenever we hit a conflict
  t: mark several submodule merging tests as fixed under merge-ort
  merge-ort: implement CE_SKIP_WORKTREE handling with conflicted entries
  t6428: new test for SKIP_WORKTREE handling and conflicts
  merge-ort: support subtree shifting
  merge-ort: let renormalization change modify/delete into clean delete
  merge-ort: have ll_merge() use a special attr_index for renormalization
  merge-ort: add a special minimal index just for renormalization
  merge-ort: use STABLE_QSORT instead of QSORT where required
2021-04-16  Merge branch 'en/ort-perf-batch-10'  Junio C Hamano  (3 files, -47/+281)

Various rename detection optimizations to help the "ort" merge strategy backend.

* en/ort-perf-batch-10:
  diffcore-rename: determine which relevant_sources are no longer relevant
  merge-ort: record the reason that we want a rename for a file
  diffcore-rename: add computation of number of unknown renames
  diffcore-rename: check if we have enough renames for directories early on
  diffcore-rename: only compute dir_rename_count for relevant directories
  merge-ort: record the reason that we want a rename for a directory
  merge-ort, diffcore-rename: tweak dirs_removed and relevant_source type
  diffcore-rename: take advantage of "majority rules" to skip more renames
2021-04-16  maintenance: respect remote.*.skipFetchAll  Derrick Stolee  (2 files, -1/+10)

If a remote has the skipFetchAll setting enabled, then that remote is not intended for frequent fetching. It makes sense to not fetch that data during the 'prefetch' maintenance task. Skip that remote in the iteration without error.

The skip_default_update member is initialized in remote.c:handle_config() as part of initializing the 'struct remote'.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
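For example, to keep a rarely-used remote out of both "git fetch --all" and the prefetch maintenance task ("backup" is a placeholder remote name):

    git config remote.backup.skipFetchAll true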
2021-04-16  maintenance: use 'git fetch --prefetch'  Derrick Stolee  (3 files, -15/+12)

The 'prefetch' maintenance task previously forced the following refspec for each remote:

    +refs/heads/*:refs/prefetch/<remote>/*

If a user has specified a more strict refspec for the remote, then this prefetch task downloads more objects than necessary.

The previous change introduced the '--prefetch' option to 'git fetch' which manipulates the remote's refspec to place all resulting refs into refs/prefetch/, with further partitioning based on the destinations of those refspecs.

Update the documentation to be more generic about the destination refs. Do not mention custom refspecs explicitly, as that does not need to be highlighted in this documentation. The important part of placing refs in refs/prefetch/ remains.

Reported-by: Tom Saeger <tom.saeger@oracle.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
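A hand-run approximation of what the task now does per remote; every flag other than --prefetch is illustrative rather than a claim about the task's exact invocation:

    git fetch --prefetch --prune --no-tags --quiet origin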
2021-04-16  fetch: add --prefetch option  Derrick Stolee  (3 files, -1/+106)

The --prefetch option will be used by the 'prefetch' maintenance task instead of sending refspecs explicitly across the command-line. The intention is to modify the refspec to place all results in refs/prefetch/ instead of anywhere else.

Create helper method filter_prefetch_refspec() to modify a given refspec to fit the rules expected of the prefetch task:

* Negative refspecs are preserved.
* Refspecs without a destination are removed.
* Refspecs whose source starts with "refs/tags/" are removed.
* Other refspecs are placed within "refs/prefetch/".

Finally, we add the 'force' option to ensure that prefetch refs are replaced as necessary.

There are some interesting cases that are worth testing. An earlier version of this change dropped the "i--" from the loop that deletes a refspec item and shifts the remaining entries down. This allowed some refspecs to not be modified. The subtle part about the first --prefetch test is that the "refs/tags/*" refspec appears directly before the "refs/heads/bogus/*" refspec. Without that "i--", this ordering would remove the "refs/tags/*" refspec and leave the last one unmodified, placing the result in "refs/heads/*".

It is possible to have an empty refspec. This is typically the case for remotes other than the origin, where users want to fetch a specific tag or branch. To correctly test this case, we need to further remove the upstream remote for the local branch. Thus, we are testing a refspec that will be deleted, leaving nothing to fetch.

Helped-by: Tom Saeger <tom.saeger@oracle.com>
Helped-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
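To illustrate the filtering rules above, the sketch below shows how a configured refspec would be rewritten; the ref names are made up for the example:

    # configured:      +refs/heads/main:refs/remotes/origin/main
    # with --prefetch, the destination is moved under refs/prefetch/:
    #                  +refs/heads/main:refs/prefetch/remotes/origin/main
    git fetch --prefetch origin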
2021-04-16  msvc: avoid calling `access("NUL", flags)`  Johannes Schindelin  (1 file, -0/+2)

Apparently this is not supported with Microsoft's Universal C Runtime. So let's not actually do that. Instead, just return success because we _know_ that we expect the `NUL` device to be present.

Side note: it is possible to turn off the "Null device driver" and thereby disable `NUL`. Too many things are broken if this driver is disabled, therefore it is not worth bothering to try to detect its presence when `access()` is called.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-15  pkt-line: do not report packet write errors twice  Matheus Tavares  (1 file, -7/+24)

On write() errors, packet_write() dies with the same error message that is already printed by its callee, packet_write_gently(). This produces an unnecessarily verbose and repetitive output:

    error: packet write failed
    fatal: packet write failed: <strerror() message>

In addition to that, packet_write_gently() does not always fulfill its caller's expectation that errno will be properly set before a non-zero return. In particular, that is not the case for a "data exceeds max packet size" error. So, in this case, packet_write() will call die_errno() and print an strerror(errno) message that might be totally unrelated to the actual error.

Fix both of those issues by turning packet_write() and packet_write_gently() into wrappers around a common lower-level function that doesn't print the error message, but instead returns it in a buffer for the caller to die() or error() as appropriate.

Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-15  The tenth batch  Junio C Hamano  (1 file, -0/+12)
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-15  Merge branch 'jz/apply-3way-cached'  Junio C Hamano  (3 files, -5/+60)

"git apply" now takes "--3way" and "--cached" at the same time, and works and records results only in the index.

* jz/apply-3way-cached:
  git-apply: allow simultaneous --cached and --3way options
2021-04-15  Merge branch 'ab/complete-cherry-pick-head'  Junio C Hamano  (1 file, -1/+1)

The command line completion (in contrib/) has learned that CHERRY_PICK_HEAD is a possible pseudo-ref.

* ab/complete-cherry-pick-head:
  bash completion: complete CHERRY_PICK_HEAD
2021-04-15  Merge branch 'jz/apply-run-3way-first'  Junio C Hamano  (3 files, -10/+28)

"git apply --3way" has always been "to fall back to 3-way merge only when straight application fails". Swap the order of falling back so that 3-way is always attempted first (only when the option is given, of course) and then straight patch application is used as a fallback when it fails.

* jz/apply-run-3way-first:
  git-apply: try threeway first when "--3way" is used
2021-04-15  transport: respect verbosity when setting upstream  Øystein Walle  (2 files, -5/+12)

A command such as `git push -qu origin feature` will print "Branch 'feature' set up to track remote branch 'feature' from 'origin'." even when --quiet is passed. In this case it's because install_branch_config() is always called with BRANCH_CONFIG_VERBOSE.

struct transport keeps track of the desired verbosity. Fix the above issue by passing BRANCH_CONFIG_VERBOSE conditionally based on that.

Signed-off-by: Øystein Walle <oystwa@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
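With the fix, both of the spellings below stay silent about the tracking setup ("feature" is a placeholder branch name):

    git push -qu origin feature
    git push --quiet --set-upstream origin feature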
2021-04-14  doc: clarify "do not capitalize the first word" rule  Junio C Hamano  (2 files, -5/+13)

The same "do not capitalize the first word" rule is applied to both our patch titles and error messages, but the existing description was fuzzy in two aspects.

* For error messages, it was not said that this was only about the first word that begins the sentence.

* For both, it was not clear when a capital letter there was not an error.

We avoid capitalizing the first word when the only reason you would capitalize it is because it happens to be the first word in the sentence. If a proper noun, which is usually spelled in capital letters, happens to come at the beginning of the sentence, it should be kept in capital letters.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-14  name-hash: use expand_to_path()  Derrick Stolee  (1 file, -0/+4)

A sparse-index loads the name-hash data for its entries, including the sparse-directory entries. If a caller asks for a path that is contained within a sparse-directory entry, we need to expand to a full index and recalculate the name hash table before returning the result.

Insert calls to expand_to_path() to protect against this case.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-14  sparse-index: expand_to_path()  Derrick Stolee  (2 files, -0/+86)

Some users of the index API have a specific path they are looking for, but choose to use index_file_exists() to rely on the name-hash hashtable instead of doing binary search with index_name_pos(). These users only need to know a yes/no answer, not a position within the cache array.

When the index is sparse, the name-hash hash table does not contain the full list of paths within sparse directories. It _does_ contain the directory names for the sparse-directory entries.

Create a helper function, expand_to_path(), for intended use with the name-hash hashtable functions. The integration with name-hash.c will follow in a later change.

The solution here is to use ensure_full_index() when we determine that the requested path is within a sparse directory entry. This will populate the name-hash hashtable as the index is recomputed from scratch.

There may be cases where the caller is trying to find an untracked path that is not in the index but also is not within a sparse directory entry. We want to minimize the overhead for these requests. If we used index_name_pos() to find the insertion order of the path, then we could determine from that position if a sparse-directory exists. (In fact, just calling index_name_pos() in that case would lead to expanding the index to a full index.) However, this takes O(log N) time where N is the number of cache entries.

To keep the performance of this call based mostly on the input string, use index_file_exists() to look for the ancestors of the path. Using the heuristic that a sparse directory is likely to have a small number of parent directories, we start from the bottom and build up. Use a string buffer to allow mutating the path name to terminate after each slash for each hashset test.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-14  name-hash: don't add directories to name_hash  Derrick Stolee  (1 file, -2/+5)

Sparse directory entries represent a directory that is outside the sparse-checkout definition. These are not paths to blobs, so should not be added to the name_hash table. Instead, they should be added to the directory hashtable when 'ignore_case' is true.

Add a condition to avoid placing sparse directories into the name_hash hashtable. This avoids filling the table with extra entries that will never be queried.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-14  revision: ensure full index  Derrick Stolee  (1 file, -0/+2)

Before iterating over all index entries, ensure that a sparse index is expanded to a full index to avoid unexpected behavior.

This case could be integrated later by ensuring that we walk the tree in the sparse-directory entry, but the current behavior is only expecting blobs. Save this integration for later when it can be properly tested.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-14  resolve-undo: ensure full index  Derrick Stolee  (1 file, -0/+4)

Before iterating over all cache entries, ensure that a sparse index is expanded to a full index to avoid unexpected behavior.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-14  read-cache: ensure full index  Derrick Stolee  (1 file, -0/+4)

Before iterating over all cache entries, ensure that a sparse index is expanded to a full index to avoid unexpected behavior.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-04-14  pathspec: ensure full index  Derrick Stolee  (1 file, -0/+2)

Before iterating over all cache entries, ensure that a sparse index is expanded to a full index to avoid unexpected behavior.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>