Age | Commit message | Author | Files | Lines
2020-09-25 | maintenance: optionally skip --auto process | Derrick Stolee | 3 | -0/+24

Some commands run 'git maintenance run --auto --[no-]quiet' after doing their normal work, as a way to keep repositories clean as they are used.

Currently, users who do not want this maintenance to occur would set the 'gc.auto' config option to 0 to prevent the 'gc' task from running. However, this does not stop the extra process invocation. On Windows, this extra process invocation can be more expensive than necessary.

Allow users to drop this extra process by setting 'maintenance.auto' to 'false'.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
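For illustration, opting out of the extra subprocess with the config option introduced above is a one-liner:

    git config maintenance.auto false

After that, commands such as 'git commit' and 'git fetch' no longer spawn 'git maintenance run --auto' at all.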
2020-09-25 | maintenance: add incremental-repack auto condition | Derrick Stolee | 3 | -0/+68

The incremental-repack task updates the multi-pack-index by deleting pack-files that have been replaced with new packs, then repacking a batch of small pack-files into a larger pack-file. This incremental repack is faster than rewriting all object data, but is slower than some other maintenance activities.

The 'maintenance.incremental-repack.auto' config option specifies how many pack-files should exist outside of the multi-pack-index before running the step. These pack-files could be created by 'git fetch' commands or by the loose-objects task. The default value is 10. Setting the option to zero disables the task with the '--auto' option, and a negative value makes the task run every time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
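A sketch of tuning the threshold described above (the default of 10 comes from this commit; other values are illustrative):

    git config maintenance.incremental-repack.auto 20   # wait for 20 packs outside the multi-pack-index
    git config maintenance.incremental-repack.auto 0    # never run this task under --auto

A negative value, as noted above, forces the task to run on every '--auto' invocation.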
2020-09-25 | maintenance: auto-size incremental-repack batch | Derrick Stolee | 2 | -3/+76

When repacking during the 'incremental-repack' task, we use the --batch-size option in 'git multi-pack-index repack'. The initial setting used --batch-size=0 to repack everything into a single pack-file. This is not sustainable for a large repository. The amount of work required is also likely to use too many system resources for a background job.

Update the 'incremental-repack' task by dynamically computing a --batch-size option based on the current pack-file structure.

The dynamic default size is computed with this idea in mind for a client repository that was cloned from a very large remote: there is likely one "big" pack-file that was created at clone time. Thus, do not try repacking it as it is likely packed efficiently by the server. Instead, we select the second-largest pack-file, and create a batch size that is one larger than that pack-file. If there are three or more pack-files, then this guarantees that at least two will be combined into a new pack-file.

Of course, this means that the second-largest pack-file size is likely to grow over time and may eventually surpass the initially-cloned pack-file. Recall that the pack-file batch is selected in a greedy manner: the packs are considered from oldest to newest and are selected if they have size smaller than the batch size until the total selected size is larger than the batch size. Thus, that oldest "clone" pack will be first to repack after the new data creates a pack larger than that.

We also want to place some limits on how large these pack-files become, in order to bound the amount of time spent repacking. A maximum batch-size of two gigabytes means that large repositories will never be packed into a single pack-file using this job, but also that repack is rather expensive. This is a trade-off that is valuable to have if the maintenance is being run automatically or in the background. Users who truly want to optimize for space and performance (and are willing to pay the upfront cost of a full repack) can use the 'gc' task to do so.

Create a test for this two gigabyte limit by creating an EXPENSIVE test that generates two pack-files of roughly 2.5 gigabytes in size, then performs an incremental repack. Check that the --batch-size argument in the subcommand uses the hard-coded maximum.

Helped-by: Chris Torek <chris.torek@gmail.com>
Reported-by: Son Luong Ngoc <sluongng@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
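A worked example of the sizing rule above, with hypothetical pack sizes (reading "one larger" as one byte larger):

    pack-clone.pack    40 GB    # largest; assumed well-packed by the server, left alone
    pack-a.pack        900 MB   # second largest
    chosen batch size  900 MB + 1 byte

If the second-largest pack were instead larger than two gigabytes, the task would clamp the batch size to the two-gigabyte maximum described above.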
2020-09-25 | maintenance: add incremental-repack task | Derrick Stolee | 4 | -0/+133

The previous change cleaned up loose objects using the 'loose-objects' task, which can be run safely in the background. Add a similar job that performs similar cleanups for pack-files.

One issue with running 'git repack' is that it is designed to repack all pack-files into a single pack-file. While this is the most space-efficient way to store object data, it is not time or memory efficient. This becomes extremely important if the repo is so large that a user struggles to store two copies of the pack on their disk.

Instead, perform an "incremental" repack by collecting a few small pack-files into a new pack-file. The multi-pack-index facilitates this process ever since 'git multi-pack-index expire' was added in 19575c7 (multi-pack-index: implement 'expire' subcommand, 2019-06-10) and 'git multi-pack-index repack' was added in ce1e4a1 (midx: implement midx_repack(), 2019-06-10).

The 'incremental-repack' task runs the following steps:

1. 'git multi-pack-index write' creates a multi-pack-index file if one did not exist, and otherwise will update the multi-pack-index with any new pack-files that appeared since the last write. This is particularly relevant with the background fetch job.

   When the multi-pack-index sees two copies of the same object, it stores the offset data into the newer pack-file. This means that some old pack-files could become "unreferenced", which I will use to mean "a pack-file that is in the pack-file list of the multi-pack-index but none of the objects in the multi-pack-index reference a location inside that pack-file."

2. 'git multi-pack-index expire' deletes any unreferenced pack-files and updates the multi-pack-index to drop those pack-files from the list. This is safe to do as concurrent Git processes will see the multi-pack-index and not open those packs when looking for object contents. (Similar to the 'loose-objects' job, there are some Git commands that open pack-files regardless of the multi-pack-index, but they are rarely used. Further, a user that self-selects to use background operations would likely refrain from using those commands.)

3. 'git multi-pack-index repack --batch-size=<size>' collects a set of pack-files that are listed in the multi-pack-index and creates a new pack-file containing the objects whose offsets are listed by the multi-pack-index to be in those pack-files. The set of pack-files is selected greedily by sorting the pack-files by modified time and adding a pack-file to the set if its "expected size" is smaller than the batch size until the total expected size of the selected pack-files is at least the batch size. The "expected size" is calculated by taking the size of the pack-file divided by the number of objects in the pack-file and multiplied by the number of objects from the multi-pack-index with offset in that pack-file. The expected size approximates how much data from that pack-file will contribute to the resulting pack-file size. The intention is that the resulting pack-file will be close in size to the provided batch size.

The next run of the incremental-repack task will delete these repacked pack-files during the 'expire' step.

In this version, the batch size is set to "0" which ignores the size restrictions when selecting the pack-files. It instead selects all pack-files and repacks all packed objects into a single pack-file. This will be updated in the next change, but it requires doing some calculations that are better isolated to a separate change.
These steps are based on a similar background maintenance step in Scalar (and VFS for Git) [1]. This was incredibly effective for users of the Windows OS repository. After using the same VFS for Git repository for over a year, some users had _thousands_ of pack-files that combined to up to 250 GB of data. We noticed a few users were running into the open file descriptor limits (due in part to a bug in the multi-pack-index fixed by af96fe3 (midx: add packs to packed_git linked list, 2019-04-29)).

These pack-files were mostly small since they contained the commits and trees that were pushed to the origin in a given hour. The GVFS protocol includes a "prefetch" step that asks for pre-computed pack-files containing commits and trees by timestamp. These pack-files were grouped into "daily" pack-files once a day for up to 30 days. If a user did not request prefetch packs for over 30 days, then they would get the entire history of commits and trees in a new, large pack-file. This led to a large number of pack-files that had poor delta compression.

By running this pack-file maintenance step once per day, these repos with thousands of packs spanning 200+ GB dropped to dozens of pack-files spanning 30-50 GB. This was done all without removing objects from the system and using a constant batch size of two gigabytes. Once the work was done to reduce the pack-files to small sizes, the batch size of two gigabytes means that not every run triggers a repack operation, so the following run will not expire a pack-file. This has kept these repos in a "clean" state.

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/PackfileMaintenanceStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
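In command form, one pass of the task amounts to the sequence below; <size> stands for the batch size, which is 0 at this point in the series and is computed dynamically by a later change:

    git multi-pack-index write
    git multi-pack-index expire
    git multi-pack-index repack --batch-size=<size>

Each step is designed to be safe to run while other Git processes read from the same object directory, for the reasons given above.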
2020-09-25 | midx: use start_delayed_progress() | Derrick Stolee | 2 | -12/+12

Now that the multi-pack-index may be written as part of auto maintenance at the end of a command, reduce the progress output when the operations are quick. Use start_delayed_progress() instead of start_progress().

Update t5319-multi-pack-index.sh to use GIT_PROGRESS_DELAY=0 now that the progress indicators are conditional.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-25 | midx: enable core.multiPackIndex by default | Derrick Stolee | 4 | -10/+13

The core.multiPackIndex setting has been around since c4d25228ebb (config: create core.multiPackIndex setting, 2018-07-12), but has been disabled by default. If a user wishes to use the multi-pack-index feature, then they must enable this config and run 'git multi-pack-index write'.

The multi-pack-index feature is relatively stable now, so make the config option true by default. For users that do not use a multi-pack-index, the only extra cost will be a file lookup to see if a multi-pack-index file exists (once per process, per object directory).

Also, this config option will be referenced by an upcoming "incremental-repack" task in the maintenance builtin, so move the config option into the repository settings struct.

Note that if GIT_TEST_MULTI_PACK_INDEX=1, then we want to ignore the config option and treat core.multiPackIndex as enabled.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
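For illustration, a repository that prefers the old behaviour can opt out explicitly with the first command below, while one that wants the file immediately can create it with the second:

    git config core.multiPackIndex false
    git multi-pack-index write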
2020-09-25 | maintenance: create auto condition for loose-objects | Derrick Stolee | 3 | -0/+61

The loose-objects task deletes loose objects that already exist in a pack-file, then places the remaining loose objects into a new pack-file. If this step runs all the time, then we risk creating pack-files with very few objects with every 'git commit' process. To prevent overwhelming the packs directory with small pack-files, require a minimum number of loose objects to justify the task.

The 'maintenance.loose-objects.auto' config option specifies a minimum number of loose objects to justify the task to run under the '--auto' option. This defaults to 100 loose objects. Setting the value to zero will prevent the step from running under '--auto' while a negative value will force it to run every time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
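A sketch of adjusting the threshold described above (values other than the default of 100 are illustrative):

    git config maintenance.loose-objects.auto 1000   # wait for at least 1000 loose objects
    git config maintenance.loose-objects.auto 0      # never run this task under --auto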
2020-09-25 | maintenance: add loose-objects task | Derrick Stolee | 3 | -0/+151

One goal of background maintenance jobs is to allow a user to disable auto-gc (gc.auto=0) but keep their repository in a clean state. Without any cleanup, loose objects will clutter the object database and slow operations. In addition, the loose objects will take up extra space because they are not stored with deltas against similar objects.

Create a 'loose-objects' task for the 'git maintenance run' command. This helps clean up loose objects without disrupting concurrent Git commands using the following sequence of events:

1. Run 'git prune-packed' to delete any loose objects that exist in a pack-file. Concurrent commands will prefer the packed version of the object to the loose version. (Of course, there are exceptions for commands that specifically care about the location of an object. These are rare for a user to run on purpose, and we hope a user that has selected background maintenance will not be trying to do foreground maintenance.)

2. Run 'git pack-objects' on a batch of loose objects. These objects are grouped by scanning the loose object directories in lexicographic order until listing all loose objects -or- reaching 50,000 objects. This is more than enough if the loose objects are created only by a user doing normal development. We noticed users with _millions_ of loose objects because VFS for Git downloads blobs on-demand when a file read operation requires populating a virtual file.

This step is based on a similar step in Scalar [1] and VFS for Git.

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/LooseObjectsStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
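For illustration, the task can be invoked directly or enabled for automatic runs using the '--task' option and the 'maintenance.<task>.enabled' config that appear elsewhere in this series:

    git maintenance run --task=loose-objects
    git config maintenance.loose-objects.enabled true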
2020-09-25 | maintenance: add prefetch task | Derrick Stolee | 3 | -0/+92

When working with very large repositories, an incremental 'git fetch' command can download a large amount of data. If there are many other users pushing to a common repo, then this data can rival the initial pack-file size of a 'git clone' of a medium-size repo.

Users may want to keep the data on their local repos as close as possible to the data on the remote repos by fetching periodically in the background. This can break up a large daily fetch into several smaller hourly fetches.

The task is called "prefetch" because it is work done in advance of a foreground fetch to make that 'git fetch' command much faster.

However, if we simply ran 'git fetch <remote>' in the background, then the user running a foreground 'git fetch <remote>' would lose some important feedback when a new branch appears or an existing branch updates. This is especially true if a remote branch is force-updated and this isn't noticed by the user because it occurred in the background. Further, the functionality of 'git push --force-with-lease' becomes suspect.

When running 'git fetch <remote> <options>' in the background, use the following options for careful updating:

1. --no-tags prevents getting a new tag when a user wants to see the new tags appear in their foreground fetches.

2. --refmap= removes the configured refspec which usually updates refs/remotes/<remote>/* with the refs advertised by the remote. While this looks confusing, this was documented and tested by b40a50264ac (fetch: document and test --refmap="", 2020-01-21), including this sentence in the documentation:

   Providing an empty `<refspec>` to the `--refmap` option causes Git to ignore the configured refspecs and rely entirely on the refspecs supplied as command-line arguments.

3. By adding a new refspec "+refs/heads/*:refs/prefetch/<remote>/*" we can ensure that we actually load the new values somewhere in our refspace while not updating refs/heads or refs/remotes. By storing these refs here, the commit-graph job will update the commit-graph with the commits from these hidden refs.

4. --prune will delete the refs/prefetch/<remote> refs that no longer appear on the remote.

5. --no-write-fetch-head prevents updating FETCH_HEAD.

We've been using this step as a critical background job in Scalar [1] (and VFS for Git). This solved a pain point that was showing up in user reports: fetching was a pain! Users do not like waiting to download the data that was created while they were away from their machines. After implementing background fetch, the foreground fetch commands sped up significantly because they mostly just update refs and download a small amount of new data. The effect is especially dramatic when paired with --no-show-forced-updates (through fetch.showForcedUpdates=false).

[1] https://github.com/microsoft/scalar/blob/master/Scalar.Common/Maintenance/FetchStep.cs

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
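Putting the options above together, the background job runs something close to the following for each remote (a sketch; the exact argument list lives in the prefetch task's implementation):

    git fetch <remote> --prune --no-tags --no-write-fetch-head \
        --refmap= "+refs/heads/*:refs/prefetch/<remote>/*"

This leaves refs/remotes/<remote>/* and FETCH_HEAD entirely under the control of foreground fetches.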
2020-09-17 | maintenance: add trace2 regions for task execution | Derrick Stolee | 1 | -0/+2

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 | maintenance: add auto condition for commit-graph task | Derrick Stolee | 3 | -0/+93

Instead of writing a new commit-graph in every 'git maintenance run --auto' process (when maintenance.commit-graph.enabled is configured to be true), only write when there are "enough" commits not in a commit-graph file.

This count is controlled by the maintenance.commit-graph.auto config option.

To compute the count, use a depth-first search starting at each ref, and leaving markers using the SEEN flag. If this count reaches the limit, then terminate early and start the task. Otherwise, this operation will peel every ref and parse the commit it points to. If these are all in the commit-graph, then this is typically a very fast operation. Users with many refs might feel a slow-down, and hence could consider updating their limit to be very small. A negative value will force the step to run every time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
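For example, a user who wants the commit-graph refreshed much more eagerly could lower the limit described above (the value is illustrative):

    git config maintenance.commit-graph.auto 10   # write once 10 commits are missing from the commit-graph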
2020-09-17 | maintenance: use pointers to check --auto | Derrick Stolee | 3 | -2/+18

The 'git maintenance run' command has an '--auto' option. This is used by other Git commands such as 'git commit' or 'git fetch' to check if maintenance should be run after adding data to the repository.

Previously, this --auto option was only used to add the argument to the 'git gc' command as part of the 'gc' task. We will be expanding the other tasks to perform a check to see if they should do work as part of the --auto flag, when they are enabled by config.

First, update the 'gc' task to perform the auto check inside the maintenance process. This prevents running an extra 'git gc --auto' command when not needed. It also shows a model for other tasks.

Second, use the 'auto_condition' function pointer as a signal for whether we enable the maintenance task under '--auto'. For instance, we do not want to enable the 'fetch' task in '--auto' mode, so that function pointer will remain NULL.

Now that we are not automatically calling 'git gc', a test in t5514-fetch-multiple.sh must be changed to watch for 'git maintenance' instead.

We continue to pass the '--auto' option to the 'git gc' command when necessary, because the gc.autoDetach config option changes behavior. Likely, we will want to absorb the daemonizing behavior implied by gc.autoDetach as a maintenance.autoDetach config option.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 | maintenance: create maintenance.<task>.enabled config | Derrick Stolee | 5 | -5/+44

Currently, a normal run of "git maintenance run" will only run the 'gc' task, as it is the only one enabled. This is mostly for backwards-compatible reasons since "git maintenance run --auto" commands replaced previous "git gc --auto" commands after some Git processes. Users could manually run specific maintenance tasks by calling "git maintenance run --task=<task>" directly.

Allow users to customize which steps are run automatically using config. The 'maintenance.<task>.enabled' option then can turn on these other tasks (or turn off the 'gc' task).

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
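For illustration, a configuration that trades the default 'gc' task for the commit-graph task introduced earlier in this series might look like:

    git config maintenance.gc.enabled false
    git config maintenance.commit-graph.enabled true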
2020-09-17 | maintenance: take a lock on the objects directory | Derrick Stolee | 1 | -0/+20

Performing maintenance on a Git repository involves writing data to the .git directory, which is not safe to do with multiple writers attempting the same operation. Ensure that only one 'git maintenance' process is running at a time by holding a file-based lock. The presence of the .git/maintenance.lock file alone will prevent future maintenance. This lock is never committed, since it does not represent meaningful data. Instead, it is only a placeholder.

If the lock file already exists, then no maintenance tasks are attempted. This will become very important later when we implement the 'prefetch' task, as this is our stop-gap from creating a recursive process loop between 'git fetch' and 'git maintenance run --auto'.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 | maintenance: add --task option | Derrick Stolee | 3 | -4/+98

A user may want to only run certain maintenance tasks in a certain order. Add the --task=<task> option, which allows a user to specify an ordered list of tasks to run. These cannot be run multiple times, however.

Here is where our array of maintenance_task pointers becomes critical. We can sort the array of pointers based on the task order, but we do not want to move the struct data itself in order to preserve the hashmap references. We use the hashmap to match the --task=<task> arguments into the task struct data.

Keep in mind that the 'enabled' member of the maintenance_task struct is a placeholder for a future 'maintenance.<task>.enabled' config option. Thus, we use the 'enabled' member to specify which tasks are run when the user does not specify any --task=<task> arguments. The 'enabled' member should be ignored if --task=<task> appears.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
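For example, to run exactly two of the tasks available at this point in the series, in a chosen order:

    git maintenance run --task=commit-graph --task=gc

Naming the same task twice on one command line is rejected, as noted above.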
2020-09-17 | maintenance: add commit-graph task | Derrick Stolee | 5 | -4/+45

The first new task in the 'git maintenance' builtin is the 'commit-graph' task. This updates the commit-graph file incrementally with the command

    git commit-graph write --reachable --split

By writing an incremental commit-graph file using the "--split" option we minimize the disruption from this operation. The default behavior is to merge layers until the new "top" layer is less than half the size of the layer below. This provides quick writes most of the time, with the longer writes following a power law distribution.

Most importantly, concurrent Git processes only look at the commit-graph-chain file for a very short amount of time, so they will very likely not be holding a handle to the file when we try to replace it. (This only matters on Windows.)

If a concurrent process reads the old commit-graph-chain file, but our job expires some of the .graph files before they can be read, then those processes will see a warning message (but not fail). This could be avoided by a future update to use the --expire-time argument when writing the commit-graph.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 | maintenance: initialize task array | Derrick Stolee | 1 | -1/+42

In anticipation of implementing multiple maintenance tasks inside the 'maintenance' builtin, use a list of structs to describe the work to be done.

The struct maintenance_task stores the name of the task (as given by a future command-line argument) along with a function pointer to its implementation and a boolean for whether the step is enabled.

A list of these structs is initialized with the full list of implemented tasks along with a default order. For now, this list only contains the "gc" task. This task is also the only task enabled by default.

The run subcommand will return a nonzero exit code if any task fails. However, it will attempt all tasks in its loop before returning with the failure. Also each failed task will print an error message.

Helped-by: Taylor Blau <me@ttaylorr.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 | maintenance: replace run_auto_gc() | Derrick Stolee | 10 | -23/+25

The run_auto_gc() method is used in several places to trigger a check for repo maintenance after some Git commands, such as 'git commit' or 'git fetch'.

To allow for extra customization of this maintenance activity, replace the 'git gc --auto [--quiet]' call with one to 'git maintenance run --auto [--quiet]'. As we extend the maintenance builtin with other steps, users will be able to select different maintenance activities.

Rename run_auto_gc() to run_auto_maintenance() to be clearer what is happening on this call, and to expose all callers in the current diff. Rewrite the method to use a struct child_process to simplify the calls slightly.

Since 'git fetch' already allows disabling the 'git gc --auto' subprocess, add an equivalent option with a different name to be more descriptive of the new behavior: '--[no-]maintenance'. Update the documentation to include these options at the same time.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
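For illustration, the new option described above lets a user skip the follow-up maintenance for a single fetch:

    git fetch --no-maintenance origin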
2020-09-17 | maintenance: add --quiet option | Derrick Stolee | 3 | -6/+23

Maintenance activities are commonly used as steps in larger scripts. Providing a '--quiet' option allows those scripts to be less noisy when run on a terminal window. Turn this mode on by default when stderr is not a terminal. Pipe the option to the 'git gc' child process.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-09-17 | maintenance: create basic maintenance runner | Derrick Stolee | 8 | -0/+175

The 'gc' builtin is our current entrypoint for automatically maintaining a repository. This one tool does many operations, such as repacking the repository, packing refs, and rewriting the commit-graph file. The name implies it performs "garbage collection" which means several different things, and some users may not want to use this operation that rewrites the entire object database.

Create a new 'maintenance' builtin that will become a more general-purpose command. To start, it will only support the 'run' subcommand, but will later expand to add subcommands for scheduling maintenance in the background.

For now, the 'maintenance' builtin is a thin shim over the 'gc' builtin. In fact, the only option is the '--auto' toggle, which is handed directly to the 'gc' builtin. The current change is isolated to this simple operation to prevent more interesting logic from being lost in all of the boilerplate of adding a new builtin.

Use existing builtin/gc.c file because we want to share code between the two builtins. It is possible that we will have 'maintenance' replace the 'gc' builtin entirely at some point, leaving 'git gc' as an alias for some specific arguments to 'git maintenance run'.

Create a new test_subcommand helper that allows us to test if a certain subcommand was run. It requires storing the GIT_TRACE2_EVENT logs in a file. A negation mode is available that will be used in later tests.

Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
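At this stage the new command's surface is intentionally tiny; a sketch of what it accepts:

    git maintenance run          # run the 'gc' task in the foreground
    git maintenance run --auto   # hand --auto through to the 'gc' builtin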
2020-08-18 | fetch: optionally allow disabling FETCH_HEAD update | Junio C Hamano | 4 | -5/+41

If you run fetch but record the result in remote-tracking branches, and either if you do nothing with the fetched refs (e.g. you are merely mirroring) or if you always work from the remote-tracking refs (e.g. you fetch and then merge origin/branchname separately), you can get away with having no FETCH_HEAD at all.

Teach "git fetch" a command line option "--[no-]write-fetch-head". The default is to write FETCH_HEAD, and the option is primarily meant to be used with the "--no-" prefix to override this default, because there is no matching fetch.writeFetchHEAD configuration variable to flip the default to off (in which case, the positive form may become necessary to defeat it).

Note that under "--dry-run" mode, FETCH_HEAD is never written; otherwise you'd see a list of objects in the file that you do not actually have. Passing `--write-fetch-head` does not force `git fetch` to write the file.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
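For example, a mirroring setup that never looks at FETCH_HEAD could fetch with:

    git fetch --no-write-fetch-head origin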
2020-08-17 | Eighth batch | Junio C Hamano | 1 | -0/+36

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-17 | Merge branch 'so/log-diff-merges-opt' | Junio C Hamano | 5 | -2/+171

Earlier, to countermand the implicit "-m" option when the "--first-parent" option is used with "git log", we added the "--[no-]diff-merges" option in the jk/log-fp-implies-m topic. To leave the door open to allow the "--diff-merges" option to take values that instruct how patches for merge commits should be computed (e.g. "cc"? "-p against first parent"?), redefine "--diff-merges" to take a non-optional value, and implement "off" that means the same thing as "--no-diff-merges".

* so/log-diff-merges-opt:
  t/t4013: add test for --diff-merges=off
  doc/git-log: describe --diff-merges=off
  revision: change "--diff-merges" option to require parameter
2020-08-17 | Merge branch 'jk/log-fp-implies-m' | Junio C Hamano | 9 | -55/+158

"git log --first-parent -p" showed patches only for single-parent commits on the first-parent chain; the "--first-parent" option has been made to imply "-m". Use "--no-diff-merges" to restore the previous behaviour to omit patches for merge commits.

* jk/log-fp-implies-m:
  doc/git-log: clarify handling of merge commit diffs
  doc/git-log: move "-t" into diff-options list
  doc/git-log: drop "-r" diff option
  doc/git-log: move "Diff Formatting" from rev-list-options
  log: enable "-m" automatically with "--first-parent"
  revision: add "--no-diff-merges" option to counteract "-m"
  log: drop "--cc implies -m" logic
2020-08-17 | Merge branch 'ma/stop-progress-null-fix' | Junio C Hamano | 1 | -2/+10

NULL dereference fix.

* ma/stop-progress-null-fix:
  progress: don't dereference before checking for NULL
2020-08-17 | Merge branch 'es/test-cmp-typocatcher' | Junio C Hamano | 1 | -2/+14

Test framework update.

* es/test-cmp-typocatcher:
  test_cmp: diagnose incorrect arguments
2020-08-17 | Merge branch 'rp/apply-cached-with-i-t-a' | Junio C Hamano | 2 | -4/+77

Recent versions of "git diff-files" show a diff between the index and the working tree for "intent-to-add" paths as a "new file" patch; "git apply --cached" should be able to take that output and act as an equivalent to "git add" for the path, but the command failed to do so for such a path.

* rp/apply-cached-with-i-t-a:
  t4140: test apply with i-t-a paths
  apply: make i-t-a entries never match worktree
  apply: allow "new file" patches on i-t-a entries
2020-08-17 | Merge branch 'al/bisect-first-parent' | Junio C Hamano | 11 | -103/+195

"git bisect" learns the "--first-parent" option to find the first breakage along the first-parent chain.

* al/bisect-first-parent:
  bisect: combine args passed to find_bisection()
  bisect: introduce first-parent flag
  cmd_bisect__helper: defer parsing no-checkout flag
  rev-list: allow bisect and first-parent flags
  t6030: modernize "git bisect run" tests
2020-08-17 | Merge branch 'jk/sideband-error-l10n' | Junio C Hamano | 1 | -1/+1

Mark error message for i18n.

* jk/sideband-error-l10n:
  sideband: mark "remote error:" prefix for translation
2020-08-17 | Merge branch 'jc/noop-with-static-inline' | Junio C Hamano | 1 | -5/+15

A no-op replacement function implemented as a C preprocessor macro does not perform as good a job as one implemented as a "static inline" function in catching errors in parameters; replace the former with the latter in the <git-compat-util.h> header.

* jc/noop-with-static-inline:
  compat-util: type-check parameters of no-op replacement functions
2020-08-17 | Merge branch 'pd/mergetool-nvimdiff' | Junio C Hamano | 9 | -18/+51

The existing backends for "git mergetool" based on variants of vim have been refactored and then support for "nvim" has been added.

* pd/mergetool-nvimdiff:
  mergetools: add support for nvimdiff (neovim) family
  mergetool--lib: improve support for vimdiff-style tool variants
2020-08-17 | Merge branch 'hn/reftable-prep-part-2' | Junio C Hamano | 4 | -139/+36

Further preliminary change to refs API.

* hn/reftable-prep-part-2:
  Make HEAD a PSEUDOREF rather than PER_WORKTREE.
  Modify pseudo refs through ref backend storage
  t1400: use git rev-parse for testing PSEUDOREF existence
2020-08-17 | Merge branch 'dd/send-email-config' | Junio C Hamano | 4 | -0/+68

Stop when "sendmail.*" configuration variables are defined, which could be a mistaken attempt to define "sendemail.*" variables.

* dd/send-email-config:
  git-send-email: die if sendmail.* config is set
2020-08-17 | Merge branch 'ps/ref-transaction-hook' | Junio C Hamano | 2 | -1/+28

The logic to find the ref transaction hook script attempted to cache the path to the found hook without realizing that it needed to keep a copied value, as the API it used returned a transitory buffer space. This has been corrected.

* ps/ref-transaction-hook:
  t1416: avoid hard-coded sha1 ids
  refs: fix interleaving hook calls with reference-transaction hook
2020-08-13 | Seventh batch | Junio C Hamano | 1 | -0/+9

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-13 | Merge branch 'rp/blame-first-parent-doc' | Junio C Hamano | 1 | -0/+6

The "git blame --first-parent" option was not documented, but now it is.

* rp/blame-first-parent-doc:
  blame-options.txt: document --first-parent option
2020-08-13 | Merge branch 'ma/test-quote-cleanup' | Junio C Hamano | 18 | -80/+53

Test cleanup.

* ma/test-quote-cleanup:
  t4104: modernize and simplify quoting
  t: don't spuriously close and reopen quotes
2020-08-13 | Merge branch 'jt/has_object' | Junio C Hamano | 7 | -10/+62

A new helper function has_object() has been introduced to make it easier to mark object existence checks that do and don't want to trigger lazy fetches, and a few such checks are converted using it.

* jt/has_object:
  fsck: do not lazy fetch known non-promisor object
  pack-objects: no fetch when allow-{any,promisor}
  apply: do not lazy fetch when applying binary
  sha1-file: introduce no-lazy-fetch has_object()
2020-08-13 | Merge branch 'bc/sha-256-cvs-svn-updates' | Junio C Hamano | 1 | -1/+1

Portability fix.

* bc/sha-256-cvs-svn-updates:
  git-cvsexportcommit: support Perl before 5.10.1
2020-08-11 | Sixth batch | Junio C Hamano | 1 | -0/+11

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-11 | Merge branch 'ss/cmake-build' | Junio C Hamano | 2 | -15/+1024

CMake support to build with MSVC for Windows bypassing the Makefile.

* ss/cmake-build:
  ci: modification of main.yml to use cmake for vs-build job
  cmake: support for building git on windows with msvc and clang.
  cmake: support for building git on windows with mingw
  cmake: support for testing git when building out of the source tree
  cmake: support for testing git with ctest
  cmake: installation support for git
  cmake: generate the shell/perl/python scripts and templates, translations
  Introduce CMake support for configuring Git
2020-08-11 | Merge branch 'tb/upload-pack-filters' | Junio C Hamano | 5 | -0/+184

The component that responds to "git fetch" requests is made more configurable to selectively allow or reject object filtering specifications used for partial cloning.

* tb/upload-pack-filters:
  t5616: use test_i18ngrep for upload-pack errors
  upload-pack.c: introduce 'uploadpackfilter.tree.maxDepth'
  upload-pack.c: allow banning certain object filter(s)
  list_objects_filter_options: introduce 'list_object_filter_config_name'
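A sketch of the resulting server-side knobs (the tree.maxDepth key comes from the commit subject above; the allow keys follow the same uploadpackfilter.* scheme, and the values are illustrative):

    git config uploadpackfilter.allow false
    git config uploadpackfilter.blob:none.allow true
    git config uploadpackfilter.tree.maxDepth 4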
2020-08-11 | Merge branch 'es/worktree-doc-cleanups' | Junio C Hamano | 1 | -61/+62

Doc cleanup around "worktree".

* es/worktree-doc-cleanups:
  git-worktree.txt: link to man pages when citing other Git commands
  git-worktree.txt: make start of new sentence more obvious
  git-worktree.txt: fix minor grammatical issues
  git-worktree.txt: consistently use term "working tree"
  git-worktree.txt: employ fixed-width typeface consistently
2020-08-11 | Merge branch 'bc/sha-256-part-3' | Junio C Hamano | 74 | -351/+633

The final leg of SHA-256 transition.

* bc/sha-256-part-3: (39 commits)
  t: remove test_oid_init in tests
  docs: add documentation for extensions.objectFormat
  ci: run tests with SHA-256
  t: make SHA1 prerequisite depend on default hash
  t: allow testing different hash algorithms via environment
  t: add test_oid option to select hash algorithm
  repository: enable SHA-256 support by default
  setup: add support for reading extensions.objectformat
  bundle: add new version for use with SHA-256
  builtin/verify-pack: implement an --object-format option
  http-fetch: set up git directory before parsing pack hashes
  t0410: mark test with SHA1 prerequisite
  t5308: make test work with SHA-256
  t9700: make hash size independent
  t9500: ensure that algorithm info is preserved in config
  t9350: make hash size independent
  t9301: make hash size independent
  t9300: use $ZERO_OID instead of hard-coded object ID
  t9300: abstract away SHA-1-specific constants
  t8011: make hash size independent
  ...
2020-08-11 | t/t4013: add test for --diff-merges=off | Sergey Organov | 3 | -0/+158

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-11 | doc/git-log: describe --diff-merges=off | Sergey Organov | 1 | -1/+5

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-11 | revision: change "--diff-merges" option to require parameter | Sergey Organov | 1 | -1/+8

--diff-merges=off is the only accepted form for now, a synonym for --no-diff-merges.

This patch is a preparation for adding more values, as well as supporting --diff-merges=<parent>, where <parent> is a single parent number to output the diff against.

Signed-off-by: Sergey Organov <sorganov@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-11 | t1416: avoid hard-coded sha1 ids | Jeff King | 1 | -2/+3

The test added by e5256c82e5 (refs: fix interleaving hook calls with reference-transaction hook, 2020-08-07) uses hard-coded sha1 object ids in its expected output. This causes it to fail when run with GIT_TEST_DEFAULT_HASH=sha256.

Let's make use of the oid variables we define earlier, as the rest of the nearby tests do.

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-10 | progress: don't dereference before checking for NULL | Martin Ågren | 1 | -2/+10

In `stop_progress()`, we're careful to check that `p_progress` is non-NULL before we dereference it, but by then we have already dereferenced it when calling `finish_if_sparse(*p_progress)`. And, for what it's worth, we'll go on to blindly dereference it again inside `stop_progress_msg()`.

We could return early if we get a NULL-pointer, but let's go one step further and BUG instead. The progress API handles NULL just fine, but that's the NULL-ness of `*p_progress`, e.g., when running with `--no-progress`. If `p_progress` is NULL, chances are that's a mistake. For symmetry, let's do the same check in `stop_progress_msg()`, too.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-08-10 | Fifth batch | Junio C Hamano | 1 | -0/+15

Signed-off-by: Junio C Hamano <gitster@pobox.com>