path: root/builtin/gc.c
2017-10-18  Merge branch 'aw/gc-lockfile-fscanf-fix' into maint  (Junio C Hamano, 1 file, -1/+1)
"git gc" tries to avoid running two instances at the same time by reading and writing pid/host from and to a lock file; it used to use an incorrect fscanf() format when reading, which has been corrected. * aw/gc-lockfile-fscanf-fix: gc: call fscanf() with %<len>s, not %<len>c, when reading hostname
2017-09-17  gc: call fscanf() with %<len>s, not %<len>c, when reading hostname  (Junio C Hamano, 1 file, -1/+1)
Earlier in this codepath, we (ab)used "%<len>c" to read the hostname recorded in the lockfile into locking_host[HOST_NAME_MAX + 1] while substituting <len> with the actual value of HOST_NAME_MAX. This turns out to be incorrect, as it is an instruction to read exactly the specified number of bytes. Because we are trying to read at most that many bytes, we should be using "%<len>s" instead. Helped-by: A. Wilcox <awilfox@adelielinux.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
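A minimal standalone illustration of the difference (not git's code; the 256-byte size is just for the example): "%<len>c" demands exactly <len> characters and never adds a terminating NUL, while "%<len>s" reads a whitespace-delimited token of at most <len> characters and NUL-terminates it.

    #include <stdio.h>

    int main(void)
    {
        const char *line = "12345 myhost";  /* a gc.pid-style line: "<pid> <hostname>" */
        char host[256 + 1] = "";
        int pid = 0, n;

        /* "%256c" asks for exactly 256 characters and does not NUL-terminate */
        n = sscanf(line, "%d %256c", &pid, host);
        printf("with %%256c, sscanf matched %d item(s)\n", n);

        /* "%256s" reads at most 256 non-whitespace characters and NUL-terminates */
        n = sscanf(line, "%d %256s", &pid, host);
        printf("with %%256s, sscanf matched %d item(s): '%s'\n", n, host);
        return 0;
    }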
2017-07-18  Merge branch 'jk/gc-pre-detach-under-hook'  (Junio C Hamano, 1 file, -0/+4)
We run an early part of "git gc" that deals with refs before daemonising (and not under lock) even when running a background auto-gc, which could cause multiple gc processes to run that early part at the same time. This is now prevented by running the early part under the GC lock as well.

* jk/gc-pre-detach-under-hook:
  gc: run pre-detach operations under lock
2017-07-12  Merge branch 'rs/use-div-round-up'  (Junio C Hamano, 1 file, -1/+1)
Code cleanup. * rs/use-div-round-up: use DIV_ROUND_UP
2017-07-12  gc: run pre-detach operations under lock  (Jeff King, 1 file, -0/+4)
We normally try to avoid having two auto-gc operations run at the same time, because it wastes resources. This was done long ago in 64a99eb47 (gc: reject if another gc is running, unless --force is given, 2013-08-08).

When we do a detached auto-gc, we run the ref-related commands _before_ detaching, to avoid confusing lock contention. This was done by 62aad1849 (gc --auto: do not lock refs in the background, 2014-05-25).

These two features do not interact well. The pre-detach operations are run before we check the gc.pid lock, meaning that on a busy repository we may run many of them concurrently. Ideally we'd take the lock before spawning any operations, and hold it for the duration of the program.

This is tricky, though, with the way the pid-file interacts with the daemonize() process. Other processes will check that the pid recorded in the pid-file still exists. But detaching causes us to fork and continue running under a new pid. So if we take the lock before detaching, the pid-file will have a bogus pid in it. We'd have to go back and update it with the new pid after detaching. We'd also have to play some tricks with the tempfile subsystem to tweak the "owner" field, so that the parent process does not clean it up on exit, but the child process does.

Instead, we can do something a bit simpler: take the lock only for the duration of the pre-detach work, then detach, then take it again for the post-detach work. Technically, this means that the post-detach lock could lose to another process doing pre-detach work. But in the long run this works out.

That second process would then follow up by doing post-detach work. Unless it was in turn blocked by a third process doing pre-detach work, and so on. This could in theory go on indefinitely, as the pre-detach work does not repack, and so need_to_gc() will continue to trigger. But in each round we are racing between the pre- and post-detach locks. Eventually, one of the post-detach locks will win the race and complete the full gc. So in the worst case, we may racily repeat the pre-detach work, but we would never do so simultaneously (it would happen via a sequence of serialized race-wins).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
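A standalone sketch of the locking pattern described above (illustrative only; the file name, the error handling, and the plain fork() are simplified stand-ins for what gc.c does with its gc.pid lockfile and daemonize()):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int take_lock(const char *path)
    {
        int fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0666);
        if (fd < 0)
            return -1;                       /* somebody else holds the lock */
        dprintf(fd, "%d\n", (int)getpid());  /* record the pid that owns it */
        close(fd);
        return 0;
    }

    int main(void)
    {
        const char *lock = "gc.pid.example";
        pid_t pid;

        if (take_lock(lock))
            return 0;                        /* another instance is already running */
        puts("pre-detach work (pack-refs, reflog expire) under lock");
        unlink(lock);                        /* drop the lock before detaching */

        pid = fork();                        /* "daemonize": the child continues under a new pid */
        if (pid < 0)
            return 1;
        if (pid > 0)
            return 0;                        /* parent gives the terminal back to the user */

        if (take_lock(lock))
            return 0;                        /* lost the race; the winner will finish the job */
        puts("post-detach work (repack, prune) under a fresh lock with a valid pid");
        unlink(lock);
        return 0;
    }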
2017-07-10  use DIV_ROUND_UP  (René Scharfe, 1 file, -1/+1)
Convert code that divides and rounds up to use DIV_ROUND_UP to make the intent clearer and reduce the number of magic constants. Signed-off-by: Rene Scharfe <l.s.r@web.de> Reviewed-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
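For reference, the idiom being standardized on looks like this (an illustrative definition; git keeps an equivalent macro in its compatibility headers, and it assumes positive integer arguments with no overflow):

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        /* e.g. how many 8-bit bytes are needed to hold 20 bits */
        printf("%d\n", DIV_ROUND_UP(20, 8));   /* prints 3, whereas 20 / 8 truncates to 2 */
        return 0;
    }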
2017-06-24  Merge branch 'bw/config-h'  (Junio C Hamano, 1 file, -0/+1)
Fix the configuration codepath to pay proper attention to the commondir that is used in a multi-worktree situation, and isolate the config API into its own header file.

* bw/config-h:
  config: don't implicitly use gitdir or commondir
  config: respect commondir
  setup: teach discover_git_directory to respect the commondir
  config: don't include config.h by default
  config: remove git_config_iter
  config: create config.h
2017-06-15  config: don't include config.h by default  (Brandon Williams, 1 file, -0/+1)
Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system. Signed-off-by: Brandon Williams <bmwill@google.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-16  Merge branch 'js/larger-timestamps'  (Junio C Hamano, 1 file, -1/+1)
Some platforms have a ulong that is smaller than time_t, and our historical use of ulong for timestamps meant they could not represent some timestamps that the platform allows. Invent a separate and dedicated timestamp_t (so that we can distinguish timestamps from vanilla ulongs, which alone is already a good move), and then declare that uintmax_t is the type to be used as the timestamp_t.

* js/larger-timestamps:
  archive-tar: fix a sparse 'constant too large' warning
  use uintmax_t for timestamps
  date.c: abort if the system time cannot handle one of our timestamps
  timestamp_t: a new data type for timestamps
  PRItime: introduce a new "printf format" for timestamps
  parse_timestamp(): specify explicitly where we parse timestamps
  t0006 & t5000: skip "far in the future" test when time_t is too limited
  t0006 & t5000: prepare for 64-bit timestamps
  ref-filter: avoid using `unsigned long` for catch-all data type
2017-04-27  timestamp_t: a new data type for timestamps  (Johannes Schindelin, 1 file, -1/+1)
Git's source code assumes that unsigned long is at least as precise as time_t. Which is incorrect, and causes a lot of problems, in particular where unsigned long is only 32-bit (notably on Windows, even in 64-bit versions). So let's just use a more appropriate data type instead. In preparation for this, we introduce the new `timestamp_t` data type. By necessity, this is a very, very large patch, as it has to replace all timestamps' data type in one go. As we will use a data type that is not necessarily identical to `time_t`, we need to be very careful to use `time_t` whenever we interact with the system functions, and `timestamp_t` everywhere else. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
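A rough standalone sketch of the approach (not git's actual headers; the PRItime mapping shown is an assumption based on the branch summary above): keep a wide dedicated type everywhere internally, and convert to time_t only at the boundary where system functions are called, checking that the value survives the conversion.

    #include <inttypes.h>
    #include <stdio.h>
    #include <time.h>

    typedef uintmax_t timestamp_t;     /* the series declares timestamp_t as uintmax_t */
    #define PRItime PRIuMAX            /* assumed mapping for the PRItime format named above */

    int main(void)
    {
        timestamp_t ts = 2147483648;   /* just past the 32-bit signed range */
        time_t t = (time_t)ts;         /* boundary: system functions want time_t */

        printf("timestamp: %" PRItime "\n", ts);
        if ((timestamp_t)t != ts) {    /* detect truncation on platforms with a narrow time_t */
            fprintf(stderr, "timestamp does not fit this platform's time_t\n");
            return 1;
        }
        printf("%s", asctime(gmtime(&t)));
        return 0;
    }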
2017-04-23  Merge branch 'dt/xgethostname-nul-termination'  (Junio C Hamano, 1 file, -4/+8)
gethostname(2) may not NUL-terminate the buffer if the hostname does not fit; unfortunately there is no easy way to see if our buffer was too small, but at least this will make sure we will not end up using garbage past the end of the buffer.

* dt/xgethostname-nul-termination:
  xgethostname: handle long hostnames
  use HOST_NAME_MAX to size buffers for gethostname(2)
2017-04-18  xgethostname: handle long hostnames  (David Turner, 1 file, -1/+1)
If the full hostname doesn't fit in the buffer supplied to gethostname, POSIX does not specify whether the buffer will be null-terminated, so to be safe, we should do it ourselves. Introduce new function, xgethostname, which ensures that there is always a \0 at the end of the buffer. Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-18  use HOST_NAME_MAX to size buffers for gethostname(2)  (René Scharfe, 1 file, -3/+7)
POSIX limits the length of host names to HOST_NAME_MAX. Export the fallback definition from daemon.c and use this constant to make all buffers used with gethostname(2) big enough for any possible result and a terminating NUL. Inspired-by: David Turner <dturner@twosigma.com> Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: David Turner <dturner@twosigma.com> Reviewed-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
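Taken together, the two changes above amount to a pattern like this (an illustrative standalone sketch, not the exact git code; the 256 fallback value mirrors the one the message says daemon.c exports):

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef HOST_NAME_MAX
    #define HOST_NAME_MAX 256          /* fallback for platforms that do not define it */
    #endif

    static int xgethostname(char *buf, size_t len)
    {
        /*
         * POSIX does not promise NUL-termination when the hostname does
         * not fit, so terminate the buffer ourselves on success.
         */
        int ret = gethostname(buf, len);
        if (!ret)
            buf[len - 1] = '\0';
        return ret;
    }

    int main(void)
    {
        char hostname[HOST_NAME_MAX + 1];

        if (!xgethostname(hostname, sizeof(hostname)))
            printf("%s\n", hostname);
        return 0;
    }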
2017-03-30  gc: replace local buffer with git_path  (Jeff King, 1 file, -7/+1)
We probe the "17/" loose object directory for auto-gc, and use a local buffer to format the path. We can just use git_path() for this. It handles paths of any length (reducing our error handling). And because we feed the result straight to a system call, we can just use the static variant. Note that git_path also knows the string "objects/" is special, and will replace it with git_object_directory() when necessary. Another alternative would be to use sha1_file_name() for the pretend object "170000...", but that ends up being more hassle for no gain, as we have to truncate the final path component. Signed-off-by: Jeff King <peff@peff.net>
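The sampling heuristic itself, sketched as a standalone program (illustrative only; the real check lives in builtin/gc.c and goes through git's own path helpers such as git_path()): count the entries in a single fan-out bucket and scale by 256 to estimate the number of loose objects.

    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        const char *probe = ".git/objects/17";    /* one fan-out bucket as a sample */
        DIR *dir = opendir(probe);
        struct dirent *ent;
        int count = 0;

        if (!dir)
            return 0;                             /* no such directory: too few loose objects to matter */
        while ((ent = readdir(dir)))
            if (ent->d_name[0] != '.')
                count++;
        closedir(dir);

        printf("~%d loose objects (sampled from %s)\n", count * 256, probe);
        return 0;
    }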
2017-03-17  Merge branch 'cc/split-index-config'  (Junio C Hamano, 1 file, -14/+3)
The experimental "split index" feature has gained a few configuration variables to make it easier to use.

* cc/split-index-config: (22 commits)
  Documentation/git-update-index: explain splitIndex.*
  Documentation/config: add splitIndex.sharedIndexExpire
  read-cache: use freshen_shared_index() in read_index_from()
  read-cache: refactor read_index_from()
  t1700: test shared index file expiration
  read-cache: unlink old sharedindex files
  config: add git_config_get_expiry() from gc.c
  read-cache: touch shared index files when used
  sha1_file: make check_and_freshen_file() non static
  Documentation/config: add splitIndex.maxPercentChange
  t1700: add tests for splitIndex.maxPercentChange
  read-cache: regenerate shared index if necessary
  config: add git_config_get_max_percent_split_change()
  Documentation/git-update-index: talk about core.splitIndex config var
  Documentation/config: add information for core.splitIndex
  t1700: add tests for core.splitIndex
  update-index: warn in case of split-index incoherency
  read-cache: add and then use tweak_split_index()
  split-index: add {add,remove}_split_index() functions
  config: add git_config_get_split_index()
  ...
2017-03-01  config: add git_config_get_expiry() from gc.c  (Christian Couder, 1 file, -13/+2)
This function will be used in a following commit to get the expiration time of the shared index files from the config, and it is generic enough to be put in "config.c". Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-13  gc: ignore old gc.log files  (David Turner, 1 file, -7/+50)
A server can end up in a state where there are lots of unreferenced loose objects (say, because many users are doing a bunch of rebasing and pushing their rebased branches). Running "git gc --auto" in this state would cause a gc.log file to be created, preventing future auto gcs, causing pack files to pile up. Since many git operations are O(n) in the number of pack files, this would lead to poor performance.

Git should never get itself into a state where it refuses to do any maintenance, just because at some point some piece of the maintenance didn't make progress.

Teach Git to ignore gc.log files which are older than (by default) one day old, which can be tweaked via the gc.logExpiry configuration variable. That way, these pack files will get cleaned up, if necessary, at least once per day. And operators who find a need for more-frequent gcs can adjust gc.logExpiry to meet their needs.

There is also some cleanup: a successful manual gc, or a warning-free auto gc with an old log file, will remove any old gc.log files.

It might still happen that manual intervention is required (e.g. because the repo is corrupt), but at the very least it won't be because Git is too dumb to try again.

Signed-off-by: David Turner <dturner@twosigma.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
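The knob mentioned above can be set like any other configuration variable; for example, to let a stale gc.log block auto-gc for at most two days (an illustrative value; per the message, the default is one day):

    $ git config gc.logExpiry 2.days.ago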
2016-12-29  auto gc: don't write bitmaps for incremental repacks  (David Turner, 1 file, -1/+8)
When git gc --auto does an incremental repack of loose objects, we do not expect to be able to write a bitmap; it is very likely that objects in the new pack will have references to objects outside of the pack. So we shouldn't try to write a bitmap, because doing so will likely issue a warning. This warning was making its way into gc.log. When the gc.log was present, future auto gc runs would refuse to run. Patch by Jeff King. Bug report, test, and commit message by David Turner. Signed-off-by: David Turner <dturner@twosigma.com> Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2016-09-29  Merge branch 'jk/reduce-gc-aggressive-depth' into maint  (Junio C Hamano, 1 file, -1/+1)
"git gc --aggressive" used to limit the delta-chain length to 250, which is way too deep for gaining additional space savings and is detrimental for runtime performance. The limit has been reduced to 50. * jk/reduce-gc-aggressive-depth: gc: default aggressive depth to 50
2016-09-21  Merge branch 'jk/reduce-gc-aggressive-depth'  (Junio C Hamano, 1 file, -1/+1)
"git gc --aggressive" used to limit the delta-chain length to 250, which is way too deep for gaining additional space savings and is detrimental for runtime performance. The limit has been reduced to 50. * jk/reduce-gc-aggressive-depth: gc: default aggressive depth to 50
2016-08-11  gc: default aggressive depth to 50  (Jeff King, 1 file, -1/+1)
This commit message is long and has lots of background and numbers. The summary is: the current default of 250 doesn't save much space, and costs CPU. It's not a good tradeoff. Read on for details.

The "--aggressive" flag to git-gc does three things:

  1. use "-f" to throw out existing deltas and recompute from scratch
  2. use "--window=250" to look harder for deltas
  3. use "--depth=250" to make longer delta chains

Items (1) and (2) are good matches for an "aggressive" repack. They ask the repack to do more computation work in the hopes of getting a better pack. You pay the costs during the repack, and other operations see only the benefit.

Item (3) is not so clear. Allowing longer chains means fewer restrictions on the deltas, which means potentially finding better ones and saving some space. But it also means that operations which access the deltas have to follow longer chains, which affects their performance. So it's a tradeoff, and it's not clear that the tradeoff is even a good one.

The existing "250" numbers for "--aggressive" come originally from this thread:

  http://public-inbox.org/git/alpine.LFD.0.9999.0712060803430.13796@woody.linux-foundation.org/

where Linus says:

  So when I said "--depth=250 --window=250", I chose those numbers more as an example of extremely aggressive packing, and I'm not at all sure that the end result is necessarily wonderfully usable. It's going to save disk space (and network bandwidth - the delta's will be re-used for the network protocol too!), but there are definitely downsides too, and using long delta chains may simply not be worth it in practice.

There are some numbers in that thread, but they're mostly focused on the improved window size, and measure the improvement from --depth=250 and --window=250 together. E.g.:

  http://public-inbox.org/git/9e4733910712062006l651571f3w7f76ce64c6650dff@mail.gmail.com/

talks about the improved run-time of "git-blame", which comes from the reduced pack size. But most of that reduction is coming from --window=250, whereas most of the extra costs come from --depth=250. There's a link in that thread showing that increasing the depth beyond 50 doesn't seem to help much with the size:

  https://vcscompare.blogspot.com/2008/06/git-repack-parameters.html

but again, no discussion of the timing impact.

In an earlier thread from Ted Ts'o which discussed setting the non-aggressive default (from 10 to 50):

  http://public-inbox.org/git/20070509134958.GA21489%40thunk.org/

we have more numbers, with the conclusion that going past 50 does not help size much, and hurts the speed of normal operations.

So from that, we might guess that 50 is actually a sweet spot, even for aggressive, if we interpret aggressive to "spend time now to make a better pack". It is not clear that "--depth=250" is actually a better pack. It may be slightly _smaller_, but it carries a run-time penalty. Here are some more recent timings I did to verify that. They show three things:

  - the size of the resulting pack (so disk saved to store, bandwidth saved on clones/fetches)
  - the cost of "rev-list --objects --all", which shows the effect of the delta chains on trees (commits typically don't delta, and the command doesn't touch the blobs at all)
  - the cost of "log -Sfoo", which will additionally access each blob

All cases were repacked with "git repack -adf --depth=$d --window=250" (so basically, what would happen if we tweaked the "gc --aggressive" default depth). The timings are all wall-clock best-of-3.

The machine itself has plenty of RAM compared to the repositories (which is probably typical of most workstations these days), so we're really measuring CPU usage, as the whole thing will be in disk cache after the first run.

The core.deltaBaseCacheLimit is at its default of 96MiB. It's possible that tweaking it would have some impact on the tests, as some of them (especially "log -S" on a large repo) are likely to overflow that. But bumping that carries a run-time memory cost, so for these tests, I focused on what we could do just with the on-disk pack tradeoffs.

Each test is done for four depths: 250 (the current value), 50 (the current default that tested well previously), 100 (to show something on the larger side, which previous tests showed was not a good tradeoff), and 10 (the very old default, which previous tests showed was worse than 50).

Here are the numbers for linux.git:

   depth |  size |     % | rev-list |      % | log -Sfoo |      %
  -------+-------+-------+----------+--------+-----------+-------
     250 | 967MB |   n/a |  48.159s |    n/a |  378.088s |    n/a
     100 | 971MB | +0.4% |  41.471s | -13.9% |  342.060s |  -9.5%
      50 | 979MB | +1.2% |  37.778s | -21.6% |  311.040s | -17.7%
      10 | 1.1GB | +6.6% |  32.518s | -32.5% |  279.890s | -25.9%

and for git.git:

   depth | size |     % | rev-list |     % | log -Sfoo |      %
  -------+------+-------+----------+-------+-----------+-------
     250 | 48MB |   n/a |   2.215s |   n/a |   20.922s |    n/a
     100 | 49MB | +0.5% |   2.140s | -3.4% |   17.736s | -15.2%
      50 | 49MB | +1.7% |   2.099s | -5.2% |   15.418s | -26.3%
      10 | 53MB | +9.3% |   2.001s | -9.7% |   12.677s | -39.4%

You can see that the CPU savings for regular operations improve as we decrease the depth. The savings are less for "rev-list" on a smaller repository than they are for blob-accessing operations, or even rev-list on a larger repository. This may mean that a larger delta cache would help (though setting core.deltaBaseCacheLimit by itself doesn't).

But we can also see that the space savings are not that great as the depth goes higher. Saving 5-10% between 10 and 50 is probably worth the CPU tradeoff. Saving 1% to go from 50 to 100, or another 0.5% to go from 100 to 250 is probably not.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
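To reproduce this kind of measurement on a repository of your own, the procedure described above boils down to commands like the following (vary --depth as the parameter under test):

    $ git repack -adf --depth=50 --window=250
    $ time git rev-list --objects --all >/dev/null
    $ time git log -Sfoo >/dev/null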
2016-07-28  Merge branch 'ew/gc-auto-pack-limit-fix' into maint  (Junio C Hamano, 1 file, -1/+1)
"gc.autoPackLimit" when set to 1 should not trigger a repacking when there is only one pack, but the code counted poorly and did so. * ew/gc-auto-pack-limit-fix: gc: fix off-by-one error with gc.autoPackLimit
2016-07-13  Merge branch 'ew/gc-auto-pack-limit-fix'  (Junio C Hamano, 1 file, -1/+1)
"gc.autoPackLimit" when set to 1 should not trigger a repacking when there is only one pack, but the code counted poorly and did so. * ew/gc-auto-pack-limit-fix: gc: fix off-by-one error with gc.autoPackLimit
2016-06-27  gc: fix off-by-one error with gc.autoPackLimit  (Eric Wong, 1 file, -1/+1)
This matches the documentation and allows gc.autoPackLimit=1 to maintain a single pack without attempting a repack on every "git gc --auto" invocation. Signed-off-by: Eric Wong <e@80x24.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
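A generic illustration of this class of bug, not the actual diff: gc.autoPackLimit is documented as "more than this many packs", so the comparison has to be strictly greater-than; with greater-or-equal, a repository that already has exactly one pack and gc.autoPackLimit=1 would repack on every "git gc --auto".

    #include <stdio.h>

    /* illustrative only: "more than this many packs" means strictly greater-than */
    static int too_many_packs(int existing_packs, int auto_pack_limit)
    {
        return existing_packs > auto_pack_limit;  /* ">=" would repack on every auto-gc at limit 1 */
    }

    int main(void)
    {
        printf("%d\n", too_many_packs(1, 1));  /* 0: a single pack at gc.autoPackLimit=1 is fine */
        printf("%d\n", too_many_packs(2, 1));  /* 1: now we are over the limit */
        return 0;
    }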
2015-11-20  Merge branch 'dk/gc-idx-wo-pack'  (Jeff King, 1 file, -0/+21)
Having a leftover .idx file without corresponding .pack file in the repository hurts performance; "git gc" learned to prune them.

* dk/gc-idx-wo-pack:
  gc: remove garbage .idx files from pack dir
  t5304: test cleaning pack garbage
  prepare_packed_git(): refactor garbage reporting in pack directory
2015-11-04  gc: remove garbage .idx files from pack dir  (Doug Kelly, 1 file, -0/+21)
Add a custom report_garbage handler to collect and remove garbage .idx files from the pack directory. Signed-off-by: Doug Kelly <dougk.ff7@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
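A standalone sketch of the underlying check (illustrative; the real change hooks into the pack machinery's garbage reporting rather than scanning the directory like this): an ".idx" whose matching ".pack" is gone is garbage.

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *packdir = ".git/objects/pack";
        char path[4096];
        DIR *dir = opendir(packdir);
        struct dirent *ent;

        if (!dir)
            return 1;
        while ((ent = readdir(dir))) {
            size_t len = strlen(ent->d_name);
            if (len < 4 || strcmp(ent->d_name + len - 4, ".idx"))
                continue;
            /* rewrite "foo.idx" into "foo.pack" and see whether it exists */
            snprintf(path, sizeof(path), "%s/%.*s.pack",
                     packdir, (int)(len - 4), ent->d_name);
            if (access(path, F_OK))               /* no .pack: the .idx is garbage */
                printf("would remove %s/%s\n", packdir, ent->d_name);
        }
        closedir(dir);
        return 0;
    }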
2015-10-30  Merge branch 'js/misc-fixes'  (Junio C Hamano, 1 file, -1/+1)
Various compilation fixes and squelching of warnings.

* js/misc-fixes:
  Correct fscanf formatting string for I64u values
  Silence GCC's "cast of pointer to integer of a different size" warning
  Squelch warning about an integer overflow
2015-10-26  Merge branch 'jk/repository-extension'  (Junio C Hamano, 1 file, -9/+11)
Prepare for Git on-disk repository representation to undergo backward incompatible changes by introducing a new repository format version "1", with an extension mechanism.

* jk/repository-extension:
  introduce "preciousObjects" repository extension
  introduce "extensions" form of core.repositoryformatversion
2015-10-26  Correct fscanf formatting string for I64u values  (Waldek Maleska, 1 file, -1/+1)
This fix is probably purely cosmetic because PRIuMAX is likely identical to SCNuMAX. Nevertheless, when using a function of the scanf() family, the correct interpolation to use is the latter, not the former. Signed-off-by: Waldek Maleska <w.maleska@gmail.com> Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
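A minimal standalone example of the distinction (on most platforms the two macros expand to the same string, which is why the old code happened to work): PRIuMAX belongs to the printf() family, SCNuMAX is the matching conversion for the scanf() family.

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uintmax_t pid = 0;

        sscanf("12345 myhost", "%" SCNuMAX, &pid);   /* scanning: SCNuMAX */
        printf("pid: %" PRIuMAX "\n", pid);          /* printing: PRIuMAX */
        return 0;
    }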
2015-10-20  Merge branch 'jk/war-on-sprintf'  (Junio C Hamano, 1 file, -1/+1)
Many allocations that are manually counted (correctly) and then followed by strcpy/sprintf have been replaced with less error-prone constructs such as xstrfmt. Macintosh-specific breakage was noticed and corrected in this reroll.

* jk/war-on-sprintf: (70 commits)
  name-rev: use strip_suffix to avoid magic numbers
  use strbuf_complete to conditionally append slash
  fsck: use for_each_loose_file_in_objdir
  Makefile: drop D_INO_IN_DIRENT build knob
  fsck: drop inode-sorting code
  convert strncpy to memcpy
  notes: document length of fanout path with a constant
  color: add color_set helper for copying raw colors
  prefer memcpy to strcpy
  help: clean up kfmclient munging
  receive-pack: simplify keep_arg computation
  avoid sprintf and strcpy with flex arrays
  use alloc_ref rather than hand-allocating "struct ref"
  color: add overflow checks for parsing colors
  drop strcpy in favor of raw sha1_to_hex
  use sha1_to_hex_r() instead of strcpy
  daemon: use cld->env_array when re-spawning
  stat_tracking_info: convert to argv_array
  http-push: use an argv_array for setup_revisions
  fetch-pack: use argv_array for index-pack / unpack-objects
  ...
2015-10-15  Merge branch 'nd/gc-auto-background-fix'  (Junio C Hamano, 1 file, -1/+55)
When "git gc --auto" is backgrounded, its diagnosis message is lost. Save it to a file in $GIT_DIR and show it next time the "gc --auto" is run. * nd/gc-auto-background-fix: gc: save log from daemonized gc --auto and print it next time
2015-09-25  convert trivial sprintf / strcpy calls to xsnprintf  (Jeff King, 1 file, -1/+1)
We sometimes sprintf into fixed-size buffers when we know that the buffer is large enough to fit the input (either because it's a constant, or because it's numeric input that is bounded in size). Likewise with strcpy of constant strings. However, these sites make it hard to audit sprintf and strcpy calls for buffer overflows, as a reader has to cross-reference the size of the array with the input. Let's use xsnprintf instead, which communicates to a reader that we don't expect this to overflow (and catches the mistake in case we do). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
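The idea behind xsnprintf, as an illustrative reimplementation (git's own helper lives in its wrapper code and dies through its usual error routine): behave like snprintf, but treat truncation as a bug instead of silently producing a short string.

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int xsnprintf(char *dst, size_t max, const char *fmt, ...)
    {
        va_list ap;
        int len;

        va_start(ap, fmt);
        len = vsnprintf(dst, max, fmt, ap);
        va_end(ap);

        if (len < 0 || (size_t)len >= max) {
            fprintf(stderr, "fatal: attempt to snprintf into too-small buffer\n");
            exit(1);
        }
        return len;
    }

    int main(void)
    {
        char buf[32];

        xsnprintf(buf, sizeof(buf), "%d loose objects", 256);   /* fits, so this is fine */
        puts(buf);
        return 0;
    }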
2015-09-21  gc: save log from daemonized gc --auto and print it next time  (Nguyễn Thái Ngọc Duy, 1 file, -1/+55)
While commit 9f673f9 (gc: config option for running --auto in background - 2014-02-08) helps reduce some complaints about 'gc --auto' hogging the terminal, it creates another set of problems. The latest in this set is that, as a result of daemonizing, stderr is closed and all warnings are lost. The warning at the end of cmd_gc() is particularly important because it tells the user how to avoid "gc --auto" running repeatedly. Because stderr is closed, the user does not know; naturally, they complain about 'gc --auto' wasting CPU.

Daemonized gc now saves stderr to $GIT_DIR/gc.log. Subsequent 'gc --auto' invocations will not run, and will print out gc.log instead, until the user removes gc.log.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-25  Merge branch 'mh/tempfile'  (Junio C Hamano, 1 file, -22/+10)
The "lockfile" API has been rebuilt on top of a new "tempfile" API.

* mh/tempfile:
  credential-cache--daemon: use tempfile module
  credential-cache--daemon: delete socket from main()
  gc: use tempfile module to handle gc.pid file
  lock_repo_for_gc(): compute the path to "gc.pid" only once
  diff: use tempfile module
  setup_temporary_shallow(): use tempfile module
  write_shared_index(): use tempfile module
  register_tempfile(): new function to handle an existing temporary file
  tempfile: add several functions for creating temporary files
  prepare_tempfile_object(): new function, extracted from create_tempfile()
  tempfile: a new module for handling temporary files
  commit_lock_file(): use get_locked_file_path()
  lockfile: add accessor get_lock_file_path()
  lockfile: add accessors get_lock_file_fd() and get_lock_file_fp()
  create_bundle(): duplicate file descriptor to avoid closing it twice
  lockfile: move documentation to lockfile.h and lockfile.c
2015-08-12  gc: use tempfile module to handle gc.pid file  (Michael Haggerty, 1 file, -20/+5)
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-12  lock_repo_for_gc(): compute the path to "gc.pid" only once  (Michael Haggerty, 1 file, -3/+6)
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-12  Merge branch 'es/worktree-add'  (Junio C Hamano, 1 file, -1/+1)
Remove remaining cruft from "git checkout --to", which transitioned to "git worktree add".

* es/worktree-add:
  config: rename "gc.pruneWorktreesExpire" to "gc.worktreePruneExpire"
  Documentation/git-worktree: wordsmith worktree-related manpages
  Documentation/config: fix stale "git prune --worktree" reference
  Documentation/git-worktree: fix incorrect reference to file "locked"
  Documentation/git-worktree: consistently use term "linked working tree"
2015-07-20  config: rename "gc.pruneWorktreesExpire" to "gc.worktreePruneExpire"  (Eric Sunshine, 1 file, -1/+1)
As of df0b6cf (worktree: new place for "git prune --worktrees", 2015-06-29), linked worktree pruning functionality moved from "git prune --worktrees" to "git worktree prune". Rename the associated configuration variable accordingly. Signed-off-by: Eric Sunshine <sunshine@sunshineco.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-07-13  Merge branch 'nd/multiple-work-trees'  (Junio C Hamano, 1 file, -1/+1)
"git checkout [<tree-ish>] <paths>" spent unnecessary cycles checking if the current branch was checked out elsewhere, when we know we are not switching the branches ourselves.

* nd/multiple-work-trees:
  worktree: new place for "git prune --worktrees"
  checkout: don't check worktrees when not necessary
2015-06-29  worktree: new place for "git prune --worktrees"  (Nguyễn Thái Ngọc Duy, 1 file, -1/+1)
Commit 23af91d (prune: strategies for linked checkouts - 2014-11-30) adds "--worktrees" to "git prune" without realizing that "git prune" is for object database only. This patch moves the same functionality to a new command "git worktree". Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
2015-06-24  introduce "preciousObjects" repository extension  (Jeff King, 1 file, -9/+11)
If this extension is used in a repository, then no operations should run which may drop objects from the object storage. This can be useful if you are sharing that storage with other repositories whose refs you cannot see. For instance, if you do:

  $ git clone -s parent child
  $ git -C parent config extensions.preciousObjects true
  $ git -C parent config core.repositoryformatversion 1

you now have additional safety when running git in the parent repository. Prunes and repacks will bail with an error, and `git gc` will skip those operations (it will continue to pack refs and do other non-object operations). Older versions of git, when run in the repository, will fail on every operation.

Note that we do not set the preciousObjects extension by default when doing a "clone -s", as doing so breaks backwards compatibility. It is a decision the user should make explicitly.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-05-11  Merge branch 'nd/multiple-work-trees'  (Junio C Hamano, 1 file, -11/+23)
A replacement for contrib/workdir/git-new-workdir that does not rely on symbolic links and makes sharing of objects and refs safer by making the borrowee and borrowers aware of each other.

* nd/multiple-work-trees: (41 commits)
  prune --worktrees: fix expire vs worktree existence condition
  t1501: fix test with split index
  t2026: fix broken &&-chain
  t2026 needs procondition SANITY
  git-checkout.txt: a note about multiple checkout support for submodules
  checkout: add --ignore-other-wortrees
  checkout: pass whole struct to parse_branchname_arg instead of individual flags
  git-common-dir: make "modules/" per-working-directory directory
  checkout: do not fail if target is an empty directory
  t2025: add a test to make sure grafts is working from a linked checkout
  checkout: don't require a work tree when checking out into a new one
  git_path(): keep "info/sparse-checkout" per work-tree
  count-objects: report unused files in $GIT_DIR/worktrees/...
  gc: support prune --worktrees
  gc: factor out gc.pruneexpire parsing code
  gc: style change -- no SP before closing parenthesis
  checkout: clean up half-prepared directories in --to mode
  checkout: reject if the branch is already checked out elsewhere
  prune: strategies for linked checkouts
  checkout: support checking out into a new working directory
  ...
2015-01-14  standardize usage info string format  (Alex Henrie, 1 file, -1/+1)
This patch puts the usage info strings that were not already in docopt-like format into docopt-like format, which will be a little easier for end users and a lot easier for translators. Changes include:

  - Placing angle brackets around fill-in-the-blank parameters
  - Putting dashes in multiword parameter names
  - Adding spaces to [-f|--foobar] to make [-f | --foobar]
  - Replacing <foobar>* with [<foobar>...]

Signed-off-by: Alex Henrie <alexhenrie24@gmail.com>
Reviewed-by: Matthieu Moy <Matthieu.Moy@imag.fr>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-01  gc: support prune --worktrees  (Nguyễn Thái Ngọc Duy, 1 file, -0/+10)
Helped-by: Marc Branchaud <marcnarc@xiplink.com> Signed-off-by: Marc Branchaud <marcnarc@xiplink.com> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-01  gc: factor out gc.pruneexpire parsing code  (Nguyễn Thái Ngọc Duy, 1 file, -10/+12)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-12-01  gc: style change -- no SP before closing parenthesis  (Nguyễn Thái Ngọc Duy, 1 file, -1/+1)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-10-01  lockfile.h: extract new header file for the functions in lockfile.c  (Michael Haggerty, 1 file, -1/+1)
Move the interface declaration for the functions in lockfile.c from cache.h to a new file, lockfile.h. Add #includes where necessary (and remove some redundant includes of cache.h by files that already include builtin.h). Move the documentation of the lock_file state diagram from lockfile.c to the new header file. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-08-07  builtin/gc.c: replace `git_config()` with `git_config_get_*()` family  (Tanay Abhra, 1 file, -31/+20)
Use `git_config_get_*()` family instead of `git_config()` to take advantage of the config-set API which provides a cleaner control flow. Signed-off-by: Tanay Abhra <tanayabh@gmail.com> Reviewed-by: Matthieu Moy <Matthieu.Moy@imag.fr> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2014-06-16  Merge branch 'nd/daemonize-gc'  (Junio C Hamano, 1 file, -6/+20)
"git gc --auto" was recently changed to run in the background to give control back early to the end-user sitting in front of the terminal, but it forgot that housekeeping involving reflogs should be done without other processes competing for accesses to the refs. * nd/daemonize-gc: gc --auto: do not lock refs in the background
2014-05-27  gc --auto: do not lock refs in the background  (Nguyễn Thái Ngọc Duy, 1 file, -6/+20)
9f673f9 (gc: config option for running --auto in background - 2014-02-08) puts "gc --auto" in background to reduce user's wait time. Part of the garbage collecting is pack-refs and pruning reflogs. These require locking some refs and may abort other processes trying to lock the same ref. If gc --auto is fired in the middle of a script, gc's holding locks in the background could fail the script, which could never happen before 9f673f9. Keep running pack-refs and "reflog --prune" in foreground to stop parallel ref updates. The remaining background operations (repack, prune and rerere) should not impact running git processes. Reported-by: Adam Borowski <kilobyte@angband.pl> Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>