path: root/builtin/gc.c
2020-07-30  strvec: rename struct fields  (Jeff King)  1 file changed, -12/+12
The "argc" and "argv" names made sense when the struct was argv_array, but now they're just confusing. Let's rename them to "nr" (which we use for counts elsewhere) and "v" (which is rather terse, but reads well when combined with typical variable names like "args.v"). Note that we have to update all of the callers immediately. Playing tricks with the preprocessor is hard here, because we wouldn't want to rewrite unrelated tokens. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28  strvec: fix indentation in renamed calls  (Jeff King)  1 file changed, -1/+1
Code which split an argv_array call across multiple lines, like:

    argv_array_pushl(&args, "one argument",
                     "another argument", "and more",
                     NULL);

was recently mechanically renamed to use strvec, which results in mis-matched indentation like:

    strvec_pushl(&args, "one argument",
                     "another argument", "and more",
                     NULL);

Let's fix these up to align the arguments with the opening paren. I did this manually by sifting through the results of:

    git jump grep 'strvec_.*,$'

and liberally applying my editor's auto-format. Most of the changes are of the form shown above, though I also normalized a few that had originally used a single-tab indentation (rather than our usual style of aligning with the open paren). I also rewrapped a couple of obvious cases (e.g., where previously too-long lines became short enough to fit on one), but I wasn't aggressive about it. In cases broken to three or more lines, the grouping of arguments is sometimes meaningful, and it wasn't worth my time or reviewer time to ponder each case individually.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28  strvec: convert builtin/ callers away from argv_array name  (Jeff King)  1 file changed, -25/+25
We eventually want to drop the argv_array name and just use strvec consistently. There's no particular reason we have to do it all at once, or care about interactions between converted and unconverted bits. Because of our preprocessor compat layer, the names are interchangeable to the compiler (so even a definition and declaration using different names is OK).

This patch converts all of the files in builtin/ to keep the diff to a manageable size. The conversion was done purely mechanically with:

    git ls-files '*.c' '*.h' |
      xargs perl -i -pe '
        s/ARGV_ARRAY/STRVEC/g;
        s/argv_array/strvec/g;
      '

and then selectively staging files with "git add builtin/". We'll deal with any indentation/style fallouts separately.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-07-28  strvec: rename files from argv-array to strvec  (Jeff King)  1 file changed, -1/+1
This requires updating #include lines across the code-base, but that's all fairly mechanical, and was done with:

    git ls-files '*.c' '*.h' |
      xargs perl -i -pe 's/argv-array.h/strvec.h/'

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-04  commit-graph.h: store an odb in 'struct write_commit_graph_context'  (Taylor Blau)  1 file changed, -1/+1
There are lots of places in 'commit-graph.h' where a function either has (or almost has) a full 'struct object_directory *', accesses '->path', and then throws away the rest of the struct. This can cause headaches when comparing the locations of object directories across alternates (e.g., in the case of deciding if two commit-graph layers can be merged). These paths are normalized with 'normalize_path_copy()' which mitigates some comparison issues, but not all [1]. Replace usage of 'char *object_dir' with 'odb->path' by storing a 'struct object_directory *' in the 'write_commit_graph_context' structure. This is an intermediate step towards getting rid of all path normalization in 'commit-graph.c'. Resolving a user-provided '--object-dir' argument now requires that we compare it to the known alternates for equality. Prior to this patch, an unknown '--object-dir' argument would silently exit with status zero. This can clearly lead to unintended behavior, such as verifying commit-graphs that aren't in a repository's own object store (or one of its alternates), or causing a typo to mask a legitimate commit-graph verification failure. Make this error non-silent by 'die()'-ing when the given '--object-dir' does not match any known alternate object store. [1]: In my testing, for example, I can get one side of the commit-graph code to fill object_dir with "./objects" and the other with just "objects". Signed-off-by: Taylor Blau <me@ttaylorr.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-11-10  Fix spelling errors in code comments  (Elijah Newren)  1 file changed, -2/+2
Reported-by: Jens Schleusener <Jens.Schleusener@fossies.org> Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-09-18  Merge branch 'cc/multi-promisor'  (Junio C Hamano)  1 file changed, -1/+2
Teach the lazy clone machinery that there can be more than one promisor remote and consult them in order when downloading missing objects on demand.

* cc/multi-promisor:
  Move core_partial_clone_filter_default to promisor-remote.c
  Move repository_format_partial_clone to promisor-remote.c
  Remove fetch-object.{c,h} in favor of promisor-remote.{c,h}
  remote: add promisor and partial clone config to the doc
  partial-clone: add multiple remotes in the doc
  t0410: test fetching from many promisor remotes
  builtin/fetch: remove unique promisor remote limitation
  promisor-remote: parse remote.*.partialclonefilter
  Use promisor_remote_get_direct() and has_promisor_remote()
  promisor-remote: use repository_format_partial_clone
  promisor-remote: add promisor_remote_reinit()
  promisor-remote: implement promisor_remote_get_direct()
  Add initial support for many promisor remotes
  fetch-object: make functions return an error code
  t0410: remove pipes after git commands
2019-09-09  Merge branch 'ds/feature-macros'  (Junio C Hamano)  1 file changed, -7/+5
A mechanism to affect the default setting for a (related) group of configuration variables is introduced.

* ds/feature-macros:
  repo-settings: create feature.experimental setting
  repo-settings: create feature.manyFiles setting
  repo-settings: parse core.untrackedCache
  commit-graph: turn on commit-graph by default
  t6501: use 'git gc' in quiet mode
  repo-settings: consolidate some config settings
2019-08-13  repo-settings: consolidate some config settings  (Derrick Stolee)  1 file changed, -7/+5
There are a few important config settings that are not loaded during git_default_config. These are instead loaded on-demand. Centralize these config options to a single scan, and store all of the values in a repo_settings struct. The values for each setting are initialized as negative to indicate "unset". This centralization will be particularly important in a later change to introduce "meta" config settings that change the defaults for these config settings. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
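A minimal sketch of the shape described here, with illustrative field names (the real struct has more members); the point is the single config scan plus the negative "unset" sentinel:

    /* Illustrative only; the actual field list is longer. */
    struct repo_settings {
        int core_commit_graph;     /* -1 = unset, 0 = false, 1 = true */
        int gc_write_commit_graph; /* -1 = unset */
        int index_version;         /* -1 = unset */
    };

    static void prepare_repo_settings_sketch(struct repo_settings *s)
    {
        /* Start everything as "unset" ... */
        s->core_commit_graph = -1;
        s->gc_write_commit_graph = -1;
        s->index_version = -1;
        /* ... then one config scan fills in whatever is actually configured. */
    }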
2019-08-05  commit-graph: turn a group of write-related macro flags into an enum  (SZEDER Gábor)  1 file changed, -1/+1
Signed-off-by: SZEDER Gábor <szeder.dev@gmail.com> Acked-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-07-19  Merge branch 'ds/commit-graph-incremental'  (Junio C Hamano)  1 file changed, -1/+2
The commits in a repository can be described by multiple commit-graph files now, which allows the commit-graph files to be updated incrementally.

* ds/commit-graph-incremental:
  commit-graph: test verify across alternates
  commit-graph: normalize commit-graph filenames
  commit-graph: test --split across alternate without --split
  commit-graph: test octopus merges with --split
  commit-graph: clean up chains after flattened write
  commit-graph: verify chains with --shallow mode
  commit-graph: create options for split files
  commit-graph: expire commit-graph files
  commit-graph: allow cross-alternate chains
  commit-graph: merge commit-graph chains
  commit-graph: add --split option to builtin
  commit-graph: write commit-graph chains
  commit-graph: rearrange chunk count logic
  commit-graph: add base graphs chunk
  commit-graph: load commit-graph chains
  commit-graph: rename commit_compare to oid_compare
  commit-graph: prepare for commit-graph chains
  commit-graph: document commit-graph chains
2019-07-09  Merge branch 'ds/close-object-store'  (Junio C Hamano)  1 file changed, -2/+2
The commit-graph file is now part of the "files that the runtime may keep open file descriptors on, all of which would need to be closed when done with the object store", and the file descriptor to an existing commit-graph file now is closed before "gc" finalizes a new instance to replace it.

* ds/close-object-store:
  packfile: rename close_all_packs to close_object_store
  packfile: close commit-graph in close_all_packs
  commit-graph: use raw_object_store when closing
2019-07-09  Merge branch 'ds/commit-graph-write-refactor'  (Junio C Hamano)  1 file changed, -3/+4
Renamed from commit-graph-format-v2 and changed scope.

* ds/commit-graph-write-refactor:
  commit-graph: extract write_commit_graph_file()
  commit-graph: extract copy_oids_to_commits()
  commit-graph: extract count_distinct_commits()
  commit-graph: extract fill_oids_from_all_packs()
  commit-graph: extract fill_oids_from_commit_hex()
  commit-graph: extract fill_oids_from_packs()
  commit-graph: create write_commit_graph_context
  commit-graph: remove Future Work section
  commit-graph: collapse parameters into flags
  commit-graph: return with errors during write
  commit-graph: fix the_repository reference
2019-06-25  Use promisor_remote_get_direct() and has_promisor_remote()  (Christian Couder)  1 file changed, -1/+2
Instead of using the repository_format_partial_clone global and fetch_objects() directly, let's use has_promisor_remote() and promisor_remote_get_direct(). This way all the configured promisor remotes will be taken into account, not only the one specified by extensions.partialClone.

Also when cloning or fetching using a partial clone filter, remote.origin.promisor will be set to "true" instead of setting extensions.partialClone to "origin". This makes it possible to use many promisor remotes just by fetching from them.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-19  commit-graph: create options for split files  (Derrick Stolee)  1 file changed, -1/+2
The split commit-graph feature is now fully implemented, but needs some more run-time configurability. Allow direct callers to 'git commit-graph write --split' to specify the values used in the merge strategy and the expire time. Update the documentation to specify these values. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-12  packfile: rename close_all_packs to close_object_store  (Derrick Stolee)  1 file changed, -2/+2
The close_all_packs() method is now responsible for more than just pack-files. It also closes the commit-graph and the multi-pack-index. Rename the function to be more descriptive of its larger role. The name also fits because the input parameter is a raw_object_store. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-06-12  commit-graph: collapse parameters into flags  (Derrick Stolee)  1 file changed, -2/+2
The write_commit_graph() and write_commit_graph_reachable() methods currently take two boolean parameters: 'append' and 'report_progress'. As we update these methods, adding more parameters this way becomes cluttered and hard to maintain. Collapse these parameters into a 'flags' parameter, and adjust the callers to provide flags as necessary. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
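A sketch of the boolean-to-flags conversion; the flag names here are assumptions for illustration, not necessarily the ones the patch uses:

    /* Hypothetical names; combine with bitwise OR instead of two ints. */
    enum commit_graph_write_flags {
        COMMIT_GRAPH_WRITE_APPEND   = (1 << 0),
        COMMIT_GRAPH_WRITE_PROGRESS = (1 << 1),
    };

    /* A caller that used to pass (append = 1, report_progress = 1) now passes: */
    /*   unsigned int flags = COMMIT_GRAPH_WRITE_APPEND | COMMIT_GRAPH_WRITE_PROGRESS; */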
2019-06-12  commit-graph: return with errors during write  (Derrick Stolee)  1 file changed, -3/+4
The write_commit_graph() method uses die() to report failure and exit when confronted with an unexpected condition. This use of die() in a library function is incorrect and is now replaced by error() statements and an int return type. Return zero on success and a negative value on failure. Now that we use 'goto cleanup' to jump to the terminal condition on an error, we have new paths that could lead to uninitialized values. New initializers are added to correct for this. The builtins 'commit-graph', 'gc', and 'commit' call these methods, so update them to check the return value. Test that 'git commit-graph write' returns a proper error code when hitting a failure condition in write_commit_graph(). Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
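The error-handling shape being described, reduced to a sketch (the function and variable names here are made up); error() prints the message and returns -1, so the negative result propagates to callers such as 'gc':

    static int write_sketch(const char *path)
    {
        int res = 0;
        FILE *out = fopen(path, "w");

        if (!out) {
            res = error("unable to open '%s' for writing", path);
            goto cleanup;
        }
        /* ... more steps, each setting res and jumping to cleanup on failure ... */

    cleanup:
        if (out)
            fclose(out);
        return res; /* 0 on success, negative on failure; no die() in library code */
    }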
2019-04-01  gc: handle & check gc.reflogExpire config  (Ævar Arnfjörð Bjarmason)  1 file changed, -0/+17
Don't redundantly run "git reflog expire --all" when gc.reflogExpire and gc.reflogExpireUnreachable are set to "never", and die immediately if those configuration values are bad.

As an earlier "assert lack of early exit" change to the tests for "git reflog expire" shows, an early check of gc.reflogExpire{Unreachable,} isn't wanted in general for "git reflog expire", but it makes sense for "gc" because:

 1) Similarly to 8ab5aa4bd8 ("parseopt: handle malformed --expire arguments more nicely", 2018-04-21) we'll now die early if the config variables are set to invalid values. We run "pack-refs" before "reflog expire", which can take a while, only to then die on an invalid gc.reflogExpire{Unreachable,} configuration.

 2) Not invoking the command at all means it won't show up in trace output, which makes what's going on more obvious when the two are set to "never".

 3) As a later change documents we lock the refs when looping over the refs to expire, even in cases where we end up doing nothing due to this config. For the reasons noted in the earlier "assert lack of early exit" change I don't think it's worth it to bend over backwards in "git reflog expire" itself to carefully detect if we'll really do nothing given the combination of all its possible options and skip that locking, but that's easy to detect here in "gc" where we'll only run "reflog expire" in a relatively simple mode.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-03-18  gc: refactor a "call me once" pattern  (Ævar Arnfjörð Bjarmason)  1 file changed, -3/+9
Change an idiom we're using to ensure that gc_before_repack() only does work once (see 62aad1849f ("gc --auto: do not lock refs in the background", 2014-05-25)) to be more obvious. Nothing except this function cares about the "pack_refs" and "prune_reflogs" variables, so let's not leave the reader wondering if they're being zero'd out for later use somewhere else. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
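The refactored idiom, roughly: keep the "only once" state local to the function instead of clearing file-scope variables after use. A sketch, not the exact code:

    static void gc_before_repack(void)
    {
        /*
         * The guard lives inside the function, so nothing else can
         * depend on it and readers need not hunt for other users.
         */
        static int done = 0;

        if (done++)
            return;

        /* ... run "pack-refs" and "reflog expire" as configured ... */
    }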
2019-03-18  gc: convert to using the_hash_algo  (Ævar Arnfjörð Bjarmason)  1 file changed, -2/+3
There's been a lot of changing of the hardcoded "40" values to the_hash_algo->hexsz, but we've so far missed this one where we hardcoded 38 for the loose object file length. This is because a SHA-1 like abcde[...] gets turned into objects/ab/cde[...]. There's no reason to suppose the same won't be the case for SHA-256, and reading between the lines in hash-function-transition.txt the format is planned to be the same. In the future we may want to further modify this code for the hash function transition. There's a potential pathological case here where we'll only consider the loose objects for the currently active hash, but objects for that hash will share a directory storage with the other hash. Thus we could theoretically have e.g. 1k SHA-1 loose objects, and 1 million SHA-256 objects. Then not notice that we need to pack them because we're currently using SHA-1, even though our FS may be straining under the stress of such humongous directories. So assuming that "gc" eventually learns to pack up both SHA-1 and SHA-256 objects regardless of what the current the_hash_algo is, perhaps this check should be changed to consider all files in objects/17/ matching [0-9a-f] 38 or 62 characters in length (i.e. both SHA-1 and SHA-256). But none of that is something we need to worry about now, and supporting both 38 and 62 characters depending on "the_hash_algo" removes another case of SHA-1 hardcoding. As noted in [1] I'm making no effort to somehow remove the hardcoding for "2" as in "use the first two hexdigits for the directory name". There's no indication that that'll ever change, and somehow generalizing it here would be a drop in the ocean, so there's no point in doing that. It also couldn't be done without coming up with some generalized version of the magical "objects/17" directory. See [2] for a discussion of that directory. 1. https://public-inbox.org/git/874l84ber7.fsf@evledraar.gmail.com/ 2. https://public-inbox.org/git/87k1mta9x5.fsf@evledraar.gmail.com/ Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
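What the de-hardcoding amounts to, sketched: the loose-object file name under objects/xx/ is the hex hash minus the two digits used for the fan-out directory, so the length check becomes hash-agnostic. Variable names here are illustrative:

    /* objects/ab/cdef...: name length = hexsz - 2 (38 for SHA-1, 62 for SHA-256) */
    size_t loose_name_len = the_hash_algo->hexsz - 2;

    if (strlen(ent->d_name) != loose_name_len)
        continue; /* not a loose object of the current hash */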
2019-03-14  gc: remove redundant check for gc_auto_threshold  (Ævar Arnfjörð Bjarmason)  1 file changed, -3/+0
Checking gc_auto_threshold in too_many_loose_objects() was added in 17815501a8 ("git-gc --auto: run "repack -A -d -l" as necessary.", 2007-09-17) when need_to_gc() itself was also reliant on gc_auto_pack_limit before its early return:

    gc_auto_threshold <= 0 && gc_auto_pack_limit <= 0

When that check was simplified to just checking "gc_auto_threshold <= 0" in b14d255ba8 ("builtin-gc.c: allow disabling all auto-gc'ing by assigning 0 to gc.auto", 2008-03-19) this unreachable code should have been removed.

We only call too_many_loose_objects() from within need_to_gc() itself, which will return if this condition holds, and in cmd_gc() which will return before ever getting to "auto_gc && too_many_loose_objects()" if "auto_gc && !need_to_gc()" is true earlier in the function.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2019-01-18  Merge branch 'js/gc-repack-close-before-remove'  (Junio C Hamano)  1 file changed, -1/+3
"git gc" and "git repack" did not close the open packfiles that they found unneeded before removing them, which didn't work on a platform incapable of removing an open file. This has been corrected. * js/gc-repack-close-before-remove: gc/repack: release packs when needed
2019-01-11  gc/repack: release packs when needed  (Johannes Schindelin)  1 file changed, -1/+3
On Windows, files cannot be removed nor renamed if there are still handles held by a process. To remedy that, we introduced the close_all_packs() function. Earlier, we made sure that the packs are released just before `git gc` is spawned, in case that gc wants to remove no-longer needed packs. But this developer forgot that gc itself also needs to let go of packs, e.g. when consolidating all packs via the --aggressive option. Likewise, `git repack -d` wants to delete obsolete packs and therefore needs to close all pack handles, too. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-12-09  Indent code with TABs  (Nguyễn Thái Ngọc Duy)  1 file changed, -1/+1
We indent with TABs and sometimes for fine alignment, TABs followed by spaces, but never all spaces (unless the indentation is less than 8 columns). Indenting with spaces slips through in some places. Fix them.

Imported code and compat/ are left alone on purpose. The former should remain as close to upstream as possible. The latter pretty much has separate maintainers, it's up to them to decide.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-10-16  Merge branch 'jn/gc-auto'  (Junio C Hamano)  1 file changed, -6/+27
"gc --auto" ended up calling exit(-1) upon error, which has been corrected to use exit(1). Also the error reporting behaviour when daemonized has been updated to exit with zero status when stopping due to a previously discovered error (which implies there is no point running gc to improve the situation); we used to exit with failure in such a case. * jn/gc-auto: gc: do not return error for prior errors in daemonized mode
2018-10-16  Merge branch 'jn/gc-auto-prep'  (Junio C Hamano)  1 file changed, -22/+18
Code clean-up.

* jn/gc-auto-prep:
  gc: exit with status 128 on failure
  gc: improve handling of errors reading gc.log
2018-10-16  Merge branch 'ab/commit-graph-progress'  (Junio C Hamano)  1 file changed, -1/+2
Generation of (experimental) commit-graph files has so far been fairly silent, even though it takes a noticeable amount of time in a meaningfully large repository. The users will now see progress output.

* ab/commit-graph-progress:
  gc: fix regression in 7b0f229222 impacting --quiet
  commit-graph verify: add progress output
  commit-graph write: add progress output
2018-09-20  gc: fix regression in 7b0f229222 impacting --quiet  (Ævar Arnfjörð Bjarmason)  1 file changed, -1/+1
Fix a regression in my recent 7b0f229222 ("commit-graph write: add progress output", 2018-09-17). The newly added progress output for "commit-graph write" didn't check the --quiet option. Do so, and add a test asserting that this works as expected.

Since the TTY prerequisite isn't available everywhere let's add a version of this that both requires and doesn't require that. This test might be overly specific and will break if new progress output is added, but I think it'll serve as a good reminder to test the undertested progress mode(s).

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Helped-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-09-17  commit-graph write: add progress output  (Ævar Arnfjörð Bjarmason)  1 file changed, -1/+2
Before this change the "commit-graph write" command didn't report any progress. On my machine this command takes more than 10 seconds to write the graph for linux.git, and around 1m30s on the 2015-04-03-1M-git.git[1] test repository (a test case for a large monorepository).

Furthermore, since the gc.writeCommitGraph setting was added in d5d5d7b641 ("gc: automatically write commit-graph files", 2018-06-27), there was no indication at all from a "git gc" run that anything was different. This is why one of the progress bars being added here uses start_progress() instead of start_delayed_progress(), so that it's guaranteed to be seen. E.g. on my tiny 867 commit dotfiles.git repository:

    $ git -c gc.writeCommitGraph=true gc
    Enumerating objects: 2821, done.
    [...]
    Computing commit graph generation numbers: 100% (867/867), done.

On larger repositories, such as linux.git the delayed progress bar(s) will kick in, and we'll show what's going on instead of, as was previously happening, printing nothing while we write the graph:

    $ git -c gc.writeCommitGraph=true gc
    [...]
    Annotating commits in commit graph: 1565573, done.
    Computing commit graph generation numbers: 100% (782484/782484), done.

Note that here we don't show "Finding commits for commit graph", this is because under "git gc" we seed the search with the commit references in the repository, and that set is too small to show any progress, but would e.g. on a smaller repo such as git.git with --stdin-commits:

    $ git rev-list --all | git -c gc.writeCommitGraph=true write --stdin-commits
    Finding commits for commit graph: 100% (162576/162576), done.
    Computing commit graph generation numbers: 100% (162576/162576), done.

With --stdin-packs we don't show any estimation of how much is left to do. This is because we might be processing more than one pack. We could be less lazy here and show progress, either by detecting that we're only processing one pack, or by first looping over the packs to discover how many commits they have. I don't see the point in doing that work. So instead we get (on 2015-04-03-1M-git.git):

    $ echo pack-<HASH>.idx | git -c gc.writeCommitGraph=true --exec-path=$PWD commit-graph write --stdin-packs
    Finding commits for commit graph: 13064614, done.
    Annotating commits in commit graph: 3001341, done.
    Computing commit graph generation numbers: 100% (1000447/1000447), done.

No GC mode uses --stdin-packs. It's what they use at Microsoft to manually compute the generation numbers for their collection of large packs which are never coalesced.

The reason we need a "report_progress" variable passed down from "git gc" is so that we don't report this output when "git gc --auto" detaches from the terminal and runs in the background. Since we write the commit graph from the "git gc" process itself (as opposed to what we do with say the "git repack" phase), we'd end up writing the output to .git/gc.log and reporting it to the user next time as part of the "The last gc run reported the following[...]" error, see 329e6e8794 ("gc: save log from daemonized gc --auto and print it next time", 2015-09-19). So we must keep track of whether or not we're running in that daemonized mode, and if so print no progress.

See [2] and subsequent replies for a discussion of an approach not taken in compute_generation_numbers(). I.e. we're saying "Computing commit graph generation numbers", even though on an established history we're mostly skipping over all the work we did in the past.

This is similar to the white lie we tell in the "Writing objects" phase (not all are objects being written). Always showing progress is considered more important than accuracy. I.e. on a repository like 2015-04-03-1M-git.git we'd hang for 6 seconds with no output on the second "git gc" if no changes were made to any objects in the interim if we'd take the approach in [2].

1. https://github.com/avar/2015-04-03-1M-git

2. <c6960252-c095-fb2b-e0bc-b1e6bb261614@gmail.com> (https://public-inbox.org/git/c6960252-c095-fb2b-e0bc-b1e6bb261614@gmail.com/)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-08-20  treewide: use get_all_packs  (Derrick Stolee)  1 file changed, -2/+2
There are many places in the codebase that want to iterate over all packfiles known to Git. The purposes are wide-ranging, and those that can take advantage of the multi-pack-index already do. So, use get_all_packs() instead of get_packed_git() to be sure we are iterating over all packfiles. Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
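The typical shape of such a conversion, sketched (the actual gc.c call sites differ in detail): iterate what get_all_packs() returns instead of the get_packed_git() list, so packs known only via the multi-pack-index are not missed:

    struct packed_git *p;

    for (p = get_all_packs(the_repository); p; p = p->next) {
        if (!p->pack_local)
            continue;         /* e.g. skip packs that live in alternates */
        /* ... count or examine the pack ... */
    }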
2018-08-02  Merge branch 'kg/gc-auto-windows-workaround'  (Junio C Hamano)  1 file changed, -0/+1
"git gc --auto" opens file descriptors for the packfiles before spawning "git repack/prune", which would upset Windows that does not want a process to work on a file that is open by another process. The issue has been worked around. * kg/gc-auto-windows-workaround: gc --auto: release pack files before auto packing
2018-07-17  gc: do not return error for prior errors in daemonized mode  (Jonathan Nieder)  1 file changed, -6/+27
Some build machines started consistently failing to fetch updated source using "repo sync", with error

    error: The last gc run reported the following. Please correct the root cause
    and remove /build/.repo/projects/tools/git.git/gc.log.
    Automatic cleanup will not be performed until the file is removed.

    warning: There are too many unreachable loose objects; run 'git prune' to remove them.

The cause takes some time to describe.

In v2.0.0-rc0~145^2 (gc: config option for running --auto in background, 2014-02-08), "git gc --auto" learned to run in the background instead of blocking the invoking command. In this mode, it closed stderr to avoid interleaving output with any subsequent commands, causing warnings like the above to be swallowed; v2.6.3~24^2 (gc: save log from daemonized gc --auto and print it next time, 2015-09-19) addressed that by storing any diagnostic output in .git/gc.log and allowing the next "git gc --auto" run to print it.

To avoid wasteful repeated fruitless gcs, when gc.log is present, the subsequent "gc --auto" would die after printing its contents. Most git commands, such as "git fetch", ignore the exit status from "git gc --auto" so all is well at this point: the user gets to see the error message, and the fetch succeeds, without a wasteful additional attempt at an automatic gc.

External tools like repo[1], though, do care about the exit status from "git gc --auto". In non-daemonized mode, the exit status is straightforward: if there is an error, it is nonzero, but after a warning like the above, the status is zero. The daemonized mode, as a side effect of the other properties provided, offers a very strange exit code convention:

 - if no housekeeping was required, the exit status is 0

 - the first real run, after forking into the background, returns exit status 0 unconditionally. The parent process has no way to know whether gc will succeed.

 - if there is any diagnostic output in gc.log, subsequent runs return a nonzero exit status to indicate that gc was not triggered.

There's nothing for the calling program to act on on the basis of that error. Use status 0 consistently instead, to indicate that we decided not to run a gc (just like if no housekeeping was required). This way, repo and similar tools can get the benefit of the same behavior as tools like "git fetch" that ignore the exit status from gc --auto. Once the period of time described by gc.pruneExpire elapses, the unreachable loose objects will be removed by "git gc --auto" automatically.

[1] https://gerrit-review.googlesource.com/c/git-repo/+/10598/

Reported-by: Andrii Dehtiarov <adehtiarov@google.com>
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-17  gc: exit with status 128 on failure  (Jonathan Nieder)  1 file changed, -21/+14
A value of -1 returned from cmd_gc gets propagated to exit(), resulting in an exit status of 255. Use die instead for a clearer error message and a controlled exit. Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-07-17  gc: improve handling of errors reading gc.log  (Jonathan Nieder)  1 file changed, -3/+6
A collection of minor error handling fixes:

 - use an error message in lower case, following the usual style

 - quote filenames in error messages to make them easier to read and to decrease translation load by matching other 'stat' error messages

 - check for and report errors from 'read', too

 - avoid being confused by a gc.log larger than INT_MAX bytes

Noticed by code inspection.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
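Roughly the kind of checks the list above describes, as a sketch (this is not the patch itself): stat and read errors are reported with lower-case, quoted-path messages, and an absurdly large gc.log is rejected instead of being truncated into an int:

    struct stat st;
    char *buf;
    int fd = open(gc_log_path, O_RDONLY);

    if (fd < 0)
        return;
    if (fstat(fd, &st) < 0) {
        warning_errno(_("cannot stat '%s'"), gc_log_path); /* lower case, quoted path */
        close(fd);
        return;
    }
    if (st.st_size >= INT_MAX) {
        warning(_("'%s' is too large to process"), gc_log_path);
        close(fd);
        return;
    }
    buf = xmallocz(st.st_size);
    if (read_in_full(fd, buf, st.st_size) < 0) /* report 'read' errors too */
        warning_errno(_("failed to read '%s'"), gc_log_path);
    close(fd);
    /* ... use buf, then ... */
    free(buf);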
2018-07-09  gc --auto: release pack files before auto packing  (Kim Gybels)  1 file changed, -0/+1
Teach gc --auto to release pack files before auto packing the repository to prevent failures when removing them. Also teach the test 'fetching with auto-gc does not lock up' to complain when it is no longer triggering an auto packing of the repository. Fixes https://github.com/git-for-windows/git/issues/500 Signed-off-by: Kim Gybels <kgybels@infogroep.be> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-06-27  gc: automatically write commit-graph files  (Derrick Stolee)  1 file changed, -0/+6
The commit-graph file is a very helpful feature for speeding up git operations. In order to make it more useful, make it possible to write the commit-graph file during standard garbage collection operations. Add a 'gc.commitGraph' config setting that triggers writing a commit-graph file after any non-trivial 'git gc' command. Defaults to false while the commit-graph feature matures. We specifically do not want to have this on by default until the commit-graph feature is fully integrated with history-modifying features like shallow clones. Helped-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Derrick Stolee <dstolee@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-30  Merge branch 'ma/lockfile-cleanup'  (Junio C Hamano)  1 file changed, -1/+1
Code clean-up to adjust to a more recent lockfile API convention that allows lockfile instances kept on the stack.

* ma/lockfile-cleanup:
  lock_file: move static locks into functions
  lock_file: make function-local locks non-static
  refs.c: do not die if locking fails in `delete_pseudoref()`
  refs.c: do not die if locking fails in `write_pseudoref()`
  t/helper/test-write-cache: clean up lock-handling
2018-05-23  Merge branch 'nd/repack-keep-pack'  (Junio C Hamano)  1 file changed, -6/+159
"git gc" in a large repository takes a lot of time as it considers to repack all objects into one pack by default. The command has been taught to pretend as if the largest existing packfile is marked with ".keep" so that it is left untouched while objects in other packs and loose ones are repacked. * nd/repack-keep-pack: pack-objects: show some progress when counting kept objects gc --auto: exclude base pack if not enough mem to "repack -ad" gc: handle a corner case in gc.bigPackThreshold gc: add gc.bigPackThreshold config gc: add --keep-largest-pack option repack: add --keep-pack option t7700: have closing quote of a test at the beginning of line
2018-05-10  lock_file: make function-local locks non-static  (Martin Ågren)  1 file changed, -1/+1
Placing `struct lock_file`s on the stack used to be a bad idea, because the temp- and lockfile-machinery would keep a pointer into the struct. But after 076aa2cbd (tempfile: auto-allocate tempfiles on heap, 2017-09-05), we can safely have lockfiles on the stack. (This applies even if a user returns early, leaving a locked lock behind.) These `struct lock_file`s are local to their respective functions and we can drop their staticness. For good measure, I have inspected these sites and come to believe that they always release the lock, with the possible exception of bailing out using `die()` or `exit()` or by returning from a `cmd_foo()`. As pointed out by Jeff King, it would be bad if someone held on to a `struct lock_file *` for some reason. After some grepping, I agree with his findings: no-one appears to be doing that. Signed-off-by: Martin Ågren <martin.agren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
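What the resulting call sites look like, roughly; a sketch using the lockfile API's usual entry points, with a made-up function name and file contents:

    static void rewrite_file_under_lock(const char *path)
    {
        struct lock_file lock = LOCK_INIT; /* on the stack, not static */
        int fd = hold_lock_file_for_update(&lock, path, LOCK_DIE_ON_ERROR);
        const char *contents = "new contents\n";

        if (write_in_full(fd, contents, strlen(contents)) < 0)
            die_errno("could not write to '%s'", path);
        if (commit_lock_file(&lock) < 0)
            die_errno("could not commit '%s'", path);
    }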
2018-05-08  Merge branch 'jc/parseopt-expiry-errors'  (Junio C Hamano)  1 file changed, -1/+5
"git gc --prune=nonsense" spent long time repacking and then silently failed when underlying "git prune --expire=nonsense" failed to parse its command line. This has been corrected. * jc/parseopt-expiry-errors: parseopt: handle malformed --expire arguments more nicely gc: do not upcase error message shown with die()
2018-04-23  parseopt: handle malformed --expire arguments more nicely  (Junio C Hamano)  1 file changed, -0/+4
A few commands that parse --expire=<time> command line option behave sillily when given nonsense input. For example:

    $ git prune --no-expire
    Segmentation fault

    $ git prune --expire=npw; echo $?
    129

Both come from parse_opt_expiry_date_cb().

The former is because the function is not prepared to see arg==NULL (for "--no-expire", it is a norm; "--expire" at the end of the command line could be made to pass NULL, if it is told that the argument is optional, but we don't so we do not have to worry about that case).

The latter is because it does not check the value returned from the underlying parse_expiry_date(). This seems to be a recent regression introduced while we attempted to avoid spewing the entire usage message when given a correct option but with an invalid value at 3bb0923f ("parse-options: do not show usage upon invalid option value", 2018-03-22). Before that, we didn't fail silently but showed a full usage help (which arguably is not all that better).

Also catch this error early when "git gc --prune=<expiration>" is misspelled by doing a dummy parsing before the main body of "gc" that is time consuming even begins. Otherwise, we'd spend time to pack objects and then later have "git prune" first notice the error. Aborting "gc" in the middle that way is not harmful but is ugly and can be avoided.

Helped-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
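The shape of the hardened callback, as a sketch that mirrors the description above rather than quoting the patch: treat --no-expire as "never" and die on an unparsable value instead of ignoring the parse result:

    static int parse_opt_expiry_date_cb(const struct option *opt,
                                        const char *arg, int unset)
    {
        timestamp_t *expiry = opt->value;

        if (unset)
            arg = "never"; /* --no-expire: do not segfault on arg == NULL */
        if (parse_expiry_date(arg, expiry))
            die(_("malformed expiration date '%s'"), arg);
        return 0;
    }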
2018-04-23  gc: do not upcase error message shown with die()  (Junio C Hamano)  1 file changed, -1/+1
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-16  gc --auto: exclude base pack if not enough mem to "repack -ad"  (Nguyễn Thái Ngọc Duy)  1 file changed, -1/+97
pack-objects could be a big memory hog especially on large repos, everybody knows that. The suggestion to stick a .keep file on the giant base pack to avoid this problem is also known for a long time. Recent patches add an option to do just this, but it has to be either configured or activated manually. This patch lets `git gc --auto` activate this mode automatically when it thinks `repack -ad` will use a lot of memory and start affecting the system due to swapping or flushing OS cache. gc --auto decides to do this based on an estimation of pack-objects memory usage, which is quite accurate at least for the heap part, and whether that fits in half of system memory (the assumption here is for desktop environment where there are many other applications running). This mechanism only kicks in if gc.bigBasePackThreshold is not configured. If it is, it is assumed that the user already knows what they want. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-16  gc: handle a corner case in gc.bigPackThreshold  (Nguyễn Thái Ngọc Duy)  1 file changed, -1/+7
This config allows us to keep <N> packs back if their size is larger than a limit. But if this N >= gc.autoPackLimit, we may have a problem. We are supposed to reduce the number of packs after a threshold because it affects performance.

We could tell the user that they have incompatible gc.bigPackThreshold and gc.autoPackLimit, but it's kinda hard when 'git gc --auto' runs in background. Instead let's fall back to the next best strategy: try to reduce the number of packs anyway, but keep the base pack out. This reduces the number of packs to two and hopefully won't take up too much resources to repack (the assumption still is the base pack takes most resources to handle).

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-16  gc: add gc.bigPackThreshold config  (Nguyễn Thái Ngọc Duy)  1 file changed, -6/+20
The --keep-largest-pack option is not very convenient to use because you need to tell gc to do this explicitly (and probably on just a few large repos). Add a config key that enables this mode when packs larger than a limit are found. Note that there's a slight behavior difference compared to --keep-largest-pack: all packs larger than the threshold are kept, not just the largest one. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
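A sketch of how such a size knob is typically read in gc's config step (the surrounding code and the zero default here are assumptions): left at zero, the feature stays off and gc repacks everything as before:

    static unsigned long big_pack_threshold; /* 0 = unset: keep old behavior */

    static void load_big_pack_threshold_sketch(void)
    {
        if (git_config_get_ulong("gc.bigpackthreshold", &big_pack_threshold))
            big_pack_threshold = 0;
    }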
2018-04-16  gc: add --keep-largest-pack option  (Nguyễn Thái Ngọc Duy)  1 file changed, -4/+41
This adds a new repack mode that combines everything into a secondary pack, leaving the largest pack alone. This could help reduce memory pressure. On linux-2.6.git, valgrind massif reports 1.6GB heap in "pack all" case, and 535MB in "pack all except the base pack" case. We save roughly 1GB memory by excluding the base pack.

This should also lower I/O because we don't have to rewrite a giant pack every time (e.g. for linux-2.6.git that's a 1.4GB pack file).

PS. The use of string_list here seems overkill, but we'll need it in the next patch...

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-11  Merge branch 'sb/packfiles-in-repository'  (Junio C Hamano)  1 file changed, -2/+1
Refactoring of the internal global data structure continues.

* sb/packfiles-in-repository:
  packfile: keep prepare_packed_git() private
  packfile: allow find_pack_entry to handle arbitrary repositories
  packfile: add repository argument to find_pack_entry
  packfile: allow reprepare_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git_one to handle arbitrary repositories
  packfile: add repository argument to reprepare_packed_git
  packfile: add repository argument to prepare_packed_git
  packfile: add repository argument to prepare_packed_git_one
  packfile: allow install_packed_git to handle arbitrary repositories
  packfile: allow rearrange_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git_mru to handle arbitrary repositories
2018-04-11  Merge branch 'sb/object-store'  (Junio C Hamano)  1 file changed, -1/+3
Refactoring the internal global data structure to make it possible to open multiple repositories, work with and then close them. Rerolled by Duy on top of a separate preliminary clean-up topic. The resulting structure of the topics looked very sensible.

* sb/object-store: (27 commits)
  sha1_file: allow sha1_loose_object_info to handle arbitrary repositories
  sha1_file: allow map_sha1_file to handle arbitrary repositories
  sha1_file: allow map_sha1_file_1 to handle arbitrary repositories
  sha1_file: allow open_sha1_file to handle arbitrary repositories
  sha1_file: allow stat_sha1_file to handle arbitrary repositories
  sha1_file: allow sha1_file_name to handle arbitrary repositories
  sha1_file: add repository argument to sha1_loose_object_info
  sha1_file: add repository argument to map_sha1_file
  sha1_file: add repository argument to map_sha1_file_1
  sha1_file: add repository argument to open_sha1_file
  sha1_file: add repository argument to stat_sha1_file
  sha1_file: add repository argument to sha1_file_name
  sha1_file: allow prepare_alt_odb to handle arbitrary repositories
  sha1_file: allow link_alt_odb_entries to handle arbitrary repositories
  sha1_file: add repository argument to prepare_alt_odb
  sha1_file: add repository argument to link_alt_odb_entries
  sha1_file: add repository argument to read_info_alternates
  sha1_file: add repository argument to link_alt_odb_entry
  sha1_file: add raw_object_store argument to alt_odb_usable
  pack: move approximate object count to object store
  ...
2018-03-26  packfile: keep prepare_packed_git() private  (Nguyễn Thái Ngọc Duy)  1 file changed, -1/+0
The reason callers have to call this is to make sure either packed_git or packed_git_mru pointers are initialized since we don't do that by default. Sometimes it's hard to see this connection between where the function is called and where packed_git pointer is used (sometimes in separate functions). Keep this dependency internal because now all access to packed_git and packed_git_mru must go through get_xxx() wrappers. Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
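The accessor pattern this describes, sketched: callers go through a getter that performs the lazy initialization itself, so nobody has to remember to call prepare_packed_git() first:

    struct packed_git *get_packed_git(struct repository *r)
    {
        prepare_packed_git(r); /* idempotent; done on first access */
        return r->objects->packed_git;
    }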