Diffstat (limited to 'Documentation/technical')
-rw-r--r--  Documentation/technical/api-allocation-growing.txt | 39
-rw-r--r--  Documentation/technical/api-argv-array.txt | 65
-rw-r--r--  Documentation/technical/api-config.txt | 319
-rw-r--r--  Documentation/technical/api-credentials.txt | 271
-rw-r--r--  Documentation/technical/api-diff.txt | 174
-rw-r--r--  Documentation/technical/api-directory-listing.txt | 130
-rw-r--r--  Documentation/technical/api-error-handling.txt | 10
-rw-r--r--  Documentation/technical/api-gitattributes.txt | 154
-rw-r--r--  Documentation/technical/api-grep.txt | 8
-rw-r--r--  Documentation/technical/api-history-graph.txt | 173
-rw-r--r--  Documentation/technical/api-merge.txt | 72
-rw-r--r--  Documentation/technical/api-object-access.txt | 15
-rw-r--r--  Documentation/technical/api-oid-array.txt | 90
-rw-r--r--  Documentation/technical/api-parse-options.txt | 8
-rw-r--r--  Documentation/technical/api-quote.txt | 10
-rw-r--r--  Documentation/technical/api-ref-iteration.txt | 78
-rw-r--r--  Documentation/technical/api-remote.txt | 127
-rw-r--r--  Documentation/technical/api-revision-walking.txt | 72
-rw-r--r--  Documentation/technical/api-run-command.txt | 264
-rw-r--r--  Documentation/technical/api-setup.txt | 47
-rw-r--r--  Documentation/technical/api-sigchain.txt | 41
-rw-r--r--  Documentation/technical/api-simple-ipc.txt | 105
-rw-r--r--  Documentation/technical/api-submodule-config.txt | 66
-rw-r--r--  Documentation/technical/api-trace.txt | 140
-rw-r--r--  Documentation/technical/api-trace2.txt | 464
-rw-r--r--  Documentation/technical/api-tree-walking.txt | 147
-rw-r--r--  Documentation/technical/api-xdiff-interface.txt | 7
-rw-r--r--  Documentation/technical/bundle-format.txt | 76
-rw-r--r--  Documentation/technical/chunk-format.txt | 116
-rw-r--r--  Documentation/technical/commit-graph-format.txt | 85
-rw-r--r--  Documentation/technical/commit-graph.txt | 305
-rw-r--r--  Documentation/technical/directory-rename-detection.txt | 29
-rw-r--r--  Documentation/technical/hash-function-transition.txt | 303
-rw-r--r--  Documentation/technical/http-protocol.txt | 7
-rw-r--r--  Documentation/technical/index-format.txt | 111
-rw-r--r--  Documentation/technical/multi-pack-index.txt | 15
-rw-r--r--  Documentation/technical/pack-format.txt | 171
-rw-r--r--  Documentation/technical/pack-protocol.txt | 49
-rw-r--r--  Documentation/technical/packfile-uri.txt | 82
-rw-r--r--  Documentation/technical/parallel-checkout.txt | 270
-rw-r--r--  Documentation/technical/partial-clone.txt | 148
-rw-r--r--  Documentation/technical/protocol-capabilities.txt | 61
-rw-r--r--  Documentation/technical/protocol-v2.txt | 178
-rw-r--r--  Documentation/technical/racy-git.txt | 2
-rw-r--r--  Documentation/technical/reftable.txt | 1098
-rw-r--r--  Documentation/technical/remembering-renames.txt | 671
-rw-r--r--  Documentation/technical/rerere.txt | 2
-rw-r--r--  Documentation/technical/shallow.txt | 2
-rw-r--r--  Documentation/technical/sparse-index.txt | 208
49 files changed, 3865 insertions(+), 3220 deletions(-)
diff --git a/Documentation/technical/api-allocation-growing.txt b/Documentation/technical/api-allocation-growing.txt
deleted file mode 100644
index 5a59b54844..0000000000
--- a/Documentation/technical/api-allocation-growing.txt
+++ /dev/null
@@ -1,39 +0,0 @@
-allocation growing API
-======================
-
-Dynamically growing an array using realloc() is error prone and boring.
-
-Define your array with:
-
-* a pointer (`item`) that points at the array, initialized to `NULL`
- (although please name the variable based on its contents, not on its
- type);
-
-* an integer variable (`alloc`) that keeps track of how big the current
- allocation is, initialized to `0`;
-
-* another integer variable (`nr`) to keep track of how many elements the
- array currently has, initialized to `0`.
-
-Then before adding the `n`th element to the array, call `ALLOC_GROW(item, n,
-alloc)`. This ensures that the array can hold at least `n` elements by
-calling `realloc(3)` and adjusting the `alloc` variable.
-
-------------
-sometype *item;
-size_t nr;
-size_t alloc;
-size_t i;
-
-for (i = 0; i < nr; i++)
-	if (we like item[i] already)
-		return;
-
-/* we did not like any existing one, so add one */
-ALLOC_GROW(item, nr + 1, alloc);
-item[nr++] = value you like;
-------------
-
-You are responsible for updating the `nr` variable.
-
-If you need to specify the number of elements to allocate explicitly
-then use the macro `REALLOC_ARRAY(item, alloc)` instead of `ALLOC_GROW`.
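-
-For instance (a sketch; the `needed` count is whatever the caller has
-computed and is only illustrative), growing to an explicitly computed
-size might look like:
-
--------------
-if (nr + needed > alloc) {
-	alloc = nr + needed;
-	REALLOC_ARRAY(item, alloc);
-}
--------------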
diff --git a/Documentation/technical/api-argv-array.txt b/Documentation/technical/api-argv-array.txt
deleted file mode 100644
index 870c8edbfb..0000000000
--- a/Documentation/technical/api-argv-array.txt
+++ /dev/null
@@ -1,65 +0,0 @@
-argv-array API
-==============
-
-The argv-array API allows one to dynamically build and store
-NULL-terminated lists. An argv-array maintains the invariant that the
-`argv` member always points to a non-NULL array, and that the array is
-always NULL-terminated at the element pointed to by `argv[argc]`. This
-makes the result suitable for passing to functions expecting to receive
-argv from main(), or the link:api-run-command.html[run-command API].
-
-The string-list API (documented in string-list.h) is similar, but cannot be
-used for these purposes; instead of storing a straight string pointer,
-it contains an item structure with a `util` field that is not compatible
-with the traditional argv interface.
-
-Each `argv_array` manages its own memory. Any strings pushed into the
-array are duplicated, and all memory is freed by argv_array_clear().
-
-Data Structures
----------------
-
-`struct argv_array`::
-
- A single array. This should be initialized by assignment from
- `ARGV_ARRAY_INIT`, or by calling `argv_array_init`. The `argv`
- member contains the actual array; the `argc` member contains the
- number of elements in the array, not including the terminating
- NULL.
-
-Functions
----------
-
-`argv_array_init`::
- Initialize an array. This is no different than assigning from
- `ARGV_ARRAY_INIT`.
-
-`argv_array_push`::
- Push a copy of a string onto the end of the array.
-
-`argv_array_pushl`::
- Push a list of strings onto the end of the array. The arguments
- should be a list of `const char *` strings, terminated by a NULL
- argument.
-
-`argv_array_pushf`::
- Format a string and push it onto the end of the array. This is a
- convenience wrapper combining `strbuf_addf` and `argv_array_push`.
-
-`argv_array_pushv`::
- Push a null-terminated array of strings onto the end of the array.
-
-`argv_array_pop`::
- Remove the final element from the array. If there are no
- elements in the array, do nothing.
-
-`argv_array_clear`::
- Free all memory associated with the array and return it to the
- initial, empty state.
-
-`argv_array_detach`::
- Disconnect the `argv` member from the `argv_array` struct and
- return it. The caller is responsible for freeing the memory used
- by the array, and by the strings it references. After detaching,
- the `argv_array` is in a reinitialized state and can be pushed
- into again.
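-
-A minimal sketch of typical use (the command line built here is made
-up), building an argument list and releasing it afterwards:
-
--------------
-struct argv_array args = ARGV_ARRAY_INIT;
-
-argv_array_push(&args, "diff");
-argv_array_pushl(&args, "--stat", "HEAD", NULL);
-argv_array_pushf(&args, "--abbrev=%d", 8);
-
-/* args.argv is NULL-terminated; args.argc is now 4 */
-
-argv_array_clear(&args);
--------------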
diff --git a/Documentation/technical/api-config.txt b/Documentation/technical/api-config.txt
deleted file mode 100644
index fa39ac9d71..0000000000
--- a/Documentation/technical/api-config.txt
+++ /dev/null
@@ -1,319 +0,0 @@
-config API
-==========
-
-The config API gives callers a way to access Git configuration files
-(and files which have the same syntax). See linkgit:git-config[1] for a
-discussion of the config file syntax.
-
-General Usage
--------------
-
-Config files are parsed linearly, and each variable found is passed to a
-caller-provided callback function. The callback function is responsible
-for any actions to be taken on the config option, and is free to ignore
-some options. It is not uncommon for the configuration to be parsed
-several times during the run of a Git program, with different callbacks
-picking out different variables useful to themselves.
-
-A config callback function takes three parameters:
-
-- the name of the parsed variable. This is in canonical "flat" form: the
- section, subsection, and variable segments will be separated by dots,
- and the section and variable segments will be all lowercase. E.g.,
- `core.ignorecase`, `diff.SomeType.textconv`.
-
-- the value of the found variable, as a string. If the variable had no
- value specified, the value will be NULL (typically this means it
- should be interpreted as boolean true).
-
-- a void pointer passed in by the caller of the config API; this can
- contain callback-specific data
-
-A config callback should return 0 for success, or -1 if the variable
-could not be parsed properly.
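-
-As a hedged sketch (the key chosen and the `int` payload are just for
-illustration; `git_config()` and `git_config_bool()` are described
-below), a callback might look like:
-
--------------
-static int check_ignorecase(const char *var, const char *value, void *data)
-{
-	int *ignorecase = data;
-
-	if (!strcmp(var, "core.ignorecase"))
-		*ignorecase = git_config_bool(var, value);
-	return 0; /* other keys are simply ignored */
-}
-
-	...
-	int ignorecase = 0;
-	git_config(check_ignorecase, &ignorecase);
--------------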
-
-Basic Config Querying
----------------------
-
-Most programs will simply want to look up variables in all config files
-that Git knows about, using the normal precedence rules. To do this,
-call `git_config` with a callback function and void data pointer.
-
-`git_config` will read all config sources in order of increasing
-priority. Thus a callback should typically overwrite previously-seen
-entries with new ones (e.g., if both the user-wide `~/.gitconfig` and
-repo-specific `.git/config` contain `color.ui`, the config machinery
-will first feed the user-wide one to the callback, and then the
-repo-specific one; by overwriting, the higher-priority repo-specific
-value is left at the end).
-
-The `config_with_options` function lets the caller examine config
-while adjusting some of the default behavior of `git_config`. It should
-almost never be used by "regular" Git code that is looking up
-configuration variables. It is intended for advanced callers like
-`git-config`, which are intentionally tweaking the normal config-lookup
-process. It takes two extra parameters:
-
-`config_source`::
-If this parameter is non-NULL, it specifies the source to parse for
-configuration, rather than looking in the usual files. See `struct
-git_config_source` in `config.h` for details. Regular `git_config` defaults
-to `NULL`.
-
-`opts`::
-Specify options to adjust the behavior of parsing config files. See `struct
-config_options` in `config.h` for details. As an example: regular `git_config`
-sets `opts.respect_includes` to `1` by default.
-
-Reading Specific Files
-----------------------
-
-To read a specific file in git-config format, use
-`git_config_from_file`. This takes the same callback and data parameters
-as `git_config`.
-
-Querying For Specific Variables
--------------------------------
-
-For programs wanting to query for specific variables in a non-callback
-manner, the config API provides two functions `git_config_get_value`
-and `git_config_get_value_multi`. They both read values from an internal
-cache generated previously from reading the config files.
-
-`int git_config_get_value(const char *key, const char **value)`::
-
- Finds the highest-priority value for the configuration variable `key`,
- stores the pointer to it in `value` and returns 0. When the
- configuration variable `key` is not found, returns 1 without touching
- `value`. The caller should not free or modify `value`, as it is owned
- by the cache.
-
-`const struct string_list *git_config_get_value_multi(const char *key)`::
-
- Finds and returns the value list, sorted in order of increasing
- priority, for the configuration variable `key`. When the configuration
- variable `key` is not found, returns NULL. The caller should not free
- or modify the returned pointer, as it is owned by the cache.
-
-`void git_config_clear(void)`::
-
- Resets and invalidates the config cache.
-
-The config API also provides type specific API functions which do conversion
-as well as retrieval for the queried variable, including:
-
-`int git_config_get_int(const char *key, int *dest)`::
-
- Finds and parses the value to an integer for the configuration variable
- `key`. Dies on error; otherwise, stores the value of the parsed integer in
- `dest` and returns 0. When the configuration variable `key` is not found,
- returns 1 without touching `dest`.
-
-`int git_config_get_ulong(const char *key, unsigned long *dest)`::
-
- Similar to `git_config_get_int` but for unsigned longs.
-
-`int git_config_get_bool(const char *key, int *dest)`::
-
- Finds and parses the value into a boolean value, for the configuration
- variable `key` respecting keywords like "true" and "false". Integer
- values are converted into true/false values (when they are non-zero or
- zero, respectively). Other values cause a die(). If parsing is successful,
- stores the value of the parsed result in `dest` and returns 0. When the
- configuration variable `key` is not found, returns 1 without touching
- `dest`.
-
-`int git_config_get_bool_or_int(const char *key, int *is_bool, int *dest)`::
-
- Similar to `git_config_get_bool`, except that integers are copied as-is,
- and the `is_bool` flag is unset.
-
-`int git_config_get_maybe_bool(const char *key, int *dest)`::
-
- Similar to `git_config_get_bool`, except that it returns -1 on error
- rather than dying.
-
-`int git_config_get_string_const(const char *key, const char **dest)`::
-
- Allocates and copies the retrieved string into the `dest` parameter for
- the configuration variable `key`; if a NULL string is given, prints an
- error message and returns -1. When the configuration variable `key` is
- not found, returns 1 without touching `dest`.
-
-`int git_config_get_string(const char *key, char **dest)`::
-
- Similar to `git_config_get_string_const`, except that retrieved value
- copied into the `dest` parameter is a mutable string.
-
-`int git_config_get_pathname(const char *key, const char **dest)`::
-
- Similar to `git_config_get_string`, but expands `~` or `~user` into
- the user's home directory when found at the beginning of the path.
-
-`git_die_config(const char *key, const char *err, ...)`::
-
- First prints the error message specified by the caller in `err` and then
- dies printing the line number and the file name of the highest priority
- value for the configuration variable `key`.
-
-`void git_die_config_linenr(const char *key, const char *filename, int linenr)`::
-
- Helper function which formats the die error message according to the
- parameters entered. Used by `git_die_config()`. It can be used by callers
- handling `git_config_get_value_multi()` to print the correct error message
- for the desired value.
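-
-For illustration (the keys and the defaults here are arbitrary):
-
--------------
-const char *editor;
-int abbrev = 7; /* default, kept when the key is absent */
-
-if (!git_config_get_value("core.editor", &editor))
-	printf("using editor %s\n", editor);
-git_config_get_int("core.abbrev", &abbrev);
--------------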
-
-See test-config.c for usage examples.
-
-Value Parsing Helpers
----------------------
-
-To aid in parsing string values, the config API provides callbacks with
-a number of helper functions, including:
-
-`git_config_int`::
-Parse the string to an integer, including unit factors. Dies on error;
-otherwise, returns the parsed result.
-
-`git_config_ulong`::
-Identical to `git_config_int`, but for unsigned longs.
-
-`git_config_bool`::
-Parse a string into a boolean value, respecting keywords like "true" and
-"false". Integer values are converted into true/false values (when they
-are non-zero or zero, respectively). Other values cause a die(). If
-parsing is successful, the return value is the result.
-
-`git_config_bool_or_int`::
-Same as `git_config_bool`, except that integers are returned as-is, and
-the `is_bool` flag is unset.
-
-`git_parse_maybe_bool`::
-Same as `git_config_bool`, except that it returns -1 on error rather
-than dying.
-
-`git_config_string`::
-Allocates and copies the value string into the `dest` parameter; if no
-string is given, prints an error message and returns -1.
-
-`git_config_pathname`::
-Similar to `git_config_string`, but expands `~` or `~user` into the
-user's home directory when found at the beginning of the path.
-
-Include Directives
-------------------
-
-By default, the config parser does not respect include directives.
-However, a caller can use the special `git_config_include` wrapper
-callback to support them. To do so, you simply wrap your "real" callback
-function and data pointer in a `struct config_include_data`, and pass
-the wrapper to the regular config-reading functions. For example:
-
--------------------------------------------
-int read_file_with_include(const char *file, config_fn_t fn, void *data)
-{
-	struct config_include_data inc = CONFIG_INCLUDE_INIT;
-	inc.fn = fn;
-	inc.data = data;
-	return git_config_from_file(git_config_include, file, &inc);
-}
--------------------------------------------
-
-`git_config` respects includes automatically. The lower-level
-`git_config_from_file` does not.
-
-Custom Configsets
------------------
-
-A `config_set` can be used to construct an in-memory cache for
-config-like files that the caller specifies (i.e., files like `.gitmodules`,
-`~/.gitconfig` etc.). For example,
-
----------------------------------------
-struct config_set gm_config;
-int b;
-
-git_configset_init(&gm_config);
-/* we add config files to the config_set */
-git_configset_add_file(&gm_config, ".gitmodules");
-git_configset_add_file(&gm_config, ".gitmodules_alt");
-
-if (!git_configset_get_bool(&gm_config, "submodule.frotz.ignore", &b)) {
-	/* hack hack hack */
-}
-
-/* when we are done with the configset */
-git_configset_clear(&gm_config);
-----------------------------------------
-
-The configset API provides functions for the above-mentioned workflow, including:
-
-`void git_configset_init(struct config_set *cs)`::
-
- Initializes the config_set `cs`.
-
-`int git_configset_add_file(struct config_set *cs, const char *filename)`::
-
- Parses the file and adds the variable-value pairs to the `config_set`,
- dies if there is an error in parsing the file. Returns 0 on success, or
- -1 if the file does not exist or is inaccessible. When the function
- returns -1, the caller has to decide whether to free the incomplete
- configset or continue using it.
-
-`int git_configset_get_value(struct config_set *cs, const char *key, const char **value)`::
-
- Finds the highest-priority value for the configuration variable `key`
- and config set `cs`, stores the pointer to it in `value` and returns 0.
- When the configuration variable `key` is not found, returns 1 without
- touching `value`. The caller should not free or modify `value`, as it
- is owned by the cache.
-
-`const struct string_list *git_configset_get_value_multi(struct config_set *cs, const char *key)`::
-
- Finds and returns the value list, sorted in order of increasing
- priority, for the configuration variable `key` and config set `cs`.
- When the configuration variable `key` is not found, returns NULL. The
- caller should not free or modify the returned pointer, as it is owned
- by the cache.
-
-`void git_configset_clear(struct config_set *cs)`::
-
- Clears `config_set` structure, removes all saved variable-value pairs.
-
-In addition to the above functions, the `config_set` API provides type-specific
-functions in the vein of `git_config_get_int` and family, but with an extra
-parameter: a pointer to `struct config_set`.
-They all behave similarly to the `git_config_get*()` family described in
-"Querying For Specific Variables" above.
-
-Writing Config Files
---------------------
-
-Git gives multiple entry points in the Config API to write config values to
-files, namely `git_config_set_in_file` and `git_config_set`, which write to
-a specific config file or to `.git/config` respectively. They both take a
-key/value pair as parameters.
-In the end they both call `git_config_set_multivar_in_file`, which takes five
-parameters:
-
-- the name of the file, as a string, to which key/value pairs will be written.
-
-- the name of key, as a string. This is in canonical "flat" form: the section,
- subsection, and variable segments will be separated by dots, and the section
- and variable segments will be all lowercase.
- E.g., `core.ignorecase`, `diff.SomeType.textconv`.
-
-- the value of the variable, as a string. If the value is NULL, the
- matching key will be removed from the config file.
-
-- the value regex, as a string. Key/value pairs whose value does not
- match this regex are disregarded.
-
-- a multi_replace value, as an int. If it is zero, nothing or only one
- matching key/value pair is replaced; otherwise all matching key/value
- pairs (regardless of how many) are removed before the new pair is
- written.
-
-It returns 0 on success.
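-
-For illustration (a minimal sketch; error handling is elided):
-
--------------
-/* write to .git/config */
-git_config_set("core.ignorecase", "true");
-
-/* write to a caller-specified file */
-git_config_set_in_file(".gitmodules", "submodule.frotz.path", "frotz");
--------------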
-
-Also, there are functions `git_config_rename_section` and
-`git_config_rename_section_in_file` with parameters `old_name` and `new_name`
-for renaming or removing sections in the config files. If NULL is passed
-through the `new_name` parameter, the section will be removed from the
-config file.
diff --git a/Documentation/technical/api-credentials.txt b/Documentation/technical/api-credentials.txt
deleted file mode 100644
index 75368f26ca..0000000000
--- a/Documentation/technical/api-credentials.txt
+++ /dev/null
@@ -1,271 +0,0 @@
-credentials API
-===============
-
-The credentials API provides an abstracted way of gathering username and
-password credentials from the user (even though credentials in the wider
-world can take many forms, in this document the word "credential" always
-refers to a username and password pair).
-
-This document describes two interfaces: the C API that the credential
-subsystem provides to the rest of Git, and the protocol that Git uses to
-communicate with system-specific "credential helpers". If you are
-writing Git code that wants to look up or prompt for credentials, see
-the section "C API" below. If you want to write your own helper, see
-the section on "Credential Helpers" below.
-
-Typical setup
--------------
-
-------------
-+-----------------------+
-| Git code (C)          |--- to server requiring --->
-|                       |        authentication
-|.......................|
-| C credential API      |--- prompt ---> User
-+-----------------------+
-	^      |
-	| pipe |
-	|      v
-+-----------------------+
-| Git credential helper |
-+-----------------------+
-------------
-
-The Git code (typically a remote-helper) will call the C API to obtain
-credential data like a login/password pair (credential_fill). The
-API will itself call a remote helper (e.g. "git credential-cache" or
-"git credential-store") that may retrieve credential data from a
-store. If the credential helper cannot find the information, the C API
-will prompt the user. Then, the caller of the API takes care of
-contacting the server, and does the actual authentication.
-
-C API
------
-
-The credential C API is meant to be called by Git code which needs to
-acquire or store a credential. It is centered around an object
-representing a single credential and provides three basic operations:
-fill (acquire credentials by calling helpers and/or prompting the user),
-approve (mark a credential as successfully used so that it can be stored
-for later use), and reject (mark a credential as unsuccessful so that it
-can be erased from any persistent storage).
-
-Data Structures
-~~~~~~~~~~~~~~~
-
-`struct credential`::
-
- This struct represents a single username/password combination
- along with any associated context. All string fields should be
- heap-allocated (or NULL if they are not known or not applicable).
- The meaning of the individual context fields is the same as
- their counterparts in the helper protocol; see the section below
- for a description of each field.
-+
-The `helpers` member of the struct is a `string_list` of helpers. Each
-string specifies an external helper which will be run, in order, to
-either acquire or store credentials. See the section on credential
-helpers below. This list is filled in by the API functions
-according to the corresponding configuration variables before
-consulting helpers, so there usually is no need for a caller to
-modify the helpers field at all.
-+
-This struct should always be initialized with `CREDENTIAL_INIT` or
-`credential_init`.
-
-
-Functions
-~~~~~~~~~
-
-`credential_init`::
-
- Initialize a credential structure, setting all fields to empty.
-
-`credential_clear`::
-
- Free any resources associated with the credential structure,
- returning it to a pristine initialized state.
-
-`credential_fill`::
-
- Instruct the credential subsystem to fill the username and
- password fields of the passed credential struct by first
- consulting helpers, then asking the user. After this function
- returns, the username and password fields of the credential are
- guaranteed to be non-NULL. If an error occurs, the function will
- die().
-
-`credential_reject`::
-
- Inform the credential subsystem that the provided credentials
- have been rejected. This will cause the credential subsystem to
- notify any helpers of the rejection (which allows them, for
- example, to purge the invalid credentials from storage). It
- will also free() the username and password fields of the
- credential and set them to NULL (readying the credential for
- another call to `credential_fill`). Any errors from helpers are
- ignored.
-
-`credential_approve`::
-
- Inform the credential subsystem that the provided credentials
- were successfully used for authentication. This will cause the
- credential subsystem to notify any helpers of the approval, so
- that they may store the result to be used again. Any errors
- from helpers are ignored.
-
-`credential_from_url`::
-
- Parse a URL into broken-down credential fields.
-
-Example
-~~~~~~~
-
-The example below shows how the functions of the credential API could be
-used to login to a fictitious "foo" service on a remote host:
-
------------------------------------------------------------------------
-int foo_login(struct foo_connection *f)
-{
-	int status;
-	/*
-	 * Create a credential with some context; we don't yet know the
-	 * username or password.
-	 */
-
-	struct credential c = CREDENTIAL_INIT;
-	c.protocol = xstrdup("foo");
-	c.host = xstrdup(f->hostname);
-
-	/*
-	 * Fill in the username and password fields by contacting
-	 * helpers and/or asking the user. The function will die if it
-	 * fails.
-	 */
-	credential_fill(&c);
-
-	/*
-	 * Otherwise, we have a username and password. Try to use it.
-	 */
-	status = send_foo_login(f, c.username, c.password);
-	switch (status) {
-	case FOO_OK:
-		/* It worked. Store the credential for later use. */
-		credential_approve(&c);
-		break;
-	case FOO_BAD_LOGIN:
-		/* Erase the credential from storage so we don't try it
-		 * again. */
-		credential_reject(&c);
-		break;
-	default:
-		/*
-		 * Some other error occurred. We don't know if the
-		 * credential is good or bad, so report nothing to the
-		 * credential subsystem.
-		 */
-	}
-
-	/* Free any associated resources. */
-	credential_clear(&c);
-
-	return status;
-}
------------------------------------------------------------------------
-
-
-Credential Helpers
-------------------
-
-Credential helpers are programs executed by Git to fetch or save
-credentials from and to long-term storage (where "long-term" is simply
-longer than a single Git process; e.g., credentials may be stored
-in-memory for a few minutes, or indefinitely on disk).
-
-Each helper is specified by a single string in the configuration
-variable `credential.helper` (and others, see linkgit:git-config[1]).
-The string is transformed by Git into a command to be executed using
-these rules:
-
- 1. If the helper string begins with "!", it is considered a shell
- snippet, and everything after the "!" becomes the command.
-
- 2. Otherwise, if the helper string begins with an absolute path, the
- verbatim helper string becomes the command.
-
- 3. Otherwise, the string "git credential-" is prepended to the helper
- string, and the result becomes the command.
-
-The resulting command then has an "operation" argument appended to it
-(see below for details), and the result is executed by the shell.
-
-Here are some example specifications:
-
-----------------------------------------------------
-# run "git credential-foo"
-foo
-
-# same as above, but pass an argument to the helper
-foo --bar=baz
-
-# the arguments are parsed by the shell, so use shell
-# quoting if necessary
-foo --bar="whitespace arg"
-
-# you can also use an absolute path, which will not use the git wrapper
-/path/to/my/helper --with-arguments
-
-# or you can specify your own shell snippet
-!f() { echo "password=`cat $HOME/.secret`"; }; f
-----------------------------------------------------
-
-Generally speaking, rule (3) above is the simplest for users to specify.
-Authors of credential helpers should make an effort to assist their
-users by naming their program "git-credential-$NAME", and putting it in
-the $PATH or $GIT_EXEC_PATH during installation, which will allow a user
-to enable it with `git config credential.helper $NAME`.
-
-When a helper is executed, it will have one "operation" argument
-appended to its command line, which is one of:
-
-`get`::
-
- Return a matching credential, if any exists.
-
-`store`::
-
- Store the credential, if applicable to the helper.
-
-`erase`::
-
- Remove a matching credential, if any, from the helper's storage.
-
-The details of the credential will be provided on the helper's stdin
-stream. The exact format is the same as the input/output format of the
-`git credential` plumbing command (see the section `INPUT/OUTPUT
-FORMAT` in linkgit:git-credential[1] for a detailed specification).
-
-For a `get` operation, the helper should produce a list of attributes
-on stdout in the same format. A helper is free to produce a subset, or
-even no values at all if it has nothing useful to provide. Any provided
-attributes will overwrite those already known about by Git. If a helper
-outputs a `quit` attribute with a value of `true` or `1`, no further
-helpers will be consulted, nor will the user be prompted (if no
-credential has been provided, the operation will then fail).
-
-For a `store` or `erase` operation, the helper's output is ignored.
-If it fails to perform the requested operation, it may complain to
-stderr to inform the user. If it does not support the requested
-operation (e.g., a read-only store), it should silently ignore the
-request.
-
-If a helper receives any other operation, it should silently ignore the
-request. This leaves room for future operations to be added (older
-helpers will just ignore the new requests).
-
-See also
---------
-
-linkgit:gitcredentials[7]
-
-linkgit:git-config[1] (See configuration variables `credential.*`)
diff --git a/Documentation/technical/api-diff.txt b/Documentation/technical/api-diff.txt
deleted file mode 100644
index 30fc0e9c93..0000000000
--- a/Documentation/technical/api-diff.txt
+++ /dev/null
@@ -1,174 +0,0 @@
-diff API
-========
-
-The diff API is for programs that compare two sets of files (e.g. two
-trees, one tree and the index) and present the found difference in
-various ways. The calling program is responsible for feeding the API
-pairs of files, one from the "old" set and the corresponding one from
-"new" set, that are different. The library called through this API is
-called diffcore, and is responsible for two things.
-
-* finding total rewrites (`-B`), renames (`-M`) and copies (`-C`), and
- changes that touch a string (`-S`), as specified by the caller.
-
-* outputting the differences in various formats, as specified by the
- caller.
-
-Calling sequence
-----------------
-
-* Prepare `struct diff_options` to record the set of diff options, and
- then call `repo_diff_setup()` to initialize this structure. This
- sets up the vanilla default.
-
-* Fill in the options structure to specify desired output format, rename
- detection, etc. `diff_opt_parse()` can be used to parse options given
- from the command line in a way consistent with existing git-diff
- family of programs.
-
-* Call `diff_setup_done()`; this inspects the options set up so far for
-  internal consistency and makes any necessary tweaks to them (e.g. if
-  textual patch output was asked for, recursive behaviour is turned on);
-  the `set_default` callback in `diff_options` can be used to tweak this
-  further.
-
-* As you find different pairs of files, call `diff_change()` to feed
- modified files, `diff_addremove()` to feed created or deleted files,
- or `diff_unmerge()` to feed a file whose state is 'unmerged' to the
- API. These are thin wrappers to a lower-level `diff_queue()` function
- that is flexible enough to record any of these kinds of changes.
-
-* Once you finish feeding the pairs of files, call `diffcore_std()`.
- This will tell the diffcore library to go ahead and do its work.
-
-* Calling `diff_flush()` will produce the output.
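-
-A skeletal sketch of this sequence (the output format chosen is
-illustrative, and the pair-feeding step is elided):
-
--------------
-struct diff_options opts;
-
-repo_diff_setup(the_repository, &opts);
-opts.output_format = DIFF_FORMAT_PATCH;
-diff_setup_done(&opts);
-
-/* ... feed pairs with diff_change(), diff_addremove(), ... */
-
-diffcore_std(&opts);
-diff_flush(&opts);
--------------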
-
-
-Data structures
----------------
-
-* `struct diff_filespec`
-
-This is the internal representation for a single file (blob). It
-records the blob object name (if known -- for a work tree file it
-typically is the null, all-zero SHA-1), filemode and pathname. This is
-what `diff_addremove()`, `diff_change()` and `diff_unmerge()` synthesize
-and feed to the `diff_queue()` function.
-
-* `struct diff_filepair`
-
-This records a pair of `struct diff_filespec`; the filespec for a file
-in the "old" set (i.e. preimage) is called `one`, and the filespec for a
-file in the "new" set (i.e. postimage) is called `two`. A change that
-represents file creation has NULL in `one`, and file deletion has NULL
-in `two`.
-
-A `filepair` starts out pointing at `one` and `two` that are for the same
-filename, but `diffcore_std()` can break pairs and match component
-filespecs with other filespecs from a different filepair to form a new
-filepair. This is called 'rename detection'.
-
-* `struct diff_queue`
-
-This is a collection of filepairs. Notable members are:
-
-`queue`::
-
- An array of pointers to `struct diff_filepair`. This
- dynamically grows as you add filepairs;
-
-`alloc`::
-
- The allocated size of the `queue` array;
-
-`nr`::
-
- The number of elements in the `queue` array.
-
-
-* `struct diff_options`
-
-This describes the set of options with which the calling program wants
-to affect the operation of the diffcore library.
-
-Notable members are:
-
-`output_format`::
- The output format used when `diff_flush()` is run.
-
-`context`::
- Number of context lines to generate in patch output.
-
-`break_opt`, `detect_rename`, `rename_score`, `rename_limit`::
- Affects the way the detection logic for complete rewrites, renames
- and copies works.
-
-`abbrev`::
- Number of hexdigits to abbreviate raw format output to.
-
-`pickaxe`::
- A constant string (which can, and typically does, contain newlines to
- look for a block of text rather than a single line) used to filter out
- filepairs whose preimage and postimage contain the same number of
- occurrences of the string.
-
-`flags`::
- This is mostly a collection of boolean options that affects the
- operation, but some do not have anything to do with the diffcore
- library.
-
-`touched_flags`::
- Records whether a flag has been changed due to user request
- (rather than just set/unset by default).
-
-`set_default`::
- Callback which allows tweaking the options in diff_setup_done().
-
-BINARY, TEXT;;
- Affects how a file that is seemingly binary is treated.
-
-FULL_INDEX;;
- Tells the patch output format not to use abbreviated object
- names on the "index" lines.
-
-FIND_COPIES_HARDER;;
- Tells the diffcore library that the caller is feeding unchanged
- filepairs to allow copies from unmodified files to be detected.
-
-COLOR_DIFF;;
- Output should be colored.
-
-COLOR_DIFF_WORDS;;
- Output is a colored word-diff.
-
-NO_INDEX;;
- Tells diff-files that the input is not tracked files but files
- in random locations on the filesystem.
-
-ALLOW_EXTERNAL;;
- Tells the output routine that it is OK to call a user-specified patch
- output routine. Plumbing disables this to ensure stable output.
-
-QUIET;;
- Do not show any output.
-
-REVERSE_DIFF;;
- Tells the library that the calling program is feeding the
- filepairs reversed; `one` is two, and `two` is one.
-
-EXIT_WITH_STATUS;;
- For communication between the calling program and the options
- parser; tell the calling program to signal the presence of
- difference using program exit code.
-
-HAS_CHANGES;;
- Internal; used for optimization to see if there is any change.
-
-SILENT_ON_REMOVE;;
- Affects if diff-files shows removed files.
-
-RECURSIVE, TREE_IN_RECURSIVE;;
- Tells whether the tree traversal done by tree-diff should recursively
- descend into a pair of tree objects that differ between the preimage
- and postimage sets.
-
-(JC)
diff --git a/Documentation/technical/api-directory-listing.txt b/Documentation/technical/api-directory-listing.txt
deleted file mode 100644
index 5abb8e8b1f..0000000000
--- a/Documentation/technical/api-directory-listing.txt
+++ /dev/null
@@ -1,130 +0,0 @@
-directory listing API
-=====================
-
-The directory listing API is used to enumerate paths in the work tree,
-optionally taking `.git/info/exclude` and `.gitignore` files per
-directory into account.
-
-Data structure
---------------
-
-`struct dir_struct` structure is used to pass directory traversal
-options to the library and to record the paths discovered. A single
-`struct dir_struct` is used regardless of whether or not the traversal
-recursively descends into subdirectories.
-
-The notable options are:
-
-`exclude_per_dir`::
-
- The name of the file to be read in each directory for excluded
- files (typically `.gitignore`).
-
-`flags`::
-
- A bit-field of options:
-
-`DIR_SHOW_IGNORED`:::
-
- Return just ignored files in `entries[]`, not untracked
- files. This flag is mutually exclusive with
- `DIR_SHOW_IGNORED_TOO`.
-
-`DIR_SHOW_IGNORED_TOO`:::
-
- Similar to `DIR_SHOW_IGNORED`, but return ignored files in
- `ignored[]` in addition to untracked files in
- `entries[]`. This flag is mutually exclusive with
- `DIR_SHOW_IGNORED`.
-
-`DIR_KEEP_UNTRACKED_CONTENTS`:::
-
- Only has meaning if `DIR_SHOW_IGNORED_TOO` is also set; if this is set, the
- untracked contents of untracked directories are also returned in
- `entries[]`.
-
-`DIR_SHOW_IGNORED_TOO_MODE_MATCHING`:::
-
- Only has meaning if `DIR_SHOW_IGNORED_TOO` is also set; if
- this is set, returns ignored files and directories that match
- an exclude pattern. If a directory matches an exclude pattern,
- then the directory is returned and the contained paths are
- not. A directory that does not match an exclude pattern will
- not be returned even if all of its contents are ignored. In
- this case, the contents are returned as individual entries.
-+
-If this is set, files and directories that explicitly match an ignore
-pattern are reported. Implicitly ignored directories (directories that
-do not match an ignore pattern, but whose contents are all ignored)
-are not reported; instead, all of the contents are reported.
-
-`DIR_COLLECT_IGNORED`:::
-
- Special mode for git-add. Return ignored files in `ignored[]` and
- untracked files in `entries[]`. Only returns ignored files that match
- pathspec exactly (no wildcards). Does not recurse into ignored
- directories.
-
-`DIR_SHOW_OTHER_DIRECTORIES`:::
-
- Include a directory that is not tracked.
-
-`DIR_HIDE_EMPTY_DIRECTORIES`:::
-
- Do not include a directory that is not tracked and is empty.
-
-`DIR_NO_GITLINKS`:::
-
- If set, recurse into a directory that looks like a Git
- directory. Otherwise it is shown as a directory.
-
-The result of the enumeration is left in these fields:
-
-`entries[]`::
-
- An array of `struct dir_entry`, each element of which describes
- a path.
-
-`nr`::
-
- The number of members in `entries[]` array.
-
-`alloc`::
-
- Internal use; keeps track of allocation of `entries[]` array.
-
-`ignored[]`::
-
- An array of `struct dir_entry`, used for ignored paths with the
- `DIR_SHOW_IGNORED_TOO` and `DIR_COLLECT_IGNORED` flags.
-
-`ignored_nr`::
-
- The number of members in `ignored[]` array.
-
-Calling sequence
-----------------
-
-Note: the index may be consulted for `.gitignore` files whose entries
-are marked CE_SKIP_WORKTREE. If you want to exclude files, make sure you
-have loaded the index first.
-
-* Prepare `struct dir_struct dir` and clear it with `memset(&dir, 0,
- sizeof(dir))`.
-
-* To add a single exclude pattern, call `add_exclude_list()` and then
- `add_exclude()`.
-
-* To add patterns from a file (e.g. `.git/info/exclude`), call
- `add_excludes_from_file()` , and/or set `dir.exclude_per_dir`. A
- short-hand function `setup_standard_excludes()` can be used to set
- up the standard set of exclude settings.
-
-* Set options described in the Data Structure section above.
-
-* Call `read_directory()`.
-
-* Use `dir.entries[]`.
-
-* Call `clear_directory()` when the contained elements are no longer in use.
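-
-As a rough sketch of the sequence above (the exact `read_directory()`
-signature has changed across versions; this follows the older
-four-argument form, and the flag chosen is illustrative):
-
--------------
-struct dir_struct dir;
-int i;
-
-memset(&dir, 0, sizeof(dir));
-dir.flags |= DIR_SHOW_OTHER_DIRECTORIES;
-setup_standard_excludes(&dir);
-
-read_directory(&dir, "", 0, NULL);
-for (i = 0; i < dir.nr; i++)
-	printf("%s\n", dir.entries[i]->name);
-clear_directory(&dir);
--------------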
-
-(JC)
diff --git a/Documentation/technical/api-error-handling.txt b/Documentation/technical/api-error-handling.txt
index ceeedd485c..8be4f4d0d6 100644
--- a/Documentation/technical/api-error-handling.txt
+++ b/Documentation/technical/api-error-handling.txt
@@ -1,8 +1,11 @@
Error reporting in git
======================
-`die`, `usage`, `error`, and `warning` report errors of various
-kinds.
+`BUG`, `die`, `usage`, `error`, and `warning` report errors of
+various kinds.
+
+- `BUG` is for failed internal assertions that should never happen,
+ i.e. a bug in git itself.
- `die` is for fatal application errors. It prints a message to
the user and exits with status 128.
@@ -20,6 +23,9 @@ kinds.
without running into too many problems. Like `error`, it
returns -1 after reporting the situation to the caller.
+These reports will be logged via the trace2 facility. See the "error"
+event in link:api-trace2.txt[trace2 API].
+
Customizable error handlers
---------------------------
diff --git a/Documentation/technical/api-gitattributes.txt b/Documentation/technical/api-gitattributes.txt
deleted file mode 100644
index 45f0df600f..0000000000
--- a/Documentation/technical/api-gitattributes.txt
+++ /dev/null
@@ -1,154 +0,0 @@
-gitattributes API
-=================
-
-The gitattributes mechanism gives a uniform way to associate various
-attributes with sets of paths.
-
-
-Data Structure
---------------
-
-`struct git_attr`::
-
- An attribute is an opaque object that is identified by its name.
- Pass the name to `git_attr()` function to obtain the object of
- this type. The internal representation of this structure is
- of no interest to the calling programs. The name of the
- attribute can be retrieved by calling `git_attr_name()`.
-
-`struct attr_check_item`::
-
- This structure represents one attribute and its value.
-
-`struct attr_check`::
-
- This structure represents a collection of `attr_check_item`.
- It is passed to the `git_check_attr()` function, specifying the
- attributes to check, and receives their values.
-
-
-Attribute Values
-----------------
-
-An attribute for a path can be in one of four states: Set, Unset,
-Unspecified or set to a string, and the `.value` member of `struct
-attr_check_item` records it. There are three macros to check these:
-
-`ATTR_TRUE()`::
-
- Returns true if the attribute is Set for the path.
-
-`ATTR_FALSE()`::
-
- Returns true if the attribute is Unset for the path.
-
-`ATTR_UNSET()`::
-
- Returns true if the attribute is Unspecified for the path.
-
-If none of the above returns true, `.value` member points at a string
-value of the attribute for the path.
-
-
-Querying Specific Attributes
-----------------------------
-
-* Prepare `struct attr_check` using the `attr_check_initl()`
- function, enumerating the names of attributes whose values you are
- interested in, terminated with a NULL pointer. Alternatively, an
- empty `struct attr_check` can be prepared by calling
- `attr_check_alloc()` function and then attributes you want to
- ask about can be added to it with `attr_check_append()`
- function.
-
-* Call `git_check_attr()` to check the attributes for the path.
-
-* Inspect the `attr_check` structure to see how each of the
- attributes in the array is defined for the path.
-
-
-Example
--------
-
-To see how attributes "crlf" and "ident" are set for different paths.
-
-. Prepare a `struct attr_check` with two elements (because
- we are checking two attributes):
-
-------------
-static struct attr_check *check;
-static void setup_check(void)
-{
-	if (check)
-		return; /* already done */
-	check = attr_check_initl("crlf", "ident", NULL);
-}
-------------
-
-. Call `git_check_attr()` with the prepared `struct attr_check`:
-
-------------
-	const char *path;
-
-	setup_check();
-	git_check_attr(path, check);
-------------
-
-. Act on `.value` member of the result, left in `check->items[]`:
-
-------------
-	const char *value = check->items[0].value;
-
-	if (ATTR_TRUE(value)) {
-		The attribute is Set, by listing only the name of the
-		attribute in the gitattributes file for the path.
-	} else if (ATTR_FALSE(value)) {
-		The attribute is Unset, by listing the name of the
-		attribute prefixed with a dash - for the path.
-	} else if (ATTR_UNSET(value)) {
-		The attribute is neither set nor unset for the path.
-	} else if (!strcmp(value, "input")) {
-		If none of ATTR_TRUE(), ATTR_FALSE(), or ATTR_UNSET() is
-		true, the value is a string set in the gitattributes
-		file for the path by saying "attr=value".
-	} else if (... other check using value as string ...) {
-		...
-	}
-------------
-
-To see how attributes in argv[] are set for different paths, only
-the first step in the above would be different.
-
-------------
-static struct attr_check *check;
-static void setup_check(const char **argv)
-{
-	check = attr_check_alloc();
-	while (*argv) {
-		struct git_attr *attr = git_attr(*argv);
-		attr_check_append(check, attr);
-		argv++;
-	}
-}
-------------
-
-
-Querying All Attributes
------------------------
-
-To get the values of all attributes associated with a file:
-
-* Prepare an empty `attr_check` structure by calling
- `attr_check_alloc()`.
-
-* Call `git_all_attrs()`, which populates the `attr_check`
- with the attributes attached to the path.
-
-* Iterate over the `attr_check.items[]` array to examine
- the attribute names and values. The name of the attribute
- described by an `attr_check.items[]` object can be retrieved via
- `git_attr_name(check->items[i].attr)`. (Please note that no items
- will be returned for unset attributes, so `ATTR_UNSET()` will return
- false for all returned `attr_check.items[]` objects.)
-
-* Free the `attr_check` struct by calling `attr_check_free()`.
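-
-A hedged sketch of that iteration (the exact `git_all_attrs()` signature
-has varied across versions; shown here without an index parameter):
-
--------------
-struct attr_check *check = attr_check_alloc();
-int i;
-
-git_all_attrs(path, check);
-for (i = 0; i < check->nr; i++) {
-	const char *value = check->items[i].value;
-
-	/* Set/Unset are special markers; see "Attribute Values" above */
-	if (ATTR_TRUE(value))
-		value = "[set]";
-	else if (ATTR_FALSE(value))
-		value = "[unset]";
-	printf("%s: %s\n",
-	       git_attr_name(check->items[i].attr), value);
-}
-attr_check_free(check);
--------------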
diff --git a/Documentation/technical/api-grep.txt b/Documentation/technical/api-grep.txt
deleted file mode 100644
index a69cc8964d..0000000000
--- a/Documentation/technical/api-grep.txt
+++ /dev/null
@@ -1,8 +0,0 @@
-grep API
-========
-
-Talk about <grep.h>, things like:
-
-* grep_buffer()
-
-(JC)
diff --git a/Documentation/technical/api-history-graph.txt b/Documentation/technical/api-history-graph.txt
deleted file mode 100644
index d0d1707c8c..0000000000
--- a/Documentation/technical/api-history-graph.txt
+++ /dev/null
@@ -1,173 +0,0 @@
-history graph API
-=================
-
-The graph API is used to draw a text-based representation of the commit
-history. The API generates the graph in a line-by-line fashion.
-
-Functions
----------
-
-Core functions:
-
-* `graph_init()` creates a new `struct git_graph`
-
-* `graph_update()` moves the graph to a new commit.
-
-* `graph_next_line()` outputs the next line of the graph into a strbuf. It
- does not add a terminating newline.
-
-* `graph_padding_line()` outputs a line of vertical padding in the graph. It
- is similar to `graph_next_line()`, but is guaranteed to never print the line
- containing the current commit. Where `graph_next_line()` would print the
- commit line next, `graph_padding_line()` prints a line that simply extends
- all branch lines downwards one row, leaving their positions unchanged.
-
-* `graph_is_commit_finished()` determines if the graph has output all lines
- necessary for the current commit. If `graph_update()` is called before all
- lines for the current commit have been printed, the next call to
- `graph_next_line()` will output an ellipsis, to indicate that a portion of
- the graph was omitted.
-
-The following utility functions are wrappers around `graph_next_line()` and
-`graph_is_commit_finished()`. They always print the output to stdout.
-They can all be called with a NULL graph argument, in which case no graph
-output will be printed.
-
-* `graph_show_commit()` calls `graph_next_line()` and
- `graph_is_commit_finished()` until one of them returns non-zero. This prints
- all graph lines up to, and including, the line containing this commit.
- Output is printed to stdout. The last line printed does not contain a
- terminating newline.
-
-* `graph_show_oneline()` calls `graph_next_line()` and prints the result to
- stdout. The line printed does not contain a terminating newline.
-
-* `graph_show_padding()` calls `graph_padding_line()` and prints the result to
- stdout. The line printed does not contain a terminating newline.
-
-* `graph_show_remainder()` calls `graph_next_line()` until
- `graph_is_commit_finished()` returns non-zero. Output is printed to stdout.
- The last line printed does not contain a terminating newline. Returns 1 if
- output was printed, and 0 if no output was necessary.
-
-* `graph_show_strbuf()` prints the specified strbuf to stdout, prefixing all
- lines but the first with a graph line. The caller is responsible for
- ensuring graph output for the first line has already been printed to stdout.
- (This can be done with `graph_show_commit()` or `graph_show_oneline()`.) If
- a NULL graph is supplied, the strbuf is printed as-is.
-
-* `graph_show_commit_msg()` is similar to `graph_show_strbuf()`, but it also
- prints the remainder of the graph, if more lines are needed after the strbuf
- ends. It is better than directly calling `graph_show_strbuf()` followed by
- `graph_show_remainder()` since it properly handles buffers that do not end in
- a terminating newline. The output printed by `graph_show_commit_msg()` will
- end in a newline if and only if the strbuf ends in a newline.
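-
-As an illustrative sketch of combining `graph_show_commit()` and
-`graph_show_remainder()` (the `oneline` buffer is the caller's own
-summary text, not part of this API):
-
--------------
-graph_show_commit(graph);   /* graph lines up to the commit line */
-printf(" %s\n", oneline);   /* caller's own one-line summary */
-if (graph_show_remainder(graph))
-	putchar('\n');
--------------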
-
-Data structure
---------------
-`struct git_graph` is an opaque data type used to store the current graph
-state.
-
-Calling sequence
-----------------
-
-* Create a `struct git_graph` by calling `graph_init()`. When using the
- revision walking API, this is done automatically by `setup_revisions()` if
- the '--graph' option is supplied.
-
-* Use the revision walking API to walk through a group of contiguous commits.
- The `get_revision()` function automatically calls `graph_update()` each time
- it is invoked.
-
-* For each commit, call `graph_next_line()` repeatedly, until
- `graph_is_commit_finished()` returns non-zero. Each call to
- `graph_next_line()` will output a single line of the graph. The resulting
- lines will not contain any newlines. `graph_next_line()` returns 1 if the
- resulting line contains the current commit, or 0 if this is merely a line
- needed to adjust the graph before or after the current commit. This return
- value can be used to determine where to print the commit summary information
- alongside the graph output.
-
-Limitations
------------
-
-* `graph_update()` must be called with commits in topological order. It should
- not be called on a commit if it has already been invoked with an ancestor of
- that commit, or the graph output will be incorrect.
-
-* `graph_update()` must be called on a contiguous group of commits. If
- `graph_update()` is called on a particular commit, it should later be called
- on all parents of that commit. Parents must not be skipped, or the graph
- output will appear incorrect.
-+
-`graph_update()` may be used on a pruned set of commits only if the parent list
-has been rewritten so as to include only ancestors from the pruned set.
-
-* The graph API does not currently support reverse commit ordering. In
- order to implement reverse ordering, the graphing API needs an
- (efficient) mechanism to find the children of a commit.
-
-Sample usage
-------------
-
-------------
-struct commit *commit;
-struct git_graph *graph = graph_init(opts);
-
-while ((commit = get_revision(opts)) != NULL) {
-	while (!graph_is_commit_finished(graph))
-	{
-		struct strbuf sb;
-		int is_commit_line;
-
-		strbuf_init(&sb, 0);
-		is_commit_line = graph_next_line(graph, &sb);
-		fputs(sb.buf, stdout);
-
-		if (is_commit_line)
-			log_tree_commit(opts, commit);
-		else
-			putchar(opts->diffopt.line_termination);
-
-		strbuf_release(&sb);
-	}
-}
-------------
-
-Sample output
--------------
-
-The following is an example of the output from the graph API. This output does
-not include any commit summary information--callers are responsible for
-outputting that information, if desired.
-
-------------
-*
-*
-*
-|\
-* |
-| | *
-| \ \
-|  \ \
-*-. \ \
-|\ \ \ \
-| | * | |
-| | | | | *
-| | | | | *
-| | | | | *
-| | | | | |\
-| | | | | | *
-| * | | | | |
-| | | | | *  \
-| | | | | |\  |
-| | | | * | | |
-| | | | * | | |
-* | | | | | | |
-| |/ / / / / /
-|/| / / / / /
-* | | | | | |
-|/ / / / / /
-* | | | | |
-| | | | | *
-| | | | |/
-| | | | *
-------------
diff --git a/Documentation/technical/api-merge.txt b/Documentation/technical/api-merge.txt
index 9dc1bed768..487d4d83ff 100644
--- a/Documentation/technical/api-merge.txt
+++ b/Documentation/technical/api-merge.txt
@@ -28,77 +28,9 @@ and `diff.c` for examples.
* `struct ll_merge_options`
-This describes the set of options with which the calling program wants
-to affect the operation of a low-level (single file) merge. Some options:
-
-`virtual_ancestor`::
- Behave as though this were part of a merge between common
- ancestors in a recursive merge.
- If a helper program is specified by the
- `[merge "<driver>"] recursive` configuration, it will
- be used (see linkgit:gitattributes[5]).
-
-`variant`::
- Resolve local conflicts automatically in favor
- of one side or the other (as in 'git merge-file'
- `--ours`/`--theirs`/`--union`). Can be `0`,
- `XDL_MERGE_FAVOR_OURS`, `XDL_MERGE_FAVOR_THEIRS`, or
- `XDL_MERGE_FAVOR_UNION`.
-
-`renormalize`::
- Resmudge and clean the "base", "theirs" and "ours" files
- before merging. Use this when the merge is likely to have
- overlapped with a change in smudge/clean or end-of-line
- normalization rules.
+Check ll-merge.h for details.
Low-level (single file) merge
-----------------------------
-`ll_merge`::
-
- Perform a three-way single-file merge in core. This is
- a thin wrapper around `xdl_merge` that takes the path and
- any merge backend specified in `.gitattributes` or
- `.git/info/attributes` into account. Returns 0 for a
- clean merge.
-
-Calling sequence:
-
-* Prepare a `struct ll_merge_options` to record options.
- If you have no special requests, skip this and pass `NULL`
- as the `opts` parameter to use the default options.
-
-* Allocate an mmbuffer_t variable for the result.
-
-* Allocate and fill variables with the file's original content
- and two modified versions (using `read_mmfile`, for example).
-
-* Call `ll_merge()`.
-
-* Read the merged content from `result_buf.ptr` and `result_buf.size`.
-
-* Release buffers when finished. A simple
- `free(ancestor.ptr); free(ours.ptr); free(theirs.ptr);
- free(result_buf.ptr);` will do.
-
-If the modifications do not merge cleanly, `ll_merge` will return a
-nonzero value and `result_buf` will generally include a description of
-the conflict bracketed by markers such as the traditional `<<<<<<<`
-and `>>>>>>>`.
-
-The `ancestor_label`, `our_label`, and `their_label` parameters are
-used to label the different sides of a conflict if the merge driver
-supports this.
-
-Everything else
----------------
-
-Talk about <merge-recursive.h> and merge_file():
-
- - merge_trees() to merge with rename detection
- - merge_recursive() for ancestor consolidation
- - try_merge_command() for other strategies
- - conflict format
- - merge options
-
-(Daniel, Miklos, Stephan, JC)
+Check ll-merge.h for details.
diff --git a/Documentation/technical/api-object-access.txt b/Documentation/technical/api-object-access.txt
deleted file mode 100644
index 5b29622d00..0000000000
--- a/Documentation/technical/api-object-access.txt
+++ /dev/null
@@ -1,15 +0,0 @@
-object access API
-=================
-
-Talk about <sha1-file.c> and <object.h> family, things like
-
-* read_sha1_file()
-* read_object_with_reference()
-* has_sha1_file()
-* write_sha1_file()
-* pretend_object_file()
-* lookup_{object,commit,tag,blob,tree}
-* parse_{object,commit,tag,blob,tree}
-* Use of object flags
-
-(JC, Shawn, Daniel, Dscho, Linus)
diff --git a/Documentation/technical/api-oid-array.txt b/Documentation/technical/api-oid-array.txt
deleted file mode 100644
index c97428c2c3..0000000000
--- a/Documentation/technical/api-oid-array.txt
+++ /dev/null
@@ -1,90 +0,0 @@
-oid-array API
-==============
-
-The oid-array API provides storage and manipulation of sets of object
-identifiers. The emphasis is on storage and processing efficiency,
-making them suitable for large lists. Note that the ordering of items is
-not preserved over some operations.
-
-Data Structures
----------------
-
-`struct oid_array`::
-
- A single array of object IDs. This should be initialized by
- assignment from `OID_ARRAY_INIT`. The `oid` member contains
- the actual data. The `nr` member contains the number of items in
- the set. The `alloc` and `sorted` members are used internally,
- and should not be needed by API callers.
-
-Functions
----------
-
-`oid_array_append`::
- Add an item to the set. The object ID will be placed at the end of
- the array (but note that some operations below may lose this
- ordering).
-
-`oid_array_lookup`::
- Perform a binary search of the array for a specific object ID.
- If found, returns the offset (in number of elements) of the
- object ID. If not found, returns a negative integer. If the array
- is not sorted, this function has the side effect of sorting it.
-
-`oid_array_clear`::
- Free all memory associated with the array and return it to the
- initial, empty state.
-
-`oid_array_for_each`::
- Iterate over each element of the list, executing the callback
- function for each one. Does not sort the list, so any custom
- hash order is retained. If the callback returns a non-zero
- value, the iteration ends immediately and the callback's
- return is propagated; otherwise, 0 is returned.
-
-`oid_array_for_each_unique`::
- Iterate over each unique element of the list in sorted order,
- but otherwise behave like `oid_array_for_each`. If the array
- is not sorted, this function has the side effect of sorting
- it.
-
-`oid_array_filter`::
- Apply the callback function `want` to each entry in the array,
- retaining only the entries for which the function returns true.
- Preserve the order of the entries that are retained.
-
-Examples
---------
-
------------------------------------------
-int print_callback(const struct object_id *oid,
- void *data)
-{
- printf("%s\n", oid_to_hex(oid));
- return 0; /* always continue */
-}
-
-void some_func(void)
-{
- struct oid_array hashes = OID_ARRAY_INIT;
- struct object_id oid;
-
- /* Read objects into our set */
- while (read_object_from_stdin(oid.hash))
- oid_array_append(&hashes, &oid);
-
- /* Check if some objects are in our set */
- while (read_object_from_stdin(oid.hash)) {
- if (oid_array_lookup(&hashes, &oid) >= 0)
- printf("it's in there!\n");
-
- /*
- * Print the unique set of objects. We could also have
- * avoided adding duplicate objects in the first place,
- * but we would end up re-sorting the array repeatedly.
- * Instead, this will sort once and then skip duplicates
- * in linear time.
- */
- oid_array_for_each_unique(&hashes, print_callback, NULL);
-}
------------------------------------------
diff --git a/Documentation/technical/api-parse-options.txt b/Documentation/technical/api-parse-options.txt
index 2b036d7838..5a60bbfa7f 100644
--- a/Documentation/technical/api-parse-options.txt
+++ b/Documentation/technical/api-parse-options.txt
@@ -198,8 +198,10 @@ There are some macros to easily define options:
The filename will be prefixed by passing the filename along with
the prefix argument of `parse_options()` to `prefix_filename()`.
-`OPT_ARGUMENT(long, description)`::
+`OPT_ARGUMENT(long, &int_var, description)`::
Introduce a long-option argument that will be kept in `argv[]`.
+ If this option was seen, `int_var` will be set to one (except
+ if a `NULL` pointer was passed).
`OPT_NUMBER_CALLBACK(&var, description, func_ptr)`::
Recognize numerical options like -123 and feed the integer as
@@ -230,9 +232,9 @@ There are some macros to easily define options:
will be overwritten, so this should only be used for options where
the last one specified on the command line wins.
-`OPT_PASSTHRU_ARGV(short, long, &argv_array_var, arg_str, description, flags)`::
+`OPT_PASSTHRU_ARGV(short, long, &strvec_var, arg_str, description, flags)`::
Introduce an option where all instances of it on the command-line will
- be reconstructed into an argv_array. This is useful when you need to
+ be reconstructed into a strvec. This is useful when you need to
pass the command-line option, which can be specified multiple times,
to another command.
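
A hypothetical options table combining these two macros might look
like this (the option names and variables here are illustrative, not
taken from any real command):

------------
static int keep_flag;
static struct strvec child_args = STRVEC_INIT;

static struct option options[] = {
	OPT_ARGUMENT("keep-me", &keep_flag,
		     N_("leave --keep-me in argv[] for a later stage")),
	OPT_PASSTHRU_ARGV(0, "extra-arg", &child_args, N_("arg"),
			  N_("collect each --extra-arg for the sub-command"), 0),
	OPT_END()
};
------------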
diff --git a/Documentation/technical/api-quote.txt b/Documentation/technical/api-quote.txt
deleted file mode 100644
index e8a1bce94e..0000000000
--- a/Documentation/technical/api-quote.txt
+++ /dev/null
@@ -1,10 +0,0 @@
-quote API
-=========
-
-Talk about <quote.h>, things like
-
-* sq_quote and unquote
-* c_style quote and unquote
-* quoting for foreign languages
-
-(JC)
diff --git a/Documentation/technical/api-ref-iteration.txt b/Documentation/technical/api-ref-iteration.txt
deleted file mode 100644
index 46c3d5c355..0000000000
--- a/Documentation/technical/api-ref-iteration.txt
+++ /dev/null
@@ -1,78 +0,0 @@
-ref iteration API
-=================
-
-
-Iteration of refs is done by using an iterate function which will call a
-callback function for every ref. The callback function has this
-signature:
-
- int handle_one_ref(const char *refname, const struct object_id *oid,
- int flags, void *cb_data);
-
-There are different kinds of iterate functions which all take a
-callback of this type. The callback is then called for each found ref
-until the callback returns nonzero. The returned value is then also
-returned by the iterate function.
-
-Iteration functions
--------------------
-
-* `head_ref()` just iterates the head ref.
-
-* `for_each_ref()` iterates all refs.
-
-* `for_each_ref_in()` iterates all refs which have a defined prefix and
- strips that prefix from the passed variable refname.
-
-* `for_each_tag_ref()`, `for_each_branch_ref()`, `for_each_remote_ref()`,
- `for_each_replace_ref()` iterate refs from the respective area.
-
-* `for_each_glob_ref()` iterates all refs that match the specified glob
- pattern.
-
-* `for_each_glob_ref_in()` is the previous and `for_each_ref_in()` combined.
-
-* Use `refs_` API for accessing submodules. The submodule ref store could
- be obtained with `get_submodule_ref_store()`.
-
-* `for_each_rawref()` can be used to learn about broken refs and symrefs.
-
-* `for_each_reflog()` iterates each reflog file.
-
-Submodules
-----------
-
-If you want to iterate the refs of a submodule you first need to add the
-submodules object database. You can do this by a code-snippet like
-this:
-
- const char *path = "path/to/submodule";
- if (add_submodule_odb(path))
- 	die("Error: submodule '%s' not populated.", path);
-
-`add_submodule_odb()` will return zero on success. If you do not do
-this, you will get an error for each ref, complaining that it does not
-point to a valid object.
-
-Note: As a side effect of this you cannot safely assume that all
-objects you look up came from the superproject; all submodule objects
-become available in the same way as the superproject's objects.
-
-Example:
---------
-
-----
-static int handle_remote_ref(const char *refname,
-		const struct object_id *oid, int flags, void *cb_data)
-{
- struct strbuf *output = cb_data;
- strbuf_addf(output, "%s\n", refname);
- return 0;
-}
-
-...
-
- struct strbuf output = STRBUF_INIT;
- for_each_remote_ref(handle_remote_ref, &output);
- printf("%s", output.buf);
-----
diff --git a/Documentation/technical/api-remote.txt b/Documentation/technical/api-remote.txt
deleted file mode 100644
index f10941b2e8..0000000000
--- a/Documentation/technical/api-remote.txt
+++ /dev/null
@@ -1,127 +0,0 @@
-Remotes configuration API
-=========================
-
-The API in remote.h gives access to the configuration related to
-remotes. It handles all three configuration mechanisms historically
-and currently used by Git, and presents the information in a uniform
-fashion. Note that the code also handles plain URLs without any
-configuration, giving them just the default information.
-
-struct remote
--------------
-
-`name`::
-
- The user's nickname for the remote
-
-`url`::
-
- An array of all of the url_nr URLs configured for the remote
-
-`pushurl`::
-
- An array of all of the pushurl_nr push URLs configured for the remote
-
-`push`::
-
- An array of refspecs configured for pushing, with
- push_refspec being the literal strings, and push_refspec_nr
- being the quantity.
-
-`fetch`::
-
- An array of refspecs configured for fetching, with
- fetch_refspec being the literal strings, and fetch_refspec_nr
- being the quantity.
-
-`fetch_tags`::
-
- The setting for whether to fetch tags (as a separate rule from
- the configured refspecs); -1 means never to fetch tags, 0
- means to auto-follow tags based on the default heuristic, 1
- means to always auto-follow tags, and 2 means to fetch all
- tags.
-
-`receivepack`, `uploadpack`::
-
- The configured helper programs to run on the remote side, for
- Git-native protocols.
-
-`http_proxy`::
-
- The proxy to use for curl (http, https, ftp, etc.) URLs.
-
-`http_proxy_authmethod`::
-
- The method used for authenticating against `http_proxy`.
-
-struct remotes can be found by name with remote_get(), and iterated
-through with for_each_remote(). remote_get(NULL) will return the
-default remote, given the current branch and configuration.
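-
-For example, to print every URL of every configured remote (a sketch;
-`for_each_remote()` passes each `struct remote *` to the callback):
-
-------------
-static int show_urls(struct remote *remote, void *priv)
-{
-	int i;
-
-	for (i = 0; i < remote->url_nr; i++)
-		printf("%s: %s\n", remote->name, remote->url[i]);
-	return 0;
-}
-
-...
-	for_each_remote(show_urls, NULL);
-------------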
-
-struct refspec
---------------
-
-A struct refspec holds the parsed interpretation of a refspec. If it
-will force updates (starts with a '+'), force is true. If it is a
-pattern (sides end with '*'), pattern is true. src and dst are the
-two sides (including '*' characters if present); if there is only one
-side, it is src, and dst is NULL; if sides exist but are empty (i.e.,
-the refspec either starts or ends with ':'), the corresponding side is
-"".
-
-An array of strings can be parsed into an array of struct refspecs
-using parse_fetch_refspec() or parse_push_refspec().
-
-remote_find_tracking(), given a remote and a struct refspec with
-either src or dst filled out, will fill out the other such that the
-result is in the "fetch" specification for the remote (note that this
-evaluates patterns and returns a single result).
-
-struct branch
--------------
-
-Note that this may end up moving to branch.h
-
-struct branch holds the configuration for a branch. It can be looked
-up with branch_get(name) for "refs/heads/{name}", or with
-branch_get(NULL) for HEAD.
-
-It contains:
-
-`name`::
-
- The short name of the branch.
-
-`refname`::
-
- The full path for the branch ref.
-
-`remote_name`::
-
- The name of the remote listed in the configuration.
-
-`merge_name`::
-
- An array of the "merge" lines in the configuration.
-
-`merge`::
-
- An array of the struct refspecs used for the merge lines. That
- is, merge[i]->dst is a local tracking ref which should be
- merged into this branch by default.
-
-`merge_nr`::
-
- The number of merge configurations.
-
-branch_has_merge_config() returns true if the given branch has any
-merge configuration.
-
-Other stuff
------------
-
-There is other stuff in remote.h that is related, in general, to the
-process of interacting with remotes.
-
-(Daniel Barkalow)
diff --git a/Documentation/technical/api-revision-walking.txt b/Documentation/technical/api-revision-walking.txt
deleted file mode 100644
index 03f9ea6ac4..0000000000
--- a/Documentation/technical/api-revision-walking.txt
+++ /dev/null
@@ -1,72 +0,0 @@
-revision walking API
-====================
-
-The revision walking API offers functions to build a list of revisions
-and then iterate over that list.
-
-Calling sequence
-----------------
-
-The walking API has a fixed calling sequence: first you need to
-initialize a rev_info structure, then add revisions to control what kind
-of revision list you want to get, and finally you can iterate over the
-revision list.
-
-Functions
----------
-
-`repo_init_revisions`::
-
- Initialize a rev_info structure with default values. The third
- parameter may be NULL, or it can be a prefix path, in which case the `.prefix`
- variable will be set to it. This is typically the first function you
- want to call when you want to deal with a revision list. After calling
- this function, you are free to customize options, like set
- `.ignore_merges` to 0 if you don't want to ignore merges, and so on. See
- `revision.h` for a complete list of available options.
-
-`add_pending_object`::
-
- This function can be used if you want to add commit objects as revision
- information. You can use the `UNINTERESTING` object flag to indicate if
- you want to include or exclude the given commit (and commits reachable
- from the given commit) from the revision list.
-+
-NOTE: If you have the commits as a string list then you probably want to
-use setup_revisions(), instead of parsing each string and using this
-function.
-
-`setup_revisions`::
-
- Parse revision information, filling in the `rev_info` structure, and
- removing the used arguments from the argument list. Returns the number
- of arguments left that weren't recognized, which are also moved to the
- head of the argument list. The last parameter is used in case no
- parameter is given by the first two arguments.
-
-`prepare_revision_walk`::
-
- Prepares the rev_info structure for a walk. You should check if it
- returns any error (non-zero return code) and if it does not, you can
- start using get_revision() to do the iteration.
-
-`get_revision`::
-
- Takes a pointer to a `rev_info` structure and iterates over it,
- returning a `struct commit *` each time you call it. The end of the
- revision list is indicated by returning a NULL pointer.
-
-`reset_revision_walk`::
-
- Reset the flags used by the revision walking api. You can use
- this to do multiple sequential revision walks.
-
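-Putting the calling sequence together, a bare-bones walk might look
-like this (error handling trimmed; `the_repository` is assumed to be
-in scope):
-
-------------
-struct rev_info rev;
-struct commit *commit;
-
-repo_init_revisions(the_repository, &rev, NULL);
-setup_revisions(argc, argv, &rev, NULL);
-if (prepare_revision_walk(&rev))
-	die("revision walk setup failed");
-while ((commit = get_revision(&rev)))
-	printf("%s\n", oid_to_hex(&commit->object.oid));
-------------
-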
-Data structures
----------------
-
-Talk about <revision.h>, things like:
-
-* two diff_options, one for path limiting, another for output;
-* remaining functions;
-
-(Linus, JC, Dscho)
diff --git a/Documentation/technical/api-run-command.txt b/Documentation/technical/api-run-command.txt
deleted file mode 100644
index 8bf3e37f53..0000000000
--- a/Documentation/technical/api-run-command.txt
+++ /dev/null
@@ -1,264 +0,0 @@
-run-command API
-===============
-
-The run-command API offers a versatile tool to run sub-processes with
-redirected input and output as well as with a modified environment
-and an alternate current directory.
-
-A similar API offers the capability to run a function asynchronously,
-which is primarily used to capture the output that the function
-produces in the caller in order to process it.
-
-
-Functions
----------
-
-`child_process_init`::
-
- Initialize a struct child_process variable.
-
-`start_command`::
-
- Start a sub-process. Takes a pointer to a `struct child_process`
- that specifies the details and returns pipe FDs (if requested).
- See below for details.
-
-`finish_command`::
-
- Wait for the completion of a sub-process that was started with
- start_command().
-
-`run_command`::
-
- A convenience function that encapsulates a sequence of
- start_command() followed by finish_command(). Takes a pointer
- to a `struct child_process` that specifies the details.
-
-`run_command_v_opt`, `run_command_v_opt_cd_env`::
-
- Convenience functions that encapsulate a sequence of
- start_command() followed by finish_command(). The argument argv
- specifies the program and its arguments. The argument opt is zero
- or more of the flags `RUN_COMMAND_NO_STDIN`, `RUN_GIT_CMD`,
- `RUN_COMMAND_STDOUT_TO_STDERR`, or `RUN_SILENT_EXEC_FAILURE`
- that correspond to the members .no_stdin, .git_cmd,
- .stdout_to_stderr, .silent_exec_failure of `struct child_process`.
- The argument dir corresponds to the member .dir. The argument env
- corresponds to the member .env.
-
-`child_process_clear`::
-
- Release the memory associated with the struct child_process.
- Most users of the run-command API don't need to call this
- function explicitly because `start_command` invokes it on
- failure and `finish_command` calls it automatically already.
-
-The functions above do the following:
-
-. If a system call failed, errno is set and -1 is returned. A diagnostic
- is printed.
-
-. If the program was not found, then -1 is returned and errno is set to
- ENOENT; a diagnostic is printed only if .silent_exec_failure is 0.
-
-. Otherwise, the program is run. If it terminates regularly, its exit
- code is returned. No diagnostic is printed, even if the exit code is
- non-zero.
-
-. If the program terminated due to a signal, then the return value is the
- signal number + 128, ie. the same value that a POSIX shell's $? would
- report. A diagnostic is printed.
-
-
-`start_async`::
-
- Run a function asynchronously. Takes a pointer to a `struct
- async` that specifies the details and returns a set of pipe FDs
- for communication with the function. See below for details.
-
-`finish_async`::
-
- Wait for the completion of an asynchronous function that was
- started with start_async().
-
-`run_hook`::
-
- Run a hook.
- The first argument is a pathname to an index file, or NULL
- if the hook uses the default index file or no index is needed.
- The second argument is the name of the hook.
- The further arguments correspond to the hook arguments.
- The last argument has to be NULL to terminate the arguments list.
- If the hook does not exist or is not executable, the return
- value will be zero.
- If it is executable, the hook will be executed and the exit
- status of the hook is returned.
- On execution, .stdout_to_stderr and .no_stdin will be set.
- (See below.)
-
-
-Data structures
----------------
-
-* `struct child_process`
-
-This describes the arguments, redirections, and environment of a
-command to run in a sub-process.
-
-The caller:
-
-1. allocates and clears (using child_process_init() or
- CHILD_PROCESS_INIT) a struct child_process variable;
-2. initializes the members;
-3. calls start_command();
-4. processes the data;
-5. closes file descriptors (if necessary; see below);
-6. calls finish_command().
-
-The .argv member is set up as an array of string pointers (NULL
-terminated), of which .argv[0] is the program name to run (usually
-without a path). If the command to run is a git command, set argv[0] to
-the command name without the 'git-' prefix and set .git_cmd = 1.
-
-Note that the ownership of the memory pointed to by .argv stays with the
-caller, but it should survive until `finish_command` completes. If the
-.argv member is NULL, `start_command` will point it at the .args
-`argv_array` (so you may use one or the other, but you must use exactly
-one). The memory in .args will be cleaned up automatically during
-`finish_command` (or during `start_command` when it is unsuccessful).
-
-The members .in, .out, .err are used to redirect stdin, stdout,
-stderr as follows:
-
-. Specify 0 to request no special redirection. No new file descriptor
- is allocated. The child process simply inherits the channel from the
- parent.
-
-. Specify -1 to have a pipe allocated; start_command() replaces -1
- by the pipe FD in the following way:
-
- .in: Returns the writable pipe end into which the caller writes;
- the readable end of the pipe becomes the child's stdin.
-
- .out, .err: Returns the readable pipe end from which the caller
- reads; the writable end of the pipe end becomes child's
- stdout/stderr.
-
- The caller of start_command() must close the returned FDs
- after it has completed reading from/writing to them!
-
-. Specify a file descriptor > 0 to be used by the child:
-
- .in: The FD must be readable; it becomes child's stdin.
- .out: The FD must be writable; it becomes child's stdout.
- .err: The FD must be writable; it becomes child's stderr.
-
- The specified FD is closed by start_command(), even if it fails to
- run the sub-process!
-
-. Special forms of redirection are available by setting these members
- to 1:
-
- .no_stdin, .no_stdout, .no_stderr: The respective channel is
- redirected to /dev/null.
-
- .stdout_to_stderr: stdout of the child is redirected to its
- stderr. This happens after stderr is itself redirected.
- So stdout will follow stderr to wherever it is
- redirected.
-
-To modify the environment of the sub-process, specify an array of
-string pointers (NULL terminated) in .env:
-
-. If the string is of the form "VAR=value", i.e. it contains '='
- the variable is added to the child process's environment.
-
-. If the string does not contain '=', it names an environment
- variable that will be removed from the child process's environment.
-
-If the .env member is NULL, `start_command` will point it at the
-.env_array `argv_array` (so you may use one or the other, but not both).
-The memory in .env_array will be cleaned up automatically during
-`finish_command` (or during `start_command` when it is unsuccessful).
-
-To specify a new initial working directory for the sub-process,
-specify it in the .dir member.
-
-If the program cannot be found, the functions return -1 and set
-errno to ENOENT. Normally, an error message is printed, but if
-.silent_exec_failure is set to 1, no message is printed for this
-special error condition.
-
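-As an illustration of the six steps above, a caller that spawns
-`git status --short` (requesting no pipes, so steps 4 and 5 are
-no-ops) might look like:
-
-------------
-struct child_process child = CHILD_PROCESS_INIT;
-
-child.git_cmd = 1;
-argv_array_push(&child.args, "status");
-argv_array_push(&child.args, "--short");
-if (run_command(&child))
-	die("git status failed");
-------------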
-
-* `struct async`
-
-This describes a function to run asynchronously, whose purpose is
-to produce output that the caller reads.
-
-The caller:
-
-1. allocates and clears (memset(&asy, 0, sizeof(asy));) a
- struct async variable;
-2. initializes .proc and .data;
-3. calls start_async();
-4. communicates with proc through .in and .out;
-5. closes .in and .out;
-6. calls finish_async().
-
-The members .in, .out are used to provide a set of fd's for
-communication between the caller and the callee as follows:
-
-. Specify 0 to have no file descriptor passed. The callee will
- receive -1 in the corresponding argument.
-
-. Specify < 0 to have a pipe allocated; start_async() replaces it
-  with the pipe FD in the following way:
-
- .in: Returns the writable pipe end into which the caller
- writes; the readable end of the pipe becomes the function's
- in argument.
-
- .out: Returns the readable pipe end from which the caller
- reads; the writable end of the pipe becomes the function's
- out argument.
-
- The caller of start_async() must close the returned FDs after it
- has completed reading from/writing to them.
-
-. Specify a file descriptor > 0 to be used by the function:
-
- .in: The FD must be readable; it becomes the function's in.
- .out: The FD must be writable; it becomes the function's out.
-
- The specified FD is closed by start_async(), even if it fails to
- run the function.
-
-The function pointer in .proc has the following signature:
-
- int proc(int in, int out, void *data);
-
-. in, out specify a set of file descriptors from/to which the function
-  must read/write the data that it needs/produces. The function
- *must* close these descriptors before it returns. A descriptor
- may be -1 if the caller did not configure a descriptor for that
- direction.
-
-. data is the value that the caller has specified in the .data member
- of struct async.
-
-. The return value of the function is 0 on success and non-zero
- on failure. If the function indicates failure, finish_async() will
- report failure as well.
-
-
-There are serious restrictions on what the asynchronous function can do
-because this facility is implemented by a thread in the same address
-space on most platforms (when pthreads is available), but by a pipe to
-a forked process otherwise:
-
-. It cannot change the program's state (global variables, environment,
- etc.) in a way that the caller notices; in other words, .in and .out
- are the only communication channels to the caller.
-
-. It must not change the program's state that the caller of the
- facility also uses.
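-
-A sketch of a producer matching the description above (the caller sets
-.out to -1 so that start_async() allocates a pipe, then reads from
-asy.out):
-
-------------
-static int produce(int in, int out, void *data)
-{
-	const char *msg = data;
-
-	write_in_full(out, msg, strlen(msg));
-	close(out);	/* the function must close its descriptors */
-	return 0;
-}
-
-...
-	struct async asy;
-
-	memset(&asy, 0, sizeof(asy));
-	asy.proc = produce;
-	asy.data = (void *)"hello\n";
-	asy.out = -1;	/* ask start_async() for a pipe */
-	if (start_async(&asy))
-		die("cannot start producer");
-	/* ... read from asy.out ... */
-	close(asy.out);
-	if (finish_async(&asy))
-		die("producer failed");
-------------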
diff --git a/Documentation/technical/api-setup.txt b/Documentation/technical/api-setup.txt
deleted file mode 100644
index eb1fa9853e..0000000000
--- a/Documentation/technical/api-setup.txt
+++ /dev/null
@@ -1,47 +0,0 @@
-setup API
-=========
-
-Talk about
-
-* setup_git_directory()
-* setup_git_directory_gently()
-* is_inside_git_dir()
-* is_inside_work_tree()
-* setup_work_tree()
-
-(Dscho)
-
-Pathspec
---------
-
-See glossary-context.txt for the syntax of pathspec. In memory, a
-pathspec set is represented by "struct pathspec" and is prepared by
-parse_pathspec(). This function takes several arguments:
-
-- magic_mask specifies which features are NOT supported by the
- following code. If a user attempts to use such a feature,
- parse_pathspec() can reject it early.
-
-- flags specifies other things that the caller wants parse_pathspec to
- perform.
-
-- prefix and args come from cmd_* functions
-
-parse_pathspec() helps catch unsupported features and reject them
-politely. At a lower level, different pathspec-related functions may
-not support the same set of features. Such pathspec-sensitive
-functions are guarded with GUARD_PATHSPEC(), which will die in an
-unfriendly way when an unsupported feature is requested.
-
-The command designers are supposed to make sure that GUARD_PATHSPEC()
-never dies. They have to make sure all unsupported features are caught
-by parse_pathspec(), not by GUARD_PATHSPEC(). Grepping GUARD_PATHSPEC()
-should give the designers all pathspec-sensitive codepaths and what
-features they support.
-
-A similar process is applied when a new pathspec magic is added. The
-designer lifts the GUARD_PATHSPEC restriction in the functions that
-support the new magic. At the same time (s)he has to make sure this
-new feature will be caught at parse_pathspec() in commands that cannot
-handle the new magic in some cases. Grepping parse_pathspec() should
-help.
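-
-A typical cmd_* caller therefore looks something like this (the flag
-used here is just one plausible choice):
-
-------------
-struct pathspec pathspec;
-
-/* magic_mask of 0: reject nothing; prefix/argv come from cmd_* */
-parse_pathspec(&pathspec, 0, PATHSPEC_PREFER_CWD, prefix, argv);
-...
-clear_pathspec(&pathspec);
-------------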
diff --git a/Documentation/technical/api-sigchain.txt b/Documentation/technical/api-sigchain.txt
deleted file mode 100644
index 9e1189ef01..0000000000
--- a/Documentation/technical/api-sigchain.txt
+++ /dev/null
@@ -1,41 +0,0 @@
-sigchain API
-============
-
-Code often wants to set a signal handler to clean up temporary files or
-other work-in-progress when we die unexpectedly. For multiple pieces of
-code to do this without conflicting, each piece of code must remember
-the old value of the handler and restore it either when:
-
- 1. The work-in-progress is finished, and the handler is no longer
- necessary. The handler should revert to the original behavior
- (either another handler, SIG_DFL, or SIG_IGN).
-
- 2. The signal is received. We should then do our cleanup, then chain
- to the next handler (or die if it is SIG_DFL).
-
-Sigchain is a tiny library for keeping a stack of handlers. Your handler
-and installation code should look something like:
-
-------------------------------------------
- void clean_foo_on_signal(int sig)
- {
- clean_foo();
- sigchain_pop(sig);
- raise(sig);
- }
-
- void other_func()
- {
- sigchain_push_common(clean_foo_on_signal);
- mess_up_foo();
- clean_foo();
- }
-------------------------------------------
-
-Handlers are given the typedef of sigchain_fun. This is the same type
-that is given to signal() or sigaction(). It is perfectly reasonable to
-push SIG_DFL or SIG_IGN onto the stack.
-
-You can sigchain_push and sigchain_pop individual signals. For
-convenience, sigchain_push_common will push the handler onto the stack
-for many common signals.
diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 0000000000..d79ad323e6
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,105 @@
+Simple-IPC API
+==============
+
+The Simple-IPC API is a collection of `ipc_` prefixed library routines
+and a basic communication protocol that allow an IPC-client process to
+send an application-specific IPC-request message to an IPC-server
+process and receive an application-specific IPC-response message.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms. IPC-clients and IPC-servers rendezvous at
+a previously agreed-to application-specific pathname (which is outside
+the scope of this design) that is local to the computer system.
+
+The IPC-server routines within the server application process create a
+thread pool to listen for connections and receive request messages
+from multiple concurrent IPC-clients. When received, these messages
+are dispatched up to the server application callbacks for handling.
+IPC-server routines then incrementally relay responses back to the
+IPC-client.
+
+The IPC-client routines within a client application process connect
+to the IPC-server and send a request message and wait for a response.
+When received, the response is returned to the caller.
+
+For example, the `fsmonitor--daemon` feature will be built as a server
+application on top of the IPC-server library routines. It will have
+threads watching for file system events and a thread pool waiting for
+client connections. Clients, such as `git status`, will request a list
+of file system events since a point in time and the server will
+respond with a list of changed files and directories. The formats of
+the request and response are application-specific; the IPC-client and
+IPC-server routines treat them as opaque byte streams.
+
+
+Comparison with sub-process model
+---------------------------------
+
+The Simple-IPC mechanism differs from the existing `sub-process.c`
+model (Documentation/technical/long-running-process-protocol.txt)
+used by applications like Git-LFS. In the LFS-style sub-process model
+the helper is started by the foreground process, communication happens
+via a pair of file descriptors bound to the stdin/stdout of the
+sub-process, the sub-process only serves the current foreground
+process, and the sub-process exits when the foreground process
+terminates.
+
+In the Simple-IPC model the server is a very long-running service. It
+can service many clients at the same time and has a private socket or
+named pipe connection to each active client. It might be started
+(on-demand) by the current client process or it might have been
+started by a previous client or by the OS at boot time. The server
+process is not associated with a terminal and it persists after
+clients terminate. Clients do not have access to the stdin/stdout of
+the server process and therefore must communicate over sockets or
+named pipes.
+
+
+Server startup and shutdown
+---------------------------
+
+How an application server based upon IPC-server is started is also
+outside the scope of the Simple-IPC design and is a property of the
+application using it. For example, the server might be started or
+restarted during routine maintenance operations, or it might be
+started as a system service during the system boot-up sequence, or it
+might be started on-demand by a foreground Git command when needed.
+
+Similarly, server shutdown is a property of the application using
+the simple-ipc routines. For example, the server might decide to
+shut down when idle or only upon explicit request.
+
+
+Simple-IPC protocol
+-------------------
+
+The Simple-IPC protocol consists of a single request message from the
+client and an optional response message from the server. Both the
+client and server messages are unlimited in length and are terminated
+with a flush packet.
+
+The pkt-line routines (Documentation/technical/protocol-common.txt)
+are used to simplify buffer management during message generation,
+transmission, and reception. A flush packet is used to mark the end
+of the message. This allows the sender to incrementally generate and
+transmit the message. It allows the receiver to incrementally receive
+the message in chunks and to know when it has received the entire
+message.
+
+The actual byte format of the client request and server response
+messages is application-specific. The IPC layer transmits and
+receives them as opaque byte buffers without any concern for the
+content within. It is the job of the calling application layer to
+understand the contents of the request and response messages.
+
+
+Summary
+-------
+
+Conceptually, the Simple-IPC protocol is similar to an HTTP REST
+request. Clients connect, make an application-specific and
+stateless request, receive an application-specific
+response, and disconnect. It is a one round trip facility for
+querying the server. The Simple-IPC routines hide the socket,
+named pipe, and thread pool details and allow the application
+layer to focus on the application at hand.
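+
+As a sketch of the client side (the entry points and option structure
+here follow the general shape of simple-ipc.h, but treat the exact
+names and signatures as assumptions and check the header):
+
+------------
+struct strbuf answer = STRBUF_INIT;
+struct ipc_client_connect_options options =
+	IPC_CLIENT_CONNECT_OPTIONS_INIT;
+
+options.wait_if_busy = 1;	/* retry if the server is busy */
+
+/* the path, request, and response formats are application-specific */
+if (ipc_client_send_command("/path/to/rendezvous", &options,
+			    "my-request", &answer))
+	die("could not send IPC request");
+printf("response: %s\n", answer.buf);
+strbuf_release(&answer);
+------------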
diff --git a/Documentation/technical/api-submodule-config.txt b/Documentation/technical/api-submodule-config.txt
deleted file mode 100644
index fb06089393..0000000000
--- a/Documentation/technical/api-submodule-config.txt
+++ /dev/null
@@ -1,66 +0,0 @@
-submodule config cache API
-==========================
-
-The submodule config cache API allows reading submodule
-configurations/information from specified revisions. Internally
-information is lazily read into a cache that is used to avoid
-unnecessary parsing of the same .gitmodules files. Lookups can be done by
-submodule path or name.
-
-Usage
------
-
-To initialize the cache with configurations from the worktree, the caller
-typically first calls `gitmodules_config()` to read values from the
-worktree .gitmodules file, and then overlays the local git config values
-using `parse_submodule_config_option()` from the config parsing
-infrastructure.
-
-The caller can look up information about submodules by using the
-`submodule_from_path()` or `submodule_from_name()` functions. They return
-a `struct submodule` which contains the values. The API automatically
-initializes and allocates the needed infrastructure on-demand. If the
-caller only wants to look up values from revisions, the initialization
-can be skipped.
-
-If the internal cache might grow too big or when the caller is done with
-the API, all internally cached values can be freed with submodule_free().
-
-Data Structures
----------------
-
-`struct submodule`::
-
- This structure is used to return the information about one
- submodule for a certain revision. It is returned by the lookup
- functions.
-
-Functions
----------
-
-`void submodule_free(struct repository *r)`::
-
- Use this to free the internally cached values.
-
-`int parse_submodule_config_option(const char *var, const char *value)`::
-
- Can be passed to the config parsing infrastructure to parse
- local (worktree) submodule configurations.
-
-`const struct submodule *submodule_from_path(const unsigned char *treeish_name, const char *path)`::
-
- Given a tree-ish in the superproject and a path, return the
- submodule that is bound at the path in the named tree.
-
-`const struct submodule *submodule_from_name(const unsigned char *treeish_name, const char *name)`::
-
- The same as above but lookup by name.
-
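-For example, to inspect the worktree configuration of the submodule
-bound at "lib/foo" (a sketch; per the discussion below, the null_sha1
-entry holds the consolidated worktree values):
-
-------------
-const struct submodule *sm;
-
-sm = submodule_from_path(null_sha1, "lib/foo");
-if (sm)
-	printf("name: %s url: %s\n", sm->name, sm->url);
-------------
-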
-Whenever a submodule configuration is parsed in `parse_submodule_config_option`
-via e.g. `gitmodules_config()`, it will overwrite the null_sha1 entry.
-So in the normal case, when HEAD:.gitmodules is parsed first and then overlaid
-with the repository configuration, the null_sha1 entry contains the local
-configuration of a submodule (e.g. consolidated values from local git
-configuration and the .gitmodules file in the worktree).
-
-For an example usage see test-submodule-config.c.
diff --git a/Documentation/technical/api-trace.txt b/Documentation/technical/api-trace.txt
deleted file mode 100644
index fadb5979c4..0000000000
--- a/Documentation/technical/api-trace.txt
+++ /dev/null
@@ -1,140 +0,0 @@
-trace API
-=========
-
-The trace API can be used to print debug messages to stderr or a file. Trace
-code is inactive unless explicitly enabled by setting `GIT_TRACE*` environment
-variables.
-
-The trace implementation automatically adds `timestamp file:line ... \n` to
-all trace messages. E.g.:
-
-------------
-23:59:59.123456 git.c:312 trace: built-in: git 'foo'
-00:00:00.000001 builtin/foo.c:99 foo: some message
-------------
-
-Data Structures
----------------
-
-`struct trace_key`::
-
- Defines a trace key (or category). The default (for API functions that
- don't take a key) is `GIT_TRACE`.
-+
-E.g. to define a trace key controlled by environment variable `GIT_TRACE_FOO`:
-+
-------------
-static struct trace_key trace_foo = TRACE_KEY_INIT(FOO);
-
-static void trace_print_foo(const char *message)
-{
- trace_printf_key(&trace_foo, "%s", message);
-}
-------------
-+
-Note: don't use `const` as the trace implementation stores internal state in
-the `trace_key` structure.
-
-Functions
----------
-
-`int trace_want(struct trace_key *key)`::
-
- Checks whether the trace key is enabled. Used to prevent expensive
- string formatting before calling one of the printing APIs.
-
-`void trace_disable(struct trace_key *key)`::
-
- Disables tracing for the specified key, even if the environment
- variable was set.
-
-`void trace_printf(const char *format, ...)`::
-`void trace_printf_key(struct trace_key *key, const char *format, ...)`::
-
- Prints a formatted message, similar to printf.
-
-`void trace_argv_printf(const char **argv, const char *format, ...)`::
-
- Prints a formatted message, followed by a quoted list of arguments.
-
-`void trace_strbuf(struct trace_key *key, const struct strbuf *data)`::
-
- Prints the strbuf, without additional formatting (i.e. doesn't
- choke on `%` or even `\0`).
-
-`uint64_t getnanotime(void)`::
-
- Returns nanoseconds since the epoch (01/01/1970), typically used
- for performance measurements.
-+
-Currently there are high precision timer implementations for Linux (using
-`clock_gettime(CLOCK_MONOTONIC)`) and Windows (`QueryPerformanceCounter`).
-Other platforms use `gettimeofday` as time source.
-
-`void trace_performance(uint64_t nanos, const char *format, ...)`::
-`void trace_performance_since(uint64_t start, const char *format, ...)`::
-
- Prints the elapsed time (in nanoseconds), or elapsed time since
- `start`, followed by a formatted message. Enabled via environment
- variable `GIT_TRACE_PERFORMANCE`. Used for manual profiling, e.g.:
-+
-------------
-uint64_t start = getnanotime();
-/* code section to measure */
-trace_performance_since(start, "foobar");
-------------
-+
-------------
-uint64_t t = 0;
-for (;;) {
- /* ignore */
- t -= getnanotime();
- /* code section to measure */
- t += getnanotime();
- /* ignore */
-}
-trace_performance(t, "frotz");
-------------
-
-Bugs & Caveats
---------------
-
-GIT_TRACE_* environment variables can be used to tell Git to show
-trace output to its standard error stream. Git can often spawn a pager
-internally to run its subcommand and send its standard output and
-standard error to it.
-
-Because GIT_TRACE_PERFORMANCE trace is generated only at the very end
-of the program with atexit(), which happens after the pager exits, it
-would not work well if you send its log to the standard error output
-and let Git spawn the pager at the same time.
-
-As a workaround, you can for example use '--no-pager', or set
-GIT_TRACE_PERFORMANCE to another file descriptor which is redirected
-to stderr, or set GIT_TRACE_PERFORMANCE to a file specified by its
-absolute path.
-
-For example, instead of the following command, which by default may not
-print any performance information:
-
-------------
-GIT_TRACE_PERFORMANCE=2 git log -1
-------------
-
-you may want to use:
-
-------------
-GIT_TRACE_PERFORMANCE=2 git --no-pager log -1
-------------
-
-or:
-
-------------
-GIT_TRACE_PERFORMANCE=3 3>&2 git log -1
-------------
-
-or:
-
-------------
-GIT_TRACE_PERFORMANCE=/path/to/log/file git log -1
-------------
diff --git a/Documentation/technical/api-trace2.txt b/Documentation/technical/api-trace2.txt
index 2de565fa3d..037a91cbca 100644
--- a/Documentation/technical/api-trace2.txt
+++ b/Documentation/technical/api-trace2.txt
@@ -22,21 +22,41 @@ Targets are defined using a VTable allowing easy extension to other
formats in the future. This might be used to define a binary format,
for example.
+Trace2 is controlled using `trace2.*` config values in the system and
+global config files and `GIT_TRACE2*` environment variables. Trace2 does
+not read from repo local or worktree config files or respect `-c`
+command line config settings.
+
== Trace2 Targets
Trace2 defines the following set of Trace2 Targets.
Format details are given in a later section.
-`GIT_TR2` (NORMAL)::
+=== The Normal Format Target
+
+The normal format target is a traditional printf format, similar
+to the GIT_TRACE format. This format is enabled with the `GIT_TRACE2`
+environment variable or the `trace2.normalTarget` system or global
+config setting.
+
+For example
- a simple printf format like GIT_TRACE.
-+
------------
-$ export GIT_TR2=~/log.normal
+$ export GIT_TRACE2=~/log.normal
$ git version
git version 2.20.1.155.g426c96fcdb
------------
-+
+
+or
+
+------------
+$ git config --global trace2.normalTarget ~/log.normal
+$ git version
+git version 2.20.1.155.g426c96fcdb
+------------
+
+yields
+
------------
$ cat ~/log.normal
12:28:42.620009 common-main.c:38 version 2.20.1.155.g426c96fcdb
@@ -46,76 +66,85 @@ $ cat ~/log.normal
12:28:42.621250 trace2/tr2_tgt_normal.c:124 atexit elapsed:0.001265 code:0
------------
-`GIT_TR2_PERF` (PERF)::
+=== The Performance Format Target
+
+The performance format target (PERF) is a column-based format to
+replace GIT_TRACE_PERFORMANCE and is suitable for development and
+testing, possibly to complement tools like gprof. This format is
+enabled with the `GIT_TRACE2_PERF` environment variable or the
+`trace2.perfTarget` system or global config setting.
+
+For example
- a column-based format to replace GIT_TRACE_PERFORMANCE suitable for
- development and testing, possibly to complement tools like gprof.
-+
------------
-$ export GIT_TR2_PERF=~/log.perf
+$ export GIT_TRACE2_PERF=~/log.perf
$ git version
git version 2.20.1.155.g426c96fcdb
------------
-+
+
+or
+
+------------
+$ git config --global trace2.perfTarget ~/log.perf
+$ git version
+git version 2.20.1.155.g426c96fcdb
+------------
+
+yields
+
------------
$ cat ~/log.perf
12:28:42.620675 common-main.c:38 | d0 | main | version | | | | | 2.20.1.155.g426c96fcdb
-12:28:42.621001 common-main.c:39 | d0 | main | start | | | | | git version
+12:28:42.621001 common-main.c:39 | d0 | main | start | | 0.001173 | | | git version
12:28:42.621111 git.c:432 | d0 | main | cmd_name | | | | | version (version)
12:28:42.621225 git.c:662 | d0 | main | exit | | 0.001227 | | | code:0
12:28:42.621259 trace2/tr2_tgt_perf.c:211 | d0 | main | atexit | | 0.001265 | | | code:0
------------
-`GIT_TR2_EVENT` (EVENT)::
+=== The Event Format Target
+
+The event format target is a JSON-based format of event data suitable
+for telemetry analysis. This format is enabled with the `GIT_TRACE2_EVENT`
+environment variable or the `trace2.eventTarget` system or global config
+setting.
+
+For example
- a JSON-based format of event data suitable for telemetry analysis.
-+
------------
-$ export GIT_TR2_EVENT=~/log.event
+$ export GIT_TRACE2_EVENT=~/log.event
$ git version
git version 2.20.1.155.g426c96fcdb
------------
-+
-------------
-$ cat ~/log.event
-{"event":"version","sid":"1547659722619736-11614","thread":"main","time":"2019-01-16 17:28:42.620713","file":"common-main.c","line":38,"evt":"1","exe":"2.20.1.155.g426c96fcdb"}
-{"event":"start","sid":"1547659722619736-11614","thread":"main","time":"2019-01-16 17:28:42.621027","file":"common-main.c","line":39,"argv":["git","version"]}
-{"event":"cmd_name","sid":"1547659722619736-11614","thread":"main","time":"2019-01-16 17:28:42.621122","file":"git.c","line":432,"name":"version","hierarchy":"version"}
-{"event":"exit","sid":"1547659722619736-11614","thread":"main","time":"2019-01-16 17:28:42.621236","file":"git.c","line":662,"t_abs":0.001227,"code":0}
-{"event":"atexit","sid":"1547659722619736-11614","thread":"main","time":"2019-01-16 17:28:42.621268","file":"trace2/tr2_tgt_event.c","line":163,"t_abs":0.001265,"code":0}
-------------
-
-== Enabling a Target
-
-A Trace2 Target is enabled when the corresponding environment variable
-(`GIT_TR2`, `GIT_TR2_PERF`, or `GIT_TR2_EVENT`) is set. The following
-values are recognized.
-`0`::
-`false`::
+or
- Disables the target.
-
-`1`::
-`true`::
+------------
+$ git config --global trace2.eventTarget ~/log.event
+$ git version
+git version 2.20.1.155.g426c96fcdb
+------------
- Enables the target and writes stream to `STDERR`.
+yields
-`[2-9]`::
+------------
+$ cat ~/log.event
+{"event":"version","sid":"sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.620713Z","file":"common-main.c","line":38,"evt":"2","exe":"2.20.1.155.g426c96fcdb"}
+{"event":"start","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621027Z","file":"common-main.c","line":39,"t_abs":0.001173,"argv":["git","version"]}
+{"event":"cmd_name","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621122Z","file":"git.c","line":432,"name":"version","hierarchy":"version"}
+{"event":"exit","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621236Z","file":"git.c","line":662,"t_abs":0.001227,"code":0}
+{"event":"atexit","sid":"20190408T191610.507018Z-H9b68c35f-P000059a8","thread":"main","time":"2019-01-16T17:28:42.621268Z","file":"trace2/tr2_tgt_event.c","line":163,"t_abs":0.001265,"code":0}
+------------
- Enables the target and writes to the already opened file descriptor.
+=== Enabling a Target
-`<absolute-pathname>`::
+To enable a target, set the corresponding environment variable or
+system or global config value to one of the following:
- Enables the target, opens and writes to the file in append mode.
+include::../trace2-target-values.txt[]
-`af_unix:[<socket_type>:]<absolute-pathname>`::
-
- Enables the target, opens and writes to a Unix Domain Socket
- (on platforms that support them).
-+
-Socket type can be either `stream` or `dgram`. If the socket type is
-omitted, Git will try both.
+When trace files are written to a target directory, they will be named according
+to the last component of the SID (optionally followed by a counter to avoid
+filename collisions).
== Trace2 API
@@ -149,7 +178,7 @@ describe the simplified forms.
== Public API
-All Trace2 API functions send a messsage to all of the active
+All Trace2 API functions send a message to all of the active
Trace2 Targets. This section describes the set of available
messages.
@@ -159,262 +188,41 @@ purposes.
=== Basic Command Messages
These are concerned with the lifetime of the overall git process.
-
-`void trace2_initialize()`::
-
- Determines if any Trace2 Targets should be enabled and
- initializes the Trace2 facility. This includes starting the
- elapsed time clocks and thread local storage (TLS).
-+
-This function emits a "version" message containing the version of git
-and the Trace2 protocol.
-+
-This function should be called from `main()` as early as possible in
-the life of the process.
-
-`int trace2_is_enabled()`::
-
- Returns 1 if Trace2 is enabled (at least one target is
- active).
-
-`void trace2_cmd_start(int argc, const char **argv)`::
-
- Emits a "start" message containing the process command line
- arguments.
-
-`int trace2_cmd_exit(int exit_code)`::
-
- Emits an "exit" message containing the process exit-code and
- elapsed time.
-+
-Returns the exit-code.
-
-`void trace2_cmd_error(const char *fmt, va_list ap)`::
-
- Emits an "error" message containing a formatted error message.
-
-`void trace2_cmd_path(const char *pathname)`::
-
- Emits a "cmd_path" message with the full pathname of the
- current process.
+e.g.: `void trace2_initialize_clock()`, `void trace2_initialize()`,
+`int trace2_is_enabled()`, `void trace2_cmd_start(int argc, const char **argv)`.
=== Command Detail Messages
These are concerned with describing the specific Git command
after the command line, config, and environment are inspected.
-
-`void trace2_cmd_name(const char *name)`::
-
- Emits a "cmd_name" message with the canonical name of the
- command, for example "status" or "checkout".
-
-`void trace2_cmd_mode(const char *mode)`::
-
- Emits a "cmd_mode" message with a qualifier name to further
- describe the current git command.
-+
-This message is intended to be used with git commands having multiple
-major modes. For example, a "checkout" command can checkout a new
-branch or it can checkout a single file, so the checkout code could
-emit a cmd_mode message of "branch" or "file".
-
-`void trace2_cmd_alias(const char *alias, const char **argv_expansion)`::
-
- Emits an "alias" message containing the alias used and the
- argument expansion.
-
-`void trace2_def_param(const char *parameter, const char *value)`::
-
- Emits a "def_param" message containing a key/value pair.
-+
-This message is intended to report some global aspect of the current
-command, such as a configuration setting or command line switch that
-significantly affects program performance or behavior, such as
-`core.abbrev`, `status.showUntrackedFiles`, or `--no-ahead-behind`.
-
-`void trace2_cmd_list_config()`::
-
- Emits a "def_param" messages for "important" configuration
- settings.
-+
-The environment variable `GIT_TR2_CONFIG_PARAMS` can be set to a
-list of patterns of important configuration settings, for example:
-`core.*,remote.*.url`. This function will iterate over all config
-settings and emit a "def_param" message for each match.
-
-`void trace2_cmd_set_config(const char *key, const char *value)`::
-
- Emits a "def_param" message for a specific configuration
- setting IFF it matches the `GIT_TR2_CONFIG_PARAMS` pattern.
-+
-This is used to hook into `git_config_set()` and catch any
-configuration changes and update a value previously reported by
-`trace2_cmd_list_config()`.
-
-`void trace2_def_repo(struct repository *repo)`::
-
- Registers a repository with the Trace2 layer. Assigns a
- unique "repo-id" to `repo->trace2_repo_id`.
-+
-Emits a "worktree" messages containing the repo-id and the worktree
-pathname.
-+
-Region and data messages (described later) may refer to this repo-id.
-+
-The main/top-level repository will have repo-id value 1 (aka "r1").
-+
-The repo-id field is in anticipation of future in-proc submodule
-repositories.
+e.g.: `void trace2_cmd_name(const char *name)`,
+`void trace2_cmd_mode(const char *mode)`.
=== Child Process Messages
These are concerned with the various spawned child processes,
including shell scripts, git commands, editors, pagers, and hooks.
-`void trace2_child_start(struct child_process *cmd)`::
-
- Emits a "child_start" message containing the "child-id",
- "child-argv", and "child-classification".
-+
-Before calling this, set `cmd->trace2_child_class` to a name
-describing the type of child process, for example "editor".
-+
-This function assigns a unique "child-id" to `cmd->trace2_child_id`.
-This field is used later during the "child_exit" message to associate
-it with the "child_start" message.
-+
-This function should be called before spawning the child process.
-
-`void trace2_child_exit(struct child_process *cmd, int child_exit_code)`::
-
- Emits a "child_exit" message containing the "child-id",
- the child's elapsed time and exit-code.
-+
-The reported elapsed time includes the process creation overhead and
-time spent waiting for it to exit, so it may be slightly longer than
-the time reported by the child itself.
-+
-This function should be called after reaping the child process.
-
-`int trace2_exec(const char *exe, const char **argv)`::
-
- Emits a "exec" message containing the "exec-id" and the
- argv of the new process.
-+
-This function should be called before calling one of the `exec()`
-variants, such as `execvp()`.
-+
-This function returns a unique "exec-id". This value is used later
-if the exec() fails and a "exec-result" message is necessary.
-
-`void trace2_exec_result(int exec_id, int error_code)`::
-
- Emits a "exec_result" message containing the "exec-id"
- and the error code.
-+
-On Unix-based systems, `exec()` does not return if successful.
-This message is used to indicate that the `exec()` failed and
-that the current program is continuing.
+e.g.: `void trace2_child_start(struct child_process *cmd)`.
=== Git Thread Messages
These messages are concerned with Git thread usage.
-`void trace2_thread_start(const char *thread_name)`::
-
- Emits a "thread_start" message.
-+
-The `thread_name` field should be a descriptive name, such as the
-unique name of the thread-proc. A unique "thread-id" will be added
-to the name to uniquely identify thread instances.
-+
-Region and data messages (described later) may refer to this thread
-name.
-+
-This function must be called by the thread-proc of the new thread
-(so that TLS data is properly initialized) and not by the caller
-of `pthread_create()`.
-
-`void trace2_thread_exit()`::
-
- Emits a "thread_exit" message containing the thread name
- and the thread elapsed time.
-+
-This function must be called by the thread-proc before it returns
-(so that the correct TLS data is used and cleaned up). It should
-not be called by the caller of `pthread_join()`.
+e.g.: `void trace2_thread_start(const char *thread_name)`.
=== Region and Data Messages
These are concerned with recording performance data
-over regions or spans of code.
-
-`void trace2_region_enter(const char *category, const char *label, const struct repository *repo)`::
-
-`void trace2_region_enter_printf(const char *category, const char *label, const struct repository *repo, const char *fmt, ...)`::
-
-`void trace2_region_enter_printf_va(const char *category, const char *label, const struct repository *repo, const char *fmt, va_list ap)`::
-
- Emits a thread-relative "region_enter" message with optional
- printf string.
-+
-This function pushes a new region nesting stack level on the current
-thread and starts a clock for the new stack frame.
-+
-The `category` field is an arbitrary category name used to classify
-regions by feature area, such as "status" or "index". At this time
-it is only just printed along with the rest of the message. It may
-be used in the future to filter messages.
-+
-The `label` field is an arbitrary label used to describe the activity
-being started, such as "read_recursive" or "do_read_index".
-+
-The `repo` field, if set, will be used to get the "repo-id", so that
-recursive operations can be attributed to the correct repository.
-
-`void trace2_region_leave(const char *category, const char *label, const struct repository *repo)`::
-
-`void trace2_region_leave_printf(const char *category, const char *label, const struct repository *repo, const char *fmt, ...)`::
-
-`void trace2_region_leave_printf_va(const char *category, const char *label, const struct repository *repo, const char *fmt, va_list ap)`::
-
- Emits a thread-relative "region_leave" message with optional
- printf string.
-+
-This function pops the region nesting stack on the current thread
-and reports the elapsed time of the stack frame.
-+
-The `category`, `label`, and `repo` fields are the same as above.
-The `category` and `label` do not need to match the corresponding
-"region_enter" message, but it makes the data stream easier to
-understand.
-
-`void trace2_data_string(const char *category, const struct repository *repo, const char *key, const char * value)`::
-
-`void trace2_data_intmax(const char *category, const struct repository *repo, const char *key, intmax value)`::
+over regions or spans of code, e.g.:
+`void trace2_region_enter(const char *category, const char *label, const struct repository *repo)`.
-`void trace2_data_json(const char *category, const struct repository *repo, const char *key, const struct json_writer *jw)`::
-
- Emits a region- and thread-relative "data" or "data_json" message.
-+
-This is a key/value pair message containing information about the
-current thread, region stack, and repository. This could be used
-to print the number of files in a directory during a multi-threaded
-recursive tree walk.
-
-`void trace2_printf(const char *fmt, ...)`::
-
-`void trace2_printf_va(const char *fmt, va_list ap)`::
-
- Emits a region- and thread-relative "printf" message.
+Refer to trace2.h for details about all trace2 functions.
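+
+For instance, timing an index read as a region (a sketch; `istate` and
+`path` are assumed to be in scope) might look like:
+
+------------
+trace2_region_enter("index", "do_read_index", the_repository);
+ret = do_read_index(istate, path, 0);
+trace2_region_leave("index", "do_read_index", the_repository);
+------------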
== Trace2 Target Formats
=== NORMAL Format
-NORMAL format is enabled when the `GIT_TR2` environment variable is
-set.
-
Events are written as lines of the form:
------------
@@ -431,8 +239,8 @@ Events are written as lines of the form:
Note that this may contain embedded LF or CRLF characters that are
not escaped, so the event may spill across multiple lines.
-If `GIT_TR2_BRIEF` is true, the `time`, `filename`, and `line` fields
-are omitted.
+If `GIT_TRACE2_BRIEF` or `trace2.normalBrief` is true, the `time`, `filename`,
+and `line` fields are omitted.
This target is intended to be more of a summary (like GIT_TRACE) and
less detailed than the other targets. It ignores thread, region, and
@@ -440,9 +248,6 @@ data messages, for example.
=== PERF Format
-PERF format is enabled when the `GIT_TR2_PERF` environment variable
-is set.
-
Events are written as lines of the form:
------------
@@ -502,8 +307,8 @@ This field is in anticipation of in-proc submodules in the future.
15:33:33.532712 wt-status.c:2331 | d0 | main | region_leave | r1 | 0.127568 | 0.001504 | status | label:print
------------
-If `GIT_TR2_PERF_BRIEF` is true, the `time`, `file`, and `line`
-fields are omitted.
+If `GIT_TRACE2_PERF_BRIEF` or `trace2.perfBrief` is true, the `time`, `file`,
+and `line` fields are omitted.
------------
d0 | main | region_leave | r1 | 0.011717 | 0.009122 | index | label:preload
@@ -514,9 +319,6 @@ during development and is quite noisy.
=== EVENT Format
-EVENT format is enabled when the `GIT_TR2_EVENT` environment
-variable is set.
-
Each event is a JSON object containing multiple key/value pairs
written as a single line and followed by a LF.
@@ -534,11 +336,11 @@ The following key/value pairs are common to all events:
------------
{
"event":"version",
- "sid":"1547659722619736-11614",
+ "sid":"20190408T191827.272759Z-H9b68c35f-P00003510",
"thread":"main",
- "time":"2019-01-16 17:28:42.620713",
+ "time":"2019-04-08T19:18:27.282761Z",
"file":"common-main.c",
- "line":38,
+ "line":42,
...
}
------------
@@ -570,24 +372,42 @@ The following key/value pairs are common to all events:
`"repo":<repo-id>`::
when present, is the integer repo-id as described previously.
-If `GIT_TR2_EVENT_BRIEF` is true, the `file` and `line` fields are omitted
-from all events and the `time` field is only present on the "start" and
-"atexit" events.
+If `GIT_TRACE2_EVENT_BRIEF` or `trace2.eventBrief` is true, the `file`
+and `line` fields are omitted from all events and the `time` field is
+only present on the "start" and "atexit" events.
==== Event-Specific Key/Value Pairs
`"version"`::
- This event gives the version of the executable and the EVENT format.
+ This event gives the version of the executable and the EVENT format. It
+ should always be the first event in a trace session. The EVENT format
+ version will be incremented if new event types are added, if existing
+ fields are removed, or if there are significant changes in
+ interpretation of existing events or fields. Smaller changes, such as
+ adding a new field to an existing event, will not require an increment
+ to the EVENT format version.
+
------------
{
"event":"version",
...
- "evt":"1", # EVENT format version
+ "evt":"2", # EVENT format version
"exe":"2.20.1.155.g426c96fcdb" # git version
}
------------
+`"too_many_files"`::
+ This event is written to the git-trace2-discard sentinel file if there
+ are too many files in the target trace directory (see the
+ trace2.maxFiles config option).
++
+------------
+{
+ "event":"too_many_files",
+ ...
+}
+------------
+
`"start"`::
This event contains the complete argv received by main().
+
@@ -595,6 +415,7 @@ from all events and the `time` field is only present on the "start" and
{
"event":"start",
...
+ "t_abs":0.001227, # elapsed time in seconds
"argv":["git","version"]
}
------------
@@ -639,13 +460,13 @@ completed.)
"event":"signal",
...
"t_abs":0.001227, # elapsed time in seconds
- "signal":13 # SIGTERM, SIGINT, etc.
+ "signo":13 # SIGTERM, SIGINT, etc.
}
------------
`"error"`::
- This event is emitted when one of the `error()`, `die()`,
- or `usage()` functions are called.
+ This event is emitted when one of the `BUG()`, `error()`, `die()`,
+ `warning()`, or `usage()` functions are called.
+
------------
{
@@ -770,7 +591,7 @@ with "?".
Note that the session-id of the child process is not available to
the current/spawning process, so the child's PID is reported here as
a hint for post-processing. (But it is only a hint because the child
-proces may be a shell script which doesn't have a session-id.)
+process may be a shell script which doesn't have a session-id.)
+
Note that the `t_rel` field contains the observed run time in seconds
for the child process (starting before the fork/exec/spawn and
@@ -835,7 +656,8 @@ The "exec_id" field is a command-unique id and is only useful if the
------------
`"def_param"`::
- This event is generated to log a global parameter.
+ This event is generated to log a global parameter, such as a config
+ setting, command-line flag, or environment variable.
+
------------
{
@@ -882,7 +704,7 @@ visited.
The `category` field may be used in a future enhancement to
do category-based filtering.
+
-The `GIT_TR2_EVENT_NESTING` environment variable can be used to
+`GIT_TRACE2_EVENT_NESTING` or `trace2.eventNesting` can be used to
filter deeply nested regions and data events. It defaults to "2".
`"region_leave"`::
@@ -1010,8 +832,8 @@ rev-list, and gc. This example also shows that fetch took
5.199 seconds and of that 4.932 was in ssh.
+
----------------
-$ export GIT_TR2_BRIEF=1
-$ export GIT_TR2=~/log.normal
+$ export GIT_TRACE2_BRIEF=1
+$ export GIT_TRACE2=~/log.normal
$ git fetch origin
...
----------------
@@ -1046,8 +868,8 @@ its name as "gc", it also reports the hierarchy as "fetch/gc".
indented for clarity.)
+
----------------
-$ export GIT_TR2_BRIEF=1
-$ export GIT_TR2=~/log.normal
+$ export GIT_TRACE2_BRIEF=1
+$ export GIT_TRACE2=~/log.normal
$ git fetch origin
...
----------------
@@ -1105,14 +927,14 @@ In this example, scanning for untracked files ran from +0.012568 to
+0.027149 (since the process started) and took 0.014581 seconds.
+
----------------
-$ export GIT_TR2_PERF_BRIEF=1
-$ export GIT_TR2_PERF=~/log.perf
+$ export GIT_TRACE2_PERF_BRIEF=1
+$ export GIT_TRACE2_PERF=~/log.perf
$ git status
...
$ cat ~/log.perf
d0 | main | version | | | | | 2.20.1.160.g5676107ecd.dirty
-d0 | main | start | | | | | git status
+d0 | main | start | | 0.001173 | | | git status
d0 | main | def_repo | r1 | | | | worktree:/Users/jeffhost/work/gfw
d0 | main | cmd_name | | | | | status (status)
...
@@ -1130,7 +952,7 @@ d0 | main | atexit | | 0.028809 | |
+
Regions may be nested. This causes messages to be indented in the
PERF target, for example.
-Elapsed times are relative to the start of the correpsonding nesting
+Elapsed times are relative to the start of the corresponding nesting
level as expected. For example, if we add a region message to:
+
----------------
@@ -1151,13 +973,13 @@ static enum path_treatment read_directory_recursive(struct dir_struct *dir,
We can further investigate the time spent scanning for untracked files.
+
----------------
-$ export GIT_TR2_PERF_BRIEF=1
-$ export GIT_TR2_PERF=~/log.perf
+$ export GIT_TRACE2_PERF_BRIEF=1
+$ export GIT_TRACE2_PERF=~/log.perf
$ git status
...
$ cat ~/log.perf
d0 | main | version | | | | | 2.20.1.162.gb4ccea44db.dirty
-d0 | main | start | | | | | git status
+d0 | main | start | | 0.001173 | | | git status
d0 | main | def_repo | r1 | | | | worktree:/Users/jeffhost/work/gfw
d0 | main | cmd_name | | | | | status (status)
...
@@ -1207,13 +1029,13 @@ int read_index_from(struct index_state *istate, const char *path,
This example shows that the index contained 3552 entries.
+
----------------
-$ export GIT_TR2_PERF_BRIEF=1
-$ export GIT_TR2_PERF=~/log.perf
+$ export GIT_TRACE2_PERF_BRIEF=1
+$ export GIT_TRACE2_PERF=~/log.perf
$ git status
...
$ cat ~/log.perf
d0 | main | version | | | | | 2.20.1.156.gf9916ae094.dirty
-d0 | main | start | | | | | git status
+d0 | main | start | | 0.001173 | | | git status
d0 | main | def_repo | r1 | | | | worktree:/Users/jeffhost/work/gfw
d0 | main | cmd_name | | | | | status (status)
d0 | main | region_enter | r1 | 0.001791 | | index | label:do_read_index .git/index
@@ -1281,8 +1103,8 @@ Data events are tagged with the active thread name. They are used
to report the per-thread parameters.
+
----------------
-$ export GIT_TR2_PERF_BRIEF=1
-$ export GIT_TR2_PERF=~/log.perf
+$ export GIT_TRACE2_PERF_BRIEF=1
+$ export GIT_TRACE2_PERF=~/log.perf
$ git status
...
$ cat ~/log.perf
@@ -1325,7 +1147,7 @@ d0 | main | atexit | | 0.030027 | |
In this example, the preload region took 0.009122 seconds. The 7 threads
took between 0.006069 and 0.008947 seconds to work on their portion of
the index. Thread "th01" worked on 508 items at offset 0. Thread "th02"
-worked on 508 items at offset 2032. Thread "th04" worked on 508 itemts
+worked on 508 items at offset 2032. Thread "th04" worked on 508 items
at offset 508.
+
This example also shows that thread names are assigned in a racy manner
diff --git a/Documentation/technical/api-tree-walking.txt b/Documentation/technical/api-tree-walking.txt
deleted file mode 100644
index bde18622a8..0000000000
--- a/Documentation/technical/api-tree-walking.txt
+++ /dev/null
@@ -1,147 +0,0 @@
-tree walking API
-================
-
-The tree walking API is used to traverse and inspect trees.
-
-Data Structures
----------------
-
-`struct name_entry`::
-
- An entry in a tree. Each entry has a sha1 identifier, pathname, and
- mode.
-
-`struct tree_desc`::
-
- A semi-opaque data structure used to maintain the current state of the
- walk.
-+
-* `buffer` is a pointer into the memory representation of the tree. It always
-points at the current entry being visited.
-
-* `size` counts the number of bytes left in the `buffer`.
-
-* `entry` points to the current entry being visited.
-
-`struct traverse_info`::
-
- A structure used to maintain the state of a traversal.
-+
-* `prev` points to the traverse_info which was used to descend into the
-current tree. If this is the top-level tree `prev` will point to
-a dummy traverse_info.
-
-* `name` is the entry for the current tree (if the tree is a subtree).
-
-* `pathlen` is the length of the full path for the current tree.
-
-* `conflicts` can be used by callbacks to maintain directory-file conflicts.
-
-* `fn` is a callback called for each entry in the tree. See Traversing for more
-information.
-
-* `data` can be anything the `fn` callback would want to use.
-
-* `show_all_errors` tells whether to stop at the first error or not.
-
-Initializing
-------------
-
-`init_tree_desc`::
-
- Initialize a `tree_desc` and decode its first entry. The buffer and
- size parameters are assumed to be the same as the buffer and size
- members of `struct tree`.
-
-`fill_tree_descriptor`::
-
- Initialize a `tree_desc` and decode its first entry given the
- object ID of a tree. Returns the `buffer` member if the latter
- is a valid tree identifier and NULL otherwise.
-
-`setup_traverse_info`::
-
- Initialize a `traverse_info` given the pathname of the tree to start
- traversing from. The `base` argument is assumed to be the `path`
- member of the `name_entry` being recursed into unless the tree is a
- top-level tree in which case the empty string ("") is used.
-
-Walking
--------
-
-`tree_entry`::
-
- Visit the next entry in a tree. Returns 1 when there are more entries
- left to visit and 0 when all entries have been visited. This is
- commonly used in the test of a while loop.
-
-`tree_entry_len`::
-
- Calculate the length of a tree entry's pathname. This utilizes the
- memory structure of a tree entry to avoid the overhead of using a
- generic strlen().
-
-`update_tree_entry`::
-
- Walk to the next entry in a tree. This is commonly used in conjunction
- with `tree_entry_extract` to inspect the current entry.
-
-`tree_entry_extract`::
-
- Decode the entry currently being visited (the one pointed to by
- `tree_desc's` `entry` member) and return the sha1 of the entry. The
- `pathp` and `modep` arguments are set to the entry's pathname and mode
- respectively.
-
-`get_tree_entry`::
-
- Find an entry in a tree given a pathname and the sha1 of a tree to
- search. Returns 0 if the entry is found and -1 otherwise. The third
- and fourth parameters are set to the entry's sha1 and mode
- respectively.
-
-Traversing
-----------
-
-`traverse_trees`::
-
- Traverse `n` number of trees in parallel. The `fn` callback member of
- `traverse_info` is called once for each tree entry.
-
-`traverse_callback_t`::
- The arguments passed to the traverse callback are as follows:
-+
-* `n` counts the number of trees being traversed.
-
-* `mask` has its nth bit set if something exists in the nth entry.
-
-* `dirmask` has its nth bit set if the nth tree's entry is a directory.
-
-* `entry` is an array of size `n` where the nth entry is from the nth tree.
-
-* `info` maintains the state of the traversal.
-
-+
-Returning a negative value will terminate the traversal. Otherwise the
-return value is treated as an update mask. If the nth bit is set the nth tree
-will be updated and if the bit is not set the nth tree entry will be the
-same in the next callback invocation.
-
-`make_traverse_path`::
-
-	Generate the full pathname of a tree entry, starting from the root of the
- traversal. For example, if the traversal has recursed into another
- tree named "bar" the pathname of an entry "baz" in the "bar"
- tree would be "bar/baz".
-
-`traverse_path_len`::
-
- Calculate the length of a pathname returned by `make_traverse_path`.
- This utilizes the memory structure of a tree entry to avoid the
- overhead of using a generic strlen().
-
-Authors
--------
-
-Written by Junio C Hamano <gitster@pobox.com> and Linus Torvalds
-<torvalds@linux-foundation.org>
diff --git a/Documentation/technical/api-xdiff-interface.txt b/Documentation/technical/api-xdiff-interface.txt
deleted file mode 100644
index 6296ecad1d..0000000000
--- a/Documentation/technical/api-xdiff-interface.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-xdiff interface API
-===================
-
-Talk about our calling convention to xdiff library, including
-xdiff_emit_consume_fn.
-
-(Dscho, JC)
diff --git a/Documentation/technical/bundle-format.txt b/Documentation/technical/bundle-format.txt
new file mode 100644
index 0000000000..bac558d049
--- /dev/null
+++ b/Documentation/technical/bundle-format.txt
@@ -0,0 +1,76 @@
+= Git bundle v2 format
+
+The Git bundle format is a format that represents both refs and Git objects.
+
+== Format
+
+We will use ABNF notation to define the Git bundle format. See
+protocol-common.txt for the details.
+
+A v2 bundle looks like this:
+
+----
+bundle = signature *prerequisite *reference LF pack
+signature = "# v2 git bundle" LF
+
+prerequisite = "-" obj-id SP comment LF
+comment = *CHAR
+reference = obj-id SP refname LF
+
+pack = ... ; packfile
+----
+
+A v3 bundle looks like this:
+
+----
+bundle = signature *capability *prerequisite *reference LF pack
+signature = "# v3 git bundle" LF
+
+capability = "@" key ["=" value] LF
+prerequisite = "-" obj-id SP comment LF
+comment = *CHAR
+reference = obj-id SP refname LF
+key = 1*(ALPHA / DIGIT / "-")
+value = *(%x01-09 / %x0B-FF)
+
+pack = ... ; packfile
+----
+
+== Semantics
+
+A Git bundle consists of several parts.
+
+* "Capabilities", which are only in the v3 format, indicate functionality that
+ the bundle requires to be read properly.
+
+* "Prerequisites" lists the objects that are NOT included in the bundle and
+  that the reader of the bundle MUST already have in order to use the data in
+  the bundle. The objects stored in the bundle may refer to prerequisite
+  objects and anything reachable from them (e.g. a tree object in the bundle
+  can reference a blob that is reachable from a prerequisite) and/or may be
+  expressed as deltas against prerequisite objects.
+
+* "References" record the tips of the history graph, in other words, what the
+  reader of the bundle CAN "git fetch" from it.
+
+* "Pack" is the pack data stream "git fetch" would send, if you fetch from a
+ repository that has the references recorded in the "References" above into a
+ repository that has references pointing at the objects listed in
+ "Prerequisites" above.
+
+In the bundle format, a comment may follow a prerequisite obj-id. The comment
+has no specific meaning: the writer of the bundle MAY put any string there,
+and the reader of the bundle MUST ignore it.
+
+=== Note on shallow clones and Git bundles
+
+Note that the prerequisites do not represent a shallow-clone boundary. The
+semantics of the prerequisites and the shallow-clone boundaries are different,
+and the Git bundle v2 format cannot represent a shallow clone repository.
+
+== Capabilities
+
+Because there is no opportunity for negotiation, unknown capabilities cause 'git
+bundle' to abort. The only known capability is `object-format`, which specifies
+the hash algorithm in use, and can take the same values as the
+`extensions.objectFormat` configuration value.
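+
+For illustration, a v3 bundle for a SHA-1 repository might begin like
+this (the object IDs, comment, and refname are hypothetical
+placeholders):
+
+----
+# v3 git bundle
+@object-format=sha1
+-003bbd441b6e8b0224cbcd4b58a3e5d9b05bb867 fix build on Windows
+e8b0224cbcd4b58a3e5d9b05bb867003bbd441b6 refs/heads/main
+
+<binary packfile data>
+----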
diff --git a/Documentation/technical/chunk-format.txt b/Documentation/technical/chunk-format.txt
new file mode 100644
index 0000000000..593614fced
--- /dev/null
+++ b/Documentation/technical/chunk-format.txt
@@ -0,0 +1,116 @@
+Chunk-based file formats
+========================
+
+Some file formats in Git use a common concept of "chunks" to describe
+sections of the file. This allows structured access to a large file by
+scanning a small "table of contents" for the remaining data. This common
+format is used by the `commit-graph` and `multi-pack-index` files. See
+link:technical/pack-format.html[the `multi-pack-index` format] and
+link:technical/commit-graph-format.html[the `commit-graph` format] for
+how they use the chunks to describe structured data.
+
+A chunk-based file format begins with some header information custom to
+that format. That header should include enough information to identify
+the file type, format version, and number of chunks in the file. From this
+information, a reader can determine the start of the chunk-based region.
+
+The chunk-based region starts with a table of contents describing where
+each chunk starts and ends. This consists of (C+1) rows of 12 bytes each,
+where C is the number of chunks. Consider the following table:
+
+ | Chunk ID (4 bytes) | Chunk Offset (8 bytes) |
+ |--------------------|------------------------|
+ | ID[0] | OFFSET[0] |
+ | ... | ... |
+ | ID[C] | OFFSET[C] |
+ | 0x0000 | OFFSET[C+1] |
+
+Each row consists of a 4-byte chunk identifier (ID) and an 8-byte offset.
+Each integer is stored in network byte order.
+
+The chunk identifier `ID[i]` is a label for the data stored within this
+file from `OFFSET[i]` (inclusive) to `OFFSET[i+1]` (exclusive). Thus, the
+size of the `i`th chunk is equal to the difference between `OFFSET[i+1]`
+and `OFFSET[i]`. This requires that the chunk data appears contiguously
+in the same order as the table of contents.
+
+The ID in the final entry of the table of contents must be four zero bytes. This
+confirms that the table of contents is ending and provides the offset for
+the end of the chunk-based data.
+
+Note: The chunk-based format expects that the file contains _at least_ a
+trailing hash after `OFFSET[C+1]`.
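+
+For example, if the chunk-based region begins at file offset 12 and the
+file has two chunks of sizes 100 and 20 bytes, the table of contents
+itself occupies 3 * 12 = 36 bytes and reads:
+
+    | ID[0]  | 48  |
+    | ID[1]  | 148 |
+    | 0x0000 | 168 |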
+
+Functions for working with chunk-based file formats are declared in
+`chunk-format.h`. Using these methods provides extra checks that assist
+developers when creating new file formats.
+
+Writing chunk-based file formats
+--------------------------------
+
+To write a chunk-based file format, create a `struct chunkfile` by
+calling `init_chunkfile()` and pass a `struct hashfile` pointer. The
+caller is responsible for opening the `hashfile` and writing header
+information so the file format is identifiable before the chunk-based
+format begins.
+
+Then, call `add_chunk()` for each chunk that is to be written. This
+populates the `chunkfile` with information about the order and size of
+each chunk to write. Provide a `chunk_write_fn` function pointer to
+perform the write of the chunk data upon request.
+
+Call `write_chunkfile()` to write the table of contents to the `hashfile`
+followed by each of the chunks. This will verify that each chunk wrote
+the expected amount of data so the table of contents is correct.
+
+Finally, call `free_chunkfile()` to clear the `struct chunkfile` data. The
+caller is responsible for finalizing the `hashfile` by writing the trailing
+hash and closing the file.
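+
+A minimal sketch of the write flow, assuming the function and callback
+signatures declared in `chunk-format.h`; the chunk ID, payload struct,
+and helper names are hypothetical and error handling is omitted:
+
+----
+static int write_my_chunk(struct hashfile *f, void *data)
+{
+	struct my_data *d = data;
+	hashwrite(f, d->bytes, d->len); /* must write exactly d->len bytes */
+	return 0;
+}
+
+void write_my_file(struct hashfile *f, struct my_data *d)
+{
+	/* the format's own header has already been written to 'f' */
+	struct chunkfile *cf = init_chunkfile(f);
+
+	add_chunk(cf, 0x4d594348 /* "MYCH" */, d->len, write_my_chunk);
+	write_chunkfile(cf, d); /* writes the TOC, then each chunk */
+	free_chunkfile(cf);
+	/* caller finalizes the hashfile: trailing hash, close */
+}
+----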
+
+Reading chunk-based file formats
+--------------------------------
+
+To read a chunk-based file format, the file must be opened as a
+memory-mapped region. The chunk-format API expects that the entire file
+is mapped as a contiguous memory region.
+
+Initialize a `struct chunkfile` pointer with `init_chunkfile(NULL)`.
+
+After reading the header information from the beginning of the file,
+including the chunk count, call `read_table_of_contents()` to populate
+the `struct chunkfile` with the list of chunks, their offsets, and their
+sizes.
+
+Extract the data for each chunk using `pair_chunk()` or
+`read_chunk()`:
+
+* `pair_chunk()` assigns a given pointer with the location inside the
+ memory-mapped file corresponding to that chunk's offset. If the chunk
+ does not exist, then the pointer is not modified.
+
+* `read_chunk()` takes a `chunk_read_fn` function pointer and calls it
+ with the appropriate initial pointer and size information. The function
+ is not called if the chunk does not exist. Use this method to read chunks
+ if you need to perform immediate parsing or if you need to execute logic
+ based on the size of the chunk.
+
+After calling these methods, call `free_chunkfile()` to clear the
+`struct chunkfile` data. This will not close the memory-mapped region.
+Callers are expected to own that data for as long as the pointers into
+the region are needed.
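+
+A corresponding sketch of the read flow, under the same assumptions
+(hypothetical names; `mmap`/`mmap_size` describe the mapped file and
+error handling is omitted):
+
+----
+struct chunkfile *cf = init_chunkfile(NULL);
+const unsigned char *chunk_data = NULL;
+
+if (read_table_of_contents(cf, mmap, mmap_size, toc_offset, nr_chunks))
+	die("corrupt table of contents");
+pair_chunk(cf, 0x4d594348 /* "MYCH" */, &chunk_data);
+/* if the chunk exists, chunk_data now points into the mapped region */
+free_chunkfile(cf); /* does not unmap the region */
+----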
+
+Examples
+--------
+
+These file formats use the chunk-format API, and can be used as examples
+for future formats:
+
+* *commit-graph:* see `write_commit_graph_file()` and `parse_commit_graph()`
+ in `commit-graph.c` for how the chunk-format API is used to write and
+ parse the commit-graph file format documented in
+ link:technical/commit-graph-format.html[the commit-graph file format].
+
+* *multi-pack-index:* see `write_midx_internal()` and `load_multi_pack_index()`
+ in `midx.c` for how the chunk-format API is used to write and
+ parse the multi-pack-index file format documented in
+ link:technical/pack-format.html[the multi-pack-index file format].
diff --git a/Documentation/technical/commit-graph-format.txt b/Documentation/technical/commit-graph-format.txt
index 16452a0504..87971c27dd 100644
--- a/Documentation/technical/commit-graph-format.txt
+++ b/Documentation/technical/commit-graph-format.txt
@@ -4,11 +4,7 @@ Git commit graph format
The Git commit graph stores a list of commit OIDs and some associated
metadata, including:
-- The generation number of the commit. Commits with no parents have
- generation number 1; commits with parents have generation number
- one more than the maximum generation number of its parents. We
- reserve zero as special, and can be used to mark a generation
- number invalid or as "not computed".
+- The generation number of the commit.
- The root tree OID.
@@ -17,6 +13,9 @@ metadata, including:
- The parents of the commit, stored using positional references within
the graph file.
+- The Bloom filter of the commit carrying the paths that were changed between
+ the commit and its first parent, if requested.
+
These positional references are stored as unsigned 32-bit integers
corresponding to the array position within the list of commit OIDs. Due
to some special constants we use to track parents, we can store at most
@@ -29,7 +28,7 @@ the body into "chunks" and provide a binary lookup table at the beginning
of the body. The header includes certain values, such as number of chunks
and hash type.
-All 4-byte numbers are in network order.
+All multi-byte numbers are in network byte order.
HEADER:
@@ -39,13 +38,19 @@ HEADER:
1-byte version number:
Currently, the only valid version is 1.
- 1-byte Hash Version (1 = SHA-1)
- We infer the hash length (H) from this value.
+ 1-byte Hash Version
+ We infer the hash length (H) from this value:
+ 1 => SHA-1
+ 2 => SHA-256
+ If the hash type does not match the repository's hash algorithm, the
+ commit-graph file should be ignored with a warning presented to the
+ user.
1-byte number (C) of "chunks"
- 1-byte (reserved for later use)
- Current clients should ignore this value.
+ 1-byte number (B) of base commit-graphs
+ We infer the length (H*B) of the Base Graphs chunk
+ from this value.
CHUNK LOOKUP:
@@ -56,6 +61,9 @@ CHUNK LOOKUP:
the length using the next chunk position if necessary.) Each chunk
ID appears at most once.
+ The CHUNK LOOKUP matches the table of contents from
+ link:technical/chunk-format.html[the chunk-based file format].
+
The remaining data in the body is described one chunk at a time, and
these chunks may be given in any order. Chunks are required unless
otherwise specified.
@@ -73,17 +81,37 @@ CHUNK DATA:
Commit Data (ID: {'C', 'D', 'A', 'T' }) (N * (H + 16) bytes)
* The first H bytes are for the OID of the root tree.
* The next 8 bytes are for the positions of the first two parents
- of the ith commit. Stores value 0x7000000 if no parent in that
+ of the ith commit. Stores value 0x70000000 if no parent in that
position. If there are more than two parents, the second value
has its most-significant bit on and the other bits store an array
position into the Extra Edge List chunk.
- * The next 8 bytes store the generation number of the commit and
+ * The next 8 bytes store the topological level (generation number v1)
+ of the commit and
the commit time in seconds since EPOCH. The generation number
uses the higher 30 bits of the first 4 bytes, while the commit
time uses the 32 bits of the second 4 bytes, along with the lowest
2 bits of the lowest byte, storing the 33rd and 34th bit of the
	    commit time. (See the unpacking sketch after the trailer
	    description below.)
+ Generation Data (ID: {'G', 'D', 'A', 'T' }) (N * 4 bytes) [Optional]
+      * This list of 4-byte values stores corrected commit date offsets for the
+        commits, arranged in the same order as the commit data chunk.
+ * If the corrected commit date offset cannot be stored within 31 bits,
+ the value has its most-significant bit on and the other bits store
+ the position of corrected commit date into the Generation Data Overflow
+ chunk.
+      * The Generation Data chunk is present only when the commit-graph file is
+        written by compatible versions of Git and, in the case of split
+        commit-graph chains, only when the topmost layer also has a Generation
+        Data chunk.
+
+ Generation Data Overflow (ID: {'G', 'D', 'O', 'V' }) [Optional]
+ * This list of 8-byte values stores the corrected commit date offsets
+ for commits with corrected commit date offsets that cannot be
+ stored within 31 bits.
+ * Generation Data Overflow chunk is present only when Generation Data
+        chunk is present and at least one corrected commit date offset cannot
+ be stored within 31 bits.
+
Extra Edge List (ID: {'E', 'D', 'G', 'E'}) [Optional]
	    This list of 4-byte values stores the second through nth parents for
all octopus merges. The second parent value in the commit data stores
@@ -92,6 +120,39 @@ CHUNK DATA:
positions for the parents until reaching a value with the most-significant
bit on. The other bits correspond to the position of the last parent.
+ Bloom Filter Index (ID: {'B', 'I', 'D', 'X'}) (N * 4 bytes) [Optional]
+ * The ith entry, BIDX[i], stores the number of bytes in all Bloom filters
+ from commit 0 to commit i (inclusive) in lexicographic order. The Bloom
+ filter for the i-th commit spans from BIDX[i-1] to BIDX[i] (plus header
+ length), where BIDX[-1] is 0.
+ * The BIDX chunk is ignored if the BDAT chunk is not present.
+
+ Bloom Filter Data (ID: {'B', 'D', 'A', 'T'}) [Optional]
+      * It starts with a header consisting of three unsigned 32-bit integers:
+ - Version of the hash algorithm being used. We currently only support
+ value 1 which corresponds to the 32-bit version of the murmur3 hash
+ implemented exactly as described in
+ https://en.wikipedia.org/wiki/MurmurHash#Algorithm and the double
+ hashing technique using seed values 0x293ae76f and 0x7e646e2 as
+ described in https://doi.org/10.1007/978-3-540-30494-4_26 "Bloom Filters
+ in Probabilistic Verification"
+ - The number of times a path is hashed and hence the number of bit positions
+ that cumulatively determine whether a file is present in the commit.
+ - The minimum number of bits 'b' per entry in the Bloom filter. If the filter
+ contains 'n' entries, then the filter size is the minimum number of 64-bit
+ words that contain n*b bits.
+ * The rest of the chunk is the concatenation of all the computed Bloom
+ filters for the commits in lexicographic order.
+      * Note: Commits with no changes or more than 512 changes have Bloom filters
+        of length one, with all bits set to zero or all bits set to one,
+        respectively.
+ * The BDAT chunk is present if and only if BIDX is present.
+
+ Base Graphs List (ID: {'B', 'A', 'S', 'E'}) [Optional]
+      This list of H-byte hashes describes a set of B commit-graph files that
+ form a commit-graph chain. The graph position for the ith commit in this
+ file's OID Lookup chunk is equal to i plus the number of commits in all
+ base graphs. If B is non-zero, this chunk must exist.
+
TRAILER:
H-byte HASH-checksum of all of the above.
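+
+As an illustration, the topological level and commit time of an entry
+in the Commit Data chunk above can be unpacked like this (a sketch;
+`field` points at the 8 bytes following the two parent positions, and
+`get_be32()` is Git's big-endian read helper):
+
+----
+uint32_t w0 = get_be32(field);
+uint32_t w1 = get_be32(field + 4);
+uint32_t topo_level  = w0 >> 2;                          /* higher 30 bits */
+uint64_t commit_time = ((uint64_t)(w0 & 3) << 32) | w1;  /* 34 bits */
+----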
diff --git a/Documentation/technical/commit-graph.txt b/Documentation/technical/commit-graph.txt
index 7805b0968c..f05e7bda1a 100644
--- a/Documentation/technical/commit-graph.txt
+++ b/Documentation/technical/commit-graph.txt
@@ -22,11 +22,11 @@ as "commit-graph" either in the .git/objects/info directory or in the info
directory of an alternate.
The commit-graph file stores the commit graph structure along with some
-extra metadata to speed up graph walks. By listing commit OIDs in lexi-
-cographic order, we can identify an integer position for each commit and
-refer to the parents of a commit using those integer positions. We use
-binary search to find initial commits and then use the integer positions
-for fast lookups during the walk.
+extra metadata to speed up graph walks. By listing commit OIDs in
+lexicographic order, we can identify an integer position for each commit
+and refer to the parents of a commit using those integer positions. We
+use binary search to find initial commits and then use the integer
+positions for fast lookups during the walk.
A consumer may load the following info for a commit from the graph:
@@ -38,14 +38,31 @@ A consumer may load the following info for a commit from the graph:
Values 1-4 satisfy the requirements of parse_commit_gently().
-Define the "generation number" of a commit recursively as follows:
+There are two definitions of generation number:
+1. Corrected committer dates (generation number v2)
+2. Topological levels (generation number v1)
- * A commit with no parents (a root commit) has generation number one.
+Define "corrected committer date" of a commit recursively as follows:
- * A commit with at least one parent has generation number one more than
- the largest generation number among its parents.
+ * A commit with no parents (a root commit) has corrected committer date
+ equal to its committer date.
-Equivalently, the generation number of a commit A is one more than the
+ * A commit with at least one parent has corrected committer date equal to
+   the maximum of its committer date and one more than the largest corrected
+ committer date among its parents.
+
+ * As a special case, a root commit with timestamp zero has corrected commit
+ date of 1, to be able to distinguish it from GENERATION_NUMBER_ZERO
+   (that is, an uncomputed corrected commit date).
+
+For example, if a root commit has committer date 1000 and, due to clock
+skew, its child has committer date 900, the child's corrected committer
+date is max(900, 1000 + 1) = 1001.
+
+Define the "topological level" of a commit recursively as follows:
+
+ * A commit with no parents (a root commit) has topological level of one.
+
+ * A commit with at least one parent has topological level one more than
+ the largest topological level among its parents.
+
+Equivalently, the topological level of a commit A is one more than the
length of a longest path from A to a root commit. The recursive definition
is easier to use for computation and for observing the following property:
@@ -60,6 +77,9 @@ is easier to use for computation and observing the following property:
generation numbers, then we always expand the boundary commit with highest
generation number and can easily detect the stopping condition.
+The property applies to both versions of generation number, that is, both
+corrected committer dates and topological levels.
+
This property can be used to significantly reduce the time it takes to
walk commits and determine topological relationships. Without generation
numbers, the general heuristic is the following:
@@ -67,7 +87,9 @@ numbers, the general heuristic is the following:
If A and B are commits with commit time X and Y, respectively, and
X < Y, then A _probably_ cannot reach B.
-This heuristic is currently used whenever the computation is allowed to
+In the absence of corrected commit dates (for example, old versions of Git
+or mixed generation graph chains), this heuristic is currently used
+whenever the computation is allowed to
violate topological relationships due to clock skew (such as "git log"
with default order), but is not used when the topological order is
required (such as merge base calculations, "git log --graph").
@@ -77,7 +99,7 @@ in the commit graph. We can treat these commits as having "infinite"
generation number and walk until reaching commits with known generation
number.
-We use the macro GENERATION_NUMBER_INFINITY = 0xFFFFFFFF to mark commits not
+We use the macro GENERATION_NUMBER_INFINITY to mark commits not
in the commit-graph file. If a commit-graph file was written by a version
of Git that did not compute generation numbers, then those commits will
have generation number represented by the macro GENERATION_NUMBER_ZERO = 0.
@@ -85,7 +107,7 @@ have generation number represented by the macro GENERATION_NUMBER_ZERO = 0.
Since the commit-graph file is closed under reachability, we can guarantee
the following weaker condition on all commits:
- If A and B are commits with generation numbers N amd M, respectively,
+ If A and B are commits with generation numbers N and M, respectively,
and N < M, then A cannot reach B.
Note how the strict inequality differs from the inequality when we have
@@ -93,12 +115,12 @@ fully-computed generation numbers. Using strict inequality may result in
walking a few extra commits, but the simplicity in dealing with commits
with generation number *_INFINITY or *_ZERO is valuable.
-We use the macro GENERATION_NUMBER_MAX = 0x3FFFFFFF to for commits whose
-generation numbers are computed to be at least this value. We limit at
-this value since it is the largest value that can be stored in the
-commit-graph file using the 30 bits available to generation numbers. This
-presents another case where a commit can have generation number equal to
-that of a parent.
+We use the macro GENERATION_NUMBER_V1_MAX = 0x3FFFFFFF for commits whose
+topological levels (generation number v1) are computed to be at least
+this value. We limit at this value since it is the largest value that
+can be stored in the commit-graph file using the 30 bits available
+to topological levels. This presents another case where a commit can
+have generation number equal to that of a parent.
Design Details
--------------
@@ -127,36 +149,239 @@ Design Details
helpful for these clones, anyway. The commit-graph will not be read or
written when shallow commits are present.
-Future Work
------------
-
-- After computing and storing generation numbers, we must make graph
- walks aware of generation numbers to gain the performance benefits they
- enable. This will mostly be accomplished by swapping a commit-date-ordered
- priority queue with one ordered by generation number. The following
- operations are important candidates:
+Commit-Graph Chains
+-------------------
+
+Typically, repos grow with near-constant velocity (commits per day). Over time,
+the number of commits added by a fetch operation is much smaller than the
+number of commits in the full history. By creating a "chain" of commit-graphs,
+we enable fast writes of new commit data without rewriting the entire commit
+history -- at least, most of the time.
+
+File Layout
+~~~~~~~~~~~
+
+A commit-graph chain uses multiple files, and we use a fixed naming convention
+to organize these files. Each commit-graph file has a name
+`$OBJDIR/info/commit-graphs/graph-{hash}.graph` where `{hash}` is the hex-
+valued hash stored in the footer of that file (which is a hash of the file's
+contents before that hash). For a chain of commit-graph files, a plain-text
+file at `$OBJDIR/info/commit-graphs/commit-graph-chain` contains the
+hashes for the files in order from "lowest" to "highest".
+
+For example, if the `commit-graph-chain` file contains the lines
+
+----
+ {hash0}
+ {hash1}
+ {hash2}
+----
+
+then the commit-graph chain looks like the following diagram:
+
+ +-----------------------+
+ | graph-{hash2}.graph |
+ +-----------------------+
+ |
+ +-----------------------+
+ | |
+ | graph-{hash1}.graph |
+ | |
+ +-----------------------+
+ |
+ +-----------------------+
+ | |
+ | |
+ | |
+ | graph-{hash0}.graph |
+ | |
+ | |
+ | |
+ +-----------------------+
+
+Let X0 be the number of commits in `graph-{hash0}.graph`, X1 be the number of
+commits in `graph-{hash1}.graph`, and X2 be the number of commits in
+`graph-{hash2}.graph`. If a commit appears in position i in `graph-{hash2}.graph`,
+then we interpret this as being the commit in position (X0 + X1 + i), and that
+will be used as its "graph position". The commits in `graph-{hash2}.graph` use these
+positions to refer to their parents, which may be in `graph-{hash1}.graph` or
+`graph-{hash0}.graph`. We can navigate to an arbitrary commit in position j by checking
+its containment in the intervals [0, X0), [X0, X0 + X1), [X0 + X1, X0 + X1 +
+X2).
+
+Each commit-graph file (except the base, `graph-{hash0}.graph`) contains data
+specifying the hashes of all files in the lower layers. In the above example,
+`graph-{hash1}.graph` contains `{hash0}` while `graph-{hash2}.graph` contains
+`{hash0}` and `{hash1}`.
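+
+A sketch of this position arithmetic, with hypothetical names: if
+`base[k]` holds the number of commits in all layers below layer `k`
+(so base[0] = 0, base[1] = X0, base[2] = X0 + X1), a global position
+maps to a layer and a layer-local index as follows:
+
+----
+int find_layer(uint32_t pos, const uint32_t *base, int nr_layers,
+	       uint32_t *local)
+{
+	int k;
+
+	for (k = nr_layers - 1; k >= 0; k--) {
+		if (pos >= base[k]) {
+			*local = pos - base[k];
+			return k;
+		}
+	}
+	return -1; /* 'pos' is not a valid graph position */
+}
+----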
+
+Merging commit-graph files
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If we only added a new commit-graph file on every write, we would run into a
+linear search problem through many commit-graph files. Instead, we use a merge
+strategy to decide when the stack should collapse some number of levels.
+
+The diagram below shows such a collapse. As a set of new commits is added,
+the merge strategy determines that the files should collapse to
+`graph-{hash1}`. Thus, the new commits, the commits in `graph-{hash2}` and
+the commits in `graph-{hash1}` should be combined into a new `graph-{hash3}`
+file.
+
+ +---------------------+
+ | |
+ | (new commits) |
+ | |
+ +---------------------+
+ | |
+ +-----------------------+ +---------------------+
+ | graph-{hash2} |->| |
+ +-----------------------+ +---------------------+
+ | | |
+ +-----------------------+ +---------------------+
+ | | | |
+ | graph-{hash1} |->| |
+ | | | |
+ +-----------------------+ +---------------------+
+ | tmp_graphXXX
+ +-----------------------+
+ | |
+ | |
+ | |
+ | graph-{hash0} |
+ | |
+ | |
+ | |
+ +-----------------------+
+
+During this process, the commits to write are combined and sorted, and we
+write the contents to a temporary file, all while holding a
+`commit-graph-chain.lock` lock-file. When the file is flushed, we rename it
+to `graph-{hash3}`
+according to the computed `{hash3}`. Finally, we write the new chain data to
+`commit-graph-chain.lock`:
+
+----
+ {hash3}
+ {hash0}
+----
+
+We then close the lock-file.
+
+Merge Strategy
+~~~~~~~~~~~~~~
+
+When writing a set of commits that do not exist in the commit-graph stack of
+height N, we default to creating a new file at level N + 1. We then decide to
+merge with the Nth level if one of two conditions holds:
+
+ 1. `--size-multiple=<X>` is specified (defaulting to X = 2), and the number
+    of commits in level N is less than X times the number of commits in
+    level N + 1.
+
+ 2. `--max-commits=<C>` is specified with non-zero C and the number of commits
+ in level N + 1 is more than C commits.
+
+This decision cascades down the levels: when we merge a level we create a new
+set of commits that then compares to the next level.
+
+The first condition bounds the number of levels to be logarithmic in the total
+number of commits. The second condition bounds the total number of commits in
+a `graph-{hashN}` file and not in the `commit-graph` file, preventing
+significant performance issues when the stack merges and another process only
+partially reads the previous stack.
+
+The merge strategy values (2 for the size multiple, 64,000 for the maximum
+number of commits) could be extracted into config settings for full
+flexibility.
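+
+The cascade can be sketched as follows (hypothetical variables:
+`count[i]` is the number of commits in level `i` and the new commits
+start at level `n`; the defaults from the text are shown inline):
+
+----
+static int merge_levels(uint64_t *count, int n)
+{
+	const int size_mult = 2;            /* --size-multiple default */
+	const uint64_t max_commits = 64000; /* --max-commits example */
+
+	while (n > 0 &&
+	       (count[n - 1] < size_mult * count[n] ||
+		count[n] > max_commits)) {
+		count[n - 1] += count[n]; /* merge level n into level n - 1 */
+		n--;
+	}
+	return n; /* level that receives the combined commits */
+}
+----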
+
+Handling Mixed Generation Number Chains
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+With the introduction of generation number v2 and generation data chunk, the
+following scenario is possible:
+
+1. "New" Git writes a commit-graph with the corrected commit dates.
+2. "Old" Git writes a split commit-graph on top without corrected commit dates.
+
+A naive approach of using the newest available generation number from
+each layer would lead to violated expectations: the lower layer would
+use corrected commit dates which are much larger than the topological
+levels of the higher layer. For this reason, Git inspects the topmost
+layer to see if it is missing corrected commit dates. In such a case,
+Git uses only topological levels for generation numbers.
+
+When writing a new layer in a split commit-graph, we write corrected commit
+dates if the topmost layer has corrected commit dates written. This
+guarantees that if a layer has corrected commit dates, all lower layers
+must have corrected commit dates as well.
+
+When merging layers, we do not consider whether the merged layers had corrected
+commit dates. Instead, the new layer will have corrected commit dates if the
+layer below the new layer has corrected commit dates.
+
+While writing or merging layers, if the new layer is the only layer, it will
+have corrected commit dates when written by compatible versions of Git. Thus,
+rewriting a split commit-graph as a single file (`--split=replace`) creates a
+single layer with corrected commit dates.
+
+Deleting graph-{hash} files
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+After a new tip file is written, some `graph-{hash}` files may no longer
+be part of a chain. It is important to remove these files from disk, eventually.
+The main reason to delay removal is that another process could read the
+`commit-graph-chain` file before it is rewritten, but then look for the
+`graph-{hash}` files after they are deleted.
+
+To allow holding old split commit-graphs for a while after they are unreferenced,
+we update the modified times of the files when they become unreferenced. Then,
+we scan the `$OBJDIR/info/commit-graphs/` directory for `graph-{hash}`
+files whose modified times are older than a given expiry window. This window
+defaults to zero, but can be changed using command-line arguments or a config
+setting.
+
+Chains across multiple object directories
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In a repo with alternates, we look for the `commit-graph-chain` file starting
+in the local object directory and then in each alternate. The first file that
+exists defines our chain. As we look for the `graph-{hash}` files for
+each `{hash}` in the chain file, we follow the same pattern for the host
+directories.
+
+This allows commit-graphs to be split across multiple forks in a fork network.
+The typical case is a large "base" repo with many smaller forks.
+
+As the base repo advances, it will likely update and merge its commit-graph
+chain more frequently than the forks. If a fork updates its commit-graph
+after the base repo, then it should "reparent" its commit-graph chain onto
+the new
+chain in the base repo. When reading each `graph-{hash}` file, we track
+the object directory containing it. During a write of a new commit-graph file,
+we check for any changes in the source object directory and read the
+`commit-graph-chain` file for that source and create a new file based on those
+files. During this "reparent" operation, we must collapse all levels in the
+fork, as all of the files are invalid against the new base file.
+
+It is crucial to be careful when cleaning up "unreferenced" `graph-{hash}.graph`
+files in this scenario. It falls to the user to define the proper settings for
+their custom environment:
+
+ 1. When merging levels in the base repo, the unreferenced files may still be
+ referenced by chains from fork repos.
- - 'log --topo-order'
- - 'tag --merged'
-
-- A server could provide a commit-graph file as part of the network protocol
- to avoid extra calculations by clients. This feature is only of benefit if
- the user is willing to trust the file, because verifying the file is correct
- is as hard as computing it from scratch.
+ 2. The expiry time should be set to a length of time such that every fork has
+    time to recompute its commit-graph chain to "reparent" onto the new base
+ file(s).
+
+ 3. If the commit-graph chain is updated in the base, the fork will not have
+ access to the new chain until its chain is updated to reference those files.
+ (This may change in the future [5].)
Related Links
-------------
[0] https://bugs.chromium.org/p/git/issues/detail?id=8
Chromium work item for: Serialized Commit Graph
-[1] https://public-inbox.org/git/20110713070517.GC18566@sigill.intra.peff.net/
+[1] https://lore.kernel.org/git/20110713070517.GC18566@sigill.intra.peff.net/
An abandoned patch that introduced generation numbers.
-[2] https://public-inbox.org/git/20170908033403.q7e6dj7benasrjes@sigill.intra.peff.net/
+[2] https://lore.kernel.org/git/20170908033403.q7e6dj7benasrjes@sigill.intra.peff.net/
Discussion about generation numbers on commits and how they interact
with fsck.
-[3] https://public-inbox.org/git/20170908034739.4op3w4f2ma5s65ku@sigill.intra.peff.net/
+[3] https://lore.kernel.org/git/20170908034739.4op3w4f2ma5s65ku@sigill.intra.peff.net/
More discussion about generation numbers and not storing them inside
commit objects. A valuable quote:
@@ -168,5 +393,9 @@ Related Links
commit objects (i.e., packv4 or something like the "metapacks" I
proposed a few years ago)."
-[4] https://public-inbox.org/git/20180108154822.54829-1-git@jeffhostetler.com/T/#u
+[4] https://lore.kernel.org/git/20180108154822.54829-1-git@jeffhostetler.com/T/#u
A patch to remove the ahead-behind calculation from 'status'.
+
+[5] https://lore.kernel.org/git/f27db281-abad-5043-6d71-cbb083b1c877@gmail.com/
+ A discussion of a "two-dimensional graph position" that can allow reading
+ multiple commit-graph chains at the same time.
diff --git a/Documentation/technical/directory-rename-detection.txt b/Documentation/technical/directory-rename-detection.txt
index 844629c8c4..029ee2cedc 100644
--- a/Documentation/technical/directory-rename-detection.txt
+++ b/Documentation/technical/directory-rename-detection.txt
@@ -2,9 +2,9 @@ Directory rename detection
==========================
Rename detection logic in diffcore-rename that checks for renames of
-individual files is aggregated and analyzed in merge-recursive for cases
-where combinations of renames indicate that a full directory has been
-renamed.
+individual files is also aggregated there and then analyzed in either
+merge-ort or merge-recursive for cases where combinations of renames
+indicate that a full directory has been renamed.
Scope of abilities
------------------
@@ -18,7 +18,8 @@ It is perhaps easiest to start with an example:
More interesting possibilities exist, though, such as:
* one side of history renames x -> z, and the other renames some file to
- x/e, causing the need for the merge to do a transitive rename.
+ x/e, causing the need for the merge to do a transitive rename so that
+ the rename ends up at z/e.
* one side of history renames x -> z, but also renames all files within x.
For example, x/a -> z/alpha, x/b -> z/bravo, etc.
@@ -35,7 +36,7 @@ More interesting possibilities exist, though, such as:
directory itself contained inner directories that were renamed to yet
other locations).
- * combinations of the above; see t/t6043-merge-rename-directories.sh for
+ * combinations of the above; see t/t6423-merge-rename-directories.sh for
various interesting cases.
Limitations -- applicability of directory renames
@@ -62,19 +63,19 @@ directory rename detection applies:
Limitations -- detailed rules and testcases
-------------------------------------------
-t/t6043-merge-rename-directories.sh contains extensive tests and commentary
+t/t6423-merge-rename-directories.sh contains extensive tests and commentary
which generate and explore the rules listed above. It also lists a few
additional rules:
a) If renames split a directory into two or more others, the directory
   with the most renames "wins".
- b) Avoid directory-rename-detection for a path, if that path is the
- source of a rename on either side of a merge.
-
- c) Only apply implicit directory renames to directories if the other side
+ b) Only apply implicit directory renames to directories if the other side
of history is the one doing the renaming.
+ c) Do not perform directory rename detection for directories which had no
+ new paths added to them.
+
Limitations -- support in different commands
--------------------------------------------
@@ -87,9 +88,11 @@ directory rename detection support in:
Folks have requested in the past that `git diff` detect directory
renames and somehow simplify its output. It is not clear whether this
would be desirable or how the output should be simplified, so this was
- simply not implemented. Further, to implement this, directory rename
- detection logic would need to move from merge-recursive to
- diffcore-rename.
+ simply not implemented. Also, while diffcore-rename has most of the
+ logic for detecting directory renames, some of the logic is still found
+ within merge-ort and merge-recursive. Fully supporting directory
+ rename detection in diffs would require copying or moving the remaining
+ bits of logic to the diff machinery.
* am
diff --git a/Documentation/technical/hash-function-transition.txt b/Documentation/technical/hash-function-transition.txt
index bc2ace2a6e..260224b033 100644
--- a/Documentation/technical/hash-function-transition.txt
+++ b/Documentation/technical/hash-function-transition.txt
@@ -33,16 +33,9 @@ researchers. On 23 February 2017 the SHAttered attack
Git v2.13.0 and later subsequently moved to a hardened SHA-1
implementation by default, which isn't vulnerable to the SHAttered
-attack.
+attack, but SHA-1 is still weak.
-Thus Git has in effect already migrated to a new hash that isn't SHA-1
-and doesn't share its vulnerabilities, its new hash function just
-happens to produce exactly the same output for all known inputs,
-except two PDFs published by the SHAttered researchers, and the new
-implementation (written by those researchers) claims to detect future
-cryptanalytic collision attacks.
-
-Regardless, it's considered prudent to move past any variant of SHA-1
+Thus it's considered prudent to move past any variant of SHA-1
to a new hash. There's no guarantee that further attacks on SHA-1 won't
be published in the future, and those attacks may not have viable
mitigations.
@@ -57,6 +50,38 @@ SHA-1 still possesses the other properties such as fast object lookup
and safe error checking, but other hash functions are equally suitable
that are believed to be cryptographically secure.
+Choice of Hash
+--------------
+The hash to replace the hardened SHA-1 should be stronger than SHA-1
+was: we would like it to be trustworthy and useful in practice for at
+least 10 years.
+
+Some other relevant properties:
+
+1. A 256-bit hash (long enough to match common security practice; not
+   so excessively long as to hurt performance and disk usage).
+
+2. High quality implementations should be widely available (e.g., in
+ OpenSSL and Apple CommonCrypto).
+
+3. The hash function's properties should match Git's needs (e.g. Git
+ requires collision and 2nd preimage resistance and does not require
+ length extension resistance).
+
+4. As a tiebreaker, the hash should be fast to compute (fortunately
+ many contenders are faster than SHA-1).
+
+There were several contenders for a successor hash to SHA-1, including
+SHA-256, SHA-512/256, SHA-256x16, K12, and BLAKE2bp-256.
+
+In late 2018 the project picked SHA-256 as its successor hash.
+
+See 0ed8d8da374 (doc hash-function-transition: pick SHA-256 as
+NewHash, 2018-08-04) and numerous mailing list threads at the time,
+particularly the one starting at
+https://lore.kernel.org/git/20180609224913.GC38834@genre.crustytoothpaste.net/
+for more information.
+
Goals
-----
1. The transition to SHA-256 can be done one local repository at a time.
@@ -94,7 +119,7 @@ Overview
--------
We introduce a new repository format extension. Repositories with this
extension enabled use SHA-256 instead of SHA-1 to name their objects.
-This affects both object names and object content --- both the names
+This affects both object names and object content -- both the names
of objects and all references to other objects within an object are
switched to the new hash function.
@@ -107,7 +132,7 @@ mapping to allow naming objects using either their SHA-1 and SHA-256 names
interchangeably.
"git cat-file" and "git hash-object" gain options to display an object
-in its sha1 form and write an object given its sha1 form. This
+in its SHA-1 form and write an object given its SHA-1 form. This
requires all objects referenced by that object to be present in the
object database so that they can be named using the appropriate name
(using the bidirectional hash mapping).
@@ -115,7 +140,7 @@ object database so that they can be named using the appropriate name
Fetches from a SHA-1 based server convert the fetched objects into
SHA-256 form and record the mapping in the bidirectional mapping table
(see below for details). Pushes to a SHA-1 based server convert the
-objects being pushed into sha1 form so the server does not have to be
+objects being pushed into SHA-1 form so the server does not have to be
aware of the hash function the client is using.
Detailed Design
@@ -151,38 +176,38 @@ repository extensions.
Object names
~~~~~~~~~~~~
-Objects can be named by their 40 hexadecimal digit sha1-name or 64
-hexadecimal digit sha256-name, plus names derived from those (see
+Objects can be named by their 40 hexadecimal digit SHA-1 name or 64
+hexadecimal digit SHA-256 name, plus names derived from those (see
gitrevisions(7)).
-The sha1-name of an object is the SHA-1 of the concatenation of its
-type, length, a nul byte, and the object's sha1-content. This is the
+The SHA-1 name of an object is the SHA-1 of the concatenation of its
+type, length, a nul byte, and the object's SHA-1 content. This is the
traditional <sha1> used in Git to name objects.
-The sha256-name of an object is the SHA-256 of the concatenation of its
-type, length, a nul byte, and the object's sha256-content.
+The SHA-256 name of an object is the SHA-256 of the concatenation of its
+type, length, a nul byte, and the object's SHA-256 content.
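+
+For example, the SHA-1 name of a blob containing the 12 bytes
+"hello world\n" can be reproduced directly in the shell:
+
+----
+$ printf 'blob 12\0hello world\n' | sha1sum
+3b18e512dba79e4c8300dd08aeb37f8e728b8dad  -
+----
+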
Object format
~~~~~~~~~~~~~
The content as a byte sequence of a tag, commit, or tree object named
-by sha1 and sha256 differ because an object named by sha256-name refers to
-other objects by their sha256-names and an object named by sha1-name
-refers to other objects by their sha1-names.
+by SHA-1 and SHA-256 differs because an object named by its SHA-256 name
+refers to other objects by their SHA-256 names, while an object named by its
+SHA-1 name refers to other objects by their SHA-1 names.
-The sha256-content of an object is the same as its sha1-content, except
-that objects referenced by the object are named using their sha256-names
-instead of sha1-names. Because a blob object does not refer to any
-other object, its sha1-content and sha256-content are the same.
+The SHA-256 content of an object is the same as its SHA-1 content, except
+that objects referenced by the object are named using their SHA-256 names
+instead of SHA-1 names. Because a blob object does not refer to any
+other object, its SHA-1 content and SHA-256 content are the same.
-The format allows round-trip conversion between sha256-content and
-sha1-content.
+The format allows round-trip conversion between SHA-256 content and
+SHA-1 content.
Object storage
~~~~~~~~~~~~~~
Loose objects use zlib compression and packed objects use the packed
format described in Documentation/technical/pack-format.txt, just like
-today. The content that is compressed and stored uses sha256-content
-instead of sha1-content.
+today. The content that is compressed and stored uses SHA-256 content
+instead of SHA-1 content.
Pack index
~~~~~~~~~~
@@ -191,21 +216,21 @@ hash functions. They have the following format (all integers are in
network byte order):
- A header appears at the beginning and consists of the following:
- - The 4-byte pack index signature: '\377t0c'
- - 4-byte version number: 3
- - 4-byte length of the header section, including the signature and
+ * The 4-byte pack index signature: '\377t0c'
+ * 4-byte version number: 3
+ * 4-byte length of the header section, including the signature and
version number
- - 4-byte number of objects contained in the pack
- - 4-byte number of object formats in this pack index: 2
- - For each object format:
- - 4-byte format identifier (e.g., 'sha1' for SHA-1)
- - 4-byte length in bytes of shortened object names. This is the
+ * 4-byte number of objects contained in the pack
+ * 4-byte number of object formats in this pack index: 2
+ * For each object format:
+ ** 4-byte format identifier (e.g., 'sha1' for SHA-1)
+ ** 4-byte length in bytes of shortened object names. This is the
shortest possible length needed to make names in the shortened
object name table unambiguous.
- - 4-byte integer, recording where tables relating to this format
+ ** 4-byte integer, recording where tables relating to this format
are stored in this index file, as an offset from the beginning.
- - 4-byte offset to the trailer from the beginning of this file.
- - Zero or more additional key/value pairs (4-byte key, 4-byte
+ * 4-byte offset to the trailer from the beginning of this file.
+ * Zero or more additional key/value pairs (4-byte key, 4-byte
value). Only one key is supported: 'PSRC'. See the "Loose objects
and unreachable objects" section for supported values and how this
is used. All other keys are reserved. Readers must ignore
@@ -213,37 +238,36 @@ network byte order):
- Zero or more NUL bytes. This can optionally be used to improve the
alignment of the full object name table below.
- Tables for the first object format:
- - A sorted table of shortened object names. These are prefixes of
+ * A sorted table of shortened object names. These are prefixes of
the names of all objects in this pack file, packed together
without offset values to reduce the cache footprint of the binary
search for a specific object name.
- - A table of full object names in pack order. This allows resolving
+ * A table of full object names in pack order. This allows resolving
a reference to "the nth object in the pack file" (from a
reachability bitmap or from the next table of another object
format) to its object name.
- - A table of 4-byte values mapping object name order to pack order.
+ * A table of 4-byte values mapping object name order to pack order.
For an object in the table of sorted shortened object names, the
value at the corresponding index in this table is the index in the
previous table for that same object.
-
This can be used to look up the object in reachability bitmaps or
to look up its name in another object format.
- - A table of 4-byte CRC32 values of the packed object data, in the
+ * A table of 4-byte CRC32 values of the packed object data, in the
order that the objects appear in the pack file. This is to allow
compressed data to be copied directly from pack to pack during
repacking without undetected data corruption.
- - A table of 4-byte offset values. For an object in the table of
+ * A table of 4-byte offset values. For an object in the table of
sorted shortened object names, the value at the corresponding
index in this table indicates where that object can be found in
the pack file. These are usually 31-bit pack file offsets, but
large offsets are encoded as an index into the next table with the
most significant bit set.
- - A table of 8-byte offset entries (empty for pack files less than
+ * A table of 8-byte offset entries (empty for pack files less than
2 GiB). Pack files are organized with heavily used objects toward
the front, so most object references should not need to refer to
this table.
@@ -252,10 +276,10 @@ network byte order):
up to and not including the table of CRC32 values.
- Zero or more NUL bytes.
- The trailer consists of the following:
- - A copy of the 20-byte SHA-256 checksum at the end of the
+ * A copy of the 32-byte SHA-256 checksum at the end of the
corresponding packfile.
- - 20-byte SHA-256 checksum of all of the above.
+ * 32-byte SHA-256 checksum of all of the above.
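+
+For illustration, the fixed header fields described at the start of
+this section could be sketched in C like this (the struct and field
+names are made up for this sketch; this is a proposed format, not
+existing Git code, and all integers are in network byte order):
+
+    struct pack_idx_v3_header {
+            uint8_t  signature[4];   /* '\377', 't', '0', 'c' */
+            uint32_t version;        /* 3 */
+            uint32_t header_len;     /* includes signature and version */
+            uint32_t nr_objects;
+            uint32_t nr_formats;     /* 2 */
+            /* followed by one record per object format and zero or
+               more key/value pairs such as 'PSRC' */
+    };
+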
Loose object index
~~~~~~~~~~~~~~~~~~
@@ -288,18 +312,18 @@ To remove entries (e.g. in "git pack-refs" or "git-prune"):
Translation table
~~~~~~~~~~~~~~~~~
-The index files support a bidirectional mapping between sha1-names
-and sha256-names. The lookup proceeds similarly to ordinary object
-lookups. For example, to convert a sha1-name to a sha256-name:
+The index files support a bidirectional mapping between SHA-1 names
+and SHA-256 names. The lookup proceeds similarly to ordinary object
+lookups. For example, to convert a SHA-1 name to a SHA-256 name:
1. Look for the object in idx files. If a match is present in the
- idx's sorted list of truncated sha1-names, then:
- a. Read the corresponding entry in the sha1-name order to pack
+ idx's sorted list of truncated SHA-1 names, then:
+ a. Read the corresponding entry in the SHA-1 name order to pack
name order mapping.
- b. Read the corresponding entry in the full sha1-name table to
+ b. Read the corresponding entry in the full SHA-1 name table to
verify we found the right object. If so, then
- c. Read the corresponding entry in the full sha256-name table.
- That is the object's sha256-name.
+ c. Read the corresponding entry in the full SHA-256 name table.
+ That is the object's SHA-256 name.
2. Check for a loose object. Read lines from loose-object-idx until
we find a match.
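+
+For illustration, the idx-based lookup in step 1 could be sketched in C
+as follows (the function and field names here are invented for this
+sketch and are not Git internals):
+
+    int sha1_to_sha256(struct pack_idx *idx, const unsigned char *sha1,
+                       unsigned char *sha256)
+    {
+            int i = bsearch_shortened_sha1_names(idx, sha1);
+            if (i < 0)
+                    return -1;  /* try the next idx, then loose-object-idx */
+            uint32_t nth = idx->sha1_order_to_pack_order[i];      /* 1a */
+            if (memcmp(idx->full_sha1_names[nth], sha1, 20))
+                    return -1;  /* 1b: short name matched, object did not */
+            memcpy(sha256, idx->full_sha256_names[nth], 32);      /* 1c */
+            return 0;
+    }
+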
@@ -313,10 +337,10 @@ Since all operations that make new objects (e.g., "git commit") add
the new objects to the corresponding index, this mapping is possible
for all objects in the object store.
-Reading an object's sha1-content
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The sha1-content of an object can be read by converting all sha256-names
-its sha256-content references to sha1-names using the translation table.
+Reading an object's SHA-1 content
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The SHA-1 content of an object can be read by converting all SHA-256 names
+that its SHA-256 content references to SHA-1 names using the translation table.
Fetch
~~~~~
@@ -339,7 +363,7 @@ the following steps:
1. index-pack: inflate each object in the packfile and compute its
SHA-1. Objects can contain deltas in OBJ_REF_DELTA format against
objects the client has locally. These objects can be looked up
- using the translation table and their sha1-content read as
+ using the translation table and their SHA-1 content read as
described above to resolve the deltas.
2. topological sort: starting at the "want"s from the negotiation
phase, walk through objects in the pack and emit a list of them,
@@ -348,12 +372,12 @@ the following steps:
(This list only contains objects reachable from the "wants". If the
pack from the server contained additional extraneous objects, then
they will be discarded.)
-3. convert to sha256: open a new (sha256) packfile. Read the topologically
+3. convert to SHA-256: open a new SHA-256 packfile. Read the topologically
sorted list just generated. For each object, inflate its
- sha1-content, convert to sha256-content, and write it to the sha256
- pack. Record the new sha1<->sha256 mapping entry for use in the idx.
+ SHA-1 content, convert to SHA-256 content, and write it to the SHA-256
+ pack. Record the new SHA-1<-->SHA-256 mapping entry for use in the idx.
4. sort: reorder entries in the new pack to match the order of objects
- in the pack the server generated and include blobs. Write a sha256 idx
+ in the pack the server generated and include blobs. Write a SHA-256 idx
file.
5. clean up: remove the SHA-1 based pack file, index, and
topologically sorted list obtained from the server in steps 1
@@ -378,19 +402,20 @@ experimenting to get this to perform well.
Push
~~~~
Push is simpler than fetch because the objects referenced by the
-pushed objects are already in the translation table. The sha1-content
+pushed objects are already in the translation table. The SHA-1 content
of each object being pushed can be read as described in the "Reading
-an object's sha1-content" section to generate the pack written by git
+an object's SHA-1 content" section to generate the pack written by git
send-pack.
Signed Commits
~~~~~~~~~~~~~~
We add a new field "gpgsig-sha256" to the commit object format to allow
signing commits without relying on SHA-1. It is similar to the
-existing "gpgsig" field. Its signed payload is the sha256-content of the
+existing "gpgsig" field. Its signed payload is the SHA-256 content of the
commit object with any "gpgsig" and "gpgsig-sha256" fields removed.
This means commits can be signed
+
1. using SHA-1 only, as in existing signed commit objects
2. using both SHA-1 and SHA-256, by using both gpgsig-sha256 and gpgsig
fields.
@@ -404,10 +429,11 @@ Signed Tags
~~~~~~~~~~~
We add a new field "gpgsig-sha256" to the tag object format to allow
signing tags without relying on SHA-1. Its signed payload is the
-sha256-content of the tag with its gpgsig-sha256 field and "-----BEGIN PGP
+SHA-256 content of the tag with its gpgsig-sha256 field and "-----BEGIN PGP
SIGNATURE-----" delimited in-body signature removed.
This means tags can be signed
+
1. using SHA-1 only, as in existing signed tag objects
2. using both SHA-1 and SHA-256, by using gpgsig-sha256 and an in-body
signature.
@@ -415,11 +441,11 @@ This means tags can be signed
Mergetag embedding
~~~~~~~~~~~~~~~~~~
-The mergetag field in the sha1-content of a commit contains the
-sha1-content of a tag that was merged by that commit.
+The mergetag field in the SHA-1 content of a commit contains the
+SHA-1 content of a tag that was merged by that commit.
-The mergetag field in the sha256-content of the same commit contains the
-sha256-content of the same tag.
+The mergetag field in the SHA-256 content of the same commit contains the
+SHA-256 content of the same tag.
Submodules
~~~~~~~~~~
@@ -456,7 +482,7 @@ packfile marked as UNREACHABLE_GARBAGE (using the PSRC field; see
below). To avoid the race when writing new objects referring to an
about-to-be-deleted object, code paths that write new objects will
need to copy any objects from UNREACHABLE_GARBAGE packs that they
-refer to to new, non-UNREACHABLE_GARBAGE packs (or loose objects).
+refer to into new, non-UNREACHABLE_GARBAGE packs (or loose objects).
UNREACHABLE_GARBAGE packs are then safe to delete if their creation
time (as indicated by the file's mtime) is long enough ago.
@@ -494,7 +520,7 @@ Caveats
-------
Invalid objects
~~~~~~~~~~~~~~~
-The conversion from sha1-content to sha256-content retains any
+The conversion from SHA-1 content to SHA-256 content retains any
brokenness in the original object (e.g., tree entry modes encoded with
leading 0, tree objects whose paths are not sorted correctly, and
commit objects without an author or committer). This is a deliberate
@@ -513,15 +539,15 @@ allow lifting this restriction.
Alternates
~~~~~~~~~~
-For the same reason, a sha256 repository cannot borrow objects from a
-sha1 repository using objects/info/alternates or
+For the same reason, a SHA-256 repository cannot borrow objects from a
+SHA-1 repository using objects/info/alternates or
$GIT_ALTERNATE_OBJECT_REPOSITORIES.
git notes
~~~~~~~~~
-The "git notes" tool annotates objects using their sha1-name as key.
+The "git notes" tool annotates objects using their SHA-1 name as key.
This design does not describe a way to migrate notes trees to use
-sha256-names. That migration is expected to happen separately (for
+SHA-256 names. That migration is expected to happen separately (for
example using a file at the root of the notes tree to describe which
hash it uses).
@@ -531,7 +557,7 @@ Until Git protocol gains SHA-256 support, using SHA-256 based storage
on public-facing Git servers is strongly discouraged. Once Git
protocol gains SHA-256 support, SHA-256 based servers are likely not
to support SHA-1 compatibility, to avoid what may be a very expensive
-hash reencode during clone and to encourage peers to modernize.
+hash re-encode during clone and to encourage peers to modernize.
The design described here allows fetches by SHA-1 clients of a
personal SHA-256 repository because it's not much more difficult than
@@ -555,7 +581,7 @@ unclear:
Git 2.12
-Does this mean Git v2.12.0 is the commit with sha1-name
+Does this mean Git v2.12.0 is the commit with SHA-1 name
e7e07d5a4fcc2a203d9873968ad3e6bd4d7419d7 or the commit with
SHA-256 name beginning with e7e07d5a4fcc2a203d9873968ad3e6bd4d7419d7?
@@ -573,7 +599,7 @@ supports four different modes of operation:
convert any object names written to output to SHA-1, but store
objects using SHA-256. This allows users to test the code with no
visible behavior change except for performance. This allows
- allows running even tests that assume the SHA-1 hash function, to
+ running even tests that assume the SHA-1 hash function, to
sanity-check the behavior of the new mode.
2. ("early transition") Allow both SHA-1 and SHA-256 object names in
@@ -598,44 +624,12 @@ The user can also explicitly specify which format to use for a
particular revision specifier and for output, overriding the mode. For
example:
-git --output-format=sha1 log abac87a^{sha1}..f787cac^{sha256}
-
-Choice of Hash
---------------
-In early 2005, around the time that Git was written, Xiaoyun Wang,
-Yiqun Lisa Yin, and Hongbo Yu announced an attack finding SHA-1
-collisions in 2^69 operations. In August they published details.
-Luckily, no practical demonstrations of a collision in full SHA-1 were
-published until 10 years later, in 2017.
-
-Git v2.13.0 and later subsequently moved to a hardened SHA-1
-implementation by default that mitigates the SHAttered attack, but
-SHA-1 is still believed to be weak.
-
-The hash to replace this hardened SHA-1 should be stronger than SHA-1
-was: we would like it to be trustworthy and useful in practice for at
-least 10 years.
-
-Some other relevant properties:
-
-1. A 256-bit hash (long enough to match common security practice; not
- excessively long to hurt performance and disk usage).
-
-2. High quality implementations should be widely available (e.g., in
- OpenSSL and Apple CommonCrypto).
-
-3. The hash function's properties should match Git's needs (e.g. Git
- requires collision and 2nd preimage resistance and does not require
- length extension resistance).
-
-4. As a tiebreaker, the hash should be fast to compute (fortunately
- many contenders are faster than SHA-1).
-
-We choose SHA-256.
+ git --output-format=sha1 log abac87a^{sha1}..f787cac^{sha256}
Transition plan
---------------
Some initial steps can be implemented independently of one another:
+
- adding a hash function API (vtable)
- teaching fsck to tolerate the gpgsig-sha256 field
- excluding gpgsig-* from the fields copied by "git commit --amend"
@@ -647,10 +641,9 @@ Some initial steps can be implemented independently of one another:
- introducing index v3
- adding support for the PSRC field and safer object pruning
-
The first user-visible change is the introduction of the objectFormat
extension (without compatObjectFormat). This requires:
-- implementing the loose-object-idx
+
- teaching fsck about this mode of operation
- using the hash function API (vtable) when computing object names
- signing objects and verifying signatures
@@ -658,6 +651,8 @@ extension (without compatObjectFormat). This requires:
repository
Next comes introduction of compatObjectFormat:
+
+- implementing the loose-object-idx
- translating object names between object formats
- translating object content between object formats
- generating and verifying signatures in the compat format
@@ -669,10 +664,11 @@ Next comes introduction of compatObjectFormat:
"Object names on the command line" above)
The next step is supporting fetches and pushes to SHA-1 repositories:
+
- allow pushes to a repository using the compat format
- generate a topologically sorted list of the SHA-1 names of fetched
objects
-- convert the fetched packfile to sha256 format and generate an idx
+- convert the fetched packfile to SHA-256 format and generate an idx
file
- re-sort to match the order of objects in the fetched packfile
@@ -730,10 +726,11 @@ adoption.
Using hash functions in parallel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-(e.g. https://public-inbox.org/git/22708.8913.864049.452252@chiark.greenend.org.uk/ )
+(e.g. https://lore.kernel.org/git/22708.8913.864049.452252@chiark.greenend.org.uk/ )
Objects newly created would be addressed by the new hash, but inside
such an object (e.g. commit) it is still possible to address objects
using the old hash function.
+
* You cannot trust its history (needed for bisectability) in the
future without further work
* Maintenance burden as the number of supported hash functions grows
@@ -743,36 +740,38 @@ using the old hash function.
Signed objects with multiple hashes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Instead of introducing the gpgsig-sha256 field in commit and tag objects
-for sha256-content based signatures, an earlier version of this design
-added "hash sha256 <sha256-name>" fields to strengthen the existing
-sha1-content based signatures.
+for SHA-256 content based signatures, an earlier version of this design
+added "hash sha256 <SHA-256 name>" fields to strengthen the existing
+SHA-1 content based signatures.
In other words, a single signature was used to attest to the object
content using both hash functions. This had some advantages:
+
* Using one signature instead of two speeds up the signing process.
* Having one signed payload with both hashes allows the signer to
- attest to the sha1-name and sha256-name referring to the same object.
+ attest to the SHA-1 name and SHA-256 name referring to the same object.
* All users consume the same signature. Broken signatures are likely
to be detected quickly using current versions of git.
However, it also came with disadvantages:
-* Verifying a signed object requires access to the sha1-names of all
+
+* Verifying a signed object requires access to the SHA-1 names of all
objects it references, even after the transition is complete and
the translation table is no longer needed for anything else. To support
- this, the design added fields such as "hash sha1 tree <sha1-name>"
- and "hash sha1 parent <sha1-name>" to the sha256-content of a signed
+ this, the design added fields such as "hash sha1 tree <SHA-1 name>"
+ and "hash sha1 parent <SHA-1 name>" to the SHA-256 content of a signed
commit, complicating the conversion process.
-* Allowing signed objects without a sha1 (for after the transition is
+* Allowing signed objects without a SHA-1 (for after the transition is
complete) complicated the design further, requiring a "nohash sha1"
- field to suppress including "hash sha1" fields in the sha256-content
+ field to suppress including "hash sha1" fields in the SHA-256 content
and signed payload.
Lazily populated translation table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some of the work of building the translation table could be deferred to
push time, but that would significantly complicate and slow down pushes.
-Calculating the sha1-name at object creation time at the same time it is
-being streamed to disk and having its sha256-name calculated should be
+Calculating the SHA-1 name at object creation time, while the object is
+being streamed to disk and its SHA-256 name is calculated, should be
an acceptable cost.
Document History
@@ -782,18 +781,19 @@ Document History
bmwill@google.com, jonathantanmy@google.com, jrnieder@gmail.com,
sbeller@google.com
-Initial version sent to
-http://public-inbox.org/git/20170304011251.GA26789@aiede.mtv.corp.google.com
+* Initial version sent to https://lore.kernel.org/git/20170304011251.GA26789@aiede.mtv.corp.google.com
2017-03-03 jrnieder@gmail.com
Incorporated suggestions from jonathantanmy and sbeller:
-* describe purpose of signed objects with each hash type
-* redefine signed object verification using object content under the
+
+* Describe purpose of signed objects with each hash type
+* Redefine signed object verification using object content under the
first hash function
2017-03-06 jrnieder@gmail.com
+
* Use SHA3-256 instead of SHA2 (thanks, Linus and brian m. carlson).[1][2]
-* Make sha3-based signatures a separate field, avoiding the need for
+* Make SHA3-based signatures a separate field, avoiding the need for
"hash" and "nohash" fields (thanks to peff[3]).
* Add a sorting phase to fetch (thanks to Junio for noticing the need
for this).
@@ -805,23 +805,26 @@ Incorporated suggestions from jonathantanmy and sbeller:
especially Junio).
2017-09-27 jrnieder@gmail.com, sbeller@google.com
-* use placeholder NewHash instead of SHA3-256
-* describe criteria for picking a hash function.
-* include a transition plan (thanks especially to Brandon Williams
+
+* Use placeholder NewHash instead of SHA3-256
+* Describe criteria for picking a hash function.
+* Include a transition plan (thanks especially to Brandon Williams
for fleshing these ideas out)
-* define the translation table (thanks, Shawn Pearce[5], Jonathan
+* Define the translation table (thanks, Shawn Pearce[5], Jonathan
Tan, and Masaya Suzuki)
-* avoid loose object overhead by packing more aggressively in
+* Avoid loose object overhead by packing more aggressively in
"git gc --auto"
Later history:
- See the history of this file in git.git for the history of subsequent
- edits. This document history is no longer being maintained as it
- would now be superfluous to the commit log
+* See the history of this file in git.git for the history of subsequent
+ edits. This document history is no longer being maintained as it
+ would now be superfluous to the commit log
+
+References:
-[1] http://public-inbox.org/git/CA+55aFzJtejiCjV0e43+9oR3QuJK2PiFiLQemytoLpyJWe6P9w@mail.gmail.com/
-[2] http://public-inbox.org/git/CA+55aFz+gkAsDZ24zmePQuEs1XPS9BP_s8O7Q4wQ7LV7X5-oDA@mail.gmail.com/
-[3] http://public-inbox.org/git/20170306084353.nrns455dvkdsfgo5@sigill.intra.peff.net/
-[4] http://public-inbox.org/git/20170304224936.rqqtkdvfjgyezsht@genre.crustytoothpaste.net
-[5] https://public-inbox.org/git/CAJo=hJtoX9=AyLHHpUJS7fueV9ciZ_MNpnEPHUz8Whui6g9F0A@mail.gmail.com/
+ [1] https://lore.kernel.org/git/CA+55aFzJtejiCjV0e43+9oR3QuJK2PiFiLQemytoLpyJWe6P9w@mail.gmail.com/
+ [2] https://lore.kernel.org/git/CA+55aFz+gkAsDZ24zmePQuEs1XPS9BP_s8O7Q4wQ7LV7X5-oDA@mail.gmail.com/
+ [3] https://lore.kernel.org/git/20170306084353.nrns455dvkdsfgo5@sigill.intra.peff.net/
+ [4] https://lore.kernel.org/git/20170304224936.rqqtkdvfjgyezsht@genre.crustytoothpaste.net
+ [5] https://lore.kernel.org/git/CAJo=hJtoX9=AyLHHpUJS7fueV9ciZ_MNpnEPHUz8Whui6g9F0A@mail.gmail.com/
diff --git a/Documentation/technical/http-protocol.txt b/Documentation/technical/http-protocol.txt
index 9c5b6f0fac..96d89ea9b2 100644
--- a/Documentation/technical/http-protocol.txt
+++ b/Documentation/technical/http-protocol.txt
@@ -216,7 +216,7 @@ smart server reply:
S: 001e# service=git-upload-pack\n
S: 0000
S: 004895dcfa3633004da0049d3d0fa03f80589cbcaf31 refs/heads/maint\0multi_ack\n
- S: 0042d049f6c27a2244e12041955e262a404c7faba355 refs/heads/master\n
+ S: 003fd049f6c27a2244e12041955e262a404c7faba355 refs/heads/master\n
S: 003c2cb58b79488a98d2721cea644875a8dd0026b115 refs/tags/v1.0\n
S: 003fa3c2e2402b99163d1d59756e5f207ae21cccba4c refs/tags/v1.0^{}\n
S: 0000
@@ -401,8 +401,9 @@ at all in the request stream:
The stream is terminated by a pkt-line flush (`0000`).
A single "want" or "have" command MUST have one hex formatted
-SHA-1 as its value. Multiple SHA-1s MUST be sent by sending
-multiple commands.
+object name as its value. Multiple object names MUST be sent by sending
+multiple commands. Object names MUST be given using the object format
+negotiated through the `object-format` capability (default SHA-1).
The `have` list is created by popping the first 32 commits
from `c_pending`. Fewer can be supplied if `c_pending` empties.
diff --git a/Documentation/technical/index-format.txt b/Documentation/technical/index-format.txt
index 7c4d67aa6a..65da0daaa5 100644
--- a/Documentation/technical/index-format.txt
+++ b/Documentation/technical/index-format.txt
@@ -3,8 +3,11 @@ Git index format
== The Git index file has the following format
- All binary numbers are in network byte order. Version 2 is described
- here unless stated otherwise.
+ All binary numbers are in network byte order.
+ In a repository using the traditional SHA-1, checksums and object IDs
+ (object names) mentioned below are all computed using SHA-1. Similarly,
+ in SHA-256 repositories, these values are computed using SHA-256.
+ Version 2 is described here unless stated otherwise.
- A 12-byte header consisting of
@@ -23,7 +26,7 @@ Git index format
Extensions are identified by signature. Optional extensions can
be ignored if Git does not understand them.
- Git currently supports cached tree and resolve undo extensions.
+ Git currently supports cache tree and resolve undo extensions.
4-byte extension signature. If the first byte is 'A'..'Z' the
extension is optional and can be ignored.
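+
+  A reader can therefore decide how to treat an unrecognized extension
+  with a single byte test; an illustrative sketch in C (not Git's
+  actual code):
+
+    if (sig[0] < 'A' || sig[0] > 'Z')
+            die("index uses %.4s extension, which we do not understand", sig);
+    /* otherwise the extension is optional: skip over its data */
+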
@@ -32,8 +35,7 @@ Git index format
Extension data
- - 160-bit SHA-1 over the content of the index file before this
- checksum.
+ - Hash checksum over the content of the index file before this checksum.
== Index entry
@@ -42,6 +44,13 @@ Git index format
localization, no special casing of directory separator '/'). Entries
with the same name are sorted by their stage field.
+ An index entry typically represents a file. However, if sparse-checkout
+ is enabled in cone mode (`core.sparseCheckoutCone` is enabled) and the
+ `extensions.sparseIndex` extension is enabled, then the index may
+ contain entries for directories outside of the sparse-checkout definition.
+ These entries have mode `040000`, include the `SKIP_WORKTREE` bit, and
+ their paths end in a directory separator.
+
32-bit ctime seconds, the last time a file's metadata changed
this is stat(2) data
@@ -80,7 +89,7 @@ Git index format
32-bit file size
This is the on-disk size from stat(2), truncated to 32-bit.
- 160-bit SHA-1 for the represented object
+ Object name for the represented object
A 16-bit 'flags' field split into (high to low bits)
@@ -134,14 +143,35 @@ Git index format
== Extensions
-=== Cached tree
-
- Cached tree extension contains pre-computed hashes for trees that can
- be derived from the index. It helps speed up tree object generation
- from index for a new commit.
-
- When a path is updated in index, the path must be invalidated and
- removed from tree cache.
+=== Cache tree
+
+ Since the index does not record entries for directories, the cache
+ entries cannot describe tree objects that already exist in the object
+ database for regions of the index that are unchanged from an existing
+ commit. The cache tree extension stores a recursive tree structure that
+ describes the trees that already exist and completely match sections of
+ the cache entries. This speeds up tree object generation from the index
+ for a new commit by only computing the trees that are "new" to that
+ commit. It also assists when comparing the index to another tree, such
+ as `HEAD^{tree}`, since sections of the index can be skipped when a tree
+ comparison demonstrates equality.
+
+ The recursive tree structure uses nodes that store a number of cache
+ entries, a list of subnodes, and an object ID (OID). The OID references
+ the existing tree for that node, if it is known to exist. The subnodes
+ correspond to subdirectories that themselves have cache tree nodes. The
+ number of cache entries corresponds to the number of cache entries in
+ the index that describe paths within that tree's directory.
+
+ The extension tracks the full directory structure in the cache tree
+ extension, but this is generally smaller than the full cache entry list.
+
+ When a path is updated in the index, Git invalidates all nodes of the
+ recursive cache tree corresponding to the parent directories of that
+ path. We store these tree nodes as being "invalid" by using "-1" as the
+ number of cache entries. Invalid nodes still store a span of index
+ entries, allowing Git to focus its efforts when reconstructing a full
+ cache tree.
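+
+ An illustrative C sketch of one node of this recursive structure
+ (Git's in-memory representation in cache-tree.h is similar in spirit,
+ though not identical):
+
+    struct cache_tree_node {
+            int entry_count;        /* covered index entries, -1 = invalid */
+            int subtree_nr;         /* number of subnodes */
+            struct object_id oid;   /* existing tree, if entry_count >= 0 */
+            struct cache_tree_node **subtrees;
+    };
+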
The signature for this extension is { 'T', 'R', 'E', 'E' }.
@@ -160,8 +190,8 @@ Git index format
- A newline (ASCII 10); and
- - 160-bit object name for the object that would result from writing
- this span of index as a tree.
+ - Object name for the object that would result from writing this span
+ of index as a tree.
An entry can be in an invalidated state and is represented by having
a negative number in the entry_count field. In this case, there is no
@@ -172,7 +202,8 @@ Git index format
first entry represents the root level of the repository, followed by the
first subtree--let's call this A--of the root level (with its name
relative to the root level), followed by the first subtree of A (with
- its name relative to A), ...
+ its name relative to A), and so on. The specified number of subtrees
+ indicates when the current level of the recursive stack is complete.
=== Resolve undo
@@ -198,7 +229,7 @@ Git index format
stage 1 to 3 (a missing stage is represented by "0" in this field);
and
- - At most three 160-bit object names of the entry in stages from 1 to 3
+ - At most three object names of the entry in stages from 1 to 3
(nothing is written for a missing stage).
=== Split index
@@ -211,8 +242,8 @@ Git index format
The extension consists of:
- - 160-bit SHA-1 of the shared index file. The shared index file path
- is $GIT_DIR/sharedindex.<SHA-1>. If all 160 bits are zero, the
+ - Hash of the shared index file. The shared index file path
+ is $GIT_DIR/sharedindex.<hash>. If all bits are zero, the
index does not require a shared index file.
- An ewah-encoded delete bitmap, each bit represents an entry in the
@@ -249,14 +280,14 @@ Git index format
- Stat data of $GIT_DIR/info/exclude. See "Index entry" section from
ctime field until "file size".
- - Stat data of core.excludesfile
+ - Stat data of core.excludesFile
- 32-bit dir_flags (see struct dir_struct)
- - 160-bit SHA-1 of $GIT_DIR/info/exclude. Null SHA-1 means the file
+ - Hash of $GIT_DIR/info/exclude. A null hash means the file
does not exist.
- - 160-bit SHA-1 of core.excludesfile. Null SHA-1 means the file does
+ - Hash of core.excludesFile. A null hash means the file does
not exist.
- NUL-terminated string of per-dir exclude file name. This usually
@@ -285,13 +316,13 @@ The remaining data of each directory block is grouped by type:
- An ewah bitmap, the n-th bit records "check-only" bit of
read_directory_recursive() for the n-th directory.
- - An ewah bitmap, the n-th bit indicates whether SHA-1 and stat data
+ - An ewah bitmap, the n-th bit indicates whether hash and stat data
is valid for the n-th directory and exists in the next data.
- An array of stat data. The n-th data corresponds with the n-th
"one" bit in the previous ewah bitmap.
- - An array of SHA-1. The n-th SHA-1 corresponds with the n-th "one" bit
+ - An array of hashes. The n-th hash corresponds with the n-th "one" bit
in the previous ewah bitmap.
- One NUL.
@@ -304,12 +335,18 @@ The remaining data of each directory block is grouped by type:
The extension starts with
- - 32-bit version number: the current supported version is 1.
+ - 32-bit version number: the current supported versions are 1 and 2.
- - 64-bit time: the extension data reflects all changes through the given
+ - (Version 1)
+ 64-bit time: the extension data reflects all changes through the given
time which is stored as the nanoseconds elapsed since midnight,
January 1, 1970.
+ - (Version 2)
+ A null terminated string: an opaque token defined by the file system
+ monitor application. The extension data reflects all changes relative
+ to that token.
+
- 32-bit bitmap size: the size of the CE_FSMONITOR_VALID bitmap.
- An ewah bitmap, the n-th bit indicates whether the n-th index entry
@@ -318,7 +355,7 @@ The remaining data of each directory block is grouped by type:
== End of Index Entry
The End of Index Entry (EOIE) is used to locate the end of the variable
- length index entries and the begining of the extensions. Code can take
+ length index entries and the beginning of the extensions. Code can take
advantage of this to quickly locate the index extensions without having
to parse through all of the index entries.
@@ -330,12 +367,12 @@ The remaining data of each directory block is grouped by type:
- 32-bit offset to the end of the index entries
- - 160-bit SHA-1 over the extension types and their sizes (but not
+ - Hash over the extension types and their sizes (but not
their contents). E.g. if we have "TREE" extension that is N-bytes
long, "REUC" extension that is M-bytes long, followed by "EOIE",
then the hash would be:
- SHA-1("TREE" + <binary representation of N> +
+ Hash("TREE" + <binary representation of N> +
"REUC" + <binary representation of M>)
== Index Entry Offset Table
@@ -351,7 +388,19 @@ The remaining data of each directory block is grouped by type:
- A number of index offset entries each consisting of:
- - 32-bit offset from the begining of the file to the first cache entry
+ - 32-bit offset from the beginning of the file to the first cache entry
in this block of entries.
- 32-bit count of cache entries in this block
+
+== Sparse Directory Entries
+
+ When using sparse-checkout in cone mode, some entire directories within
+ the index can be summarized by pointing to a tree object instead of the
+ entire expanded list of paths within that tree. An index containing such
+ entries is a "sparse index". Index format versions 4 and earlier were not
+ implemented with such entries in mind. Thus, for these versions, an
+ index containing sparse directory entries will include this extension
+ with signature { 's', 'd', 'i', 'r' }. Like the split-index extension,
+ tools should avoid interacting with a sparse index unless they understand
+ this extension.
diff --git a/Documentation/technical/multi-pack-index.txt b/Documentation/technical/multi-pack-index.txt
index d7e57639f7..fb688976c4 100644
--- a/Documentation/technical/multi-pack-index.txt
+++ b/Documentation/technical/multi-pack-index.txt
@@ -36,15 +36,16 @@ Design Details
directory of an alternate. It refers only to packfiles in that
same directory.
-- The pack.multiIndex config setting must be on to consume MIDX files.
+- The core.multiPackIndex config setting must be on to consume MIDX files.
- The file format includes parameters for the object ID hash
function, so a future change of hash algorithm does not require
a change in format.
- The MIDX keeps only one record per object ID. If an object appears
- in multiple packfiles, then the MIDX selects the copy in the most-
- recently modified packfile.
+ in multiple packfiles, then the MIDX selects the copy in the
+ preferred packfile, otherwise selecting from the most-recently
+ modified packfile.
- If there exist packfiles in the pack directory not registered in
the MIDX, then those packfiles are loaded into the `packed_git`
@@ -60,10 +61,6 @@ Design Details
Future Work
-----------
-- Add a 'verify' subcommand to the 'git midx' builtin to verify the
- contents of the multi-pack-index file match the offsets listed in
- the corresponding pack-indexes.
-
- The multi-pack-index allows many packfiles, especially in a context
where repacking is expensive (such as a very large repo), or
unexpected maintenance time is unacceptable (such as a high-demand
@@ -102,8 +99,8 @@ Related Links
[0] https://bugs.chromium.org/p/git/issues/detail?id=6
Chromium work item for: Multi-Pack Index (MIDX)
-[1] https://public-inbox.org/git/20180107181459.222909-1-dstolee@microsoft.com/
+[1] https://lore.kernel.org/git/20180107181459.222909-1-dstolee@microsoft.com/
An earlier RFC for the multi-pack-index feature
-[2] https://public-inbox.org/git/alpine.DEB.2.20.1803091557510.23109@alexmv-linux/
+[2] https://lore.kernel.org/git/alpine.DEB.2.20.1803091557510.23109@alexmv-linux/
Git Merge 2018 Contributor's summit notes (includes discussion of MIDX)
diff --git a/Documentation/technical/pack-format.txt b/Documentation/technical/pack-format.txt
index cab5bdd2ff..8d2f42f29e 100644
--- a/Documentation/technical/pack-format.txt
+++ b/Documentation/technical/pack-format.txt
@@ -1,6 +1,12 @@
Git pack format
===============
+== Checksums and object IDs
+
+In a repository using the traditional SHA-1, pack checksums, index checksums,
+and object IDs (object names) mentioned below are all computed using SHA-1.
+Similarly, in SHA-256 repositories, these values are computed using SHA-256.
+
== pack-*.pack files have the following format:
- A header appears at the beginning and consists of the following:
@@ -26,7 +32,7 @@ Git pack format
(deltified representation)
n-byte type and length (3-bit type, (n-1)*7+4-bit length)
- 20-byte base object name if OBJ_REF_DELTA or a negative relative
+ base object name if OBJ_REF_DELTA or a negative relative
offset from the delta object's position in the pack if this
is an OBJ_OFS_DELTA object
compressed delta data
@@ -34,7 +40,7 @@ Git pack format
Observation: length of each object is encoded in a variable
length format and is not constrained to 32-bit or anything.
- - The trailer records 20-byte SHA-1 checksum of all of the above.
+ - The trailer records a pack checksum of all of the above.
=== Object types
@@ -49,6 +55,18 @@ Valid object types are:
Type 5 is reserved for future expansion. Type 0 is invalid.
+=== Size encoding
+
+This document uses the following "size encoding" of non-negative
+integers: From each byte, the seven least significant bits are
+used to form the resulting integer. As long as the most significant
+bit is 1, this process continues; the byte with MSB 0 provides the
+last seven bits. The seven-bit chunks are concatenated. Later
+values are more significant.
+
+This size encoding should not be confused with the "offset encoding",
+which is also used in this document.
+
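+An illustrative decoder for this size encoding, in C (a sketch, not
+Git's actual implementation):
+
+    uint64_t decode_size(const unsigned char **bufp)
+    {
+            const unsigned char *buf = *bufp;
+            unsigned char c = *buf++;
+            uint64_t result = c & 0x7f;
+            int shift = 7;
+
+            while (c & 0x80) {      /* MSB 1: another byte follows */
+                    c = *buf++;
+                    result |= (uint64_t)(c & 0x7f) << shift;
+                    shift += 7;     /* later values are more significant */
+            }
+            *bufp = buf;
+            return result;
+    }
+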
=== Deltified representation
Conceptually there are only four object types: commit, tree, tag and
@@ -58,8 +76,8 @@ ofs-delta and ref-delta, which is only valid in a pack file.
Both ofs-delta and ref-delta store the "delta" to be applied to
another object (called 'base object') to reconstruct the object. The
-difference between them is, ref-delta directly encodes 20-byte base
-object name. If the base object is in the same pack, ofs-delta encodes
+difference between them is that ref-delta directly encodes the base
+object name. If the base object is in the same pack, ofs-delta encodes
the offset of the base object in the pack instead.
The base object could also be deltified if it's in the same pack.
@@ -67,7 +85,10 @@ Ref-delta can also refer to an object outside the pack (i.e. the
so-called "thin pack"). When stored on disk however, the pack should
be self contained to avoid cyclic dependency.
-The delta data is a sequence of instructions to reconstruct an object
+The delta data starts with the size of the base object and the
+size of the object to be reconstructed. These sizes are
+encoded using the size encoding from above. The remainder of
+the delta data is a sequence of instructions to reconstruct the object
from the base object. If the base object is deltified, it must be
converted to canonical form first. Each instruction appends more and
more data to the target object until it's complete. There are two
@@ -143,14 +164,14 @@ This is the instruction reserved for future expansion.
object is stored in the packfile as the offset from the
beginning.
- 20-byte object name.
+ one object name of the appropriate size.
- The file is concluded with a trailer:
- A copy of the 20-byte SHA-1 checksum at the end of
- corresponding packfile.
+ A copy of the pack checksum at the end of the corresponding
+ packfile.
- 20-byte SHA-1-checksum of all of the above.
+ Index checksum of all of the above.
Pack Idx file:
@@ -198,7 +219,7 @@ Pack file entry: <+
If it is not DELTA, then deflated bytes (the size above
is the size before compression).
If it is REF_DELTA, then
- 20-byte base object name SHA-1 (the size above is the
+ base object name (the size above is the
size of the delta data that follows).
delta data, deflated.
If it is OFS_DELTA, then
@@ -227,9 +248,9 @@ Pack file entry: <+
- A 256-entry fan-out table just like v1.
- - A table of sorted 20-byte SHA-1 object names. These are
- packed together without offset values to reduce the cache
- footprint of the binary search for a specific object name.
+ - A table of sorted object names. These are packed together
+ without offset values to reduce the cache footprint of the
+ binary search for a specific object name.
- A table of 4-byte CRC32 values of the packed object data.
This is new in v2 so compressed data can be copied directly
@@ -248,10 +269,30 @@ Pack file entry: <+
- The same trailer as a v1 pack file:
- A copy of the 20-byte SHA-1 checksum at the end of
+ A copy of the pack checksum at the end of
corresponding packfile.
- 20-byte SHA-1-checksum of all of the above.
+ Index checksum of all of the above.
+
+== pack-*.rev files have the format:
+
+ - A 4-byte magic number '0x52494458' ('RIDX').
+
+ - A 4-byte version identifier (= 1).
+
+ - A 4-byte hash function identifier (= 1 for SHA-1, 2 for SHA-256).
+
+ - A table of index positions (one per packed object, num_objects in
+ total, each a 4-byte unsigned integer in network order), sorted by
+ their corresponding offsets in the packfile.
+
+ - A trailer, containing a:
+
+ checksum of the corresponding packfile, and
+
+ a checksum of all of the above.
+
+All 4-byte numbers are in network order.
== multi-pack-index (MIDX) files have the following format:
@@ -273,7 +314,12 @@ HEADER:
Git only writes or recognizes version 1.
1-byte Object Id Version
- Git only writes or recognizes version 1 (SHA1).
+ We infer the length of object IDs (OIDs) from this value:
+ 1 => SHA-1
+ 2 => SHA-256
+ If the hash type does not match the repository's hash algorithm,
+ the multi-pack-index file should be ignored with a warning
+ presented to the user.
1-byte number of "chunks"
@@ -290,6 +336,9 @@ CHUNK LOOKUP:
(Chunks are provided in file-order, so you can infer the length
using the next chunk position if necessary.)
+ The CHUNK LOOKUP matches the table of contents from
+ link:technical/chunk-format.html[the chunk-based file format].
+
The remaining data in the body is described one chunk at a time, and
these chunks may be given in any order. Chunks are required unless
otherwise specified.
@@ -315,10 +364,11 @@ CHUNK DATA:
Stores two 4-byte values for every object.
1: The pack-int-id for the pack storing this object.
2: The offset within the pack.
- If all offsets are less than 2^31, then the large offset chunk
+ If all offsets are less than 2^32, then the large offset chunk
will not exist and offsets are stored as in IDX v1.
If there is at least one offset value larger than 2^32-1, then
- the large offset chunk must exist. If the large offset chunk
+ the large offset chunk must exist, and offsets larger than
+ 2^31-1 must be stored in it instead. If the large offset chunk
exists and the 31st bit is on, then removing that bit reveals
the row in the large offsets containing the 8-byte offset of
this object.
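+
+    An illustrative C sketch of that lookup (the names are invented for
+    this sketch, and byte-order conversion is omitted):
+
+        uint64_t midx_object_offset(uint32_t off, const uint64_t *large_offsets)
+        {
+                if (off & 0x80000000)
+                        return large_offsets[off & 0x7fffffff];
+                return off;
+        }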
@@ -328,4 +378,87 @@ CHUNK DATA:
TRAILER:
- 20-byte SHA1-checksum of the above contents.
+ Index checksum of the above contents.
+
+== multi-pack-index reverse indexes
+
+Similar to the pack-based reverse index, the multi-pack index can also
+be used to generate a reverse index.
+
+Instead of mapping between offset, pack-, and index position, this
+reverse index maps between an object's position within the MIDX, and
+that object's position within a pseudo-pack that the MIDX describes
+(i.e., the ith entry of the multi-pack reverse index holds the MIDX
+position of the ith object in pseudo-pack order).
+
+To clarify the difference between these orderings, consider a multi-pack
+reachability bitmap (which does not yet exist, but is what we are
+building towards here). Each bit needs to correspond to an object in the
+MIDX, and so we need an efficient mapping from bit position to MIDX
+position.
+
+One solution is to let bits occupy the same position in the oid-sorted
+index stored by the MIDX. But because oids are effectively random, their
+resulting reachability bitmaps would have no locality, and thus compress
+poorly. (This is the reason that single-pack bitmaps use the pack
+ordering, and not the .idx ordering, for the same purpose.)
+
+So we'd like to define an ordering for the whole MIDX based around
+pack ordering, which has far better locality (and thus compresses more
+efficiently). We can think of a pseudo-pack created by the concatenation
+of all of the packs in the MIDX. E.g., if we had a MIDX with three packs
+(a, b, c), with 10, 15, and 20 objects respectively, we can imagine an
+ordering of the objects like:
+
+ |a,0|a,1|...|a,9|b,0|b,1|...|b,14|c,0|c,1|...|c,19|
+
+where the ordering of the packs is defined by the MIDX's pack list,
+and then the ordering of objects within each pack is the same as the
+order in the actual packfile.
+
+Given the list of packs and their counts of objects, you can
+naïvely reconstruct that pseudo-pack ordering (e.g., the object at
+position 27 must be (c,1) because packs "a" and "b" consumed 25 of the
+slots). But there's a catch. Objects may be duplicated between packs, in
+which case the MIDX only stores one pointer to the object (and thus we'd
+want only one slot in the bitmap).
+
+Callers could handle duplicates themselves by reading objects in order
+of their bit-position, but that's linear in the number of objects, and
+much too expensive for ordinary bitmap lookups. Building a reverse index
+solves this, since it is the logical inverse of the index, and that
+index has already removed duplicates. But, building a reverse index on
+the fly can be expensive. Since we already have an on-disk format for
+pack-based reverse indexes, let's reuse it for the MIDX's pseudo-pack,
+too.
+
+Objects from the MIDX are ordered as follows to string together the
+pseudo-pack. Let `pack(o)` return the pack from which `o` was selected
+by the MIDX, and define an ordering of packs based on their numeric ID
+(as stored by the MIDX). Let `offset(o)` return the object offset of `o`
+within `pack(o)`. Then, compare `o1` and `o2` as follows:
+
+ - If one of `pack(o1)` and `pack(o2)` is preferred and the other
+ is not, then the preferred one sorts first.
++
+(This is a detail that allows the MIDX bitmap to determine which
+pack should be used by the pack-reuse mechanism, since it can ask
+the MIDX for the pack containing the object at bit position 0).
+
+ - If `pack(o1) ≠ pack(o2)`, then sort the two objects in descending
+ order based on the pack ID.
+
+ - Otherwise, `pack(o1) = pack(o2)`, and the objects are sorted in
+ pack-order (i.e., `o1` sorts ahead of `o2` exactly when `offset(o1)
+ < offset(o2)`).
+
+In short, a MIDX's pseudo-pack is the de-duplicated concatenation of
+objects in packs stored by the MIDX, laid out in pack order, and the
+packs arranged in MIDX order (with the preferred pack coming first).
+
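+An illustrative C comparator implementing these rules (`preferred()`,
+`pack_id()`, and `offset()` are hypothetical accessors standing in for
+MIDX lookups):
+
+    int pseudo_pack_cmp(const struct object *o1, const struct object *o2)
+    {
+            if (preferred(o1) != preferred(o2))
+                    return preferred(o1) ? -1 : 1;  /* preferred pack first */
+            if (pack_id(o1) != pack_id(o2))         /* by pack ID, per the */
+                    return pack_id(o1) > pack_id(o2) ? -1 : 1; /* rule above */
+            return offset(o1) < offset(o2) ? -1 : 1; /* pack order */
+    }
+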
+Finally, note that the MIDX's reverse index is not stored as a chunk in
+the multi-pack-index itself. This is done because the reverse index
+includes the checksum of the pack or MIDX to which it belongs, which
+makes it impossible to write in the MIDX. To avoid races when rewriting
+the MIDX, a MIDX reverse index includes the MIDX's checksum in its
+filename (e.g., `multi-pack-index-xyz.rev`).
diff --git a/Documentation/technical/pack-protocol.txt b/Documentation/technical/pack-protocol.txt
index c73e72de0e..e13a2c064d 100644
--- a/Documentation/technical/pack-protocol.txt
+++ b/Documentation/technical/pack-protocol.txt
@@ -96,7 +96,7 @@ Basically what the Git client is doing to connect to an 'upload-pack'
process on the server side over the Git protocol is this:
$ echo -e -n \
- "0039git-upload-pack /schacon/gitbook.git\0host=example.com\0" |
+ "003agit-upload-pack /schacon/gitbook.git\0host=example.com\0" |
nc -v example.com 9418
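+
+(The 4-hex-digit pkt-line prefix counts its own four bytes plus the
+payload, which is why the payload above yields "003a": 54 payload bytes
+plus 4. An illustrative C sketch of emitting one pkt-line, using an
+explicit length because payloads may contain NUL bytes:)
+
+    void write_pkt_line(FILE *out, const char *payload, size_t len)
+    {
+            fprintf(out, "%04zx", len + 4);  /* e.g. 54 + 4 -> "003a" */
+            fwrite(payload, 1, len, out);
+    }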
@@ -171,9 +171,9 @@ with a version number (if "version=1" is sent as an Extra Parameter),
and a listing of each reference it has (all branches and tags) along
with the object name that each reference currently points to.
- $ echo -e -n "0044git-upload-pack /schacon/gitbook.git\0host=example.com\0\0version=1\0" |
+ $ echo -e -n "0045git-upload-pack /schacon/gitbook.git\0host=example.com\0\0version=1\0" |
nc -v example.com 9418
- 000aversion 1
+ 000eversion 1
00887217a7c7e582c46cec22a130adf4b9d7d950fba0 HEAD\0multi_ack thin-pack
side-band side-band-64k ofs-delta shallow no-progress include-tag
00441d3fcd5ced445d1abc402225c0b8a1299641f497 refs/heads/integration
@@ -503,8 +503,8 @@ The reference discovery phase is done nearly the same way as it is in the
fetching protocol. Each reference obj-id and name on the server is sent
in packet-line format to the client, followed by a flush-pkt. The only
real difference is that the capability listing is different - the only
-possible values are 'report-status', 'delete-refs', 'ofs-delta' and
-'push-options'.
+possible values are 'report-status', 'report-status-v2', 'delete-refs',
+'ofs-delta', 'atomic' and 'push-options'.
Reference Update Request and Packfile Transfer
----------------------------------------------
@@ -625,7 +625,7 @@ Report Status
-------------
After receiving the pack data from the sender, the receiver sends a
-report if 'report-status' capability is in effect.
+report if 'report-status' or 'report-status-v2' capability is in effect.
It is a short listing of what happened in that update. It will first
list the status of the packfile unpacking as either 'unpack ok' or
'unpack [error]'. Then it will list the status for each of the references
@@ -644,7 +644,42 @@ update was successful, or 'ng [refname] [error]' if the update was not.
command-ok = PKT-LINE("ok" SP refname)
command-fail = PKT-LINE("ng" SP refname SP error-msg)
- error-msg = 1*(OCTECT) ; where not "ok"
+ error-msg = 1*(OCTET) ; where not "ok"
+----
+
+The 'report-status-v2' capability extends the protocol by adding new option
+lines in order to support reporting of references rewritten by the
+'proc-receive' hook. The 'proc-receive' hook may handle a command for a
+pseudo-reference which may create or update one or more references, and each
+reference may have a different name, new-oid, and old-oid.
+
+----
+ report-status-v2 = unpack-status
+ 1*(command-status-v2)
+ flush-pkt
+
+ unpack-status = PKT-LINE("unpack" SP unpack-result)
+ unpack-result = "ok" / error-msg
+
+ command-status-v2 = command-ok-v2 / command-fail
+ command-ok-v2 = command-ok
+ *option-line
+
+ command-ok = PKT-LINE("ok" SP refname)
+ command-fail = PKT-LINE("ng" SP refname SP error-msg)
+
+ error-msg = 1*(OCTET) ; where not "ok"
+
+ option-line = *1(option-refname)
+ *1(option-old-oid)
+ *1(option-new-oid)
+ *1(option-forced-update)
+
+ option-refname = PKT-LINE("option" SP "refname" SP refname)
+ option-old-oid = PKT-LINE("option" SP "old-oid" SP obj-id)
+ option-new-oid = PKT-LINE("option" SP "new-oid" SP obj-id)
+ option-forced-update = PKT-LINE("option" SP "forced-update")
+
----
Updates can be unsuccessful for a number of reasons. The reference can have
diff --git a/Documentation/technical/packfile-uri.txt b/Documentation/technical/packfile-uri.txt
new file mode 100644
index 0000000000..1eb525fe76
--- /dev/null
+++ b/Documentation/technical/packfile-uri.txt
@@ -0,0 +1,82 @@
+Packfile URIs
+=============
+
+This feature allows servers to serve part of their packfile response as URIs.
+This allows server designs that improve scalability in bandwidth and CPU usage
+(for example, by serving some data through a CDN), and (in the future) provides
+some measure of resumability to clients.
+
+This feature is available only in protocol version 2.
+
+Protocol
+--------
+
+The server advertises the `packfile-uris` capability.
+
+If the client then communicates which protocols (HTTPS, etc.) it supports with
+a `packfile-uris` argument, the server MAY send a `packfile-uris` section
+directly before the `packfile` section (right after `wanted-refs` if it is
+sent) containing URIs of any of the given protocols. The URIs point to
+packfiles that use only features that the client has declared that it supports
+(e.g. ofs-delta and thin-pack). See protocol-v2.txt for the documentation of
+this section.
+
+Clients should then download and index all the given URIs (in addition to
+downloading and indexing the packfile given in the `packfile` section of the
+response) before performing the connectivity check.
+
+Server design
+-------------
+
+The server can be trivially made compatible with the proposed protocol by
+having it advertise `packfile-uris`, tolerating the client sending
+`packfile-uris`, and never sending any `packfile-uris` section. But we should
+include some sort of non-trivial implementation in the Minimum Viable Product,
+at least so that we can test the client.
+
+This is the implementation: a feature, marked experimental, that allows the
+server to be configured by one or more `uploadpack.blobPackfileUri=
+<object-hash> <pack-hash> <uri>` entries. Whenever the list of objects to be
+sent is assembled, all such blobs are excluded, replaced with URIs. As noted
+in "Future work" below, the server can evolve in the future to support
+excluding other objects (or other implementations of servers could be made
+that support excluding other objects) without needing a protocol change, so
+clients should not expect that packfiles downloaded in this way only contain
+single blobs.
+
+Client design
+-------------
+
+The client has a config variable `fetch.uriprotocols` that determines which
+protocols the end user is willing to use. By default, this is empty.
+
+When the client downloads the given URIs, it should store them with "keep"
+files, just like it does with the packfile in the `packfile` section. These
+additional "keep" files can only be removed after the refs have been updated -
+just like the "keep" file for the packfile in the `packfile` section.
+
+The division of work (initial fetch + additional URIs) introduces convenient
+points for resumption of an interrupted clone - such resumption can be done
+after the Minimum Viable Product (see "Future work").
+
+Future work
+-----------
+
+The protocol design allows some evolution of the server and client without any
+need for protocol changes, so only a small-scoped design is included here to
+form the MVP. For example, the following can be done:
+
+ * On the server, more sophisticated means of excluding objects (e.g. by
+ specifying a commit to represent that commit and all objects that it
+ references).
+ * On the client, resumption of clone. If a clone is interrupted, information
+ could be recorded in the repository's config and a "clone-resume" command
+ can resume the clone in progress. (Resumption of subsequent fetches is more
+ difficult because that must deal with the user wanting to use the repository
+ even after the fetch was interrupted.)
+
+There are some possible features that will require a change in protocol:
+
+ * Additional HTTP headers (e.g. authentication)
+ * Byte range support
+ * Different file formats referenced by URIs (e.g. raw object)
diff --git a/Documentation/technical/parallel-checkout.txt b/Documentation/technical/parallel-checkout.txt
new file mode 100644
index 0000000000..e790258a1a
--- /dev/null
+++ b/Documentation/technical/parallel-checkout.txt
@@ -0,0 +1,270 @@
+Parallel Checkout Design Notes
+==============================
+
+The "Parallel Checkout" feature attempts to use multiple processes to
+parallelize the work of uncompressing the blobs, applying in-core
+filters, and writing the resulting contents to the working tree during a
+checkout operation. It can be used by all checkout-related commands,
+such as `clone`, `checkout`, `reset`, `sparse-checkout`, and others.
+
+These commands share the following basic structure:
+
+* Step 1: Read the current index file into memory.
+
+* Step 2: Modify the in-memory index based upon the command, and
+ temporarily mark all cache entries that need to be updated.
+
+* Step 3: Populate the working tree to match the new candidate index.
+ This includes iterating over all of the to-be-updated cache entries
+ and deleting, creating, or overwriting the associated files in the working
+ tree.
+
+* Step 4: Write the new index to disk.
+
+Step 3 is the focus of the "parallel checkout" effort described here.
+
+Sequential Implementation
+-------------------------
+
+For the purposes of discussion here, the current sequential
+implementation of Step 3 is divided into 3 parts, each one implemented in
+its own function:
+
+* Step 3a: `unpack-trees.c:check_updates()` contains a series of
+ sequential loops iterating over the `cache_entry` array. The main
+ loop in this function calls the Step 3b function for each of the
+ to-be-updated entries.
+
+* Step 3b: `entry.c:checkout_entry()` examines the existing working tree
+ for file conflicts, collisions, and unsaved changes. It removes files
+ and creates leading directories as necessary. It calls the Step 3c
+ function for each entry to be written.
+
+* Step 3c: `entry.c:write_entry()` loads the blob into memory, smudges
+ it if necessary, creates the file in the working tree, writes the
+ smudged contents, calls `fstat()` or `lstat()`, and updates the
+ associated `cache_entry` struct with the stat information gathered.
+
+It wouldn't be safe to perform Step 3b in parallel, as there could be
+race conditions between file creations and removals. Instead, the
+parallel checkout framework lets the sequential code handle Step 3b,
+and uses parallel workers to replace the sequential
+`entry.c:write_entry()` calls from Step 3c.
+
+Rejected Multi-Threaded Solution
+--------------------------------
+
+The most "straightforward" implementation would be to spread the set of
+to-be-updated cache entries across multiple threads. But due to the
+thread-unsafe functions in the ODB code, we would have to use locks to
+coordinate the parallel operation. An early prototype of this solution
+showed that the multi-threaded checkout would bring performance
+improvements over the sequential code, but there was still too much lock
+contention. A `perf` profile indicated that around 20% of the runtime
+during a local Linux clone (on an SSD) was spent in locking functions.
+For this reason, this approach was rejected in favor of using multiple
+child processes, which led to better performance.
+
+Multi-Process Solution
+----------------------
+
+Parallel checkout alters the aforementioned Step 3 to use multiple
+`checkout--worker` background processes to distribute the work. The
+long-running worker processes are controlled by the foreground Git
+command using the existing run-command API.
+
+Overview
+~~~~~~~~
+
+Step 3b is only slightly altered; for each entry to be checked out, the
+main process performs the following steps:
+
+* M1: Check whether there is any untracked or unclean file in the
+ working tree which would be overwritten by this entry, and decide
+ whether to proceed (removing the file(s)) or not.
+
+* M2: Create the leading directories.
+
+* M3: Load the conversion attributes for the entry's path.
+
+* M4: Check, based on the entry's type and conversion attributes,
+ whether the entry is eligible for parallel checkout (more on this
+ later). If it is eligible, enqueue the entry and the loaded
+ attributes to later write the entry in parallel. If not, write the
+ entry right away, using the default sequential code.
+
+Note: we save the conversion attributes associated with each entry
+because the workers don't have access to the main process' index state,
+so they can't load the attributes by themselves (and the attributes are
+needed to properly smudge the entry). Additionally, this has a positive
+impact on performance as (1) we don't need to load the attributes twice
+and (2) the attributes machinery is optimized to handle paths in
+sequential order.
+
+After all entries have passed through the above steps, the main process
+checks if the number of enqueued entries is sufficient to spread among
+the workers. If not, it just writes them sequentially. Otherwise, it
+spawns the workers and distributes the queued entries uniformly in
+contiguous chunks. This aims to minimize the chances of two workers
+writing to the same directory simultaneously, which could increase lock
+contention in the kernel.
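+
+As a sketch of that distribution (`items`, `nr`, `workers`, and
+`send_items_to_worker()` are hypothetical names here, not the actual
+implementation), dividing `nr` queued entries into contiguous chunks
+could look like:
+
+----------------------------------------------
+/* Each worker gets either `base` or `base + 1` consecutive items. */
+size_t base = nr / workers, extra = nr % workers, next = 0;
+
+for (size_t w = 0; w < workers; w++) {
+	size_t count = base + (w < extra ? 1 : 0);
+	send_items_to_worker(w, &items[next], count);
+	next += count;
+}
+----------------------------------------------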
+
+Then, for each assigned item, each worker:
+
+* W1: Checks if there is any non-directory file in the leading part of
+  the entry's path or if there already exists a file at the entry's path.
+  If so, marks the entry with `PC_ITEM_COLLIDED` and skips it (more on
+  this later).
+
+* W2: Creates the file (with O_CREAT and O_EXCL).
+
+* W3: Loads the blob into memory (inflating and delta reconstructing
+ it).
+
+* W4: Applies any required in-process filter, like end-of-line
+ conversion and re-encoding.
+
+* W5: Writes the result to the file descriptor opened at W2.
+
+* W6: Calls `fstat()` or `lstat()` on the just-written path, and sends
+ the result back to the main process, together with the end status of
+ the operation and the item's identification number.
+
+Note that, when possible, steps W3 to W5 are delegated to the streaming
+machinery, removing the need to keep the entire blob in memory.
+
+If the worker fails to read the blob or to write it to the working tree,
+it removes the created file to avoid leaving empty files behind. This is
+the *only* time a worker is allowed to remove a file.
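+
+A sketch of steps W1-W2 and the failure cleanup (the helper names and
+the `PC_ITEM_FAILED` status are hypothetical; `PC_ITEM_COLLIDED` is the
+flag described in this document):
+
+----------------------------------------------
+/*
+ * O_CREAT | O_EXCL makes a pre-existing file at `path` fail with
+ * EEXIST, and a non-directory in the leading path fail with ENOTDIR;
+ * both indicate a path collision.
+ */
+int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, mode);
+if (fd < 0) {
+	if (errno == EEXIST || errno == ENOTDIR)
+		item->status = PC_ITEM_COLLIDED; /* retried sequentially later */
+	else
+		item->status = PC_ITEM_FAILED; /* hypothetical error status */
+	return;
+}
+if (write_blob_to_fd(fd, item) < 0) { /* hypothetical: steps W3-W5 */
+	close(fd);
+	unlink(path); /* never leave an empty file behind */
+}
+----------------------------------------------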
+
+As mentioned earlier, it is the responsibility of the main process to
+remove any file that blocks the checkout operation (or abort if the
+removal(s) would cause data loss and the user didn't ask to `--force`).
+This is crucial to avoid race conditions and also to properly detect
+path collisions at Step W1.
+
+After the workers finish writing the items and sending back the required
+information, the main process handles the results in two steps:
+
+- First, it updates the in-memory index with the `lstat()` information
+ sent by the workers. (This must be done first as this information
+  might be required in the following step.)
+
+- Then it writes the items which collided on disk (i.e. items marked
+ with `PC_ITEM_COLLIDED`). More on this below.
+
+Path Collisions
+---------------
+
+Path collisions happen when two different paths correspond to the same
+entry in the file system. E.g. the paths 'a' and 'A' would collide in a
+case-insensitive file system.
+
+The sequential checkout deals with collisions in the same way that it
+deals with files that were already present in the working tree before
+checkout. Basically, it checks if the path that it wants to write
+already exists on disk, makes sure the existing file doesn't have
+unsaved data, and then overwrites it. (To be more pedantic: it deletes
+the existing file and creates the new one.) So, if there are multiple
+colliding files to be checked out, the sequential code will write each
+one of them but only the last will actually survive on disk.
+
+Parallel checkout aims to reproduce the same behavior. However, we
+cannot let the workers racily write to the same file on disk. Instead,
+the workers detect when the entry that they want to check out would
+collide with an existing file, and mark it with `PC_ITEM_COLLIDED`.
+Later, the main process can sequentially feed these entries back to
+`checkout_entry()` without the risk of race conditions. On clone, this
+also has the effect of marking the colliding entries to later emit a
+warning for the user, like the classic sequential checkout does.
+
+The workers are able to detect both collisions among the entries being
+concurrently written and collisions between a parallel-eligible entry
+and an ineligible entry. The general idea for collision detection is
+quite straightforward: for each parallel-eligible entry, the main
+process must remove all files that prevent this entry from being written
+(before enqueueing it). This includes any non-directory file in the
+leading path of the entry. Later, when a worker gets assigned the entry,
+it looks again for non-directory files and for an already existing
+file at the entry's path. If any of these checks finds something, the
+worker knows that there was a path collision.
+
+Because parallel checkout can distinguish path collisions from the case
+where the file was already present in the working tree before checkout,
+we could alternatively choose to skip the checkout of colliding entries.
+However, each entry that doesn't get written would have null `lstat()`
+fields in the index. This could cause performance penalties for
+subsequent commands that need to refresh the index, as they would have
+to go to the file system to see if the entry is dirty. Thus, if we have
+N entries in a colliding group and we decide to write and `lstat()` only
+one of them, every subsequent `git-status` will have to read, convert,
+and hash the written file N - 1 times. By checking out all colliding
+entries (like the sequential code does), we only pay the overhead once,
+during checkout.
+
+Eligible Entries for Parallel Checkout
+--------------------------------------
+
+As previously mentioned, not all entries passed to `checkout_entry()`
+will be considered eligible for parallel checkout. More specifically, we
+exclude:
+
+- Symbolic links; to avoid race conditions that, in combination with
+ path collisions, could cause workers to write files at the wrong
+ place. For example, if we were to concurrently check out a symlink
+ 'a' -> 'b' and a regular file 'A/f' in a case-insensitive file system,
+ we could potentially end up writing the file 'A/f' at 'a/f', due to a
+ race condition.
+
+- Regular files that require external filters (either "one shot" filters
+ or long-running process filters). These filters are black-boxes to Git
+ and may have their own internal locking or non-concurrent assumptions.
+ So it might not be safe to run multiple instances in parallel.
++
+Besides, long-running filters may use the delayed checkout feature to
+postpone the return of some filtered blobs. The delayed checkout queue
+and the parallel checkout queue are not compatible and should remain
+separate.
++
+Note: regular files that only require internal filters, like end-of-line
+conversion and re-encoding, are eligible for parallel checkout.
+
+Ineligible entries are checked out by the classic sequential codepath
+*before* spawning workers.
+
+Note: submodules' files are also eligible for parallel checkout (as
+long as they don't fall into any of the excluding categories mentioned
+above). But since each submodule is checked out in its own child
+process, we don't mix the superproject's and the submodules' files in
+the same parallel checkout process or queue.
+
+The API
+-------
+
+The parallel checkout API was designed with the goal of minimizing
+changes to the current users of the checkout machinery. This means that
+they don't have to call a different function for sequential or parallel
+checkout. As already mentioned, `checkout_entry()` will automatically
+insert the given entry in the parallel checkout queue when this feature
+is enabled and the entry is eligible; otherwise, it will just write the
+entry right away, using the sequential code. In general, callers of the
+parallel checkout API should look similar to this:
+
+----------------------------------------------
+int pc_workers, pc_threshold, err = 0;
+struct checkout state;
+
+get_parallel_checkout_configs(&pc_workers, &pc_threshold);
+
+/*
+ * This check is not strictly required, but it
+ * should save some time in sequential mode.
+ */
+if (pc_workers > 1)
+ init_parallel_checkout();
+
+for (each cache_entry ce to-be-updated)
+ err |= checkout_entry(ce, &state, NULL, NULL);
+
+err |= run_parallel_checkout(&state, pc_workers, pc_threshold, NULL, NULL);
+----------------------------------------------
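+
+The number of workers and the minimum number of enqueued entries come
+from the `checkout.workers` and `checkout.thresholdForParallelism`
+configuration variables, which is what `get_parallel_checkout_configs()`
+above is expected to read. For example:
+
+----------------------------------------------
+[checkout]
+	workers = 8
+	thresholdForParallelism = 100
+----------------------------------------------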
diff --git a/Documentation/technical/partial-clone.txt b/Documentation/technical/partial-clone.txt
index 896c7b3878..a0dd7c66f2 100644
--- a/Documentation/technical/partial-clone.txt
+++ b/Documentation/technical/partial-clone.txt
@@ -30,12 +30,20 @@ advance* during clone and fetch operations and thereby reduce download
times and disk usage. Missing objects can later be "demand fetched"
if/when needed.
+A remote that can later provide the missing objects is called a
+promisor remote, as it promises to send the objects when
+requested. Initially Git supported only one promisor remote: the origin
+remote from which the user cloned, configured in the
+"extensions.partialClone" config option. Support for more than one
+promisor remote was implemented later.
+
Use of partial clone requires that the user be online and the origin
-remote be available for on-demand fetching of missing objects. This may
-or may not be problematic for the user. For example, if the user can
-stay within the pre-selected subset of the source tree, they may not
-encounter any missing objects. Alternatively, the user could try to
-pre-fetch various objects if they know that they are going offline.
+remote or other promisor remotes be available for on-demand fetching
+of missing objects. This may or may not be problematic for the user.
+For example, if the user can stay within the pre-selected subset of
+the source tree, they may not encounter any missing objects.
+Alternatively, the user could try to pre-fetch various objects if they
+know that they are going offline.
Non-Goals
@@ -100,18 +108,18 @@ or commits that reference missing trees.
Handling Missing Objects
------------------------
-- An object may be missing due to a partial clone or fetch, or missing due
- to repository corruption. To differentiate these cases, the local
- repository specially indicates such filtered packfiles obtained from the
- promisor remote as "promisor packfiles".
+- An object may be missing due to a partial clone or fetch, or missing
+ due to repository corruption. To differentiate these cases, the
+ local repository specially indicates such filtered packfiles
+ obtained from promisor remotes as "promisor packfiles".
+
These promisor packfiles consist of a "<name>.promisor" file with
arbitrary contents (like the "<name>.keep" files), in addition to
their "<name>.pack" and "<name>.idx" files.
- The local repository considers a "promisor object" to be an object that
- it knows (to the best of its ability) that the promisor remote has promised
- that it has, either because the local repository has that object in one of
+ it knows (to the best of its ability) that promisor remotes have promised
+ that they have, either because the local repository has that object in one of
its promisor packfiles, or because another promisor object refers to it.
+
When Git encounters a missing object, Git can see if it is a promisor object
@@ -123,12 +131,12 @@ expensive-to-modify list of missing objects.[a]
- Since almost all Git code currently expects any referenced object to be
present locally and because we do not want to force every command to do
a dry-run first, a fallback mechanism is added to allow Git to attempt
- to dynamically fetch missing objects from the promisor remote.
+ to dynamically fetch missing objects from promisor remotes.
+
When the normal object lookup fails to find an object, Git invokes
-fetch-object to try to get the object from the server and then retry
-the object lookup. This allows objects to be "faulted in" without
-complicated prediction algorithms.
+promisor_remote_get_direct() to try to get the object from a promisor
+remote and then retry the object lookup. This allows objects to be
+"faulted in" without complicated prediction algorithms.
+
For efficiency reasons, no check as to whether the missing object is
actually a promisor object is performed.
@@ -157,51 +165,84 @@ and prefetch those objects in bulk.
+
We are not happy with this global variable and would like to remove it,
but that requires significant refactoring of the object code to pass an
-additional flag. We hope that concurrent efforts to add an ODB API can
-encompass this.
+additional flag.
Fetching Missing Objects
------------------------
-- Fetching of objects is done using the existing transport mechanism using
- transport_fetch_refs(), setting a new transport option
- TRANS_OPT_NO_DEPENDENTS to indicate that only the objects themselves are
- desired, not any object that they refer to.
-+
-Because some transports invoke fetch_pack() in the same process, fetch_pack()
-has been updated to not use any object flags when the corresponding argument
-(no_dependents) is set.
+- Fetching of objects is done by invoking a "git fetch" subprocess.
- The local repository sends a request with the hashes of all requested
- objects as "want" lines, and does not perform any packfile negotiation.
+ objects, and does not perform any packfile negotiation.
It then receives a packfile.
-- Because we are reusing the existing fetch-pack mechanism, fetching
+- Because we are reusing the existing fetch mechanism, fetching
currently fetches all objects referred to by the requested objects, even
though they are not necessary.
+Using many promisor remotes
+---------------------------
+
+Many promisor remotes can be configured and used.
+
+This allows, for example, a user to have multiple geographically-close
+cache servers for fetching missing blobs, while continuing to do
+filtered `git-fetch` commands from the central server.
+
+When fetching objects, promisor remotes are tried one after the other
+until all the objects have been fetched.
+
+Remotes that are considered "promisor" remotes are those specified by
+the following configuration variables:
+
+- `extensions.partialClone = <name>`
+
+- `remote.<name>.promisor = true`
+
+- `remote.<name>.partialCloneFilter = ...`
+
+Only one promisor remote can be configured using the
+`extensions.partialClone` config variable. This promisor remote will
+be the last one tried when fetching objects.
+
+We decided to make it the last one tried because someone using many
+promisor remotes is likely doing so because the other promisor remotes
+are better for some reason (maybe they are closer or faster for some
+kinds of objects) than the origin, and the origin is likely to be the
+remote specified by `extensions.partialClone`.
+
+This justification is not very strong, but one choice had to be made;
+in the long term, the order should become fully configurable.
+
+For now, though, the other promisor remotes are tried in the order
+they appear in the config file.
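+
+For illustration (the remote names here are invented), a configuration
+like the following would make object fetches try `eu-cache` first, then
+`us-cache`, and finally `origin`:
+
+----
+[remote "eu-cache"]
+	url = https://eu.example.com/repo.git
+	promisor = true
+[remote "us-cache"]
+	url = https://us.example.com/repo.git
+	promisor = true
+[extensions]
+	partialClone = origin
+----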
+
Current Limitations
-------------------
-- The remote used for a partial clone (or the first partial fetch
- following a regular clone) is marked as the "promisor remote".
+- It is not possible to specify the order in which the promisor
+  remotes are tried in any way other than the order in which they
+  appear in the config file.
+
-We are currently limited to a single promisor remote and only that
-remote may be used for subsequent partial fetches.
+It is also not possible to specify an order to be used when fetching
+from one remote and a different order when fetching from another
+remote.
+
+- It is not possible to push only specific objects to a promisor
+ remote.
+
-We accept this limitation because we believe initial users of this
-feature will be using it on repositories with a strong single central
-server.
+It is also not possible to push to multiple promisor remotes at the
+same time in a specific order.
-- Dynamic object fetching will only ask the promisor remote for missing
- objects. We assume that the promisor remote has a complete view of the
+- Dynamic object fetching will only ask promisor remotes for missing
+ objects. We assume that promisor remotes have a complete view of the
repository and can satisfy all such requests.
- Repack essentially treats promisor and non-promisor packfiles as 2
- distinct partitions and does not mix them. Repack currently only works
- on non-promisor packfiles and loose objects.
+ distinct partitions and does not mix them.
- Dynamic object fetching invokes fetch-pack once *for each item*
because most algorithms stumble upon a missing object and need to have
@@ -218,20 +259,19 @@ server.
Future Work
-----------
-- Allow more than one promisor remote and define a strategy for fetching
- missing objects from specific promisor remotes or of iterating over the
- set of promisor remotes until a missing object is found.
+- Improve the way to specify the order in which promisor remotes are
+ tried.
+
-A user might want to have multiple geographically-close cache servers
-for fetching missing blobs while continuing to do filtered `git-fetch`
-commands from the central server, for example.
+For example, this could allow the user to specify explicitly something
+like: "When fetching from this remote, I want to use these promisor
+remotes in this order, though, when pushing or fetching to that remote,
+I want to use those promisor remotes in that order."
+
+- Allow pushing to promisor remotes.
+
-Or the user might want to work in a triangular work flow with multiple
+The user might want to work in a triangular work flow with multiple
promisor remotes that each have an incomplete view of the repository.
-- Allow repack to work on promisor packfiles (while keeping them distinct
- from non-promisor packfiles).
-
- Allow non-pathname-based filters to make use of packfile bitmaps (when
present). This was just an omission during the initial implementation.
@@ -299,26 +339,26 @@ Related Links
[0] https://crbug.com/git/2
Bug#2: Partial Clone
-[1] https://public-inbox.org/git/20170113155253.1644-1-benpeart@microsoft.com/ +
+[1] https://lore.kernel.org/git/20170113155253.1644-1-benpeart@microsoft.com/ +
Subject: [RFC] Add support for downloading blobs on demand +
Date: Fri, 13 Jan 2017 10:52:53 -0500
-[2] https://public-inbox.org/git/cover.1506714999.git.jonathantanmy@google.com/ +
+[2] https://lore.kernel.org/git/cover.1506714999.git.jonathantanmy@google.com/ +
Subject: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches) +
Date: Fri, 29 Sep 2017 13:11:36 -0700
-[3] https://public-inbox.org/git/20170426221346.25337-1-jonathantanmy@google.com/ +
+[3] https://lore.kernel.org/git/20170426221346.25337-1-jonathantanmy@google.com/ +
Subject: Proposal for missing blob support in Git repos +
Date: Wed, 26 Apr 2017 15:13:46 -0700
-[4] https://public-inbox.org/git/1488999039-37631-1-git-send-email-git@jeffhostetler.com/ +
+[4] https://lore.kernel.org/git/1488999039-37631-1-git-send-email-git@jeffhostetler.com/ +
Subject: [PATCH 00/10] RFC Partial Clone and Fetch +
Date: Wed, 8 Mar 2017 18:50:29 +0000
-[5] https://public-inbox.org/git/20170505152802.6724-1-benpeart@microsoft.com/ +
+[5] https://lore.kernel.org/git/20170505152802.6724-1-benpeart@microsoft.com/ +
Subject: [PATCH v7 00/10] refactor the filter process code into a reusable module +
Date: Fri, 5 May 2017 11:27:52 -0400
-[6] https://public-inbox.org/git/20170714132651.170708-1-benpeart@microsoft.com/ +
+[6] https://lore.kernel.org/git/20170714132651.170708-1-benpeart@microsoft.com/ +
Subject: [RFC/PATCH v2 0/1] Add support for downloading blobs on demand +
Date: Fri, 14 Jul 2017 09:26:50 -0400
diff --git a/Documentation/technical/protocol-capabilities.txt b/Documentation/technical/protocol-capabilities.txt
index 2b267c0da6..9dfade930d 100644
--- a/Documentation/technical/protocol-capabilities.txt
+++ b/Documentation/technical/protocol-capabilities.txt
@@ -22,13 +22,13 @@ was sent. Server MUST NOT ignore capabilities that client requested
and server advertised. As a consequence of these rules, server MUST
NOT advertise capabilities it does not understand.
-The 'atomic', 'report-status', 'delete-refs', 'quiet', and 'push-cert'
-capabilities are sent and recognized by the receive-pack (push to server)
-process.
+The 'atomic', 'report-status', 'report-status-v2', 'delete-refs', 'quiet',
+and 'push-cert' capabilities are sent and recognized by the receive-pack
+(push to server) process.
The 'ofs-delta' and 'side-band-64k' capabilities are sent and recognized
-by both upload-pack and receive-pack protocols. The 'agent' capability
-may optionally be sent in both protocols.
+by both upload-pack and receive-pack protocols. The 'agent' and 'session-id'
+capabilities may optionally be sent in both protocols.
All other capabilities are only recognized by the upload-pack (fetch
from server) process.
@@ -176,6 +176,21 @@ agent strings are purely informative for statistics and debugging
purposes, and MUST NOT be used to programmatically assume the presence
or absence of particular features.
+object-format
+-------------
+
+This capability, which takes a hash algorithm as an argument, indicates
+that the server supports the given hash algorithms. It may be sent
+multiple times; if so, the first one given is the one used in the ref
+advertisement.
+
+When provided by the client, this indicates that it intends to use the
+given hash algorithm to communicate. The algorithm provided must be one
+that the server supports.
+
+If this capability is not provided, it is assumed that the only
+supported algorithm is SHA-1.
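+
+For example, a server supporting both algorithms might advertise
+'object-format=sha1' and 'object-format=sha256' (an illustrative pair),
+using SHA-1 (the first one given) in its ref advertisement; a client
+wishing to communicate in SHA-256 would then send back
+'object-format=sha256'.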
+
symref
------
@@ -269,6 +284,17 @@ each reference was updated successfully. If any of those were not
successful, it will send back an error message. See pack-protocol.txt
for example messages.
+report-status-v2
+----------------
+
+Capability 'report-status-v2' extends capability 'report-status' by
+adding new "option" directives in order to support references rewritten
+by the "proc-receive" hook. The "proc-receive" hook may handle a command
+for a pseudo-reference which may create or update a reference with a
+different name, new-oid, and old-oid, a case that the capability
+'report-status' cannot report. See pack-protocol.txt for details.
+
delete-refs
-----------
@@ -309,15 +335,19 @@ allow-tip-sha1-in-want
----------------------
If the upload-pack server advertises this capability, fetch-pack may
-send "want" lines with SHA-1s that exist at the server but are not
-advertised by upload-pack.
+send "want" lines with object names that exist at the server but are not
+advertised by upload-pack. For historical reasons, the name of this
+capability contains "sha1". Object names are always given using the
+object format negotiated through the 'object-format' capability.
allow-reachable-sha1-in-want
----------------------------
If the upload-pack server advertises this capability, fetch-pack may
-send "want" lines with SHA-1s that exist at the server but are not
-advertised by upload-pack.
+send "want" lines with object names that exist at the server but are not
+advertised by upload-pack. For historical reasons, the name of this
+capability contains "sha1". Object names are always given using the
+object format negotiated through the 'object-format' capability.
push-cert=<nonce>
-----------------
@@ -335,3 +365,16 @@ If the upload-pack server advertises the 'filter' capability,
fetch-pack may send "filter" commands to request a partial clone
or partial fetch and request that the server omit various objects
from the packfile.
+
+session-id=<session id>
+-----------------------
+
+The server may advertise a session ID that can be used to identify this process
+across multiple requests. The client may advertise its own session ID back to
+the server as well.
+
+Session IDs should be unique to a given process. They must fit within a
+packet-line, and must not contain non-printable or whitespace characters. The
+current implementation uses trace2 session IDs (see
+link:api-trace2.html[api-trace2] for details), but this may change and users of
+the session ID should not rely on this fact.
diff --git a/Documentation/technical/protocol-v2.txt b/Documentation/technical/protocol-v2.txt
index ead85ce35c..1040d85319 100644
--- a/Documentation/technical/protocol-v2.txt
+++ b/Documentation/technical/protocol-v2.txt
@@ -1,5 +1,5 @@
- Git Wire Protocol, Version 2
-==============================
+Git Wire Protocol, Version 2
+============================
This document presents a specification for a version 2 of Git's wire
protocol. Protocol v2 will improve upon v1 in the following ways:
@@ -22,8 +22,8 @@ will be commands which a client can request be executed. Once a command
has completed, a client can reuse the connection and request that other
commands be executed.
- Packet-Line Framing
----------------------
+Packet-Line Framing
+-------------------
All communication is done using packet-line framing, just as in v1. See
`Documentation/technical/pack-protocol.txt` and
@@ -33,9 +33,11 @@ In protocol v2 these special packets will have the following semantics:
* '0000' Flush Packet (flush-pkt) - indicates the end of a message
* '0001' Delimiter Packet (delim-pkt) - separates sections of a message
+ * '0002' Response End Packet (response-end-pkt) - indicates the end of a
+ response for stateless connections
- Initial Client Request
-------------------------
+Initial Client Request
+----------------------
In general a client can request to speak protocol v2 by sending
`version=2` through the respective side-channel for the transport being
@@ -43,22 +45,22 @@ used which inevitably sets `GIT_PROTOCOL`. More information can be
found in `pack-protocol.txt` and `http-protocol.txt`. In all cases the
response from the server is the capability advertisement.
- Git Transport
-~~~~~~~~~~~~~~~
+Git Transport
+~~~~~~~~~~~~~
When using the git:// transport, you can request to use protocol v2 by
sending "version=2" as an extra parameter:
003egit-upload-pack /project.git\0host=myserver.com\0\0version=2\0
- SSH and File Transport
-~~~~~~~~~~~~~~~~~~~~~~~~
+SSH and File Transport
+~~~~~~~~~~~~~~~~~~~~~~
When using either the ssh:// or file:// transport, the GIT_PROTOCOL
environment variable must be set explicitly to include "version=2".
- HTTP Transport
-~~~~~~~~~~~~~~~~
+HTTP Transport
+~~~~~~~~~~~~~~
When using the http:// or https:// transport a client makes a "smart"
info/refs request as described in `http-protocol.txt` and requests that
@@ -79,8 +81,8 @@ A v2 server would reply:
Subsequent requests are then made directly to the service
`$GIT_URL/git-upload-pack`. (This works the same for git-receive-pack).
- Capability Advertisement
---------------------------
+Capability Advertisement
+------------------------
A server which decides to communicate (based on a request from a client)
using protocol version 2, notifies the client by sending a version string
@@ -101,8 +103,8 @@ to be executed by the client.
key = 1*(ALPHA | DIGIT | "-_")
value = 1*(ALPHA | DIGIT | " -_.,?\/{}[]()<>!@#$%^&*+=:;")
- Command Request
------------------
+Command Request
+---------------
After receiving the capability advertisement, a client can then issue a
request to select the command it wants with any particular capabilities
@@ -137,11 +139,11 @@ command be executed or can terminate the connection. A client may
optionally send an empty request consisting of just a flush-pkt to
indicate that no more requests will be made.
- Capabilities
---------------
+Capabilities
+------------
There are two different types of capabilities: normal capabilities,
-which can be used to to convey information or alter the behavior of a
+which can be used to convey information or alter the behavior of a
request, and commands, which are the core actions that a client wants to
perform (fetch, push, etc).
@@ -153,8 +155,8 @@ management on the server side in order to function correctly. This
permits simple round-robin load-balancing on the server side, without
needing to worry about state management.
- agent
-~~~~~~~
+agent
+~~~~~
The server can advertise the `agent` capability with a value `X` (in the
form `agent=X`) to notify the client that the server is running version
@@ -168,8 +170,8 @@ printable ASCII characters except space (i.e., the byte range 32 < x <
and debugging purposes, and MUST NOT be used to programmatically assume
the presence or absence of particular features.
- ls-refs
-~~~~~~~~~
+ls-refs
+~~~~~~~
`ls-refs` is the command used to request a reference advertisement in v2.
Unlike the current reference advertisement, ls-refs takes in arguments
@@ -190,17 +192,26 @@ ls-refs takes in the following arguments:
When specified, only references having a prefix matching one of
the provided prefixes are displayed.
+If the 'unborn' feature is advertised, the following argument can be
+included in the client's request.
+
+ unborn
+ The server will send information about HEAD even if it is a symref
+ pointing to an unborn branch in the form "unborn HEAD
+ symref-target:<target>".
+
The output of ls-refs is as follows:
output = *ref
flush-pkt
- ref = PKT-LINE(obj-id SP refname *(SP ref-attribute) LF)
+ obj-id-or-unborn = (obj-id | "unborn")
+ ref = PKT-LINE(obj-id-or-unborn SP refname *(SP ref-attribute) LF)
ref-attribute = (symref | peeled)
symref = "symref-target:" symref-target
peeled = "peeled:" obj-id
- fetch
-~~~~~~~
+fetch
+~~~~~
`fetch` is the command used to fetch a packfile in v2. It can be looked
at as a modified version of the v1 fetch where the ref-advertisement is
@@ -252,7 +263,7 @@ A `fetch` request can take the following arguments:
ofs-delta
Indicate that the client understands PACKv2 with delta referring
to its base by position in pack rather than by an oid. That is,
- they can read OBJ_OFS_DELTA (ake type 6) in a packfile.
+ they can read OBJ_OFS_DELTA (aka type 6) in a packfile.
If the 'shallow' feature is advertised the following arguments can be
 included in the client's request as well as the potential addition of the
@@ -323,13 +334,34 @@ included in the client's request:
indicating its sideband (1, 2, or 3), and the server may send "0005\2"
(a PKT-LINE of sideband 2 with no payload) as a keepalive packet.
+If the 'packfile-uris' feature is advertised, the following argument
+can be included in the client's request as well as the potential
+addition of the 'packfile-uris' section in the server's response as
+explained below.
+
+ packfile-uris <comma-separated list of protocols>
+ Indicates to the server that the client is willing to receive
+ URIs of any of the given protocols in place of objects in the
+ sent packfile. Before performing the connectivity check, the
+ client should download from all given URIs. Currently, the
+ protocols supported are "http" and "https".
+
+If the 'wait-for-done' feature is advertised, the following argument
+can be included in the client's request.
+
+ wait-for-done
+ Indicates to the server that it should never send "ready", but
+ should wait for the client to say "done" before sending the
+ packfile.
+
The response of `fetch` is broken into a number of sections separated by
delimiter packets (0001), with each section beginning with its section
-header.
+header. Most sections are sent only when the packfile is sent.
- output = *section
- section = (acknowledgments | shallow-info | wanted-refs | packfile)
- (flush-pkt | delim-pkt)
+    output = acknowledgments flush-pkt |
+ [acknowledgments delim-pkt] [shallow-info delim-pkt]
+ [wanted-refs delim-pkt] [packfile-uris delim-pkt]
+ packfile flush-pkt
acknowledgments = PKT-LINE("acknowledgments" LF)
(nak | *ack)
@@ -347,13 +379,17 @@ header.
*PKT-LINE(wanted-ref LF)
wanted-ref = obj-id SP refname
+ packfile-uris = PKT-LINE("packfile-uris" LF) *packfile-uri
+ packfile-uri = PKT-LINE(40*(HEXDIGIT) SP *%x20-ff LF)
+
packfile = PKT-LINE("packfile" LF)
*PKT-LINE(%x01-03 *%x00-ff)
acknowledgments section
- * If the client determines that it is finished with negotiations
- by sending a "done" line, the acknowledgments sections MUST be
- omitted from the server's response.
+ * If the client determines that it is finished with negotiations by
+ sending a "done" line (thus requiring the server to send a packfile),
+ the acknowledgments sections MUST be omitted from the server's
+ response.
* Always begins with the section header "acknowledgments"
@@ -404,9 +440,6 @@ header.
which the client has not indicated was shallow as a part of
its request.
- * This section is only included if a packfile section is also
- included in the response.
-
wanted-refs section
* This section is only included if the client has requested a
ref using a 'want-ref' line and if a packfile section is also
@@ -420,6 +453,20 @@ header.
* The server MUST NOT send any refs which were not requested
using 'want-ref' lines.
+ packfile-uris section
+ * This section is only included if the client sent
+ 'packfile-uris' and the server has at least one such URI to
+ send.
+
+ * Always begins with the section header "packfile-uris".
+
+ * For each URI the server sends, it sends a hash of the pack's
+ contents (as output by git index-pack) followed by the URI.
+
+ * The hashes are 40 hex characters long. When Git upgrades to a new
+ hash algorithm, this might need to be updated. (It should match
+	  whatever index-pack outputs after "pack\t" or "keep\t".)
+
packfile section
* This section is only included if the client has sent 'want'
lines in its request and either requested that no more
@@ -444,8 +491,8 @@ header.
2 - progress messages
3 - fatal error message just before stream aborts
- server-option
-~~~~~~~~~~~~~~~
+server-option
+~~~~~~~~~~~~~
If advertised, indicates that any number of server specific options can be
included in a request. This is done by sending each option as a
@@ -453,3 +500,56 @@ included in a request. This is done by sending each option as a
a request.
The provided options must not contain a NUL or LF character.
+
+object-format
+~~~~~~~~~~~~~
+
+The server can advertise the `object-format` capability with a value `X` (in the
+form `object-format=X`) to notify the client that the server is able to deal
+with objects using hash algorithm X. If not specified, the server is assumed to
+only handle SHA-1. If the client would like to use a hash algorithm other than
+SHA-1, it should specify its object-format string.
+
+session-id=<session id>
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The server may advertise a session ID that can be used to identify this process
+across multiple requests. The client may advertise its own session ID back to
+the server as well.
+
+Session IDs should be unique to a given process. They must fit within a
+packet-line, and must not contain non-printable or whitespace characters. The
+current implementation uses trace2 session IDs (see
+link:api-trace2.html[api-trace2] for details), but this may change and users of
+the session ID should not rely on this fact.
+
+object-info
+~~~~~~~~~~~
+
+`object-info` is the command to retrieve information about one or more objects.
+Its main purpose is to allow a client to make decisions based on this
+information without having to fully fetch objects. Object size is the only
+information that is currently supported.
+
+An `object-info` request takes the following arguments:
+
+ size
+ Requests size information to be returned for each listed object id.
+
+ oid <oid>
+ Indicates to the server an object which the client wants to obtain
+ information for.
+
+The response of `object-info` is a list of the requested object ids
+and associated requested information, each separated by a single space.
+
+ output = info flush-pkt
+
+    info = PKT-LINE(attrs LF)
+ *PKT-LINE(obj-info LF)
+
+    attrs = attr | attrs SP attr
+
+ attr = "size"
+
+ obj-info = obj-id SP obj-size
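+
+A schematic exchange, with `<oid1>` and `<oid2>` standing in for real
+object ids and the sizes invented for illustration:
+
+    (request)  command=object-info
+               size
+               oid <oid1>
+               oid <oid2>
+
+    (response) size
+               <oid1> 1024
+               <oid2> 2048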
diff --git a/Documentation/technical/racy-git.txt b/Documentation/technical/racy-git.txt
index 4a8be4d144..ceda4bbfda 100644
--- a/Documentation/technical/racy-git.txt
+++ b/Documentation/technical/racy-git.txt
@@ -51,7 +51,7 @@ of git://git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git
only fixes the issue for file systems with exactly 1 ns or 1 s
resolution. Other file systems are still broken in current Linux
kernels (e.g. CEPH, CIFS, NTFS, UDF), see
-https://lkml.org/lkml/2015/6/9/714
+https://lore.kernel.org/lkml/5577240D.7020309@gmail.com/
Racy Git
--------
diff --git a/Documentation/technical/reftable.txt b/Documentation/technical/reftable.txt
new file mode 100644
index 0000000000..d7c3b645cf
--- /dev/null
+++ b/Documentation/technical/reftable.txt
@@ -0,0 +1,1098 @@
+reftable
+--------
+
+Overview
+~~~~~~~~
+
+Problem statement
+^^^^^^^^^^^^^^^^^
+
+Some repositories contain a lot of references (e.g. android at 866k,
+rails at 31k). The existing packed-refs format takes up a lot of space
+(e.g. 62M), and does not scale with additional references. Lookup of a
+single reference requires linearly scanning the file.
+
+Atomic pushes modifying multiple references require copying the entire
+packed-refs file, which can be a considerable amount of data moved
+(e.g. 62M in, 62M out) for even small transactions (2 refs modified).
+
+Repositories with many loose references occupy a large number of disk
+blocks from the local file system, as each reference is its own file
+storing 41 bytes (and another file for the corresponding reflog). This
+negatively affects the number of inodes available when a large number of
+repositories are stored on the same filesystem. Readers can be penalized
+due to the larger number of syscalls required to traverse and read the
+`$GIT_DIR/refs` directory.
+
+
+Objectives
+^^^^^^^^^^
+
+* Near constant time lookup for any single reference, even when the
+repository is cold and not in process or kernel cache.
+* Near constant time verification of whether an object name is referred
+to by at least one reference (for allow-tip-sha1-in-want).
+* Efficient enumeration of an entire namespace, such as `refs/tags/`.
+* Support atomic push with `O(size_of_update)` operations.
+* Combine reflog storage with ref storage for small transactions.
+* Separate reflog storage for base refs and historical logs.
+
+Description
+^^^^^^^^^^^
+
+A reftable file is a portable binary file format customized for
+reference storage. References are sorted, enabling linear scans, binary
+search lookup, and range scans.
+
+Storage in the file is organized into variable sized blocks. Prefix
+compression is used within a single block to reduce disk space. Block
+size and alignment is tunable by the writer.
+
+Performance
+^^^^^^^^^^^
+
+Space used, packed-refs vs. reftable:
+
+[cols=",>,>,>,>,>",options="header",]
+|===============================================================
+|repository |packed-refs |reftable |% original |avg ref |avg obj
+|android |62.2 M |36.1 M |58.0% |33 bytes |5 bytes
+|rails |1.8 M |1.1 M |57.7% |29 bytes |4 bytes
+|git |78.7 K |48.1 K |61.0% |50 bytes |4 bytes
+|git (heads) |332 b |269 b |81.0% |33 bytes |0 bytes
+|===============================================================
+
+Scan (read 866k refs), by reference name lookup (single ref from 866k
+refs), and by SHA-1 lookup (refs with that SHA-1, from 866k refs):
+
+[cols=",>,>,>,>",options="header",]
+|=========================================================
+|format |cache |scan |by name |by SHA-1
+|packed-refs |cold |402 ms |409,660.1 usec |412,535.8 usec
+|packed-refs |hot | |6,844.6 usec |20,110.1 usec
+|reftable |cold |112 ms |33.9 usec |323.2 usec
+|reftable |hot | |20.2 usec |320.8 usec
+|=========================================================
+
+Space used for 149,932 log entries for 43,061 refs, reflog vs. reftable:
+
+[cols=",>,>",options="header",]
+|================================
+|format |size |avg entry
+|$GIT_DIR/logs |173 M |1209 bytes
+|reftable |5 M |37 bytes
+|================================
+
+Details
+~~~~~~~
+
+Peeling
+^^^^^^^
+
+References stored in a reftable are peeled: a record for an annotated
+(or signed) tag records both the tag object and the object it refers
+to. This is analogous to storage in the packed-refs format.
+
+Reference name encoding
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Reference names are an uninterpreted sequence of bytes that must pass
+linkgit:git-check-ref-format[1] as a valid reference name.
+
+Key unicity
+^^^^^^^^^^^
+
+Each entry must have a unique key; repeated keys are disallowed.
+
+Network byte order
+^^^^^^^^^^^^^^^^^^
+
+All multi-byte, fixed width fields are in network byte order.
+
+Varint encoding
+^^^^^^^^^^^^^^^
+
+Varint encoding is identical to the ofs-delta encoding method used
+within pack files.
+
+The decoder works as follows:
+
+....
+val = buf[ptr] & 0x7f
+while (buf[ptr] & 0x80) {
+ ptr++
+ val = ((val + 1) << 7) | (buf[ptr] & 0x7f)
+}
+....
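+
+The corresponding encoder (a sketch mirroring the decoder above) emits
+the most significant 7-bit groups first:
+
+....
+n = 0
+tmp[n] = val & 0x7f
+while ((val >>= 7) != 0)
+  tmp[++n] = 0x80 | (--val & 0x7f)
+while (n >= 0)
+  buf[ptr++] = tmp[n--]
+....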
+
+Ordering
+^^^^^^^^
+
+Blocks are lexicographically ordered by their first reference.
+
+Directory/file conflicts
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The reftable format accepts both `refs/heads/foo` and
+`refs/heads/foo/bar` as distinct references.
+
+This property is useful for retaining log records in reftable, but may
+confuse versions of Git using the `$GIT_DIR/refs` directory tree to
+maintain references. Users of reftable may choose to continue to reject
+`foo` and `foo/bar` type conflicts to prevent problems for peers.
+
+File format
+~~~~~~~~~~~
+
+Structure
+^^^^^^^^^
+
+A reftable file has the following high-level structure:
+
+....
+first_block {
+ header
+ first_ref_block
+}
+ref_block*
+ref_index*
+obj_block*
+obj_index*
+log_block*
+log_index*
+footer
+....
+
+A log-only file omits the `ref_block`, `ref_index`, `obj_block` and
+`obj_index` sections, containing only the file header and log block:
+
+....
+first_block {
+ header
+}
+log_block*
+log_index*
+footer
+....
+
+In a log-only file the first log block immediately follows the file
+header, without padding to block alignment.
+
+Block size
+^^^^^^^^^^
+
+The file's block size is arbitrarily determined by the writer, and does
+not have to be a power of 2. The block size must be larger than the
+longest reference name or log entry used in the repository, as
+references cannot span blocks.
+
+Powers of two that are friendly to the virtual memory system or
+filesystem (such as 4k or 8k) are recommended. Larger sizes (64k) can
+yield better compression, with a possible increased cost incurred by
+readers during access.
+
+The largest block size is `16777215` bytes (15.99 MiB).
+
+Block alignment
+^^^^^^^^^^^^^^^
+
+Writers may choose to align blocks at multiples of the block size by
+including `padding` filled with NUL bytes at the end of a block to round
+out to the chosen alignment. When alignment is used, writers must
+specify the alignment with the file header's `block_size` field.
+
+Block alignment is not required by the file format. Unaligned files must
+set `block_size = 0` in the file header, and omit `padding`. Unaligned
+files with more than one ref block must include the link:#Ref-index[ref
+index] to support fast lookup. Readers must be able to read both aligned
+and non-aligned files.
+
+Very small files (e.g. a single ref block) may omit `padding` and the ref
+index to reduce total file size.
+
+Header (version 1)
+^^^^^^^^^^^^^^^^^^
+
+A 24-byte header appears at the beginning of the file:
+
+....
+'REFT'
+uint8( version_number = 1 )
+uint24( block_size )
+uint64( min_update_index )
+uint64( max_update_index )
+....
+
+Aligned files must specify `block_size` to configure readers with the
+expected block alignment. Unaligned files must set `block_size = 0`.
+
+The `min_update_index` and `max_update_index` describe bounds for the
+`update_index` field of all log records in this file. When reftables are
+used in a stack for link:#Update-transactions[transactions], these
+fields can order the files such that the prior file's
+`max_update_index + 1` is the next file's `min_update_index`.
+
+Header (version 2)
+^^^^^^^^^^^^^^^^^^
+
+A 28-byte header appears at the beginning of the file:
+
+....
+'REFT'
+uint8( version_number = 2 )
+uint24( block_size )
+uint64( min_update_index )
+uint64( max_update_index )
+uint32( hash_id )
+....
+
+The header is identical to `version_number=1`, with the 4-byte hash ID
+("sha1" for SHA-1 and "s256" for SHA-256) appended to the header.
+
+For maximum backward compatibility, it is recommended to use version 1 when
+writing SHA-1 reftables.
+
+First ref block
+^^^^^^^^^^^^^^^
+
+The first ref block shares the same block as the file header, and is 24
+bytes smaller than all other blocks in the file. The first block
+immediately begins after the file header, at position 24.
+
+If the first block is a log block (a log-only file), its block header
+begins immediately at position 24.
+
+Ref block format
+^^^^^^^^^^^^^^^^
+
+A ref block is written as:
+
+....
+'r'
+uint24( block_len )
+ref_record+
+uint24( restart_offset )+
+uint16( restart_count )
+
+padding?
+....
+
+Blocks begin with `block_type = 'r'` and a 3-byte `block_len` which
+encodes the number of bytes in the block up to, but not including the
+optional `padding`. This is always less than or equal to the file's
+block size. In the first ref block, `block_len` includes 24 bytes for
+the file header.
+
+The 2-byte `restart_count` stores the number of entries in the
+`restart_offset` list, which must not be empty. Readers can use
+`restart_count` to binary search between restarts before starting a
+linear scan.
+
+Exactly `restart_count` 3-byte `restart_offset` values precede the
+`restart_count`. Offsets are relative to the start of the block and
+refer to the first byte of any `ref_record` whose name has not been
+prefix compressed. Entries in the `restart_offset` list must be sorted,
+ascending. Readers can start linear scans from any of these records.
+
+A variable number of `ref_record` fill the middle of the block,
+describing reference names and values. The format is described below.
+
+As the first ref block shares the first file block with the file header,
+all `restart_offset` in the first block are relative to the start of the
+file (position 0), and include the file header. This forces the first
+`restart_offset` to be `28`.
+
+ref record
+++++++++++
+
+A `ref_record` describes a single reference, storing both the name and
+its value(s). Records are formatted as:
+
+....
+varint( prefix_length )
+varint( (suffix_length << 3) | value_type )
+suffix
+varint( update_index_delta )
+value?
+....
+
+The `prefix_length` field specifies how many leading bytes of the prior
+reference record's name should be copied to obtain this reference's
+name. This must be 0 for the first reference in any block, and also must
+be 0 for any `ref_record` whose offset is listed in the `restart_offset`
+table at the end of the block.
+
+Recovering a reference name from any `ref_record` is a simple concat:
+
+....
+this_name = prior_name[0..prefix_length] + suffix
+....
+
+The `suffix_length` value provides the number of bytes available in
+`suffix` to copy from `suffix` to complete the reference name.
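+
+For example, if the prior record's name is `refs/heads/main`, a record
+with `prefix_length = 11` and `suffix = "next"` (`suffix_length = 4`)
+recovers the name `refs/heads/` + `next` = `refs/heads/next`.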
+
+The `update_index` that last modified the reference can be obtained by
+adding `update_index_delta` to the `min_update_index` from the file
+header: `min_update_index + update_index_delta`.
+
+The `value` follows. Its format is determined by `value_type`, one of
+the following:
+
+* `0x0`: deletion; no value data (see transactions, below)
+* `0x1`: one object name; value of the ref
+* `0x2`: two object names; value of the ref, peeled target
+* `0x3`: symbolic reference: `varint( target_len ) target`
+
+Symbolic references use `0x3`, followed by the complete name of the
+reference target. No compression is applied to the target name.
+
+Types `0x4..0x7` are reserved for future use.
+
+Ref index
+^^^^^^^^^
+
+The ref index stores the name of the last reference from every ref block
+in the file, enabling reduced disk seeks for lookups. Any reference can
+be found by searching the index, identifying the containing block, and
+searching within that block.
+
+The index may be organized into a multi-level index, where the 1st level
+index block points to additional ref index blocks (2nd level), which may
+in turn point to either additional index blocks (e.g. 3rd level) or ref
+blocks (leaf level). Disk reads required to access a ref go up with
+higher index levels. Multi-level indexes may be required to ensure no
+single index block exceeds the file format's max block size of
+`16777215` bytes (15.99 MiB). To achieve constant O(1) disk seeks for
+lookups the index must be a single level, which is permitted to exceed
+the file's configured block size, but not the format's max block size of
+15.99 MiB.
+
+If present, the ref index block(s) appears after the last ref block.
+
+If there are at least 4 ref blocks, a ref index block should be written
+to improve lookup times. Cold reads using the index require 2 disk reads
+(read index, read block), and binary searching < 4 blocks also requires
+<= 2 reads. Omitting the index block from smaller files saves space.
+
+If the file is unaligned and contains more than one ref block, the ref
+index must be written.
+
+Index block format:
+
+....
+'i'
+uint24( block_len )
+index_record+
+uint24( restart_offset )+
+uint16( restart_count )
+
+padding?
+....
+
+The index blocks begin with `block_type = 'i'` and a 3-byte `block_len`
+which encodes the number of bytes in the block, up to but not including
+the optional `padding`.
+
+The `restart_offset` and `restart_count` fields are identical in format,
+meaning and usage as in ref blocks.
+
+To reduce the number of reads required for random access in very large
+files the index block may be larger than other blocks. However, readers
+must hold the entire index in memory to benefit from this, so it's a
+time-space tradeoff in both file size and reader memory.
+
+Increasing the file's block size decreases the index size. Alternatively
+a multi-level index may be used, keeping index blocks within the file's
+block size, but increasing the number of blocks that need to be
+accessed.
+
+index record
+++++++++++++
+
+An index record describes the last entry in another block. Index records
+are written as:
+
+....
+varint( prefix_length )
+varint( (suffix_length << 3) | 0 )
+suffix
+varint( block_position )
+....
+
+Index records use prefix compression exactly like `ref_record`.
+
+Index records store `block_position` after the suffix, specifying the
+absolute position in bytes (from the start of the file) of the block
+that ends with this reference. Readers can seek to `block_position` to
+begin reading the block header.
+
+Readers must examine the block header at `block_position` to determine
+if the next block is another level index block, or the leaf-level ref
+block.
+
+Reading the index
++++++++++++++++++
+
+Readers loading the ref index must first read the footer (below) to
+obtain `ref_index_position`. If not present, the position will be 0. The
+`ref_index_position` is for the 1st level root of the ref index.
+
+Obj block format
+^^^^^^^^^^^^^^^^
+
+Object blocks are optional. Writers may choose to omit object blocks,
+especially if readers will not use the object name to ref mapping.
+
+Object blocks use unique, abbreviated 2-32 byte object name keys, mapping to
+ref blocks containing references pointing to that object directly, or as
+the peeled value of an annotated tag. Like ref blocks, object blocks use
+the file's standard block size. The abbreviation length is available in
+the footer as `obj_id_len`.
+
+To save space in small files, object blocks may be omitted if the ref
+index is not present, as brute force search will only need to read a few
+ref blocks. When missing, readers should brute force a linear search of
+all references to lookup by object name.
+
+An object block is written as:
+
+....
+'o'
+uint24( block_len )
+obj_record+
+uint24( restart_offset )+
+uint16( restart_count )
+
+padding?
+....
+
+Fields are identical to ref block. Binary search using the restart table
+works the same as in reference blocks.
+
+Because object names are abbreviated by writers to the shortest unique
+abbreviation within the reftable, obj keys have a variable length, which
+must be at least 2 bytes. Readers must compare only for a common prefix
+match within an obj block or obj index.
+
+obj record
+++++++++++
+
+An `obj_record` describes a single object abbreviation, and the blocks
+containing references using that unique abbreviation:
+
+....
+varint( prefix_length )
+varint( (suffix_length << 3) | cnt_3 )
+suffix
+varint( cnt_large )?
+varint( position_delta )*
+....
+
+Like in reference blocks, abbreviations are prefix compressed within an
+obj block. On large reftables with many unique objects, higher block
+sizes (64k), and higher restart interval (128), a `prefix_length` of 2
+or 3 and `suffix_length` of 3 may be common in obj records (unique
+abbreviation of 5-6 raw bytes, 10-12 hex digits).
+
+Each record contains `position_count` number of positions for matching
+ref blocks. For 1-7 positions the count is stored in `cnt_3`. When
+`cnt_3 = 0` the actual count follows in a varint, `cnt_large`.
+
+The use of `cnt_3` bets that most objects are pointed to by only a
+single reference, some may be pointed to by a couple of references, and
+very few (if any) are pointed to by more than 7 references.
+
+A special case exists when `cnt_3 = 0` and `cnt_large = 0`: there are no
+`position_delta`, but at least one reference starts with this
+abbreviation. A reader that needs exact reference names must scan all
+references to find which specific references have the desired object.
+Writers should use this format when the `position_delta` list would have
+overflowed the file's block size due to a high number of references
+pointing to the same object.
+
+The first `position_delta` is the position from the start of the file.
+Additional `position_delta` entries are sorted ascending and relative to
+the prior entry, e.g. a reader would perform:
+
+....
+pos = position_delta[0]
+prior = pos
+for (j = 1; j < position_count; j++) {
+ pos = prior + position_delta[j]
+ prior = pos
+}
+....
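+
+For example, ref blocks at file positions 4096, 8192 and 12288 would be
+stored as the deltas `4096, 4096, 4096`: the first value is absolute,
+and each subsequent value is added to the previously decoded position.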
+
+With a position in hand, a reader must linearly scan the ref block,
+starting from the first `ref_record`, testing each reference's object names
+(for `value_type = 0x1` or `0x2`) for full equality. Faster searching by
+object name within a single ref block is not supported by the reftable format.
+Smaller block sizes reduce the number of candidates this step must
+consider.
+
+Obj index
+^^^^^^^^^
+
+The obj index stores the abbreviation from the last entry for every obj
+block in the file, enabling reduced disk seeks for all lookups. It is
+formatted exactly the same as the ref index, but refers to obj blocks.
+
+The obj index should be present if obj blocks are present, as obj blocks
+should only be written in larger files.
+
+Readers loading the obj index must first read the footer (below) to
+obtain `obj_index_position`. If not present, the position will be 0.
+
+Log block format
+^^^^^^^^^^^^^^^^
+
+Unlike ref and obj blocks, log blocks are always unaligned.
+
+Log blocks are variable in size, and do not match the `block_size`
+specified in the file header or footer. Writers should choose an
+appropriate buffer size to prepare a log block for deflation, such as
+`2 * block_size`.
+
+A log block is written as:
+
+....
+'g'
+uint24( block_len )
+zlib_deflate {
+ log_record+
+ uint24( restart_offset )+
+ uint16( restart_count )
+}
+....
+
+Log blocks look similar to ref blocks, except `block_type = 'g'`.
+
+The 4-byte block header is followed by the deflated block contents using
+zlib deflate. The `block_len` in the header is the inflated size
+(including 4-byte block header), and should be used by readers to
+preallocate the inflation output buffer. A log block's `block_len` may
+exceed the file's block size.
+
+Offsets within the log block (e.g. `restart_offset`) still include the
+4-byte header. Readers may prefer prefixing the inflation output buffer
+with the 4-byte header.
+
+Within the deflate container, a variable number of `log_record` describe
+reference changes. The log record format is described below. See ref
+block format (above) for a description of `restart_offset` and
+`restart_count`.
+
+Because log blocks have no alignment or padding between blocks, readers
+must keep track of the bytes consumed by the inflater to know where the
+next log block begins.
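+
+For example, a reader inflating with zlib's streaming interface can
+derive the next block position from the stream's consumed-byte counter
+(a sketch; `total_in` is zlib's count of compressed input bytes):
+
+....
+/* block_start points at the 'g' byte of the current log block */
+inflate until Z_STREAM_END
+next_block_start = block_start + 4 + strm.total_in
+....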
+
+log record
+++++++++++
+
+Log record keys are structured as:
+
+....
+ref_name '\0' reverse_int64( update_index )
+....
+
+where `update_index` is the unique transaction identifier. The
+`update_index` field must be unique within the scope of a `ref_name`.
+See the update transactions section below for further details.
+
+The `reverse_int64` function inverts the value so that, in the network
+byte order encoding, lexicographic ordering sorts the more recent
+records (those with higher `update_index` values) first:
+
+....
+reverse_int64(int64 t) {
+ return 0xffffffffffffffff - t;
+}
+....
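+
+As an illustration (with hypothetical values), the key for a log record
+of `refs/heads/master` at `update_index = 4` is:
+
+....
+"refs/heads/master" '\0' 0xfffffffffffffffb
+....
+
+which sorts before the same reference's key at `update_index = 3`
+(`0xfffffffffffffffc`), so more recent records are encountered first.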
+
+Log records have a similar starting structure to ref and index records,
+utilizing the same prefix compression scheme applied to the log record
+key described above.
+
+....
+ varint( prefix_length )
+ varint( (suffix_length << 3) | log_type )
+ suffix
+ log_data {
+ old_id
+ new_id
+ varint( name_length ) name
+ varint( email_length ) email
+ varint( time_seconds )
+ sint16( tz_offset )
+ varint( message_length ) message
+ }?
+....
+
+Log record entries use `log_type` to indicate what follows:
+
+* `0x0`: deletion; no log data.
+* `0x1`: standard git reflog data using `log_data` above.
+
+A `log_type` of `0x0` is mostly useful for `git stash drop`, removing an
+entry from the reflog of `refs/stash` in a transaction file (below),
+without needing to rewrite larger files. Readers reading a stack of
+reflogs must treat this as a deletion.
+
+For `log_type = 0x1`, the `log_data` section follows
+linkgit:git-update-ref[1] logging and includes:
+
+* two object names (old id, new id)
+* varint string of committer's name
+* varint string of committer's email
+* varint time in seconds since epoch (Jan 1, 1970)
+* 2-byte timezone offset in minutes (signed)
+* varint string of message
+
+`tz_offset` is the signed number of minutes from GMT the committer was
+at the time of the update. For example `GMT-0800` is encoded in reftable
+as `sint16(-480)` and `GMT+0230` is `sint16(150)`.
+
+The committer email does not contain `<` or `>`; it is the value
+normally found between the `<>` in a git commit object header.
+
+The `message_length` may be 0, in which case there was no message
+supplied for the update.
+
+Contrary to traditional reflog (which is a file), renames are encoded as
+a combination of ref deletion and ref creation. A deletion is a log
+record with a zero new_id, and a creation is a log record with a zero old_id.
+
+Reading the log
++++++++++++++++
+
+Readers accessing the log must first read the footer (below) to
+determine the `log_position`. The first block of the log begins at
+`log_position` bytes since the start of the file. The `log_position` is
+not block aligned.
+
+Importing logs
+++++++++++++++
+
+When importing from `$GIT_DIR/logs` writers should globally order all
+log records roughly by timestamp while preserving file order, and assign
+unique, increasing `update_index` values for each log line. Newer log
+records get higher `update_index` values.
+
+Although an import may write only a single reftable file, the reftable
+file must span many unique `update_index` values, as each log line
+requires its own `update_index` to preserve semantics.
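+
+A minimal import loop might look like this (writer-side pseudocode; the
+exact global ordering is left to the implementation):
+
+....
+update_index = 1
+for each log line, oldest to newest:
+    write_log_record(line, update_index)
+    update_index = update_index + 1
+/* min_update_index = 1, max_update_index = update_index - 1 */
+....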
+
+Log index
+^^^^^^^^^
+
+The log index stores the log key
+(`refname \0 reverse_int64(update_index)`) for the last log record of
+every log block in the file, supporting bounded-time lookup.
+
+A log index block must be written if 2 or more log blocks are written to
+the file. If present, the log index appears after the last log block.
+No padding is used to align the log index to a block boundary.
+
+Log index format is identical to ref index, except the keys are 9 bytes
+longer to include `'\0'` and the 8-byte `reverse_int64(update_index)`.
+Records use `block_position` to refer to the start of a log block.
+
+Reading the index
++++++++++++++++++
+
+Readers loading the log index must first read the footer (below) to
+obtain `log_index_position`. If not present, the position will be 0.
+
+Footer
+^^^^^^
+
+After the last block of the file, a file footer is written. It begins
+like the file header, but is extended with additional data.
+
+....
+ HEADER
+
+ uint64( ref_index_position )
+ uint64( (obj_position << 5) | obj_id_len )
+ uint64( obj_index_position )
+
+ uint64( log_position )
+ uint64( log_index_position )
+
+ uint32( CRC-32 of above )
+....
+
+If a section is missing (e.g. ref index) the corresponding position
+field (e.g. `ref_index_position`) will be 0.
+
+* `obj_position`: byte position for the first obj block.
+* `obj_id_len`: number of bytes used to abbreviate object names in
+obj blocks.
+* `log_position`: byte position for the first log block.
+* `ref_index_position`: byte position for the start of the ref index.
+* `obj_index_position`: byte position for the start of the obj index.
+* `log_index_position`: byte position for the start of the log index.
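+
+The packed second field can be unpacked as (reader pseudocode):
+
+....
+v = read_uint64()
+obj_position = v >> 5
+obj_id_len = v & 0x1f      /* low 5 bits */
+....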
+
+The size of the footer is 68 bytes for version 1, and 72 bytes for
+version 2.
+
+Reading the footer
+++++++++++++++++++
+
+Readers must first read the start of the file to determine the
+version number. Then they seek to `file_length - FOOTER_LENGTH` to access the
+footer. A trusted external source (such as `stat(2)`) is necessary to
+obtain `file_length`. When reading the footer, readers must verify:
+
+* 4-byte magic is correct
+* 1-byte version number is recognized
+* 4-byte CRC-32 matches the other 64 bytes (including magic, and
+version)
+
+Once verified, the other fields of the footer can be accessed.
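+
+Footer validation may be sketched as follows (reader pseudocode,
+assuming the magic and header layout described earlier in this
+document):
+
+....
+file_length = stat(path).size        /* trusted external source */
+seek(file_length - FOOTER_LENGTH)    /* 68 (v1) or 72 (v2) */
+read(buf, FOOTER_LENGTH)
+if (bad_magic(buf) || unknown_version(buf))
+    abort
+if (crc32(buf, FOOTER_LENGTH - 4) != uint32_at(buf, FOOTER_LENGTH - 4))
+    abort
+/* footer verified; position fields may now be used */
+....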
+
+Empty tables
+++++++++++++
+
+A reftable may be empty. In this case, the file starts with a header
+and is immediately followed by a footer.
+
+Binary search
+^^^^^^^^^^^^^
+
+Binary search within a block is supported by the `restart_offset` fields
+at the end of the block. Readers can binary search through the restart
+table to locate between which two restart points the sought reference or
+key should appear.
+
+Each record identified by a `restart_offset` stores the complete key in
+the `suffix` field of the record, making the compare operation during
+binary search straightforward.
+
+Once a restart point lexicographically before the sought reference has
+been identified, readers can linearly scan through the following record
+entries to locate the sought record, terminating if the current record
+sorts after the sought key (and therefore the key is not present).
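+
+The lookup may be sketched as (reader pseudocode):
+
+....
+/* find the last restart point whose key sorts <= wanted */
+lo = -1
+hi = restart_count
+while (hi - lo > 1) {
+    mid = (lo + hi) / 2
+    if (key_at(restart_offset[mid]) <= wanted)
+        lo = mid
+    else
+        hi = mid
+}
+start = (lo < 0) ? first_record_offset : restart_offset[lo]
+/* scan forward from start; stop once the current key sorts after wanted */
+....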
+
+Restart point selection
++++++++++++++++++++++++
+
+Writers determine the restart points at file creation. The process is
+arbitrary, but every 16 or 64 records is recommended. Every 16 may be
+more suitable for smaller block sizes (4k or 8k), every 64 for larger
+block sizes (64k).
+
+More frequent restart points reduce prefix compression and increase
+space consumed by the restart table, both of which increase file size.
+
+Less frequent restart points make prefix compression more effective,
+decreasing overall file size, at the cost of readers having to walk
+through more records after the binary search step.
+
+A maximum of `65535` restart points per block is supported.
+
+Considerations
+~~~~~~~~~~~~~~
+
+Lightweight refs dominate
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The reftable format assumes the vast majority of references are single
+object names valued with common prefixes, such as Gerrit Code Review's
+`refs/changes/` namespace, GitHub's `refs/pulls/` namespace, or many
+lightweight tags in the `refs/tags/` namespace.
+
+Annotated tags storing the peeled object cost an additional object name per
+reference.
+
+Low overhead
+^^^^^^^^^^^^
+
+A reftable with very few references (e.g. git.git with 5 heads) is 269
+bytes, vs. 332 bytes for packed-refs. This supports
+reftable scaling down for transaction logs (below).
+
+Block size
+^^^^^^^^^^
+
+For a Gerrit Code Review type repository with many change refs, larger
+block sizes (64 KiB) and less frequent restart points (every 64) yield
+better compression due to more references within the block compressing
+against the prior reference.
+
+Larger block sizes reduce the index size, as the reftable will require
+fewer blocks to store the same number of references.
+
+Minimal disk seeks
+^^^^^^^^^^^^^^^^^^
+
+Assuming the index block has been loaded into memory, binary searching
+for any single reference requires exactly 1 disk seek to load the
+containing block.
+
+Scans and lookups dominate
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Scanning all references and lookup by name (or namespace such as
+`refs/heads/`) are the most common activities performed on repositories.
+Object names are stored directly with references to optimize this use case.
+
+Logs are infrequently read
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Logs are infrequently accessed, but can be large. Deflating log blocks
+saves disk space, with some increased penalty at read time.
+
+Logs are stored in an isolated section from refs, reducing the burden on
+reference readers that want to ignore logs. Further, historical logs can
+be isolated into log-only files.
+
+Logs are read backwards
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Logs are frequently accessed backwards (most recent N records for master
+to answer `master@{4}`), so log records are grouped by reference, and
+sorted descending by update index.
+
+Repository format
+~~~~~~~~~~~~~~~~~
+
+Version 1
+^^^^^^^^^
+
+A repository must set its `$GIT_DIR/config` to configure reftable:
+
+....
+[core]
+ repositoryformatversion = 1
+[extensions]
+ refStorage = reftable
+....
+
+Layout
+^^^^^^
+
+A collection of reftable files is stored in the `$GIT_DIR/reftable/` directory.
+Their names should have a random element, such that each filename is globally
+unique; this helps avoid spurious failures on Windows, where open files cannot
+be removed or overwritten. It is suggested to use
+`${min_update_index}-${max_update_index}-${random}.ref` as a naming convention.
+
+Log-only files use the `.log` extension, while ref-only and mixed ref
+and log files use the `.ref` extension.
+
+The stack ordering file is `$GIT_DIR/reftable/tables.list` and lists the
+current files, one per line, in order, from oldest (base) to newest
+(most recent):
+
+....
+$ cat .git/reftable/tables.list
+00000001-00000001-RANDOM1.log
+00000002-00000002-RANDOM2.ref
+00000003-00000003-RANDOM3.ref
+....
+
+Readers must read `$GIT_DIR/reftable/tables.list` to determine which
+files are relevant right now, and search through the stack in reverse
+order (last reftable is examined first).
+
+Reftable files not listed in `tables.list` may be new (and about to be
+added to the stack by the active writer), or ancient and ready to be
+pruned.
+
+Backward compatibility
+^^^^^^^^^^^^^^^^^^^^^^
+
+Older clients should continue to recognize the directory as a git
+repository so they don't look for an enclosing repository in parent
+directories. To this end, a reftable-enabled repository must contain the
+following dummy files:
+
+* `.git/HEAD`, a regular file containing `ref: refs/heads/.invalid`
+* `.git/refs/`, a directory
+* `.git/refs/heads`, a regular file
+
+Readers
+^^^^^^^
+
+Readers can obtain a consistent snapshot of the reference space by
+doing the following:
+
+1. Open and read the `tables.list` file.
+2. Open each of the reftable files that it mentions.
+3. If any of the files is missing, goto 1.
+4. Read from the now-open files as long as necessary.
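+
+A sketch of this retry loop:
+
+....
+retry:
+    names[] = read("tables.list")
+    for (i = 0; i < names.length; i++) {
+        fd[i] = open(names[i])
+        if (fd[i] < 0) {
+            close_all(fd)
+            goto retry      /* the stack changed underneath us */
+        }
+    }
+/* fd[] now holds a consistent snapshot of the reference space */
+....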
+
+Update transactions
+^^^^^^^^^^^^^^^^^^^
+
+Although reftables are immutable, mutations are supported by writing a
+new reftable and atomically appending it to the stack:
+
+1. Acquire `tables.list.lock`.
+2. Read `tables.list` to determine current reftables.
+3. Select `update_index` to be the most recent file's
+`max_update_index + 1`.
+4. Prepare temp reftable `tmp_XXXXXX`, including log entries.
+5. Rename `tmp_XXXXXX` to `${update_index}-${update_index}-${random}.ref`.
+6. Copy `tables.list` to `tables.list.lock`, appending the file from (5).
+7. Rename `tables.list.lock` to `tables.list`.
+
+During step 4 the new file's `min_update_index` and `max_update_index`
+are both set to the `update_index` selected by step 3. All log records
+for the transaction use the same `update_index` in their keys. This
+enables later correlation of which references were updated by the same
+transaction.
+
+Because a single `tables.list.lock` file is used to manage locking, the
+repository is single-threaded for writers. Writers may have to busy-spin
+(with backoff) around creating `tables.list.lock`, for up to an
+acceptable wait period, aborting if the repository is too busy to
+mutate. Application servers wrapped around repositories (e.g. Gerrit
+Code Review) can layer their own lock/wait queue to improve fairness to
+writers.
+
+Reference deletions
+^^^^^^^^^^^^^^^^^^^
+
+Deletion of any reference can be explicitly stored by setting the `type`
+to `0x0` and omitting the `value` field of the `ref_record`. This serves
+as a tombstone, overriding any assertions about the existence of the
+reference from earlier files in the stack.
+
+Compaction
+^^^^^^^^^^
+
+A partial stack of reftables can be compacted by merging references
+using a straightforward merge join across reftables, selecting the most
+recent value for output, and omitting deleted references that do not
+appear in remaining, lower reftables.
+
+A compacted reftable should set its `min_update_index` to the smallest
+of the input files' `min_update_index`, and its `max_update_index`
+likewise to the largest input `max_update_index`.
+
+For sake of illustration, assume the stack currently consists of
+reftable files (from oldest to newest): A, B, C, and D. The compactor is
+going to compact B and C, leaving A and D alone.
+
+1. Obtain lock `tables.list.lock` and read the `tables.list` file.
+2. Obtain locks `B.lock` and `C.lock`. Ownership of these locks
+prevents other processes from trying to compact these files.
+3. Release `tables.list.lock`.
+4. Compact `B` and `C` into a temp file
+`${min_update_index}-${max_update_index}_XXXXXX`.
+5. Reacquire lock `tables.list.lock`.
+6. Verify that `B` and `C` are still in the stack, in that order. This
+should always be the case, assuming that other processes are adhering to
+the locking protocol.
+7. Rename `${min_update_index}-${max_update_index}_XXXXXX` to
+`${min_update_index}-${max_update_index}-${random}.ref`.
+8. Write the new stack to `tables.list.lock`, replacing `B` and `C`
+with the file renamed in (7).
+9. Rename `tables.list.lock` to `tables.list`.
+10. Delete `B` and `C`, perhaps after a short sleep to avoid forcing
+readers to backtrack.
+
+This strategy permits compactions to proceed independently of updates.
+
+Each reftable (compacted or not) is uniquely identified by its name, so
+open reftables can be cached by their name.
+
+Windows
+^^^^^^^
+
+On Windows, and other systems that do not allow deleting or renaming
+over open files, compaction may succeed, but other readers may prevent
+obsolete tables from being deleted.
+
+On these platforms, the following strategy can be followed: on closing a
+reftable stack, reload `tables.list`, and delete any tables no longer mentioned
+in `tables.list`.
+
+Irregular program exit may still leave behind unused files. In this case, a
+cleanup operation should proceed as follows:
+
+* take a lock `tables.list.lock` to prevent concurrent modifications
+* refresh the reftable stack, by reading `tables.list`
+* for each `*.ref` file, remove it if
+** it is not mentioned in `tables.list`, and
+** its max update_index is not beyond the max update_index of the stack
+
+
+Alternatives considered
+~~~~~~~~~~~~~~~~~~~~~~~
+
+bzip packed-refs
+^^^^^^^^^^^^^^^^
+
+`bzip2` can significantly shrink a large packed-refs file (e.g. 62 MiB
+compresses to 23 MiB, 37%). However, the bzip format does not support
+random access to a single reference. Readers must inflate and discard
+while performing a linear scan.
+
+Breaking packed-refs into chunks (individually compressing each chunk)
+would reduce the amount of data a reader must inflate, but still leaves
+the problem of indexing chunks to support readers efficiently locating
+the correct chunk.
+
+Given the compression achieved by reftable's encoding, it does not seem
+necessary to add the complexity of bzip/gzip/zlib.
+
+Michael Haggerty's alternate format
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Michael Haggerty proposed
+link:https://lore.kernel.org/git/CAMy9T_HCnyc1g8XWOOWhe7nN0aEFyyBskV2aOMb_fe%2BwGvEJ7A%40mail.gmail.com/[an
+alternate] format to reftable on the Git mailing list. This format uses
+smaller chunks, without the restart table, and avoids block alignment
+with padding. Reflog entries immediately follow each ref, and are thus
+interleaved between refs.
+
+Performance testing indicates reftable is faster for lookups (51%
+faster, 11.2 usec vs. 5.4 usec), although reftable produces a slightly
+larger file (+ ~3.2%, 28.3M vs 29.2M):
+
+[cols=">,>,>,>",options="header",]
+|=====================================
+|format |size |seek cold |seek hot
+|mh-alt |28.3 M |23.4 usec |11.2 usec
+|reftable |29.2 M |19.9 usec |5.4 usec
+|=====================================
+
+JGit Ketch RefTree
+^^^^^^^^^^^^^^^^^^
+
+https://dev.eclipse.org/mhonarc/lists/jgit-dev/msg03073.html[JGit Ketch]
+proposed
+link:https://lore.kernel.org/git/CAJo%3DhJvnAPNAdDcAAwAvU9C4RVeQdoS3Ev9WTguHx4fD0V_nOg%40mail.gmail.com/[RefTree],
+an encoding of references inside Git tree objects stored as part of the
+repository's object database.
+
+The RefTree format adds additional load on the object database storage
+layer (more loose objects, more objects in packs), and relies heavily on
+the packer's delta compression to save space. Namespaces which are flat
+(e.g. thousands of tags in refs/tags) initially create very large loose
+objects, and so RefTree does not address the problem of copying many
+references to modify a handful.
+
+Flat namespaces are not efficiently searchable in RefTree, as tree
+objects in canonical formatting cannot be binary searched. This fails
+the need to handle a large number of references in a single namespace,
+such as GitHub's `refs/pulls`, or a project with many tags.
+
+LMDB
+^^^^
+
+David Turner proposed
+https://lore.kernel.org/git/1455772670-21142-26-git-send-email-dturner@twopensource.com/[using
+LMDB], as LMDB is lightweight (64k of runtime code) and has a
+GPL-compatible license.
+
+A downside of LMDB is its reliance on a single C implementation. This
+makes embedding inside JGit (a popular reimplementation of Git)
+difficult, and hoisting onto virtual storage (for JGit DFS) virtually
+impossible.
+
+A common format that can be supported by all major Git implementations
+(git-core, JGit, libgit2) is strongly preferred.
diff --git a/Documentation/technical/remembering-renames.txt b/Documentation/technical/remembering-renames.txt
new file mode 100644
index 0000000000..2fd5cc88e0
--- /dev/null
+++ b/Documentation/technical/remembering-renames.txt
@@ -0,0 +1,671 @@
+Rebases and cherry-picks involve a sequence of merges whose results are
+recorded as new single-parent commits. The first parent side of those
+merges represents the "upstream" side, and often includes a far larger set of
+changes than the second parent side. Traditionally, the renames on the
+first-parent side of that sequence of merges were repeatedly re-detected
+for every merge. This file explains why it is safe and effective during
+rebases and cherry-picks to remember renames on the upstream side of
+history as an optimization, assuming all merges are automatic and clean
+(i.e. no conflicts and not interrupted for user input or editing).
+
+Outline:
+
+ 0. Assumptions
+
+ 1. How rebasing and cherry-picking work
+
+ 2. Why the renames on MERGE_SIDE1 in any given pick are *always* a
+ superset of the renames on MERGE_SIDE1 for the next pick.
+
+ 3. Why any rename on MERGE_SIDE1 in any given pick is _almost_ always also
+ a rename on MERGE_SIDE1 for the next pick
+
+ 4. A detailed description of the counter-examples to #3.
+
+ 5. Why the special cases in #4 are still fully reasonable to use to pair
+ up files for three-way content merging in the merge machinery, and why
+ they do not affect the correctness of the merge.
+
+ 6. Interaction with skipping of "irrelevant" renames
+
+ 7. Additional items that need to be cached
+
+ 8. How directory rename detection interacts with the above and why this
+ optimization is still safe even if merge.directoryRenames is set to
+ "true".
+
+
+=== 0. Assumptions ===
+
+There are two assumptions that will hold throughout this document:
+
+ * The upstream side where commits are transplanted to is treated as the
+ first parent side when rebase/cherry-pick call the merge machinery
+
+ * All merges are fully automatic
+
+and a third that will hold in sections 2-5 for simplicity, that I'll later
+address in section 8:
+
+ * No directory renames occur
+
+
+Let me explain more about each assumption and why I include it:
+
+
+The first assumption is merely for the purposes of making this document
+clearer; the optimization implementation does not actually depend upon it.
+However, the assumption does hold in all cases because it reflects the way
+that both rebase and cherry-pick were implemented; and the implementation
+of cherry-pick and rebase are not readily changeable for backwards
+compatibility reasons (see for example the discussion of the --ours and
+--theirs flag in the documentation of `git checkout`, particularly the
+comments about how they behave with rebase). The optimization avoids
+checking first-parent-ness, though. It checks the conditions that make the
+optimization valid instead, so it would still continue working if someone
+changed the parent ordering that cherry-pick and rebase use. But making
+this assumption does make this document much clearer and prevents me from
+having to repeat every example twice.
+
+If the second assumption is violated, then the optimization simply is
+turned off and thus isn't relevant to consider. The second assumption can
+also be stated as "there is no interruption for a user to resolve conflicts
+or to just further edit or tweak files". While real rebases and
+cherry-picks are often interrupted (either because it's an interactive
+rebase where the user requested to stop and edit, or because there were
+conflicts that the user needs to resolve), the cache of renames is not
+stored on disk, and thus is thrown away as soon as the rebase or
+cherry-pick stops for the user to resolve the operation.
+
+The third assumption makes sections 2-5 simpler, and allows people to
+understand the basics of why this optimization is safe and effective, and
+then I can go back and address the specifics in section 8. It is probably
+also worth noting that if directory renames do occur, then the default of
+merge.directoryRenames being set to "conflict" means that the operation
+will stop for users to resolve the conflicts and the cache will be thrown
+away, and thus that there won't be an optimization to apply. So, the only
+reason we need to address directory renames specifically, is that some
+users will have set merge.directoryRenames to "true" to allow the merges to
+continue to proceed automatically. The optimization is still safe with
+this config setting, but we have to discuss a few more cases to show why;
+this discussion is deferred until section 8.
+
+
+=== 1. How rebasing and cherry-picking work ===
+
+Consider the following setup (from the git-rebase manpage):
+
+ A---B---C topic
+ /
+ D---E---F---G main
+
+After rebasing or cherry-picking topic onto main, this will appear as:
+
+ A'--B'--C' topic
+ /
+ D---E---F---G main
+
+The way the commits A', B', and C' are created is through a series of
+merges, where rebase or cherry-pick sequentially uses each of the three
+A-B-C commits in a special merge operation. Let's label the three commits
+in the merge operation as MERGE_BASE, MERGE_SIDE1, and MERGE_SIDE2. For
+this picture, the three commits for each of the three merges would be:
+
+To create A':
+ MERGE_BASE: E
+ MERGE_SIDE1: G
+ MERGE_SIDE2: A
+
+To create B':
+ MERGE_BASE: A
+ MERGE_SIDE1: A'
+ MERGE_SIDE2: B
+
+To create C':
+ MERGE_BASE: B
+ MERGE_SIDE1: B'
+ MERGE_SIDE2: C
+
+Sometimes, folks are surprised that these three-way merges are done. It
+can be useful in understanding these three-way merges to view them in a
+slightly different light. For example, in creating C', you can view it as
+either:
+
+ * Apply the changes between B & C to B'
+ * Apply the changes between B & B' to C
+
+Conceptually the two statements above are the same as a three-way merge of
+B, B', and C, at least the parts before you decide to record a commit.
+
+
+=== 2. Why the renames on MERGE_SIDE1 in any given pick are always a ===
+=== superset of the renames on MERGE_SIDE1 for the next pick. ===
+
+The merge machinery uses the filenames it is fed from MERGE_BASE,
+MERGE_SIDE1, and MERGE_SIDE2. It will only move content to a different
+filename under one of three conditions:
+
+ * To make both pieces of a conflict available to a user during conflict
+ resolution (examples: directory/file conflict, add/add type conflict
+ such as symlink vs. regular file)
+
+ * When MERGE_SIDE1 renames the file.
+
+ * When MERGE_SIDE2 renames the file.
+
+First, let's remember what commits are involved in the first and second
+picks of the cherry-pick or rebase sequence:
+
+To create A':
+ MERGE_BASE: E
+ MERGE_SIDE1: G
+ MERGE_SIDE2: A
+
+To create B':
+ MERGE_BASE: A
+ MERGE_SIDE1: A'
+ MERGE_SIDE2: B
+
+So, in particular, we need to show that the renames between E and G are a
+superset of those between A and A'.
+
+A' is created by the first merge. A' will only have renames for one of the
+three reasons listed above. The first case, a conflict, results in a
+situation where the cache is dropped and thus this optimization doesn't
+take effect, so we need not consider that case. The third case, a rename
+on MERGE_SIDE2 (i.e. from G to A), will show up in A' but it also shows up
+in A -- therefore when diffing A and A' that path does not show up as a
+rename. The only remaining way for renames to show up in A' is for the
+rename to come from MERGE_SIDE1. Therefore, all renames between A and A'
+are a subset of those between E and G. Equivalently, all renames between E
+and G are a superset of those between A and A'.
+
+
+=== 3. Why any rename on MERGE_SIDE1 in any given pick is _almost_ ===
+=== always also a rename on MERGE_SIDE1 for the next pick. ===
+
+Let's again look at the first two picks:
+
+To create A':
+ MERGE_BASE: E
+ MERGE_SIDE1: G
+ MERGE_SIDE2: A
+
+To create B':
+ MERGE_BASE: A
+ MERGE_SIDE1: A'
+ MERGE_SIDE2: B
+
+Now let's look at any given rename from MERGE_SIDE1 of the first pick, i.e.
+any given rename from E to G. Let's use the filenames 'oldfile' and
+'newfile' for demonstration purposes. That first pick will function as
+follows; when the rename is detected, the merge machinery will do a
+three-way content merge of the following:
+ E:oldfile
+ G:newfile
+ A:oldfile
+and produce a new result:
+ A':newfile
+
+Note above that I've assumed that E->A did not rename oldfile. If that
+side did rename, then we most likely have a rename/rename(1to2) conflict
+that will cause the rebase or cherry-pick operation to halt and drop the
+in-memory cache of renames and thus doesn't need to be considered further.
+In the special case that E->A does rename the file but also renames it to
+newfile, then there is no conflict from the renaming and the merge can
+succeed. In this special case, the rename is not valid to cache because
+the second merge will find A:newfile in the MERGE_BASE (see also the new
+testcases in t6429 with "rename same file identically" in their
+description). So a rename/rename(1to1) needs to be specially handled by
+pruning renames from the cache and decrementing the dir_rename_counts in
+the current and leading directories associated with those renames. Or,
+since these are really rare, one could just take the easy way out and
+disable the remembering renames optimization when a rename/rename(1to1)
+happens.
+
+The previous paragraph handled the cases for E->A renaming oldfile, let's
+continue assuming that oldfile is not renamed in A.
+
+As per the diagram for creating B', MERGE_SIDE1 involves the changes from A
+to A'. So, we are curious whether A:oldfile and A':newfile will be viewed
+as renames. Note that:
+
+ * There will be no A':oldfile (because there could not have been a
+ G:oldfile as we do not do break detection in the merge machinery and
+ G:newfile was detected as a rename, and by the construction of the
+ rename above that merged cleanly, the merge machinery will ensure there
+ is no 'oldfile' in the result).
+
+ * There will be no A:newfile (if there had been, we would have had a
+ rename/add conflict).
+
+ * Clearly A:oldfile and A':newfile are "related" (A':newfile came from a
+ clean three-way content merge involving A:oldfile).
+
+We can also expound on the third point above, by noting that three-way
+content merges can also be viewed as applying the differences between the
+base and one side to the other side. Thus we can view A':newfile as
+having been created by applying the changes between E:oldfile and
+G:newfile (which were detected as being related, i.e. <50% changed) to
+A:oldfile.
+
+Thus A:oldfile and A':newfile are just as related as E:oldfile and
+G:newfile are -- they have exactly identical differences. Since the latter
+were detected as renames, A:oldfile and A':newfile should also be
+detectable as renames almost always.
+
+
+=== 4. A detailed description of the counter-examples to #3. ===
+
+We already noted in section 3 that rename/rename(1to1) (i.e. both sides
+renaming a file the same way) was one counter-example. The more
+interesting bit, though, is why did we need to use the "almost" qualifier
+when stating that A:oldfile and A':newfile are "almost" always detectable
+as renames?
+
+Let's repeat an earlier point that section 3 made:
+
+ A':newfile was created by applying the changes between E:oldfile and
+ G:newfile to A:oldfile. The changes between E:oldfile and G:newfile were
+ <50% of the size of E:oldfile.
+
+If those changes that were <50% of the size of E:oldfile are also <50% of
+the size of A:oldfile, then A:oldfile and A':newfile will be detectable as
+renames. However, if there is a dramatic size reduction between E:oldfile
+and A:oldfile (but the changes between E:oldfile, G:newfile, and A:oldfile
+still somehow merge cleanly), then traditional rename detection would not
+detect A:oldfile and A':newfile as renames.
+
+Here's an example where that can happen:
+
+ * E:oldfile had 20 lines
+ * G:newfile added 10 new lines at the beginning of the file
+ * A:oldfile kept the first 3 lines of the file, and deleted all the rest
+
+Then A':newfile would have 13 lines, 3 of which match those in
+A:oldfile. E:oldfile -> G:newfile would be detected as a rename, but
+A:oldfile and A':newfile would not be.
+
+
+=== 5. Why the special cases in #4 are still fully reasonable to use to ===
+=== pair up files for three-way content merging in the merge machinery, ===
+=== and why they do not affect the correctness of the merge. ===
+
+In the rename/rename(1to1) case, A:newfile and A':newfile are not renames
+since they use the *same* filename. However, files with the same filename
+are obviously fine to pair up for three-way content merging (the merge
+machinery has never employed break detection). The interesting
+counter-example case is thus not the rename/rename(1to1) case, but the case
+where A did not rename oldfile. That was the case that we spent most of
+the time discussing in sections 3 and 4. The remainder of this section
+will be devoted to that case as well.
+
+So, even if A:oldfile and A':newfile aren't detectable as renames, why is
+it still reasonable to pair them up for three-way content merging in the
+merge machinery? There are multiple reasons:
+
+ * As noted in sections 3 and 4, the diff between A:oldfile and A':newfile
+ is *exactly* the same as the diff between E:oldfile and G:newfile. The
+ latter pair were detected as renames, so it seems unlikely to surprise
+ users for us to treat A:oldfile and A':newfile as renames.
+
+ * In fact, "oldfile" and "newfile" were at one point detected as renames
+ due to how they were constructed in the E..G chain. And we used that
+ information once already in this rebase/cherry-pick. I think users
+ would be unlikely to be surprised at us continuing to treat the files
+ as renames and would quickly understand why we had done so.
+
+ * Marking or declaring files as renames is *not* the end goal for merges.
+ Merges use renames to determine which files make sense to be paired up
+ for three-way content merges.
+
+ * A:oldfile and A':newfile were _already_ paired up in a three-way
+ content merge; that is how A':newfile was created. In fact, that
+ three-way content merge was clean. So using them again in a later
+ three-way content merge seems very reasonable.
+
+However, the above is focusing on the common scenarios. Let's try to look
+at all possible unusual scenarios and compare the results without the
+optimization to those with it. Consider the following theoretical cases; we will
+then dive into each to determine which of them are possible,
+and if so, what they mean:
+
+ 1. Without the optimization, the second merge results in a conflict.
+ With the optimization, the second merge also results in a conflict.
+ Questions: Are the conflicts confusingly different? Better in one case?
+
+ 2. Without the optimization, the second merge results in NO conflict.
+ With the optimization, the second merge also results in NO conflict.
+ Questions: Are the merges the same?
+
+ 3. Without the optimization, the second merge results in a conflict.
+ With the optimization, the second merge results in NO conflict.
+ Questions: Possible? Bug, bugfix, or something else?
+
+ 4. Without the optimization, the second merge results in NO conflict.
+ With the optimization, the second merge results in a conflict.
+ Questions: Possible? Bug, bugfix, or something else?
+
+I'll consider all four cases, but out of order.
+
+The fourth case is impossible. For the code without the remembering
+renames optimization to not get a conflict, B:oldfile would need to exactly
+match A:oldfile -- if it doesn't, there would be a modify/delete conflict.
+If A:oldfile matches B:oldfile exactly, then a three-way content merge
+between A:oldfile, A':newfile, and B:oldfile would have no conflict and
+just give us the version of newfile from A' as the result.
+
+From the same logic as the above paragraph, the second case would indeed
+result in identical merges. When A:oldfile exactly matches B:oldfile, an
+undetected rename would say, "Oh, I see one side didn't modify 'oldfile'
+and the other side deleted it. I'll delete it. And I see you have this
+brand new file named 'newfile' in A', so I'll keep it." That gives the
+same results as three-way content merging A:oldfile, A':newfile, and
+B:oldfile -- a removal of oldfile with the version of newfile from A'
+showing up in the result.
+
+The third case is interesting. It means that A:oldfile and A':newfile were
+not just similar enough, but that the changes between them did not conflict
+with the changes between A:oldfile and B:oldfile. This would validate our
+hunch that the files were similar enough to be used in a three-way content
+merge, and thus seems entirely correct for us to have used them that way.
+(Sidenote: One particular example here may be enlightening. Let's say that
+B was an immediate revert of A. B clearly would have been a clean revert
+of A, since A was B's immediate parent. One would assume that if you can
+pick a commit, you should also be able to cherry-pick its immediate revert.
+However, this is one of those funny corner cases; without this
+optimization, we just successfully picked a commit cleanly, but we are
+unable to cherry-pick its immediate revert due to the size differences
+between E:oldfile and A:oldfile.)
+
+That leaves only the first case to consider -- when we get conflicts both
+with or without the optimization. Without the optimization, we'll have a
+modify/delete conflict, where both A':newfile and B:oldfile are left in the
+tree for the user to deal with and no hints about the potential similarity
+between the two. With the optimization, we'll have a three-way content
+merged A:oldfile, A':newfile, and B:oldfile with conflict markers
+suggesting we thought the files were related but giving the user the chance
+to resolve. As noted above, I don't think users will find us treating
+'oldfile' and 'newfile' as related as a surprise since they were between E
+and G. In any event, though, this case shouldn't be concerning since we
+hit a conflict in both cases, told the user what we know, and asked them to
+resolve it.
+
+So, in summary, case 4 is impossible, case 2 yields the same behavior, and
+cases 1 and 3 seem to provide as good or better behavior with the
+optimization than without.
+
+
+=== 6. Interaction with skipping of "irrelevant" renames ===
+
+Previous optimizations involved skipping rename detection for paths
+considered to be "irrelevant". See for example the following commits:
+
+ * 32a56dfb99 ("merge-ort: precompute subset of sources for which we
+ need rename detection", 2021-03-11)
+ * 2fd9eda462 ("merge-ort: precompute whether directory rename
+ detection is needed", 2021-03-11)
+ * 9bd342137e ("diffcore-rename: determine which relevant_sources are
+ no longer relevant", 2021-03-13)
+
+Relevance is always determined by what the _other_ side of history has
+done, in terms of modifying a file that our side renamed, or adding a
+file to a directory which our side renamed. This means that a path
+that is "irrelevant" when picking the first commit of a series in a
+rebase or cherry-pick, may suddenly become "relevant" when picking the
+next commit.
+
+The upshot of this is that we can only cache rename detection results
+for relevant paths, and need to re-check relevance in subsequent
+commits. If those subsequent commits have additional paths that are
+relevant for rename detection, then we will need to redo rename
+detection -- though we can limit it to the paths for which we have not
+already detected renames.
+
+
+=== 7. Additional items that need to be cached ===
+
+It turns out we have to cache more than just renames; we also cache:
+
+ A) non-renames (i.e. unpaired deletes)
+ B) counts of renames within directories
+ C) sources that were marked as RELEVANT_LOCATION, but which were
+ downgraded to RELEVANT_NO_MORE
+ D) the toplevel trees involved in the merge
+
+These are all stored in struct rename_info, and respectively appear in
+ * cached_pairs (alongside actual renames, just with a value of NULL)
+ * dir_rename_counts
+ * cached_irrelevant
+ * merge_trees
+
+The reason for (A) comes from the irrelevant renames skipping
+optimization discussed in section 6. The fact that irrelevant renames
+are skipped means we only get a subset of the potential renames
+detected and subsequent commits may need to run rename detection on
+the upstream side on a subset of the remaining renames (to get the
+renames that are relevant for that later commit). Since unpaired
+deletes are involved in rename detection too, we don't want to
+repeatedly check that those paths remain unpaired on the upstream side
+with every commit we are transplanting.
+
+The reason for (B) is that diffcore_rename_extended() is what
+generates the counts of renames by directory which is needed in
+directory rename detection, and if we don't run
+diffcore_rename_extended() again then we need to have the output from
+it, including dir_rename_counts, from the previous run.
+
+The reason for (C) is that merge-ort's tree traversal will again think
+those paths are relevant (marking them as RELEVANT_LOCATION), but the
+fact that they were downgraded to RELEVANT_NO_MORE means that
+dir_rename_counts already has the information we need for directory
+rename detection. (A path which becomes RELEVANT_CONTENT in a
+subsequent commit will be removed from cached_irrelevant.)
+
+The reason for (D) is that it is how we determine whether the
+remembering renames optimization can be used. In particular, remembering that our
+sequence of merges looks like:
+
+ Merge 1:
+ MERGE_BASE: E
+ MERGE_SIDE1: G
+ MERGE_SIDE2: A
+ => Creates A'
+
+ Merge 2:
+ MERGE_BASE: A
+ MERGE_SIDE1: A'
+ MERGE_SIDE2: B
+ => Creates B'
+
+It is the fact that the trees A and A' appear both in Merge 1 and in
+Merge 2, with A as a parent of A', that allows this optimization. So
+we store the trees to compare with what we are asked to merge next
+time.
+
+
+=== 8. How directory rename detection interacts with the above and ===
+=== why this optimization is still safe even if ===
+=== merge.directoryRenames is set to "true". ===
+
+As noted in the assumptions section:
+
+ """
+ ...if directory renames do occur, then the default of
+ merge.directoryRenames being set to "conflict" means that the operation
+ will stop for users to resolve the conflicts and the cache will be
+ thrown away, and thus that there won't be an optimization to apply.
+ So, the only reason we need to address directory renames specifically,
+ is that some users will have set merge.directoryRenames to "true" to
+ allow the merges to continue to proceed automatically.
+ """
+
+Let's remember that we need to look at how any given pick affects the next
+one. So let's again use the first two picks from the diagram in section
+one:
+
+ First pick does this three-way merge:
+ MERGE_BASE: E
+ MERGE_SIDE1: G
+ MERGE_SIDE2: A
+ => creates A'
+
+ Second pick does this three-way merge:
+ MERGE_BASE: A
+ MERGE_SIDE1: A'
+ MERGE_SIDE2: B
+ => creates B'
+
+Now, directory rename detection exists so that if one side of history
+renames a directory, and the other side adds a new file to the old
+directory, then the merge (with merge.directoryRenames=true) can move the
+file into the new directory. There are two qualitatively different ways to
+add a new file to an old directory: create a new file, or rename a file
+into that directory. Also, directory renames can be done on either side of
+history, so there are four cases to consider:
+
+ * MERGE_SIDE1 renames old dir, MERGE_SIDE2 adds new file to old dir
+ * MERGE_SIDE1 renames old dir, MERGE_SIDE2 renames file into old dir
+ * MERGE_SIDE1 adds new file to old dir, MERGE_SIDE2 renames old dir
+ * MERGE_SIDE1 renames file into old dir, MERGE_SIDE2 renames old dir
+
+One last note before we consider these four cases: There are some
+important properties about how we implement this optimization with
+respect to directory rename detection that we need to bear in mind
+while considering all of these cases:
+
+ * rename caching occurs *after* applying directory renames
+
+ * a rename created by directory rename detection is recorded for the side
+ of history that did the directory rename.
+
+ * dir_rename_counts, the nested map of
+ {oldname => {newname => count}},
+ is cached between runs as well. This basically means that directory
+ rename detection is also cached, though only on the side of history
+ that we cache renames for (MERGE_SIDE1 as far as this document is
+ concerned; see the assumptions section). Two interesting sub-notes
+ about these counts:
+
+ * If we need to perform rename-detection again on the given side (e.g.
+ some paths are relevant for rename detection that weren't before),
+ then we clear dir_rename_counts and recompute it, making use of
+ cached_pairs. The reason it is important to do this is optimizations
+ around RELEVANT_LOCATION exist to prevent us from computing
+ unnecessary renames for directory rename detection and from computing
+ dir_rename_counts for irrelevant directories; but those same renames
+ or directories may become necessary for subsequent merges. The
+ easiest way to "fix up" dir_rename_counts in such cases is to just
+ recompute it.
+
+ * If we prune rename/rename(1to1) entries from the cache, then we also
+ need to update dir_rename_counts to decrement the counts for the
+ involved directory and any relevant parent directories (to undo what
+ update_dir_rename_counts() in diffcore-rename.c incremented when the
+ rename was initially found). If we instead just disable the
+ remembering renames optimization when the exceedingly rare
+ rename/rename(1to1) cases occur, then dir_rename_counts will get
+ re-computed the next time rename detection occurs, as noted above.
+
+ * the side with multiple commits to pick is the side of history that we
+ do NOT cache renames for. Thus, there are no additional commits to
+ change the number of renames in a directory, except for those done by
+ directory rename detection (which always pad the majority).
+
+ * the "renames" we cache are modified slightly by any directory rename,
+ as noted below.
+
+Now, with those notes out of the way, let's go through the four cases
+in order:
+
+Case 1: MERGE_SIDE1 renames old dir, MERGE_SIDE2 adds new file to old dir
+
+ This case looks like this:
+
+ MERGE_BASE: E, Has olddir/
+ MERGE_SIDE1: G, Renames olddir/ -> newdir/
+ MERGE_SIDE2: A, Adds olddir/newfile
+ => creates A', With newdir/newfile
+
+ MERGE_BASE: A, Has olddir/newfile
+ MERGE_SIDE1: A', Has newdir/newfile
+ MERGE_SIDE2: B, Modifies olddir/newfile
+ => expected B', with threeway-merged newdir/newfile from above
+
+ In this case, with the optimization, note that after the first commit:
+ * MERGE_SIDE1 remembers olddir/ -> newdir/
+ * MERGE_SIDE1 has cached olddir/newfile -> newdir/newfile
+ Given the cached rename noted above, the second merge can proceed as
+ expected without needing to perform rename detection from A -> A'.
+
+Case 2: MERGE_SIDE1 renames old dir, MERGE_SIDE2 renames file into old dir
+
+ This case looks like this:
+ MERGE_BASE: E oldfile, olddir/
+ MERGE_SIDE1: G oldfile, olddir/ -> newdir/
+ MERGE_SIDE2: A oldfile -> olddir/newfile
+ => creates A', With newdir/newfile representing original oldfile
+
+ MERGE_BASE: A olddir/newfile
+ MERGE_SIDE1: A' newdir/newfile
+ MERGE_SIDE2: B modify olddir/newfile
+ => expected B', with threeway-merged newdir/newfile from above
+
+ In this case, with the optimization, note that after the first commit:
+ * MERGE_SIDE1 remembers olddir/ -> newdir/
+ * MERGE_SIDE1 has cached olddir/newfile -> newdir/newfile
+ (NOT oldfile -> newdir/newfile; compare to case with
+ (p->status == 'R' && new_path) in possibly_cache_new_pair())
+
+ Given the cached rename noted above, the second merge can proceed as
+ expected without needing to perform rename detection from A -> A'.
+
+Case 3: MERGE_SIDE1 adds new file to old dir, MERGE_SIDE2 renames old dir
+
+ This case looks like this:
+
+ MERGE_BASE: E, Has olddir/
+ MERGE_SIDE1: G, Adds olddir/newfile
+ MERGE_SIDE2: A, Renames olddir/ -> newdir/
+ => creates A', With newdir/newfile
+
+ MERGE_BASE: A, Has newdir/, but no notion of newdir/newfile
+ MERGE_SIDE1: A', Has newdir/newfile
+ MERGE_SIDE2: B, Has newdir/, but no notion of newdir/newfile
+ => expected B', with newdir/newfile from A'
+
+ In this case, with the optimization, note that after the first commit there
+ were no renames on MERGE_SIDE1, and any renames on MERGE_SIDE2 are tossed.
+ But the second merge didn't need any renames so this is fine.
+
+Case 4: MERGE_SIDE1 renames file into old dir, MERGE_SIDE2 renames old dir
+
+ This case looks like this:
+
+ MERGE_BASE: E, Has olddir/
+ MERGE_SIDE1: G, Renames oldfile -> olddir/newfile
+ MERGE_SIDE2: A, Renames olddir/ -> newdir/
+ => creates A', With newdir/newfile representing original oldfile
+
+ MERGE_BASE: A, Has oldfile
+ MERGE_SIDE1: A', Has newdir/newfile
+ MERGE_SIDE2: B, Modifies oldfile
+ => expected B', with threeway-merged newdir/newfile from above
+
+ In this case, with the optimization, note that after the first commit:
+ * MERGE_SIDE1 remembers oldfile -> newdir/newfile
+ (NOT oldfile -> olddir/newfile; compare to case of second
+ block under p->status == 'R' in possibly_cache_new_pair())
+ * MERGE_SIDE2 renames are tossed because only MERGE_SIDE1 is remembered
+
+ Given the cached rename noted above, the second merge can proceed as
+ expected without needing to perform rename detection from A -> A'.
+
+Finally, I'll just note here that interactions with the
+skip-irrelevant-renames optimization means we sometimes don't detect
+renames for any files within a directory that was renamed, in which
+case we will not have been able to detect any rename for the directory
+itself. In such a case, we do not know whether the directory was
+renamed; we want to be careful to avoid caching some kind of "this
+directory was not renamed" statement. If we did, then a subsequent
+commit being rebased could add a file to the old directory, and the
+user would expect it to end up in the correct directory -- something
+our erroneous "this directory was not renamed" cache would preclude.
diff --git a/Documentation/technical/rerere.txt b/Documentation/technical/rerere.txt
index aa22d7ace8..af5f9fc24f 100644
--- a/Documentation/technical/rerere.txt
+++ b/Documentation/technical/rerere.txt
@@ -117,7 +117,7 @@ early A became C or B, a late X became Y or Z". We can see there are
4 combinations of ("B or C", "C or B") x ("X or Y", "Y or X").
By sorting, the conflict is given its canonical name, namely, "an
-early part became B or C, a late part becames X or Y", and whenever
+early part became B or C, a late part became X or Y", and whenever
any of these four patterns appear, and we can get to the same conflict
and resolution that we saw earlier.
diff --git a/Documentation/technical/shallow.txt b/Documentation/technical/shallow.txt
index 01dedfe9ff..f3738baa0f 100644
--- a/Documentation/technical/shallow.txt
+++ b/Documentation/technical/shallow.txt
@@ -13,7 +13,7 @@ pretend as if they are root commits (e.g. "git log" traversal
stops after showing them; "git fsck" does not complain saying
the commits listed on their "parent" lines do not exist).
-Each line contains exactly one SHA-1. When read, a commit_graft
+Each line contains exactly one object name. When read, a commit_graft
will be constructed, which has nr_parent < 0 to make it easier
to discern from user provided grafts.
diff --git a/Documentation/technical/sparse-index.txt b/Documentation/technical/sparse-index.txt
new file mode 100644
index 0000000000..3b24c1a219
--- /dev/null
+++ b/Documentation/technical/sparse-index.txt
@@ -0,0 +1,208 @@
+Git Sparse-Index Design Document
+================================
+
+The sparse-checkout feature allows users to focus a working directory on
+a subset of the files at HEAD. The cone mode patterns, enabled by
+`core.sparseCheckoutCone`, allow for very fast pattern matching to
+discover which files at HEAD belong in the sparse-checkout cone.
+
+Three important scale dimensions for a Git working directory are:
+
+* `HEAD`: How many files are present at `HEAD`?
+
+* Populated: How many files are within the sparse-checkout cone?
+
+* Modified: How many files has the user modified in the working directory?
+
+We will use big-O notation -- O(X) -- to denote how expensive certain
+operations are in terms of these dimensions.
+
+These dimensions are ordered by their magnitude: users (typically) modify
+fewer files than are populated, and we can only populate files at `HEAD`.
+
+Problems occur if there is an extreme imbalance in these dimensions. For
+example, if `HEAD` contains millions of paths but the populated set has
+only tens of thousands, then commands like `git status` and `git add` can
+be dominated by work that requires O(`HEAD`) operations instead of
+O(Populated). Primarily, the cost is in parsing and rewriting the index,
+which is filled primarily with files at `HEAD` that are marked with the
+`SKIP_WORKTREE` bit.
+
+The sparse-index intends to take these commands that read and modify the
+index from O(`HEAD`) to O(Populated). To do this, we need to modify the
+index format in a significant way: add "sparse directory" entries.
+
+With cone mode patterns, it is possible to detect when an entire
+directory will have its contents outside of the sparse-checkout definition.
+Instead of listing all of the files it contains as individual entries, a
+sparse-index contains an entry with the directory name, referencing the
+object ID of the tree at `HEAD` and marked with the `SKIP_WORKTREE` bit.
+If we need to discover the details for paths within that directory, we
+can parse trees to find that list.
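+
+For example, in a hypothetical repository where nothing under `deep/`
+is inside the sparse-checkout cone, the entries would collapse as
+follows:
+
+....
+Full index:               Sparse-index:
+  a.c                       a.c
+  deep/deeper/x.c           deep/  (tree OID, SKIP_WORKTREE)
+  deep/deeper/y.c           in/b.c
+  in/b.c
+....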
+
+At time of writing, sparse-directory entries violate expectations about the
+index format and its in-memory data structure. There are many consumers in
+the codebase that expect to iterate through all of the index entries and
+see only files. In fact, these loops expect to see a reference to every
+staged file. One way to handle this is to parse trees to replace a
+sparse-directory entry with all of the files within that tree as the index
+is loaded. However, parsing trees is slower than parsing the index
+format, so this expansion is slower than leaving the index alone. The plan is
+to make all of these integrations "sparse aware" so this expansion through
+tree parsing is unnecessary and they use fewer resources than when using a
+full index.
+
+The implementation plan below follows four phases to slowly integrate with
+the sparse-index. The intention is to incrementally update Git commands to
+interact safely with the sparse-index without significant slowdowns. This
+may not always be possible, but the hope is that the primary commands that
+users need in their daily work are dramatically improved.
+
+Phase I: Format and initial speedups
+------------------------------------
+
+During this phase, Git learns to enable the sparse-index and safely parse
+one. Protections are put in place so that every consumer of the in-memory
+data structure can operate with its current assumption of every file at
+`HEAD`.
+
+At first, every index parse will call a helper method,
+`ensure_full_index()`, which scans the index for sparse-directory entries
+(pointing to trees) and replaces them with the full list of paths (with
+blob contents) by parsing tree objects. This will be slower in all cases.
+The only noticeable change in behavior will be that the serialized index
+file contains sparse-directory entries.
+
+To start, we use a new required index extension, `sdir`, to allow
+inserting sparse-directory entries into indexes with file format
+versions 2, 3, and 4. This prevents Git versions that do not understand
+the sparse-index from operating on one, while allowing tools that do not
+understand the sparse-index to operate on repositories as long as they do
+not interact with the index. A new format, index v5, will be introduced
+that includes sparse-directory entries by default. It might also
+introduce other features that have been considered for improving the
+index, as well.
+
+Next, consumers of the index will be guarded against operating on a
+sparse-index by inserting calls to `ensure_full_index()` or
+`expand_index_to_path()`. If a specific path is requested, then those will
+be protected from within the `index_file_exists()` and `index_name_pos()`
+API calls: they will call `ensure_full_index()` if necessary. The
+intention here is to preserve existing behavior when interacting with a
+sparse-checkout. We don't want a change to happen by accident, without
+tests. Many of these locations may not need any change before removing the
+guards, but we should not do so without tests to ensure the expected
+behavior happens.
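+
+A minimal sketch of the guard pattern (the surrounding function is a
+hypothetical consumer; `ensure_full_index()` is the helper introduced
+above):
+
+....
+static void iterate_staged_files(struct index_state *istate)
+{
+	unsigned int i;
+
+	/* expand any sparse-directory entries before iterating */
+	ensure_full_index(istate);
+
+	for (i = 0; i < istate->cache_nr; i++) {
+		struct cache_entry *ce = istate->cache[i];
+		/* ce is now always a file entry, never a sparse directory */
+	}
+}
+....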
+
+It may be desirable to _change_ the behavior of some commands in the
+presence of a sparse index or more generally in any sparse-checkout
+scenario. In such cases, these should be carefully communicated and
+tested. No such behavior changes are intended during this phase.
+
+A scan of the codebase shows that not every loop over the cache entries
+needs an `ensure_full_index()` guard. The common reasons include:
+
+1. The loop is scanning for entries with non-zero stage. These entries
+   are not collapsed into a sparse-directory entry (see the sketch
+   after this list).
+
+2. The loop is scanning for submodules. These entries are not collapsed
+ into a sparse-directory entry.
+
+3. The loop is part of the index API, especially around reading or
+ writing the format.
+
+4. The loop checks that cache entries are in the correct order, which
+   holds if and only if the sparse-directory entries are also in the
+   correct location.
+
+5. The loop ignores entries with the `SKIP_WORKTREE` bit set, or is
+   otherwise already aware of sparse-directory entries.
+
+6. The sparse-index is disabled at this point when using the split-index
+ feature, so no effort is made to protect the split-index API.
+
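+As an example of reasons 1 and 2, loops of the following shape need no
+guard, because conflicted entries and submodules are never collapsed
+into a sparse-directory entry (sketch only):
+
+----
+for (i = 0; i < istate->cache_nr; i++) {
+	struct cache_entry *ce = istate->cache[i];
+
+	/* Only conflicted entries and submodules are of interest here. */
+	if (!ce_stage(ce) && !S_ISGITLINK(ce->ce_mode))
+		continue;
+
+	/* ... handle the conflict or submodule ... */
+}
+----
+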
+Even after inserting these guards, we will keep expanding sparse-indexes
+for most Git commands using the `command_requires_full_index` repository
+setting. This setting will be on by default and disabled one builtin at a
+time until we have sufficient confidence that all of the index operations
+are properly guarded.
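+
+A builtin that has been fully audited can then opt out of the default
+expansion, roughly like this (a sketch based on the repository-settings
+API; `cmd_status` is just an example caller):
+
+----
+int cmd_status(int argc, const char **argv, const char *prefix)
+{
+	prepare_repo_settings(the_repository);
+	/* This builtin is audited: let it keep the index sparse. */
+	the_repository->settings.command_requires_full_index = 0;
+
+	/* ... */
+}
+----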
+
+To complete this phase, the commands `git status` and `git add` will be
+integrated with the sparse-index so that they operate with O(Populated)
+performance. They will be carefully tested for operations within and
+outside the sparse-checkout definition.
+
+Phase II: Careful integrations
+------------------------------
+
+This phase focuses on ensuring that all index extensions and APIs work
+well with a sparse-index. This requires significant increases to our
+test coverage, especially for operations that interact with the working
+directory outside of the sparse-checkout definition. Some of the
+current behaviors may not be desirable; several tests in
+`t1092-sparse-checkout-compatibility.sh` are already marked as expected
+failures for this reason.
+
+The index extensions that may require special integrations are:
+
+* FS Monitor
+* Untracked cache
+
+While integrating with these features, we should look for patterns that
+might lead to better APIs for interacting with the index. Coalescing
+common usage patterns into an API call can reduce the number of places
+where sparse-directories need to be handled carefully.
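+
+For instance, a hypothetical iterator (not an existing Git API) could
+centralize the sparse-directory handling so that callers never see it:
+
+----
+typedef int (*each_file_entry_fn)(struct cache_entry *ce, void *data);
+
+/*
+ * Hypothetical: walk every staged file, expanding sparse-directory
+ * entries on the fly rather than requiring a full index up front.
+ */
+int for_each_file_entry(struct index_state *istate,
+			each_file_entry_fn fn, void *data);
+----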
+
+Phase III: Important command speedups
+-------------------------------------
+
+At this point, the patterns for testing and implementing sparse-directory
+logic should be relatively stable. This phase focuses on updating some of
+the most common builtins that use the index to operate as O(Populated).
+Here is a potential list of commands that could be valuable to integrate
+at this point:
+
+* `git commit`
+* `git checkout`
+* `git merge`
+* `git rebase`
+
+Hopefully, commands such as `git merge` and `git rebase` can benefit
+instead from merge algorithms that do not use the index as a data
+structure, such as the merge-ORT strategy. As these topics mature, we
+may enable the ORT strategy by default for repositories using the
+sparse-index feature.
+
+Along with `git status` and `git add`, these commands cover the majority
+of users' interactions with the working directory. In addition, we can
+integrate with these commands:
+
+* `git grep`
+* `git rm`
+
+These commands have been proposed as candidates whose behavior could
+change in a repository with a sparse-checkout definition. It would be
+good to enable that behavior automatically when using a sparse-index,
+but the switch in behavior needs to be communicated clearly to the
+user.
+
+This phase is the first where parallel work might be possible without
+too many conflicts between topics.
+
+Phase IV: The long tail
+-----------------------
+
+This last phase is less a "phase" and more "the new normal" after all of
+the previous work.
+
+To start, the `command_requires_full_index` option could be removed in
+favor of expanding only when hitting an API guard.
+
+There are many Git commands that could use special attention to operate as
+O(Populated), while some might be so rare that it is acceptable to leave
+them with additional overhead when a sparse-index is present.
+
+Here are some commands that might be useful to update:
+
+* `git sparse-checkout set`
+* `git am`
+* `git clean`
+* `git stash`