Diffstat (limited to 'Documentation/technical')
18 files changed, 725 insertions, 272 deletions
diff --git a/Documentation/technical/api-diff.txt b/Documentation/technical/api-diff.txt index 8b001de0db..30fc0e9c93 100644 --- a/Documentation/technical/api-diff.txt +++ b/Documentation/technical/api-diff.txt @@ -18,8 +18,8 @@ Calling sequence ---------------- * Prepare `struct diff_options` to record the set of diff options, and - then call `diff_setup()` to initialize this structure. This sets up - the vanilla default. + then call `repo_diff_setup()` to initialize this structure. This + sets up the vanilla default. * Fill in the options structure to specify desired output format, rename detection, etc. `diff_opt_parse()` can be used to parse options given diff --git a/Documentation/technical/api-directory-listing.txt b/Documentation/technical/api-directory-listing.txt index 4f44ca24f6..5abb8e8b1f 100644 --- a/Documentation/technical/api-directory-listing.txt +++ b/Documentation/technical/api-directory-listing.txt @@ -54,7 +54,7 @@ The notable options are: this case, the contents are returned as individual entries. + If this is set, files and directories that explicitly match an ignore -pattern are reported. Implicity ignored directories (directories that +pattern are reported. Implicitly ignored directories (directories that do not match an ignore pattern, but whose contents are all ignored) are not reported, instead all of the contents are reported. diff --git a/Documentation/technical/api-gitattributes.txt b/Documentation/technical/api-gitattributes.txt index e7cbb7c13a..45f0df600f 100644 --- a/Documentation/technical/api-gitattributes.txt +++ b/Documentation/technical/api-gitattributes.txt @@ -146,7 +146,7 @@ To get the values of all attributes associated with a file: * Iterate over the `attr_check.items[]` array to examine the attribute names and values. The name of the attribute - described by a `attr_check.items[]` object can be retrieved via + described by an `attr_check.items[]` object can be retrieved via `git_attr_name(check->items[i].attr)`. (Please note that no items will be returned for unset attributes, so `ATTR_UNSET()` will return false for all returned `attr_check.items[]` objects.) diff --git a/Documentation/technical/api-history-graph.txt b/Documentation/technical/api-history-graph.txt index 18142b6d29..d0d1707c8c 100644 --- a/Documentation/technical/api-history-graph.txt +++ b/Documentation/technical/api-history-graph.txt @@ -80,7 +80,7 @@ Calling sequence it is invoked. * For each commit, call `graph_next_line()` repeatedly, until - `graph_is_commit_finished()` returns non-zero. Each call go + `graph_is_commit_finished()` returns non-zero. Each call to `graph_next_line()` will output a single line of the graph. The resulting lines will not contain any newlines. `graph_next_line()` returns 1 if the resulting line contains the current commit, or 0 if this is merely a line @@ -115,7 +115,6 @@ struct commit *commit; struct git_graph *graph = graph_init(opts); while ((commit = get_revision(opts)) != NULL) { - graph_update(graph, commit); while (!graph_is_commit_finished(graph)) { struct strbuf sb; diff --git a/Documentation/technical/api-parse-options.txt b/Documentation/technical/api-parse-options.txt index 829b558110..2b036d7838 100644 --- a/Documentation/technical/api-parse-options.txt +++ b/Documentation/technical/api-parse-options.txt @@ -183,10 +183,6 @@ There are some macros to easily define options: scale the provided value by 1024, 1024^2 or 1024^3 respectively. The scaled value is put into `unsigned_long_var`. 
-`OPT_DATE(short, long, &timestamp_t_var, description)`:: - Introduce an option with date argument, see `approxidate()`. - The timestamp is put into `timestamp_t_var`. - `OPT_EXPIRY_DATE(short, long, &timestamp_t_var, description)`:: Introduce an option with expiry date argument, see `parse_expiry_date()`. The timestamp is put into `timestamp_t_var`. diff --git a/Documentation/technical/api-revision-walking.txt b/Documentation/technical/api-revision-walking.txt index 55b878ade8..03f9ea6ac4 100644 --- a/Documentation/technical/api-revision-walking.txt +++ b/Documentation/technical/api-revision-walking.txt @@ -15,9 +15,9 @@ revision list. Functions --------- -`init_revisions`:: +`repo_init_revisions`:: - Initialize a rev_info structure with default values. The second + Initialize a rev_info structure with default values. The third parameter may be NULL or can be a prefix path, and then the `.prefix` variable will be set to it. This is typically the first function you want to call when you want to deal with a revision list. After calling diff --git a/Documentation/technical/commit-graph-format.txt b/Documentation/technical/commit-graph-format.txt index ad6af8105c..cc0474ba3e 100644 --- a/Documentation/technical/commit-graph-format.txt +++ b/Documentation/technical/commit-graph-format.txt @@ -18,9 +18,9 @@ metadata, including: the graph file. These positional references are stored as unsigned 32-bit integers -corresponding to the array position withing the list of commit OIDs. We -use the most-significant bit for special purposes, so we can store at most -(1 << 31) - 1 (around 2 billion) commits. +corresponding to the array position within the list of commit OIDs. Due +to some special constants we use to track parents, we can store at most +(1 << 30) + (1 << 29) + (1 << 28) - 1 (around 1.8 billion) commits. == Commit graph files have the following format: @@ -70,10 +70,10 @@ CHUNK DATA: OID Lookup (ID: {'O', 'I', 'D', 'L'}) (N * H bytes) The OIDs for all commits in the graph, sorted in ascending order. - Commit Data (ID: {'C', 'G', 'E', 'T' }) (N * (H + 16) bytes) + Commit Data (ID: {'C', 'D', 'A', 'T' }) (N * (H + 16) bytes) * The first H bytes are for the OID of the root tree. * The next 8 bytes are for the positions of the first two parents - of the ith commit. Stores value 0xffffffff if no parent in that + of the ith commit. Stores value 0x70000000 if no parent in that position. If there are more than two parents, the second value has its most-significant bit on and the other bits store an array position into the Large Edge List chunk. diff --git a/Documentation/technical/commit-graph.txt b/Documentation/technical/commit-graph.txt index 0550c6d0dc..7805b0968c 100644 --- a/Documentation/technical/commit-graph.txt +++ b/Documentation/technical/commit-graph.txt @@ -15,13 +15,13 @@ There are two main costs here: 1. Decompressing and parsing commits. 2. Walking the entire graph to satisfy topological order constraints. -The commit graph file is a supplemental data structure that accelerates +The commit-graph file is a supplemental data structure that accelerates commit graph walks. If a user downgrades or disables the 'core.commitGraph' config setting, then the existing ODB is sufficient. The file is stored as "commit-graph" either in the .git/objects/info directory or in the info directory of an alternate. -The commit graph file stores the commit graph structure along with some +The commit-graph file stores the commit graph structure along with some extra metadata to speed up graph walks.
By listing commit OIDs in lexicographic order, we can identify an integer position for each commit and refer to the parents of a commit using those integer positions. We use @@ -77,10 +77,33 @@ in the commit graph. We can treat these commits as having "infinite" generation number and walk until reaching commits with known generation number. +We use the macro GENERATION_NUMBER_INFINITY = 0xFFFFFFFF to mark commits not +in the commit-graph file. If a commit-graph file was written by a version +of Git that did not compute generation numbers, then those commits will +have generation number represented by the macro GENERATION_NUMBER_ZERO = 0. + +Since the commit-graph file is closed under reachability, we can guarantee +the following weaker condition on all commits: + + If A and B are commits with generation numbers N and M, respectively, + and N < M, then A cannot reach B. + +Note how the strict inequality differs from the inequality when we have +fully-computed generation numbers. Using strict inequality may result in +walking a few extra commits, but the simplicity in dealing with commits +with generation number *_INFINITY or *_ZERO is valuable. + +We use the macro GENERATION_NUMBER_MAX = 0x3FFFFFFF for commits whose +generation numbers are computed to be at least this value. We limit at +this value since it is the largest value that can be stored in the +commit-graph file using the 30 bits available to generation numbers. This +presents another case where a commit can have generation number equal to +that of a parent. + Design Details -------------- -- The commit graph file is stored in a file named 'commit-graph' in the +- The commit-graph file is stored in a file named 'commit-graph' in the .git/objects/info directory. This could be stored in the info directory of an alternate. @@ -89,48 +112,34 @@ Design Details - The file format includes parameters for the object ID hash function, so a future change of hash algorithm does not require a change in format. +- Commit grafts and replace objects can change the shape of the commit + history. The latter can also be enabled/disabled on the fly using + `--no-replace-objects`. This leads to difficulty storing both possible + interpretations of a commit id, especially when computing generation + numbers. The commit-graph will not be read or written when + replace-objects or grafts are present. + +- Shallow clones create grafts of commits by dropping their parents. This + leads the commit-graph to think those commits have generation number 1. + If and when those commits are made unshallow, those generation numbers + become invalid. Since shallow clones are intended to restrict the commit + history to a very small set of commits, the commit-graph feature is less + helpful for these clones, anyway. The commit-graph will not be read or + written when shallow commits are present. + Future Work ----------- -- The commit graph feature currently does not honor commit grafts. This can - be remedied by duplicating or refactoring the current graft logic. - -- The 'commit-graph' subcommand does not have a "verify" mode that is - necessary for integration with fsck. - -- The file format includes room for precomputed generation numbers. These - are not currently computed, so all generation numbers will be marked as - 0 (or "uncomputed"). A later patch will include this calculation. - - After computing and storing generation numbers, we must make graph walks aware of generation numbers to gain the performance benefits they enable.
This will mostly be accomplished by swapping a commit-date-ordered priority queue with one ordered by generation number. The following operations are important candidates: - - paint_down_to_common() - 'log --topo-order' + - 'tag --merged' -- Currently, parse_commit_gently() requires filling in the root tree - object for a commit. This passes through lookup_tree() and consequently - lookup_object(). Also, it calls lookup_commit() when loading the parents. - These method calls check the ODB for object existence, even if the - consumer does not need the content. For example, we do not need the - tree contents when computing merge bases. Now that commit parsing is - removed from the computation time, these lookup operations are the - slowest operations keeping graph walks from being fast. Consider - loading these objects without verifying their existence in the ODB and - only loading them fully when consumers need them. Consider a method - such as "ensure_tree_loaded(commit)" that fully loads a tree before - using commit->tree. - -- The current design uses the 'commit-graph' subcommand to generate the graph. - When this feature stabilizes enough to recommend to most users, we should - add automatic graph writes to common operations that create many commits. - For example, one could compute a graph on 'clone', 'fetch', or 'repack' - commands. - -- A server could provide a commit graph file as part of the network protocol +- A server could provide a commit-graph file as part of the network protocol to avoid extra calculations by clients. This feature is only of benefit if the user is willing to trust the file, because verifying the file is correct is as hard as computing it from scratch. diff --git a/Documentation/technical/hash-function-transition.txt b/Documentation/technical/hash-function-transition.txt index 4ab6cd1012..bc2ace2a6e 100644 --- a/Documentation/technical/hash-function-transition.txt +++ b/Documentation/technical/hash-function-transition.txt @@ -59,14 +59,11 @@ that are believed to be cryptographically secure. Goals ----- -Where NewHash is a strong 256-bit hash function to replace SHA-1 (see -"Selection of a New Hash", below): - -1. The transition to NewHash can be done one local repository at a time. +1. The transition to SHA-256 can be done one local repository at a time. a. Requiring no action by any other party. - b. A NewHash repository can communicate with SHA-1 Git servers + b. A SHA-256 repository can communicate with SHA-1 Git servers (push/fetch). - c. Users can use SHA-1 and NewHash identifiers for objects + c. Users can use SHA-1 and SHA-256 identifiers for objects interchangeably (see "Object names on the command line", below). d. New signed objects make use of a stronger hash function than SHA-1 for their security guarantees. @@ -79,7 +76,7 @@ Where NewHash is a strong 256-bit hash function to replace SHA-1 (see Non-Goals --------- -1. Add NewHash support to Git protocol. This is valuable and the +1. Add SHA-256 support to Git protocol. This is valuable and the logical next step but it is out of scope for this initial design. 2. Transparently improving the security of existing SHA-1 signed objects. @@ -87,26 +84,26 @@ Non-Goals repository. 4. Taking the opportunity to fix other bugs in Git's formats and protocols. -5. Shallow clones and fetches into a NewHash repository. (This will - change when we add NewHash support to Git protocol.) -6. Skip fetching some submodules of a project into a NewHash - repository. (This also depends on NewHash support in Git +5. 
Shallow clones and fetches into a SHA-256 repository. (This will + change when we add SHA-256 support to Git protocol.) +6. Skip fetching some submodules of a project into a SHA-256 + repository. (This also depends on SHA-256 support in Git protocol.) Overview -------- We introduce a new repository format extension. Repositories with this -extension enabled use NewHash instead of SHA-1 to name their objects. +extension enabled use SHA-256 instead of SHA-1 to name their objects. This affects both object names and object content --- both the names of objects and all references to other objects within an object are switched to the new hash function. -NewHash repositories cannot be read by older versions of Git. +SHA-256 repositories cannot be read by older versions of Git. -Alongside the packfile, a NewHash repository stores a bidirectional -mapping between NewHash and SHA-1 object names. The mapping is generated +Alongside the packfile, a SHA-256 repository stores a bidirectional +mapping between SHA-256 and SHA-1 object names. The mapping is generated locally and can be verified using "git fsck". Object lookups use this -mapping to allow naming objects using either their SHA-1 and NewHash names +mapping to allow naming objects using either their SHA-1 or SHA-256 names interchangeably. "git cat-file" and "git hash-object" gain options to display an object @@ -116,7 +113,7 @@ object database so that they can be named using the appropriate name (using the bidirectional hash mapping). Fetches from a SHA-1 based server convert the fetched objects into -NewHash form and record the mapping in the bidirectional mapping table +SHA-256 form and record the mapping in the bidirectional mapping table (see below for details). Pushes to a SHA-1 based server convert the objects being pushed into sha1 form so the server does not have to be aware of the hash function the client is using. @@ -125,19 +122,19 @@ Detailed Design --------------- Repository format extension ~~~~~~~~~~~~~~~~~~~~~~~~~~~ -A NewHash repository uses repository format version `1` (see +A SHA-256 repository uses repository format version `1` (see Documentation/technical/repository-version.txt) with extensions `objectFormat` and `compatObjectFormat`: [core] repositoryFormatVersion = 1 [extensions] - objectFormat = newhash + objectFormat = sha256 compatObjectFormat = sha1 The combination of setting `core.repositoryFormatVersion=1` and populating `extensions.*` ensures that all versions of Git later than -`v0.99.9l` will die instead of trying to operate on the NewHash +`v0.99.9l` will die instead of trying to operate on the SHA-256 repository, instead producing an error message. # Between v0.99.9l and v2.7.0 @@ -155,36 +152,36 @@ repository extensions. Object names ~~~~~~~~~~~~ Objects can be named by their 40 hexadecimal digit sha1-name or 64 -hexadecimal digit newhash-name, plus names derived from those (see +hexadecimal digit sha256-name, plus names derived from those (see gitrevisions(7)). The sha1-name of an object is the SHA-1 of the concatenation of its type, length, a nul byte, and the object's sha1-content. This is the traditional <sha1> used in Git to name objects. -The newhash-name of an object is the NewHash of the concatenation of its -type, length, a nul byte, and the object's newhash-content. +The sha256-name of an object is the SHA-256 of the concatenation of its +type, length, a nul byte, and the object's sha256-content.
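To make the naming rule concrete, the sha256-name computation for a blob can be sketched in a few lines of C. This is an illustrative sketch only, written against OpenSSL's SHA-256 API; the helper name blob_sha256_name is invented here, and Git's real implementation goes through its internal hash-function abstraction instead:

    #include <stdio.h>
    #include <openssl/sha.h>

    /*
     * Illustrative sketch (not Git code): compute
     * SHA-256("blob" SP <decimal length> NUL <content>)
     * and render it as a 64-digit hex sha256-name.
     */
    static void blob_sha256_name(const unsigned char *buf, size_t len,
                                 char hex[2 * SHA256_DIGEST_LENGTH + 1])
    {
            SHA256_CTX ctx;
            unsigned char md[SHA256_DIGEST_LENGTH];
            char hdr[32];
            /* the "+ 1" hashes the terminating NUL byte as well */
            int hdrlen = snprintf(hdr, sizeof(hdr), "blob %zu", len) + 1;

            SHA256_Init(&ctx);
            SHA256_Update(&ctx, hdr, hdrlen);
            SHA256_Update(&ctx, buf, len);
            SHA256_Final(md, &ctx);
            for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
                    snprintf(hex + 2 * i, 3, "%02x", md[i]);
    }

The other object types substitute "commit", "tree", or "tag" for "blob" and hash the object's sha256-content as described in the next section.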
Object format ~~~~~~~~~~~~~ The content as a byte sequence of a tag, commit, or tree object named -by sha1 and newhash differ because an object named by newhash-name refers to -other objects by their newhash-names and an object named by sha1-name +by sha1 and sha256 differs because an object named by sha256-name refers to +other objects by their sha256-names and an object named by sha1-name refers to other objects by their sha1-names. -The newhash-content of an object is the same as its sha1-content, except -that objects referenced by the object are named using their newhash-names +The sha256-content of an object is the same as its sha1-content, except +that objects referenced by the object are named using their sha256-names instead of sha1-names. Because a blob object does not refer to any -other object, its sha1-content and newhash-content are the same. +other object, its sha1-content and sha256-content are the same. -The format allows round-trip conversion between newhash-content and +The format allows round-trip conversion between sha256-content and sha1-content. Object storage ~~~~~~~~~~~~~~ Loose objects use zlib compression and packed objects use the packed format described in Documentation/technical/pack-format.txt, just like -today. The content that is compressed and stored uses newhash-content +today. The content that is compressed and stored uses sha256-content instead of sha1-content. Pack index ~~~~~~~~~~ @@ -255,10 +252,10 @@ network byte order): up to and not including the table of CRC32 values. - Zero or more NUL bytes. - The trailer consists of the following: - - A copy of the 20-byte NewHash checksum at the end of the + - A copy of the 20-byte SHA-256 checksum at the end of the corresponding packfile. - - 20-byte NewHash checksum of all of the above. + - 20-byte SHA-256 checksum of all of the above. Loose object index ~~~~~~~~~~~~~~~~~~ @@ -266,7 +263,7 @@ A new file $GIT_OBJECT_DIR/loose-object-idx contains information about all loose objects. Its format is # loose-object-idx - (newhash-name SP sha1-name LF)* + (sha256-name SP sha1-name LF)* where the object names are in hexadecimal format. The file is not sorted. @@ -292,8 +289,8 @@ To remove entries (e.g. in "git pack-refs" or "git-prune"): Translation table ~~~~~~~~~~~~~~~~~ The index files support a bidirectional mapping between sha1-names -and newhash-names. The lookup proceeds similarly to ordinary object -lookups. For example, to convert a sha1-name to a newhash-name: +and sha256-names. The lookup proceeds similarly to ordinary object +lookups. For example, to convert a sha1-name to a sha256-name: 1. Look for the object in idx files. If a match is present in the idx's sorted list of truncated sha1-names, then: @@ -301,8 +298,8 @@ lookups. For example, to convert a sha1-name to a newhash-name: name order mapping. b. Read the corresponding entry in the full sha1-name table to verify we found the right object. If it is, then - c. Read the corresponding entry in the full newhash-name table. - That is the object's newhash-name. + c. Read the corresponding entry in the full sha256-name table. + That is the object's sha256-name. 2. Check for a loose object. Read lines from loose-object-idx until we find a match. @@ -318,25 +315,25 @@ for all objects in the object store. Reading an object's sha1-content ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -The sha1-content of an object can be read by converting all newhash-names -its newhash-content references to sha1-names using the translation table.
+The sha1-content of an object can be read by converting all sha256-names +its sha256-content references to sha1-names using the translation table. Fetch ~~~~~ Fetching from a SHA-1 based server requires translating between SHA-1 -and NewHash based representations on the fly. +and SHA-256 based representations on the fly. SHA-1s named in the ref advertisement that are present on the client -can be translated to NewHash and looked up as local objects using the +can be translated to SHA-256 and looked up as local objects using the translation table. Negotiation proceeds as today. Any "have"s generated locally are converted to SHA-1 before being sent to the server, and SHA-1s -mentioned by the server are converted to NewHash when looking them up +mentioned by the server are converted to SHA-256 when looking them up locally. After negotiation, the server sends a packfile containing the -requested objects. We convert the packfile to NewHash format using +requested objects. We convert the packfile to SHA-256 format using the following steps: 1. index-pack: inflate each object in the packfile and compute its @@ -351,12 +348,12 @@ the following steps: (This list only contains objects reachable from the "wants". If the pack from the server contained additional extraneous objects, then they will be discarded.) -3. convert to newhash: open a new (newhash) packfile. Read the topologically +3. convert to sha256: open a new (sha256) packfile. Read the topologically sorted list just generated. For each object, inflate its - sha1-content, convert to newhash-content, and write it to the newhash - pack. Record the new sha1<->newhash mapping entry for use in the idx. + sha1-content, convert to sha256-content, and write it to the sha256 + pack. Record the new sha1<->sha256 mapping entry for use in the idx. 4. sort: reorder entries in the new pack to match the order of objects - in the pack the server generated and include blobs. Write a newhash idx + in the pack the server generated and include blobs. Write a sha256 idx file 5. clean up: remove the SHA-1 based pack file, index, and topologically sorted list obtained from the server in steps 1 @@ -388,16 +385,16 @@ send-pack. Signed Commits ~~~~~~~~~~~~~~ -We add a new field "gpgsig-newhash" to the commit object format to allow +We add a new field "gpgsig-sha256" to the commit object format to allow signing commits without relying on SHA-1. It is similar to the -existing "gpgsig" field. Its signed payload is the newhash-content of the -commit object with any "gpgsig" and "gpgsig-newhash" fields removed. +existing "gpgsig" field. Its signed payload is the sha256-content of the +commit object with any "gpgsig" and "gpgsig-sha256" fields removed. This means commits can be signed 1. using SHA-1 only, as in existing signed commit objects -2. using both SHA-1 and NewHash, by using both gpgsig-newhash and gpgsig +2. using both SHA-1 and SHA-256, by using both gpgsig-sha256 and gpgsig fields. -3. using only NewHash, by only using the gpgsig-newhash field. +3. using only SHA-256, by only using the gpgsig-sha256 field. Old versions of "git verify-commit" can verify the gpgsig signature in cases (1) and (2) without modifications and view case (3) as an @@ -405,24 +402,24 @@ ordinary unsigned commit. Signed Tags ~~~~~~~~~~~ -We add a new field "gpgsig-newhash" to the tag object format to allow +We add a new field "gpgsig-sha256" to the tag object format to allow signing tags without relying on SHA-1. 
Its signed payload is the -newhash-content of the tag with its gpgsig-newhash field and "-----BEGIN PGP +sha256-content of the tag with its gpgsig-sha256 field and "-----BEGIN PGP SIGNATURE-----" delimited in-body signature removed. This means tags can be signed 1. using SHA-1 only, as in existing signed tag objects -2. using both SHA-1 and NewHash, by using gpgsig-newhash and an in-body +2. using both SHA-1 and SHA-256, by using gpgsig-sha256 and an in-body signature. -3. using only NewHash, by only using the gpgsig-newhash field. +3. using only SHA-256, by only using the gpgsig-sha256 field. Mergetag embedding ~~~~~~~~~~~~~~~~~~ The mergetag field in the sha1-content of a commit contains the sha1-content of a tag that was merged by that commit. -The mergetag field in the newhash-content of the same commit contains the -newhash-content of the same tag. +The mergetag field in the sha256-content of the same commit contains the +sha256-content of the same tag. Submodules ~~~~~~~~~~ @@ -497,7 +494,7 @@ Caveats ------- Invalid objects ~~~~~~~~~~~~~~~ -The conversion from sha1-content to newhash-content retains any +The conversion from sha1-content to sha256-content retains any brokenness in the original object (e.g., tree entry modes encoded with leading 0, tree objects whose paths are not sorted correctly, and commit objects without an author or committer). This is a deliberate @@ -516,7 +513,7 @@ allow lifting this restriction. Alternates ~~~~~~~~~~ -For the same reason, a newhash repository cannot borrow objects from a +For the same reason, a sha256 repository cannot borrow objects from a sha1 repository using objects/info/alternates or $GIT_ALTERNATE_OBJECT_REPOSITORIES. @@ -524,20 +521,20 @@ git notes ~~~~~~~~~ The "git notes" tool annotates objects using their sha1-name as key. This design does not describe a way to migrate notes trees to use -newhash-names. That migration is expected to happen separately (for +sha256-names. That migration is expected to happen separately (for example using a file at the root of the notes tree to describe which hash it uses). Server-side cost ~~~~~~~~~~~~~~~~ -Until Git protocol gains NewHash support, using NewHash based storage +Until Git protocol gains SHA-256 support, using SHA-256 based storage on public-facing Git servers is strongly discouraged. Once Git -protocol gains NewHash support, NewHash based servers are likely not +protocol gains SHA-256 support, SHA-256 based servers are likely not to support SHA-1 compatibility, to avoid what may be a very expensive hash reencode during clone and to encourage peers to modernize. The design described here allows fetches by SHA-1 clients of a -personal NewHash repository because it's not much more difficult than +personal SHA-256 repository because it's not much more difficult than allowing pushes from that repository. This support needs to be guarded by a configuration option --- servers like git.kernel.org that serve a large number of clients would not be expected to bear that cost. @@ -547,7 +544,7 @@ Meaning of signatures The signed payload for signed commits and tags does not explicitly name the hash used to identify objects. 
If some day Git adopts a new hash function with the same length as the current SHA-1 (40 -hexadecimal digit) or NewHash (64 hexadecimal digit) objects then the +hexadecimal digit) or SHA-256 (64 hexadecimal digit) objects then the intent behind the PGP signed payload in an object signature is unclear: @@ -562,7 +559,7 @@ Does this mean Git v2.12.0 is the commit with sha1-name e7e07d5a4fcc2a203d9873968ad3e6bd4d7419d7 or the commit with new-40-digit-hash-name e7e07d5a4fcc2a203d9873968ad3e6bd4d7419d7? -Fortunately NewHash and SHA-1 have different lengths. If Git starts +Fortunately SHA-256 and SHA-1 have different lengths. If Git starts using another hash with the same length to name objects, then it will need to change the format of signed payloads using that hash to address this issue. @@ -574,24 +571,24 @@ supports four different modes of operation: 1. ("dark launch") Treat object names input by the user as SHA-1 and convert any object names written to output to SHA-1, but store - objects using NewHash. This allows users to test the code with no + objects using SHA-256. This allows users to test the code with no visible behavior change except for performance. This allows running even tests that assume the SHA-1 hash function, to sanity-check the behavior of the new mode. - 2. ("early transition") Allow both SHA-1 and NewHash object names in + 2. ("early transition") Allow both SHA-1 and SHA-256 object names in input. Any object names written to output use SHA-1. This allows users to continue to make use of SHA-1 to communicate with peers (e.g. by email) that have not migrated yet and prepares for mode 3. - 3. ("late transition") Allow both SHA-1 and NewHash object names in - input. Any object names written to output use NewHash. In this + 3. ("late transition") Allow both SHA-1 and SHA-256 object names in + input. Any object names written to output use SHA-256. In this mode, users are using a more secure object naming method by default. The disruption is minimal as long as most of their peers are in mode 2 or mode 3. 4. ("post-transition") Treat object names input by the user as - NewHash and write output using NewHash. This is safer than mode 3 + SHA-256 and write output using SHA-256. This is safer than mode 3 because there is less risk that input is incorrectly interpreted using the wrong hash function. @@ -601,27 +598,31 @@ The user can also explicitly specify which format to use for a particular revision specifier and for output, overriding the mode. For example: -git --output-format=sha1 log abac87a^{sha1}..f787cac^{newhash} +git --output-format=sha1 log abac87a^{sha1}..f787cac^{sha256} -Selection of a New Hash ------------------------ +Choice of Hash +-------------- In early 2005, around the time that Git was written, Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu announced an attack finding SHA-1 collisions in 2^69 operations. In August they published details. Luckily, no practical demonstrations of a collision in full SHA-1 were published until 10 years later, in 2017. -The hash function NewHash to replace SHA-1 should be stronger than -SHA-1 was: we would like it to be trustworthy and useful in practice -for at least 10 years. +Git v2.13.0 and later subsequently moved to a hardened SHA-1 +implementation by default that mitigates the SHAttered attack, but +SHA-1 is still believed to be weak. + +The hash to replace this hardened SHA-1 should be stronger than SHA-1 +was: we would like it to be trustworthy and useful in practice for at +least 10 years.
Some other relevant properties: 1. A 256-bit hash (long enough to match common security practice; not excessively long to hurt performance and disk usage). -2. High quality implementations should be widely available (e.g. in - OpenSSL). +2. High quality implementations should be widely available (e.g., in + OpenSSL and Apple CommonCrypto). 3. The hash function's properties should match Git's needs (e.g. Git requires collision and 2nd preimage resistance and does not require @@ -630,14 +631,13 @@ Some other relevant properties: 4. As a tiebreaker, the hash should be fast to compute (fortunately many contenders are faster than SHA-1). -Some hashes under consideration are SHA-256, SHA-512/256, SHA-256x16, -K12, and BLAKE2bp-256. +We choose SHA-256. Transition plan --------------- Some initial steps can be implemented independently of one another: - adding a hash function API (vtable) -- teaching fsck to tolerate the gpgsig-newhash field +- teaching fsck to tolerate the gpgsig-sha256 field - excluding gpgsig-* from the fields copied by "git commit --amend" - annotating tests that depend on SHA-1 values with a SHA1 test prerequisite @@ -664,7 +664,7 @@ Next comes introduction of compatObjectFormat: - adding appropriate index entries when adding a new object to the object store - --output-format option -- ^{sha1} and ^{newhash} revision notation +- ^{sha1} and ^{sha256} revision notation - configuration to specify default input and output format (see "Object names on the command line" above) @@ -672,7 +672,7 @@ The next step is supporting fetches and pushes to SHA-1 repositories: - allow pushes to a repository using the compat format - generate a topologically sorted list of the SHA-1 names of fetched objects -- convert the fetched packfile to newhash format and generate an idx +- convert the fetched packfile to sha256 format and generate an idx file - re-sort to match the order of objects in the fetched packfile @@ -680,30 +680,30 @@ The infrastructure supporting fetch also allows converting an existing repository. In converted repositories and new clones, end users can gain support for the new hash function without any visible change in behavior (see "dark launch" in the "Object names on the command line" -section). In particular this allows users to verify NewHash signatures +section). In particular this allows users to verify SHA-256 signatures on objects in the repository, and it should ensure the transition code is stable in production in preparation for using it more widely. Over time projects would encourage their users to adopt the "early transition" and then "late transition" modes to take advantage of the -new, more futureproof NewHash object names. +new, more futureproof SHA-256 object names. When objectFormat and compatObjectFormat are both set, commands -generating signatures would generate both SHA-1 and NewHash signatures +generating signatures would generate both SHA-1 and SHA-256 signatures by default to support both new and old users. -In projects using NewHash heavily, users could be encouraged to adopt +In projects using SHA-256 heavily, users could be encouraged to adopt the "post-transition" mode to avoid accidentally making implicit use of SHA-1 object names. Once a critical mass of users have upgraded to a version of Git that -can verify NewHash signatures and have converted their existing +can verify SHA-256 signatures and have converted their existing repositories to support verifying them, we can add support for a -setting to generate only NewHash signatures. 
This is expected to be at +setting to generate only SHA-256 signatures. This is expected to be at least a year later. That is also a good moment to advertise the ability to convert -repositories to use NewHash only, stripping out all SHA-1 related +repositories to use SHA-256 only, stripping out all SHA-1 related metadata. This improves performance by eliminating translation overhead and security by avoiding the possibility of accidentally relying on the safety of SHA-1. @@ -742,16 +742,16 @@ using the old hash function. Signed objects with multiple hashes ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -Instead of introducing the gpgsig-newhash field in commit and tag objects -for newhash-content based signatures, an earlier version of this design -added "hash newhash <newhash-name>" fields to strengthen the existing +Instead of introducing the gpgsig-sha256 field in commit and tag objects +for sha256-content based signatures, an earlier version of this design +added "hash sha256 <sha256-name>" fields to strengthen the existing sha1-content based signatures. In other words, a single signature was used to attest to the object content using both hash functions. This had some advantages: * Using one signature instead of two speeds up the signing process. * Having one signed payload with both hashes allows the signer to - attest to the sha1-name and newhash-name referring to the same object. + attest to the sha1-name and sha256-name referring to the same object. * All users consume the same signature. Broken signatures are likely to be detected quickly using current versions of git. @@ -760,11 +760,11 @@ However, it also came with disadvantages: objects it references, even after the transition is complete and translation table is no longer needed for anything else. To support this, the design added fields such as "hash sha1 tree <sha1-name>" - and "hash sha1 parent <sha1-name>" to the newhash-content of a signed + and "hash sha1 parent <sha1-name>" to the sha256-content of a signed commit, complicating the conversion process. * Allowing signed objects without a sha1 (for after the transition is complete) complicated the design further, requiring a "nohash sha1" - field to suppress including "hash sha1" fields in the newhash-content + field to suppress including "hash sha1" fields in the sha256-content and signed payload. Lazily populated translation table @@ -772,7 +772,7 @@ Lazily populated translation table Some of the work of building the translation table could be deferred to push time, but that would significantly complicate and slow down pushes. Calculating the sha1-name at object creation time at the same time it is -being streamed to disk and having its newhash-name calculated should be +being streamed to disk and having its sha256-name calculated should be an acceptable cost. Document History @@ -814,6 +814,12 @@ Incorporated suggestions from jonathantanmy and sbeller: * avoid loose object overhead by packing more aggressively in "git gc --auto" +Later history: + + See the history of this file in git.git for the history of subsequent + edits. 
This document history is no longer being maintained as it + would now be superfluous to the commit log + [1] http://public-inbox.org/git/CA+55aFzJtejiCjV0e43+9oR3QuJK2PiFiLQemytoLpyJWe6P9w@mail.gmail.com/ [2] http://public-inbox.org/git/CA+55aFz+gkAsDZ24zmePQuEs1XPS9BP_s8O7Q4wQ7LV7X5-oDA@mail.gmail.com/ [3] http://public-inbox.org/git/20170306084353.nrns455dvkdsfgo5@sigill.intra.peff.net/ diff --git a/Documentation/technical/http-protocol.txt b/Documentation/technical/http-protocol.txt index 64f49d0bbb..9c5b6f0fac 100644 --- a/Documentation/technical/http-protocol.txt +++ b/Documentation/technical/http-protocol.txt @@ -338,11 +338,11 @@ server advertises capability `allow-tip-sha1-in-want` or request_end request_end = "0000" / "done" - want_list = PKT-LINE(want NUL cap_list LF) + want_list = PKT-LINE(want SP cap_list LF) *(want_pkt) want_pkt = PKT-LINE(want LF) want = "want" SP id - cap_list = *(SP capability) SP + cap_list = capability *(SP capability) have_list = *PKT-LINE("have" SP id LF) diff --git a/Documentation/technical/index-format.txt b/Documentation/technical/index-format.txt index db3572626b..7c4d67aa6a 100644 --- a/Documentation/technical/index-format.txt +++ b/Documentation/technical/index-format.txt @@ -314,3 +314,44 @@ The remaining data of each directory block is grouped by type: - An ewah bitmap, the n-th bit indicates whether the n-th index entry is not CE_FSMONITOR_VALID. + +== End of Index Entry + + The End of Index Entry (EOIE) is used to locate the end of the variable + length index entries and the beginning of the extensions. Code can take + advantage of this to quickly locate the index extensions without having + to parse through all of the index entries. + + Because it must be able to be loaded before the variable length cache + entries and other index extensions, this extension must be written last. + The signature for this extension is { 'E', 'O', 'I', 'E' }. + + The extension consists of: + + - 32-bit offset to the end of the index entries + + - 160-bit SHA-1 over the extension types and their sizes (but not + their contents). E.g. if we have "TREE" extension that is N-bytes + long, "REUC" extension that is M-bytes long, followed by "EOIE", + then the hash would be: + + SHA-1("TREE" + <binary representation of N> + + "REUC" + <binary representation of M>) + +== Index Entry Offset Table + + The Index Entry Offset Table (IEOT) is used to help address the CPU + cost of loading the index by enabling multi-threading the process of + converting cache entries from the on-disk format to the in-memory format. + The signature for this extension is { 'I', 'E', 'O', 'T' }. + + The extension consists of: + + - 32-bit version (currently 1) + + - A number of index offset entries each consisting of: + + - 32-bit offset from the beginning of the file to the first cache entry + in this block of entries. + + - 32-bit count of cache entries in this block diff --git a/Documentation/technical/multi-pack-index.txt b/Documentation/technical/multi-pack-index.txt new file mode 100644 index 0000000000..d7e57639f7 --- /dev/null +++ b/Documentation/technical/multi-pack-index.txt @@ -0,0 +1,109 @@ +Multi-Pack-Index (MIDX) Design Notes +==================================== + +The Git object directory contains a 'pack' directory containing +packfiles (with suffix ".pack") and pack-indexes (with suffix +".idx"). The pack-indexes provide a way to look up objects and +navigate to their offset within the pack, but these must come +in pairs with the packfiles.
This pairing depends on the file +names, as the pack-index differs only in suffix with its packfile. While the pack-indexes provide fast lookup per packfile, +this performance degrades as the number of packfiles increases, +because abbreviations need to inspect every packfile and we are +more likely to have a miss on our most-recently-used packfile. +For some large repositories, repacking into a single packfile +is not feasible due to storage space or excessive repack times. + +The multi-pack-index (MIDX for short) stores a list of objects +and their offsets into multiple packfiles. It contains: + +- A list of packfile names. +- A sorted list of object IDs. +- A list of metadata for the ith object ID including: + - A value j referring to the jth packfile. + - An offset within the jth packfile for the object. +- If large offsets are required, we use another list of large + offsets similar to version 2 pack-indexes. + +Thus, we can provide O(log N) lookup time for any number +of packfiles. + +Design Details +-------------- + +- The MIDX is stored in a file named 'multi-pack-index' in the + .git/objects/pack directory. This could be stored in the pack + directory of an alternate. It refers only to packfiles in that + same directory. + +- The pack.multiIndex config setting must be on to consume MIDX files. + +- The file format includes parameters for the object ID hash + function, so a future change of hash algorithm does not require + a change in format. + +- The MIDX keeps only one record per object ID. If an object appears + in multiple packfiles, then the MIDX selects the copy in the + most-recently modified packfile. + +- If there exist packfiles in the pack directory not registered in + the MIDX, then those packfiles are loaded into the `packed_git` + list and `packed_git_mru` cache. + +- The pack-indexes (.idx files) remain in the pack directory so we + can delete the MIDX file, set core.midx to false, or downgrade + without any loss of information. + +- The MIDX file format uses a chunk-based approach (similar to the + commit-graph file) that allows optional data to be added. + +Future Work +----------- + +- Add a 'verify' subcommand to the 'git midx' builtin to verify the + contents of the multi-pack-index file match the offsets listed in + the corresponding pack-indexes. + +- The multi-pack-index allows many packfiles, especially in a context + where repacking is expensive (such as a very large repo), or + unexpected maintenance time is unacceptable (such as a high-demand + build machine). However, the multi-pack-index needs to be rewritten + in full every time. We can extend the format to be incremental, so + writes are fast. By storing a small "tip" multi-pack-index that + points to large "base" MIDX files, we can keep writes fast while + still reducing the number of binary searches required for object + lookups. + +- The reachability bitmap is currently paired directly with a single + packfile, using the pack-order as the object order to hopefully + compress the bitmaps well using run-length encoding. This could be + extended to pair a reachability bitmap with a multi-pack-index. If + the multi-pack-index is extended to store a "stable object order" + (a function Order(hash) = integer that is constant for a given hash, + even as the multi-pack-index is updated) then a reachability bitmap + could point to a multi-pack-index and be updated independently.
+ +- Packfiles can be marked as "special" using empty files that share + the initial name but replace ".pack" with ".keep" or ".promisor". + We can add an optional chunk of data to the multi-pack-index that + records flags of information about the packfiles. This allows new + states, such as 'repacked' or 'redeltified', that can help with + pack maintenance in a multi-pack environment. It may also be + helpful to organize packfiles by object type (commit, tree, blob, + etc.) and use this metadata to help that maintenance. + +- The partial clone feature records special "promisor" packs that + may point to objects that are not stored locally, but available + on request to a server. The multi-pack-index does not currently + track these promisor packs. + +Related Links +------------- +[0] https://bugs.chromium.org/p/git/issues/detail?id=6 + Chromium work item for: Multi-Pack Index (MIDX) + +[1] https://public-inbox.org/git/20180107181459.222909-1-dstolee@microsoft.com/ + An earlier RFC for the multi-pack-index feature + +[2] https://public-inbox.org/git/alpine.DEB.2.20.1803091557510.23109@alexmv-linux/ + Git Merge 2018 Contributor's summit notes (includes discussion of MIDX) diff --git a/Documentation/technical/pack-format.txt b/Documentation/technical/pack-format.txt index 70a99fd142..cab5bdd2ff 100644 --- a/Documentation/technical/pack-format.txt +++ b/Documentation/technical/pack-format.txt @@ -252,3 +252,80 @@ Pack file entry: <+ corresponding packfile. 20-byte SHA-1-checksum of all of the above. + +== multi-pack-index (MIDX) files have the following format: + +The multi-pack-index files refer to multiple pack-files and loose objects. + +In order to allow extensions that add extra data to the MIDX, we organize +the body into "chunks" and provide a lookup table at the beginning of the +body. The header includes certain length values, such as the number of packs, +the number of base MIDX files, hash lengths and types. + +All 4-byte numbers are in network order. + +HEADER: + + 4-byte signature: + The signature is: {'M', 'I', 'D', 'X'} + + 1-byte version number: + Git only writes or recognizes version 1. + + 1-byte Object Id Version + Git only writes or recognizes version 1 (SHA1). + + 1-byte number of "chunks" + + 1-byte number of base multi-pack-index files: + This value is currently always zero. + + 4-byte number of pack files + +CHUNK LOOKUP: + + (C + 1) * 12 bytes providing the chunk offsets: + First 4 bytes describe chunk id. Value 0 is a terminating label. + Other 8 bytes provide offset in current file for chunk to start. + (Chunks are provided in file-order, so you can infer the length + using the next chunk position if necessary.) + + The remaining data in the body is described one chunk at a time, and + these chunks may be given in any order. Chunks are required unless + otherwise specified. + +CHUNK DATA: + + Packfile Names (ID: {'P', 'N', 'A', 'M'}) + Stores the packfile names as concatenated, null-terminated strings. + Packfiles must be listed in lexicographic order for fast lookups by + name. This is the only chunk not guaranteed to be a multiple of four + bytes in length, so should be the last chunk for alignment reasons. + + OID Fanout (ID: {'O', 'I', 'D', 'F'}) + The ith entry, F[i], stores the number of OIDs with first + byte at most i. Thus F[255] stores the total + number of objects. + + OID Lookup (ID: {'O', 'I', 'D', 'L'}) + The OIDs for all objects in the MIDX are stored in lexicographic + order in this chunk. 
+ + Object Offsets (ID: {'O', 'O', 'F', 'F'}) + Stores two 4-byte values for every object. + 1: The pack-int-id for the pack storing this object. + 2: The offset within the pack. + If all offsets are less than 2^31, then the large offset chunk + will not exist and offsets are stored as in IDX v1. + If there is at least one offset value larger than 2^32-1, then + the large offset chunk must exist. If the large offset chunk + exists and the 31st bit is on, then removing that bit reveals + the row in the large offsets containing the 8-byte offset of + this object. + + [Optional] Object Large Offsets (ID: {'L', 'O', 'F', 'F'}) + 8-byte offsets into large packfiles. + +TRAILER: + + 20-byte SHA1-checksum of the above contents. diff --git a/Documentation/technical/pack-protocol.txt b/Documentation/technical/pack-protocol.txt index 7fee6b780a..6ac774d5f6 100644 --- a/Documentation/technical/pack-protocol.txt +++ b/Documentation/technical/pack-protocol.txt @@ -50,7 +50,8 @@ Each Extra Parameter takes the form of `<key>=<value>` or `<key>`. Servers that receive any such Extra Parameters MUST ignore all unrecognized keys. Currently, the only Extra Parameter recognized is -"version=1". +"version" with a value of '1' or '2'. See protocol-v2.txt for more +information on protocol version 2. Git Transport ------------- @@ -284,7 +285,9 @@ information is sent back to the client in the next step. The client can optionally request that pack-objects omit various objects from the packfile using one of several filtering techniques. These are intended for use with partial clone and partial fetch -operations. See `rev-list` for possible "filter-spec" values. +operations. An object that does not meet a filter-spec value is +omitted unless explicitly requested in a 'want' line. See `rev-list` +for possible filter-spec values. Once all the 'want's and 'shallow's (and optional 'deepen') are transferred, clients MUST send a flush-pkt, to tell the server side diff --git a/Documentation/technical/partial-clone.txt b/Documentation/technical/partial-clone.txt index 0bed2472c8..1ef66bd788 100644 --- a/Documentation/technical/partial-clone.txt +++ b/Documentation/technical/partial-clone.txt @@ -69,24 +69,24 @@ Design Details - A new pack-protocol capability "filter" is added to the fetch-pack and upload-pack negotiation. - - This uses the existing capability discovery mechanism. - See "filter" in Documentation/technical/pack-protocol.txt. ++ +This uses the existing capability discovery mechanism. +See "filter" in Documentation/technical/pack-protocol.txt. - Clients pass a "filter-spec" to clone and fetch which is passed to the server to request filtering during packfile construction. - - There are various filters available to accommodate different situations. - See "--filter=<filter-spec>" in Documentation/rev-list-options.txt. ++ +There are various filters available to accommodate different situations. +See "--filter=<filter-spec>" in Documentation/rev-list-options.txt. - On the server pack-objects applies the requested filter-spec as it creates "filtered" packfiles for the client. - - These filtered packfiles are *incomplete* in the traditional sense because - they may contain objects that reference objects not contained in the - packfile and that the client doesn't already have. For example, the - filtered packfile may contain trees or tags that reference missing blobs - or commits that reference missing trees. 
++ +These filtered packfiles are *incomplete* in the traditional sense because +they may contain objects that reference objects not contained in the +packfile and that the client doesn't already have. For example, the +filtered packfile may contain trees or tags that reference missing blobs +or commits that reference missing trees. - On the client these incomplete packfiles are marked as "promisor packfiles" and treated differently by various commands. @@ -104,47 +104,47 @@ Handling Missing Objects to repository corruption. To differentiate these cases, the local repository specially indicates such filtered packfiles obtained from the promisor remote as "promisor packfiles". - - These promisor packfiles consist of a "<name>.promisor" file with - arbitrary contents (like the "<name>.keep" files), in addition to - their "<name>.pack" and "<name>.idx" files. ++ +These promisor packfiles consist of a "<name>.promisor" file with +arbitrary contents (like the "<name>.keep" files), in addition to +their "<name>.pack" and "<name>.idx" files. - The local repository considers a "promisor object" to be an object that it knows (to the best of its ability) that the promisor remote has promised that it has, either because the local repository has that object in one of its promisor packfiles, or because another promisor object refers to it. - - When Git encounters a missing object, Git can see if it a promisor object - and handle it appropriately. If not, Git can report a corruption. - - This means that there is no need for the client to explicitly maintain an - expensive-to-modify list of missing objects.[a] ++ +When Git encounters a missing object, Git can see if it is a promisor object +and handle it appropriately. If not, Git can report a corruption. ++ +This means that there is no need for the client to explicitly maintain an +expensive-to-modify list of missing objects.[a] - Since almost all Git code currently expects any referenced object to be present locally and because we do not want to force every command to do a dry-run first, a fallback mechanism is added to allow Git to attempt to dynamically fetch missing objects from the promisor remote. - - When the normal object lookup fails to find an object, Git invokes - fetch-object to try to get the object from the server and then retry - the object lookup. This allows objects to be "faulted in" without - complicated prediction algorithms. - - For efficiency reasons, no check as to whether the missing object is - actually a promisor object is performed. - - Dynamic object fetching tends to be slow as objects are fetched one at - a time. ++ +When the normal object lookup fails to find an object, Git invokes +fetch-object to try to get the object from the server and then retry +the object lookup. This allows objects to be "faulted in" without +complicated prediction algorithms. ++ +For efficiency reasons, no check as to whether the missing object is +actually a promisor object is performed. ++ +Dynamic object fetching tends to be slow as objects are fetched one at +a time. - `checkout` (and any other command using `unpack-trees`) has been taught to bulk pre-fetch all required missing blobs in a single batch. - `rev-list` has been taught to print missing objects. - - This can be used by other commands to bulk prefetch objects. - For example, a "git log -p A..B" may internally want to first do - something like "git rev-list --objects --quiet --missing=print A..B" - and prefetch those objects in bulk.
++ +This can be used by other commands to bulk prefetch objects. +For example, a "git log -p A..B" may internally want to first do +something like "git rev-list --objects --quiet --missing=print A..B" +and prefetch those objects in bulk. - `fsck` has been updated to be fully aware of promisor objects. @@ -154,11 +154,11 @@ Handling Missing Objects - The global variable "fetch_if_missing" is used to control whether an object lookup will attempt to dynamically fetch a missing object or report an error. - - We are not happy with this global variable and would like to remove it, - but that requires significant refactoring of the object code to pass an - additional flag. We hope that concurrent efforts to add an ODB API can - encompass this. ++ +We are not happy with this global variable and would like to remove it, +but that requires significant refactoring of the object code to pass an +additional flag. We hope that concurrent efforts to add an ODB API can +encompass this. Fetching Missing Objects @@ -168,10 +168,10 @@ Fetching Missing Objects transport_fetch_refs(), setting a new transport option TRANS_OPT_NO_DEPENDENTS to indicate that only the objects themselves are desired, not any object that they refer to. - - Because some transports invoke fetch_pack() in the same process, fetch_pack() - has been updated to not use any object flags when the corresponding argument - (no_dependents) is set. ++ +Because some transports invoke fetch_pack() in the same process, fetch_pack() +has been updated to not use any object flags when the corresponding argument +(no_dependents) is set. - The local repository sends a request with the hashes of all requested objects as "want" lines, and does not perform any packfile negotiation. @@ -187,13 +187,13 @@ Current Limitations - The remote used for a partial clone (or the first partial fetch following a regular clone) is marked as the "promisor remote". - - We are currently limited to a single promisor remote and only that - remote may be used for subsequent partial fetches. - - We accept this limitation because we believe initial users of this - feature will be using it on repositories with a strong single central - server. ++ +We are currently limited to a single promisor remote and only that +remote may be used for subsequent partial fetches. ++ +We accept this limitation because we believe initial users of this +feature will be using it on repositories with a strong single central +server. - Dynamic object fetching will only ask the promisor remote for missing objects. We assume that the promisor remote has a complete view of the @@ -221,13 +221,13 @@ Future Work - Allow more than one promisor remote and define a strategy for fetching missing objects from specific promisor remotes or of iterating over the set of promisor remotes until a missing object is found. - - A user might want to have multiple geographically-close cache servers - for fetching missing blobs while continuing to do filtered `git-fetch` - commands from the central server, for example. - - Or the user might want to work in a triangular work flow with multiple - promisor remotes that each have an incomplete view of the repository. ++ +A user might want to have multiple geographically-close cache servers +for fetching missing blobs while continuing to do filtered `git-fetch` +commands from the central server, for example. ++ +Or the user might want to work in a triangular work flow with multiple +promisor remotes that each have an incomplete view of the repository. 
- Allow repack to work on promisor packfiles (while keeping them distinct
  from non-promisor packfiles).

@@ -238,25 +238,25 @@ Future Work

- Investigate use of a long-running process to dynamically fetch a series
  of objects, such as proposed in [5,6] to reduce process startup and
  overhead costs.
-
-  It would be nice if pack protocol V2 could allow that long-running
-  process to make a series of requests over a single long-running
-  connection.
++
+It would be nice if pack protocol V2 could allow that long-running
+process to make a series of requests over a single long-running
+connection.

- Investigate pack protocol V2 to avoid the info/refs broadcast on
  each connection with the server to dynamically fetch missing objects.

- Investigate the need to handle loose promisor objects.
-
-  Objects in promisor packfiles are allowed to reference missing objects
-  that can be dynamically fetched from the server.  An assumption was
-  made that loose objects are only created locally and therefore should
-  not reference a missing object.  We may need to revisit that assumption
-  if, for example, we dynamically fetch a missing tree and store it as a
-  loose object rather than a single object packfile.
-
-  This does not necessarily mean we need to mark loose objects as promisor;
-  it may be sufficient to relax the object lookup or is-promisor functions.
++
+Objects in promisor packfiles are allowed to reference missing objects
+that can be dynamically fetched from the server.  An assumption was
+made that loose objects are only created locally and therefore should
+not reference a missing object.  We may need to revisit that assumption
+if, for example, we dynamically fetch a missing tree and store it as a
+loose object rather than a single object packfile.
++
+This does not necessarily mean we need to mark loose objects as promisor;
+it may be sufficient to relax the object lookup or is-promisor functions.

Non-Tasks

@@ -265,13 +265,13 @@ Non-Tasks

- Every time the subject of "demand loading blobs" comes up it seems
  that someone suggests that the server be allowed to "guess" and send
  additional objects that may be related to the requested objects.
-
-  No work has gone into actually doing that; we're just documenting that
-  it is a common suggestion.  We're not sure how it would work and have
-  no plans to work on it.
-
-  It is valid for the server to send more objects than requested (even
-  for a dynamic object fetch), but we are not building on that.
++
+No work has gone into actually doing that; we're just documenting that
+it is a common suggestion.  We're not sure how it would work and have
+no plans to work on it.
++
+It is valid for the server to send more objects than requested (even
+for a dynamic object fetch), but we are not building on that.

Footnotes

@@ -282,43 +282,43 @@ Footnotes

  This would essentially be a sorted linear list of OIDs that were
  omitted by the server during a clone or subsequent fetches.

-  This file would need to be loaded into memory on every object lookup.
-  It would need to be read, updated, and re-written (like the .git/index)
-  on every explicit "git fetch" command *and* on any dynamic object fetch.
+This file would need to be loaded into memory on every object lookup.
+It would need to be read, updated, and re-written (like the .git/index)
+on every explicit "git fetch" command *and* on any dynamic object fetch.

-  The cost to read, update, and write this file could add significant
-  overhead to every command if there are many missing objects.  For example,
-  if there are 100M missing blobs, this file would be at least 2GiB on disk.
+The cost to read, update, and write this file could add significant
+overhead to every command if there are many missing objects.  For example,
+if there are 100M missing blobs, this file would be at least 2GiB on disk.

-  With the "promisor" concept, we *infer* a missing object based upon the
-  type of packfile that references it.
+With the "promisor" concept, we *infer* a missing object based upon the
+type of packfile that references it.
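+
+For a sense of scale behind the footnote's 2GiB figure, a quick
+back-of-the-envelope check (assuming 20-byte binary SHA-1 entries; a
+hex-encoded list would be roughly twice as large):
+
+    # 100M OIDs at 20 bytes each, before any indexing overhead:
+    echo $((100000000 * 20))    # 2000000000 bytes, roughly 2GB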
Related Links
-------------

-[0] https://bugs.chromium.org/p/git/issues/detail?id=2
-    Chromium work item for: Partial Clone
+[0] https://crbug.com/git/2
+    Bug#2: Partial Clone

-[1] https://public-inbox.org/git/20170113155253.1644-1-benpeart@microsoft.com/
-    Subject: [RFC] Add support for downloading blobs on demand
+[1] https://public-inbox.org/git/20170113155253.1644-1-benpeart@microsoft.com/
+
+    Subject: [RFC] Add support for downloading blobs on demand
+    Date: Fri, 13 Jan 2017 10:52:53 -0500

-[2] https://public-inbox.org/git/cover.1506714999.git.jonathantanmy@google.com/
-    Subject: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
+[2] https://public-inbox.org/git/cover.1506714999.git.jonathantanmy@google.com/
+
+    Subject: [PATCH 00/18] Partial clone (from clone to lazy fetch in 18 patches)
+    Date: Fri, 29 Sep 2017 13:11:36 -0700

-[3] https://public-inbox.org/git/20170426221346.25337-1-jonathantanmy@google.com/
-    Subject: Proposal for missing blob support in Git repos
+[3] https://public-inbox.org/git/20170426221346.25337-1-jonathantanmy@google.com/
+
+    Subject: Proposal for missing blob support in Git repos
+    Date: Wed, 26 Apr 2017 15:13:46 -0700

-[4] https://public-inbox.org/git/1488999039-37631-1-git-send-email-git@jeffhostetler.com/
-    Subject: [PATCH 00/10] RFC Partial Clone and Fetch
+[4] https://public-inbox.org/git/1488999039-37631-1-git-send-email-git@jeffhostetler.com/
+
+    Subject: [PATCH 00/10] RFC Partial Clone and Fetch
+    Date: Wed, 8 Mar 2017 18:50:29 +0000

-[5] https://public-inbox.org/git/20170505152802.6724-1-benpeart@microsoft.com/
-    Subject: [PATCH v7 00/10] refactor the filter process code into a reusable module
+[5] https://public-inbox.org/git/20170505152802.6724-1-benpeart@microsoft.com/
+
+    Subject: [PATCH v7 00/10] refactor the filter process code into a reusable module
+    Date: Fri, 5 May 2017 11:27:52 -0400

-[6] https://public-inbox.org/git/20170714132651.170708-1-benpeart@microsoft.com/
-    Subject: [RFC/PATCH v2 0/1] Add support for downloading blobs on demand
+[6] https://public-inbox.org/git/20170714132651.170708-1-benpeart@microsoft.com/
+
+    Subject: [RFC/PATCH v2 0/1] Add support for downloading blobs on demand
+    Date: Fri, 14 Jul 2017 09:26:50 -0400

diff --git a/Documentation/technical/protocol-v2.txt b/Documentation/technical/protocol-v2.txt
index 49bda76d23..09e4e0273f 100644
--- a/Documentation/technical/protocol-v2.txt
+++ b/Documentation/technical/protocol-v2.txt
@@ -64,9 +64,8 @@ When using the http:// or https:// transport a client makes a "smart"
 info/refs request as described in `http-protocol.txt` and requests that
 v2 be used by supplying "version=2" in the `Git-Protocol` header.

-    C: Git-Protocol: version=2
-    C:
     C: GET $GIT_URL/info/refs?service=git-upload-pack HTTP/1.0
+    C: Git-Protocol: version=2

 A v2 server would reply:

@@ -299,12 +298,21 @@ included in the client's request:
	for use with partial clone and partial fetch operations.  See
	`rev-list` for possible "filter-spec" values.
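+
+	For illustration, a few "filter-spec" values a client might
+	send (these spellings follow the `rev-list` object filtering
+	options; this is an example list, not an exhaustive one):
+
+	    filter blob:none          # omit all blobs
+	    filter blob:limit=1m      # omit blobs larger than 1 MiB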
+If the 'ref-in-want' feature is advertised, the following argument can
+be included in the client's request, and the 'wanted-refs' section
+described below may then be included in the server's response.
+
+    want-ref <ref>
+	Indicates to the server that the client wants to retrieve a
+	particular ref, where <ref> is the full name of a ref on the
+	server.
+
 The response of `fetch` is broken into a number of sections separated by
 delimiter packets (0001), with each section beginning with its section
 header.

     output = *section
-    section = (acknowledgments | shallow-info | packfile)
+    section = (acknowledgments | shallow-info | wanted-refs | packfile)
	      (flush-pkt | delim-pkt)

     acknowledgments = PKT-LINE("acknowledgments" LF)

@@ -319,6 +327,10 @@ header.
     shallow = "shallow" SP obj-id
     unshallow = "unshallow" SP obj-id

+    wanted-refs = PKT-LINE("wanted-refs" LF)
+		  *PKT-LINE(wanted-ref LF)
+    wanted-ref = obj-id SP refname
+
     packfile = PKT-LINE("packfile" LF)
	       *PKT-LINE(%x01-03 *%x00-ff)

@@ -379,6 +391,19 @@ header.
	* This section is only included if a packfile section is also
	  included in the response.

+    wanted-refs section
+	* This section is only included if the client has requested a
+	  ref using a 'want-ref' line and if a packfile section is also
+	  included in the response.
+
+	* Always begins with the section header "wanted-refs".
+
+	* The server will send a ref listing ("<oid> <refname>") for
+	  each reference requested using 'want-ref' lines.
+
+	* The server MUST NOT send any refs which were not requested
+	  using 'want-ref' lines.
+
     packfile section
	* This section is only included if the client has sent 'want'
	  lines in its request and either requested that no more

diff --git a/Documentation/technical/repository-version.txt b/Documentation/technical/repository-version.txt
index e03eaccebc..7844ef30ff 100644
--- a/Documentation/technical/repository-version.txt
+++ b/Documentation/technical/repository-version.txt
@@ -1,5 +1,4 @@
-Git Repository Format Versions
-==============================
+== Git Repository Format Versions

 Every git repository is marked with a numeric version in the
 `core.repositoryformatversion` key of its `config` file.  This version

@@ -40,16 +39,14 @@ format by default.

 The currently defined format versions are:

-Version `0`
------------
+=== Version `0`

 This is the format defined by the initial version of git, including but
 not limited to the format of the repository directory, the repository
 configuration file, and the object and ref storage.  Specifying the
 complete behavior of git is beyond the scope of this document.

-Version `1`
------------
+=== Version `1`

 This format is identical to version `0`, with the following exceptions:

@@ -74,21 +71,18 @@ it here, in order to claim the name.

 The defined extensions are:

-`noop`
-~~~~~~
+==== `noop`

 This extension does not change git's behavior at all.  It is useful
 only for testing format-1 compatibility.

-`preciousObjects`
-~~~~~~~~~~~~~~~~~
+==== `preciousObjects`

 When the config key `extensions.preciousObjects` is set to `true`,
 objects in the repository MUST NOT be deleted (e.g., by `git-prune` or
 `git repack -d`).

-`partialclone`
-~~~~~~~~~~~~~~
+==== `partialclone`

 When the config key `extensions.partialclone` is set, it indicates
 that the repo was created with a partial clone (or later performed

@@ -98,3 +92,11 @@ and it promises that all such omitted objects can be fetched from
 it in the future.

 The value of this key is the name of the promisor remote.
+
+==== `worktreeConfig`
+
+If set, by default "git config" reads from both the "config" and
+"config.worktree" files in GIT_DIR, in that order.  In multiple
+working directory mode, the "config" file is shared, while
+"config.worktree" is per working directory (i.e., it lives in
+GIT_COMMON_DIR/worktrees/<id>/config.worktree).
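+
+To make the above concrete, a repository created by a partial clone
+using format version 1 might carry a `config` along these lines (a
+sketch using only keys defined above; the exact contents a given Git
+version writes may differ, and "origin" is just the usual remote name):
+
+    [core]
+        repositoryformatversion = 1
+    [extensions]
+        partialclone = origin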
diff --git a/Documentation/technical/rerere.txt b/Documentation/technical/rerere.txt
new file mode 100644
index 0000000000..aa22d7ace8
--- /dev/null
+++ b/Documentation/technical/rerere.txt
@@ -0,0 +1,186 @@
+Rerere
+======
+
+This document describes the rerere logic.
+
+Conflict normalization
+----------------------
+
+To ensure that recorded conflict resolutions can be looked up in the
+rerere database even when branches are merged in a different order,
+when different branches resulting in the same conflict are merged, or
+when different conflict style settings are used, rerere normalizes the
+conflicts before writing them to the rerere database.
+
+Different conflict styles and branch names are normalized by stripping
+the labels from the conflict markers, and removing the common ancestor
+version from the `diff3` conflict style.  Branches that are merged in a
+different order are normalized by sorting the conflict hunks.  More on
+each of those steps in the following sections.
+
+Once these two normalization operations are applied, a conflict ID is
+calculated based on the normalized conflict, which is later used by
+rerere to look up the conflict in the rerere database.
+
+Removing the common ancestor version
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Say we have three branches AB, AC and AC2.  The common ancestor of
+these branches has a file with a line containing the string "A" (for
+brevity this is called "line A" in the rest of the document).  In
+branch AB this line is changed to "B", in AC it is changed to "C", and
+branch AC2 is forked off of AC after the line was changed to "C".
+
+Forking a branch ABAC off of branch AB and then merging AC into it, we
+get a conflict like the following:
+
+    <<<<<<< HEAD
+    B
+    =======
+    C
+    >>>>>>> AC
+
+Doing the analogous merge with AC2 (forking a branch ABAC2 off of
+branch AB and then merging branch AC2 into it), using the diff3
+conflict style, we get a conflict like the following:
+
+    <<<<<<< HEAD
+    B
+    ||||||| merged common ancestors
+    A
+    =======
+    C
+    >>>>>>> AC2
+
+By resolving this conflict to leave line D, the user declares:
+
+    After examining what branches AB and AC did, I believe that making
+    line A into line D is the best thing to do that is compatible with
+    what AB and AC wanted to do.
+
+As branch AC2 refers to the same commit as AC, the above implies that
+this is also compatible with what AB and AC2 wanted to do.
+
+By extension, this means that rerere should recognize that the above
+conflicts are the same.  To do this, the labels on the conflict
+markers are stripped, and the common ancestor version is removed.  The
+above examples would both result in the following normalized conflict:
+
+    <<<<<<<
+    B
+    =======
+    C
+    >>>>>>>
+
+Sorting hunks
+~~~~~~~~~~~~~
+
+As before, let's imagine that a common ancestor had a file with line A
+in its early part, and line X in its late part.  Then four branches
+are forked that do these things:
+
+ - AB: changes A to B
+ - AC: changes A to C
+ - XY: changes X to Y
+ - XZ: changes X to Z
+
+Now, forking a branch ABAC off of branch AB and then merging AC into
+it, and forking a branch ACAB off of branch AC and then merging AB
+into it, would yield the conflict in a different order.  The former
+would say "A became B or C, what now?" while the latter would say "A
+became C or B, what now?"
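+
+For instance (sketched in the same style as the earlier examples),
+merging AC into ABAC gives:
+
+    <<<<<<< HEAD
+    B
+    =======
+    C
+    >>>>>>> AC
+
+while merging AB into ACAB gives the mirror image:
+
+    <<<<<<< HEAD
+    C
+    =======
+    B
+    >>>>>>> AB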
+
+As a reminder, the act of merging AC into ABAC and resolving the
+conflict to leave line D means that the user declares:
+
+    After examining what branches AB and AC did, I believe that
+    making line A into line D is the best thing to do that is
+    compatible with what AB and AC wanted to do.
+
+So the conflict we would see when merging AB into ACAB should be
+resolved the same way---it is the resolution that is in line with that
+declaration.
+
+Similarly, imagine that a branch XYXZ was previously forked from XY,
+XZ was merged into it, and the "X became Y or Z" conflict was resolved
+into "X became W".
+
+Now, if a branch ABXY was forked from AB and then XY was merged into
+it, ABXY would have line B in its early part and line Y in its later
+part.  Such a merge would be quite clean.  We can construct 4
+combinations using these four branches ((AB, AC) x (XY, XZ)).
+
+Merging ABXY and ACXZ would make "an early A became B or C, a late X
+became Y or Z" conflict, while merging ACXY and ABXZ would make "an
+early A became C or B, a late X became Y or Z".  We can see there are
+4 combinations of ("B or C", "C or B") x ("Y or Z", "Z or Y").
+
+By sorting, the conflict is given its canonical name, namely, "an
+early part became B or C, a late part became Y or Z", and whenever
+any of these four patterns appear, we can get to the same conflict
+and resolution that we saw earlier.
+
+Without the sorting, we'd have to search for a previous resolution
+among a combinatorial explosion of equivalent conflicts.
+
+Conflict ID calculation
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Once the conflict normalization is done, the conflict ID is calculated
+as the sha1 hash of the conflict hunks appended to each other,
+separated by <NUL> characters.  The conflict markers are stripped out
+before the sha1 is calculated.  So in the example above, where we
+merge branch AC, which changes line A to line C, into branch AB, which
+changes line A to line B, the conflict ID would be
+SHA1('B<NUL>C<NUL>').
+
+If there are multiple conflicts in one file, the sha1 is calculated
+the same way with all hunks appended to each other, in the order in
+which they appear in the file, separated by a <NUL> character.
+
+Nested conflicts
+~~~~~~~~~~~~~~~~
+
+Nested conflicts are handled very similarly to "simple" conflicts.
+As with simple conflicts, the conflict is first normalized by
+stripping the labels from the conflict markers, stripping the common
+ancestor version, and then sorting the conflict hunks, both for the
+outer and the inner conflict.  This is done recursively, so any number
+of nested conflicts can be handled.
+
+Note that this only works for conflict markers that "cleanly nest".
+If there are any unmatched conflict markers, rerere will fail to
+handle the conflict and will not record a conflict resolution.
+
+The only difference is in how the conflict ID is calculated.  For the
+inner conflict, the conflict markers themselves are not stripped out
+before calculating the sha1.
+
+Say we have the following conflict for example:
+
+    <<<<<<< HEAD
+    1
+    =======
+    <<<<<<< HEAD
+    3
+    =======
+    2
+    >>>>>>> branch-2
+    >>>>>>> branch-3
+
+After stripping out the labels of the conflict markers, and sorting
+the hunks, the conflict would look as follows:
+
+    <<<<<<<
+    1
+    =======
+    <<<<<<<
+    2
+    =======
+    3
+    >>>>>>>
+    >>>>>>>
+
+and finally the conflict ID would be calculated as:
+`sha1('1<NUL><<<<<<<\n2\n=======\n3\n>>>>>>><NUL>')`
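+
+As a rough way to reproduce these IDs from a shell (a sketch, not part
+of rerere itself; it assumes the hunk text is hashed exactly as written
+above, each hunk followed by a NUL byte, and that sha1sum from GNU
+coreutils is available):
+
+    # Outer example from "Conflict ID calculation": hunks "B" and "C".
+    printf 'B\0C\0' | sha1sum
+
+    # Nested example: hunk "1", then the normalized inner conflict
+    # with its markers kept, as described above.
+    printf '1\0<<<<<<<\n2\n=======\n3\n>>>>>>>\0' | sha1sum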