path: root/hash.h
2021-08-15  oidtree: avoid unaligned access to crit-bit tree  (René Scharfe; 1 file changed, -1/+1)
The flexible array member "k" of struct cb_node is used to store the key of the crit-bit tree node. It offers no alignment guarantees -- in fact the current struct layout puts it one byte after a 4-byte aligned address, i.e. guaranteed to be misaligned.

oidtree uses a struct object_id as cb_node key. Since cf0983213c (hash: add an algo member to struct object_id, 2021-04-26) it requires 4-byte alignment. The mismatch is reported by UndefinedBehaviorSanitizer at runtime like this:

    hash.h:277:2: runtime error: member access within misaligned address 0x00015000802d for type 'struct object_id', which requires 4 byte alignment
    0x00015000802d: note: pointer points here
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
    ^
    SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior hash.h:277:2 in

We can fix that by:

1. eliminating the alignment requirement of struct object_id,
2. providing the alignment in struct cb_node, or
3. avoiding the issue by only using memcpy to access "k".

Currently we only store one of two values in "algo" in struct object_id. We could use a uint8_t for that instead and widen it only once we add support for our two-hundredth algorithm or so. That would not only avoid alignment issues, but also reduce the memory requirements for each instance of struct object_id by ca. 9%.

Supporting keys with alignment requirements might be useful to spread the use of crit-bit trees. It can be achieved by using a wider type for "k" (e.g. uintmax_t), using different types for the members "byte" and "otherbits" (e.g. uint16_t or uint32_t for each), or by avoiding the use of flexible arrays like khash.h does.

This patch implements the third option, though, because it has the least potential for causing side-effects and we're close to the next release. If one of the other options is implemented later as well to get their additional benefits we can get rid of the extra copies introduced here.

Reported-by: Andrzej Hunt <andrzej@ahunt.org>
Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
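A minimal sketch of option 3 as described above: never take a struct object_id pointer into the unaligned key, copy through memcpy() instead. The struct layouts here are illustrative stand-ins, not the in-tree definitions.

    #include <string.h>

    struct cb_node {
            struct cb_node *child[2];
            unsigned int byte;       /* index of the critical byte */
            unsigned char otherbits; /* mask for the critical bit */
            unsigned char k[];       /* key bytes; no alignment guarantee */
    };

    struct object_id {
            unsigned char hash[32];
            int algo;                /* 4-byte aligned member */
    };

    /* copy the key out instead of casting n->k to struct object_id * */
    static void get_node_oid(const struct cb_node *n, struct object_id *oid)
    {
            memcpy(oid, n->k, sizeof(*oid));
    }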
2021-07-07  oidcpy_with_padding: constify `src' arg  (Eric Wong; 1 file changed, -1/+1)
As with `oidcpy', the source struct will not be modified and this will allow an upcoming const-correct caller to use it.

Signed-off-by: Eric Wong <e@80x24.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2021-05-18  parallel-checkout: send the new object_id algo field to the workers  (Matheus Tavares; 1 file changed, -0/+16)
An object_id storing a SHA-1 name has some unused bytes at the end of the hash array. Since these bytes are not used, they are usually not initialized to any value either. However, at parallel_checkout.c:send_one_item() the object_id of a cache entry is copied into a buffer which is later sent to a checkout worker through a pipe write(). This makes Valgrind complain about passing uninitialized bytes to a syscall.

The worker won't use these uninitialized bytes either, but the warning could confuse someone trying to debug this code. So instead of using oidcpy(), send_one_item() uses hashcpy() to only copy the used/initialized bytes of the object_id, and leave the remaining part with zeros.

However, since cf0983213c ("hash: add an algo member to struct object_id", 2021-04-26), using hashcpy() is no longer sufficient here as it won't copy the new algo field from the object_id. Let's add and use a new function which meets both our requirements of copying all the important object_id data while still avoiding the uninitialized bytes, by padding the end of the hash array in the destination object_id.

With this change, we also no longer need the destination buffer from send_one_item() to be initialized with zeros, so let's switch from xcalloc() to xmalloc() to make this clear.

Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
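A sketch of the behavior the new function needs, under the constraints described above. GIT_MAX_RAWSZ and the size lookup are simplified stand-ins for the real hash.h machinery.

    #include <stddef.h>
    #include <string.h>

    #define GIT_MAX_RAWSZ 32

    struct object_id {
            unsigned char hash[GIT_MAX_RAWSZ];
            int algo;
    };

    /* hypothetical helper: raw hash size for an algorithm id */
    static size_t algo_raw_size(int algo)
    {
            return algo == 2 ? 32 : 20; /* e.g. SHA-256 vs. SHA-1 */
    }

    static void oidcpy_with_padding(struct object_id *dst,
                                    const struct object_id *src)
    {
            size_t hashsz = algo_raw_size(src->algo);

            memcpy(dst->hash, src->hash, hashsz);  /* initialized bytes only */
            memset(dst->hash + hashsz, 0,
                   GIT_MAX_RAWSZ - hashsz);        /* pad the unused tail */
            dst->algo = src->algo;                 /* the new algo field */
    }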
2021-04-27  hash: provide per-algorithm null OIDs  (brian m. carlson; 1 file changed, -2/+5)
Up until recently, object IDs did not have an algorithm member, only a hash. Consequently, it was possible to share one null (all-zeros) object ID among all hash algorithms. Now that we're going to be handling objects from multiple hash algorithms, it's important to make sure that all object IDs have a correct algorithm field.

Introduce a per-algorithm null OID, and add it to struct hash_algo. Introduce a wrapper function as well, and use it everywhere we used to use the null_oid constant.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
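Sketched, the shape of the change might look like this; only the per-algorithm null_oid member and the wrapper are taken from the description above, the field placement and the extern are assumptions.

    struct object_id {
            unsigned char hash[32];
            int algo;
    };

    struct git_hash_algo {
            const char *name;
            /* ... */
            const struct object_id *null_oid; /* all zeros, algo set */
    };

    /* points at the repository's current algorithm (git convention) */
    extern const struct git_hash_algo *the_hash_algo;

    static inline const struct object_id *null_oid(void)
    {
            return the_hash_algo->null_oid;
    }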
2021-04-27  hash: set, copy, and use algo field in struct object_id  (brian m. carlson; 1 file changed, -6/+34)
Now that struct object_id has an algorithm field, we should populate it. This will allow us to handle object IDs in any supported algorithm and distinguish between them. Ensure that the field is written whenever we write an object ID by storing it explicitly every time we write an object. Set values for the empty blob and tree values as well.

In addition, use the algorithm field to compare object IDs. Note that because we zero-initialize struct object_id in many places throughout the codebase, we default to the default algorithm in cases where the algorithm field is zero rather than explicitly initialize all of those locations.

This leads to a branch on every comparison, but the alternative is to compare the entire buffer each time and pad the buffer for SHA-1. That alternative performs up to 3.9% worse than this approach on the perf tests t0001, t1450, and t1451.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
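The per-comparison branch described above, sketched as a comparison helper; the size lookup and the fallback value are illustrative, not the in-tree code.

    #include <stddef.h>
    #include <string.h>

    struct object_id {
            unsigned char hash[32];
            int algo; /* 0 from zero-initialization means "default" */
    };

    static size_t algo_raw_size(int algo)
    {
            return algo == 2 ? 32 : 20; /* illustrative sizes */
    }

    static inline int oideq(const struct object_id *a,
                            const struct object_id *b)
    {
            /* the branch: a zero algo falls back to the default */
            int algo = a->algo ? a->algo : 1;

            return !memcmp(a->hash, b->hash, algo_raw_size(algo));
    }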
2021-04-27  hash: add a function to finalize object IDs  (brian m. carlson; 1 file changed, -23/+27)
To avoid the penalty of having to branch in hash comparison functions, we'll want to always compare the full hash member in a struct object_id, which will require that SHA-1 object IDs be zero-padded. To do so, add a function which finalizes a hash context and writes it into an object ID that performs this padding.

Move the definition of struct object_id and the constant definitions higher up so they are available for us to use.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
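A stand-in for such a finalizing function, written against OpenSSL's SHA-1 so the example is self-contained; the real code works through git's hash abstraction instead.

    #include <openssl/sha.h>
    #include <string.h>

    #define GIT_MAX_RAWSZ 32

    struct object_id {
            unsigned char hash[GIT_MAX_RAWSZ];
            int algo;
    };

    static void sha1_final_oid(struct object_id *oid, SHA_CTX *ctx)
    {
            SHA1_Final(oid->hash, ctx);
            /* zero-pad so full-width hash compares need no branch */
            memset(oid->hash + SHA_DIGEST_LENGTH, 0,
                   GIT_MAX_RAWSZ - SHA_DIGEST_LENGTH);
            oid->algo = 1; /* illustrative id for SHA-1 */
    }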
2021-04-27  hash: add an algo member to struct object_id  (brian m. carlson; 1 file changed, -0/+1)
Now that we're working with multiple hash algorithms in the same repo, it's best if we label each object ID with its algorithm so we can determine how to format a given object ID. Add a member called algo to struct object_id.

Performance testing on object ID-heavy workloads doesn't reveal a clear change in performance. Out of performance tests t0001 and t1450, there are slight variations in performance both up and down, but all measurements are within the margin of error.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-12-04  cache.h: move hash/oid functions to hash.h  (Jeff King; 1 file changed, -0/+95)
We define git_hash_algo and object_id in hash.h, but most of the utility functions are declared in the main cache.h. Let's move them to hash.h along with their struct definitions. This cleans up cache.h a bit, but also avoids circular dependencies when other headers need to know about these functions (e.g., if oid-array.h were to have an inline that used oideq(), it couldn't include cache.h because it is itself included by cache.h).

No including C files should be affected, because hash.h is always included in cache.h already. We do have to mention repository.h at the top of hash.h, though, since we depend on the_repository in some of our inline functions.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2020-02-24  hash: implement and use a context cloning function  (brian m. carlson; 1 file changed, -0/+21)
For all of our SHA-1 implementations and most of our SHA-256 implementations, the hash context we use is a real struct. For these implementations, it's possible to copy a hash context by making a copy of the struct. However, for our libgcrypt implementation, our hash context is a pointer. Consequently, copying it does not lead to an independent hash context like we intended.

Fortunately, however, libgcrypt provides us with a handy function to copy hash contexts. Let's add a cloning function to the hash algorithm API, and use it in the one place we need to make a hash context copy. With this change, our libgcrypt SHA-256 implementation is fully functional with all of our other hash implementations.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
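Sketch of the hook: for struct-based contexts a plain copy is a valid clone, while a handle-based backend like libgcrypt must go through its own copy routine (gcry_md_copy()). The union layout here is a stand-in.

    #include <string.h>

    union git_hash_ctx {
            unsigned char opaque[256]; /* stand-in for real contexts */
    };
    typedef union git_hash_ctx git_hash_ctx;

    struct git_hash_algo {
            /* ... */
            void (*clone_fn)(git_hash_ctx *dst, const git_hash_ctx *src);
    };

    /* struct-based backends: memberwise copy is an independent clone */
    static void blk_sha1_clone(git_hash_ctx *dst, const git_hash_ctx *src)
    {
            memcpy(dst, src, sizeof(*dst));
    }

    /* the libgcrypt backend would instead wrap gcry_md_copy() */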
2019-06-20  hash.h: move object_id definition from cache.h  (Jeff King; 1 file changed, -0/+24)
Our hashmap.h helpfully defines a sha1hash() function. But it cannot define a similar oidhash() without including all of cache.h, which itself wants to include hashmap.h!

Let's break this circular dependency by moving the definition to hash.h, along with the remaining RAWSZ macros, etc. That will put them with the existing git_hash_algo definition.

One alternative would be to move oidhash() into cache.h, but it's already quite bloated. We're better off moving things out than in.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
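In that spirit, an oidhash() can reuse the leading bytes of the object ID directly, since hash output is already uniformly distributed; a minimal sketch:

    #include <string.h>

    struct object_id {
            unsigned char hash[32];
    };

    static inline unsigned int oidhash(const struct object_id *oid)
    {
            unsigned int hash;

            /* SHA-1/SHA-256 output is uniform; no extra mixing needed */
            memcpy(&hash, oid->hash, sizeof(hash));
            return hash;
    }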
2019-04-01  hash: add a function to lookup hash algorithm by length  (brian m. carlson; 1 file changed, -0/+2)
There are some cases, such as the dumb HTTP transport and bundles, where we can only determine the hash algorithm in use by the length of the object IDs. Provide a function that looks up the algorithm by length.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
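A self-contained sketch of a length-based lookup over a small algorithm table; the enum and table are simplified stand-ins for hash.h's hash_algos.

    #include <stddef.h>

    enum {
            GIT_HASH_UNKNOWN = 0,
            GIT_HASH_SHA1,
            GIT_HASH_SHA256,
            GIT_HASH_NALGOS
    };

    static const struct {
            const char *name;
            size_t rawsz; /* binary size of an object ID */
    } hash_algos[GIT_HASH_NALGOS] = {
            { "unknown", 0 },
            { "sha1", 20 },
            { "sha256", 32 },
    };

    int hash_algo_by_length(size_t len)
    {
            int i;

            for (i = GIT_HASH_SHA1; i < GIT_HASH_NALGOS; i++)
                    if (hash_algos[i].rawsz == len)
                            return i;
            return GIT_HASH_UNKNOWN;
    }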
2018-11-14  hash: add an SHA-256 implementation using OpenSSL  (brian m. carlson; 1 file changed, -0/+2)
We already have OpenSSL routines available for SHA-1, so add routines for SHA-256 as well.

On a Core i7-6600U, this SHA-256 implementation compares favorably to the SHA1DC SHA-1 implementation:

    SHA-1:   157 MiB/s (64 byte chunks); 337 MiB/s (16 KiB chunks)
    SHA-256: 165 MiB/s (64 byte chunks); 408 MiB/s (16 KiB chunks)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-11-14  sha256: add an SHA-256 implementation using libgcrypt  (brian m. carlson; 1 file changed, -0/+4)
Generally, one gets better performance out of cryptographic routines written in assembly than C, and this is also true for SHA-256. In addition, most Linux distributions cannot distribute Git linked against OpenSSL for licensing reasons.

Most systems with GnuPG will also have libgcrypt, since it is a dependency of GnuPG. libgcrypt is also faster than the SHA1DC implementation for messages of a few KiB and larger. For comparison, on a Core i7-6600U, this implementation processes 16 KiB chunks at 355 MiB/s while SHA1DC processes equivalent chunks at 337 MiB/s.

In addition, libgcrypt is licensed under the LGPL 2.1, which is compatible with the GPL. Add an implementation of SHA-256 that uses libgcrypt.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-11-14  Add a base implementation of SHA-256 support  (brian m. carlson; 1 file changed, -1/+18)
SHA-1 is weak and we need to transition to a new hash function. For some time, we have referred to this new function as NewHash. Recently, we decided to pick SHA-256 as NewHash. The reasons behind the choice of SHA-256 are outlined in the thread starting at [1] and in the commit history for the hash function transition document.

Add a basic implementation of SHA-256 based off libtomcrypt, which is in the public domain. Optimize it and restructure it to meet our coding standards. Pull in the update and final functions from the SHA-1 block implementation, as we know these function correctly with all compilers. This implementation is slower than SHA-1, but more performant implementations will be introduced in future commits.

Wire up SHA-256 in the list of hash algorithms, and add a test that the algorithm works correctly. Note that with this patch, it is still not possible to switch to using SHA-256 in Git. Additional patches are needed to prepare the code to handle a larger hash algorithm, and further test fixes are needed.

[1] https://public-inbox.org/git/20180609224913.GC38834@genre.crustytoothpaste.net/

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-11-14  sha1-file: add a constant for hash block size  (brian m. carlson; 1 file changed, -0/+3)
There is one place we need the hash algorithm block size: the HMAC code for push certs. Expose this constant in struct git_hash_algo and expose values for SHA-1 and for the largest value of any hash.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
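Sketched, the addition amounts to a blksz member plus a maximum for sizing stack buffers. The values shown are the well-known block sizes of these algorithms, used here as assumptions of the example.

    #include <stddef.h>

    /* SHA-1 and SHA-256 both process 64-byte blocks */
    #define GIT_MAX_BLKSZ 64

    struct git_hash_algo {
            /* ... */
            size_t blksz; /* internal block size, needed by HMAC */
    };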
2018-10-22  sha1-file: provide functions to look up hash algorithms  (brian m. carlson; 1 file changed, -0/+13)
There are several ways we might refer to a hash algorithm: by name, such as in the config file; by format ID, such as in a pack; or internally, by a pointer to the hash_algos array.

Provide functions to look up hash algorithms based on these various forms and return the internal constant used for them. If conversion to another form is necessary, this internal constant can be used to look up the proper data in the hash_algos array.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
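A sketch of the name and format-ID lookups over a reduced table; the format IDs spell out the ASCII bytes "sha1" and "s256" as in the pack format, and the table layout is an assumption of this example.

    #include <stdint.h>
    #include <string.h>

    enum {
            GIT_HASH_UNKNOWN = 0,
            GIT_HASH_SHA1,
            GIT_HASH_SHA256,
            GIT_HASH_NALGOS
    };

    static const struct {
            const char *name;
            uint32_t format_id;
    } hash_algos[GIT_HASH_NALGOS] = {
            { "unknown", 0 },
            { "sha1",   0x73686131 }, /* "sha1" */
            { "sha256", 0x73323536 }, /* "s256" */
    };

    int hash_algo_by_name(const char *name)
    {
            int i;

            if (!name)
                    return GIT_HASH_UNKNOWN;
            for (i = GIT_HASH_SHA1; i < GIT_HASH_NALGOS; i++)
                    if (!strcmp(name, hash_algos[i].name))
                            return i;
            return GIT_HASH_UNKNOWN;
    }

    int hash_algo_by_id(uint32_t format_id)
    {
            int i;

            for (i = GIT_HASH_SHA1; i < GIT_HASH_NALGOS; i++)
                    if (hash_algos[i].format_id == format_id)
                            return i;
            return GIT_HASH_UNKNOWN;
    }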
2018-02-09  hash: update obsolete reference to SHA1_HEADER  (brian m. carlson; 1 file changed, -2/+2)
We moved away from SHA1_HEADER to a preprocessor if chain, but didn't update the comment discussing the platform defines. Update this comment so it reflects the current state of our codebase.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-02  hash: create union for hash context allocation  (brian m. carlson; 1 file changed, -6/+9)
In various parts of our code, we want to allocate a structure representing the internal state of a hash algorithm. The original implementation of the hash algorithm abstraction assumed we would do that using heap allocations, and added a context size element to struct git_hash_algo. However, most of the existing code uses stack allocations and conversion would needlessly complicate various parts of the code.

Add a union for the purpose of allocating hash contexts on the stack and a typedef for ease of use. Use this union for defining the init, update, and final functions to avoid casts. Remove the ctxsz element from struct git_hash_algo, which is no longer very useful.

This does mean that stack allocations will grow slightly as additional hash functions are added, but this should not be a significant problem, since we don't allocate many hash contexts. The improved usability and benefits from avoiding dynamic allocation outweigh this small downside.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
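Sketch of the union and the typedefs built on it; the SHA-1 context type is an opaque stand-in so the example stands alone.

    #include <stddef.h>

    typedef struct { unsigned char state[96]; } git_SHA_CTX; /* stand-in */

    /* stack-allocatable context for any supported algorithm */
    union git_hash_ctx {
            git_SHA_CTX sha1;
            /* later implementations add their members here */
    };
    typedef union git_hash_ctx git_hash_ctx;

    /* defining the API against the union avoids casts at call sites */
    typedef void (*git_hash_init_fn)(git_hash_ctx *ctx);
    typedef void (*git_hash_update_fn)(git_hash_ctx *ctx,
                                       const void *in, size_t len);
    typedef void (*git_hash_final_fn)(unsigned char *hash,
                                      git_hash_ctx *ctx);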
2018-02-02  hash: move SHA-1 macros to hash.h  (brian m. carlson; 1 file changed, -0/+25)
Most of the other code dealing with SHA-1 and other hashes is located in hash.h, which is in turn loaded by cache.h. Move the SHA-1 macros to hash.h as well, so we can use them in additional hash-related items in the future.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-13  Add structure representing hash algorithm  (brian m. carlson; 1 file changed, -0/+57)
Since in the future we want to support an additional hash algorithm, add a structure that represents a hash algorithm and all the data that must go along with it. Add a constant to allow easy enumeration of hash algorithms. Implement function typedefs to create an abstract API that can be used by any hash algorithm, and wrappers for the existing SHA1 functions that conform to this API.

Expose a value for hex size as well as binary size. While one will always be twice the other, the two values are both used extremely commonly throughout the codebase and providing both leads to improved readability.

Don't include an entry in the hash algorithm structure for the null object ID. As this value is all zeros, any suitably sized all-zero object ID can be used, and there's no need to store a given one on a per-hash basis.

The current hash function transition plan envisions a time when we will accept input from the user that might be in SHA-1 or in the NewHash format. Since we cannot know which the user has provided, add a constant representing the unknown algorithm to allow us to indicate that we must look the correct value up. Provide dummy API functions that die in this case.

Finally, include git-compat-util.h in hash.h so that the required types are available. This aids people using automated tools in their editors.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
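Condensed to its data members, a sketch of the structure from the description above; the function-pointer members are elided and the exact field names are assumptions based on later versions of hash.h.

    #include <stddef.h>
    #include <stdint.h>

    enum {
            GIT_HASH_UNKNOWN = 0, /* user input whose algorithm we must look up */
            GIT_HASH_SHA1,
            GIT_HASH_NALGOS       /* allows easy enumeration */
    };

    struct git_hash_algo {
            const char *name;   /* name of the algorithm */
            uint32_t format_id; /* format id, as it appears on disk */
            size_t rawsz;       /* binary hash size */
            size_t hexsz;       /* hex hash size (always 2 * rawsz) */
            /* init_fn, update_fn, final_fn: the abstract hashing API */
    };

    /* no null_oid member: any suitably sized all-zero buffer serves */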
2017-08-16  sha1dc: build git plumbing code more explicitly  (Takashi Iwai; 1 file changed, -5/+1)
The plumbing code between sha1dc and git is defined in sha1dc_git.[ch], but these aren't compiled / included directly, only via the indirect inclusion from sha1dc code. This is slightly confusing when you try to trace the build flow.

This patch brings the following changes for simplification:

- Make sha1dc_git.c stand-alone and build from Makefile
- sha1dc_git.h is the common header to include further sha1.h depending on the build condition
- Move comments for plumbing codes from the header to definitions

This is also meant as a preliminary work for further plumbing with external sha1dc shlib.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-07-03  sha1dc: optionally use sha1collisiondetection as a submodule  (Ævar Arnfjörð Bjarmason; 1 file changed, -0/+4)
Add an option to use the sha1collisiondetection library from the submodule in sha1collisiondetection/ instead of the copy in the sha1dc/ directory.

This allows us to try out the submodule in sha1collisiondetection without breaking the build for anyone who's not expecting them as we work out any kinks.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-17  Makefile: add DC_SHA1 knob  (Jeff King; 1 file changed, -0/+2)
This knob lets you use the sha1dc implementation from:

    https://github.com/cr-marcstevens/sha1collisiondetection

which can detect certain types of collision attacks (even when we only see half of the colliding pair). So it mitigates any attack which consists of getting the "good" half of a collision into a trusted repository, and then later replacing it with the "bad" half. The "good" half is rejected by the victim's version of Git (and even if they run an old version of Git, any sha1dc-enabled git will complain loudly if it ever has to interact with the object).

The big downside is that it's slower than either the openssl or block-sha1 implementations. Here are some timings based off of linux.git:

    - compute sha1 over whole packfile
        sha1dc:   3.580s
        blk-sha1: 2.046s (-43%)
        openssl:  1.335s (-62%)

    - rev-list --all --objects
        sha1dc:   33.512s
        blk-sha1: 33.514s (+0.0%)
        openssl:  33.650s (+0.4%)

    - git log --no-merges -10000 -p
        sha1dc:   8.124s
        blk-sha1: 7.986s (-1.6%)
        openssl:  8.203s (+0.9%)

    - index-pack --verify
        sha1dc:   4m19s
        blk-sha1: 2m57s (-32%)
        openssl:  2m19s (-42%)

So overall the sha1 computation with collision detection is about 1.75x slower than block-sha1, and 2.7x slower than openssl. But of course most operations do more than just sha1. Normal object access isn't really slowed at all (both the +/- changes there are well within the run-to-run noise); any changes are drowned out by the other work Git is doing.

The most-affected operation is `index-pack --verify`, which is essentially just computing the sha1 on every object. This is similar to the `index-pack` invocation that the receiver of a push or fetch would perform. So clearly there's some extra CPU load here. There will also be some latency for the user, though keep in mind that such an operation will generally be network bound (this is about a 1.2GB packfile).

Some of that extra CPU is "free" in the sense that we use it while the pack is streaming in anyway. But most of it comes during the delta-resolution phase, after the whole pack has been received. So we can imagine that for this (quite large) push, the user might have to wait an extra 100 seconds over openssl (which is what we use now). If we assume they can push to us at 20Mbit/s, that's 480s for a 1.2GB pack, which is only 20% slower.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-15  hash.h: move SHA-1 implementation selection into a header file  (brian m. carlson; 1 file changed, -0/+14)
Many developers use functionality in their editors that allows for quick syntax checks, including warning about questionable constructs. This functionality allows rapid development with fewer errors. However, such functionality generally does not allow the specification of project-specific defines or command-line options.

Since the SHA1_HEADER include is not defined in such a case, developers see spurious errors when using these tools. Furthermore, there are known implementations of "cc" whose '#include' is unhappy with this construct.

Instead of using SHA1_HEADER, create a hash.h header and use #if and #elif to select the desired header. Have the Makefile pass an appropriate option to help the header select the right implementation to use.

[jc: make BLK_SHA1 the fallback default as discussed on list, e.g. <20170314201424.vccij5z2ortq4a4o@sigill.intra.peff.net>; also remove SHA1_HEADER and SHA1_HEADER_SQ that are no longer used].

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
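The selection chain looks approximately like this; the macro names match the Makefile knobs of that series, with BLK_SHA1 as the fallback default, and the exact set of branches is an assumption of this sketch (SHA1_DC joined the chain with the DC_SHA1 knob shortly after).

    #if defined(SHA1_PPC)
    #include "ppc/sha1.h"
    #elif defined(SHA1_APPLE)
    #include <CommonCrypto/CommonDigest.h>
    #elif defined(SHA1_OPENSSL)
    #include <openssl/sha.h>
    #else /* SHA1_BLK, the fallback default */
    #include "block-sha1/sha1.h"
    #endif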
2013-11-18  remove old hash.[ch] implementation  (Karsten Blees; 1 file changed, -50/+0)
Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2013-03-16  Preallocate hash tables when the number of inserts are known in advance  (Nguyễn Thái Ngọc Duy; 1 file changed, -0/+7)
This avoids unnecessary re-allocations and reinsertions. On webkit.git (i.e. about 182k inserts to the name hash table), this reduces about 100ms out of 3s user time.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-02-18  for_each_hash: allow passing a 'void *data' pointer to callback  (Linus Torvalds; 1 file changed, -1/+1)
For the find_exact_renames() function, this allows us to pass the diff_options structure pointer to the low-level routines. We will use that to distinguish between the "rename" and "copy" cases.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-03-09  Add 'const' where appropriate to index handling functions  (Linus Torvalds; 1 file changed, -2/+2)
This is in an effort to mark the source index of 'unpack_trees()' as const, and thus make the compiler help us verify that we only access it for reading. The constification also extended to some of the hashing helpers that get called indirectly.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-10-26  Do linear-time/space rename logic for exact renames  (Linus Torvalds; 1 file changed, -0/+43)
This implements a smarter rename detector for exact renames, which rather than doing a pairwise comparison (time O(m*n)) will just hash the files into a hash-table (size O(n+m)), and only do pairwise comparisons to renames that have the same hash (time O(n+m) except for unrealistic hash collisions, which we just cull aggressively).

Admittedly the exact rename case is not nearly as interesting as the generic case, but it's an important case nonetheless. A similar general approach should work for the generic case too, but even then you do need to handle the exact renames/copies separately (to avoid the inevitable added cost factor that comes from the _size_ of the file), so this is worth doing.

In the expectation that we will indeed do the same hashing trick for the general rename case, this code uses a generic hash-table implementation that can be used for other things too. In fact, we might be able to consolidate some of our existing hash tables with the new generic code in hash.[ch].

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
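A sketch of the bucketing idea in generic-hash-table terms; the types and helper are illustrative, not the code added in hash.[ch].

    /* exact renames: bucket files by content hash, then compare only
     * same-bucket candidates, for O(n+m) expected time */
    struct file_entry {
            unsigned int hash;       /* hash of the file contents */
            const char *path;
            int is_source;           /* deleted side vs. added side */
            struct file_entry *next; /* chain within one bucket */
    };

    static void hash_insert(struct file_entry **table,
                            unsigned int nbuckets, /* power of two */
                            struct file_entry *e)
    {
            unsigned int b = e->hash & (nbuckets - 1);

            e->next = table[b];
            table[b] = e;
    }

    /* a rename is reported when a bucket pairs a source with a
     * destination whose contents compare equal, culling the rare
     * collisions the message mentions */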