path: root/name-hash.c
2021-04-30  Merge branch 'ds/sparse-index-protections'  (Junio C Hamano; 1 file, -2/+9)

Builds on top of the sparse-index infrastructure to mark operations that are not yet ready to work with the sparse index, causing them to fall back on the fully-populated index that they have always worked with.

* ds/sparse-index-protections: (47 commits)
  name-hash: use expand_to_path()
  sparse-index: expand_to_path()
  name-hash: don't add directories to name_hash
  revision: ensure full index
  resolve-undo: ensure full index
  read-cache: ensure full index
  pathspec: ensure full index
  merge-recursive: ensure full index
  entry: ensure full index
  dir: ensure full index
  update-index: ensure full index
  stash: ensure full index
  rm: ensure full index
  merge-index: ensure full index
  ls-files: ensure full index
  grep: ensure full index
  fsck: ensure full index
  difftool: ensure full index
  commit: ensure full index
  checkout: ensure full index
  ...

2021-04-14  name-hash: use expand_to_path()  (Derrick Stolee; 1 file, -0/+4)

A sparse-index loads the name-hash data for its entries, including the sparse-directory entries. If a caller asks for a path that is contained within a sparse-directory entry, we need to expand to a full index and recalculate the name hash table before returning the result.

Insert calls to expand_to_path() to protect against this case.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

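A rough sketch of what such a guard might look like at the top of a name-hash lookup; the wrapper function and the exact expand_to_path() signature shown here are assumptions for illustration, not the in-tree hunk:

    static struct cache_entry *find_file_entry(struct index_state *istate,
                                               const char *path, int pathlen)
    {
        /* If `path` falls inside a sparse-directory entry, expand the
         * sparse index to a full one so the name-hash tables cover it. */
        expand_to_path(istate, path, pathlen, ignore_case);

        lazy_init_name_hash(istate);
        return index_file_exists(istate, path, pathlen, ignore_case);
    }
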
2021-04-14  name-hash: don't add directories to name_hash  (Derrick Stolee; 1 file, -2/+5)

Sparse directory entries represent a directory that is outside the sparse-checkout definition. These are not paths to blobs, so should not be added to the name_hash table. Instead, they should be added to the directory hashtable when 'ignore_case' is true.

Add a condition to avoid placing sparse directories into the name_hash hashtable. This avoids filling the table with extra entries that will never be queried.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Reviewed-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2021-03-13  use CALLOC_ARRAY  (René Scharfe; 1 file, -4/+4)

Add and apply a semantic patch for converting code that open-codes CALLOC_ARRAY to use it instead. It shortens the code and infers the element size automatically.

Signed-off-by: René Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

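For illustration, the kind of conversion the semantic patch performs looks roughly like this (variable names made up; CALLOC_ARRAY expands to xcalloc() with the element size taken from the pointer):

    struct lazy_entry *entries;

    /* open-coded: element size spelled out by hand */
    entries = xcalloc(cache_nr, sizeof(struct lazy_entry));

    /* with the macro: element size inferred from `entries` itself */
    CALLOC_ARRAY(entries, cache_nr);
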
2021-01-23  name-hash: use trace2 regions for init  (Derrick Stolee; 1 file, -0/+3)

The lazy_init_name_hash() populates a hashset with all filenames and another with all directories represented in the index. This is run only if we need to use the hashsets to check for existence or case-folding renames.

Place trace2 regions where there is already a performance trace.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

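Roughly, the added instrumentation brackets the initialization in a trace2 region like the sketch below (the category/label strings are illustrative):

    trace2_region_enter("index", "name-hash-init", istate->repo);
    /* ... populate istate->name_hash and istate->dir_hash ... */
    trace2_region_leave("index", "name-hash-init", istate->repo);
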
2020-11-02  hashmap: provide deallocation function names  (Elijah Newren; 1 file, -2/+2)

hashmap_free(), hashmap_free_entries(), and hashmap_free_() have existed for a while, but aren't necessarily the clearest names, especially with hashmap_partial_clear() being added to the mix and lazy-initialization now being supported. Peff suggested we adopt the following names[1]:

  - hashmap_clear() - remove all entries and de-allocate any hashmap-specific data, but be ready for reuse
  - hashmap_clear_and_free() - ditto, but free the entries themselves
  - hashmap_partial_clear() - remove all entries but don't deallocate table
  - hashmap_partial_clear_and_free() - ditto, but free the entries

This patch provides the new names and converts all existing callers over to the new naming scheme.

[1] https://lore.kernel.org/git/20201030125059.GA3277724@coredump.intra.peff.net/

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-12-01  Merge branch 'en/doc-typofix'  (Junio C Hamano; 1 file, -1/+1)

Docfix.

* en/doc-typofix:
  Fix spelling errors in no-longer-updated-from-upstream modules
  multimail: fix a few simple spelling errors
  sha1dc: fix trivial comment spelling error
  Fix spelling errors in test commands
  Fix spelling errors in messages shown to users
  Fix spelling errors in names of tests
  Fix spelling errors in comments of testcases
  Fix spelling errors in code comments
  Fix spelling errors in documentation outside of Documentation/
  Documentation: fix a bunch of typos, both old and new

2019-11-10  Fix spelling errors in code comments  (Elijah Newren; 1 file, -1/+1)

Reported-by: Jens Schleusener <Jens.Schleusener@fossies.org>
Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-11-07  name-hash.c: remove duplicate word in comment  (Elijah Newren; 1 file, -1/+1)

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap: remove type arg from hashmap_{get,put,remove}_entry  (Eric Wong; 1 file, -2/+1)

Since these macros already take a `keyvar' pointer of a known type, we can rely on OFFSETOF_VAR to get the correct offset without relying on non-portable `__typeof__' and `offsetof'.

Argument order is also rearranged, so `keyvar' and `member' are sequential as they are used as: `keyvar->member'

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  OFFSETOF_VAR macro to simplify hashmap iterators  (Eric Wong; 1 file, -2/+1)

While we cannot rely on a `__typeof__' operator being portable to use with `offsetof', we can calculate the pointer offset using an existing pointer and the address of a member using pointer arithmetic for compilers without `__typeof__'.

This allows us to simplify usage of hashmap iterator macros by not having to specify a type when a pointer of that type is already given.

In the future, list iterator macros (e.g. list_for_each_entry) may also be implemented using OFFSETOF_VAR to save hackers the trouble of using container_of/list_entry macros and without relying on non-portable `__typeof__'.

v3: use `__typeof__' to avoid clang warnings

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap: introduce hashmap_free_entries  (Eric Wong; 1 file, -2/+2)

`hashmap_free_entries' behaves like `container_of' and passes the offset of the hashmap_entry struct to the internal `hashmap_free_' function, allowing the function to free any struct pointer regardless of where the hashmap_entry field is located.

`hashmap_free' no longer takes any arguments aside from the hashmap itself.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_cmp_fn takes hashmap_entry params  (Eric Wong; 1 file, -8/+13)

Another step in eliminating the requirement of hashmap_entry being the first member of a struct.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_get{,_from_hash} return "struct hashmap_entry *"  (Eric Wong; 1 file, -1/+2)

Update callers to use hashmap_get_entry, hashmap_get_entry_from_hash or container_of as appropriate.

This is another step towards eliminating the requirement of hashmap_entry being the first field in a struct.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap: use *_entry APIs to wrap container_of  (Eric Wong; 1 file, -6/+5)

Using `container_of' can be verbose and choosing names for intermediate "struct hashmap_entry" pointers is a hard problem. So introduce "*_entry" APIs inspired by similar linked-list APIs in the Linux kernel.

Unfortunately, `__typeof__' is not portable C, so we need an extra parameter to specify the type.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_get_next returns "struct hashmap_entry *"  (Eric Wong; 1 file, -3/+5)

This is a step towards removing the requirement for hashmap_entry being the first field of a struct.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_remove takes "const struct hashmap_entry *"  (Eric Wong; 1 file, -2/+2)

This is less error-prone than "const void *" as the compiler now detects invalid types being passed.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_get takes "const struct hashmap_entry *"  (Eric Wong; 1 file, -1/+1)

This is less error-prone than "const void *" as the compiler now detects invalid types being passed.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_add takes "struct hashmap_entry *"  (Eric Wong; 1 file, -4/+4)

This is less error-prone than "void *" as the compiler now detects invalid types being passed.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_get_next takes "const struct hashmap_entry *"  (Eric Wong; 1 file, -1/+1)

This is less error-prone than "const void *" as the compiler now detects invalid types being passed.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-10-07  hashmap_entry_init takes "struct hashmap_entry *"  (Eric Wong; 1 file, -5/+5)

C compilers do type checking to make life easier for us. So rely on that and update all hashmap_entry_init callers to take "struct hashmap_entry *" to avoid future bugs while improving safety and readability.

Signed-off-by: Eric Wong <e@80x24.org>
Reviewed-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2019-06-13  cleanup: fix possible overflow errors in binary search, part 2  (René Scharfe; 1 file, -1/+2)

Calculating the sum of two array indexes to find the midpoint between them can overflow, i.e. code like this is unsafe for big arrays:

    mid = (first + last) >> 1;

Make sure the intermediate value stays within the boundaries instead, like this:

    mid = first + ((last - first) >> 1);

The loop condition of the binary search makes sure that 'last' is always greater than 'first', so this is safe as long as 'first' is not negative. And that can be verified easily using the pre-context of each change, except for name-hash.c, so add an assertion to that effect there.

The unsafe calculations were found with:

    git grep '(.*+.*) *>> *1'

This is a continuation of 19716b21a4 (cleanup: fix possible overflow errors in binary search, 2017-10-08).

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

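As a self-contained illustration (not the in-tree code), a binary search using the overflow-safe midpoint looks like this:

    /* Sketch: overflow-safe binary search over a sorted array of strings. */
    static int find_entry(char **names, int nr, const char *key)
    {
        int first = 0, last = nr;

        while (last > first) {
            /* avoids overflow of (first + last) for large arrays */
            int mid = first + ((last - first) >> 1);
            int cmp = strcmp(key, names[mid]);

            if (!cmp)
                return mid;      /* found */
            if (cmp < 0)
                last = mid;      /* continue in lower half */
            else
                first = mid + 1; /* continue in upper half */
        }
        return -1; /* not found */
    }
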
2019-01-24  cache.h: flip NO_THE_INDEX_COMPATIBILITY_MACROS switch  (Nguyễn Thái Ngọc Duy; 1 file, -1/+0)

By default, index compat macros are off from now on, because they could hide the_index dependency. Only those in builtin can use it.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-11-05  Clean up pthread_create() error handling  (Nguyễn Thái Ngọc Duy; 1 file, -6/+10)

Normally pthread_create() rarely fails. But with the new pthreads wrapper, pthread_create() will return ENOSYS on a system without thread support. Threaded code _is_ protected by HAVE_THREADS and pthread_create() should never run in the first place. But the situation could change in the future and bugs may sneak in. Make sure that all pthread_create() calls report the error cause.

While at it, mark these strings for translation if they aren't already.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

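The reporting pattern is roughly the one below (a sketch; the thread-data struct and the exact message strings may differ from the patch):

    int err = pthread_create(&td->pthread, NULL, lazy_dir_thread_proc, td);
    if (err)
        die(_("unable to create lazy_dir thread: %s"), strerror(err));
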
2018-11-05  name-hash.c: remove #ifdef NO_PTHREADS  (Nguyễn Thái Ngọc Duy; 1 file, -18/+4)

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-08-18  trace.h: support nested performance tracing  (Nguyễn Thái Ngọc Duy; 1 file, -2/+2)

Performance measurements are listed right now as a flat list, which is fine when we measure big blocks. But when we start adding more and more measurements, some of them could be just part of a bigger measurement and a flat list gives a wrong impression that they are executed at the same level instead of nested.

Add trace_performance_enter() and trace_performance_leave() to allow indenting these nested measurements. For now it does not help much because the only nested thing is (lazy) name hash initialization (e.g. called in diff-index from "git status"). This will help more because I'm going to add some more tracing that's actually nested.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

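Usage follows an enter/leave bracket; a minimal sketch assuming the API added here, with the leave call taking a printf-style label:

    trace_performance_enter();
    lazy_init_name_hash(istate);            /* the nested, measured work */
    trace_performance_leave("initialize name hash");
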
2018-02-27  Merge branch 'bp/name-hash-dirname-fix'  (Junio C Hamano; 1 file, -3/+3)

"git add"-ing files in the same directory, but spelling the directory path in different cases on a case insensitive filesystem, corrupted the name hash data structure and led to unexpected results. This has been corrected.

* bp/name-hash-dirname-fix:
  name-hash: properly fold directory names in adjust_dirname_case()

2018-02-08  name-hash: properly fold directory names in adjust_dirname_case()  (Ben Peart; 1 file, -3/+3)

Correct the pointer arithmetic in adjust_dirname_case() so that it calls find_dir_entry() with the correct string length. Previously passing in "dir1/foo" would pass a length of 6 instead of the correct 4. This resulted in find_dir_entry() never finding the entry and so the subsequent memcpy that would fold the name to the version with the correct case never executed.

Add a test to validate the corrected behavior with name folding of directories.

Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2018-02-02  trace: measure where the time is spent in the index-heavy operations  (Nguyễn Thái Ngọc Duy; 1 file, -0/+3)

All the known heavy code blocks are measured (except object database access). This should help identify if an optimization is effective or not.

An unoptimized git-status would give something like below:

    0.001791141 s: read cache ...
    0.004011363 s: preload index
    0.000516161 s: refresh index
    0.003139257 s: git command: ... 'status' '--porcelain=2'
    0.006788129 s: diff-files
    0.002090267 s: diff-index
    0.001885735 s: initialize name hash
    0.032013138 s: read directory
    0.051781209 s: git command: './git' 'status'

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-09-07  hashmap: add API to disable item counting when threaded  (Jeff Hostetler; 1 file, -2/+8)

This is to address concerns raised by ThreadSanitizer on the mailing list about threaded unprotected R/W access to map.size with my previous "disallow rehash" change (0607e10009ee4e37cb49b4cec8d28a9dda1656a4).

See: https://public-inbox.org/git/adb37b70139fd1e2bac18bfd22c8b96683ae18eb.1502780344.git.martin.agren@gmail.com/

Add API to hashmap to disable item counting and thus automatic rehashing. Also include API to later re-enable them.

When item counting is disabled, the map.size field is invalid. So to prevent accidents, the field has been renamed and an accessor function hashmap_get_size() has been added. All direct references to this field have been updated, and the name of the field changed to map.private_size to communicate this.

Here is the relevant output from ThreadSanitizer showing the problem:

    WARNING: ThreadSanitizer: data race (pid=10554)

    Read of size 4 at 0x00000082d488 by thread T2 (mutexes: write M16):
      #0 hashmap_add hashmap.c:209
      #1 hash_dir_entry_with_parent_and_prefix name-hash.c:302
      #2 handle_range_dir name-hash.c:347
      #3 handle_range_1 name-hash.c:415
      #4 lazy_dir_thread_proc name-hash.c:471
      #5 <null> <null>

    Previous write of size 4 at 0x00000082d488 by thread T1 (mutexes: write M31):
      #0 hashmap_add hashmap.c:209
      #1 hash_dir_entry_with_parent_and_prefix name-hash.c:302
      #2 handle_range_dir name-hash.c:347
      #3 handle_range_1 name-hash.c:415
      #4 handle_range_dir name-hash.c:380
      #5 handle_range_1 name-hash.c:415
      #6 lazy_dir_thread_proc name-hash.c:471
      #7 <null> <null>

Martin gives instructions for running TSan on test t3008 in this post:

    https://public-inbox.org/git/CAN0heSoJDL9pWELD6ciLTmWf-a=oyxe4EXXOmCKvsG5MSuzxsA@mail.gmail.com/

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

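In the threaded name-hash build, the bracketing would look roughly like this sketch (assuming the hashmap_disable_item_counting(), hashmap_enable_item_counting() and hashmap_get_size() names described above):

    /* while multiple threads insert: no shared size bookkeeping, no rehash */
    hashmap_disable_item_counting(&istate->dir_hash);
    /* ... worker threads call hashmap_add() on their partitions ... */
    hashmap_enable_item_counting(&istate->dir_hash);

    /* reads of the count now go through the accessor */
    fprintf(stderr, "dir entries: %u\n", hashmap_get_size(&istate->dir_hash));
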
2017-07-05  name-hash.c: drop hashmap_cmp_fn cast  (Stefan Beller; 1 file, -9/+13)

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-06-30  hashmap.h: compare function has access to a data field  (Stefan Beller; 1 file, -6/+10)

When using the hashmap a common need is to have access to caller provided data in the compare function. A couple of times we abuse the keydata field to pass in the data needed. This happens for example in patch-ids.c.

This patch changes the function signature of the compare function to have one more void pointer available. The pointer given for each invocation of the compare function must be defined in the init function of the hashmap and is just passed through.

Documentation of this new feature is deferred to a later patch.

This is a rather mechanical conversion, just adding the new pass-through parameter. However while at it improve the naming of the fields of all compare functions used by hashmaps by ensuring unused parameters are prefixed with 'unused_' and naming the parameters what they are (instead of 'unused' make it 'unused_keydata').

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

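A compare function under the new signature, with the caller data handed over at init time, might look like this sketch (struct path_entry and struct cmp_options are made up for illustration):

    struct cmp_options { int icase; };

    static int path_entry_cmp(const void *cmp_data,
                              const void *entry, const void *entry_or_key,
                              const void *unused_keydata)
    {
        const struct cmp_options *opts = cmp_data;   /* pass-through data */
        const struct path_entry *a = entry;
        const struct path_entry *b = entry_or_key;

        return opts->icase ? strcasecmp(a->path, b->path)
                           : strcmp(a->path, b->path);
    }

    /* the pointer is supplied once, at init time, and forwarded on
     * every invocation of the compare function */
    hashmap_init(&map, path_entry_cmp, &opts, 0);
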
2017-03-31  name-hash: fix buffer overrun  (Kevin Willford; 1 file, -1/+3)

Add a check for the end of the entries for the thread partition. Add a test for lazy init name hash with a specific directory structure.

The lazy init name hash was causing a buffer overflow when the last entry in the index was multiple folders deep, with parent folders that did not have any files in them. This adds a test for the boundary condition of the thread partitions with the folder structure that was triggering the buffer overflow.

The fix was to check if it is the last entry for the thread partition in handle_range_dir() and not try to use the next entry in the cache.

Signed-off-by: Kevin Willford <kewillf@microsoft.com>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-03-24  name-hash: perf improvement for lazy_init_name_hash  (Jeff Hostetler; 1 file, -7/+485)

Improve performance of lazy_init_name_hash() when ignore_case is set. Teach name-hash to build the istate.name_hash and istate.dir_hash simultaneously using a forward-diving technique on the pathname of the index_entry, rather than adding name_hash entries and then searching backwards in the pathname for parent directories.

This borrows algorithm ideas from clear_ce_flags_{1,dir}.

Multiple threads are used with the new algorithm to speed hashmap construction. This new code path is only used when threads are present (a compiler setting) and when the index is large enough to warrant the pthread complexity.

The code in clear_ce_flags_dir() uses a linear search to find the adjacent index entries with the same prefix; a binary search is used here in handle_range_dir() to further speed things up.

The size of LAZY_THREAD_COST was determined from rough analysis using:

    t/helper/test-lazy-init-name-hash --analyze

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2017-03-22  name-hash: specify initial size for istate.dir_hash table  (Jeff Hostetler; 1 file, -1/+2)

Specify an initial size for the istate.dir_hash HashMap matching the size of the istate.name_hash.

Previously hashmap_init() was given 0, causing a 64 bucket hashmap to be created. When working with very large repositories, this would cause numerous rehash() calls to realloc and rebalance the hashmap. This is especially true when the worktree is deep, with many directories containing a few files.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2016-02-22  convert trivial cases to FLEX_ARRAY macros  (Jeff King; 1 file, -2/+1)

Using FLEX_ARRAY macros reduces the amount of manual size computation we have to do. It also ensures we don't overflow size_t, and it makes sure we write the same number of bytes that we allocated.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

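A typical conversion of this kind looks roughly like the following sketch (assuming a struct whose last member is a flexible `name[FLEX_ARRAY]` array):

    /* before: open-coded size math, allocation, copy and termination */
    dir = xcalloc(1, sizeof(*dir) + pathlen + 1);
    memcpy(dir->name, path, pathlen);
    dir->name[pathlen] = '\0';

    /* after: one overflow-checked allocate-copy-NUL-terminate step */
    FLEX_ALLOC_MEM(dir, name, path, pathlen);
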
2015-10-21  name-hash: don't reuse cache_entry in dir_entry  (David Turner; 1 file, -26/+28)

Stop reusing cache_entry in dir_entry; doing so causes a use-after-free bug.

During merges, we free entries that we no longer need in the destination index. But those entries might have also been stored in the dir_entry cache, and when a later call to add_to_index found them, they would be used after being freed.

To prevent this, change dir_entry to store a copy of the name instead of a pointer to a cache_entry. This entails some refactoring of code that expects the cache_entry.

Keith McGuigan <kmcguigan@twitter.com> diagnosed this bug and wrote the initial patch, but this version does not use any of Keith's code.

Helped-by: Keith McGuigan <kmcguigan@twitter.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: David Turner <dturner@twopensource.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

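After the change the directory entry owns its name; roughly (field names are illustrative, not necessarily the exact in-tree layout):

    struct dir_entry {
        struct hashmap_entry ent;
        struct dir_entry *parent;
        int nr;                    /* cache entries referencing this dir */
        unsigned int namelen;
        char name[FLEX_ARRAY];     /* owned copy, instead of a pointer into
                                      a cache_entry that a merge may free */
    };
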
2014-07-07  hashmap: add simplified hashmap_get_from_hash() API  (Karsten Blees; 1 file, -3/+2)

Hashmap entries are typically looked up by just a key. The hashmap_get() API expects an initialized entry structure instead, to support compound keys. This flexibility is currently only needed by find_dir_entry() in name-hash.c (and compat/win32/fscache.c in the msysgit fork). All other (currently five) call sites of hashmap_get() have to set up a near empty entry structure, resulting in duplicate code like this:

    struct hashmap_entry keyentry;
    hashmap_entry_init(&keyentry, hash(key));
    return hashmap_get(map, &keyentry, key);

Add a hashmap_get_from_hash() API that allows hashmap lookups by just specifying the key and its hash code, i.e.:

    return hashmap_get_from_hash(map, hash(key), key);

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-06-20  name-hash.c: replace cache_name_compare() with memcmp(3)  (Jeremiah Mahler; 1 file, -1/+1)

The same_name() private function wants a quick-and-exact check to see if the two names are byte-for-byte identical first and then fall back to the slow path. Use memcmp(3) for the former to make it clear that we do not want any "name"-specific comparison.

Signed-off-by: Jeremiah Mahler <jmmahler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2014-02-24  name-hash: retire unused index_name_exists()  (Eric Sunshine; 1 file, -8/+1)

db5360f3f496 (name-hash: refactor polymorphic index_name_exists(); 2013-09-17) split index_name_exists() into index_file_exists() and index_dir_exists() but retained index_name_exists() as a thin wrapper to avoid disturbing possible in-flight topics.

Since this change landed in 'master' some time ago and there are no in-flight topics referencing index_name_exists(), retire it.

Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-11-18  name-hash.c: remove cache entries instead of marking them CE_UNHASHED  (Karsten Blees; 1 file, -24/+22)

The new hashmap implementation supports remove, so really remove unused cache entries from the name hashmap instead of just marking them.

The CE_UNHASHED flag and CE_STATE_MASK are no longer needed. Keep the CE_HASHED flag to prevent adding entries twice.

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-11-18  name-hash.c: use new hash map implementation for cache entries  (Karsten Blees; 1 file, -16/+8)

Note: the "ce->next = NULL;" in unpack-trees.c::do_add_entry can safely be removed, as ce->next (now ce->ent.next) is always properly initialized in name-hash.c::hash_index_entry.

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-11-18  name-hash.c: remove unreferenced directory entries  (Karsten Blees; 1 file, -7/+8)

The new hashmap implementation supports remove, so remove and free directory entries that are no longer referenced by active cache entries.

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-11-18  name-hash.c: use new hash map implementation for directories  (Karsten Blees; 1 file, -59/+18)

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-09-17  name-hash: stop storing trailing '/' on paths in index_state.dir_hash  (Eric Sunshine; 1 file, -5/+6)

When 5102c617 (Add case insensitivity support for directories when using git status, 2010-10-03) added directories to the name-hash, there was only a single hash table in which both real cache entries and leading directory prefixes were registered. To distinguish between the two types of entries, directories were stored with a trailing '/'.

2092678c (name-hash.c: fix endless loop with core.ignorecase=true, 2013-02-28), however, moved directories to a separate hash table (index_state.dir_hash) but retained the (now) redundant trailing '/', thus callers continue to bear the burden of ensuring the slash's presence before searching the index for a directory. Eliminate this redundancy by storing paths in the dir-hash without the trailing '/'.

An important benefit of this change is that it eliminates undocumented and dangerous behavior of dir.c:directory_exists_in_index_icase() in which it assumes not only that it can validly access one character beyond the end of its incoming directory argument, but also that that character will unconditionally be a '/'. This perilous behavior was "tolerated" because the string passed in by its lone caller always had a '/' in that position, however, things broke [1] when 2eac2a4c (ls-files -k: a directory only can be killed if the index has a non-directory, 2013-08-15) added a new caller which failed to respect the undocumented assumption.

[1]: http://thread.gmane.org/gmane.comp.version-control.git/232727

Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-09-17  name-hash: refactor polymorphic index_name_exists()  (Eric Sunshine; 1 file, -24/+30)

Depending upon the absence or presence of a trailing '/' on the incoming pathname, index_name_exists() checks either if a file is present in the index or if a directory is represented within the index. Each caller explicitly chooses the mode of operation by adding or removing a trailing '/' before invoking index_name_exists().

Since these two modes of operation are disjoint and have no code in common (one searches index_state.name_hash; the other dir_hash), they can be represented more naturally as distinct functions: one to search for a file, and one for a directory.

Splitting index searching into two functions relieves callers of the artificial burden of having to add or remove a slash to select the mode of operation; instead they just call the desired function. A subsequent patch will take advantage of this benefit in order to eliminate the requirement that the incoming pathname for a directory search must have a trailing slash.

(In order to avoid disturbing in-flight topics, index_name_exists() is retained as a thin wrapper dispatching either to index_dir_exists() or index_file_exists().)

Signed-off-by: Eric Sunshine <sunshine@sunshineco.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

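In caller terms the split looks roughly like this sketch (the parameter lists shown are assumptions based on the description above, not verified signatures):

    /* before: one polymorphic entry point, mode selected by trailing '/' */
    found_dir  = !!index_name_exists(istate, "subdir/", 7, ignore_case);
    found_file = !!index_name_exists(istate, "subdir/file.c", 13, ignore_case);

    /* after: two explicit entry points, no slash games for the caller */
    found_dir  = !!index_dir_exists(istate, "subdir/", 7);
    found_file = !!index_file_exists(istate, "subdir/file.c", 13, ignore_case);
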
2013-04-01  Merge branch 'kb/name-hash'  (Junio C Hamano; 1 file, -43/+139)

The code to keep track of what directory names are known to Git on platforms with case insensitive filesystems could get confused upon a hash collision between these pathnames and loop forever.

* kb/name-hash:
  name-hash.c: fix endless loop with core.ignorecase=true

2013-03-21  Merge branch 'nd/preallocate-hash'  (Junio C Hamano; 1 file, -0/+2)

When we know approximately how many entries we will have in the hash-table, it makes sense to size the hash table to that number from the beginning to avoid unnecessary rehashing.

* nd/preallocate-hash:
  Preallocate hash tables when the number of inserts are known in advance

2013-03-16  Preallocate hash tables when the number of inserts are known in advance  (Nguyễn Thái Ngọc Duy; 1 file, -0/+2)

This avoids unnecessary re-allocations and reinsertions. On webkit.git (i.e. about 182k inserts to the name hash table), this reduces about 100ms out of 3s user time.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2013-02-27  name-hash.c: fix endless loop with core.ignorecase=true  (Karsten Blees; 1 file, -43/+139)

With core.ignorecase=true, name-hash.c builds a case insensitive index of all tracked directories. Currently, the existing cache entry structures are added multiple times to the same hashtable (with different name lengths and hash codes). However, there's only one dir_next pointer, which gets completely messed up in case of hash collisions. In the worst case, this causes an endless loop if ce == ce->dir_next (see t7062).

Use a separate hashtable and separate structures for the directory index so that each directory entry has its own next pointer. Use reference counting to track which directory entry contains files.

There are only slight changes to the name-hash.c API:

  - new free_name_hash() used by read_cache.c::discard_index()
  - remove_name_hash() takes an additional index_state parameter
  - index_name_exists() for a directory (trailing '/') may return a cache entry that has been removed (CE_UNHASHED). This is not a problem as the return value is only used to check if the directory exists (dir.c) or to normalize casing of directory names (read-cache.c).

Getting rid of cache_entry.dir_next reduces memory consumption, especially with core.ignorecase=false (which doesn't use that member at all).

With core.ignorecase=true, building the directory index is slightly faster as we add / check the parent directory first (instead of going through all directory levels for each file in the index). E.g. with WebKit (~200k files, ~7k dirs), time spent in lazy_init_name_hash is reduced from 176ms to 130ms.

Signed-off-by: Karsten Blees <blees@dcon.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>