path: root/refs/packed-backend.c
Age | Commit message | Author | Files changed, lines (-/+)
2018-08-29 | convert "oidcmp() != 0" to "!oideq()" | Jeff King | 1 file, -1/+1

This is the flip side of the previous two patches: checking for a non-zero oidcmp() can be more strictly expressed as inequality. Like those patches, we write "!= 0" in the coccinelle transformation, which covers by isomorphism the more common:

    if (oidcmp(E1, E2))

As with the previous two patches, this patch can be achieved almost entirely by running "make coccicheck"; the only differences are manual line-wrap fixes to match the original code.

There is one thing to note for anybody replicating this, though: coccinelle 1.0.4 seems to miss the case in builtin/tag.c, even though it's basically the same as all the others. Running with 1.0.7 does catch this, so presumably it's just a coccinelle bug that was fixed in the interim.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
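For illustration, here is a minimal C sketch of the conversion this patch performs at a single call site (not a hunk from the patch itself); refs_differ() is a made-up wrapper, while oidcmp() and oideq() are the real helpers from git's tree:

    #include "cache.h"  /* struct object_id, oidcmp(), oideq() */

    static int refs_differ(const struct object_id *a, const struct object_id *b)
    {
        /* Before: spell out the comparison against zero ... */
        /* return oidcmp(a, b) != 0; */

        /* ... after: use the stricter equality helper. */
        return !oideq(a, b);
    }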
2018-06-01 | refs/packed-backend.c: close fd of empty file | Stefan Beller | 1 file, -0/+1
Signed-off-by: Stefan Beller <sbeller@google.com> Acked-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-05-06 | Replace all die("BUG: ...") calls by BUG() ones | Johannes Schindelin | 1 file, -8/+8

In d8193743e08 (usage.c: add BUG() function, 2017-05-12), a new macro was introduced to use for reporting bugs instead of die(). It was then subsequently used to convert one single caller in 588a538ae55 (setup_git_env: convert die("BUG") to BUG(), 2017-05-12).

The cover letter of the patch series containing this patch (cf 20170513032414.mfrwabt4hovujde2@sigill.intra.peff.net) is not terribly clear why only one call site was converted, or what the plan is for other, similar calls to die() to report bugs.

Let's just convert all remaining ones in one fell swoop.

This trick was performed by this invocation:

    sed -i 's/die("BUG: /BUG("/g' $(git grep -l 'die("BUG' \*.c)

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
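As a hedged before/after sketch of what that sed invocation does at one call site (the function and message below are illustrative, not taken from the patch):

    #include "git-compat-util.h"  /* die(), BUG() */

    static void assert_locked(int is_locked)
    {
        if (!is_locked)
            /* was: die("BUG: not locked"); */
            BUG("not locked");
    }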
2018-03-30 | refs: use chdir_notify to update cached relative paths | Jeff King | 1 file, -0/+3

Commit f57f37e2e1 (files-backend: remove the use of git_path(), 2017-03-26) introduced a regression when a relative $GIT_DIR is used in a working tree:

  - when we initialize the ref backend, we make a copy of get_git_dir(), which may be relative

  - later, we may call setup_work_tree() and chdir to the root of the working tree

  - further calls to the ref code will use the stored git directory, but relative paths will now point to the wrong place

The new test in t1501 demonstrates one such instance (the bug causes us to write the ref update to the nonsense "relative/relative/.git").

Since setup_work_tree() now uses chdir_notify, we can just ask it to update our relative paths when necessary.

Reported-by: Rafael Ascensao <rafa.almas@gmail.com>
Helped-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
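A minimal sketch of the shape of the fix, assuming the chdir-notify API of that era (chdir_notify_reparent() re-roots a stored path when the process later chdirs); the struct and field names here are illustrative, not the actual packed-backend code:

    #include "cache.h"
    #include "chdir-notify.h"

    struct my_ref_store {
        char *gitdir;   /* possibly relative at the time it is stored */
    };

    static void init_store(struct my_ref_store *refs, const char *gitdir)
    {
        refs->gitdir = xstrdup(gitdir);
        /*
         * If setup_work_tree() chdirs later, ask the chdir-notify
         * machinery to rewrite refs->gitdir so it keeps pointing at
         * the same directory.
         */
        chdir_notify_reparent("my-ref-store gitdir", &refs->gitdir);
    }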
2018-02-15 | Merge branch 'kg/packed-ref-cache-fix' | Junio C Hamano | 1 file, -51/+55

Avoid mmapping small files while using packed refs (especially ones with zero size, which would cause later munmap() to fail).

* kg/packed-ref-cache-fix:
  packed_ref_cache: don't use mmap() for small files
  load_contents(): don't try to mmap an empty file
  packed_ref_iterator_begin(): make optimization more general
  find_reference_location(): make function safe for empty snapshots
  create_snapshot(): use `xmemdupz()` rather than a strbuf
  struct snapshot: store `start` rather than `header_len`
2018-01-24 | packed_ref_cache: don't use mmap() for small files | Kim Gybels | 1 file, -1/+3
Take a hint from commit ea68b0ce9f8 (hash-object: don't use mmap() for small files, 2010-02-21) and use read() instead of mmap() for small packed-refs files. Signed-off-by: Kim Gybels <kgybels@infogroep.be> Signed-off-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
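A rough sketch of the technique in plain POSIX C, under the assumption of a hypothetical SMALL_FILE_SIZE cutoff (git picks its own threshold); this is not the actual load_contents() code:

    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SMALL_FILE_SIZE (32 * 1024)   /* hypothetical cutoff */

    static void *load_file(int fd, size_t size, int *used_mmap)
    {
        if (size <= SMALL_FILE_SIZE) {
            char *buf = malloc(size);
            size_t off = 0;

            while (buf && off < size) {
                ssize_t n = read(fd, buf + off, size - off);
                if (n <= 0) {   /* a real caller would retry on EINTR */
                    free(buf);
                    return NULL;
                }
                off += n;
            }
            *used_mmap = 0;
            return buf;     /* small file: plain read() avoids mmap/munmap overhead */
        }
        *used_mmap = 1;
        return mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    }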
2018-01-24 | load_contents(): don't try to mmap an empty file | Michael Haggerty | 1 file, -7/+6
We don't actually create zero-length `packed-refs` files, but they are valid and we should handle them correctly. The old code `xmmap()`ed such files, which led to an error when `munmap()` was called. So, if the `packed-refs` file is empty, leave the snapshot at its zero values and return 0 without trying to read or mmap the file. Returning 0 also makes `create_snapshot()` exit early, which avoids the technically undefined comparison `NULL < NULL`. Reported-by: Kim Gybels <kgybels@infogroep.be> Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-24 | packed_ref_iterator_begin(): make optimization more general | Michael Haggerty | 1 file, -6/+6
We can return an empty iterator not only if the `packed-refs` file is missing, but also if it is empty or if there are no references whose names succeed `prefix`. Optimize away those cases as well by moving the call to `find_reference_location()` higher in the function and checking whether the determined start position is the same as `snapshot->eof`. (This is possible now because the previous commit made `find_reference_location()` robust against empty snapshots.) Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-01-24 | find_reference_location(): make function safe for empty snapshots | Michael Haggerty | 1 file, -4/+6

This function had two problems if called for an empty snapshot (i.e., `snapshot->start == snapshot->eof == NULL`):

* It checked `NULL < NULL`, which is undefined by C (albeit highly unlikely to fail in the real world).

* (Assuming the above comparison behaved as expected), it returned NULL when `mustexist` was false, contrary to its docstring.

Change the check and fix the docstring.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
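A tiny illustration of the pointer-comparison issue (the function and parameter names are made up): relational comparison of two pointers is only defined when they point into the same object, so for an empty snapshot where both pointers are NULL, an equality check is the safe spelling:

    #include <stddef.h>

    static int snapshot_is_empty(const char *start, const char *eof)
    {
        /* not: start < eof  -- undefined when both are NULL */
        return start == eof;    /* equality of null pointers is well defined */
    }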
2018-01-24 | create_snapshot(): use `xmemdupz()` rather than a strbuf | Michael Haggerty | 1 file, -5/+4
It's lighter weight. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
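An illustrative before/after (not the actual create_snapshot() hunk); copy_header() is made up, while xmemdupz() and the strbuf helpers are real git utilities:

    #include "cache.h"    /* xmemdupz() */
    #include "strbuf.h"

    static char *copy_header(const char *buf, size_t len)
    {
        /*
         * Before: round-trip through a strbuf just to get a
         * NUL-terminated copy:
         *
         *     struct strbuf tmp = STRBUF_INIT;
         *     strbuf_add(&tmp, buf, len);
         *     return strbuf_detach(&tmp, NULL);
         */

        /* After: one allocation, one copy, NUL-terminated. */
        return xmemdupz(buf, len);
    }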
2018-01-24 | struct snapshot: store `start` rather than `header_len` | Michael Haggerty | 1 file, -31/+33
Store a pointer to the start of the actual references within the `packed-refs` contents rather than storing the length of the header. This is more convenient for most users of this field. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-12-06 | Merge branch 'mh/avoid-rewriting-packed-refs' into maint | Junio C Hamano | 1 file, -0/+94

Recent update to the refs infrastructure implementation started rewriting packed-refs file more often than before; this has been optimized again for most trivial cases.

* mh/avoid-rewriting-packed-refs:
  files-backend: don't rewrite the `packed-refs` file unnecessarily
  t1409: check that `packed-refs` is not rewritten unnecessarily
2017-11-15 | Merge branch 'mh/tidy-ref-update-flags' | Junio C Hamano | 1 file, -9/+9

Code clean-up in refs API implementation.

* mh/tidy-ref-update-flags:
  refs: update some more docs to use "oid" rather than "sha1"
  write_packed_entry(): take `object_id` arguments
  refs: rename constant `REF_ISPRUNING` to `REF_IS_PRUNING`
  refs: rename constant `REF_NODEREF` to `REF_NO_DEREF`
  refs: tidy up and adjust visibility of the `ref_update` flags
  ref_transaction_add_update(): remove a check
  ref_transaction_update(): die on disallowed flags
  prune_ref(): call `ref_transaction_add_update()` directly
  files_transaction_prepare(): don't leak flags to packed transaction
2017-11-15 | Merge branch 'mh/avoid-rewriting-packed-refs' | Junio C Hamano | 1 file, -0/+94

Recent update to the refs infrastructure implementation started rewriting packed-refs file more often than before; this has been optimized again for most trivial cases.

* mh/avoid-rewriting-packed-refs:
  files-backend: don't rewrite the `packed-refs` file unnecessarily
  t1409: check that `packed-refs` is not rewritten unnecessarily
2017-11-06 | refs: update some more docs to use "oid" rather than "sha1" | Michael Haggerty | 1 file, -1/+1
Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-11-06 | write_packed_entry(): take `object_id` arguments | Michael Haggerty | 1 file, -8/+8
Change `write_packed_entry()` to take `struct object_id *` rather than `unsigned char *` arguments. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-30 | files-backend: don't rewrite the `packed-refs` file unnecessarily | Michael Haggerty | 1 file, -0/+94

Even when we are deleting references, we needn't overwrite the `packed-refs` file if the references that we are deleting only exist as loose references. Implement this optimization as follows:

* Add a function `is_packed_transaction_needed()`, which checks whether a given packed-refs transaction actually needs to be carried out (i.e., it returns false if the transaction obviously wouldn't have any effect). This function must be called while holding the `packed-refs` lock to avoid races.

* Change `files_transaction_prepare()` to check whether the packed-refs transaction is actually needed. If not, squelch it, but continue holding the `packed-refs` lock until the end of the transaction to avoid races.

This fixes a mild regression caused by dc39e09942 (files_ref_store: use a transaction to update packed refs, 2017-09-08). Before that commit, unnecessary rewrites of `packed-refs` were suppressed by `repack_without_refs()`. But the transaction-based writing introduced by that commit didn't perform that optimization.

Note that the pre-dc39e09942 code still had to *read* the whole `packed-refs` file to determine that the rewrite could be skipped, so the performance for the cases that the write could be elided was `O(N)` in the number of packed references both before and after dc39e09942. But after that commit the constant factor increased.

This commit reimplements the optimization of eliding unnecessary `packed-refs` rewrites. That, plus the fact that since cfa2e29c34 (packed_ref_store: get rid of the `ref_cache` entirely, 2017-03-17) we don't necessarily have to read the whole `packed-refs` file at all, means that deletes of one or a few loose references can now be done with `O(n lg N)` effort, where `n` is the number of loose references being deleted and `N` is the total number of packed references.

This commit fixes two tests in t1409.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
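A conceptual sketch of the check described above; every name below is illustrative (the real is_packed_transaction_needed() inspects the actual packed-refs snapshot while the lock is held), but the control flow mirrors the idea:

    #include <stddef.h>

    struct update { const char *refname; int is_deletion; };

    static int snapshot_contains_ref(const char *refname)
    {
        (void)refname;
        return 0;   /* stub: a real version would binary-search the packed-refs snapshot */
    }

    static int packed_transaction_needed(struct update *updates, size_t nr)
    {
        size_t i;

        for (i = 0; i < nr; i++) {
            if (!updates[i].is_deletion)
                return 1;       /* any creation/update must be written out */
            if (snapshot_contains_ref(updates[i].refname))
                return 1;       /* deleting a ref that really is packed */
        }
        return 0;   /* only deletions of refs that were never packed: skip the rewrite */
    }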
2017-10-16 | refs: convert read_raw_ref backends to struct object_id | brian m. carlson | 1 file, -2/+2
Convert the unsigned char * parameter to struct object_id * for files_read_raw_ref and packed_read_raw_ref. Update the documentation. Switch from using get_sha1_hex and a hard-coded 40 to using parse_oid_hex. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
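A hedged sketch of the parsing idiom the commit moves to; parse_packed_line() is made up, but parse_oid_hex() is the real helper (it fills the oid and reports where the hex ended, instead of assuming a hard-coded 40-character SHA-1):

    #include "cache.h"    /* struct object_id, parse_oid_hex() */

    static int parse_packed_line(const char *line, struct object_id *oid,
                                 const char **refname)
    {
        const char *p;

        if (parse_oid_hex(line, oid, &p) || *p != ' ')
            return -1;          /* expect "<oid> <refname>" */
        *refname = p + 1;
        return 0;
    }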
2017-10-16 | refs: convert peel_object to struct object_id | brian m. carlson | 1 file, -3/+3
Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 | refs: convert reflog_expire parameter to struct object_id | brian m. carlson | 1 file, -1/+1
reflog_expire already used struct object_id internally, but it did not take it as a parameter. Adjust the parameter (and the callers) to pass a pointer to struct object_id instead of a pointer to unsigned char. Remove the temporary inserted earlier as it is no longer required. Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-05 | Merge branch 'rs/cleanup-strbuf-users' | Junio C Hamano | 1 file, -2/+2

Code clean-up.

* rs/cleanup-strbuf-users:
  graph: use strbuf_addchars() to add spaces
  use strbuf_addstr() for adding strings to strbufs
  path: use strbuf_add_real_path()
2017-10-03 | Merge branch 'mh/mmap-packed-refs' | Junio C Hamano | 1 file, -237/+742

Operations that do not touch (majority of) packed refs have been optimized by making accesses to packed-refs file lazy; we no longer pre-parse everything, and an access to a single ref in the packed-refs does not touch majority of irrelevant refs, either.

* mh/mmap-packed-refs: (21 commits)
  packed-backend.c: rename a bunch of things and update comments
  mmapped_ref_iterator: inline into `packed_ref_iterator`
  ref_cache: remove support for storing peeled values
  packed_ref_store: get rid of the `ref_cache` entirely
  ref_store: implement `refs_peel_ref()` generically
  packed_read_raw_ref(): read the reference from the mmapped buffer
  packed_ref_iterator_begin(): iterate using `mmapped_ref_iterator`
  read_packed_refs(): ensure that references are ordered when read
  packed_ref_cache: keep the `packed-refs` file mmapped if possible
  packed-backend.c: reorder some definitions
  mmapped_ref_iterator_advance(): no peeled value for broken refs
  mmapped_ref_iterator: add iterator over a packed-refs file
  packed_ref_cache: remember the file-wide peeling state
  read_packed_refs(): read references with minimal copying
  read_packed_refs(): make parsing of the header line more robust
  read_packed_refs(): only check for a header at the top of the file
  read_packed_refs(): use mmap to read the `packed-refs` file
  die_unterminated_line(), die_invalid_line(): new functions
  packed_ref_cache: add a backlink to the associated `packed_ref_store`
  prefix_ref_iterator: break when we leave the prefix
  ...
2017-10-03 | Merge branch 'sd/branch-copy' | Junio C Hamano | 1 file, -0/+8

"git branch" learned "-c/-C" to create a new branch by copying an existing one.

* sd/branch-copy:
  branch: fix "copy" to never touch HEAD
  branch: add a --copy (-c) option to go with --move (-m)
  branch: add test for -m renaming multiple config sections
  config: create a function to format section headers
2017-10-02 | use strbuf_addstr() for adding strings to strbufs | René Scharfe | 1 file, -2/+2
Use strbuf_addstr() instead of strbuf_addf() for adding strings. That's simpler and makes the intent clearer. Patch generated by Coccinelle and contrib/coccinelle/strbuf.cocci; adjusted indentation in refs/packed-backend.c manually. Signed-off-by: Rene Scharfe <l.s.r@web.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
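The shape of the cleanup, as a tiny illustrative example (not the exact hunk from refs/packed-backend.c):

    #include "strbuf.h"

    static void append_refname(struct strbuf *out, const char *refname)
    {
        /* Before: strbuf_addf(out, "%s", refname); */
        strbuf_addstr(out, refname);    /* simpler, and says what it means */
    }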
2017-09-25 | packed-backend.c: rename a bunch of things and update comments | Michael Haggerty | 1 file, -190/+232

We've made huge changes to this file, and some of the old names and comments are no longer very fitting. So rename a bunch of things:

* `struct packed_ref_cache` → `struct snapshot`
* `acquire_packed_ref_cache()` → `acquire_snapshot()`
* `release_packed_ref_buffer()` → `clear_snapshot_buffer()`
* `release_packed_ref_cache()` → `release_snapshot()`
* `clear_packed_ref_cache()` → `clear_snapshot()`
* `struct packed_ref_entry` → `struct snapshot_record`
* `cmp_packed_ref_entries()` → `cmp_packed_ref_records()`
* `cmp_entry_to_refname()` → `cmp_record_to_refname()`
* `sort_packed_refs()` → `sort_snapshot()`
* `read_packed_refs()` → `create_snapshot()`
* `validate_packed_ref_cache()` → `validate_snapshot()`
* `get_packed_ref_cache()` → `get_snapshot()`
* Renamed local variables and struct members accordingly.

Also update a bunch of comments to reflect the renaming and the accumulated changes that the code has undergone.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | mmapped_ref_iterator: inline into `packed_ref_iterator` | Michael Haggerty | 1 file, -170/+114
Since `packed_ref_iterator` is now delegating to `mmapped_ref_iterator` rather than `cache_ref_iterator` to do the heavy lifting, there is no need to keep the two iterators separate. So "inline" `mmapped_ref_iterator` into `packed_ref_iterator`. This removes a bunch of boilerplate. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | ref_cache: remove support for storing peeled values | Michael Haggerty | 1 file, -1/+8
Now that the `packed-refs` backend doesn't use `ref_cache`, there is nobody left who might want to store peeled values of references in `ref_cache`. So remove that feature. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | packed_ref_store: get rid of the `ref_cache` entirely | Michael Haggerty | 1 file, -27/+2
Now that everything has been changed to read what it needs directly out of the `packed-refs` file, `packed_ref_store` doesn't need to maintain a `ref_cache` at all. So get rid of it. First of all, this will save a lot of memory and lots of little allocations. Instead of needing to store complicated parsed data structures in memory, we just mmap the file (potentially sharing memory with other processes) and parse only what we need. Moreover, since the mmapped access to the file reads only the parts of the file that it needs, this might save reading all of the data from disk at all (at least if the file starts out sorted). Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | ref_store: implement `refs_peel_ref()` generically | Michael Haggerty | 1 file, -36/+0
We're about to stop storing packed refs in a `ref_cache`. That means that the only way we have left to optimize `peel_ref()` is by checking whether the reference being peeled is the one currently being iterated over (in `current_ref_iter`), and if so, using `ref_iterator_peel()`. But this can be done generically; it doesn't have to be implemented per-backend. So implement `refs_peel_ref()` in `refs.c` and remove the `peel_ref()` method from the refs API. This removes the last callers of a couple of functions, so delete them. More cleanup to come... Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | packed_read_raw_ref(): read the reference from the mmapped buffer | Michael Haggerty | 1 file, -5/+9
Instead of reading the reference from the `ref_cache`, read it directly from the mmapped buffer. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | packed_ref_iterator_begin(): iterate using `mmapped_ref_iterator` | Michael Haggerty | 1 file, -3/+106
Now that we have an efficient way to iterate, in order, over the mmapped contents of the `packed-refs` file, we can use that directly to implement reference iteration for the `packed_ref_store`, rather than iterating over the `ref_cache`. This is the next step towards getting rid of the `ref_cache` entirely. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | read_packed_refs(): ensure that references are ordered when read | Michael Haggerty | 1 file, -11/+212

It doesn't actually matter now, because the references are only iterated over to fill the associated `ref_cache`, which itself puts them in the correct order. But we want to get rid of the `ref_cache`, so we want to be able to iterate directly over the `packed-refs` buffer, and then the iteration will need to be ordered correctly.

In fact, we already write the `packed-refs` file sorted, but it is possible that other Git clients don't get it right. So let's not assume that a `packed-refs` file is sorted unless it is explicitly declared to be so via a `sorted` trait in its header line.

If it is *not* declared to be sorted, then scan quickly through the file to check. If it is found to be out of order, then sort the records into a new memory-only copy. This checking and sorting is done quickly, without parsing the full file contents. However, it needs a little bit of care to avoid reading past the end of the buffer even if the `packed-refs` file is corrupt.

Since *we* always write the file correctly sorted, include that trait when we write or rewrite a `packed-refs` file. This means that the scan described in the previous paragraph should only have to be done for `packed-refs` files that were written by older versions of the Git command-line client, or by other clients that haven't yet learned to write the `sorted` trait.

If `packed-refs` was already sorted, then (if the system allows it) we can use the mmapped file contents directly. But if the system doesn't allow a file that is currently mmapped to be replaced using `rename()`, then it would be bad for us to keep the file mmapped for any longer than necessary. So, on such systems, always make a copy of the file contents, either as part of the sorting process, or afterwards.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
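For reference, a small hand-written example of what a `packed-refs` file carrying the `sorted` trait looks like (object ids are placeholders; a `^` line gives the peeled value of the tag above it, and as the header-parsing commit further down notes, git actually writes a trailing space after the traits):

    # pack-refs with: peeled fully-peeled sorted
    1111111111111111111111111111111111111111 refs/heads/maint
    2222222222222222222222222222222222222222 refs/heads/master
    3333333333333333333333333333333333333333 refs/tags/v1.0
    ^4444444444444444444444444444444444444444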
2017-09-25 | packed_ref_cache: keep the `packed-refs` file mmapped if possible | Michael Haggerty | 1 file, -42/+143

Keep a copy of the `packed-refs` file contents in memory for as long as a `packed_ref_cache` object is in use:

* If the system allows it, keep the `packed-refs` file mmapped.

* If not (either because the system doesn't support `mmap()` at all, or because a file that is currently mmapped cannot be replaced via `rename()`), then make a copy of the file's contents in heap-allocated space, and keep that around instead.

We base the choice of behavior on a new build-time switch, `MMAP_PREVENTS_DELETE`. By default, this switch is set for Windows variants.

After this commit, `MMAP_NONE` and `MMAP_TEMPORARY` are still handled identically. But the next commit will introduce a difference.

This whole change is still pointless, because we only read the `packed-refs` file contents immediately after instantiating the `packed_ref_cache`. But that will soon change.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | packed-backend.c: reorder some definitions | Michael Haggerty | 1 file, -24/+24
No code has been changed. This will make subsequent patches more self-contained. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | mmapped_ref_iterator_advance(): no peeled value for broken refs | Michael Haggerty | 1 file, -2/+8
If a reference is broken, suppress its peeled value. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | mmapped_ref_iterator: add iterator over a packed-refs file | Michael Haggerty | 1 file, -55/+152
Add a new `mmapped_ref_iterator`, which can iterate over the references in an mmapped `packed-refs` file directly. Use this iterator from `read_packed_refs()` to fill the packed refs cache. Note that we are not yet willing to promise that the new iterator generates its output in order. That doesn't matter for now, because the packed refs cache doesn't care what order it is filled. This change adds a lot of boilerplate without providing any obvious benefits. The benefits will come soon, when we get rid of the `ref_cache` for packed references altogether. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | packed_ref_cache: remember the file-wide peeling state | Michael Haggerty | 1 file, -5/+12
Rather than store the peeling state (i.e., the one defined by traits in the `packed-refs` file header line) in a local variable in `read_packed_refs()`, store it permanently in `packed_ref_cache`. This will be needed when we stop reading all packed refs at once. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | read_packed_refs(): read references with minimal copying | Michael Haggerty | 1 file, -61/+40
Instead of copying data from the `packed-refs` file one line at time and then processing it, process the data in place as much as possible. Also, instead of processing one line per iteration of the main loop, process a reference line plus its corresponding peeled line (if present) together. Note that this change slightly tightens up the parsing of the `packed-refs` file. Previously, the parser would have accepted multiple "peeled" lines for a single reference (ignoring all but the last one). Now it would reject that. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-19 | Merge branch 'mh/packed-ref-transactions' | Junio C Hamano | 1 file, -152/+310

Implement transactional update to the packed-ref representation of references.

* mh/packed-ref-transactions:
  files_transaction_finish(): delete reflogs before references
  packed-backend: rip out some now-unused code
  files_ref_store: use a transaction to update packed refs
  t1404: demonstrate two problems with reference transactions
  files_initial_transaction_commit(): use a transaction for packed refs
  prune_refs(): also free the linked list
  files_pack_refs(): use a reference transaction to write packed refs
  packed_delete_refs(): implement method
  packed_ref_store: implement reference transactions
  struct ref_transaction: add a place for backends to store data
  packed-backend: don't adjust the reference count on lock/unlock
2017-09-14 | read_packed_refs(): make parsing of the header line more robust | Michael Haggerty | 1 file, -6/+15
The old code parsed the traits in the `packed-refs` header by looking for the string " trait " (i.e., the name of the trait with a space on either side) in the header line. This is fragile, because if any other implementation of Git forgets to write the trailing space, the last trait would silently be ignored (and the error might never be noticed). So instead, use `string_list_split_in_place()` to split the traits into tokens then use `unsorted_string_list_has_string()` to look for the tokens we are interested in. This means that we can read the traits correctly even if the header line is missing a trailing space (or indeed, if it is missing the space after the colon, or if it has multiple spaces somewhere). However, older Git clients (and perhaps other Git implementations) still require the surrounding spaces, so we still have to output the header with a trailing space. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
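A sketch of that parsing approach, assuming the string-list API as it existed at the time (a character delimiter for string_list_split_in_place()); parse_traits() and its arguments are illustrative, not the actual read_packed_refs() code:

    #include "cache.h"
    #include "string-list.h"

    static void parse_traits(char *traits, int *peeled, int *fully_peeled, int *sorted)
    {
        struct string_list tokens = STRING_LIST_INIT_NODUP;

        /* Tokenizing tolerates a missing trailing space or doubled spaces. */
        string_list_split_in_place(&tokens, traits, ' ', -1);

        *peeled = unsorted_string_list_has_string(&tokens, "peeled");
        *fully_peeled = unsorted_string_list_has_string(&tokens, "fully-peeled");
        *sorted = unsorted_string_list_has_string(&tokens, "sorted");

        string_list_clear(&tokens, 0);
    }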
2017-09-14 | read_packed_refs(): only check for a header at the top of the file | Michael Haggerty | 1 file, -11/+24
This tightens up the parsing a bit; previously, stray header-looking lines would have been processed. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 | read_packed_refs(): use mmap to read the `packed-refs` file | Michael Haggerty | 1 file, -10/+32
It's still done in a pretty stupid way, involving more data copying than necessary. That will improve in future commits. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 | die_unterminated_line(), die_invalid_line(): new functions | Michael Haggerty | 1 file, -3/+25
Extract some helper functions for reporting errors. While we're at it, prevent them from spewing unlimited output to the terminal. These functions will soon have more callers. These functions accept the problematic line as a `(ptr, len)` pair rather than a NUL-terminated string, and `die_invalid_line()` checks for an EOL itself, because these calling conventions will be convenient for future callers. (Efficiency is not a concern here because these functions are only ever called if the `packed-refs` file is corrupt.) Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
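A hedged sketch of a bounded error report for a `(ptr, len)` line, in the spirit of these helpers (the function name, message text and 80-byte cap below are illustrative, not the actual implementation):

    #include "git-compat-util.h"  /* die(); memchr() comes via the system headers */

    static void report_invalid_line(const char *path, const char *line, size_t len)
    {
        const char *eol = memchr(line, '\n', len);
        size_t shown = eol ? (size_t)(eol - line) : len;

        if (shown > 80)
            shown = 80;     /* don't spew an arbitrarily long corrupt line */
        die("unexpected line in %s: %.*s", path, (int)shown, line);
    }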
2017-09-14 | packed_ref_cache: add a backlink to the associated `packed_ref_store` | Michael Haggerty | 1 file, -7/+16
It will prove convenient in upcoming patches. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-14 | ref_iterator: keep track of whether the iterator output is ordered | Michael Haggerty | 1 file, -1/+1
References are iterated over in order by refname, but reflogs are not. Some consumers of reference iteration care about the difference. Teach each `ref_iterator` to keep track of whether its output is ordered. `overlay_ref_iterator` is one of the picky consumers. Add a sanity check in `overlay_ref_iterator_begin()` to verify that its inputs are ordered. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 | packed-backend: rip out some now-unused code | Michael Haggerty | 1 file, -193/+0
Now the outside world interacts with the packed ref store only via the generic refs API plus a few lock-related functions. This allows us to delete some functions that are no longer used, thereby completing the encapsulation of the packed ref store. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 | packed_delete_refs(): implement method | Michael Haggerty | 1 file, -1/+44
Implement `packed_delete_refs()` using a reference transaction. This means that `files_delete_refs()` can use `refs_delete_refs()` instead of `repack_without_refs()` to delete any packed references, decreasing the coupling between the classes. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 | packed_ref_store: implement reference transactions | Michael Haggerty | 1 file, -3/+310
Implement the methods needed to support reference transactions for the packed-refs backend. The new methods are not yet used. Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-09 | packed-backend: don't adjust the reference count on lock/unlock | Michael Haggerty | 1 file, -5/+5

The old code incremented the packed ref cache reference count when acquiring the packed-refs lock, and decremented the count when releasing the lock. This is unnecessary because:

* Another process cannot change the packed-refs file because it is locked.

* When we ourselves change the packed-refs file, we do so by first modifying the packed ref-cache, and then writing the data from the ref-cache to disk. So the packed ref-cache remains fresh because any changes that we plan to make to the file are made in the cache first anyway.

So there is no reason for the cache to become stale.

Moreover, the extra reference count causes a problem if we intentionally clear the packed refs cache, as we sometimes need to do if we change the cache in anticipation of writing a change to disk, but then the write to disk fails. In that case, `packed_refs_unlock()` would have no easy way to find the cache whose reference count it needs to decrement.

This whole issue will soon become moot due to upcoming changes that avoid changing the in-memory cache as part of updating the packed-refs on disk, but this change makes that transition easier.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-06 | tempfile: auto-allocate tempfiles on heap | Jeff King | 1 file, -5/+6

The previous commit taught the tempfile code to give up ownership over tempfiles that have been renamed or deleted. That makes it possible to use a stack variable like this:

    struct tempfile t;

    create_tempfile(&t, ...);
    ...
    if (!err)
            rename_tempfile(&t, ...);
    else
            delete_tempfile(&t);

But doing it this way has a high potential for creating memory errors. The tempfile we pass to create_tempfile() ends up on a global linked list, and it's not safe for it to go out of scope until we've called one of those two deactivation functions.

Imagine that we add an early return from the function that forgets to call delete_tempfile(). With a static or heap tempfile variable, the worst case is that the tempfile hangs around until the program exits (and some functions like setup_shallow_temporary rely on this intentionally, creating a tempfile and then leaving it for later cleanup).

But with a stack variable as above, this is a serious memory error: the variable goes out of scope and may be filled with garbage by the time the tempfile code looks at it. Let's see if we can make it harder to get this wrong.

Since many callers need to allocate arbitrary numbers of tempfiles, we can't rely on static storage as a general solution. So we need to turn to the heap. We could just ask all callers to pass us a heap variable, but that puts the burden on them to call free() at the right time.

Instead, let's have the tempfile code handle the heap allocation _and_ the deallocation (when the tempfile is deactivated and removed from the list).

This changes the return value of all of the creation functions. For the cleanup functions (delete and rename), we'll add one extra bit of safety: instead of taking a tempfile pointer, we'll take a pointer-to-pointer and set it to NULL after freeing the object. This makes it safe to double-call functions like delete_tempfile(), as the second call treats the NULL input as a noop. Several callsites follow this pattern.

The resulting patch does have a fair bit of noise, as each caller needs to be converted to handle:

  1. Storing a pointer instead of the struct itself.

  2. Passing the pointer instead of taking the struct address.

  3. Handling a "struct tempfile *" return instead of a file descriptor.

We could play games to make this less noisy. For example, by defining the tempfile like this:

    struct tempfile {
            struct heap_allocated_part_of_tempfile {
                    int fd;
                    ...etc
            } *actual_data;
    }

Callers would continue to have a "struct tempfile", and it would be "active" only when the inner pointer was non-NULL. But that just makes things more awkward in the long run. There aren't that many callers, so we can simply bite the bullet and adjust all of them. And the compiler makes it easy for us to find them all.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
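For comparison with the stack-based pattern quoted above, a hedged sketch of what a caller might look like after this change; the tempfile functions are real, but their exact signatures here are recalled from memory and the surrounding code is illustrative:

    #include "cache.h"
    #include "tempfile.h"

    static int write_and_commit(const char *tmp_path, const char *dest_path)
    {
        struct tempfile *t = create_tempfile(tmp_path); /* heap-allocated, on the global list */

        if (!t)
            return -1;
        /* ... write to the tempfile's descriptor ... */
        if (rename_tempfile(&t, dest_path) < 0) {
            delete_tempfile(&t);    /* double-cleanup is safe: a NULL input is a noop */
            return -1;
        }
        return 0;   /* t has been freed and set to NULL by rename_tempfile() */
    }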