path: root/fast-import.c
Age | Commit message | Author | Files | Lines (-/+)
2018-05-30 | Merge branch 'ma/lockfile-cleanup' | Junio C Hamano | 1 | -1/+1

Code clean-up to adjust to a more recent lockfile API convention that allows lockfile instances to be kept on the stack.

* ma/lockfile-cleanup:
  lock_file: move static locks into functions
  lock_file: make function-local locks non-static
  refs.c: do not die if locking fails in `delete_pseudoref()`
  refs.c: do not die if locking fails in `write_pseudoref()`
  t/helper/test-write-cache: clean up lock-handling
2018-05-23 | Merge branch 'sb/oid-object-info' | Junio C Hamano | 1 | -6/+10

The codepath around object-info API has been taught to take the repository object (which in turn tells the API which object store the objects are to be located in).

* sb/oid-object-info:
  cache.h: allow oid_object_info to handle arbitrary repositories
  packfile: add repository argument to cache_or_unpack_entry
  packfile: add repository argument to unpack_entry
  packfile: add repository argument to read_object
  packfile: add repository argument to packed_object_info
  packfile: add repository argument to packed_to_object_type
  packfile: add repository argument to retry_bad_packed_offset
  cache.h: add repository argument to oid_object_info
  cache.h: add repository argument to oid_object_info_extended
2018-05-10 | lock_file: make function-local locks non-static | Martin Ågren | 1 | -1/+1

Placing `struct lock_file`s on the stack used to be a bad idea, because the temp- and lockfile-machinery would keep a pointer into the struct. But after 076aa2cbd (tempfile: auto-allocate tempfiles on heap, 2017-09-05), we can safely have lockfiles on the stack. (This applies even if a user returns early, leaving a locked lock behind.)

These `struct lock_file`s are local to their respective functions and we can drop their staticness.

For good measure, I have inspected these sites and come to believe that they always release the lock, with the possible exception of bailing out using `die()` or `exit()` or by returning from a `cmd_foo()`.

As pointed out by Jeff King, it would be bad if someone held on to a `struct lock_file *` for some reason. After some grepping, I agree with his findings: no-one appears to be doing that.

Signed-off-by: Martin Ågren <martin.agren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
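A rough illustration of the pattern this change enables, assuming git's internal lockfile.h API (LOCK_INIT, hold_lock_file_for_update(), commit_lock_file()); the buffer and path names are made up:

    struct lock_file lk = LOCK_INIT;   /* on the stack, no longer static */
    int fd = hold_lock_file_for_update(&lk, path, LOCK_DIE_ON_ERROR);

    if (write_in_full(fd, buf, len) < 0) {
            rollback_lock_file(&lk);   /* drop the lock and its tempfile */
            return error_errno("could not write '%s'", path);
    }
    return commit_lock_file(&lk);      /* rename tempfile into place, unlock */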
2018-05-08 | Merge branch 'ds/commit-graph' | Junio C Hamano | 1 | -1/+1

Precompute and store information necessary for ancestry traversal in a separate file to optimize graph walking.

* ds/commit-graph:
  commit-graph: implement "--append" option
  commit-graph: build graph from starting commits
  commit-graph: read only from specific pack-indexes
  commit: integrate commit graph with commit parsing
  commit-graph: close under reachability
  commit-graph: add core.commitGraph setting
  commit-graph: implement git commit-graph read
  commit-graph: implement git-commit-graph write
  commit-graph: implement write_commit_graph()
  commit-graph: create git-commit-graph builtin
  graph: add commit graph design document
  commit-graph: add format document
  csum-file: refactor finalize_hashfile() method
  csum-file: rename hashclose() to finalize_hashfile()
2018-04-26 | packfile: add repository argument to unpack_entry | Stefan Beller | 1 | -1/+1

Add a repository argument to allow the callers of unpack_entry to be more specific about which repository to act on. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet.

As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Stefan Beller <sbeller@google.com>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
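The "macro to catch callers" mentioned above works roughly as sketched below; this is an approximation of the pattern used in the series, not the exact header text. Token-pasting the repository argument into the function name means only callers that literally pass the_repository compile:

    void *unpack_entry_the_repository(struct packed_git *p, off_t offset,
                                      enum object_type *type,
                                      unsigned long *sizep);
    #define unpack_entry(r, p, offset, type, sizep) \
            unpack_entry_##r(p, offset, type, sizep)

    /* unpack_entry(the_repository, p, ofs, &t, &sz) resolves to the real
     * function; passing any other repository fails to compile. */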
2018-04-26 | cache.h: add repository argument to oid_object_info | Stefan Beller | 1 | -5/+9

Add a repository argument to allow the callers of oid_object_info to be more specific about which repository to handle. This is a small mechanical change; it doesn't change the implementation to handle repositories other than the_repository yet.

As with the previous commits, use a macro to catch callers passing a repository other than the_repository at compile time.

Signed-off-by: Stefan Beller <sbeller@google.com>
Reviewed-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-25 | Merge branch 'jm/mem-pool' | Junio C Hamano | 1 | -61/+16

A reusable "memory pool" implementation has been extracted from fast-import.c, which in turn has become the first user of the mem-pool API.

* jm/mem-pool:
  mem-pool: move reusable parts of memory pool into its own file
  fast-import: introduce mem_pool type
  fast-import: rename mem_pool type to mp_block
2018-04-12 | mem-pool: move reusable parts of memory pool into its own file | Jameson Miller | 1 | -69/+1

This moves the reusable parts of the memory pool logic used by fast-import.c into its own file for use by other components.

Signed-off-by: Jameson Miller <jamill@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-12 | fast-import: introduce mem_pool type | Jameson Miller | 1 | -29/+52

Introduce the mem_pool type which encapsulates all the information necessary to manage a pool of memory.

This change moves the existing variables in fast-import used to support the global memory pool to use this structure. It also renames variables that are no longer used by memory pools to reflect their more scoped usage.

These changes allow multiple instances of a memory pool to exist and be reused outside of fast-import. In a future commit the mem_pool type will be moved to its own file.

Signed-off-by: Jameson Miller <jamill@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
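The shape such a pool takes, as a rough sketch rather than the exact structs that later landed in mem-pool.h (field names are approximate):

    struct mp_block {
            struct mp_block *next_block;
            char *next_free;            /* bump pointer for allocations */
            char *end;                  /* one past the last usable byte */
    };

    struct mem_pool {
            struct mp_block *mp_block;  /* head of the block list */
            size_t block_alloc;         /* preferred size for new blocks */
            size_t pool_alloc;          /* total memory allocated by the pool */
    };

    /* Allocation advances next_free; when the current block cannot satisfy
     * a request, a new block is pushed onto the list. */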
2018-04-12 | fast-import: rename mem_pool type to mp_block | Jameson Miller | 1 | -10/+10

This is part of a patch series to extract the memory pool logic in fast-import into a more generalized version. The existing mem_pool type maps more closely to a "block of memory" (mp_block) in the more generalized memory pool. This commit renames the mem_pool to mp_block to reduce churn in future patches.

Signed-off-by: Jameson Miller <jamill@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-04-11 | Merge branch 'sb/packfiles-in-repository' | Junio C Hamano | 1 | -2/+1

Refactoring of the internal global data structure continues.

* sb/packfiles-in-repository:
  packfile: keep prepare_packed_git() private
  packfile: allow find_pack_entry to handle arbitrary repositories
  packfile: add repository argument to find_pack_entry
  packfile: allow reprepare_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git_one to handle arbitrary repositories
  packfile: add repository argument to reprepare_packed_git
  packfile: add repository argument to prepare_packed_git
  packfile: add repository argument to prepare_packed_git_one
  packfile: allow install_packed_git to handle arbitrary repositories
  packfile: allow rearrange_packed_git to handle arbitrary repositories
  packfile: allow prepare_packed_git_mru to handle arbitrary repositories
2018-04-11 | Merge branch 'sb/object-store' | Junio C Hamano | 1 | -2/+6

Refactoring the internal global data structure to make it possible to open multiple repositories, work with and then close them.

Rerolled by Duy on top of a separate preliminary clean-up topic. The resulting structure of the topics looked very sensible.

* sb/object-store: (27 commits)
  sha1_file: allow sha1_loose_object_info to handle arbitrary repositories
  sha1_file: allow map_sha1_file to handle arbitrary repositories
  sha1_file: allow map_sha1_file_1 to handle arbitrary repositories
  sha1_file: allow open_sha1_file to handle arbitrary repositories
  sha1_file: allow stat_sha1_file to handle arbitrary repositories
  sha1_file: allow sha1_file_name to handle arbitrary repositories
  sha1_file: add repository argument to sha1_loose_object_info
  sha1_file: add repository argument to map_sha1_file
  sha1_file: add repository argument to map_sha1_file_1
  sha1_file: add repository argument to open_sha1_file
  sha1_file: add repository argument to stat_sha1_file
  sha1_file: add repository argument to sha1_file_name
  sha1_file: allow prepare_alt_odb to handle arbitrary repositories
  sha1_file: allow link_alt_odb_entries to handle arbitrary repositories
  sha1_file: add repository argument to prepare_alt_odb
  sha1_file: add repository argument to link_alt_odb_entries
  sha1_file: add repository argument to read_info_alternates
  sha1_file: add repository argument to link_alt_odb_entry
  sha1_file: add raw_object_store argument to alt_odb_usable
  pack: move approximate object count to object store
  ...
2018-04-10 | Merge branch 'bc/object-id' | Junio C Hamano | 1 | -15/+16

Conversion from uchar[20] to struct object_id continues.

* bc/object-id: (36 commits)
  convert: convert to struct object_id
  sha1_file: introduce a constant for max header length
  Convert lookup_replace_object to struct object_id
  sha1_file: convert read_sha1_file to struct object_id
  sha1_file: convert read_object_with_reference to object_id
  tree-walk: convert tree entry functions to object_id
  streaming: convert istream internals to struct object_id
  tree-walk: convert get_tree_entry_follow_symlinks internals to object_id
  builtin/notes: convert static functions to object_id
  builtin/fmt-merge-msg: convert remaining code to object_id
  sha1_file: convert sha1_object_info* to object_id
  Convert remaining callers of sha1_object_info_extended to object_id
  packfile: convert unpack_entry to struct object_id
  sha1_file: convert retry_bad_packed_offset to struct object_id
  sha1_file: convert assert_sha1_type to object_id
  builtin/mktree: convert to struct object_id
  streaming: convert open_istream to use struct object_id
  sha1_file: convert check_sha1_signature to struct object_id
  sha1_file: convert read_loose_object to use struct object_id
  builtin/index-pack: convert struct ref_delta_entry to object_id
  ...
2018-04-02 | csum-file: rename hashclose() to finalize_hashfile() | Derrick Stolee | 1 | -1/+1

The hashclose() method behaves very differently depending on the flags parameter. In particular, the file descriptor is not always closed.

Perform a simple rename of "hashclose()" to "finalize_hashfile()" in preparation for functional changes.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-26 | packfile: keep prepare_packed_git() private | Nguyễn Thái Ngọc Duy | 1 | -1/+0

The reason callers have to call this is to make sure either the packed_git or packed_git_mru pointer is initialized since we don't do that by default. Sometimes it's hard to see this connection between where the function is called and where the packed_git pointer is used (sometimes in separate functions).

Keep this dependency internal because now all access to packed_git and packed_git_mru must go through get_xxx() wrappers.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-26 | packfile: add repository argument to prepare_packed_git | Stefan Beller | 1 | -1/+1

Add a repository argument to allow prepare_packed_git callers to be more specific about which repository to handle.

See commit "sha1_file: add repository argument to link_alt_odb_entry" for an explanation of the #define trick.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-26 | packfile: allow install_packed_git to handle arbitrary repositories | Stefan Beller | 1 | -1/+1

This conversion was done without the #define trick used in the earlier series refactoring to have better repository access, because this function is easy to review, as it only has one caller and all lines but the first two are converted.

We must not convert 'pack_open_fds' to be a repository specific variable, as it is used to monitor resource usage of the machine that Git executes on.

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-26 | object-store: move packed_git and packed_git_mru to object store | Stefan Beller | 1 | -2/+6

In a process with multiple repositories open, packfile accessors should be associated to a single repository and not shared globally. Move packed_git and packed_git_mru into the_repository and adjust callers to reflect this.

[nd: while at there, wrap access to these two fields in get_packed_git() and get_packed_git_mru(). This allows us to lazily initialize these fields without caller doing that explicitly]

Signed-off-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
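A sketch of what a caller looks like once these wrappers exist, assuming the get_packed_git() accessor described in the note above (the loop body is illustrative):

    struct packed_git *p;

    /* Before: for (p = packed_git; p; p = p->next) ... on the global list.
     * After: go through the lazily-initializing accessor. */
    for (p = get_packed_git(the_repository); p; p = p->next) {
            if (open_pack_index(p))
                    continue;   /* skip packs whose index cannot be opened */
            /* ... inspect p ... */
    }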
2018-03-21 | Merge branch 'rj/warning-uninitialized-fix' | Junio C Hamano | 1 | -2/+2

Compilation fix.

* rj/warning-uninitialized-fix:
  read-cache: fix an -Wmaybe-uninitialized warning
  -Wuninitialized: remove some 'init-self' workarounds
2018-03-20 | -Wuninitialized: remove some 'init-self' workarounds | Ramsay Jones | 1 | -2/+2

The 'self-initialised' variables construct (i.e. <type> var = var;) has been used to silence gcc '-W[maybe-]uninitialized' warnings. This has, unfortunately, caused MSVC to issue 'uninitialized variable' warnings. Also, using clang static analysis causes complaints about an 'Assigned value is garbage or undefined'.

There are six such constructs in the current codebase. Only one of the six causes gcc to issue a '-Wmaybe-uninitialized' warning (which will be addressed elsewhere). The remaining five 'init-self' gcc workarounds are noted below, along with the commit which introduced them:

  1. builtin/rev-list.c: 'reaches' and 'all', see commit 457f08a030
     ("git-rev-list: add --bisect-vars option.", 2007-03-21).

  2. merge-recursive.c:2064 'mrtree', see commit f120ae2a8e
     ("merge-recursive.c: mrtree in merge() is not used before set", 2007-10-29).

  3. fast-import.c:3023 'oe', see commit 85c62395b1
     ("fast-import: let importers retrieve blobs", 2010-11-28).

  4. fast-import.c:3006 'oe', see commit 28c7b1f7b7
     ("fast-import: add a get-mark command", 2015-07-01).

Remove the 'self-initialised' variable constructs noted above.

Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
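For illustration, the construct being removed and the plain declaration that replaces it look like this (a hypothetical variable, not a quote of the affected lines):

    /* Old workaround: "self-initialise" to hush gcc -Wmaybe-uninitialized,
     * which in turn upsets MSVC and clang's static analyzer. */
    struct object_entry *oe = oe;

    /* After the patch: an ordinary declaration; every path assigns oe
     * before it is used. */
    struct object_entry *oe;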
2018-03-14 | sha1_file: convert read_sha1_file to struct object_id | brian m. carlson | 1 | -3/+3

Convert read_sha1_file to take a pointer to struct object_id and rename it read_object_file. Do the same for read_sha1_file_extended.

Convert one use in grep.c to use the new function without any other code change, since the pointer being passed is a void pointer that is already initialized with a pointer to struct object_id.

Update the declaration and definitions of the modified functions, and apply the following semantic patch to convert the remaining callers:

@@
expression E1, E2, E3;
@@
- read_sha1_file(E1.hash, E2, E3)
+ read_object_file(&E1, E2, E3)

@@
expression E1, E2, E3;
@@
- read_sha1_file(E1->hash, E2, E3)
+ read_object_file(E1, E2, E3)

@@
expression E1, E2, E3, E4;
@@
- read_sha1_file_extended(E1.hash, E2, E3, E4)
+ read_object_file_extended(&E1, E2, E3, E4)

@@
expression E1, E2, E3, E4;
@@
- read_sha1_file_extended(E1->hash, E2, E3, E4)
+ read_object_file_extended(E1, E2, E3, E4)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-14 | sha1_file: convert read_object_with_reference to object_id | brian m. carlson | 1 | -7/+8

Convert read_object_with_reference to take pointers to struct object_id. Update the internals of the function accordingly.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-14 | sha1_file: convert sha1_object_info* to object_id | brian m. carlson | 1 | -5/+5

Convert sha1_object_info and sha1_object_info_extended to take pointers to struct object_id and rename them to use "oid" instead of "sha1" in their names. Update the declaration and definition and apply the following semantic patch, plus the standard object_id transforms:

@@
expression E1, E2;
@@
- sha1_object_info(E1.hash, E2)
+ oid_object_info(&E1, E2)

@@
expression E1, E2;
@@
- sha1_object_info(E1->hash, E2)
+ oid_object_info(E1, E2)

@@
expression E1, E2, E3;
@@
- sha1_object_info_extended(E1.hash, E2, E3)
+ oid_object_info_extended(&E1, E2, E3)

@@
expression E1, E2, E3;
@@
- sha1_object_info_extended(E1->hash, E2, E3)
+ oid_object_info_extended(E1, E2, E3)

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-03-06 | Merge branch 'bw/c-plus-plus' | Junio C Hamano | 1 | -8/+8

Avoid using identifiers that clash with C++ keywords. Even though it is not a goal to compile Git with C++ compilers, changes like this help use of code analysis tools that target C++ on our codebase.

* bw/c-plus-plus: (37 commits)
  replace: rename 'new' variables
  trailer: rename 'template' variables
  tempfile: rename 'template' variables
  wrapper: rename 'template' variables
  environment: rename 'namespace' variables
  diff: rename 'template' variables
  environment: rename 'template' variables
  init-db: rename 'template' variables
  unpack-trees: rename 'new' variables
  trailer: rename 'new' variables
  submodule: rename 'new' variables
  split-index: rename 'new' variables
  remote: rename 'new' variables
  ref-filter: rename 'new' variables
  read-cache: rename 'new' variables
  line-log: rename 'new' variables
  imap-send: rename 'new' variables
  http: rename 'new' variables
  entry: rename 'new' variables
  diffcore-delta: rename 'new' variables
  ...
2018-02-14 | object: rename function 'typename' to 'type_name' | Brandon Williams | 1 | -8/+8

Rename C++ keyword in order to bring the codebase closer to being able to be compiled with a C++ compiler.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-02 | csum-file: rename sha1file to hashfile | brian m. carlson | 1 | -14/+14

Rename struct sha1file to struct hashfile, along with all of its related functions.

The transformation in this commit was made by global search-and-replace.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2018-02-02 | fast-import: switch various uses of SHA-1 to the_hash_algo | brian m. carlson | 1 | -20/+22

Switch various uses of explicit calls to SHA-1 to use the_hash_algo. Convert various uses of 20 and the GIT_SHA1 constants as well.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
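A hedged example of the kind of change involved, using the rawsz/hexsz fields of the_hash_algo (the buffer names are made up):

    /* Before: SHA-1 sizes hard-coded. */
    unsigned char old_hash[20];
    char old_hex[40 + 1];

    /* After: sized and measured via the active hash algorithm. */
    unsigned char new_hash[GIT_MAX_RAWSZ];
    memset(new_hash, 0, the_hash_algo->rawsz);
    /* the_hash_algo->hexsz gives the hex length (40 for SHA-1). */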
2017-10-16 | refs: convert read_ref and read_ref_full to object_id | brian m. carlson | 1 | -1/+1

All but two of the call sites already have parameters using the hash parameter of struct object_id, so convert them to take a pointer to the struct directly. Also convert refs_read_refs_full, the underlying implementation.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-16 | refs: update ref transactions to use struct object_id | brian m. carlson | 1 | -2/+2

Update the ref transaction code to use struct object_id. Remove one NULL pointer check which was previously inserted around a dereference; since we now pass a pointer to struct object_id directly through, the code we're calling handles this for us.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-10-05 | Merge branch 'er/fast-import-dump-refs-on-checkpoint' | Junio C Hamano | 1 | -3/+3

The checkpoint command of "git fast-import" did not flush updates to refs and marks unless at least one object was created since the last checkpoint, which has been corrected, as these things can happen without any new object getting created.

* er/fast-import-dump-refs-on-checkpoint:
  fast-import: checkpoint: dump branches/tags/marks even if object_count==0
2017-09-29 | fast-import: checkpoint: dump branches/tags/marks even if object_count==0 | Eric Rannaud | 1 | -3/+3

The checkpoint command cycles packfiles if object_count != 0, a sensible test or there would be no pack files to write. Since 820b931012, the command also dumps branches, tags and marks, but still conditionally. However, it is possible for a command stream to modify refs or create marks without creating any new objects.

For example, reset a branch (and keep fast-import running):

    $ git fast-import
    reset refs/heads/master
    from refs/heads/master^

    checkpoint

but refs/heads/master remains unchanged.

Other example: a commit command that re-creates an object that already exists in the object database.

The man page also states that checkpoint "updates the refs" and that "placing a progress command immediately after a checkpoint will inform the reader when the checkpoint has been completed and it can safely access the refs that fast-import updated". This wasn't always true without this patch.

This fix unconditionally calls dump_{branches,tags,marks}() for all checkpoint commands. dump_branches() and dump_tags() are cheap to call in the case of a no-op.

Add tests to t9300 that observe the (non-packfiles) effects of checkpoint.

Signed-off-by: Eric Rannaud <e@nanocritical.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-09-25 | Merge branch 'jk/write-in-full-fix' | Junio C Hamano | 1 | -1/+1

Many codepaths did not diagnose write failures correctly when disks go full, due to their misuse of the write_in_full() helper function, which has been corrected.

* jk/write-in-full-fix:
  read_pack_header: handle signed/unsigned comparison in read result
  config: flip return value of store_write_*()
  notes-merge: use ssize_t for write_in_full() return value
  pkt-line: check write_in_full() errors against "< 0"
  convert less-trivial versions of "write_in_full() != len"
  avoid "write_in_full(fd, buf, len) != len" pattern
  get-tar-commit-id: check write_in_full() return against 0
  config: avoid "write_in_full(fd, buf, len) < len" pattern
2017-09-14 | avoid "write_in_full(fd, buf, len) != len" pattern | Jeff King | 1 | -1/+1

The return value of write_in_full() is either "-1", or the requested number of bytes[1]. If we make a partial write before seeing an error, we still return -1, not a partial value. This goes back to f6aa66cb95 (write_in_full: really write in full or return error on disk full., 2007-01-11).

So checking anything except "was the return value negative" is pointless. And there are a couple of reasons not to do so:

  1. It can do a funny signed/unsigned comparison. If your "len" is
     unsigned (e.g., a size_t) then the compiler will promote the "-1"
     to its unsigned variant.

     This works out for "!= len" (unless you really were trying to
     write the maximum size_t bytes), but is a bug if you check
     "< len" (an example of which was fixed recently in config.c).

     We should avoid promoting the mental model that you need to check
     the length at all, so that new sites are not tempted to copy us.

  2. Checking for a negative value is shorter to type, especially when
     the length is an expression.

  3. Linus says so. In d34cf19b89 (Clean up write_in_full() users,
     2007-01-11), right after the write_in_full() semantics were
     changed, he wrote:

       I really wish every "write_in_full()" user would just check
       against "<0" now, but this fixes the nasty and stupid ones.

     Appeals to authority aside, this makes it clear that writing it
     this way does not have an intentional benefit. It's a historical
     curiosity that we never bothered to clean up (and which was
     undoubtedly cargo-culted into new sites).

So let's convert these obviously-correct cases (this includes write_str_in_full(), which is just a wrapper for write_in_full()).

[1] A careful reader may notice there is one way that write_in_full() can return a different value. If we ask write() to write N bytes and get a return value that is _larger_ than N, we could return a larger total. But besides the fact that this would imply a totally broken version of write(), it would already invoke undefined behavior. Our internal remaining counter is an unsigned size_t, which means that subtracting too many bytes will wrap it around to a very large number. So we'll instantly begin reading off the end of the buffer, trying to write gigabytes (or petabytes) of data.

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
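The pattern change itself, reduced to a small hedged example (the error message is illustrative):

    /* Old, error-prone pattern: compares a ssize_t result against a
     * possibly unsigned length. */
    if (write_in_full(fd, buf, len) != len)
            die_errno("write error");

    /* Preferred after this series: only ask whether the call failed. */
    if (write_in_full(fd, buf, len) < 0)
            die_errno("write error");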
2017-08-23 | pack: move pack name-related functions | Jonathan Tan | 1 | -0/+1

Currently, sha1_file.c and cache.h contain many functions, both related to and unrelated to packfiles. This makes both files very large and causes an unclear separation of concerns.

Create a new file, packfile.c, to hold all packfile-related functions currently in sha1_file.c. It has a corresponding header packfile.h.

In this commit, the pack name-related functions are moved. Subsequent commits will move the other functions.

Signed-off-by: Jonathan Tan <jonathantanmy@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-24 | Merge branch 'ab/free-and-null' | Junio C Hamano | 1 | -4/+2

A common pattern to free a piece of memory and assign NULL to the pointer that used to point at it has been replaced with a new FREE_AND_NULL() macro.

* ab/free-and-null:
  *.[ch] refactoring: make use of the FREE_AND_NULL() macro
  coccinelle: make use of the "expression" FREE_AND_NULL() rule
  coccinelle: add a rule to make "expression" code use FREE_AND_NULL()
  coccinelle: make use of the "type" FREE_AND_NULL() rule
  coccinelle: add a rule to make "type" code use FREE_AND_NULL()
  git-compat-util: add a FREE_AND_NULL() wrapper around free(ptr); ptr = NULL
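FREE_AND_NULL() is defined in git-compat-util.h essentially as below; the before/after lines are an illustrative example of the pattern it replaces (the pointer name is made up):

    #define FREE_AND_NULL(p) do { free(p); (p) = NULL; } while (0)

    /* Before: the repeated two-step pattern. */
    free(entry_data);
    entry_data = NULL;

    /* After: one statement that is harder to get half-right. */
    FREE_AND_NULL(entry_data);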
2017-06-24 | Merge branch 'bw/config-h' | Junio C Hamano | 1 | -0/+1

Fix configuration codepath to pay proper attention to commondir that is used in multi-worktree situation, and isolate config API into its own header file.

* bw/config-h:
  config: don't implicitly use gitdir or commondir
  config: respect commondir
  setup: teach discover_git_directory to respect the commondir
  config: don't include config.h by default
  config: remove git_config_iter
  config: create config.h
2017-06-22 | Merge branch 'mh/fast-import-raise-default-depth' | Junio C Hamano | 1 | -1/+1

"fast-import" uses a default pack chain depth that is consistent with other parts of the system.

* mh/fast-import-raise-default-depth:
  fast-import: increase the default pack depth to 50
2017-06-16 | coccinelle: make use of the "type" FREE_AND_NULL() rule | Ævar Arnfjörð Bjarmason | 1 | -4/+2

Apply the result of the just-added coccinelle rule. This manually excludes a few occurrences, mostly things that resulted in many FREE_AND_NULL() on one line, that'll be manually fixed in a subsequent change.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-15 | config: don't include config.h by default | Brandon Williams | 1 | -0/+1

Stop including config.h by default in cache.h. Instead only include config.h in those files which require use of the config system.

Signed-off-by: Brandon Williams <bmwill@google.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-06-13 | Merge branch 'nd/fopen-errors' | Junio C Hamano | 1 | -3/+1

We often try to open a file for reading whose existence is optional, and silently ignore errors from open/fopen; report such errors if they are not due to missing files.

* nd/fopen-errors:
  mingw_fopen: report ENOENT for invalid file names
  mingw: verify that paths are not mistaken for remote nicknames
  log: fix memory leak in open_next_file()
  rerere.c: move error_errno() closer to the source system call
  print errno when reporting a system call error
  wrapper.c: make warn_on_inaccessible() static
  wrapper.c: add and use fopen_or_warn()
  wrapper.c: add and use warn_on_fopen_errors()
  config.mak.uname: set FREAD_READS_DIRECTORIES for Darwin, too
  config.mak.uname: set FREAD_READS_DIRECTORIES for Linux and FreeBSD
  clone: use xfopen() instead of fopen()
  use xfopen() in more places
  git_fopen: fix a sparse 'not declared' warning
2017-06-12 | fast-import: increase the default pack depth to 50 | Mike Hommey | 1 | -1/+1

In 618e613a70, 10 years ago, the default for pack depth used for git-pack-objects and git-repack was changed from 10 to 50, while leaving fast-import's default to 10.

There doesn't seem to be a reason besides oversight for the change not having happened in fast-import as well.

Interestingly, fast-import uses pack.depth when it's set, and the git-config manual says the default for pack.depth is 50. While the git-fast-import manual does say the default depth is 10, the inconsistency is also confusing.

Signed-off-by: Mike Hommey <mh@glandium.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-26 | use xfopen() in more places | Nguyễn Thái Ngọc Duy | 1 | -3/+1

xfopen()

 - provides error details
 - explains error on reading, or writing, or whatever operation
 - has l10n support
 - prints file name in the error

Some of these are missing in the places that are replaced with xfopen(), which is a clear win. In some other places, it's just less code (not as clearly a win as the previous case but still is).

The only slight regression is in remote-testsvn, where we don't report the file class (marks files) in the error messages anymore. But since this is a _test_ svn remote transport, I'm not too concerned.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
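A small before/after of the kind of call site being converted; the path variable and message are made up, but xfopen(path, mode) is the wrapper that dies with a detailed, translated error on failure:

    /* Before: each caller rolls its own (often less helpful) error. */
    FILE *f = fopen(path, "r");
    if (!f)
            die_errno("cannot open '%s'", path);

    /* After: xfopen() handles the failure reporting itself. */
    FILE *f2 = xfopen(path, "r");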
2017-05-08 | pack: convert struct pack_idx_entry to struct object_id | brian m. carlson | 1 | -15/+15

Convert struct pack_idx_entry to use struct object_id by changing the definition and applying the following semantic patch, plus the standard object_id transforms:

@@
struct pack_idx_entry E1;
@@
- E1.sha1
+ E1.oid.hash

@@
struct pack_idx_entry *E1;
@@
- E1->sha1
+ E1->oid.hash

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08 | Convert lookup_commit* to struct object_id | brian m. carlson | 1 | -2/+2

Convert lookup_commit, lookup_commit_or_die, lookup_commit_reference, and lookup_commit_reference_gently to take struct object_id arguments.

Introduce a temporary in parse_object_buffer in order to convert this function. This is required since in order to convert parse_object and parse_object_buffer, lookup_commit_reference_gently and lookup_commit_or_die would need to be converted. Not introducing a temporary would therefore require that lookup_commit_or_die take a struct object_id *, but lookup_commit would take unsigned char *, leaving a confusing and hard-to-use interface.

parse_object_buffer will lose this temporary in a later patch.

This commit was created with manual changes to commit.c, commit.h, and object.c, plus the following semantic patch:

@@
expression E1, E2;
@@
- lookup_commit_reference_gently(E1.hash, E2)
+ lookup_commit_reference_gently(&E1, E2)

@@
expression E1, E2;
@@
- lookup_commit_reference_gently(E1->hash, E2)
+ lookup_commit_reference_gently(E1, E2)

@@
expression E1;
@@
- lookup_commit_reference(E1.hash)
+ lookup_commit_reference(&E1)

@@
expression E1;
@@
- lookup_commit_reference(E1->hash)
+ lookup_commit_reference(E1)

@@
expression E1;
@@
- lookup_commit(E1.hash)
+ lookup_commit(&E1)

@@
expression E1;
@@
- lookup_commit(E1->hash)
+ lookup_commit(E1)

@@
expression E1, E2;
@@
- lookup_commit_or_die(E1.hash, E2)
+ lookup_commit_or_die(&E1, E2)

@@
expression E1, E2;
@@
- lookup_commit_or_die(E1->hash, E2)
+ lookup_commit_or_die(E1, E2)

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-08 | fast-import: convert to struct object_id | brian m. carlson | 1 | -158/+159

Convert the remaining parts of fast-import.c to use struct object_id. Convert several instances of get_sha1_hex to parse_oid_hex to avoid needing to specify constants. Convert other hardcoded values to named constants. Finally, use the is_empty_tree_oid function instead of a direct comparison against a fixed string.

Note that the odd computation with GIT_MAX_HEXSZ is due to the insertion of a slash between every two hex digits in the path, plus one for the terminating NUL.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
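For reference, a hedged sketch of why parse_oid_hex() removes the need for hard-coded length constants compared to get_sha1_hex() (the input variable is illustrative):

    struct object_id oid;
    const char *end;

    /* Old style: the caller implicitly depends on the 40-character length. */
    if (get_sha1_hex(input, oid.hash))
            die("invalid object name");
    input += 40;

    /* New style: parse_oid_hex() reports where parsing stopped, so no
     * length constant is needed. */
    if (parse_oid_hex(input, &oid, &end))
            die("invalid object name");
    input = end;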
2017-05-02 | fast-import: convert internal structs to struct object_id | brian m. carlson | 1 | -86/+96

Convert struct tree_entry_ms, struct branch, struct tag, and struct hash_list to use struct object_id by changing the definition and applying the following semantic patch, plus the standard object_id transforms:

@@
struct tree_entry_ms E1;
@@
- E1.sha1
+ E1.oid.hash

@@
struct tree_entry_ms *E1;
@@
- E1->sha1
+ E1->oid.hash

@@
struct branch E1;
@@
- E1.sha1
+ E1.oid.hash

@@
struct branch *E1;
@@
- E1->sha1
+ E1->oid.hash

@@
struct tag E1;
@@
- E1.sha1
+ E1.oid.hash

@@
struct tag *E1;
@@
- E1->sha1
+ E1->oid.hash

@@
struct hash_list E1;
@@
- E1.sha1
+ E1.oid.hash

@@
struct hash_list *E1;
@@
- E1->sha1
+ E1->oid.hash

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-26 | Merge branch 'jk/war-on-git-path' | Junio C Hamano | 1 | -1/+1

While handy, "git_path()" is a dangerous function to use, as a callsite that uses it safely one day can be broken by changes to other code that calls it. Reduction of its use continues.

* jk/war-on-git-path:
  am: drop "dir" parameter from am_state_init
  replace strbuf_addstr(git_path()) with git_path_buf()
  replace xstrdup(git_path(...)) with git_pathdup(...)
  use git_path_* helper functions
  branch: add edit_description() helper
  bisect: add git_path_bisect_terms helper
2017-04-20 | replace xstrdup(git_path(...)) with git_pathdup(...) | Jeff King | 1 | -1/+1

It's more efficient to use git_pathdup(), as it skips an extra copy of the path. And by removing some calls to git_path(), it makes it easier to audit for dangerous uses.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
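A hedged example of the substitution; the relative path is only for illustration:

    /* Before: format into git_path()'s shared static buffer, then copy it. */
    char *path = xstrdup(git_path("FETCH_HEAD"));

    /* After: git_pathdup() allocates the result directly. */
    char *path2 = git_pathdup("FETCH_HEAD");

    free(path);
    free(path2);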
2017-03-28 | odb_mkstemp: write filename into strbuf | Jeff King | 1 | -4/+5

The odb_mkstemp() function expects the caller to provide a fixed buffer to write the resulting tempfile name into. But it creates the template using snprintf without checking the return value. This means we could silently truncate the filename.

In practice, it's unlikely that the truncation would end in the template-pattern that mkstemp needs to open the file. So we'd probably end up failing either way, unless the path was specially crafted.

The simplest fix would be to notice the truncation and die. However, we can observe that most callers immediately xstrdup() the result anyway. So instead, let's switch to using a strbuf, which is easier for them (and isn't a big deal for the other 2 callers, who can just strbuf_release when they're done with it).

Note that many of the callers used static buffers, but this was purely to avoid putting a large buffer on the stack. We never passed the static buffers out of the function, so there's no complicated memory handling we need to change.

Signed-off-by: Jeff King <peff@peff.net>
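A sketch of a caller after this change, assuming the reworked odb_mkstemp() takes a strbuf plus a template pattern (the pattern shown is the pack-tempfile style; exact call sites may differ):

    struct strbuf tmp_file = STRBUF_INIT;
    int fd = odb_mkstemp(&tmp_file, "pack/tmp_pack_XXXXXX");

    /* ... write the pack data to fd ... */

    strbuf_release(&tmp_file);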
2017-03-28 | Merge branch 'jk/fast-import-cleanup' | Junio C Hamano | 1 | -9/+7

Code clean-up.

* jk/fast-import-cleanup:
  pack.h: define largest possible encoded object size
  encode_in_pack_object_header: respect output buffer length
  fast-import: use xsnprintf for formatting headers
  fast-import: use xsnprintf for writing sha1s