path: root/Makefile
Age    Commit message    (Author; files changed, lines -/+)
2017-06-22  Merge branch 'nd/fopen-errors'  (Junio C Hamano; 1 file changed, -1/+2)
Hotfix for a topic that is already in 'master'. * nd/fopen-errors: configure.ac: loosen FREAD_READS_DIRECTORIES test program
2017-06-19  Merge branch 'ab/pcre-v2'  (Junio C Hamano; 1 file changed, -8/+41)
Update "perl-compatible regular expression" support to enable JIT and also allow linking with the newer PCRE v2 library.

* ab/pcre-v2:
  grep: add support for PCRE v2
  grep: un-break building with PCRE >= 8.32 without --enable-jit
  grep: un-break building with PCRE < 8.20
  grep: un-break building with PCRE < 8.32
  grep: add support for the PCRE v1 JIT API
  log: add -P as a synonym for --perl-regexp
  grep: skip pthreads overhead when using one thread
  grep: don't redundantly compile throwaway patterns under threading
2017-06-15  configure.ac: loosen FREAD_READS_DIRECTORIES test program  (Jeff King; 1 file changed, -1/+2)
We added an FREAD_READS_DIRECTORIES Makefile knob long ago in cba22528f (Add compat/fopen.c which returns NULL on attempt to open directory, 2008-02-08) to handle systems where reading from a directory returned garbage. This works by catching the problem at the fopen() stage and returning NULL.

More recently, we found that there is a class of systems (including Linux) where fopen() succeeds but fread() fails. Since the solution is the same (having fopen return NULL), they use the same Makefile knob as of e2d90fd1c (config.mak.uname: set FREAD_READS_DIRECTORIES for Linux and FreeBSD, 2017-05-03).

This works fine except for one thing: the autoconf test in configure.ac to set FREAD_READS_DIRECTORIES actually checks whether fread succeeds. Which means that on Linux systems, the knob isn't set (and we even override the config.mak.uname default). t1308 catches the failure.

We can fix this by tweaking the autoconf test to cover both cases. In theory we might care about the distinction between the traditional "fread reads directories" case and the new "fopen opens directories". But since our solution catches the problem at the fopen stage either way, we don't actually need to know the difference. The "fopen" case is a superset.

This does mean the FREAD_READS_DIRECTORIES name is slightly misleading. Probably FOPEN_OPENS_DIRECTORIES would be more accurate. But it would be disruptive to simply change the name (people's existing build configs would fail), and it's not worth the complexity of handling both. Let's just add a comment in the knob description.

Reported-by: Øyvind A. Holm <sunny@sunbase.org>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
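For illustration, a minimal sketch of the kind of probe described above; this is not the literal configure.ac test program, and the exit-code convention (0 meaning "fopen() opens directories, so set the knob") is an assumption made for this example:

    /*
     * Exits 0 if fopen() happily opens a directory, i.e. the
     * "fopen opens directories" superset case discussed above.
     */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen(".", "r");  /* "." is always a directory */

        return f != NULL ? 0 : 1;
    }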
2017-06-05  Merge branch 'js/blame-lib'  (Junio C Hamano; 1 file changed, -0/+1)
The internal logic used in "git blame" has been libified to make it easier to use by cgit.

* js/blame-lib: (29 commits)
  blame: move entry prepend to libgit
  blame: move scoreboard setup to libgit
  blame: move scoreboard-related methods to libgit
  blame: move fake-commit-related methods to libgit
  blame: move origin-related methods to libgit
  blame: move core structures to header
  blame: create entry prepend function
  blame: create scoreboard setup function
  blame: create scoreboard init function
  blame: rework methods that determine 'final' commit
  blame: wrap blame_sort and compare_blame_final
  blame: move progress updates to a scoreboard callback
  blame: make sanity_check use a callback in scoreboard
  blame: move no_whole_file_rename flag to scoreboard
  blame: move xdl_opts flags to scoreboard
  blame: move show_root flag to scoreboard
  blame: move reverse flag to scoreboard
  blame: move contents_from to scoreboard
  blame: move copy/move thresholds to scoreboard
  blame: move stat counters to scoreboard
  ...
2017-06-04  Merge branch 'ab/sha1dc-maint'  (Junio C Hamano; 1 file changed, -1/+8)
The "collision detecting" SHA-1 implementation shipped with 2.13 was quite broken on some big-endian platforms and/or platforms that do not like unaligned fetches. Update to the upstream code which has already fixed these issues. * ab/sha1dc-maint: sha1dc: update from upstream
2017-06-02  Merge branch 'ab/grep-preparatory-cleanup'  (Junio C Hamano; 1 file changed, -4/+10)
The internal implementation of "git grep" has seen some clean-up.

* ab/grep-preparatory-cleanup: (31 commits)
  grep: assert that threading is enabled when calling grep_{lock,unlock}
  grep: given --threads with NO_PTHREADS=YesPlease, warn
  pack-objects: fix buggy warning about threads
  pack-objects & index-pack: add test for --threads warning
  test-lib: add a PTHREADS prerequisite
  grep: move is_fixed() earlier to avoid forward declaration
  grep: change internal *pcre* variable & function names to be *pcre1*
  grep: change the internal PCRE macro names to be PCRE1
  grep: factor test for \0 in grep patterns into a function
  grep: remove redundant regflags assignments
  grep: catch a missing enum in switch statement
  perf: add a comparison test of log --grep regex engines with -F
  perf: add a comparison test of log --grep regex engines
  perf: add a comparison test of grep regex engines with -F
  perf: add a comparison test of grep regex engines
  perf: emit progress output when unpacking & building
  perf: add a GIT_PERF_MAKE_COMMAND for when *_MAKE_OPTS won't do
  grep: add tests to fix blind spots with \0 patterns
  grep: prepare for testing binary regexes containing rx metacharacters
  grep: add a test helper function for less verbose -f \0 tests
  ...
2017-06-02  grep: add support for PCRE v2  (Ævar Arnfjörð Bjarmason; 1 file changed, -8/+28)
Add support for v2 of the PCRE API. This is a new major version of PCRE that came out in early 2015[1].

The regular expression syntax is the same, but while the API is similar, pretty much every function is either renamed or takes different arguments. Thus using it via entirely new functions makes sense, as opposed to trying to e.g. have one compile_pcre_pattern() that would call either PCRE v1 or v2 functions.

Git can now be compiled with either USE_LIBPCRE1=YesPlease or USE_LIBPCRE2=YesPlease, with USE_LIBPCRE=YesPlease currently being a synonym for the former. Providing both is a compile-time error.

With earlier patches to enable JIT for PCRE v1 the performance of the release versions of both libraries is almost exactly the same, with PCRE v2 being around 1% slower. However after I reported this to the pcre-dev mailing list[2] I got a lot of help with the API use from Zoltán Herczeg, who subsequently optimized some of the JIT functionality in v2 of the library.

Running the p7820-grep-engines.sh performance test against the latest Subversion trunk of both, with both them and git compiled as -O3, and the test run against linux.git, gives the following results. Just the /perl/ tests shown:

    $ GIT_PERF_REPEAT_COUNT=30 GIT_PERF_LARGE_REPO=~/g/linux
      GIT_PERF_MAKE_COMMAND='grep -q LIBPCRE2 Makefile && make -j8 USE_LIBPCRE2=YesPlease
      CC=~/perl5/installed/bin/gcc NO_R_TO_GCC_LINKER=YesPlease CFLAGS=-O3
      LIBPCREDIR=/home/avar/g/pcre2/inst LDFLAGS=-Wl,-rpath,/home/avar/g/pcre2/inst/lib ||
      make -j8 USE_LIBPCRE=YesPlease CC=~/perl5/installed/bin/gcc NO_R_TO_GCC_LINKER=YesPlease
      CFLAGS=-O3 LIBPCREDIR=/home/avar/g/pcre/inst LDFLAGS=-Wl,-rpath,/home/avar/g/pcre/inst/lib'
      ./run HEAD~5 HEAD~ HEAD p7820-grep-engines.sh
    [...]
    Test                                      HEAD~5            HEAD~                    HEAD
    -----------------------------------------------------------------------------------------------------------
    7820.3: perl grep 'how.to'                0.31(1.10+0.48)   0.21(0.35+0.56) -32.3%   0.21(0.34+0.55) -32.3%
    7820.7: perl grep '^how to'               0.56(2.70+0.40)   0.24(0.64+0.52) -57.1%   0.20(0.28+0.60) -64.3%
    7820.11: perl grep '[how] to'             0.56(2.66+0.38)   0.29(0.95+0.45) -48.2%   0.23(0.45+0.54) -58.9%
    7820.15: perl grep '(e.t[^ ]*|v.ry) rare' 1.02(5.77+0.42)   0.31(1.02+0.54) -69.6%   0.23(0.50+0.54) -77.5%
    7820.19: perl grep 'm(ú|u)lt.b(æ|y)te'    0.38(1.57+0.42)   0.27(0.85+0.46) -28.9%   0.21(0.33+0.57) -44.7%

See commit ("perf: add a comparison test of grep regex engines", 2017-04-19) for details on the machine the above test run was executed on. Here HEAD~5 is git with PCRE v1 without JIT, HEAD~ is PCRE v1 with JIT, and HEAD is PCRE v2 (also with JIT). See previous commits of mine mentioning p7820-grep-engines.sh for more details on the test setup.

For ease of readability, a different run just of HEAD~ (PCRE v1 with JIT vs. PCRE v2), again with just the /perl/ tests shown:

    [...]
    Test                                      HEAD~             HEAD
    ----------------------------------------------------------------------------------------
    7820.3: perl grep 'how.to'                0.21(0.42+0.52)   0.21(0.31+0.58) +0.0%
    7820.7: perl grep '^how to'               0.25(0.65+0.50)   0.20(0.31+0.57) -20.0%
    7820.11: perl grep '[how] to'             0.30(0.90+0.50)   0.23(0.46+0.53) -23.3%
    7820.15: perl grep '(e.t[^ ]*|v.ry) rare' 0.30(1.19+0.38)   0.23(0.51+0.51) -23.3%
    7820.19: perl grep 'm(ú|u)lt.b(æ|y)te'    0.27(0.84+0.48)   0.21(0.34+0.57) -22.2%

I.e. the two are either neck-and-neck or PCRE v2 pulls ahead; when it does, it is around 20% faster.

A brief note on thread safety: As noted in pcre2api(3) & pcre2jit(3) the compiled pattern can be shared between threads, but not some of the JIT context. However, the grep threading support does all pattern & JIT compilation in separate threads, so this code doesn't need to concern itself with thread safety.

See commit 63e7e9d8b6 ("git-grep: Learn PCRE", 2011-05-09) for the initial addition of PCRE v1. This change follows some of the same patterns it did (and which were discussed on list at the time), e.g. mocking up types with typedef instead of ifdef-ing them out when USE_LIBPCRE2 isn't defined. This adds some trivial memory use to the program, but makes the code look nicer.

1. https://lists.exim.org/lurker/message/20150105.162835.0666407a.en.html
2. https://lists.exim.org/lurker/thread/20170419.172322.833ee099.en.html

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
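To illustrate the "renamed functions, different arguments" point above, here is a minimal sketch of the PCRE v2 calling convention; it illustrates the library API only, it is not a copy of git's grep code, and the build command in the comment is an assumption:

    /* cc -o pcre2-demo pcre2-demo.c $(pcre2-config --libs8) */
    #define PCRE2_CODE_UNIT_WIDTH 8
    #include <pcre2.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        PCRE2_SPTR pat = (PCRE2_SPTR)"^how to";
        PCRE2_SPTR subj = (PCRE2_SPTR)"how to use pcre2";
        int errcode;
        PCRE2_SIZE erroff;
        pcre2_code *re;
        pcre2_match_data *md;
        int rc;

        /* every v1 entry point has a renamed v2 counterpart */
        re = pcre2_compile(pat, PCRE2_ZERO_TERMINATED, 0, &errcode, &erroff, NULL);
        if (!re)
            return 1;
        md = pcre2_match_data_create_from_pattern(re, NULL);
        rc = pcre2_match(re, subj, strlen((const char *)subj), 0, 0, md, NULL);
        printf("%s\n", rc >= 0 ? "match" : "no match");  /* negative rc: no match or error */

        pcre2_match_data_free(md);
        pcre2_code_free(re);
        return 0;
    }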
2017-06-02  grep: un-break building with PCRE >= 8.32 without --enable-jit  (Ævar Arnfjörð Bjarmason; 1 file changed, -0/+13)
Amend my change earlier in this series ("grep: add support for the PCRE v1 JIT API", 2017-04-11) to un-break the build on PCRE v1 versions later than 8.31 compiled without --enable-jit.

As explained in that change and a later compatibility change in this series ("grep: un-break building with PCRE < 8.32", 2017-05-10) the pcre_jit_exec() function is a faster path to execute the JIT. Unfortunately there's no compatibility stub for that function compiled into the library if pcre_config(PCRE_CONFIG_JIT, &ret) would return 0, and no macro that can be used to check for it, so the only portable option to support builds without --enable-jit is via a new NO_LIBPCRE1_JIT=UnfortunatelyYes Makefile option[1].

Another option would be to make the JIT opt-in via USE_LIBPCRE1_JIT=YesPlease, after all it's not a default option of PCRE v1. I think it makes more sense to make it opt-out since even though it's not a default option, most packagers of PCRE seem to turn it on by default, with the notable exception of the MinGW package.

Make the MinGW platform work by default by changing the build defaults to turn on NO_LIBPCRE1_JIT=UnfortunatelyYes. It is the only platform that turns on USE_LIBPCRE=YesPlease by default, see commit df5218b4c3 ("config.mak.uname: support MSys2", 2016-01-13) for that change.

1. "How do I support pcre1 JIT on all versions?" (https://lists.exim.org/lurker/thread/20170601.103148.10253788.en.html)
2. https://github.com/Alexpux/MINGW-packages/blob/master/mingw-w64-pcre/PKGBUILD (referenced from "Re: PCRE v2 compile error, was Re: What's cooking in git.git (May 2017, #01; Mon, 1)"; <alpine.DEB.2.20.1705021756530.3480@virtualbox>)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
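As a side note on the pcre_config() probe mentioned above, here is a hedged sketch of how a program can ask a PCRE v1 library (8.20 or later) at run time whether it was built with --enable-jit; it also illustrates why git has to decide at build time via the Makefile knob, since pcre_jit_exec() may not even exist in the library to link against:

    #include <pcre.h>
    #include <stdio.h>

    int main(void)
    {
        int jit = 0;

        /* pcre_config() reports how the library itself was built */
        if (pcre_config(PCRE_CONFIG_JIT, &jit) == 0 && jit)
            printf("libpcre1 built with --enable-jit\n");
        else
            printf("libpcre1 built without JIT support\n");
        return 0;
    }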
2017-05-30  Merge branch 'bp/sub-process-convert-filter'  (Junio C Hamano; 1 file changed, -0/+1)
Code from "conversion using external process" codepath has been extracted to a separate sub-process.[ch] module.

* bp/sub-process-convert-filter:
  convert: update subprocess_read_status() to not die on EOF
  sub-process: move sub-process functions into separate files
  convert: rename reusable sub-process functions
  convert: update generic functions to only use generic data structures
  convert: separate generic structures and variables from the filter specific ones
  convert: split start_multi_file_filter() into two separate functions
  pkt-line: annotate packet_writel with LAST_ARG_MUST_BE_NULL
  convert: move packet_write_line() into pkt-line as packet_writel()
  pkt-line: add packet_read_line_gently()
  pkt-line: fix packet_read_line() to handle len < 0 errors
  convert: remove erroneous tests for errno == EPIPE
2017-05-26  test-lib: add a PTHREADS prerequisite  (Ævar Arnfjörð Bjarmason; 1 file changed, -0/+1)
Add a PTHREADS prerequisite which is false when git is compiled with NO_PTHREADS=YesPlease. There's lots of custom code that runs when threading isn't available, but before this prerequisite there was no way to test it. Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-26  grep: change the internal PCRE macro names to be PCRE1  (Ævar Arnfjörð Bjarmason; 1 file changed, -2/+2)
Change the internal USE_LIBPCRE define & build options flag to use a naming convention ending in PCRE1, without changing the long-standing USE_LIBPCRE Makefile flag which enables this code.

This is in preparation for libpcre2 support, where having things like USE_LIBPCRE and USE_LIBPCRE2 in any more places than we absolutely need to for backwards compatibility with old Makefile arguments would be confusing.

In some ways it would be better to change everything that now uses USE_LIBPCRE to use USE_LIBPCRE1, and to make specifying USE_LIBPCRE (or --with-pcre) an error. This would impose a one-time burden on packagers of git to s/USE_LIBPCRE/USE_LIBPCRE1/ in their build scripts. However I'd like to leave the door open to making USE_LIBPCRE=YesPlease eventually mean USE_LIBPCRE2=YesPlease, i.e. once PCRE v2 is ubiquitous enough that it makes sense to make it the default.

This code and the USE_LIBPCRE Makefile argument was added in commit 63e7e9d8b6 ("git-grep: Learn PCRE", 2011-05-09). At the time there was no indication that the PCRE project would release an entirely new & incompatible API around 3 years later.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-25  blame: move origin-related methods to libgit  (Jeff Smith; 1 file changed, -0/+1)
Signed-off-by: Jeff Smith <whydoubt@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-22  sha1dc: update from upstream  (Ævar Arnfjörð Bjarmason; 1 file changed, -1/+8)
Update sha1dc from the latest version by the upstream maintainer[1].

This version includes a commit of mine which allows for replacing the local modifications done to the upstream files in git.git with macro definitions to monkeypatch it in place. It also brings in a change[2] upstream made for the breakage 2.13.0 introduced on SPARC and other platforms that forbid unaligned access[3].

This means that the code customizations done since the initial import in commit 28dc98e343 ("sha1dc: add collision-detecting sha1 implementation", 2017-03-16) can be done purely via Makefile definitions and by including the content of our own sha1dc_git.[ch] in sha1dc/sha1.c via a macro.

1. https://github.com/cr-marcstevens/sha1collisiondetection/commit/cc465543b310e5f59a1d534381690052e8509b22
2. https://github.com/cr-marcstevens/sha1collisiondetection/commit/33a694a9ee1b79c24be45f9eab5ac0e1aeeaf271
3. "Git 2.13.0 segfaults on Solaris SPARC due to DC_SHA1=YesPlease being on by default" (https://public-inbox.org/git/CACBZZX6nmKK8af0-UpjCKWV4R+hV-uk2xWXVA5U+_UQ3VXU03g@mail.gmail.com/)

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-21  perf: add a GIT_PERF_MAKE_COMMAND for when *_MAKE_OPTS won't do  (Ævar Arnfjörð Bjarmason; 1 file changed, -0/+3)
Add a GIT_PERF_MAKE_COMMAND variable to complement the existing GIT_PERF_MAKE_OPTS facility. This allows specifying an arbitrary shell command to execute instead of 'make'.

This is useful e.g. in cases where the name, semantics or defaults of a Makefile flag have changed over time. It can even be used to change the contents of the tree, useful for monkeypatching ancient versions of git to get them to build.

This opens Pandora's box in some ways: it's now possible to "jailbreak" the perf environment and e.g. modify the source tree via this arbitrary command instead of just issuing a custom "make" command, and such a command has to be re-entrant in the sense that subsequent perf runs will re-use the possibly modified tree.

It would be pointless to try to mitigate or work around that caveat in a tool purely aimed at Git developers, so this change makes no attempt to do so.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-21  Makefile & configure: reword inaccurate comment about PCRE  (Ævar Arnfjörð Bjarmason; 1 file changed, -2/+4)
Reword an outdated & inaccurate comment which suggests that only git-grep can use PCRE. This comment was added back when PCRE support was initially added in commit 63e7e9d8b6 ("git-grep: Learn PCRE", 2011-05-09), and was true at the time.

It hasn't been telling the full truth since git-log learned to use PCRE with --grep in commit 727b6fc3ed ("log --grep: accept --basic-regexp and --perl-regexp", 2012-10-03), and more importantly is likely to get more inaccurate over time as more use is made of PCRE in other areas.

Reword it to be more future-proof, and to more clearly explain that this enables user-initiated runtime behavior.

Copy/pasting this so much in configure.ac is lame; these Makefile-like flags aren't even used by autoconf, just the corresponding --with[out]-* options. But copy/pasting the comments that make sense for the Makefile to configure.ac where they make less sense is the pattern everything else follows in that file. I'm not going to war against that as part of this change, just following the existing pattern.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-05-15  sub-process: move sub-process functions into separate files  (Ben Peart; 1 file changed, -0/+1)
Move the sub-process functions into sub-process.h/c. Add documentation for the new module in Documentation/technical/api-sub-process.txt.

Signed-off-by: Ben Peart <benpeart@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-26  Merge branch 'mh/separate-ref-cache'  (Junio C Hamano; 1 file changed, -0/+1)
The internals of the refs API around the cached refs have been streamlined.

* mh/separate-ref-cache:
  do_for_each_entry_in_dir(): delete function
  files_pack_refs(): use reference iteration
  commit_packed_refs(): use reference iteration
  cache_ref_iterator_begin(): make function smarter
  get_loose_ref_cache(): new function
  get_loose_ref_dir(): function renamed from get_loose_refs()
  do_for_each_entry_in_dir(): eliminate `offset` argument
  refs: handle "refs/bisect/" in `loose_fill_ref_dir()`
  ref-cache: use a callback function to fill the cache
  refs: record the ref_store in ref_cache, not ref_dir
  ref-cache: introduce a new type, ref_cache
  refs: split `ref_cache` code into separate files
  ref-cache: rename `remove_entry()` to `remove_entry_from_dir()`
  ref-cache: rename `find_ref()` to `find_ref_entry()`
  ref-cache: rename `add_ref()` to `add_ref_entry()`
  refs_verify_refname_available(): use function in more places
  refs_verify_refname_available(): implement once for all backends
  refs_ref_iterator_begin(): new function
  refs_read_raw_ref(): new function
  get_ref_dir(): don't call read_loose_refs() for "refs/bisect"
2017-04-26  Merge branch 'jh/add-index-entry-optim'  (Junio C Hamano; 1 file changed, -0/+1)
"git checkout" that handles a lot of paths has been optimized by reducing the number of unnecessary checks of paths in the has_dir_name() function.

* jh/add-index-entry-optim:
  read-cache: speed up has_dir_name (part 2)
  read-cache: speed up has_dir_name (part 1)
  read-cache: speed up add_index_entry during checkout
  p0006-read-tree-checkout: perf test to time read-tree
  read-cache: add strcmp_offset function
2017-04-19  Merge branch 'jh/memihash-opt'  (Junio C Hamano; 1 file changed, -0/+1)
Hotfix for a topic that is already in 'master'. * jh/memihash-opt: p0004: make perf test executable t3008: skip lazy-init test on a single-core box test-online-cpus: helper to return cpu count name-hash: fix buffer overrun
2017-04-19  Merge branch 'nd/files-backend-git-dir'  (Junio C Hamano; 1 file changed, -0/+1)
The "submodule" specific field in the ref_store structure is replaced with a more generic "gitdir" that can later be used also when dealing with ref_store that represents the set of refs visible from the other worktrees.

* nd/files-backend-git-dir: (28 commits)
  refs.h: add a note about sorting order of for_each_ref_*
  t1406: new tests for submodule ref store
  t1405: some basic tests on main ref store
  t/helper: add test-ref-store to test ref-store functions
  refs: delete pack_refs() in favor of refs_pack_refs()
  files-backend: avoid ref api targeting main ref store
  refs: new transaction related ref-store api
  refs: add new ref-store api
  refs: rename get_ref_store() to get_submodule_ref_store() and make it public
  files-backend: replace submodule_allowed check in files_downcast()
  refs: move submodule code out of files-backend.c
  path.c: move some code out of strbuf_git_path_submodule()
  refs.c: make get_main_ref_store() public and use it
  refs.c: kill register_ref_store(), add register_submodule_ref_store()
  refs.c: flatten get_ref_store() a bit
  refs: rename lookup_ref_store() to lookup_submodule_ref_store()
  refs.c: introduce get_main_ref_store()
  files-backend: remove the use of git_path()
  files-backend: add and use files_ref_path()
  files-backend: add and use files_reflog_path()
  ...
2017-04-16  Merge branch 'ab/regen-perl-mak-with-different-perl'  (Junio C Hamano; 1 file changed, -0/+1)
Update the build dependency so that an update to /usr/bin/perl etc. result in recomputation of perl.mak file. * ab/regen-perl-mak-with-different-perl: perl: regenerate perl.mak if perl -V changes
2017-04-16  refs: split `ref_cache` code into separate files  (Michael Haggerty; 1 file changed, -0/+1)
The `ref_cache` code is currently too tightly coupled to `files-backend`, making the code harder to understand and making it awkward for new code to use `ref_cache` (as we indeed have planned). Start loosening that coupling by splitting `ref_cache` into a separate module.

This commit moves code, adds declarations, and changes the visibility of some functions, but doesn't otherwise change any code. The modules are still too tightly coupled, but the situation will be improved in subsequent commits.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-15  read-cache: add strcmp_offset function  (Jeff Hostetler; 1 file changed, -0/+1)
Add strcmp_offset() function to also return the offset of the first change. Add unit test and helper to verify. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
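A sketch of the behaviour described above; the signature and the exact return/offset conventions here are assumptions for illustration, not a copy of the function added by this commit:

    #include <stddef.h>

    /*
     * Like strcmp(), but also report the offset of the first byte
     * that differs (or of the terminating NUL if the strings match).
     */
    int strcmp_offset_example(const char *s1, const char *s2, size_t *first_change)
    {
        size_t k;

        for (k = 0; s1[k] == s2[k]; k++)
            if (!s1[k])
                break;
        *first_change = k;
        return (unsigned char)s1[k] - (unsigned char)s2[k];
    }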
2017-04-14  t/helper: add test-ref-store to test ref-store functions  (Nguyễn Thái Ngọc Duy; 1 file changed, -0/+1)
Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-04-12  test-online-cpus: helper to return cpu count  (Jeff Hostetler; 1 file changed, -0/+1)
Created helper executable to print the value of online_cpus() allowing multi-threaded tests to be skipped when appropriate. Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-30  Merge branch 'jk/make-coccicheck-detect-errors'  (Junio C Hamano; 1 file changed, -2/+10)
Build fix. * jk/make-coccicheck-detect-errors: Makefile: detect errors in running spatch
2017-03-29  perl: regenerate perl.mak if perl -V changes  (Ævar Arnfjörð Bjarmason; 1 file changed, -0/+1)
Change the perl/perl.mak build process so that the file is regenerated if the output of "perl -V" changes.

Before this change, updating e.g. /usr/bin/perl to a new major version would cause the next "make" command to fail, since perl.mak has perl library paths hardcoded from its first run.

Now the logic added in commit ee9be06770 ("perl: detect new files in MakeMaker builds", 2012-07-27) is extended to regenerate perl/perl.mak if there's any change to "perl -V". This will in some cases redundantly trigger perl/perl.mak to be re-made, e.g. if @INC is modified in ways the build process doesn't care about through sitecustomize.pl, but the common case is that we just do the right thing and re-generate perl/perl.mak when needed.

Signed-off-by: Ævar Arnfjörð Bjarmason <avarab@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-29  Makefile: detect errors in running spatch  (Jeff King; 1 file changed, -2/+10)
The "make coccicheck" target runs spatch against each source file. But it does so in a for loop, so "make" never sees the exit code of spatch. Worse, it redirects stderr to a log file, so the user has no indication of any failure. And then to top it all off, because we touched the patch file's mtime, make will refuse to repeat the command because it thinks the target is up-to-date.

So for example:

    $ make coccicheck SPATCH=does-not-exist
    SPATCH contrib/coccinelle/free.cocci
    SPATCH contrib/coccinelle/qsort.cocci
    SPATCH contrib/coccinelle/xstrdup_or_null.cocci
    SPATCH contrib/coccinelle/swap.cocci
    SPATCH contrib/coccinelle/strbuf.cocci
    SPATCH contrib/coccinelle/object_id.cocci
    SPATCH contrib/coccinelle/array.cocci

    $ make coccicheck SPATCH=does-not-exist
    make: Nothing to be done for 'coccicheck'.

With this patch, you get:

    $ make coccicheck SPATCH=does-not-exist
    SPATCH contrib/coccinelle/free.cocci
    /bin/sh: 4: does-not-exist: not found
    Makefile:2338: recipe for target 'contrib/coccinelle/free.cocci.patch' failed
    make: *** [contrib/coccinelle/free.cocci.patch] Error 1

It also dumps the log on failure, so any errors from spatch itself (like syntax errors in our .cocci files) will be seen by the user.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-28  Merge branch 'jh/memihash-opt'  (Junio C Hamano; 1 file changed, -0/+1)
The name-hash used for detecting paths that are different only in case (which matters on case-insensitive filesystems) has been optimized to take advantage of multi-threading when it makes sense.

* jh/memihash-opt:
  name-hash: add test-lazy-init-name-hash to .gitignore
  name-hash: add perf test for lazy_init_name_hash
  name-hash: add test-lazy-init-name-hash
  name-hash: perf improvement for lazy_init_name_hash
  hashmap: document memihash_cont, hashmap_disallow_rehash api
  hashmap: add disallow_rehash setting
  hashmap: allow memihash computation to be continued
  name-hash: specify initial size for istate.dir_hash table
2017-03-24  Merge branch 'jk/sha1dc'  (Junio C Hamano; 1 file changed, -2/+17)
The "detect attempt to create collisions" variant of SHA-1 implementation by Marc Stevens (CWI) and Dan Shumow (Microsoft) has been integrated and made the default.

* jk/sha1dc:
  Makefile: make DC_SHA1 the default
  t0013: add a basic sha1 collision detection test
  Makefile: add DC_SHA1 knob
  sha1dc: disable safe_hash feature
  sha1dc: adjust header includes for git
  sha1dc: add collision-detecting sha1 implementation
2017-03-24  name-hash: add test-lazy-init-name-hash  (Jeff Hostetler; 1 file changed, -0/+1)
Add t/helper/test-lazy-init-name-hash.c test code to demonstrate performance times for lazy_init_name_hash() using the original single-threaded and the new multi-threaded code paths.

Includes a --dump option to dump the created hashmaps to stdout. You can use this to run both code paths and confirm that they generate the same hashmaps.

Includes a --analyze option to analyze performance of both code paths over a range of index sizes to help you find a lower bound for the LAZY_THREAD_COST in name-hash.c.

For example, passing "-a 4000" will set "istate.cache_nr" to 4000 and then try the multi-threaded code -- probably giving 2 threads with 2000 entries each. It will then run both the single-threaded (1x4000) and the multi-threaded (2x2000) and compare the times. It will then repeat the test with 8000, 12000, etc. so that you can see the crossover.

Signed-off-by: Jeff Hostetler <jeffhost@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-17  Merge branch 'bc/sha1-header-selection-with-cpp-macros'  (Junio C Hamano; 1 file changed, -7/+5)
Our source code has used the SHA1_HEADER cpp macro after "#include" in the C code to switch among the SHA-1 implementations. Instead, list the exact header file names and switch among implementations using "#ifdef BLK_SHA1/#include "block-sha1/sha1.h"/.../#endif"; this helps some IDE tools. * bc/sha1-header-selection-with-cpp-macros: hash.h: move SHA-1 implementation selection into a header file
2017-03-17  Merge branch 'jk/interop-test'  (Junio C Hamano; 1 file changed, -0/+3)
Picking two versions of Git and running tests to make sure the older one and the newer one interoperate happily has now become possible. * jk/interop-test: t/interop: add test of old clients against modern git-daemon t: add an interoperability test harness
2017-03-17  Makefile: make DC_SHA1 the default  (Junio C Hamano; 1 file changed, -6/+10)
We used to use the SHA1 implementation from the OpenSSL library by default. As we are trying to be careful against collision attacks after the recent "shattered" announcement, switch the default to encourage people to use DC_SHA1 implementation instead. Those who want to use the implementation from OpenSSL can explicitly ask for it by OPENSSL_SHA1=YesPlease when running "make". Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-17  t0013: add a basic sha1 collision detection test  (Jeff King; 1 file changed, -0/+1)
We don't actually have a Git-object collision, so the best we can do is to run one of the shattered PDFs through test-sha1. This should trigger the collision check and die. In a sense this isn't really checking anything that the upstream sha1collisiondetection project doesn't cover already. But it at least makes sure that our build correctly uses the library. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-17  Makefile: add DC_SHA1 knob  (Jeff King; 1 file changed, -0/+10)
This knob lets you use the sha1dc implementation from:

    https://github.com/cr-marcstevens/sha1collisiondetection

which can detect certain types of collision attacks (even when we only see half of the colliding pair). So it mitigates any attack which consists of getting the "good" half of a collision into a trusted repository, and then later replacing it with the "bad" half. The "good" half is rejected by the victim's version of Git (and even if they run an old version of Git, any sha1dc-enabled git will complain loudly if it ever has to interact with the object).

The big downside is that it's slower than either the openssl or block-sha1 implementations. Here are some timings based off of linux.git:

  - compute sha1 over whole packfile
      sha1dc:   3.580s
      blk-sha1: 2.046s (-43%)
      openssl:  1.335s (-62%)

  - rev-list --all --objects
      sha1dc:   33.512s
      blk-sha1: 33.514s (+0.0%)
      openssl:  33.650s (+0.4%)

  - git log --no-merges -10000 -p
      sha1dc:   8.124s
      blk-sha1: 7.986s (-1.6%)
      openssl:  8.203s (+0.9%)

  - index-pack --verify
      sha1dc:   4m19s
      blk-sha1: 2m57s (-32%)
      openssl:  2m19s (-42%)

So overall the sha1 computation with collision detection is about 1.75x slower than block-sha1, and 2.7x slower than sha1. But of course most operations do more than just sha1. Normal object access isn't really slowed at all (both the +/- changes there are well within the run-to-run noise); any changes are drowned out by the other work Git is doing.

The most-affected operation is `index-pack --verify`, which is essentially just computing the sha1 on every object. This is similar to the `index-pack` invocation that the receiver of a push or fetch would perform. So clearly there's some extra CPU load here.

There will also be some latency for the user, though keep in mind that such an operation will generally be network bound (this is about a 1.2GB packfile). Some of that extra CPU is "free" in the sense that we use it while the pack is streaming in anyway. But most of it comes during the delta-resolution phase, after the whole pack has been received. So we can imagine that for this (quite large) push, the user might have to wait an extra 100 seconds over openssl (which is what we use now). If we assume they can push to us at 20Mbit/s, that's 480s for a 1.2GB pack, which is only 20% slower.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-15  hash.h: move SHA-1 implementation selection into a header file  (brian m. carlson; 1 file changed, -7/+5)
Many developers use functionality in their editors that allows for quick syntax checks, including warning about questionable constructs. This functionality allows rapid development with fewer errors. However, such functionality generally does not allow the specification of project-specific defines or command-line options.

Since the SHA1_HEADER include is not defined in such a case, developers see spurious errors when using these tools. Furthermore, there are known implementations of "cc" whose '#include' is unhappy with this construct.

Instead of using SHA1_HEADER, create a hash.h header and use #if and #elif to select the desired header. Have the Makefile pass an appropriate option to help the header select the right implementation to use.

[jc: make BLK_SHA1 the fallback default as discussed on list, e.g. <20170314201424.vccij5z2ortq4a4o@sigill.intra.peff.net>; also remove SHA1_HEADER and SHA1_HEADER_SQ that are no longer used].

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Reviewed-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
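The shape of the selection logic described above looks roughly like the following sketch; the exact macro and header names are abbreviated here for illustration and are not a verbatim copy of the new hash.h:

    /* hash.h-style dispatch: the Makefile passes exactly one SHA1_* define */
    #if defined(SHA1_APPLE)
    #include <CommonCrypto/CommonDigest.h>
    #elif defined(SHA1_OPENSSL)
    #include <openssl/sha.h>
    #elif defined(SHA1_DC)
    #include "sha1dc/sha1.h"
    #else /* block-sha1 is the fallback default */
    #include "block-sha1/sha1.h"
    #endif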
2017-03-10  t: add an interoperability test harness  (Jeff King; 1 file changed, -0/+3)
The current test suite is good at letting you test a particular version of Git. But it's not very good at letting you test _two_ versions and seeing how they interact (e.g., one cloning from the other). This commit adds a test harness that will build two arbitrary versions of git and make it easy to call them from inside your tests. See the README and the example script for details. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-03-10  Merge branch 'rj/remove-unused-mktemp'  (Junio C Hamano; 1 file changed, -5/+0)
Code cleanup. * rj/remove-unused-mktemp: wrapper.c: remove unused gitmkstemps() function wrapper.c: remove unused git_mkstemp() function
2017-02-28  wrapper.c: remove unused gitmkstemps() function  (Ramsay Jones; 1 file changed, -5/+0)
The last call to the mkstemps() function was removed in commit 659488326 ("wrapper.c: delete dead function git_mkstemps()", 22-04-2016). In order to support platforms without mkstemps(), this functionality was provided, along with a Makefile build variable (NO_MKSTEMPS), by the gitmkstemps() function. Remove the dead code, along with the defunct build machinery. Signed-off-by: Ramsay Jones <ramsay@ramsayjones.plus.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-27  Merge branch 'js/rebase-helper'  (Junio C Hamano; 1 file changed, -0/+1)
"git rebase -i" starts using the recently updated "sequencer" code. * js/rebase-helper: rebase -i: use the rebase--helper builtin rebase--helper: add a builtin helper for interactive rebases
2017-02-09  rebase--helper: add a builtin helper for interactive rebases  (Johannes Schindelin; 1 file changed, -0/+1)
Git's interactive rebase is still implemented as a shell script, despite its complexity. This implies that it suffers from the portability point of view, from lack of expressibility, and of course also from performance. The latter issue is particularly serious on Windows, where we pay a hefty price for relying so much on POSIX.

Unfortunately, being such a huge shell script also means that we missed the train when it would have been relatively easy to port it to C, and instead piled feature upon feature onto that poor script that originally never intended to be more than a slightly pimped cherry-pick in a loop.

To open the road toward better performance (in addition to all the other benefits of C over shell scripts), let's just start *somewhere*. The approach taken here is to add a builtin helper that at first intends to take care of the parts of the interactive rebase that are most affected by the performance penalties mentioned above.

In particular, after we spent all those efforts on preparing the sequencer to process rebase -i's git-rebase-todo scripts, we implement the `git rebase -i --continue` functionality as a new builtin, git-rebase--helper. Once that is in place, we can work gradually on tackling the rest of the technical debt.

Note that the rebase--helper needs to learn about the transient --ff/--no-ff options of git-rebase, as the corresponding flag is not persisted to, and re-read from, the state directory.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-02-08  add oidset API  (Jeff King; 1 file changed, -0/+1)
This is similar to many of our uses of sha1-array, but it overcomes one limitation of a sha1-array: when you are de-duplicating a large input with relatively few unique entries, sha1-array uses 20 bytes per non-unique entry. Whereas this set will use memory linear in the number of unique entries (albeit a few more than 20 bytes due to hashmap overhead). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
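A sketch of how the set is meant to be used for de-duplication, assuming the oidset.h interface this commit introduces (with oidset_insert() returning 1 when the oid was already present); this only compiles inside the git tree, and the helper below is hypothetical:

    #include "cache.h"
    #include "oidset.h"

    /* print each object id once, skipping duplicates */
    static void emit_unique(const struct object_id *oids, int nr)
    {
        struct oidset seen = OIDSET_INIT;
        int i;

        for (i = 0; i < nr; i++) {
            if (oidset_insert(&seen, &oids[i]))
                continue; /* already seen */
            printf("%s\n", oid_to_hex(&oids[i]));
        }
        oidset_clear(&seen);
    }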
2017-02-02  Merge branch 'bc/use-asciidoctor-opt'  (Junio C Hamano; 1 file changed, -0/+6)
Asciidoctor, an alternative reimplementation of AsciiDoc, still needs some changes to work with documents meant to be formatted with AsciiDoc. "make USE_ASCIIDOCTOR=YesPlease" to use it out of the box to document our pages is getting closer to reality.

* bc/use-asciidoctor-opt:
  Documentation: implement linkgit macro for Asciidoctor
  Makefile: add a knob to enable the use of Asciidoctor
  Documentation: move dblatex arguments into variable
  Documentation: add XSLT to fix DocBook for Texinfo
  Documentation: sort sources for gitman.texi
  Documentation: remove unneeded argument in cat-texi.perl
  Documentation: modernize cat-texi.perl
  Documentation: fix warning in cat-texi.perl
2017-02-02  Merge branch 'js/retire-relink'  (Junio C Hamano; 1 file changed, -1/+0)
Cruft removal. * js/retire-relink: relink: really remove the command relink: retire the command
2017-01-31  Merge branch 'js/difftool-builtin'  (Junio C Hamano; 1 file changed, -1/+1)
Rewrite a scripted porcelain "git difftool" in C. * js/difftool-builtin: difftool: hack around -Wzero-length-format warning difftool: retire the scripted version difftool: implement the functionality in the builtin difftool: add a skeleton for the upcoming builtin
2017-01-31  Merge branch 'rs/qsort-s'  (Junio C Hamano; 1 file changed, -0/+8)
A few codepaths had to rely on a global variable when sorting elements of an array because the qsort(3) API does not allow extra data to be passed to the comparison function. Use qsort_s() when natively available, and a fallback implementation of it when not, to eliminate the need, which is a prerequisite for making the codepath reentrant.

* rs/qsort-s:
  ref-filter: use QSORT_S in ref_array_sort()
  string-list: use QSORT_S in string_list_sort()
  perf: add basic sort performance test
  add QSORT_S
  compat: add qsort_s()
2017-01-25  relink: retire the command  (Johannes Schindelin; 1 file changed, -1/+0)
Back in the olden days, when all objects were loose and rubber boots were made out of wood, it made sense to try to share (immutable) objects between repositories. Ever since the arrival of pack files, it is but an anachronism. Let's move the script to the contrib/examples/ directory and no longer offer it. Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2017-01-23  compat: add qsort_s()  (René Scharfe; 1 file changed, -0/+8)
The function qsort_s() was introduced with C11 Annex K; it provides the ability to pass a context pointer to the comparison function, supports the convention of using a NULL pointer for an empty array and performs a few safety checks.

Add an implementation based on compat/qsort.c for platforms that lack a native standards-compliant qsort_s() (i.e. basically everyone). It doesn't perform the full range of possible checks: It uses size_t instead of rsize_t and doesn't check nmemb and size against RSIZE_MAX because we probably don't have the restricted size type defined. For the same reason it returns int instead of errno_t.

Signed-off-by: Rene Scharfe <l.s.r@web.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
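A sketch of the calling convention this enables: the comparison function receives a context pointer, so no file-scope global is needed to parameterize the sort. The three-argument comparator follows the C11 Annex K argument order described above; the call site shown in the trailing comment assumes the compat prototype (size_t sizes, int return value):

    #include <string.h>
    #include <strings.h>

    struct sort_ctx {
        int ignore_case;
    };

    static int cmp_name(const void *a_, const void *b_, void *ctx_)
    {
        const char *a = *(const char *const *)a_;
        const char *b = *(const char *const *)b_;
        struct sort_ctx *ctx = ctx_;

        return ctx->ignore_case ? strcasecmp(a, b) : strcmp(a, b);
    }

    /*
     * Typical call site, with char *names[n]:
     *
     *     struct sort_ctx ctx = { 1 };
     *     qsort_s(names, n, sizeof(*names), cmp_name, &ctx);
     */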
2017-01-23  Makefile: add a knob to enable the use of Asciidoctor  (brian m. carlson; 1 file changed, -0/+6)
While Git has traditionally built its documentation using AsciiDoc, some people wish to use Asciidoctor for speed or other reasons. Add a Makefile knob, USE_ASCIIDOCTOR, that sets various options in order to produce acceptable output. For HTML output, XHTML5 was chosen, since the AsciiDoc options also produce XHTML, albeit XHTML 1.1.

Asciidoctor does not have built-in support for the linkgit macro, but it is available using the Asciidoctor Extensions Lab. Add a macro to enable the use of this extension if it is available. Without it, the linkgit macros are emitted into the output.

Signed-off-by: brian m. carlson <sandals@crustytoothpaste.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>