Last minute fixes to two fixups merged to 'master' recently.
* js/pwd-var-vs-pwd-cmd-fix:
t0021, t5615: use $PWD instead of $(pwd) in PATH-like shell variables
|
|
Test portability improvements and optimization for an
already-graduated topic.
* ls/filter-process:
t0021: remove debugging cruft
|
|
The redirection of the standard error stream to a temporary file is
leftover cruft from debugging. Remove it.
Besides, folks on Windows report that the test is flaky with this
redirection: something gets confused, the merely-redirected-to file
gets marked as delete-pending by git.exe, and "git checkout" finishes
with a non-zero exit status. Windows folks may want to figure that one
out, but for the purpose of this test, it shouldn't become a
show-stopper.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We have to use $PWD instead of $(pwd) because on Windows the latter
would add a C: style path to bash's Unix-style $PATH variable, which
becomes confused by the colon after the drive letter. ($PWD is a
Unix-style path.)
In the case of GIT_ALTERNATE_OBJECT_DIRECTORIES, bash on Windows
assembles a Unix-style path list with colons as separators. It
converts the value to a Windows-style path list, with semicolons as
path separators, when it forwards the variable to git.exe. The same
confusion happens when bash's original value is contaminated with
Windows-style paths.
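A minimal sketch of the difference (paths and usage are illustrative,
not taken from the actual tests):

    # PATH="$(pwd):$PATH"   <- on Windows, $(pwd) can yield C:/..., and that ':' confuses $PATH
    PATH="$PWD:$PATH" &&    # $PWD stays a Unix-style path
    export PATH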
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Test portability improvements and cleanups for t0021.
* jk/filter-process-fix:
t0021: fix filehandle usage on older perl
t0021: use $PERL_PATH for rot13-filter.pl
t0021: put $TEST_ROOT in $PATH
t0021: use write_script to create rot13 shell script
|
|
Avoid unwanted coding patterns (prodigal use of pipelines), and in
particular a useless use of cat.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Jeff King <peff@peff.net>
|
|
Some versions of uniq -c write the count left-justified, other versions
write it right-justified. Be prepared for both kinds.
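One possible normalization, shown as a hedged sketch with illustrative
file names (not necessarily what this patch does): strip leading blanks
and squeeze the remaining runs of spaces before comparing.

    sort trace.log | uniq -c | sed "s/^ *//" | tr -s " " >actual &&
    test_cmp expect actual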
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Jeff King <peff@peff.net>
|
|
The rot13-filter.pl script hardcodes "#!/usr/bin/perl", and
does not respect $PERL_PATH at all. That is a problem if the
system does not have perl at that path, or if it has a perl
that is too old to run a complicated script like the
rot13-filter (but PERL_PATH points to a more modern one).
We can fix this by using write_script() to create a new copy
of the script with the correct #!-line. In theory we could
move the whole script inside t0021-conversion.sh rather than
having it as an auxiliary file, but it is long enough that doing so
would just make things harder to read.
As a bonus, we can stop using the full path to the script in
the filter-process config we add (because the trash
directory is in our PATH). Not only is this shorter, but it
sidesteps any shell-quoting issues. The original was broken
when $TEST_DIRECTORY contained a space, because it was
interpolated in the outer script.
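A hedged sketch of the write_script() approach (paths approximate): the
helper writes a "#!" line for the given interpreter and marks the
result executable.

    write_script rot13-filter.pl "$PERL_PATH" \
            <"$TEST_DIRECTORY"/t0021/rot13-filter.pl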
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
We create a rot13.sh script in the trash directory, but need
to call it by its full path when we have moved our cwd to
another directory. Let's just put $TEST_ROOT in our $PATH so
that the script is always found.
This is a minor convenience for rot13.sh, but will be a
major one when we switch rot13-filter.pl to a script in the
same directory, as it means we will not have to deal with
shell quoting inside the filter-process config.
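A minimal sketch of the idea, assuming $TEST_ROOT points at the trash
directory:

    PATH="$TEST_ROOT:$PATH" &&   # scripts dropped into the test root are found from any cwd
    export PATH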
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This avoids us fooling around with $SHELL_PATH and the
executable bit ourselves.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Git's clean/smudge mechanism invokes an external filter process for
every single blob that is affected by a filter. If Git filters a lot of
blobs then the startup time of the external filter processes can become
a significant part of the overall Git execution time.
In a preliminary performance test this developer used a clean/smudge
filter written in golang to filter 12,000 files. This process took 364s
with the existing filter mechanism and 5s with the new mechanism. See
details here: https://github.com/github/git-lfs/pull/1382
This patch adds the `filter.<driver>.process` string option which, if
used, keeps the external filter process running and processes all blobs
with the packet format (pkt-line) based protocol over standard input and
standard output. The full protocol is explained in detail in
`Documentation/gitattributes.txt`.
A few key decisions:
* The long running filter process is referred to as filter protocol
version 2 because the existing single shot filter invocation is
considered version 1.
* Git sends a welcome message and expects a response right after the
external filter process has started. This ensures that Git will not
hang if a version 1 filter is incorrectly used with the
filter.<driver>.process option for version 2 filters. In addition,
Git can detect this kind of error and warn the user.
* The status of a filter operation (e.g. "success" or "error") is set
before the actual response and (if necessary!) re-set after the
response. The advantage of this two step status response is that if
the filter detects an error early, then the filter can communicate
this and Git does not even need to create structures to read the
response.
* All status responses are pkt-line lists terminated with a flush
packet. This allows us to send other status fields with the same
protocol in the future.
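A hedged sketch of opting a driver into the long-running protocol; the
"lfs" driver name and the git-lfs command are illustrative (git-lfs is
the filter that motivated this work), and any filter speaking protocol
v2 would do.

    git config filter.lfs.process "git-lfs filter-process" &&
    git config filter.lfs.required true &&
    echo "*.bin filter=lfs" >>.gitattributes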
Helped-by: Martin-Louis Bright <mlbright@gmail.com>
Reviewed-by: Jakub Narebski <jnareb@gmail.com>
Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Use `test_config` to set the config, check that files are empty with
`test_must_be_empty`, compare files with `test_cmp`, and remove spaces
after ">" and "<".
Please note that the "rot13" filter configured in "setup" keeps using
`git config` instead of `test_config` because subsequent tests might
depend on it.
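A hedged sketch of the resulting style (driver and file names are
illustrative):

    test_config filter.devnull.clean "cat >/dev/null" &&   # scoped to one test, unset automatically
    test_must_be_empty filtered-output &&                  # instead of ad-hoc emptiness checks
    test_cmp expect actual                                 # instead of raw cmp/diff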
Reviewed-by: Stefan Beller <sbeller@google.com>
Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
When accessing a blob for a diff, we may try to reuse file
contents in the working tree, under the theory that it is
faster to mmap those file contents than it would be to
extract the content from the object database.
When we have to filter those contents, though, that
assumption does not hold. Even for our internal conversions
like CRLF, we have to allocate and fill a new buffer anyway.
But much worse, for external clean filters we have to exec
an arbitrary script, and we have no idea how expensive it
may be to run.
So let's skip this optimization when conversion into git's
"clean" form is required. This applies whenever the
"want_file" flag is false. When it's true, the caller
actually wants the smudged worktree contents, which the
reused file by definition already has (in fact, this is a
key optimization going the other direction, since reusing
the worktree file there lets us skip smudge filters).
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Once a lower-priority configuration file defines a clean or smudge
filter, there is no convenient way to override it to produce as-is
output. Even though the configuration mechanism implements "the
last one wins" semantics, you cannot set them to an empty string and
expect them to work, as apply_filter() would try to run the empty
string as an external command and fail. The conversion is not done,
but the function would still report a failure to convert.
Even though resetting the variable to "cat" (i.e. pass the data back
as-is and report success) is an obvious and a viable way to solve
this, it is wasteful to spawn an external process just as a
workaround.
Instead, teach apply_filter() to treat an empty string as a no-op
filter that always succeeds and returns its input as-is, without
conversion.
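A hedged sketch of the override this enables (the driver name and the
globally configured command are illustrative):

    # a lower-priority file (e.g. ~/.gitconfig) set up a clean filter ...
    git config --global filter.scrub.clean scrub-secrets
    # ... and a repository can now turn it off without spawning "cat"
    git config --local filter.scrub.clean ""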
Signed-off-by: Lars Schneider <larsxschneider@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The clean/smudge interface did not work well when filtering an
empty contents (failed and then passed the empty input through).
It can be argued that a filter that produces anything but empty for
an empty input is nonsense, but if the user wants to do strange
things, then why not?
* jh/filter-empty-contents:
sha1_file: pass empty buffer to index empty file
|
|
We are explicitly ignoring SIGPIPE, as we fully expect that the
filter program may not read our output fully. Ignore EPIPE that
may come from writing to it as well.
A new test was stolen from Jeff's suggestion.
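A hedged sketch of such a test (sizes and names illustrative): the
clean command emits its output without ever reading its input, so git's
write to the filter sees EPIPE, which must now be tolerated.

    echo "big filter=epipe" >.gitattributes &&
    test_config filter.epipe.clean "echo xyzzy" &&
    for i in $(test_seq 1 1024); do printf "%1024d" 1; done >big &&   # ~1MB, enough to fill the pipe
    git add big &&
    echo xyzzy >expect &&
    git cat-file blob :big >actual &&
    test_cmp expect actual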
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
`git add` of an empty file with a filter pops complaints from
`copy_fd` about a bad file descriptor.
This traces back to these lines in sha1_file.c:index_core:
    if (!size) {
            ret = index_mem(sha1, NULL, size, type, path, flags);
The problem here is that content to be added to the index can be
supplied from an fd, or from a memory buffer, or from a pathname. This
call is supplying a NULL buffer pointer and a zero size.
Downstream logic takes the complete absence of a buffer to mean the
data is to be found elsewhere -- for instance, these lines from convert.c:
    if (params->src) {
            write_err = (write_in_full(child_process.in, params->src, params->size) < 0);
    } else {
            write_err = copy_fd(params->fd, child_process.in);
    }
If there's a buffer, write from that; otherwise the data must be
coming from an open fd. Perfectly reasonable logic in a routine that's
going to write from either a buffer or an fd.
So change `index_core` to supply an empty buffer when indexing an empty
file.
There's a patch out there that instead changes the logic quoted above to
take a `-1` fd to mean "use the buffer", but it seems to me that the
distinction between a missing buffer and an empty one carries intrinsic
semantics, whereas the logic change merely adapts the code to handle
incorrect arguments.
Signed-off-by: Jim Hill <gjthill@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The data is streamed to the filter process anyway. Better avoid mapping
the file if possible. This is especially useful if a clean filter
reduces the size, for example if it computes a sha1 for binary data,
like git media. The file size that the previous implementation could
handle was limited by the available address space; large files for
example could not be handled with (32-bit) msysgit. The new
implementation can filter files of any size as long as the filter output
is small enough.
The new code path is only taken if the filter is required. The filter
consumes data directly from the fd. If it fails, the original data is
not immediately available. The condition can easily be handled as
a fatal error, which is expected for a required filter anyway.
If the filter was not required, the condition would need to be handled
in a different way, like seeking to 0 and reading the data. But this
would require more restructuring of the code and is probably not worth
it. The obvious approach of falling back to reading all data would not
help achieving the main purpose of this patch, which is to handle large
files with limited address space. If reading all data is an option, we
can simply take the old code path right away and mmap the entire file.
The environment variable GIT_MMAP_LIMIT, which was introduced in a
previous commit, is used to test that the expected code path is taken.
A related test that exercises required filters is modified to verify
that the data actually has been modified on its way from the file system
to the object store.
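A hedged sketch along those lines (sizes, the limit value, and the
driver name approximate the test described above):

    git config filter.devnull.clean "cat >/dev/null" &&
    git config filter.devnull.required true &&
    for i in $(test_seq 1 30); do printf "%1048576d" 1; done >30MB &&
    echo "30MB filter=devnull" >.gitattributes &&
    # with a tiny mmap limit, adding the file must stream through the filter's fd
    GIT_MMAP_LIMIT=1m git add 30MB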
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Two test scripts (t0021 and t5551) had copy-and-pasted code to set
the EXPENSIVE prerequisite. Use the test_lazy_prereq helper to define
it once in the common t/test-lib.sh.
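A sketch of the pattern (the exact condition in test-lib.sh may
differ):

    # in t/test-lib.sh: evaluated lazily, the first time a test asks for EXPENSIVE
    test_lazy_prereq EXPENSIVE '
            test -n "$GIT_TEST_LONG"
    '

    # in t0021/t5551:
    test_expect_success EXPENSIVE 'filter a very large file' '
            : this body only runs when GIT_TEST_LONG is set
    '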
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Checking out 2GB or more through an external filter (see test) fails
on Mac OS X 10.8.4 (12E55) for a 64-bit executable with:
error: read from external filter cat failed
error: cannot feed the input to external filter cat
error: cat died of signal 13
error: external filter cat failed 141
error: external filter cat failed
The reason is that read() immediately returns with EINVAL when asked
to read more than 2GB. According to POSIX [1], if the value of
nbyte passed to read() is greater than SSIZE_MAX, the result is
implementation-defined. The write function has the same restriction
[2]. Since OS X still supports running 32-bit executables, the
32-bit limit (SSIZE_MAX = INT_MAX = 2GB - 1) seems to be also
imposed on 64-bit executables under certain conditions. For write,
the problem has been addressed earlier [6c642a].
Address the problem for read() and write() differently, by limiting
the size of IO chunks unconditionally on all platforms in xread() and
xwrite(). Large chunks only cause problems, such as added latency when
killing the process, even if OS X were not buggy. Doing IO in
reasonably sized smaller chunks should have no negative impact on
performance.
The compat wrapper clipped_write() introduced earlier [6c642a] is
not needed anymore. It will be reverted in a separate commit. The
new test catches read and write problems.
Note that 'git add' exits with 0 even if it prints filtering errors
to stderr. The test, therefore, checks stderr. 'git add' should
probably be changed (sometime in another commit) to exit with
nonzero if filtering fails. The test could then be changed to use
test_must_fail.
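A minimal sketch of that stderr check (file names illustrative):

    git add huge 2>err &&   # may exit 0 even when the filter errors out
    ! test -s err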
Thanks to the following people for suggestions and testing:
Johannes Sixt <j6t@kdbg.org>
John Keeping <john@keeping.me.uk>
Jonathan Nieder <jrnieder@gmail.com>
Kyle J. McKay <mackyle@gmail.com>
Linus Torvalds <torvalds@linux-foundation.org>
Torsten Bögershausen <tboegi@web.de>
[1] http://pubs.opengroup.org/onlinepubs/009695399/functions/read.html
[2] http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html
[6c642a] commit 6c642a878688adf46b226903858b53e2d31ac5c3
compate/clipped-write.c: large write(2) fails on Mac OS X/XNU
Signed-off-by: Steffen Prohaska <prohaska@zib.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
By default, a missing filter driver or a failure from the filter driver is
not an error, but merely makes the filter operation a no-op pass through.
This is useful when the content filter mechanism is used to massage
the content into a shape that is more convenient for the platform,
filesystem, and the user to use, not to turn something unusable into
a usable form.
However, we could also use the content filtering mechanism to store
content that cannot be directly used in the repository (e.g. a UUID
that refers to the true content stored outside git, or encrypted
content) and turn it into a usable form upon checkout (e.g. download
the external content, or decrypt the encrypted content). For such a
use case, the content cannot be used when the filter driver fails, and
we need a way to tell Git to abort the whole operation for such a
failing or missing filter driver.
Add a new "filter.<driver>.required" configuration variable to mark the
second use case. When it is set, git will abort the operation when the
filter driver does not exist or exits with a non-zero status code.
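A hedged configuration sketch; the driver name and commands are
illustrative stand-ins for e.g. encrypt/decrypt or fetch/upload
helpers.

    git config filter.crypt.clean  encrypt-blob &&
    git config filter.crypt.smudge decrypt-blob &&
    git config filter.crypt.required true   # a missing or failing driver now aborts the operation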
Signed-off-by: Jehan Bing <jehan@orb.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The last line of the test file "expanded-keywords" ended in a newline,
which is a valid terminator for ident. Use printf instead of echo to omit
it and thus really test if a file that ends unexpectedly in the middle of
an ident tag is handled properly.
Also take the opportunity to calculate the expected ID dynamically
instead of hardcoding it into the test script. This should make future
changes easier.
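A hedged sketch of both changes (content and names illustrative):
printf leaves the last line unterminated, and the expected expansion is
derived from the staged blob instead of a hard-coded hash.

    printf "%s" "\$Id: truncated" >>expanded-keywords &&   # no trailing newline
    git add expanded-keywords &&
    id=$(git rev-parse --verify :expanded-keywords) &&
    expanded="\$Id: $id \$"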
Signed-off-by: Rene Scharfe <rene.scharfe@lsrfire.ath.cx>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The fake filter did not read from the standard input at all,
which caused the calling side to die with SIGPIPE, depending
on the timing.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Filtering to support keyword expansion may need the name of
the file being filtered. In particular, to support p4 keywords
like
$File: //depot/product/dir/script.sh $
the smudge filter needs to know the name of the file it is
smudging.
Allow "%f" in the custom filter command line specified in the
configuration. This will be substituted with the filename, wrapped in
a pair of single quotes, before the command is passed to the shell.
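A hedged example of the new placeholder; the p4-style helper command is
an illustrative assumption.

    # %f is replaced with the path being filtered, single-quoted for the shell
    git config filter.p4ident.smudge "p4_keyword_expand %f" &&
    echo "*.sh filter=p4ident" >>.gitattributes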
Signed-off-by: Pete Wyckoff <pw@padd.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
If there are foreign $Id$ keywords in the repository, they are most
likely there for a reason. Let's keep them on checkout (which is also
what the documentation indicates). Foreign $Id$ keywords are now
recognized by there being multiple space-separated fields in $Id:xxxxx$.
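A hedged sketch of the behavior (file content illustrative): a keyword
whose payload has more than one space-separated field is treated as
foreign and survives checkout.

    printf '$Id: imported-from-elsewhere 1.23 $\n' >foreign.c &&
    git add foreign.c &&                       # stored verbatim; no ident attribute set yet
    echo "foreign.c ident" >.gitattributes &&
    rm foreign.c &&
    git checkout -- foreign.c &&
    grep "imported-from-elsewhere 1.23" foreign.c   # multiple fields => left alone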
Signed-off-by: Henrik Grubbström <grubba@grubba.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The code to contract $Id:xxxxx$ strings could eat an arbitrary amount
of source text if the terminating $ was lost. It now refuses to
contract $Id:xxxxx$ strings spanning multiple lines.
Signed-off-by: Henrik Grubbström <grubba@grubba.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
On Windows, we need the shebang line to correctly invoke shell scripts
via a POSIX shell, except when the script is invoked via 'sh -c'
because sh (a bash) does "the right thing". But the clean and smudge
filters will not always be invoked via 'sh -c'; to future-proof, we
should mark the one in t0021-conversion with #!$SHELL_PATH.
Signed-off-by: Johannes Sixt <j6t@kdbg.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Solaris' /usr/bin/tr doesn't seem to like multiple character
ranges in brackets (it simply prints "Bad string").
Instead, let's just enumerate the transformation we want.
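The enumerated form looks roughly like this (line-splitting is
cosmetic):

    # rot13 without bracketed ranges, which Solaris tr rejects with "Bad string"
    tr \
      'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ' \
      'nopqrstuvwxyzabcdefghijklmNOPQRSTUVWXYZABCDEFGHIJKLM'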
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This test uses a rot13 filter, which is its own inverse. It tested only
that the content was the same as the original after both the 'clean' and
the 'smudge' filter were applied. This way it would not detect whether
any filter was run at all. Hence, here we add another test that checks
that the repository contains content that was processed by the 'clean'
filter.
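A hedged sketch of such a check (file names illustrative): the blob as
stored in the repository must match an independently rot13'd copy of
the worktree file.

    ./rot13.sh <test.txt >expected-in-repo &&
    git cat-file blob :test.txt >stored &&
    test_cmp expected-in-repo stored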
Signed-off-by: Johannes Sixt <johannes.sixt@telecom.at>
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
|
|
This test case would have caught the bug fixed by revision
c23290d5.
It puts various forms of $Id$ into a file in the repository,
without allowing git to collapse them to uniformity. It then enables
$Id$ expansion on checkout, and checks that what is checked out has
coped with the various forms.
Signed-off-by: Andy Parkins <andyparkins@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
$Id$ is present already in SVN and CVS; it would mean that people
converting their existing repositories won't have to make any changes to
the source files should they want to make use of the ident attribute.
Given that it's a feature that's meant to calm those very people, it
seems obtuse to make them edit every file just to make use of it.
I think that bzr uses $Id$; Mercurial has example hooks for $Id$;
monotone has $Id$ on its wishlist. I can't think of a good reason not
to stick with the de-facto standard and call ours $Id$ instead of
$ident$.
Signed-off-by: Andy Parkins <andyparkins@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The interface is similar to the custom low-level merge drivers.
First you configure your filter driver by defining 'filter.<name>.*'
variables in the configuration.
    filter.<name>.clean     filter command to run upon checkin
    filter.<name>.smudge    filter command to run upon checkout
Then you assign the 'filter' attribute to each path, with a value that
matches the custom filter driver's name.
Example:

    (in .gitattributes)
    *.c filter=indent

    (in config)
    [filter "indent"]
            clean = indent
            smudge = cat
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The 'ident' attribute, when set on a path, squashes "$ident:<any bytes
except dollar sign>$" to "$ident$" upon checkin, and expands it to
"$ident: <blob SHA-1> $" upon checkout.
As we have two conversions that affect checkin/checkout paths,
clarify how they interact with each other.
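A hedged usage sketch; the pattern is illustrative, and the 'indent'
driver is the one from the example above, showing that both conversions
can apply to the same path.

    echo "*.c ident filter=indent" >>.gitattributes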
Signed-off-by: Junio C Hamano <junkio@cox.net>
|