|
Make all the different parts of the database backend connection
configurable. This adds the following string configuration variables:
- gitcvs.dbdriver
- gitcvs.dbname
- gitcvs.dbuser
- gitcvs.dbpass
The default values emulate the current behavior exactly for
backwards compatibility.
All configuration variables can also be specified for a specific
access method (i.e. in the form gitcvs.<method>.<var>)
The dbdriver/dbuser/dbpass variables are added for completeness;
no backend other than SQLite has been tested yet.
The dbname variable, on the other hand, is already useful with this
backend (to avoid discriminating against other possible backends
it was not split into dbdir and dbfile).
Both dbname and dbuser support dynamic variable substitution where
the available variables are:
%m -- the CVS 'module' (i.e. GIT 'head') worked on
%a -- CVS access method used (i.e. 'ext' or 'pserver')
%u -- User name of the user invoking git-cvsserver
%G -- .git directory name
%g -- .git directory name, mangled to be used in a filename,
currently this substitutes all chars except for [\w.-]
with '_'
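For illustration, a hypothetical configuration exercising both the
substitution and the per-method form might look like this (the values
are invented for the example and are not the shipped defaults):

    [gitcvs]
            dbname = %G/gitcvs.%m.sqlite
    [gitcvs "pserver"]
            dbname = %G/gitcvs.pserver.%m.%u.sqlite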
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Allow overriding the gitcvs.enabled and gitcvs.logfile configuration
variables for each access method (i.e. "ext" or "pserver") in the
form gitcvs.<method>.<var>
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This is intended to be used in the form gitcvs.<method>.<var>
but this patch doesn't introduce any users yet.
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
$state->{method} contains the CVS access method used,
either 'ext' or 'pserver'
Signed-off-by: Frank Lichtenheld <frank@lichtenheld.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
In addition to optimizing pathspecs that would never match,
which was done earlier, this optimizes pathspecs that would
always match (e.g. "arch/" while the traversal is already in
"arch/i386/" hierarchy).
This patch makes the worst case slightly more palatable, while
improving average case.
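A minimal sketch of that check, with illustrative names rather than the
actual tree-diff code: once the traversal base is already inside the
pathspec, every entry under it matches and the per-entry comparison can
be skipped.

    #include <string.h>

    static int pathspec_covers_base(const char *base, int baselen,
                                    const char *match, int matchlen)
    {
            /* e.g. match = "arch/" while base = "arch/i386/" */
            if (!matchlen)
                    return 1;       /* empty pathspec matches everything */
            if (matchlen > baselen)
                    return 0;
            if (strncmp(base, match, matchlen))
                    return 0;
            /* exact match, trailing '/', or prefix ending at a '/' boundary */
            return matchlen == baselen ||
                   match[matchlen - 1] == '/' ||
                   base[matchlen] == '/';
    }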
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
If we already know that some of the pathspecs can match later
entries in the tree we are looking at, we do not have to do more
expensive strncmp() upfront before comparing the length of the
match pattern and the path, as a path longer than the match
pattern will not match it, and a path shorter than the match
pattern will match only if the path is a directory-component
wise prefix of the match pattern.
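A sketch of the reordered comparison (names are illustrative; the real
code lives in the tree-diff pathspec matching): the cheap length checks
come first, and strncmp() is only paid for when it can still change the
answer.

    #include <string.h>

    static int entry_may_match(const char *path, int pathlen,
                               const char *match, int matchlen)
    {
            if (pathlen > matchlen)
                    return 0;       /* longer than the pattern: cannot match */
            if (pathlen < matchlen && match[pathlen] != '/')
                    return 0;       /* not a directory-component-wise prefix */
            return !strncmp(path, match, pathlen);
    }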
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
When we are looking at a tree entry with pathspecs, if all the
pathspecs sort strictly earlier than the entry we are currently
looking at, there is no way later entries in the same tree would
match our pathspecs, because the entries are sorted.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This makes the tree descriptor contain a "struct name_entry" as part of
it, and it gets filled in so that it always contains a valid entry. On
some benchmarks, it improves performance by up to 15%.
That makes tree entry "extract" trivial, and means that we only actually
need to decode each tree entry just once: we decode the first one when
we initialize the tree descriptor, and each subsequent one when doing
"update_tree_entry()". In particular, this means that we don't need to
do strlen() both at extract time _and_ at update time.
Finally, it also allows more sharing of code (entry_extract(), that
wanted a "struct name_entry", just got totally trivial, along with the
"tree_entry()" function).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This removes slightly more lines than it adds, but the real reason for
doing this is that future optimizations will require more setup of the
tree descriptor, and so we want to do it in one place.
Also renamed the "desc.buf" field to "desc.buffer" just to trigger
compiler errors for old-style manual initializations, making sure I
didn't miss anything.
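A sketch of what that single setup point could look like, assuming a
decode_tree_entry() helper that parses the first entry (names are
illustrative, not the verbatim patch):

    void init_tree_desc(struct tree_desc *desc, const void *buffer,
                        unsigned long size)
    {
            desc->buffer = buffer;
            desc->size = size;
            if (size)
                    decode_tree_entry(desc, buffer, size);  /* fills desc->entry */
    }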
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Since we have the "tree_entry_len()" helper function these days, and
don't need to do a full strlen(), there's no point in saving the path
length - it's just redundant information.
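The reason the length is redundant: a raw tree entry is laid out as
"<mode> <path>\0<20-byte sha1>", so the path length falls out of pointer
arithmetic. A sketch of such a helper:

    static inline int tree_entry_len(const char *path, const unsigned char *sha1)
    {
            /* the sha1 starts right after the NUL terminating the path */
            return (const char *)sha1 - path - 1;
    }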
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The earlier round makes the function return "is it different"
and it does not return a value suitable for sorting anymore. Reverse
the logic to return "are they the same suspect" instead, and rename
it to "same_suspect()".
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Don't try to remove the containing directory for every pruned object but
try only once after the directory has been scanned instead.
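Roughly, the per-directory loop becomes something like this (a sketch;
prune_object() stands in for whatever actually unlinks stale loose
objects):

    #include <dirent.h>
    #include <unistd.h>

    static void prune_dir(const char *path)
    {
            DIR *dir = opendir(path);
            struct dirent *de;

            if (!dir)
                    return;
            while ((de = readdir(dir)) != NULL)
                    prune_object(path, de->d_name);   /* may unlink the file */
            closedir(dir);
            /* one attempt per directory, succeeds only if it is now empty */
            rmdir(path);
    }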
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Change the feedback message if doing 'git checkout foo' when already on
branch "foo".
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
When creating a new object, we use "deflate(stream, Z_FINISH)" in a loop
until it no longer returns Z_OK, and then we do "deflateEnd()" to finish
up business.
That should all work, but the fact is, it's not how you're _supposed_ to
use the zlib return values properly:
- deflate() should never return Z_OK in the first place, except if we
need to increase the output buffer size (which we're not doing, and
should never need to do, since we pre-allocated a buffer that is
supposed to be able to hold the output in full). So the "while()" loop
was incorrect: Z_OK doesn't actually mean "ok, continue", it means "ok,
allocate more memory for me and continue"!
- if we got an error return, we would consider it to be end-of-stream,
but it could be some internal zlib error. In short, we should check
for Z_STREAM_END explicitly, since that's the only valid return value
anyway for the Z_FINISH case.
- we never checked deflateEnd() return codes at all.
Now, admittedly, none of these issues should ever happen, unless there is
some internal bug in zlib. So this patch should make zero difference, but
it seems to be the right thing to do.
We should probably be anal and check the return value of "deflateInit()"
too!
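Condensed, the stricter pattern described above would look roughly like
this (error handling shortened; the surrounding buffers, z_stream
declaration and die() are assumed, so this is a sketch rather than the
verbatim patch):

    memset(&stream, 0, sizeof(stream));          /* zalloc/zfree/opaque = Z_NULL */
    if (deflateInit(&stream, Z_BEST_COMPRESSION) != Z_OK)
            die("deflateInit failed");
    stream.next_in = in;
    stream.avail_in = inlen;
    stream.next_out = out;
    stream.avail_out = outlen;                   /* pre-sized via deflateBound() */

    do {
            ret = deflate(&stream, Z_FINISH);    /* Z_OK: "give me more output room" */
    } while (ret == Z_OK);
    if (ret != Z_STREAM_END)
            die("unable to deflate (%d)", ret);
    if (deflateEnd(&stream) != Z_OK)
            die("deflateEnd failed");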
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Looking at the SHA1 validation code due to the corruption that Alexander
Litvinov is seeing under Cygwin, I notice that in one of the most central
places where we read objects, we actually do end up verifying the SHA1 of
the result, but then we happily parse it anyway.
And using "printf" to write the error message means that it not only can
get lost, but will actually mess up stdout, and cause other strange and
hard-to-debug failures downstream.
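The shape of the fix, as a sketch (error() reports on stderr instead of
polluting stdout; the surrounding variable names are illustrative):

    if (hashcmp(sha1, expected_sha1)) {
            error("sha1 mismatch %s", sha1_to_hex(expected_sha1));
            return NULL;    /* refuse to parse an object that fails verification */
    }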
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
When appending objects to a pack, make sure the appended data is really
what we expect instead of simply loading potentially corrupted objects
and legitimating them by computing a SHA1 of that corrupt data.
With this, sha1_object() can lose its test_for_collision parameter,
which is now redundant.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Use hash_sha1_file() instead of duplicating code to compute object SHA1.
While at it make it accept a const pointer.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Waaaaaaay back Git was considered to be secure as it never overwrote
an object it already had. This was ensured by always unpacking the
packfile received over the network (both in fetch and receive-pack)
and our already existing logic to not create a loose object for an
object we already have.
Lately however we keep "large-ish" packfiles on both fetch and push
by running them through index-pack instead of unpack-objects. This
would let an attacker perform a birthday attack.
How? Assume the attacker knows a SHA-1 that has two different
data streams. He knows the client is likely to have the "good"
one. So he sends the "evil" variant to the other end as part of
a "large-ish" packfile. The recipient keeps that packfile, and
indexes it. Now since this is a birthday attack there is a SHA-1
collision; two objects exist in the repository with the same SHA-1.
They have *very* different data streams. One of them is "evil".
Currently the poor recipient cannot tell the two objects apart,
short of examining the timestamps of the packfiles. But let's
say the recipient repacks before he realizes he's been attacked.
We may wind up packing the "evil" version of the object, and deleting
the "good" one. This is made *even more likely* by Junio's recent
rearrange_packed_git patch (b867092f).
It is extremely unlikely for a SHA-1 collision to occur, but if it
ever happens with a remote (hence untrusted) object we simply must
not let the fetch succeed.
Normally received packs should not contain objects we already have.
But when they do we must ensure duplicated objects with the same SHA1
actually contain the same data.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This fixes the single force (+) when fetched with fetch_per_ref.
Also use $LF as separator because IFS is $LF.
Signed-off-by: Santi Béjar <sbejar@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* git://git2.kernel.org/pub/scm/gitk/gitk:
[PATCH] gitk: bind <F5> key to Update (reread commits)
|
|
We already have -q in git clone. So for those who care to suppress
the noise during an http-based clone, make -q actually do a quiet
http fetch.
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Cc: Fernando Herrera <fherrera@onirica.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The thing is, if the output buffer is empty, we should *still* actually
use the zlib routines to *unpack* that empty output buffer.
But we had a test that said "only unpack if we still expect more output".
So we wouldn't use up all the zlib stream, because we felt that we didn't
need it, because we already had all the bytes we wanted. And it was
"true": we did have all the output data. We just needed to also eat all
the input data!
We've had this bug before - thinking that we don't need to inflate()
anything because we already had it all...
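In other words, the loop has to be driven by zlib's status, not by how
many output bytes we still expect. Roughly (a sketch, with the
surrounding stream/buffer variables assumed):

    do {
            stream.next_out = buf + bytes;
            stream.avail_out = size - bytes;     /* may already be zero */
            status = inflate(&stream, Z_FINISH);
            bytes = stream.total_out;
    } while (status == Z_OK);
    if (status != Z_STREAM_END)
            error("corrupt object: inflate returned %d", status);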
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This is a simple but powerful continuous integration build system
for Git. It works by receiving push events from repositories
through the post-receive hook, aggregating them on a per-branch
basis into a first-come-first-served build queue, and letting a
background build daemon perform builds one at a time.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
There has not been any work on the shallow stuff lately, so it is hard
to find out what it does, and how. This document describes the ideas
as well as the current problems, and can serve as a starting point for
shallow people.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Setting up a git-daemon came up the other day on IRC, and it is slightly
non-trivial for the uninitiated.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Since in at least one use case, xdl_hash_record() takes over 15% of the
CPU time, it makes sense to even micro-optimize it. For many cases, no
whitespace special handling is needed, and in these cases we should not
even bother to check for whitespace in _every_ iteration of the loop.
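The structure of the split, roughly (the hash mixing follows the
existing xdiff style; the flag mask and helper names should be taken as
illustrative):

    unsigned long xdl_hash_record(char const **data, char const *top, long flags) {
            unsigned long ha = 5381;
            char const *ptr = *data;

            if (flags & XDF_WHITESPACE_FLAGS)
                    return xdl_hash_record_with_whitespace(data, top, flags);

            /* common case: no per-character whitespace check inside the loop */
            for (; ptr < top && *ptr != '\n'; ptr++) {
                    ha += (ha << 5);
                    ha ^= (unsigned long) *ptr;
            }
            *data = ptr < top ? ptr + 1 : ptr;
            return ha;
    }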
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The commit structures are guaranteed their uniqueness by the object
layer, so we can check their address and see if they are the same
without going down to the object sha1 level.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Signed-off-by: James Bowes <jbowes@dangerouslyinc.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This provides a smoother degradation in performance when the cache
gets trashed due to the delta_base_cache_limit being reached. Limited
testing with really small delta_base_cache_limit values appears to confirm
this.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Currently there are 3 different ways to deal with the cache size.
Let's stick to only one. The compiler is smart enough to produce the exact
same code in those cases anyway.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
I think we can start to slow down, as we now have covered
everything I listed earlier in the short-term release plan.
The last release, 1.5.0, took painfully long.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
An earlier conversion to run_command() from execlp() forgot that
run_command() takes an array that is terminated with NULL.
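For illustration (the command itself is arbitrary here), the convention
is simply:

    /* run_command() walks the array until it sees NULL, so the
       sentinel at the end is mandatory */
    const char *argv[] = { "git", "update-server-info", NULL };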
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This is mainly just a cleanup patch, and sets up for later changes where
the tree-diff.c "interesting()" function can return more than just a
yes/no value.
In particular, it should be quite possible to say "no subsequent entries
in this tree can possibly be interesting any more", and thus allow the
callers to short-circuit the tree entirely.
In fact, changing the callers to do so is trivial, and is really all this
patch does, because changing "interesting()" itself to say that
nothing further is going to be interesting is definitely more complicated,
considering that we may have arbitrary pathspecs.
But in cleaning up the callers, this actually fixes a potential small
performance issue in diff_tree(): if the second tree has a lot of
uninteresting crud in it, we would keep on doing the "is it interesting?"
check on the first tree for each uninteresting entry in the second one.
The answer is obviously not going to change, so that was just not helping.
The new code is clearer and simpler and avoids this issue entirely.
I also renamed "interesting()" to "tree_entry_interesting()", because I
got frustrated by the fact that
- we actually had *another* function called "interesting()" in another
file, and I couldn't tell from the profiles which one was the one that
mattered more.
- when rewriting it to return a ternary value, you can't just do

      if (interesting(...))
              ...

  any more, but want to assign the return value to a local variable. The
name of choice for that variable would normally be "interesting", so
I just wanted to make the function name be more specific, and avoid
that whole issue (even though I then didn't choose that name for either
of the users, just to avoid confusion in the patch itself ;)
In other words, this doesn't really change anything, but I think it's a
good thing to do, and if somebody comes along and writes the logic for
"yeah, none of the pathspecs you have are interesting", we now support
that trivially.
It could easily be a meaningful optimization for things like "blame",
where there's just one pathspec, and stopping when you've seen it would
allow you to avoid about 50% of the tree traversals on average.
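One possible shape of the caller-side short-circuit this sets up, with
an invented return-value encoding (0 = uninteresting, 1 = interesting,
2 = nothing later in this tree can be interesting) and illustrative
names:

    while (t->size) {
            int show = tree_entry_interesting(t, base, baselen, opt);
            if (show == 2)
                    break;          /* skip the rest of this tree entirely */
            if (show)
                    show_entry(opt, prefix, t, base, baselen);
            update_tree_entry(t);
    }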
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This makes "track_tree_refs()" use the same "tree_entry()" function for
counting the entries as it does for actually traversing them a few lines
later.
Not a biggie, but the reason I care was that this was the only user of
"update_tree_entry()" that didn't actually *extract* the tree entry first.
It doesn't matter as things stand now, but it meant that a separate
test-patch I had that avoided a few more "strlen()" calls by just saving
the entry length in the entry descriptor and using it directly when
updating wouldn't work without this patch.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Run the pre-commit and post-commit hooks at appropriate places, and
display their output if any.
Signed-off-by: Alexandre Julliard <julliard@winehq.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* jb/gc:
Make gc a builtin.
|
|
* fl/cvsserver:
cvsserver: further improve messages on commit and status
cvsserver: Be more chatty
|
|
The new configuration variable core.deltaBaseCacheLimit allows the
user to control how much memory they are willing to give to Git for
caching base objects of deltas. This is not normally meant to be
a user tweakable knob; the "out of the box" settings are meant to
be suitable for almost all workloads.
We default to 16 MiB under the assumption that the cache is not
meant to consume all of the user's available memory, and that the
cache's main purpose was to cache trees, for faster path limiters
during revision traversal. Since trees tend to be relatively small
objects, this relatively small limit should still allow a large
number of objects.
On the other hand we don't want the cache to start storing 200
different versions of a 200 MiB blob, as this could easily blow
the entire address space of a 32 bit process.
We evict OBJ_BLOB from the cache first (credit goes to Junio) as
we want to favor OBJ_TREE within the cache. These are the objects
that have the highest inflate() startup penalty, as they tend to
be small and thus don't have that much of a chance to amortize
that penalty over the entire data.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* sp/run-command:
Use run_command within send-pack
Use run_command within receive-pack to invoke index-pack
Use run_command within merge-index
Use run_command for proxy connections
Use RUN_GIT_CMD to run push backends
Correct new compiler warnings in builtin-revert
Replace fork_with_pipe in bundle with run_command
Teach run-command to redirect stdout to /dev/null
Teach run-command about stdout redirection
|
|
In the Linux kernel, for example, it's common to include Cc: lines
for cases when you want to remember to cc someone on a patch without
necessarily claiming they signed off on it. Make git-send-email
aware of these.
Signed-off-by: "J. Bruce Fields" <bfields@citi.umich.edu>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Also add support for vimdiff
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
Signed-off-by: James Bowes <jbowes@dangerouslyinc.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
|
|
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* ar/diff:
Add tests for --quiet option of diff programs
try-to-simplify-commit: use diff-tree --quiet machinery.
revision.c: explain what tree_difference does
Teach --quiet to diff backends.
diff --quiet
Remove unused diffcore_std_no_resolve
Allow git-diff exit with codes similar to diff(1)
|
|
This is a micro-optimization that grew out of the mailing list discussion
about "strlen()" showing up in profiles.
We used to pass regular C strings around to the low-level tree walking
routines, and while this worked fine, it meant that we needed to call
strlen() on strings that the caller always actually knew the size of
anyway.
So pass the length of the string down with the string, and avoid
unnecessary calls to strlen(). Also, when extracting a pathname from a
tree entry, use "tree_entry_len()" instead of strlen(), since the length
of the pathname is directly calculable from the decoded tree entry itself
without having to actually do another strlen().
This shaves off another ~5-10% from some loads that are very tree
intensive (notably doing commit filtering by a pathspec).
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
A malloc() + memcpy() will always be faster than mmap() +
malloc() + inflate(). If the data is already there it is
certainly better to copy it straight away.
With this patch I can do 'git log drivers/scsi/ >
/dev/null' about 7% faster. I bet it might be even more on
those platforms with bad mmap() support.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This trivial 256-entry delta_base cache improves performance for some
loads by a factor of 2.5 or so.
Instead of always re-generating the delta bases (possibly over and over
and over again), just cache the last few ones. They often can get re-used.
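A sketch of such a cache, keyed on the pack and the base offset within
it (sizes, hashing and field names are illustrative):

    #define MAX_DELTA_CACHE 256

    static struct delta_base_cache_entry {
            struct packed_git *p;       /* pack the base object lives in */
            off_t base_offset;          /* offset of the base within that pack */
            unsigned long size;
            void *data;                 /* inflated base object data */
            enum object_type type;
    } delta_base_cache[MAX_DELTA_CACHE];

    static unsigned long delta_base_cache_hash(struct packed_git *p, off_t offset)
    {
            unsigned long hash = (unsigned long)p + (unsigned long)offset;
            hash += (hash >> 8) + (hash >> 16);
            return hash % MAX_DELTA_CACHE;
    }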
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This doesn't change any code, it just creates a point for where we'd
actually do the caching of delta bases that have been generated.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Signed-off-by: Junio C Hamano <junkio@cox.net>
|