|
Since we sort objects by type, hash, preferredness and then
size, once we have a delta against a preferred base there is no
point trying a delta with a non-preferred base. This seems to
save expensive calls to diff-delta, and it also seems to save
output space.
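A rough sketch of the pruning check, with hypothetical structure
and function names (the real check sits in pack-objects' delta
search loop and may differ in detail):

    struct object_entry {
        struct object_entry *delta;   /* current best delta base, if any */
        unsigned preferred_base:1;
    };

    /*
     * The sort order puts preferred bases before non-preferred ones,
     * so once the target already has a delta against a preferred
     * base, a non-preferred candidate cannot do better; skip the
     * expensive diff-delta call.
     */
    static int worth_trying(struct object_entry *trg, struct object_entry *src)
    {
        if (trg->delta && trg->delta->preferred_base && !src->preferred_base)
            return 0;
        return 1;
    }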
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Maybe we would want to make this the default before it graduates
to the master branch, but in the meantime, to help with testing,
this allows you to say "git push --thin destination".
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The new flag loosens the usual "self-containedness" requirement
of packfiles, and sends a deltified representation of objects
when we know the other side has the base objects needed to
unpack them. This helps reduce the transfer size.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This goes together with the "rev-list --objects-edge" change, to
feed pack-objects a list of edge commits in addition to the usual
object list. Upon seeing such a list, pack-objects loosens the
usual "self-contained delta" constraint, and can produce deltas
against blobs and trees contained in the edge commits without
storing the delta base objects themselves.
The resulting packfile is not usable in .git/objects/pack, but
is a good way to implement a "delta-only" transfer.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This tries to rework the solution for the excess delta chain
problem. An earlier commit worked around it "cheaply", but
repeated repacking risks unbounded growth of delta chains.
This version counts the length of the delta chain we are reusing
from the existing pack, and makes sure a base object that has a
sufficiently long delta chain does not get deltified.
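A sketch of how such counting could look, assuming each entry
keeps delta_child/delta_sibling links to the reused deltas based
on it (hypothetical field names):

    struct object_entry {
        struct object_entry *delta_child;    /* first reused delta based on us */
        struct object_entry *delta_sibling;  /* next delta sharing our base */
    };

    /*
     * Depth of the deepest reused delta chain hanging off "me",
     * where "n" is the depth of "me" itself.
     */
    static unsigned check_delta_limit(struct object_entry *me, unsigned n)
    {
        struct object_entry *child = me->delta_child;
        unsigned m = n;

        while (child) {
            unsigned c = check_delta_limit(child, n + 1);
            if (m < c)
                m = c;
            child = child->delta_sibling;
        }
        return m;
    }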
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
A new flag -q makes the underlying pack-objects less chatty.
A new flag -f forces deltas to be recomputed from scratch.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This introduces the --no-reuse-delta option to disable reuse of
existing deltas, which is a large part of the optimization
introduced by this series. This may become necessary if repeated
repacking makes the delta chain too long. With this, the output
of the command becomes identical to that of the older
implementation, but the performance suffers greatly.
It still allows reusing non-deltified representations; there is
no point in uncompressing and recompressing the whole text.
It also adds a couple more statistics to the output, and
squelches them under the -q flag, which the last round forgot to do.
$ time old-git-pack-objects --stdout >/dev/null <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects....................
real 12m8.530s user 11m1.450s sys 0m57.920s
$ time git-pack-objects --stdout >/dev/null <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
Total 184141, written 184141 (delta 138297), reused 178833 (delta 134081)
real 0m59.549s user 0m56.670s sys 0m2.400s
$ time git-pack-objects --stdout --no-reuse-delta >/dev/null <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
Total 184141, written 184141 (delta 134833), reused 47904 (delta 0)
real 11m13.830s user 9m45.240s sys 0m44.330s
There is one remaining issue when the --no-reuse-delta option is
not used. It can create delta chains that are deeper than specified.
A<--B<--C<--D E F G
Suppose we have a delta chain A to D (A is stored in full, either
in a pack or as a loose object; B is a depth-1 delta relative to
A, C is a depth-2 delta relative to B, ...) with loose objects E, F, G.
And we are going to pack all of them.
B, C and D are left as deltas against A, B and C respectively.
So A, E, F, and G are examined for deltification, and let's say
we decided to keep E expanded, and store the rest as deltas like
this:
E<--F<--G<--A
Oops. We ended up making D a bit too deep, didn't we? B, C and
D form a chain on top of A!
This is because we did not know what the final depth of A would
be when we checked objects and decided to keep the existing
delta. Unfortunately, deferring the decision until just before
the deltification is not an option. To be able to make B, C,
and D candidates for deltification along with the rest, we need
to know their type and final unexpanded size, but the major part
of the optimization comes from the fact that we do not read the
delta data to find these out -- getting the final size is quite
an expensive operation.
To prevent this from happening, we should keep A from being
deltified. But how would we tell that, cheaply?
To do this most precisely, after check_object() runs, each
object that is used as the base of some existing delta needs to
be marked with the maximum depth of the objects we decided to
keep deltified (in this case, D is depth 3 relative to A, so if
no other delta chain based on A is longer than 3, mark A with 3).
Then, when attempting to deltify A, we would take that number
into account to see if the final delta chain that leads to D
becomes too deep.
However, this is a bit cumbersome to compute, so in this
implementation we cheat and arbitrarily reduce the maximum depth
for A to depth/4.
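A sketch of that cheat, with hypothetical variable names:

    /*
     * If this object is the base of deltas we decided to reuse, we
     * do not know how deep the reused chain on top of it will end
     * up, so give it a much smaller depth budget instead of
     * computing the exact answer.
     */
    unsigned max_depth = depth;
    if (entry->delta_child)
        max_depth = depth / 4;    /* arbitrary reduction, as noted above */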
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
When generating a new pack, notice if the objects we need are
already in existing packs. If an object is stored deltified,
and its base object is also among what we are going to pack,
then reuse the existing deltified representation unconditionally,
bypassing all the expensive find_deltas() and try_deltas()
calls.
Also, notice if what we are going to write out exactly matches
what is already in an existing pack (either deltified or just
compressed). In such a case, we can just copy it instead of
going through the usual uncompressing & recompressing cycle.
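A sketch of the reuse decision, with hypothetical field, helper
and constant names (the real logic would live in check_object()
and the pack-writing code):

    /*
     * The object already lives in a pack: if it is stored whole,
     * its compressed bytes can be copied verbatim; if it is stored
     * as a delta and its base is also going into the new pack, keep
     * the stored delta.  Otherwise fall back to the normal path.
     */
    if (entry->in_pack) {
        if (entry->in_pack_type != OBJ_DELTA)
            entry->reuse_data = 1;    /* copy compressed data as-is */
        else if (will_pack(base_sha1)) {
            entry->delta = locate_entry(base_sha1);
            entry->reuse_delta = 1;   /* skip find_deltas()/try_deltas() */
        }
    }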
Without this patch, in linux-2.6 repository with about 1500
loose objects and a single mega pack:
$ git-rev-list --objects v2.6.16-rc3 >RL
$ wc -l RL
184141 RL
$ time git-pack-objects p <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
real 12m4.323s
user 11m2.560s
sys 0m55.950s
With this patch, the same input:
$ time ../git.junio/git-pack-objects q <RL
Generating pack...
Done counting 184141 objects.
Packing 184141 objects.....................
a1fc7b3e537fcb9b3c46b7505df859f0a11e79d2
Total 184141, written 184141, reused 182441
real 1m2.608s
user 0m55.090s
sys 0m1.830s
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
We run svn log against a URL without a working copy for the first fetch,
so we end up with a log that's sorted from highest to lowest. That's
bad; we always want lowest to highest. Just default to --revision 0:HEAD
now if -r isn't specified for the first fetch.
Also, sort the revisions after we get them, just in case somebody
accidentally reverses the argument to --revision for whatever reason.
Thanks again to Emmanuel Guerin for helping me find this.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Thanks to Emmanuel Guerin for finding the bug.
Signed-off-by: Eric Wong <normalperson@yhbt.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
|
|
Systems using some uClibc versions do not properly support
iconv. This patch allows Git to be built on those systems by
passing NO_ICONV=YesPlease to make. The only drawback is that
mailinfo won't do charset conversion on those systems.
Signed-off-by: Fernando J. Pereda <ferdy@gentoo.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* jc/add:
Detect misspelled pathspec to git-add
|
|
|
|
Signed-off-by: Josef Weidendorfer <Josef.Weidendorfer@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
If Git is compiled with NO_CURL=YesPlease and one tries to
clone an http repository, git-clone tries to call the curl
binary. This trivial patch prints an error instead in such
a situation.
Signed-off-by: Fernando J. Pereda <ferdy@gentoo.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The delta depth is unsigned.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This is in the same spirit as an earlier patch for git-commit.
It does an extra ls-files call to avoid complaining when a fully
tracked directory name is given on the command line (otherwise
the --others restriction would say the pathspec does not match).
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
An earlier patch mistakenly used prefix_len when it meant
prefix_offset. The latter is used to strip the leading
directories when run from a subdirectory.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* kh/svn:
git-svnimport: -r adds svn revision number to commit messages
|
|
* jc/commit:
commit: detect misspelled pathspec while making a partial commit.
combine-diff: diff-files fix (#2)
combine-diff: diff-files fix.
|
|
* jc/rebase:
rebase: allow a hook to refuse rebasing.
|
|
* ra/email:
send-email: Add --cc
send-email: Add some options for controlling how addresses are automatically added to the cc: list.
|
|
When we refused to switch branches, we incorrectly showed
differences from the branch we would have switched to.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
When you say "git commit Documentaiton" to make a partial commit
of only the files in that directory, we did not detect that as
a misspelled pathname and attempted to commit the index without
any change. If nothing matched, there is no harm done, but if
the index has been modified otherwise, by another valid pathspec
or by an explicit update-index, the user will not notice without
paying attention to the "git status" preview.
This introduces the --error-unmatch option to ls-files, and uses
it to detect this common user error.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
New -r flag for prepending the corresponding Subversion revision
number to each commit message.
Signed-off-by: Karl Hasselström <kha@treskal.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The raw format "git-diff-files -c" to show unmerged state forgot
to initialize the status fields from parents, causing NUL
characters to be emitted.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Conflicts:
Documentation/git-commit.txt - taking the post 1.2.0 semantics.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* pb/bisect:
Properly git-bisect reset after bisecting from non-master head
|
|
When showing a conflicted merge from the index stages and the
working tree file, we did not fetch the mode from the working
tree, and mistook the path for a deleted file. Also, if the
manual resolution (or automated resolution by git rerere) ended
up taking either parent's version, we did not show _anything_ for
that path. Either was quite bad and confusing.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
With the current Makefile we don't use the shell chosen by the
platform specific defines when we invoke GIT-VERSION-GEN.
Signed-off-by: Fredrik Kuivinen <freku045@student.liu.se>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
I noticed that we forgot to clean this file and kept it that
way, while trying to help with Andrew's bisect problem.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Noticed by Jon Nelson.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Since Junio used this in an example, and I've personally tried to use it, I
suppose the option should actually exist.
Signed-off-by: Ryan Anderson <ryan@michonline.com>
|
|
The documentation was mistakenly describing the --only semantics
as the default. The 1.2.0 release and its maintenance series 1.2.X
will keep the traditional --include semantics as the default.
Clarify the situation.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Add some options for controlling how addresses are automatically
added to the cc: list.
Signed-off-by: Ryan Anderson <ryan@michonline.com>
|
|
This lets a hook interfere with a rebase and help prevent certain
branches from being rebased by mistake. A sample hook shows how
to prevent rebasing a topic branch that has already been merged
into the publish branch.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This changes "git commit paths..." to default to the --only
semantics instead of the traditional --include semantics, as
agreed on the list.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This fixes the same issue as a previous fix by Alex Riesen.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
git-bisect reset without an argument would return to master even
if the bisecting started at a non-master branch. This patch makes
it save the original branch name to .git/head-name and restore it
afterwards.
This is also compatible with Cogito and cg-seek, so cg-status will
show that we are seeked on the bisect branch and cg-reset will
properly restore the original branch.
git-bisect start will refuse to work if it is not on a bisect but
.git/head-name exists; this is to protect against conflicts with
other seeking tools.
Signed-off-by: Petr Baudis <pasky@suse.cz>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Earlier, when we switched branches we used diff-files to show
paths that are dirty in the working tree. But we allow switching
branches with an updated index ("read-tree -m -u $old $new" works
that way), and showing only paths that have differences in the
working tree, but not paths that are different in the index, was
confusing.
This shows both as modified from the top commit of the branch we
have just switched to.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
You could give -q to squelch it, but currently no tool does.
This makes 'git clone host:repo here' over ssh show progress
again.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
FreeBSD 4.11 being one example: the built-in echo doesn't have -e,
and the installed /bin/echo does not do "-e" either.
"printf" works, lacking just "\e" and "\xAB".
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The hashed object lookup had a subtle bug in re-hashing: it did

    for (i = 0; i < count; i++)
        if (objs[i]) {
            .. rehash ..

where "count" was the old hash count. On the face of it, this
looks obviously right, since it clearly re-hashes all the old
objects.
However, it's wrong.
If the last old hash entry before re-hashing was in use (or became
in use by the re-hashing), then the re-hashing could have inserted
an object into the hash entries with idx >= count due to overflow.
When we then rehash the last old entry, that old entry might become
empty, which means that the overflow entries should be re-hashed
again.
In other words, the loop has to be fixed to traverse the whole
array, rather than just the old count.
(There's room for a slight optimization: instead of counting all the way
up, we can break when we see the first empty slot that is above the old
"count". At that point we know we don't have any collisions that we might
have to fix up any more. This patch only does the trivial fix.)
[jc: with trivial fix on trivial fix]
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Calling hashtable_index from find_object before objs is created
would result in a division-by-zero failure. Avoid it.
Also, the given object name may not be aligned suitably for
unsigned int; avoid dereferencing a cast pointer.
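A sketch of the two guards, reusing the function names mentioned
above (hash_size is a hypothetical stand-in for however the table
size is tracked):

    #include <string.h>

    struct object_entry;
    static unsigned int hash_size;    /* 0 until the hash table exists */

    static unsigned int hashtable_index(const unsigned char *sha1)
    {
        unsigned int ui;
        /*
         * The object name may not be aligned for unsigned int, so
         * copy the bytes out instead of dereferencing a cast pointer.
         */
        memcpy(&ui, sha1, sizeof(ui));
        return ui % hash_size;
    }

    static struct object_entry *find_object(const unsigned char *sha1)
    {
        if (!hash_size)    /* table not created yet: avoid dividing by zero */
            return NULL;
        /* ... probe the table starting at hashtable_index(sha1) ... */
        return NULL;
    }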
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
In a simple test, this brings down the CPU time from 47 sec to 22 sec.
Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|