Age | Commit message | Author | Files | Lines |
|
* ph/strbuf: (44 commits)
Make read_patch_file work on a strbuf.
strbuf_read_file enhancement, and use it.
strbuf change: be sure ->buf is never ever NULL.
double free in builtin-update-index.c
Clean up stripspace a bit, use strbuf even more.
Add strbuf_read_file().
rerere: Fix use of an empty strbuf.buf
Small cache_tree_write refactor.
Make builtin-rerere use of strbuf nicer and more efficient.
Add strbuf_cmp.
strbuf_setlen(): do not barf on setting length of an empty buffer to 0
sq_quote_argv and add_to_string rework with strbuf's.
Full rework of quote_c_style and write_name_quoted.
Rework unquote_c_style to work on a strbuf.
strbuf API additions and enhancements.
nfv?asprintf are broken without va_copy, workaround them.
Fix the expansion pattern of the pseudo-static path buffer.
builtin-for-each-ref.c::copy_name() - do not overstep the buffer.
builtin-apply.c: fix a tiny leak introduced during xmemdupz() conversion.
Use xmemdupz() in many places.
...
|
|
For that purpose, the ->buf is always initialized with a char * buf living
in the strbuf module. It is made a char * so that we can sloppily accept
things that perform: sb->buf[0] = '\0', and because you can't pass "" as an
initializer for ->buf without making gcc unhappy for very good reasons.
strbuf_init/_detach/_grow have been fixed to trust ->alloc and not ->buf
anymore.
As a consequence, strbuf_detach is _mandatory_ to detach a buffer; copying
->buf is no longer an option if ->buf is going to escape from the scope
and eventually be free'd.
API changes:
* strbuf_setlen now always works, so just make strbuf_reset a convenience
macro.
* strbuf_detach takes an optional size_t* argument (meaning it can be
NULL) into which the buffer's len is copied; this was needed for the
refactor to keep the calling code readable and to match how the callers
work (see the sketch below).
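As an illustration, a minimal sketch of the ownership rule above (the
variable names are made up; strbuf_init(), strbuf_addstr() and
strbuf_detach() are the API being described):

	struct strbuf sb;
	size_t len;
	char *path;

	strbuf_init(&sb, 0);
	strbuf_addstr(&sb, "refs/heads/master");
	/* do not let ->buf escape by copying the pointer; detach instead: */
	path = strbuf_detach(&sb, &len);	/* len may also be NULL */
	/* ... use and eventually free(path) ... */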
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The function sounds boolean; make it behave as one, not "0 for
success, non-zero for failure".
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* Now, those functions take an "out" strbuf argument, where they store
their result if any. In that case, the function also returns 1; otherwise
it returns 0.
* Those functions support "in place" editing, in the sense that it's OK to
call them this way:
	convert_to_git(path, sb->buf, sb->len, sb);
When doable, conversions are really done in place; otherwise the strbuf
content is simply replaced with the new one, transparently for the caller.
If you want to create a new filter working this way, built as the
accumulation of filter1, filter2, ... filtern, then your meta_filter
would be:
	int meta_filter(..., const char *src, size_t len, struct strbuf *sb)
	{
		int ret = 0;
		ret |= filter1(...., src, len, sb);
		if (ret) {
			src = sb->buf;
			len = sb->len;
		}
		ret |= filter2(...., src, len, sb);
		if (ret) {
			src = sb->buf;
			len = sb->len;
		}
		....
		return ret | filtern(..., src, len, sb);
	}
That's why the subfilters the convert_to_* functions call were also
rewritten to work this way.
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This brings builtin-stripspace, builtin-tag and mktag to use strbufs.
Signed-off-by: Pierre Habouzit <madcoder@debian.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Under some types of packfile corruption the zlib stream holding the
data for a delta within a packfile may fail to inflate, due to say
a CRC failure within the compressed data itself. When this occurs
the unpack_compressed_entry function will return NULL as a signal to
the caller that the data is not available. Unfortunately we then
tried to use that NULL as though it referenced a memory location
where a delta was stored and tried to apply it to the delta base.
Loading a byte from the NULL address typically causes a SIGSEGV.
cate on #git noticed this failure in `git fsck --full` where the
call to verify_pack() first noticed that the packfile was corrupt
by finding that the packfile's SHA-1 did not match the raw data of
the file. After finding this fsck went ahead and tried to verify
every object within the packfile, even though the packfile was
already known to be bad. If we are going to shovel bad data at
the delta unpacking code, we better handle it correctly.
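A minimal sketch of the kind of guard this adds (the function name comes
from the description above; the surrounding variables are assumed):

	delta_data = unpack_compressed_entry(p, &w_curs, curpos, delta_size);
	if (!delta_data)
		die("failed to unpack compressed delta from %s", p->pack_name);
	/* only now is it safe to apply delta_data to the delta base */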
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Print the index version when an error occurs so the user
knows what type of header (and size) we thought the index
should have had.
Signed-off-by: Luiz Fernando N. Capitulino <lcapitulino@mandriva.com.br>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The new name is closer to the purpose of the function.
A NUL-terminated buffer makes things easier when callers need that.
Since the function returns only the memory written with data,
almost always allocating more space than needed because final
size is unknown, an extra NUL terminating the buffer is harmless.
It is not included in the returned size, so the function
remains working as before.
Also, the function now allows the buffer passed in to be NULL at first,
and alloc_nr is now used for growing the buffer, instead of doubling its
size.
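For reference, a minimal sketch of that growth pattern, assuming git's
alloc_nr() helper (roughly 1.5x growth plus a small constant) and the
xrealloc() wrapper; the variable names are made up:

	if (size + 1 >= alloc) {		/* +1 leaves room for the extra NUL */
		alloc = alloc_nr(alloc);
		buf = xrealloc(buf, alloc);
	}
	/* ... read into buf + size ..., and at the very end: */
	buf[size] = '\0';			/* not counted in the returned size */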
Signed-off-by: Carlos Rica <jasampler@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* maint:
Document -<n> for git-format-patch
glossary: add 'reflog'
diff --no-index: fix --name-status with added files
Don't smash stack when $GIT_ALTERNATE_OBJECT_DIRECTORIES is too long
|
|
There is no restriction on the length of the name returned by
get_object_directory, other than the fact that it must be a stat'able
git object directory. That means its name may have length up to
PATH_MAX-1 (i.e., often 4095) not counting the trailing NUL.
Combine that with the assumption that the concatenation of that name and
suffixes like "/info/alternates" and "/pack/---long-name---.idx" will fit
in a buffer of length PATH_MAX, and you see the problem. Here's a fix:
  * sha1_file.c (prepare_packed_git_one): Lengthen "path" buffer
    so we are guaranteed to be able to append "/pack/" without checking.
    Skip any directory entry that is too long to be appended.
    (read_info_alternates): Protect against a similar buffer overrun.
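A fragmentary sketch of the entry-length guard described above (buffer
size and variable names are assumed, not the exact ones in sha1_file.c):

	char path[PATH_MAX + 42];	/* objdir + "/pack/" + any entry we accept */
	size_t len;
	DIR *dir;
	struct dirent *de;

	snprintf(path, sizeof(path), "%s/pack", objdir);
	dir = opendir(path);
	if (!dir)
		return;
	strcat(path, "/");
	len = strlen(path);
	while ((de = readdir(dir)) != NULL) {
		if (len + strlen(de->d_name) + 1 > sizeof(path))
			continue;	/* too long to append safely: skip it */
		strcpy(path + len, de->d_name);
		/* ... examine the candidate .idx file at 'path' ... */
	}
	closedir(dir);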
Before this change, using the following admittedly contrived environment
setting would cause many git commands to clobber their stack and segfault
on a system with PATH_MAX == 4096:
t=$(perl -e '$s=".git/objects";$n=(4096-6-length($s))/2;print "./"x$n . $s')
export GIT_ALTERNATE_OBJECT_DIRECTORIES=$t
touch g
./git-update-index --add g
If you run the above commands, you'll soon notice that many
git commands now segfault, so you'll want to do this:
unset GIT_ALTERNATE_OBJECT_DIRECTORIES
Signed-off-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* maint:
config: Change output of --get-regexp for valueless keys
config: Complete documentation of --get-regexp
cleanup merge-base test script
Fix zero-object version-2 packs
Ignore submodule commits when fetching over dumb protocols
|
|
A pack-file can get created without any objects in it (to transfer "no
data" - which can happen if you use a reference git repo, for example, or
otherwise end up transferring only branch head information while already
having all the objects themselves).
And while we probably should never create an index for such a pack, if we
do (and we do), the index file size sanity checking was incorrect.
This fixes it.
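For reference, a minimal sketch of a version-1 .idx size check that also
tolerates zero objects (the layout is 256 four-byte fanout entries, then
nr * (20-byte SHA-1 + 4-byte offset), then two trailing 20-byte SHA-1s;
the variable names are assumed):

	if (idx_size != 4 * 256 + (size_t)nr * 24 + 20 + 20)
		return error("wrong index file size in %s", path);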
Reported-and-tested-by: Jocke Tjernlund <tjernlund@tjernlund.se>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
There still are quite a few symbols that ought to be static.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
This uses "git-apply --whitespace=strip" to fix whitespace errors that have
crept in to our source files over time. There are a few files that need
to have trailing whitespaces (most notably, test vectors). The results
still passes the test, and build result in Documentation/ area is unchanged.
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
* sp/pack:
Style nit - don't put space after function names
Ensure the pack index is opened before access
Simplify index access condition in count-objects, pack-redundant
Test for recent rev-parse $abbrev_sha1 regression
rev-parse: Identify short sha1 sums correctly.
Attempt to delay prepare_alt_odb during get_sha1
Micro-optimize prepare_alt_odb
Lazily open pack index files on demand
|
|
* maint:
git-config: Improve documentation of git-config file handling
git-config: Various small fixes to asciidoc documentation
decode_85(): fix missing return.
fix signed range problems with hex conversions
|
|
* maint-1.5.1:
git-config: Improve documentation of git-config file handling
git-config: Various small fixes to asciidoc documentation
decode_85(): fix missing return.
fix signed range problems with hex conversions
|
|
Jon Smirl said:
| Once an object reference hits a pack file it is very likely that
| following references will hit the same pack file. So first place to
| look for an object is the same place the previous object was found.
This is indeed a good heuristic, so here it is. The search always starts
with the pack where the last object lookup succeeded. If the wanted
object is not available there then the search continues with the normal
pack ordering.
To test this I split the Linux repository into 66 packs and performed a
"time git-rev-list --objects --all > /dev/null". Best results are as
follows:
  Pack Sort              w/o this patch   w/ this patch
  ----------------------------------------------------
  recent objects last         26.4s            20.9s
  recent objects first        24.9s            18.4s
This shows that the pack order based on object age has some influence,
but that the last-used-pack heuristic is even more significant in
reducing object lookup.
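A minimal sketch of the heuristic (the cached pointer and the per-pack
lookup helper below are assumed names for illustration):

	static struct packed_git *last_found_pack;	/* pack of the previous hit */

	static struct packed_git *locate_in_some_pack(const unsigned char *sha1)
	{
		struct packed_git *p;

		if (last_found_pack && lookup_in_pack(last_found_pack, sha1))
			return last_found_pack;		/* fast path: same pack again */
		for (p = packed_git; p; p = p->next) {
			if (p == last_found_pack)
				continue;		/* already tried above */
			if (lookup_in_pack(p, sha1)) {
				last_found_pack = p;	/* remember for next time */
				return p;
			}
		}
		return NULL;
	}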
Signed-off-by: Nicolas Pitre <nico@cam.org>
---
Note: the --max-pack-size option to git-repack currently produces packs
with old objects after those containing recent objects. The pack sort
based on filesystem timestamp is therefore backward for those. This needs
to be fixed of course, but at least it made me think about this variable
for the test.
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Make hexval_table[] "const". Also make sure that the accessor
function hexval() does not access the table with out-of-range
values by declaring its parameter "unsigned char", instead of
"unsigned int".
With this, gcc can just generate:
	movzbl	(%rdi), %eax
	movsbl	hexval_table(%rax),%edx
	movzbl	1(%rdi), %eax
	movsbl	hexval_table(%rax),%eax
	sall	$4, %edx
	orl	%eax, %edx
for the code to generate a byte from two hex characters.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Our style is to not put a space after a function name. I did here,
and Junio applied the patch with the incorrect formatting. So I'm
cleaning up after myself since I noticed it upon review.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Calling getenv() is not that expensive, but it's also not free, and it's
certainly not cheaper than testing to see if alt_odb_tail is not null.
Because we are calling prepare_alt_odb() from within find_sha1_file
every time we cannot find an object file locally, we want to skip out
of prepare_alt_odb() as early as possible once we have initialized
our alternate list.
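A minimal sketch of that early return (the list-tail variable is the one
named above; the rest of the body is elided):

	void prepare_alt_odb(void)
	{
		const char *alt;

		if (alt_odb_tail)
			return;		/* already initialized: skip the getenv() */
		alt = getenv(ALTERNATE_DB_ENVIRONMENT);
		if (!alt)
			alt = "";
		/* ... parse 'alt' and link the alternates onto the list ... */
	}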
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
In some repository configurations the user may have many packfiles,
but all of the recent commits/trees/tags/blobs are likely to
be in the most recent packfile (the one with the newest mtime).
It is therefore common to be able to complete an entire operation
by accessing only one packfile, even if there are 25 packfiles
available to the repository.
Rather than opening and mmaping the corresponding .idx file for
every pack found, we now only open and map the .idx when we suspect
there might be an object of interest in there.
Of course we cannot know in advance which packfile contains an
object, so we still need to scan the entire packed_git list to
locate anything. But odds are users want to access objects in the
most recently created packfiles first, and that may be all they
ever need for the current operation.
Junio observed in b867092f that placing recent packfiles before
older ones can slightly improve access times for recent objects,
without degrading it for historical object access.
This change improves upon Junio's observations by trying even harder
to avoid the .idx files that we won't need.
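A sketch of the lazy pattern being described (the field and helper names
here are illustrative assumptions, not the exact ones in sha1_file.c):

	static int ensure_pack_index(struct packed_git *p)
	{
		if (p->index_data)			/* .idx already mapped */
			return 0;
		return open_and_map_pack_index(p);	/* hypothetical helper */
	}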
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* np/pack:
deprecate the new loose object header format
make "repack -f" imply "pack-objects --no-reuse-object"
allow for undeltified objects not to be reused
|
|
This patch fixes all calls to xread() where the return value is not
stored into an ssize_t. The patch should not have any effect whatsoever,
other than putting better/more appropriate type names on variables.
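For illustration, the intended usage pattern - xread() returns ssize_t and
signals errors with -1, so the result must not land in an unsigned type:

	ssize_t nread = xread(fd, buffer, sizeof(buffer));
	if (nread < 0)
		die("read error: %s", strerror(errno));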
Signed-off-by: Johan Herland <johan@herland.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Now that we encourage and actively preserve objects in a packed form
more aggressively than we did at the time the new loose object format and
core.legacyheaders were introduced, that extra loose object format
doesn't appear to be worth it anymore.
Because the packing of loose objects has to go through the delta match
loop anyway, and since most of them should end up being deltified in
most cases, there is really little advantage to have this parallel loose
object format as the CPU savings it might provide is rather lost in the
noise in the end.
This patch gets rid of core.legacyheaders, preserves the legacy format as
the only writable loose object format and deprecates the other one to
keep things simpler.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* maint:
Start preparing for 1.5.1.3
Sanitize @to recipients.
git-svn: Ignore usernames in URLs in find_by_url
Document --dry-run and envelope-sender for git-send-email.
Allow users to optionally specify their envelope sender.
Ensure clean addresses are always used with Net::SMTP
Validate @recipients before using it for sendmail and Net::SMTP.
Perform correct quoting of recipient names.
Change the scope of the $cc variable as it is not needed outside of send_message.
Debugging cleanup improvements
Prefix Dry- to the message status to denote dry-runs.
Document --dry-run parameter to send-email.
git-svn: Don't rely on $_ after making a function call
Fix handle leak in write_tree
Actually handle some-low memory conditions
Conflicts:
	RelNotes
	git-send-email.perl
|
|
Tim Ansell discovered his Debian server didn't permit git-daemon to
use as much memory as it needed to handle cloning a project with
a 128 MiB packfile. Filtering the strace provided by Tim of the
rev-list child showed this gem of a sequence:
open("./objects/pack/pack-*.pack", O_RDONLY|O_LARGEFILE <unfinished ...>
<... open resumed> ) = 5
OK, so the packfile is fd 5...
mmap2(NULL, 33554432, PROT_READ, MAP_PRIVATE, 5, 0 <unfinished ...>
<... mmap2 resumed> ) = 0xb5e2d000
and we mapped one 32 MiB window from it at position 0...
mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
<... mmap2 resumed> ) = -1 ENOMEM (Cannot allocate memory)
And we asked for another window further into the file. But got
denied. In Tim's case this was due to a resource limit on the
git-daemon process, and its children.
Now where are we in the code? We're down inside use_pack(),
after we have called unuse_one_window() enough times to make sure
we stay within our allowed maximum window size. However since we
didn't unmap the prior window at 0xb5e2d000 we aren't exceeding
the current limit (which probably was just the defaults).
But we're actually down inside xmmap()...
So we release the window we do have (by calling release_pack_memory),
assuming there is some memory pressure...
munmap(0xb5e2d000, 33554432 <unfinished ...>
<... munmap resumed> ) = 0
close(5 <unfinished ...>
<... close resumed> ) = 0
And that was the last window in this packfile. So we closed it.
Way to go us. Our xmmap did not expect release_pack_memory to
close the fd it's about to map...
mmap2(NULL, 31020635, PROT_READ, MAP_PRIVATE, 5, 0x6000 <unfinished ...>
<... mmap2 resumed> ) = -1 EBADF (Bad file descriptor)
And so the Linux kernel happily tells us f' off.
write(2, "fatal: ", 7 <unfinished ...>
<... write resumed> ) = 7
write(2, "Out of memory? mmap failed: Bad "..., 47 <unfinished ...>
<... write resumed> ) = 47
And we report the bad file descriptor error, and not the ENOMEM,
and die, claiming we are out of memory. But actually that mmap
should have succeeded, as we had enough memory for that window,
seeing as how we released the prior one.
Originally when I developed the sliding window mmap feature I had
this exact same bug in fast-import, and I dealt with it by handing
in the struct packed_git* we want to open the new window for, as the
caller wasn't prepared to reopen the packfile if unuse_one_window
closed it. The same is true here from xmmap, but the caller doesn't
have the struct packed_git* handy. So I'm using the file descriptor
instead to perform the same test.
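A sketch of the resulting behaviour (this mirrors the shape of xmmap() and
release_pack_memory() as described; treat it as an illustration rather
than the exact code):

	void *xmmap(void *start, size_t length, int prot, int flags,
		    int fd, off_t offset)
	{
		void *ret = mmap(start, length, prot, flags, fd, offset);
		if (ret == MAP_FAILED) {
			if (!length)
				return NULL;
			/* pass fd so window eviction never closes the very
			 * packfile we are about to map from */
			release_pack_memory(length, fd);
			ret = mmap(start, length, prot, flags, fd, offset);
			if (ret == MAP_FAILED)
				die("Out of memory? mmap failed: %s",
				    strerror(errno));
		}
		return ret;
	}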
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
* 'jc/attr': (28 commits)
lockfile: record the primary process.
convert.c: restructure the attribute checking part.
Fix bogus linked-list management for user defined merge drivers.
Simplify calling of CR/LF conversion routines
Document gitattributes(5)
Update 'crlf' attribute semantics.
Documentation: support manual section (5) - file formats.
Simplify code to find recursive merge driver.
Counto-fix in merge-recursive
Fix funny types used in attribute value representation
Allow low-level driver to specify different behaviour during internal merge.
Custom low-level merge driver: change the configuration scheme.
Allow the default low-level merge driver to be configured.
Custom low-level merge driver support.
Add a demonstration/test of customized merge.
Allow specifying specialized merge-backend per path.
merge-recursive: separate out xdl_merge() interface.
Allow more than true/false to attributes.
Document git-check-attr
Change attribute negation marker from '!' to '-'.
...
|
|
* lt/gitlink:
Tests for core subproject support
Expose subprojects as special files to "git diff" machinery
Fix some "git ls-files -o" fallout from gitlinks
Teach "git-read-tree -u" to check out submodules as a directory
Teach git list-objects logic to not follow gitlinks
Fix gitlink index entry filesystem matching
Teach "git-read-tree -u" to check out submodules as a directory
Teach git list-objects logic not to follow gitlinks
Don't show gitlink directories when we want "other" files
Teach git-update-index about gitlinks
Teach directory traversal about subprojects
Fix thinko in subproject entry sorting
Teach core object handling functions about gitlinks
Teach "fsck" not to follow subproject links
Add "S_IFDIRLNK" file mode infrastructure for git links
Add 'resolve_gitlink_ref()' helper function
Avoid overflowing name buffer in deep directory structures
diff-lib: use ce_mode_from_stat() rather than messing with modes manually
|
|
Signed-off-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
... which consists of existing code split out of packed_delta_info()
so that other callers can use it as well.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This teaches the really fundamental core SHA1 object handling routines
about gitlinks. We can compare trees with gitlinks in them (although we
can not actually generate patches for them yet - just raw git diffs),
and they show up as commits in "git ls-tree".
We also know to compare gitlinks as if they were directories (i.e. the
normal "sort as trees" rules apply).
[jc: amended a cut&paste error]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
With this patch, packs larger than 4GB are usable, even on a 32-bit machine
(at least on Linux). If off_t is not large enough to deal with a large
pack then die() is called instead of attempting to use the pack and
producing garbage.
This was tested with an 8GB pack specially created for the occasion on
a 32-bit machine.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This patch introduces the MSB() macro to obtain the desired number of
most significant bits from a given variable independently of the variable
type.
It is then used to better implement the overflow test on the OBJ_OFS_DELTA
base offset variable with the property of always working correctly
regardless of the type/size of that variable.
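As an illustration, a macro of this shape and the overflow test it makes
possible (a sketch consistent with the description; the exact definition
may differ):

	/* the 'bits' most significant bits of x, whatever the type of x */
	#define MSB(x, bits) ((x) & (typeof(x))(~0ULL << (sizeof(x) * 8 - (bits))))

	/* e.g. before shifting an OBJ_OFS_DELTA base offset left by 7 bits,
	 * make sure the top 7 bits are not already in use: */
	if (MSB(base_offset, 7))
		die("offset value overflow in delta base offset");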
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The coming index format change doesn't allow for the number of objects
to be determined from the size of the index file directly. Instead, let's
initialize a field in the packed_git structure with the object count when
the index is validated since the count is always known at that point.
While at it let's reorder some struct packed_git fields to avoid padding
due to needed 64-bit alignment for some of them.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Let's avoid the open coded pack index reference in pack-object and use
nth_packed_object_sha1() instead. This will help encapsulating index
format differences in one place.
And while at it there is no reason to copy SHA1's over and over while a
direct pointer into the index will do just fine.
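For illustration, the intended access pattern - nth_packed_object_sha1()
hands back a pointer into the mapped index (or NULL when the position is
out of range), so no copying is needed:

	const unsigned char *sha1 = nth_packed_object_sha1(p, i);
	if (!sha1)
		die("internal error: pack index for %s out of range", p->pack_name);
	/* use 'sha1' directly; no need to memcpy it into a local buffer */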
Signed-off-by: Nicolas Pitre <nico@cam.org>
Acked-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This is in the same spirit as the earlier fix to write_sha1_from_fd().
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
I stumbled across this in the context of the fchmod 0444 patch.
At first, I was going to unlink and call error like the two subsequent
tests do, but a failed write (above) provokes a "die", so I made
this do the same. This is testing for a write failure, after all.
Signed-off-by: Jim Meyering <jim@meyering.net>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
... like it is done everywhere else.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
When some operations are interrupted (or "die()'d" or crashed) then the
partial object/pack/index file may remain around. Make it more obvious
in their name that those files are temporary stuff and can be cleaned up
if no operation is in progress.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
When creating a new object, we use "deflate(stream, Z_FINISH)" in a loop
until it no longer returns Z_OK, and then we do "deflateEnd()" to finish
up business.
That should all work, but the fact is, it's not how you're _supposed_ to
use the zlib return values properly:
 - deflate() should never return Z_OK in the first place, except if we
   need to increase the output buffer size (which we're not doing, and
   should never need to do, since we pre-allocated a buffer that is
   supposed to be able to hold the output in full). So the "while()" loop
   was incorrect: Z_OK doesn't actually mean "ok, continue", it means "ok,
   allocate more memory for me and continue"!
 - if we got an error return, we would consider it to be end-of-stream,
   but it could be some internal zlib error. In short, we should check
   for Z_STREAM_END explicitly, since that's the only valid return value
   anyway for the Z_FINISH case.
 - we never checked deflateEnd() return codes at all.
Now, admittedly, none of these issues should ever happen, unless there is
some internal bug in zlib. So this patch should make zero difference, but
it seems to be the right thing to do.
We should probably be anal and check the return value of "deflateInit()"
too!
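A sketch of the stricter checking this describes (with the output buffer
sized via deflateBound(), Z_STREAM_END is the only acceptable result of a
Z_FINISH call):

	ret = deflate(&stream, Z_FINISH);
	if (ret != Z_STREAM_END)
		die("unable to deflate new object (%d)", ret);
	ret = deflateEnd(&stream);
	if (ret != Z_OK)
		die("deflateEnd on new object failed (%d)", ret);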
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Use hash_sha1_file() instead of duplicating code to compute object SHA1.
While at it make it accept a const pointer.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The thing is, if the output buffer is empty, we should *still* actually
use the zlib routines to *unpack* that empty output buffer.
But we had a test that said "only unpack if we still expect more output".
So we wouldn't use up all the zlib stream, because we felt that we didn't
need it, because we already had all the bytes we wanted. And it was
"true": we did have all the output data. We just needed to also eat all
the input data!
We've had this bug before - thinking that we don't need to inflate()
anything because we already had it all...
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
This provides a smoother degradation in performance when the cache
gets trashed due to the delta_base_cache_limit being reached. Limited
testing with really small delta_base_cache_limit values appears to confirm
this.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
Currently there are 3 different ways to deal with the cache size.
Let's stick to only one. The compiler is smart enough to produce the exact
same code in those cases anyway.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
The new configuration variable core.deltaBaseCacheLimit allows the
user to control how much memory they are willing to give to Git for
caching base objects of deltas. This is not normally meant to be
a user tweakable knob; the "out of the box" settings are meant to
be suitable for almost all workloads.
We default to 16 MiB under the assumption that the cache is not
meant to consume all of the user's available memory, and that the
cache's main purpose was to cache trees, for faster path limiters
during revision traversal. Since trees tend to be relatively small
objects, this relatively small limit should still allow a large
number of objects.
On the other hand we don't want the cache to start storing 200
different versions of a 200 MiB blob, as this could easily blow
the entire address space of a 32 bit process.
We evict OBJ_BLOB from the cache first (credit goes to Junio) as
we want to favor OBJ_TREE within the cache. These are the objects
that have the highest inflate() startup penalty, as they tend to
be small and thus don't have that much of a chance to amortize
that penalty over the entire data.
Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|
|
A malloc() + memcpy() will always be faster than mmap() +
malloc() + inflate(). If the data is already there it is
certainly better to copy it straight away.
With this patch below I can do 'git log drivers/scsi/ >
/dev/null' about 7% faster. I bet it might be even more on
those platforms with bad mmap() support.
Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
|