path: root/object.c
Age  Commit message  Author  Files  Lines
2013-05-29  Merge branch 'jk/lookup-object-prefer-latest'  Junio C Hamano  1  -2/+12
Optimizes object lookup when the object hashtable starts to become crowded.

* jk/lookup-object-prefer-latest:
  lookup_object: prioritize recently found objects
2013-05-02  lookup_object: prioritize recently found objects  Jeff King  1  -2/+12
The lookup_object function is backed by a hash table of all objects we have seen in the program. We manage collisions with a linear walk over the colliding entries, checking each with hashcmp(). The main cost of lookup is in these hashcmp() calls; finding our item in the first slot is cheaper than finding it in the second slot, which is cheaper than the third, and so on.

If we assume that there is some locality to the object lookups (e.g., if X and Y collide, and we have just looked up X, the next lookup is more likely to be for X than for Y), then we can improve our average lookup speed by checking X before Y.

This patch does so by swapping a found item to the front of the collision chain. The p0001 perf test reveals that this does indeed exploit locality in the case of "rev-list --all --objects":

    Test                              origin           this tree
    -------------------------------------------------------------------------
    0001.1: rev-list --all            0.40(0.38+0.02)  0.40(0.36+0.03)  +0.0%
    0001.2: rev-list --all --objects  2.24(2.17+0.05)  1.86(1.79+0.05)  -17.0%

This is not surprising, as the full object traversal will hit the same tree entries over and over (e.g., for every commit that doesn't change "Documentation/", we will have to look up the same sha1 just to find out that we already processed it).

The reason why this technique works (and does not violate any properties of the hash table) is subtle and bears some explanation. Let's imagine we get a lookup for sha1 `X`, and it hashes to bucket `i` in our table. That stretch of the table may look like:

    index       | i-1 |  i  | i+1 | i+2 |
    -----------------------------------------
    entry   ... |  A  |  B  |  C  |  X  | ...

We start our probe at i, see that B does not match, nor does C, and finally find X. There may be multiple C's in the middle, but we know that there are no empty slots (or else we would not find X at all).

We do not know the original index of B; it may be `i`, or it may be less than i (e.g., if it were `i-1`, it would collide with A and spill over into the `i` bucket). So it is acceptable for us to move it to the right of a contiguous stretch of entries (because we will find it from a linear walk starting anywhere at `i` or before), but never to the left (if we moved it to `i-1`, we would miss it when starting our walk at `i`).

We do know the original index of X; it is `i`, so it is safe to place it anywhere in the contiguous stretch between `i` and where we found it (`i+2` in this case).

This patch does a pure swap; after finding X in the situation above, we would end with:

    index       | i-1 |  i  | i+1 | i+2 |
    -----------------------------------------
    entry   ... |  A  |  X  |  C  |  B  | ...

We could instead bump X into the `i` slot, and then shift the whole contiguous chain down by one, resulting in:

    index       | i-1 |  i  | i+1 | i+2 |
    -----------------------------------------
    entry   ... |  A  |  X  |  B  |  C  | ...

That puts our chain in true most-recently-used order. However, experiments show that it is not any faster (and in fact, is slightly slower due to the extra manipulation).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
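The mechanics are easier to see in code. Below is a small, self-contained sketch (not the actual object.c code) of an open-addressed table whose lookup swaps a hit into the first slot probed; obj_hash, obj_hash_size and hash_obj() are simplified stand-ins for the real structures in object.c.

    #include <string.h>

    struct object { unsigned char sha1[20]; };

    static struct object **obj_hash;      /* open-addressed table of pointers */
    static unsigned int obj_hash_size;

    static unsigned int hash_obj(const unsigned char *sha1, unsigned int n)
    {
            unsigned int hash;
            memcpy(&hash, sha1, sizeof(hash)); /* leading bytes of the name are already random */
            return hash % n;
    }

    struct object *lookup_object(const unsigned char *sha1)
    {
            unsigned int i, first;
            struct object *obj;

            if (!obj_hash_size)
                    return NULL;

            first = i = hash_obj(sha1, obj_hash_size);
            while ((obj = obj_hash[i]) != NULL) {
                    if (!memcmp(sha1, obj->sha1, 20))
                            break;                  /* found it in slot i */
                    if (++i == obj_hash_size)
                            i = 0;                  /* wrap around */
            }
            if (obj && first != i) {
                    /*
                     * Swap the found entry into the slot we probed first.
                     * The displaced entry stays within the same contiguous
                     * run, so a walk starting at its home bucket still
                     * reaches it.
                     */
                    obj_hash[i] = obj_hash[first];
                    obj_hash[first] = obj;
            }
            return obj;
    }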
2013-03-17  avoid segfaults on parse_object failure  Jeff King  1  -0/+10
Many call-sites of parse_object assume that they will get a non-NULL return value; this is not the case if we encounter an error while parsing the object.

This patch adds a wrapper function around parse_object that handles dying automatically, and uses it anywhere we immediately try to access the return value as a non-NULL pointer (i.e., anywhere that we would currently segfault).

This wrapper may also be useful in other places. The most obvious one is code like:

    o = parse_object(sha1);
    if (!o)
            die(...);

However, these should not be mechanically converted to parse_object_or_die, as the die message is sometimes customized. Later patches can address these sites on a case-by-case basis.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
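A minimal sketch of the wrapper along these lines (parse_object(), die() and sha1_to_hex() are git helpers from cache.h, assumed here rather than redefined):

    struct object *parse_object_or_die(const unsigned char *sha1,
                                       const char *name)
    {
            struct object *o = parse_object(sha1);
            if (o)
                    return o;
            /* name is optional; fall back to the hex object name */
            die("unable to parse object: %s", name ? name : sha1_to_hex(sha1));
    }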
2012-04-30  remove superfluous newlines in error messages  Pete Wyckoff  1  -3/+3
The error handling routines add a newline. Remove the duplicate ones in error messages. Signed-off-by: Pete Wyckoff <pw@padd.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-04-24  Merge branch 'hv/submodule-recurse-push'  Junio C Hamano  1  -0/+11
"git push --recurse-submodules" learns to optionally look into the histories of submodules bound to the superproject and push them out. By Heiko Voigt.

* hv/submodule-recurse-push:
  push: teach --recurse-submodules the on-demand option
  Refactor submodule push check to use string list instead of integer
  Teach revision walking machinery to walk multiple times sequentially
2012-03-30  Teach revision walking machinery to walk multiple times sequentially  Heiko Voigt  1  -0/+11
Previously it was not possible to iterate revisions twice using the revision walking API. We add a reset_revision_walk() which clears the used flags. This allows us to do multiple sequential revision walks.

We add the appropriate calls to the existing submodule machinery doing revision walks. This is done to avoid surprises if future code wants to call these functions more than once during the process's lifetime.

Signed-off-by: Heiko Voigt <hvoigt@hvoigt.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
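Roughly, the helpers look like this (a sketch; struct object, obj_hash and obj_hash_size are object.c internals, and SEEN/ADDED/SHOWN are flag bits from revision.h):

    /* object.c: clear the given flag bits on every object we have in core */
    void clear_object_flags(unsigned flags)
    {
            int i;

            for (i = 0; i < obj_hash_size; i++) {
                    struct object *obj = obj_hash[i];
                    if (obj)
                            obj->flags &= ~flags;
            }
    }

    /* revision.c: forget the traversal state so a second walk starts fresh */
    void reset_revision_walk(void)
    {
            clear_object_flags(SEEN | ADDED | SHOWN);
    }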
2012-03-07  parse_object: avoid putting whole blob in core  Nguyễn Thái Ngọc Duy  1  -0/+11
Traditionally, all the callers of check_sha1_signature() first called read_sha1_file() to prepare the whole object data in core, and then called this function. The function is used to revalidate that what we read from the object database actually matches the object name we used to ask for the data.

Update the API to allow callers to pass NULL as the object data, and have the function read and hash the object data using the streaming API to recompute the object name, without having to hold everything in core at the same time. This is most useful in parse_object() when it parses a blob object, because that caller does not have to keep the actual blob data around in memory after a "struct blob" is returned.

Signed-off-by: Nguyễn Thái Ngọc Duy <pclouds@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2012-01-05  parse_object: try internal cache before reading object db  Jeff King  1  -2/+7
When parse_object is called, we do the following:

  1. read the object data into a buffer via read_sha1_file

  2. call parse_object_buffer, which then:

     a. calls the appropriate lookup_{commit,tree,blob,tag} to either create a new "struct object", or to find an existing one. We know the appropriate type from the lookup in step 1.

     b. calls the appropriate parse_{commit,tree,blob,tag} to parse the buffer for the new (or existing) object

In step 2b, all of the called functions are no-ops for object "X" if "X->object.parsed" is set. I.e., when we have already parsed an object, we end up going to a lot of work just to find out at a low level that there is nothing left for us to do (and we throw away the data from read_sha1_file unread).

We can optimize this by moving the check for "do we have an in-memory object" from 2a to before the expensive call to read_sha1_file in step 1. This might seem circular, since step 2a uses the type information determined in step 1 to call the appropriate lookup function. However, we can notice that all of the lookup_* functions are backed by lookup_object. In other words, all of the objects are kept in a master hash table, and we don't actually need the type to do the "do we have it" part of the lookup, only to do the "and create it if it doesn't exist" part.

This can save time whenever we call parse_object on the same sha1 twice in a single program. Some code paths already perform this optimization manually, with either:

    if (!obj->parsed)
            obj = parse_object(obj->sha1);

if you already have a "struct object", or:

    struct object *obj = lookup_unknown_object(sha1);
    if (!obj || !obj->parsed)
            obj = parse_object(sha1);

if you don't. This patch moves the optimization into parse_object itself.

Most git operations won't notice any impact. Either they don't parse a lot of duplicate sha1s, or the calling code takes special care not to re-parse objects. I timed two code paths that do benefit (there may be more, but these two were immediately obvious and easy to time).

The first is fast-export, which calls parse_object on each object it outputs, like this:

    object = parse_object(sha1);
    if (!object)
            die(...);
    if (object->flags & SHOWN)
            return;

which means that just to realize we have already shown an object, we will read the whole object from disk! With this patch, my best-of-five time for "fast-export --all" on git.git dropped from 26.3s to 21.3s.

The second case is upload-pack, which will call parse_object for each advertised ref (because it needs to peel tags to show "^{}" entries). This doesn't matter for most repositories, because they don't have a lot of refs pointing to the same objects. However, if you have a big alternates repository with a shared object db for a number of child repositories, then the alternates repository will have duplicated refs representing each of its children. For example, GitHub's alternates repository for git.git has ~120,000 refs, of which only ~3200 are unique. The time for upload-pack to print its list of advertised refs dropped from 3.4s to 0.76s.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
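The resulting shape of parse_object() is roughly the following simplified sketch (it omits the replace-object and blob-streaming handling the real function also does; read_sha1_file, parse_object_buffer and lookup_object come from git's cache.h/object.h):

    struct object *parse_object(const unsigned char *sha1)
    {
            unsigned long size;
            enum object_type type;
            int eaten;
            void *buffer;
            struct object *obj;

            /* Step 0 (new): ask the in-memory hash table first. */
            obj = lookup_object(sha1);
            if (obj && obj->parsed)
                    return obj;             /* no disk access needed */

            /* Step 1: read the object data from the object database. */
            buffer = read_sha1_file(sha1, &type, &size);
            if (!buffer)
                    return NULL;

            /* Step 2: create or fill in the struct and parse the buffer. */
            obj = parse_object_buffer(sha1, type, size, buffer, &eaten);
            if (!eaten)
                    free(buffer);
            return obj;
    }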
2011-11-16  receive-pack, fetch-pack: reject bogus pack that records objects twice  Junio C Hamano  1  -0/+2
When receive-pack & fetch-pack are run and store the pack obtained over the wire to a local repository, they internally run the index-pack command with the --strict option. Make sure that we reject incoming packfile that records objects twice to avoid spreading such a damage. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-05-15  read_sha1_file(): get rid of read_sha1_file_repl() madness  Junio C Hamano  1  -2/+2
Most callers want to silently get a replacement object, and they do not care what the real name of the replacement object is. Worse yet, no sane interface to return the underlying object without replacement is provided. Remove the function and make only the few callers that want the name of the replacement object find it themselves. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-09-06  Merge branch 'maint'  Junio C Hamano  1  -4/+4
* maint:
  tag.c: whitespace breakages fix
  Fix whitespace issue in object.c
  t5505: add missing &&
2010-09-06  Merge branch 'xx/trivial' into maint  Junio C Hamano  1  -4/+4
* xx/trivial:
  tag.c: whitespace breakages fix
  Fix whitespace issue in object.c
  t5505: add missing &&
2010-09-05  Fix whitespace issue in object.c  Jared Hance  1  -4/+4
Change some expanded tabs (spaces) to tabs in object.c. Signed-off-by: Jared Hance <jaredhance@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-09-03  Merge branch 'nd/maint-fix-replace'  Junio C Hamano  1  -1/+1
* nd/maint-fix-replace:
  parse_object: pass on the original sha1, not the replaced one
2010-09-03  parse_object: pass on the original sha1, not the replaced one  Nguyễn Thái Ngọc Duy  1  -1/+1
Commit 0e87c36 (object: call "check_sha1_signature" with the replacement sha1) changed the first argument passed to parse_object_buffer() from "sha1" to "repl". With that change, the returned obj pointer has the replacement SHA1 in obj->sha1, not the original one.

But when using lookup_commit() and then parse_commit() on a commit, we get an object pointer with the original sha1, while the commit content comes from the replacement commit. So the result we get from using parse_object() is different from what we get from using lookup_commit() followed by parse_commit().

It looks much simpler and safer to fix this inconsistency by passing "sha1" to parse_object_buffer() instead of "repl".

The commit comment should be used to tell that the replacement commit is replacing another commit, and why. So it should be easy to see that we have a replacement commit instead of an original one. And it is not a problem if the content of the commit is not consistent with the sha1, as cat-file piped to hash-object can be used to see the difference.

Signed-off-by: Christian Couder <chriscool@tuxfamily.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-04-19fix "bundle --stdin" segfaultLibravatar Jonathan Nieder1-2/+2
When passed an empty list, objects_array_remove_duplicates() corrupts it by changing the number of entries from 0 to 1. The problem lies in the condition of its main loop: for (ref = 0; ref < array->nr - 1; ref++) { The loop body manipulates the supplied object array. In the case of an empty array, it should not be doing anything at all. But array->nr is an unsigned quantity, so the code enters the loop, in particular increasing array->nr. Fix this by comparing (ref + 1 < array->nr) instead. This bug can be triggered by git bundle --stdin: $ echo HEAD | git bundle create some.bundle --stdin’ Segmentation fault (core dumped) The list of commits to bundle appears to be empty because of another bug: by the time the revision-walking machinery gets to look at it, standard input has already been consumed by rev-list, so this function gets an empty list of revisions. After this patch, git bundle --stdin still does not work; it just doesn’t segfault any more. Reported-by: Joey Hess <joey@kitenet.net> Signed-off-by: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
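A tiny standalone program illustrates why the original bound misbehaves on an empty array and why the rewritten bound does not (array->nr is modeled here by the unsigned variable nr):

    #include <stdio.h>

    int main(void)
    {
            unsigned int nr = 0;    /* an empty object array */
            unsigned int ref = 0;

            /* Buggy bound: nr - 1 underflows to UINT_MAX, so the loop would run. */
            printf("buggy condition (ref < nr - 1): %d\n", ref < nr - 1);

            /* Fixed bound: the addition happens on the side that cannot underflow. */
            printf("fixed condition (ref + 1 < nr): %d\n", ref + 1 < nr);
            return 0;
    }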
2010-01-17  object.c: remove unused functions  Junio C Hamano  1  -21/+0
object_list_append() and object_list_length() are not used anywhere.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-05-31  object: call "check_sha1_signature" with the replacement sha1  Christian Couder  1  -4/+5
Otherwise we get a "sha1 mismatch" error for replaced objects. Signed-off-by: Christian Couder <chriscool@tuxfamily.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-05-20  Unify signedness in hashing calls  Dan McGee  1  -4/+4
Our hash_obj and hashtable_index calls and functions were doing a lot of funny things with signedness. Unify all of it to 'unsigned int'. Signed-off-by: Dan McGee <dpmcgee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2009-05-16  Fix type-punning issues  Dan McGee  1  -1/+2
In these two places we are casting part of our unsigned char sha1 array into an unsigned int, which violates GCC's strict-aliasing rules (and probably those of other compilers).

Signed-off-by: Dan McGee <dpmcgee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
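The kind of change involved, in sketch form: read the leading bytes of the hash with memcpy() instead of casting the unsigned char array to an unsigned int pointer (the cast is what strict aliasing, and on some platforms alignment, forbids). The function names are illustrative, not git's.

    #include <string.h>

    /* Before (sketch): undefined behaviour under strict aliasing */
    static unsigned int hash_obj_cast(const unsigned char *sha1, unsigned int n)
    {
            unsigned int hash = *(const unsigned int *)sha1;  /* type-punning cast */
            return hash % n;
    }

    /* After (sketch): copy the bytes into a properly typed object */
    static unsigned int hash_obj_memcpy(const unsigned char *sha1, unsigned int n)
    {
            unsigned int hash;
            memcpy(&hash, sha1, sizeof(unsigned int));
            return hash % n;
    }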
2009-01-17  bundle: allow the same ref to be given more than once  Junio C Hamano  1  -0/+19
"git bundle create x master master" used to create a bundle that lists the same branch (master) twice. Cloning from such a bundle resulted in a needless warning "warning: Duplicated ref: refs/remotes/origin/master". Signed-off-by: Junio C Hamano <gitster@pobox.com>
2008-02-03  parse_object_buffer: don't ignore errors from the object-specific parsing functions  Martin Koegler  1  -4/+8
In the case of a malformed object, the object-specific parsing functions would return an error, which is currently ignored. The object can be partially initialized in this case. This patch makes parse_object_buffer propagate such errors.

Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-12-22  Don't dereference NULL upon lookup failure.  Jim Meyering  1  -13/+22
Instead, signal the error just like the case we do upon encountering an object with an unknown type. Signed-off-by: Jim Meyering <meyering@redhat.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-06-06  Don't assume tree entries that are not dirs are blobs  Sam Vilain  1  -0/+3
When scanning the trees in track_tree_refs() there is a "lazy" test that assumes that entries are either directories or files. Don't do that. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2007-05-24  Merge branch 'maint-1.5.1' into maint  Junio C Hamano  1  -0/+1
* maint-1.5.1:
  fix memory leak in parse_object when check_sha1_signature fails
  name-rev: tolerate clock skew in committer dates
2007-05-24  fix memory leak in parse_object when check_sha1_signature fails  Carlos Rica  1  -0/+1
When check_sha1_signature fails, the program is not terminated: it prints an error message and returns NULL, so the buffer returned by read_sha1_file should be freed before returning.

Signed-off-by: Carlos Rica <jasampler@gmail.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-24  add add_object_array_with_mode  Martin Koegler  1  -0/+6
Each object in struct object_array is extended with the mode. If not specified, S_IFINVALID is used. An object with a mode value can be added with add_object_array_with_mode.

Signed-off-by: Martin Koegler <mkoegler@auto.tuwien.ac.at>
Signed-off-by: Junio C Hamano <junkio@cox.net>
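In sketch form, the entry gains a mode field and the old entry point becomes a thin wrapper (names follow the commit message; struct object, struct object_array and S_IFINVALID are assumed from git's object.h/cache.h, and the exact layout here is illustrative):

    struct object_array_entry {
            struct object *item;
            const char *name;
            unsigned mode;          /* S_IFINVALID when no mode was given */
    };

    void add_object_array_with_mode(struct object *obj, const char *name,
                                    struct object_array *array, unsigned mode);

    void add_object_array(struct object *obj, const char *name,
                          struct object_array *array)
    {
            add_object_array_with_mode(obj, name, array, S_IFINVALID);
    }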
2007-04-16  Clean up object creation to use more common code  Linus Torvalds  1  -7/+7
This replaces the fairly odd "created_object()" function that did _most_ of the object setup with a more complete "create_object()" function that also has a more natural calling convention. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-04-16  Use proper object allocators for unknown object nodes too  Linus Torvalds  1  -12/+3
We used to use a different allocator scheme for when we didn't know the object type. That meant that objects that were created without any up-front knowledge of the type would not go through the same allocation paths as normal object allocations, and would miss out on the statistics. But perhaps more importantly than the statistics (that are useful when looking at memory usage but not much else), if we want to make the object hash tables use a denser object pointer representation, we need to make sure that they all go through the same blocking allocator. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-03-20  Don't ever return corrupt objects from "parse_object()"  Linus Torvalds  1  -2/+4
Looking at the SHA1 validation code due to the corruption that Alexander Litvinov is seeing under Cygwin, I notice that in one of the most central places where we read objects, we actually do end up verifying the SHA1 of the result, but then we happily parse it anyway.

And using "printf" to write the error message means that it not only can get lost, but will actually mess up stdout, and cause other strange and hard-to-debug failures downstream.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27  get rid of lookup_object_type()  Nicolas Pitre  1  -18/+0
This function is called only once in the whole source tree. Let's move its code inline instead, which is also in the spirit of removing as many of the object type char arrays as possible (not that this patch does anything for that, but at least it is now a local matter).

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27  convert object type handling from a string to a number  Nicolas Pitre  1  -8/+8
We currently have two parallel notations for dealing with object types in the code: a string and a numerical value. One of them is obviously redundant, and the most used one requires more stack space and a bunch of strcmp() calls all over the place.

This is an initial step for the removal of the version using a char array found in object reading code paths. The patch is unfortunately large, but there is no sane way to split it in smaller parts without breaking the system.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2007-02-27  formalize typename(), and add its reverse type_from_string()  Nicolas Pitre  1  -3/+23
Sometimes typename() is used, sometimes type_names[] is accessed directly. Let's enforce typename() all the time, which allows for validating the type.

Also let's add a function to go from a name to a type and use it instead of a manual memcpy() when appropriate.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
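A sketch of the two directions of the mapping (the real table, names and error handling live in object.c; the details here are illustrative):

    #include <string.h>

    static const char *type_names[] = {
            "none", "commit", "tree", "blob", "tag",
    };
    #define NR_TYPES (sizeof(type_names) / sizeof(type_names[0]))

    /* number -> name, validating the range instead of indexing blindly */
    const char *typename(unsigned int type)
    {
            return type < NR_TYPES ? type_names[type] : NULL;
    }

    /* name -> number, replacing ad-hoc memcmp()/strcmp() at call sites */
    int type_from_string(const char *str)
    {
            unsigned int i;

            for (i = 1; i < NR_TYPES; i++)
                    if (!strcmp(str, type_names[i]))
                            return i;
            return -1;      /* the real function die()s on an unknown name */
    }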
2006-09-16  Add git-for-each-ref: helper for language bindings  Junio C Hamano  1  -27/+41
This adds a new command, git-for-each-ref. You can have it iterate over refs and have it output various aspects of the objects they refer to. Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-27  Use xcalloc instead of calloc  Jonas Fonseca  1  -1/+1
Signed-off-by: Jonas Fonseca <fonseca@diku.dk> Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-23  Convert memcpy(a,b,20) to hashcpy(a,b).  Shawn Pearce  1  -1/+1
This abstracts away the size of the hash values when copying them from memory location to memory location, much as the introduction of hashcmp abstracted away hash value comparison.

A few call sites were using char* rather than unsigned char*, so I added the cast rather than opening up hashcpy to take void*. This is a reasonable tradeoff, as most call sites already use unsigned char* and the existing hashcmp is also declared to take unsigned char*.

[jc: Split the patch to the "master" part, to be followed by a patch for merge-recursive.c which is not in "master" yet. Fixed the cast in the latter hunk to combine-diff.c which was wrong in the original. Also converted ones left over in combine-diff.c, diff-lib.c and upload-pack.c.]

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-08-17  Do not use memcmp(sha1_1, sha1_2, 20) with hardcoded length.  David Rientjes  1  -1/+1
Introduces a global inline:

    hashcmp(const unsigned char *sha1, const unsigned char *sha2)

Uses memcmp for comparison and returns the result based on the length of the hash name (a future runtime decision).

Acked-by: Alex Riesen <raa.lkml@gmail.com>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Junio C Hamano <junkio@cox.net>
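This helper, and the hashcpy() from the entry above, are thin inline wrappers over the mem*() calls they replace; roughly (the hardcoded 20 is the SHA-1 length the abstraction is meant to hide behind one definition):

    #include <string.h>

    static inline int hashcmp(const unsigned char *sha1, const unsigned char *sha2)
    {
            return memcmp(sha1, sha2, 20);
    }

    static inline void hashcpy(unsigned char *sha_dst, const unsigned char *sha_src)
    {
            memcpy(sha_dst, sha_src, 20);
    }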
2006-07-12  Remove TYPE_* constant macros and use object_type enums consistently.  Linus Torvalds  1  -3/+4
This updates the type-enumeration constants introduced to reduce the memory footprint of "struct object" to match the type bits already used in the packfile format, by removing the former (i.e. TYPE_* constant macros) and using the latter (i.e. enum object_type) throughout the code for consistency.

Eventually we can stop passing around the "type strings" entirely, and this will help - no confusion about two different integer enumerations.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
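The numeric values deliberately match the object type codes used in the packfile format; the core of the enum looks like this (packfile delta types omitted):

    enum object_type {
            OBJ_BAD = -1,
            OBJ_NONE = 0,
            OBJ_COMMIT = 1,
            OBJ_TREE = 2,
            OBJ_BLOB = 3,
            OBJ_TAG = 4,
            /* 5 is reserved; 6 and 7 are the packfile delta types */
    };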
2006-07-01  git object hash cleanups  Linus Torvalds  1  -44/+53
This IMNSHO cleans up the object hashing. The hash expansion is separated out into a function of its own, the hash array (and size) names are made more obvious, and the code is generally made to look a bit more like the object-ref hashing. It also gets rid of "find_object()" returning an index (or negative position if no object is found), since that is made redundant by the simplified object rehashing. The basic operation is now "lookup_object()" which just returns the object itself. There's an almost unmeasurable speed increase, but more importantly, I think the end result is more readable. Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-29  Abstract out accesses to object hash array  Linus Torvalds  1  -3/+12
There are a few special places where some programs accessed the object hash array directly, which bothered me because I wanted to play with some simple re-organizations. So this patch makes the object hash array data structures all entirely local to object.c, and the few users who wanted to look at it now get to use a function to query how many object index entries there can be, and to actually access the array. Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
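The accessors this refers to are tiny; roughly (a sketch of the pair exported from object.c, with obj_hash and obj_hash_size being the now-private table):

    /* how many slots the object hash currently has */
    unsigned int get_max_object_index(void)
    {
            return obj_hash_size;
    }

    /* fetch the object (possibly NULL) stored in a given slot */
    struct object *get_indexed_object(unsigned int idx)
    {
            return obj_hash[idx];
    }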
2006-06-19Add "named object array" conceptLibravatar Linus Torvalds1-0/+17
We've had this notion of a "object_list" for a long time, which eventually grew a "name" member because some users (notably git-rev-list) wanted to name each object as it is generated. That object_list is great for some things, but it isn't all that wonderful for others, and the "name" member is generally not used by everybody. This patch splits the users of the object_list array up into two: the traditional list users, who want the list-like format, and who don't actually use or want the name. And another class of users that really used the list as an extensible array, and generally wanted to name the objects. The patch is fairly straightforward, but it's also biggish. Most of it really just cleans things up: switching the revision parsing and listing over to the array makes things like the builtin-diff usage much simpler (we now see exactly how many members the array has, and we don't get the objects reversed from the order they were on the command line). One of the main reasons for doing this at all is that the malloc overhead of the simple object list was actually pretty high, and the array is just a lot denser. So this patch brings down memory usage by git-rev-list by just under 3% (on top of all the other memory use optimizations) on the mozilla archive. It does add more lines than it removes, and more importantly, it adds a whole new infrastructure for maintaining lists of objects, but on the other hand, the new dynamic array code is pretty obvious. The change to builtin-diff-tree.c shows a fairly good example of why an array interface is sometimes more natural, and just much simpler for everybody. Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
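The extensible-array flavour looks roughly like this (a sketch of the shape described above; the exact fields have changed over time):

    struct object_array {
            unsigned int nr;        /* entries in use */
            unsigned int alloc;     /* entries allocated */
            struct object_array_entry {
                    struct object *item;
                    const char *name;   /* e.g. the ref or path it came from */
            } *objects;
    };

    /* contrast with the traditional linked list, which carries no name */
    struct object_list {
            struct object *item;
            struct object_list *next;
    };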
2006-06-18Remove "refs" field from "struct object"Libravatar Linus Torvalds1-70/+0
This shrinks "struct object" to the absolutely minimal size possible. It now contains /only/ the object flags and the SHA1 hash name of the object. The "refs" field, which is really needed only for fsck, is maintained in a separate hashed lookup-table, allowing all normal users to totally ignore it. This helps memory usage, although not as much as I hoped: it looks like the allocation overhead of malloc (and the alignment constraints in particular) means that while the structure size shrinks, the actual allocation overhead mostly does not. [ That said: memory usage is actually down, but not as much as it should be: I suspect just one of the object types actually ended up shrinking its effective allocation size. To get to the next level, we probably need specialized allocators that don't pad the allocation more than necessary. ] The separation makes for some code cleanup, though, and makes the ref tracking that fsck wants a clearly separate thing. Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-06-17  Shrink "struct object" a bit  Linus Torvalds  1  -2/+6
This shrinks "struct object" by a small amount, by getting rid of the "struct type *" pointer and replacing it with a 3-bit bitfield instead. In addition, we merge the bitfields and the "flags" field, which incidentally should also remove a useless 4-byte padding from the object when in 64-bit mode.

Now, our "struct object" is still too damn large, but it's now less obviously bloated, and of the remaining fields, only the "util" (which is not used by most things) is clearly something that should be eventually discarded.

This shrinks the "git-rev-list --all" memory use by about 2.5% on the kernel archive (and, perhaps more importantly, on the larger mozilla archive). That may not sound like much, but I suspect it's more on a 64-bit platform.

There are other remaining inefficiencies (the parent lists, for example, probably have horrible malloc overhead), but this was pretty obvious. Most of the patch is just changing the comparison of the "type" pointer from one of the constant string pointers to the appropriate new TYPE_xxx small integer constant.

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-29Make "tree_entry" have a SHA1 instead of a union of object pointersLibravatar Linus Torvalds1-1/+1
This is preparatory work for further cleanups, where we try to make tree_entry look more like the more efficient tree-walk descriptor. Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-05-29Make "struct tree" contain the pointer to the tree bufferLibravatar Linus Torvalds1-1/+4
This allows us to avoid allocating information for names etc, because we can just use the information from the tree buffer directly. Signed-off-by: Linus Torvalds <torvalds@osdl.org> Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-04  Replace xmalloc+memset(0) with xcalloc.  Peter Eriksen  1  -4/+2
Signed-off-by: Peter Eriksen <s022018@student.dtu.dk> Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-04-04  Use blob_, commit_, tag_, and tree_type throughout.  Peter Eriksen  1  -4/+4
This replaces occurrences of "blob", "commit", "tag", and "tree", where they're really used as type specifiers, which we already have defined global constants for.

Signed-off-by: Peter Eriksen <s022018@student.dtu.dk>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-12  Fix object re-hashing  Linus Torvalds  1  -1/+1
The hashed object lookup had a subtle bug in re-hashing: it did

    for (i = 0; i < count; i++)
            if (objs[i]) {
                    .. rehash ..

where "count" was the old hash count. On the face of it this is obvious, since it clearly re-hashes all the old objects.

However, it's wrong. If the last old hash entry before re-hashing was in use (or became in use by the re-hashing), then re-hashing could have inserted an object into the hash entries with idx >= count due to overflow. When we then rehash the last old entry, that old entry might become empty, which means that the overflow entries should be re-hashed again.

In other words, the loop has to be fixed to traverse the whole array, rather than just the old count.

(There's room for a slight optimization: instead of counting all the way up, we can break when we see the first empty slot that is above the old "count". At that point we know we don't have any collisions that we might have to fix up any more. This patch only does the trivial fix.)

[jc: with trivial fix on trivial fix]

Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-12  hashtable-based objects: minimum fixups.  Junio C Hamano  1  -2/+4
Calling hashtable_index from find_object before objs is created would result in division by zero failure. Avoid it. Also the given object name may not be aligned suitably for unsigned int; avoid dereferencing casted pointer. Signed-off-by: Junio C Hamano <junkio@cox.net>
2006-02-12  Use a hashtable for objects instead of a sorted list  Johannes Schindelin  1  -29/+40
In a simple test, this brings down the CPU time from 47 sec to 22 sec. Signed-off-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Junio C Hamano <junkio@cox.net>