author	Jeff King <peff@peff.net>	2017-01-27 17:05:36 -0500
committer	Junio C Hamano <gitster@pobox.com>	2017-01-27 16:25:16 -0800
commit	42b766d765feb2e0867954eb665ff05e3441b547 (patch)
tree	879e7bbabb63619a5d6fefee4c82fc23cf15a1a4 /contrib/fast-import/import-zips.py
parent	pack-objects: enforce --depth limit in reused deltas (diff)
pack-objects: convert recursion to iteration in break_delta_chain()
The break_delta_chain() function is recursive over the depth of a given delta chain, which can lead to possibly running out of stack space. Normally delta depth is quite small, but if there _is_ a pathological case, this is where we would find and fix it, so we should be more careful.

We can do it without recursion at all, but there's a little bit of cleverness needed to do so. It's easiest to explain by covering the less-clever strategies first.

The obvious thing to try is just keeping our own stack on the heap. Whenever we would recurse, push the new entry onto the stack and loop instead. But this gets tricky; when we see an ACTIVE entry, we need to care if we just pushed it (in which case it's a cycle) or if we just popped it (in which case we dealt with its bases, and now we need to clear the ACTIVE flag and compute its depth). You can hack around that in various ways, like keeping a "just pushed" flag, but the logic gets muddled.

However, we can observe that we do all of our pushes first, and then all of our pops afterwards. In other words, we can do this in two passes. First dig down to the base, stopping when we see a cycle, and pushing each item onto our stack. Then pop the stack elements, clearing the ACTIVE flag and computing the depth for each. This works, and is reasonably elegant.

However, why do we need the stack for the second pass? We can just walk the delta pointers again. There's one complication: popping the stack went over our list in reverse, so we could compute the depth of each entry by incrementing the depth of its base, which we will have just computed. To go forward in the second pass, we have to compute the total depth on the way down, and then assign it as we go.

This patch implements this final strategy, because it not only keeps the memory off the stack, it eliminates it entirely. Credit for the cleverness in that approach goes to Michael Haggerty; bugs are mine.
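The final two-pass strategy might be sketched roughly as follows. This is a simplified stand-in, not git's actual code: `struct entry`, `break_chain()`, and the field names are illustrative, and the real function's enforcement of the `--depth` limit is omitted. Only the shape of the two passes — dig down accumulating total depth, then walk the same pointers forward assigning it — matches the description above.

```c
#include <assert.h>
#include <stddef.h>

enum { DFS_NONE = 0, DFS_ACTIVE, DFS_DONE };

/* Illustrative stand-in for git's object_entry; field names are assumptions. */
struct entry {
	struct entry *delta;	/* delta base, or NULL for a full object */
	unsigned depth;		/* length of the delta chain below this entry */
	unsigned dfs_state;	/* DFS_NONE, DFS_ACTIVE, or DFS_DONE */
};

static void break_chain(struct entry *e)
{
	struct entry *cur;
	unsigned total_depth = 0;

	/*
	 * Pass 1: dig down to the base, marking each entry ACTIVE and
	 * accumulating the total depth on the way down.
	 */
	for (cur = e; cur; cur = cur->delta) {
		if (cur->dfs_state == DFS_DONE) {
			/* this base was already processed; reuse its depth */
			total_depth += cur->depth;
			break;
		}
		cur->dfs_state = DFS_ACTIVE;
		if (cur->delta && cur->delta->dfs_state == DFS_ACTIVE)
			cur->delta = NULL;	/* cycle: drop this delta */
		if (cur->delta)
			total_depth++;
	}

	/*
	 * Pass 2: no stack needed -- walk the same delta pointers forward,
	 * assigning the precomputed depth and clearing the ACTIVE flag.
	 */
	for (cur = e; cur && cur->dfs_state == DFS_ACTIVE; cur = cur->delta) {
		cur->depth = total_depth--;
		cur->dfs_state = DFS_DONE;
	}
}
```

With a three-entry chain e -> a -> b, this assigns depths 2, 1, 0; when two entries delta onto each other, the pointer that closes the loop is dropped. Note that no auxiliary storage grows with chain depth, which is the point of the final strategy.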
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Diffstat (limited to 'contrib/fast-import/import-zips.py')
0 files changed, 0 insertions, 0 deletions