path: root/builtin/init-db.c
author    Junio C Hamano <gitster@pobox.com>  2011-05-08 01:47:35 -0700
committer Junio C Hamano <gitster@pobox.com>  2011-05-13 16:11:18 -0700
commit    4dd1fbc7b1df0030f813a05cee19cad2c7a9cbf9 (patch)
tree      6672594f53e8688c10ccff922d8d52c16385ed36 /builtin/init-db.c
parent    index_fd(): split into two helper functions
Bigfile: teach "git add" to send a large file straight to a pack
When adding new content to the repository, we have always slurped the blob in its entirety in-core first, computed the object name, and compressed it into a loose object file. This design has made handling large binary files (e.g. video and audio assets for games) problematic.

In the middle of the "git add" callchain is an internal API, index_fd(), that takes an open file descriptor to the working tree file being added, together with its size. Teach it to call out to fast-import when adding a large blob.

The write-out codepath in entry.c::write_entry() should be taught to stream, instead of reading everything in core. This should not be so hard to implement, especially if we limit ourselves to loose object files and non-delta representations in packfiles.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
Diffstat (limited to 'builtin/init-db.c')
0 files changed, 0 insertions, 0 deletions