path: root/t/t4211/history.export
author    Jeff King <peff@peff.net>	2014-02-12 11:48:28 -0500
committer Junio C Hamano <gitster@pobox.com>	2014-02-12 11:21:29 -0800
commit   6b5b3a27b7faf9d72efec28fa017408daf45cd00 (patch)
tree     eed098a0f65d77d4f11d235009abc26f2939503f /t/t4211/history.export
parent   ewah: support platforms that require aligned reads (diff)
download tgif-6b5b3a27b7faf9d72efec28fa017408daf45cd00.tar.xz
ewah: unconditionally ntohll ewah data
Commit a201c20 tried to optimize out a loop like:

    for (i = 0; i < len; i++)
            data[i] = ntohll(data[i]);

in the big-endian case, because we know that ntohll is a noop, and we
do not need to pay the cost of the loop at all.

However, it mistakenly assumed that __BYTE_ORDER was always defined,
whereas it may not be on systems which do not define it by default,
and where we did not need to define it to set up the ntohll macro.
This includes OS X and Windows.

We could muck with the ordering in compat/bswap.h to make sure it is
defined unconditionally, but it is simpler still to just execute the
loop unconditionally. That avoids the application code knowing
anything about these magic macros, and lets it depend only on having
ntohll defined.

And since the resulting loop looks like (on a big-endian system):

    for (i = 0; i < len; i++)
            data[i] = data[i];

any decent compiler can probably optimize it out.

Original report and analysis by Brian Gernhardt.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
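
For illustration only, here is a minimal, self-contained C sketch of the
pattern the message describes: byte-swapping every 64-bit word of a
serialized buffer with an unconditional loop, never consulting
__BYTE_ORDER. This is not git's compat/bswap.h; the words_to_host()
helper and the fallback ntohll() below are hypothetical stand-ins for
whatever the platform or compat layer actually provides.

/*
 * Sketch: convert 64-bit words read in network (big-endian) order to
 * host order, regardless of the host's endianness.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>          /* ntohl() */

#ifndef ntohll
/* Hypothetical fallback if the platform does not provide ntohll(). */
static uint64_t ntohll(uint64_t x)
{
	if (ntohl(1) == 1)      /* big-endian host: already host order */
		return x;
	return ((uint64_t)ntohl((uint32_t)x) << 32) |
	       ntohl((uint32_t)(x >> 32));
}
#endif

/* Convert 'len' 64-bit words from network to host byte order. */
static void words_to_host(uint64_t *data, size_t len)
{
	size_t i;

	/*
	 * Runs unconditionally; on a big-endian host ntohll() is a
	 * noop and the compiler can drop the loop entirely.
	 */
	for (i = 0; i < len; i++)
		data[i] = ntohll(data[i]);
}

int main(void)
{
	/* Pretend this word arrived on disk in network byte order. */
	uint64_t buf[1] = { 0x0102030405060708ULL };

	words_to_host(buf, 1);
	printf("%016llx\n", (unsigned long long)buf[0]);
	return 0;
}

On a little-endian host the sketch prints the byte-swapped value; on a
big-endian host the loop is a noop, which is the property the commit
relies on when it removes the __BYTE_ORDER check.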
Diffstat (limited to 't/t4211/history.export')
0 files changed, 0 insertions, 0 deletions