author     Thomas Gummerer <t.gummerer@gmail.com>       2018-09-13 23:38:34 +0100
committer  Junio C Hamano <gitster@pobox.com>           2018-09-14 09:10:26 -0700
commit     e467a90c7a82a047629aafa4e97daefa3872ec35
tree       3031a073b49f5d50fd0c19c280ae5bba431c5631 /Documentation/git-reset.txt
parent     range-diff: update stale summary of --no-dual-color

linear-assignment: fix potential out of bounds memory access

Currently the 'compute_assignment()' function may read memory out of
bounds, even when used correctly, namely when there is only one
column. In that case we try to calculate the initial minimum cost
using '!j1' as the column index in the reduction transfer code. That
in turn makes us read the cost of column 1 in the cost matrix, which
does not exist, and thus results in an out-of-bounds memory read.
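
To make that concrete, here is a simplified sketch of the reduction
transfer step. The COST() layout, the helper name, the column
potential array 'v', and the parameter names are illustrative
assumptions, not code copied from linear-assignment.c:

    #define COST(j, i) cost[(j) + column_count * (i)]  /* assumed layout */

    static void reduction_transfer_sketch(int column_count, int row_count,
                                          int *cost, int *row2column, int *v)
    {
            for (int i = 0; i < row_count; i++) {
                    int j1 = row2column[i];      /* column assigned to row i */
                    if (j1 < 0)
                            continue;            /* skip unassigned rows */
                    /*
                     * With a single column, j1 can only be 0 here, so !j1
                     * evaluates to 1 and COST(!j1, i) indexes column 1 of
                     * the cost matrix, which does not exist: the
                     * out-of-bounds read described above.
                     */
                    int min = COST(!j1, i) - v[!j1];
                    for (int j = 1; j < column_count; j++)
                            if (j != j1 && min > COST(j, i) - v[j])
                                    min = COST(j, i) - v[j];
                    v[j1] -= min;                /* transfer the reduction */
            }
    }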

In the original paper [1], the example code initializes that minimum
cost to "infinite". We could emulate something similar by seeding the
minimum with INT_MAX, which would yield the same minimum as the
current algorithm, because we would always enter the if condition at
least once, except when we only have one column and column_count thus
equals 1.

If column_count does equal 1, the condition in the loop is always
false and we end up with a minimum of INT_MAX, which may lead to
integer overflows later in the algorithm.
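
For comparison, seeding the minimum with INT_MAX (which needs
<limits.h>) would look roughly like this inside the sketch above;
with column_count == 1 and j1 == 0 the loop body never runs:

    int min = INT_MAX;                  /* "infinite", as in the paper */
    for (int j = 0; j < column_count; j++)
            if (j != j1 && min > COST(j, i) - v[j])
                    min = COST(j, i) - v[j];  /* never reached: j == j1 == 0 */
    v[j1] -= min;   /* min is still INT_MAX, which risks signed overflow
                       in later arithmetic in the algorithm */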

For a column count of 1, however, we do not even need to run the
whole algorithm. A column count of 1 means there are no possible
assignments, so we can simply zero out the column2row and row2column
arrays and return early from the function, while keeping the
reduction transfer part of the function the same as it is currently.
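
A minimal sketch of that early return, with the function signature
paraphrased rather than taken verbatim from Git; the < 2 check also
covers an empty cost matrix, which the description above does not
discuss:

    #include <string.h>

    void compute_assignment(int column_count, int row_count, int *cost,
                            int *column2row, int *row2column)
    {
            /*
             * With fewer than two columns there are no possible
             * assignments: zero out both maps and return early, leaving
             * the rest of the algorithm, including the reduction
             * transfer, untouched.
             */
            if (column_count < 2) {
                    memset(column2row, 0, sizeof(int) * column_count);
                    memset(row2column, 0, sizeof(int) * row_count);
                    return;
            }

            /* ... Jonker-Volgenant algorithm continues unchanged ... */
    }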

Another solution would be to simply not call 'compute_assignment()'
from the range-diff code in this case. However it is better to make
'compute_assignment()' itself more robust, so future callers do not
run into this potential problem.

Note that the test only fails under valgrind on Linux, but the same
command has been reported to segfault on Mac OS.

[1]: Jonker, R., & Volgenant, A. (1987). A shortest augmenting path
     algorithm for dense and sparse linear assignment problems.
     Computing, 38(4), 325–340.

Reported-by: ryenus <ryenus@gmail.com>
Helped-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Thomas Gummerer <t.gummerer@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

Diffstat (limited to 'Documentation/git-reset.txt')
0 files changed, 0 insertions, 0 deletions