We added an fsck check in ed8b10f631 (fsck: check
.gitmodules content, 2018-05-02) as a defense against the
vulnerability from 0383bbb901 (submodule-config: verify
submodule names as paths, 2018-04-30), the idea being that
up-to-date hosting sites could protect downstream unpatched
clients that fetch from them.
As part of that defense, we reject any ".gitmodules" entry
that is not syntactically valid. The theory is that if we
cannot even parse the file, we cannot accurately check it
for vulnerabilities. And anybody with a broken .gitmodules
file would eventually want to know anyway.
But there are a few reasons this is a bad tradeoff in
practice:
- for this particular vulnerability, the client has to be
able to parse the file. So you cannot sneak an attack
through using a broken file, assuming the config parsers
for the process running fsck and the eventual victim are
functionally equivalent.
- a broken .gitmodules file is not necessarily a problem.
Our fsck check detects .gitmodules in _any_ tree, not
just at the root. And the presence of a .gitmodules file
does not necessarily mean it will be used; you'd have to
also have gitlinks in the tree. The cgit repository, for
example, has a file named .gitmodules from a
pre-submodule attempt at sharing code, but does not
actually have any gitlinks.
- when the fsck check is used to reject a push, it's often
hard to work around. The pusher may not have full control
over the destination repository (e.g., if it's on a
hosting server, they may need to contact the hosting
site's support). And the broken .gitmodules may be too
far back in history for rewriting to be feasible (again,
this is an issue for cgit).
So we're being unnecessarily restrictive without actually
improving the security in a meaningful way. It would be more
convenient to downgrade this check to "info", which means
we'd still comment on it, but not reject a push. Site admins
can already do this via config, but we should ship sensible
defaults.
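For reference, that per-message downgrade is already possible
today with something along these lines (the settable severities
are "warn" and "ignore"; "info" is what the shipped default
would become):

    # relax the check for incoming pushes only
    git config receive.fsck.gitmodulesParse warn

    # or for local "git fsck" runs
    git config fsck.gitmodulesParse ignore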
There are a few counterpoints to consider in favor of
keeping the check as an error:
- the first point above assumes that the config parsers for
the victim and the fsck process are equivalent. This is
pretty true now, but as time goes on it will become less so.
Hosting sites are likely to upgrade their version of Git,
whereas vulnerable clients will be stagnant (if they did
upgrade, they'd cease to be vulnerable!). So in theory we
may see drift over time between what two config parsers
will accept.
In practice, this is probably OK. The config format is
pretty established at this point and shouldn't change a
lot. And the farther we get from the announcement of the
vulnerability, the less interesting this extra layer of
protection becomes. I.e., it was _most_ valuable on day
0, when everybody's client was still vulnerable and
hosting sites could protect people. But as time goes on
and people upgrade, the population of vulnerable clients
becomes smaller and smaller.
- In theory this could protect us from other
vulnerabilities in the future. E.g., .gitmodules files are the
only way for a malicious repository to feed data to the
config parser, so this check could similarly protect
clients from a future (to-be-found) bug there.
But that's trading a hypothetical case for real-world
pain today. If we do find such a bug, the hosting site
would need to be updated to fix it, too. At which point
we could figure out whether it's possible to detect
_just_ the malicious case without hurting existing
broken-but-not-evil cases.
- Until recently, we hadn't made any restrictions on
.gitmodules content. So now in tightening that we're
hitting cases where certain things used to work, but
don't anymore. There's some moderate pain now. But as
time goes on, we'll see more (and more varied) cases that
will make tightening harder in the future. So there's
some argument for putting rules in place _now_, before
users grow more cases that violate them.
Again, this is trading pain now for hypothetical benefit
in the future. And if we try hard in the future to keep
our tightening to a minimum (i.e., rejecting true
maliciousness without hurting broken-but-not-evil repos),
then that reduces even the hypothetical benefit.
Considering both sets of arguments, it makes sense to loosen
this check for now.
Note that we have to tweak the test in t7415 since fsck will
no longer consider this a fatal error. But we still check
that it reports the warning, and that we don't get the
spurious error from the config code.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
If there's a parsing error we'll already report it via the
usual fsck report() function (or not, if the user has asked
to skip this object or warning type). The error message from
the config parser just adds confusion. Let's suppress it.
Note that we didn't test this case at all, so I've added
coverage in t7415. We may end up toning down or removing
this fsck check in the future. So take this test as checking
what happens now with a focus on stderr, and not any
ironclad guarantee that we must detect and report parse
failures in the future.
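Roughly, the new coverage has this shape (message wording
approximate):

    # in a repository whose .gitmodules blob cannot be parsed:
    test_must_fail git fsck 2>output &&
    grep gitmodulesParse output &&    # fsck's own message is present
    ! grep "bad config" output        # the config parser's is not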
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
"index-pack --strict" has been taught to make sure that it runs the
final object integrity checks after making the freshly indexed
packfile available to itself.
* jk/index-pack-maint:
index-pack: correct install_packed_git() args
index-pack: handle --strict checks of non-repo packs
prepare_commit_graft: treat non-repository as a noop
Commit 159e7b080b (fsck: detect gitmodules files,
2018-05-02) taught fsck to look at the content of
.gitmodules files. If the object turns out not to be a blob
at all, we just complain and punt on checking the content.
And since this was such an obvious and trivial code path, I
didn't even bother to add a test.
Except it _does_ do one non-trivial thing, which is call the
report() function, which wants us to pass a pointer to a
"struct object". Which we don't have (we have only a "struct
object_id"). So we erroneously pass a NULL object to
report(), which gets dereferenced and causes a segfault.
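For the record, the crash is easy to reproduce with plumbing
alone; something along these lines (a tree, rather than a blob,
sitting at .gitmodules) is enough:

    blob=$(echo whatever | git hash-object -w --stdin) &&
    inner=$(printf '100644 blob %s\tfile\n' "$blob" | git mktree) &&
    printf '040000 tree %s\t.gitmodules\n' "$inner" | git mktree &&
    git fsck   # used to segfault; should just complain about .gitmodules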
It seems like we could refactor report() to just take the
object_id itself. But we pass the object pointer along to
a callback function, and indeed this ends up in
builtin/fsck.c's objreport() which does want to look at
other parts of the object (like the type).
So instead, let's just use lookup_unknown_object() to get
the real "struct object", and pass that.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Early versions of the fsck .gitmodules detection code
actually required a tree to be at the root of a commit for
it to be checked for .gitmodules. What we ended up with in
159e7b080b (fsck: detect gitmodules files, 2018-05-02),
though, finds a .gitmodules file in _any_ tree (see that
commit for more discussion).
As a result, there's no need to create a commit in our
tests. Let's drop it in the name of simplicity. And since
that was the only thing referencing $tree, we can pull our
tree creation out of a command substitution.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
Commit 73c3f0f704 (index-pack: check .gitmodules files with
--strict, 2018-05-04) added a call to add_packed_git(), with
the intent that the newly-indexed objects would be available
to the process when we run fsck_finish(). But that's not
what add_packed_git() does. It only allocates the struct,
and you must install_packed_git() on the result. So that
call was effectively doing nothing (except leaking a
struct).
But wait, we passed all of the tests! Does that mean we
don't need the call at all?
For normal cases, no. When we run "index-pack --stdin"
inside a repository, we write the new pack into the object
directory. If fsck_finish() needs to access one of the new
objects, then our initial lookup will fail to find it, but
we'll follow up by running reprepare_packed_git() and
looking again. That logic was meant to handle somebody else
repacking simultaneously, but it ends up working for us
here.
But there is a case that does need this, which we were not
testing. You can run "git index-pack foo.pack" on any file,
even when it is not inside the object directory. Or you may
not even be in a repository at all! This case fails without
doing the proper install_packed_git() call.
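A minimal reproduction looks something like this (repository and
pack names invented):

    # build a small pack next to (not inside) a repository
    git init repo &&
    git -C repo commit --allow-empty -m base &&
    git -C repo rev-list --objects HEAD | git -C repo pack-objects ../tmp-pack &&
    # ...then index it from a directory that is not a repository at all
    mkdir not-a-repo &&
    cd not-a-repo &&
    git index-pack --strict ../tmp-pack-*.pack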
We can make this work by adding the install call.
Note that we should be prepared to handle add_packed_git()
failing. We can just silently ignore this case, though. If
fsck_finish() later needs the objects and they're not
available, it will complain itself. And if it doesn't
(because we were able to resolve the whole fsck in the first
pass), then it actually isn't an interesting error at all.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
We've recently forbidden .gitmodules to be a symlink in
verify_path(). And it's an easy way to circumvent our fsck
checks for .gitmodules content. So let's complain when we
see it.
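Such a tree is easy to construct with plumbing (a plain "git add"
will refuse it these days), e.g.:

    # a .gitmodules entry that is a symlink (mode 120000)
    target=$(printf 'some/other/path' | git hash-object -w --stdin) &&
    printf '120000 blob %s\t.gitmodules\n' "$target" | git mktree &&
    git fsck   # should now flag the symlinked .gitmodules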
Signed-off-by: Jeff King <peff@peff.net>
Now that the internal fsck code has all of the plumbing we
need, we can start checking incoming .gitmodules files.
Naively, it seems like we would just need to add a call to
fsck_finish() after we've processed all of the objects. And
that would be enough to cover the initial test included
here. But there are two extra bits:
1. We currently don't bother calling fsck_object() at all
for blobs, since it has traditionally been a noop. We'd
actually catch these blobs in fsck_finish() at the end,
but it's more efficient to check them when we already
have the object loaded in memory.
2. The second pass done by fsck_finish() needs to access
the objects, but we're actually indexing the pack in
this process. In theory we could give the fsck code a
special callback for accessing the in-pack data, but
it's actually quite tricky:
a. We don't have an internal efficient index mapping
oids to packfile offsets. We only generate it on
the fly as part of writing out the .idx file.
b. We'd still have to reconstruct deltas, which means
we'd basically have to replicate all of the
reading logic in packfile.c.
Instead, let's avoid running fsck_finish() until after
we've written out the .idx file, and then just add it
to our internal packed_git list.
This does mean that the objects are "in the repository"
before we finish our fsck checks. But unpack-objects
already exhibits this same behavior, and it's an
acceptable tradeoff here for the same reason: the
quarantine mechanism means that pushes will be
fully protected.
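On the receiving side all of this hangs off the existing knob,
e.g.:

    # on the server: incoming packs then go through "index-pack
    # --strict" (or "unpack-objects --strict"), and a bad
    # .gitmodules rejects the push while its objects are still in
    # the quarantine directory
    git config receive.fsckObjects true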
In addition to a basic push test in t7415, we add a sneaky
pack that reverses the usual object order in the pack,
requiring that index-pack access the tree and blob during
the "finish" step.
This already works for unpack-objects (since it will have
written out loose objects), but we'll check it with this
sneaky pack for good measure.
Signed-off-by: Jeff King <peff@peff.net>
As with the previous commit, we must call fsck's "finish"
function in order to catch any queued objects for
.gitmodules checks.
This second pass will be able to access any incoming
objects, because we will have exploded them to loose objects
by now.
This isn't quite ideal, because it means that bad objects
may have been written to the object database (and a
subsequent operation could then reference them, even if the
other side doesn't send the objects again). However, this is
sufficient when used with receive.fsckObjects, since those
loose objects will all be placed in a temporary quarantine
area that will get wiped if we find any problems.
Signed-off-by: Jeff King <peff@peff.net>
Now that the internal fsck code is capable of checking
.gitmodules files, we just need to teach its callers to use
the "finish" function to check any queued objects.
With this, we can now catch the malicious case in t7415 with
git-fsck.
Signed-off-by: Jeff King <peff@peff.net>
Submodule "names" come from the untrusted .gitmodules file,
but we blindly append them to $GIT_DIR/modules to create our
on-disk repo paths. This means you can do bad things by
putting "../" into the name (among other things).
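For example, a hostile repository could ship a .gitmodules along
these lines (name and URL invented for illustration):

    [submodule "../../../../tmp/evil"]
        path = sub
        url = https://example.com/evil.git

Cloning it recursively would then use
"$GIT_DIR/modules/../../../../tmp/evil" as the submodule's
repository directory, i.e. a path entirely outside of $GIT_DIR.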
Let's sanity-check these names to avoid building a path that
can be exploited. There are two main decisions:
1. What should the allowed syntax be?
It's tempting to reuse verify_path(), since submodule
names typically come from in-repo paths. But there are
two reasons not to:
a. It's technically more strict than what we need, as
we really care only about breaking out of the
$GIT_DIR/modules/ hierarchy. E.g., having a
submodule named "foo/.git" isn't actually
dangerous, and it's possible that somebody has
manually given such a funny name.
b. Since we'll eventually use this checking logic in
fsck to prevent downstream repositories, it should
be consistent across platforms. Because
verify_path() relies on is_dir_sep(), it wouldn't
block "foo\..\bar" on a non-Windows machine.
2. Where should we enforce it? These days most of the
.gitmodules reads go through submodule-config.c, so
I've put it there in the reading step. That should
cover all of the C code.
We also construct the name for "git submodule add"
inside the git-submodule.sh script. This is probably
not a big deal for security since the name is coming
from the user anyway, but it would be polite to remind
them if the name they pick is invalid (and we need to
expose the name-checker to the shell anyway for our
test scripts).
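The shell-visible checker then ends up looking something like
this (exact helper name and interface aside; success or failure
is conveyed by the exit status):

    # a name must not contain a ".." path component; both "/" and
    # "\" count as separators, on every platform
    git submodule--helper check-name "foo/.git"     # ok: odd, but harmless
    git submodule--helper check-name "../escape"    # rejected
    git submodule--helper check-name 'evil\..\bar'  # rejected, even on non-Windows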
This patch issues a warning when reading .gitmodules
and just ignores the related config entry completely.
This will generally end up producing a sensible error,
as it works the same as a .gitmodules file which is
missing a submodule entry (so "submodule update" will
barf, but "git clone --recurse-submodules" will print
an error but not abort the clone).
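So on a malicious repository the user sees roughly this (warning
text paraphrased):

    git clone --recurse-submodules https://example.com/evil.git
    # warning: ignoring suspicious submodule name: ../../../../tmp/evil
    # ...followed by an error about the now-unconfigured submodule,
    # but the clone of the superproject itself still completes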
There is one minor oddity, which is that we print the
warning once per malformed config key (since that's how
the config subsystem gives us the entries). So in the
new test, for example, the user would see three
warnings. That's OK, since the intent is that this case
should never come up outside of malicious repositories
(and then it might even benefit the user to see the
message multiple times).
Credit for finding this vulnerability and the proof of
concept from which the test script was adapted goes to
Etienne Stalmans.
Signed-off-by: Jeff King <peff@peff.net>