Age | Commit message | Author | Files | Lines |
|
* maint-2.23:
Git 2.23.4
Git 2.22.5
Git 2.21.4
Git 2.20.5
Git 2.19.6
Git 2.18.5
Git 2.17.6
unpack_trees(): start with a fresh lstat cache
run-command: invalidate lstat cache after a command finished
checkout: fix bug that makes checkout follow symlinks in leading path
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
* maint-2.22:
Git 2.22.5
Git 2.21.4
Git 2.20.5
Git 2.19.6
Git 2.18.5
Git 2.17.6
unpack_trees(): start with a fresh lstat cache
run-command: invalidate lstat cache after a command finished
checkout: fix bug that makes checkout follow symlinks in leading path
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
* maint-2.21:
Git 2.21.4
Git 2.20.5
Git 2.19.6
Git 2.18.5
Git 2.17.6
unpack_trees(): start with a fresh lstat cache
run-command: invalidate lstat cache after a command finished
checkout: fix bug that makes checkout follow symlinks in leading path
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
* maint-2.20:
Git 2.20.5
Git 2.19.6
Git 2.18.5
Git 2.17.6
unpack_trees(): start with a fresh lstat cache
run-command: invalidate lstat cache after a command finished
checkout: fix bug that makes checkout follow symlinks in leading path
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
* maint-2.19:
Git 2.19.6
Git 2.18.5
Git 2.17.6
unpack_trees(): start with a fresh lstat cache
run-command: invalidate lstat cache after a command finished
checkout: fix bug that makes checkout follow symlinks in leading path
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
* maint-2.18:
Git 2.18.5
Git 2.17.6
unpack_trees(): start with a fresh lstat cache
run-command: invalidate lstat cache after a command finished
checkout: fix bug that makes checkout follow symlinks in leading path
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
* maint-2.17:
Git 2.17.6
unpack_trees(): start with a fresh lstat cache
run-command: invalidate lstat cache after a command finished
checkout: fix bug that makes checkout follow symlinks in leading path
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
We really want to avoid relying on stale information.
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
In the previous commit, we intercepted calls to `rmdir()` to invalidate
the lstat cache in the successful case, so that it could no longer
claim that a directory exists where there is none.
The same situation can arise, of course, when a separate process is
spawned (most notably, this is the case in `submodule_move_head()`).
Obviously, we cannot know whether a directory was removed in that
process, therefore we must invalidate the lstat cache afterwards.
Note: in contrast to `lstat_cache_aware_rmdir()`, we invalidate the
lstat cache even in case of an error: the process might have removed a
directory and still have failed afterwards.
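A compilable toy of that rule (this is not Git's run-command.c; the
cache slot and helper names are invented for illustration) shows the
invariant: the cache is dropped after every child, success or failure.

    #include <stdio.h>
    #include <stdlib.h>

    /* stand-in for the real lstat cache: last leading path known to be good */
    static const char *cached_leading_dir;

    static void invalidate_lstat_cache(void)
    {
        cached_leading_dir = NULL;
    }

    static int run_command_toy(const char *cmd)
    {
        int status = system(cmd);

        /*
         * Invalidate even on failure: the child may have removed a
         * directory and *then* failed.
         */
        invalidate_lstat_cache();
        return status;
    }

    int main(void)
    {
        cached_leading_dir = "sub/dir";
        run_command_toy("rmdir sub/dir 2>/dev/null");
        printf("cache after child: %s\n",
               cached_leading_dir ? cached_leading_dir : "(invalidated)");
        return 0;
    }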
Co-authored-by: Matheus Tavares <matheus.bernardino@usp.br>
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
Before checking out a file, we have to confirm that all of its leading
components are real existing directories. And to reduce the number of
lstat() calls in this process, we cache the last leading path known to
contain only directories. However, when a path collision occurs (e.g.
when checking out case-sensitive files in case-insensitive file
systems), a cached path might have its file type changed on disk,
leaving the cache in an invalid state. Normally, this doesn't have
any bad consequences as we usually check out files in index order, and
therefore, by the time the cached path becomes outdated, we no longer
need it anyway (because all files in that directory would have already
been written).
But, there are some users of the checkout machinery that do not always
follow the index order. In particular: checkout-index writes the paths
in the same order that they appear on the CLI (or stdin); and the
delayed checkout feature -- used when a long-running filter process
replies with "status=delayed" -- postpones the checkout of some entries,
thus modifying the checkout order.
When we have to check out an out-of-order entry and the lstat() cache is
invalid (due to a previous path collision), checkout_entry() may end up
using the invalid data and trusting that the leading components are
real directories when, in reality, they are not. In the best case
scenario, where the directory was replaced by a regular file, the user
will get an error: "fatal: unable to create file 'foo/bar': Not a
directory". But if the directory was replaced by a symlink, checkout
could actually end up following the symlink and writing the file at a
wrong place, even outside the repository. Since delayed checkout is
affected by this bug, it could be used by an attacker to write
arbitrary files during the clone of a maliciously crafted repository.
Some candidate solutions considered were to disable the lstat() cache
during unordered checkouts or sort the entries before passing them to
the checkout machinery. But both ideas incur a performance penalty,
and neither future-proofs the code against new unordered use cases.
Instead, we now manually reset the lstat cache whenever we successfully
remove a directory. Note: we do not even check whether the removed
directory is the one the lstat cache points to, because we might face a
scenario where the paths refer to the same location but differ due to
case folding, precomposed UTF-8 issues, or the presence of `..`
components in the path. Two regression tests, with case-collisions and
utf8-collisions, are also added for both checkout-index and delayed
checkout.
Note: to make the previously mentioned clone attack infeasible, it would
be sufficient to reset the lstat cache only after the remove_subtree()
call inside checkout_entry(). This is the place where we would remove a
directory whose path collides with the path of another entry that we are
currently trying to check out (possibly a symlink). However, in the
interest of a thorough fix that does not leave Git open to
similar-but-not-identical attack vectors, we decided to intercept
all `rmdir()` calls in one fell swoop.
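A minimal sketch of that interception (the `lstat_cache_aware_rmdir()`
name matches the wrapper referred to elsewhere in this series; the
one-string cache below is a stand-in for the real cache state):

    #include <unistd.h>

    static char lstat_cache_path[4096];    /* stand-in for the real cache */

    static void invalidate_lstat_cache(void)
    {
        lstat_cache_path[0] = '\0';
    }

    int lstat_cache_aware_rmdir(const char *path)
    {
        int ret = rmdir(path);

        if (!ret)
            invalidate_lstat_cache();
        return ret;
    }

    /* every subsequent rmdir() call in the tree goes through the wrapper */
    #define rmdir lstat_cache_aware_rmdir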
This addresses CVE-2021-21300.
Co-authored-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Matheus Tavares <matheus.bernardino@usp.br>
|
|
This merges up the security fix from v2.17.5.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
This merges up the security fix from v2.17.5.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
This merges up the security fix from v2.17.5.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
This merges up the security fix from v2.17.5.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
This merges up the security fix from v2.17.5.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
This merges up the security fix from v2.17.5.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
This merges up the security fix from v2.17.5.
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
Git's URL parser interprets
    https:///example.com/repo.git
to have no host and a path of "example.com/repo.git". Curl, on the
other hand, internally redirects it to https://example.com/repo.git. As
a result, until "credential: parse URL without host as empty host, not
unset", tricking a user into fetching from such a URL would cause Git to
send credentials for another host to example.com.
Teach fsck to detect and block .gitmodules files using such a URL to
prevent sharing them with Git versions that are not yet protected.
A relative URL in a .gitmodules file could also be used to trigger this.
The relative URL resolver used for .gitmodules does not normalize
sequences of slashes and can follow ".." components out of the path part
and to the host part of a URL, meaning that such a relative URL can be
used to traverse from a https://foo.example.com/innocent superproject to
a https:///attacker.example.com/exploit submodule. Fortunately,
redundant extra slashes in .gitmodules are rare, so we can catch this by
detecting one after a leading sequence of "./" and "../" components.
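A self-contained sketch of that detection (not fsck.c verbatim; the
function name is made up): skip the leading run of "./" and "../"
components, then flag an empty component right after them.

    #include <stdio.h>
    #include <string.h>

    static int empty_component_after_dotdots(const char *url)
    {
        while (*url == '.') {
            if (!strncmp(url, "./", 2))
                url += 2;
            else if (!strncmp(url, "../", 3))
                url += 3;
            else
                break;
        }
        return *url == '/';    /* "..//host/..." style traversal */
    }

    int main(void)
    {
        /* prints 1: resolves into the host part of the base URL */
        printf("%d\n", empty_component_after_dotdots("../..//attacker.example.com/x"));
        /* prints 0: an ordinary relative submodule URL */
        printf("%d\n", empty_component_after_dotdots("../sibling/repo.git"));
        return 0;
    }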
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
|
|
Until "credential: refuse to operate when missing host or protocol",
Git's credential handling code interpreted URLs with empty scheme to
mean "give me credentials matching this host for any protocol".
Luckily libcurl does not recognize such URLs (it tries to look for a
protocol named "" and fails). Just in case that changes, let's reject
them within Git as well. This way, credential_from_url is guaranteed to
always produce a "struct credential" with protocol and host set.
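A stand-alone sketch of the kind of guard involved (the real check
lives inside credential.c's URL parser; the function name here is
illustrative):

    #include <stdio.h>
    #include <string.h>

    static int url_has_usable_scheme(const char *url)
    {
        const char *proto_end = strstr(url, "://");

        /* reject both a missing scheme and an empty one like "://host/..." */
        return proto_end && proto_end != url;
    }

    int main(void)
    {
        printf("%d\n", url_has_usable_scheme("https://example.com/repo.git")); /* 1 */
        printf("%d\n", url_has_usable_scheme("://example.com/repo.git"));      /* 0 */
        return 0;
    }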
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
libcurl permits making requests without a URL scheme specified. In
this case, it guesses the scheme from the hostname, so I can run
    git ls-remote http::ftp.example.com/path/to/repo
and it would make an FTP request.
Any user intentionally using such a URL is likely to have made a typo.
Unfortunately, credential_from_url is not able to determine the host and
protocol in order to determine appropriate credentials to send, and
until "credential: refuse to operate when missing host or protocol",
this resulted in another host's credentials being leaked to the named
host.
Teach credential_from_url_gently to consider such a URL to be invalid
so that fsck can detect and block gitmodules files with such URLs,
allowing server operators to avoid serving them to downstream users
running older versions of Git.
This also means that when such URLs are passed on the command line, Git
will print a clearer error so affected users can switch to the simpler
URL that explicitly specifies the host and protocol they intend.
One subtlety: .gitmodules files can contain relative URLs, representing
a URL relative to the URL they were cloned from. The relative URL
resolver used for .gitmodules can follow ".." components out of the path
part and past the host part of a URL, meaning that such a relative URL
can be used to traverse from a https://foo.example.com/innocent
superproject to a https::attacker.example.com/exploit submodule.
Fortunately a leading ':' in the first path component after a series of
leading './' and '../' components is unlikely to show up in other
contexts, so we can catch this by detecting that pattern.
Reported-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
|
|
When we try to initialize credential loading by URL and find that the
URL is invalid, we set all fields to NULL in order to avoid acting on
malicious input. Later, when we request credentials, we diagnose the
erroneous input:
    fatal: refusing to work with credential missing host field
This is problematic in two ways:
- The message doesn't tell the user *why* we are missing the host
field, so they can't tell from this message alone how to recover.
There can be intervening messages after the original warning of
bad input, so the user may not have the context to put two and two
together.
- The error only occurs when we actually need to get a credential. If
the URL permits anonymous access, the only encouragement the user gets
to correct their bogus URL is a quiet warning.
This is inconsistent with the check we perform in fsck, where any use
of such a URL as a submodule is an error.
When we see such a bogus URL, let's not try to be nice and continue
without helpers. Instead, die() immediately. This is simpler and
obviously safe. And there's very little chance of disrupting a normal
workflow.
It's _possible_ that somebody has a legitimate URL with a raw newline in
it. It already wouldn't work with credential helpers, so this patch
steps that up from an inconvenience to "we will refuse to work with it
at all". If such a case does exist, we should figure out a way to work
with it (especially if the newline is only in the path component, which
we normally don't even pass to helpers). But until we see a real report,
we're better off being defensive.
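A toy model of the new behavior (not credential.c verbatim; parsing is
reduced to a single check): the gentle parser reports bad input with a
negative return, and the non-gentle wrapper dies instead of continuing
with a blanked credential.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int credential_from_url_gently(const char *url)
    {
        if (!strstr(url, "://") || strchr(url, '\n'))
            return -1;    /* unusable URL */
        return 0;
    }

    static void credential_from_url(const char *url)
    {
        if (credential_from_url_gently(url) < 0) {
            fprintf(stderr, "fatal: credential url cannot be parsed: %s\n", url);
            exit(128);
        }
    }

    int main(void)
    {
        credential_from_url("https://example.com/repo.git"); /* fine */
        credential_from_url("https://example.com/\nevil");   /* dies */
        return 0;
    }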
Reported-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
In 07259e74ec1 (fsck: detect gitmodules URLs with embedded newlines,
2020-03-11), git fsck learned to check whether URLs in .gitmodules could
be understood by the credential machinery when they are handled by
git-remote-curl.
However, the check is overbroad: it checks all URLs instead of only
URLs that would be passed to git-remote-curl. In principle a git:// or
file:/// URL does not need to follow the same conventions as an http://
URL; in particular, git:// and file:// protocols are not susceptible to
issues in the credential API because they do not support attaching
credentials.
In the HTTP case, the URL in .gitmodules does not always match the URL
that would be passed to git-remote-curl and the credential machinery:
Git's URL syntax allows specifying a remote helper followed by a "::"
delimiter and a URL to be passed to it, so that
    git ls-remote http::https://example.com/repo.git
invokes git-remote-http with https://example.com/repo.git as its URL
argument. With today's checks, that distinction does not make a
difference, but for a check we are about to introduce (for empty URL
schemes) it will matter.
.gitmodules files also support relative URLs. To ensure coverage for the
https-based embedded-newline attack, URL-decode them and check directly
for embedded newlines.
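A sketch of the relative-URL side of that check; the tiny
percent-decoder below stands in for Git's own URL-decoding helper.

    #include <stdio.h>
    #include <string.h>

    static int hexval(char c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        return -1;
    }

    /* decode %XX escapes on the fly and look for an embedded newline */
    static int decoded_url_has_newline(const char *url)
    {
        for (; *url; url++) {
            char c = *url;
            if (c == '%' && hexval(url[1]) >= 0 && hexval(url[2]) >= 0) {
                c = (char)(hexval(url[1]) * 16 + hexval(url[2]));
                url += 2;
            }
            if (c == '\n')
                return 1;
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", decoded_url_has_newline("../innocent/repo.git"));         /* 0 */
        printf("%d\n", decoded_url_has_newline("../x%0ahost=evil.example.com")); /* 1 */
        return 0;
    }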
Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>
|
|
The credential helper protocol was designed to be very flexible: the
fields it takes as input are treated as a pattern, and any missing
fields are taken as wildcards. This allows unusual things like:
    echo protocol=https | git credential reject
to delete all stored https credentials (assuming the helpers themselves
treat the input that way). But when helpers are invoked automatically by
Git, this flexibility works against us. If for whatever reason we don't
have a "host" field, then we'd match _any_ host. When you're filling a
credential to send to a remote server, this is almost certainly not what
you want.
Prevent this at the layer that writes to the credential helper. Add a
check to the credential API that the host and protocol are always passed
in, and add an assertion to the credential_write function that speaks
credential helper protocol to be doubly sure.
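A sketch of the invariant (function and field names are illustrative,
and the write-time assertion is reduced to this single up-front check):

    #include <stdio.h>
    #include <stdlib.h>

    struct credential { const char *protocol, *host; };

    static void die(const char *msg)
    {
        fprintf(stderr, "fatal: %s\n", msg);
        exit(128);
    }

    /* called before any helper is consulted; the write path asserts it again */
    static void credential_require_host_and_protocol(const struct credential *c)
    {
        if (!c->protocol)
            die("refusing to work with credential missing protocol field");
        if (!c->host)
            die("refusing to work with credential missing host field");
    }

    int main(void)
    {
        struct credential c = { "https", NULL };
        credential_require_host_and_protocol(&c);    /* dies: missing host */
        return 0;
    }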
There are a few ways this can be triggered in practice:
- the "git credential" command passes along arbitrary credential
parameters it reads from stdin.
- until the previous patch, when the host field of a URL is empty, we
would leave it unset (rather than setting it to the empty string)
- a URL like "example.com/foo.git" is treated by curl as if "http://"
was present, but our parser sees it as a non-URL and leaves all
fields unset
- the recent fix for URLs with embedded newlines blanks the URL but
otherwise continues. Rather than having the desired effect of
looking up no credential at all, many helpers will return _any_
credential
Our earlier test for an embedded newline didn't catch this because it
only checked that the credential was cleared, but didn't configure an
actual helper. Configuring the "verbatim" helper in the test would show
that it is invoked (it's obviously a silly helper which doesn't look at
its input, but the point is that it shouldn't be run at all). Since
we're switching this case to die(), we don't need to bother with a
helper. We can see the new behavior just by checking that the operation
fails.
We'll add new tests covering partial input as well (these can be
triggered through various means with url-parsing, but it's simpler to
just check them directly, as we know we are covered even if the url
parser changes behavior in the future).
[jn: changed to die() instead of logging and showing a manual
username/password prompt]
Reported-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
We may feed a URL like "cert:///path/to/cert.pem" into the credential
machinery to get the key for a client-side certificate. That
credential has no hostname field, which is about to be disallowed (to
avoid confusion with protocols where a helper _would_ expect a
hostname).
This means as of the next patch, credential helpers won't work for
unlocking certs. Let's fix that by doing two things:
- when we parse a url with an empty host, set the host field to the
empty string (asking only to match stored entries with an empty
host) rather than NULL (asking to match _any_ host).
- when we build a cert:// credential by hand, similarly assign an
empty string
It's the latter that is more likely to impact real users in practice,
since it's what's used for http connections. But we don't have good
infrastructure to test it.
The url-parsing version will help anybody using git-credential in a
script, and is easy to test.
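A short sketch of the NULL-vs-empty-string distinction when building
the cert:// credential by hand (struct layout and helper name are
illustrative):

    #include <stdlib.h>
    #include <string.h>

    struct credential { char *protocol, *host, *path; };

    static void init_cert_credential(struct credential *c, const char *cert_path)
    {
        memset(c, 0, sizeof(*c));
        c->protocol = strdup("cert");
        /*
         * "" asks helpers to match only entries with an empty host;
         * leaving this NULL would have matched *any* stored host.
         */
        c->host = strdup("");
        c->path = strdup(cert_path);
    }

    int main(void)
    {
        struct credential c;
        init_cert_credential(&c, "/path/to/cert.pem");
        return 0;
    }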
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
Many of the tests in t0300 give partial inputs to git-credential,
omitting a protocol or hostname. We're checking only high-level things
like whether and how helpers are invoked at all, and we don't care about
specific hosts. However, in preparation for tightening up the rules
about when we're willing to run a helper, let's start using input that's
a bit more realistic: pretend as if http://example.com is being
examined.
This shouldn't change the point of any of the tests, but do note we have
to adjust the expected output to accommodate this (filling a credential
will repeat back the protocol/host fields to stdout, and the helper
debug messages and askpass prompt will change on stderr).
Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
We test a toy credential helper that writes "quit=1" and confirms that
we stop running other helpers. However, that helper is unrealistic in
that it does not bother to read its stdin at all.
For now we don't send any input to it, because we feed git-credential a
blank credential. But that will change in the next patch, which will
cause this test to racily fail, as git-credential will get SIGPIPE
writing to the helper rather than exiting because it was asked to.
Let's make this one-off helper more like our other sample helpers, and
have it source the "dump" script. That will read stdin, fixing the
SIGPIPE problem. But it will also write what it sees to stderr. We can
make the test more robust by checking that output, which confirms that
we do run the quit helper, don't run any other helpers, and exit for the
reason we expected.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
The credential protocol can't handle values with newlines. We already
detect and block any such URLs from being used with credential helpers,
but let's also add an fsck check to detect and block gitmodules files
with such URLs. That will let us notice the problem earlier when
transfer.fsckObjects is turned on. And in particular it will prevent bad
objects from spreading, which may protect downstream users running older
versions of Git.
We'll file this under the existing gitmodulesUrl flag, which covers URLs
with option injection. There's really no need to distinguish the exact
flaw in the URL in this context. Likewise, I've expanded the description
of t7416 to cover all types of bogus URLs.
|
|
The credential protocol can't represent newlines in values, but URLs can
embed percent-encoded newlines in various components. A previous commit
taught the low-level writing routines to die() when encountering this,
but we can be a little friendlier to the user by detecting them earlier
and handling them gracefully.
This patch teaches credential_from_url() to notice such components,
issue a warning, and blank the credential (which will generally result
in prompting the user for a username and password). We blank the whole
credential in this case. Another option would be to blank only the
invalid component. However, we're probably better off not feeding a
partially-parsed URL result to a credential helper. We don't know how a
given helper would handle it, so it is safer to err on the side of
matching nothing rather than something unexpected.
The die() call in credential_write() is _probably_ impossible to reach
after this patch. Values should end up in credential structs only by URL
parsing (which is covered here), or by reading credential protocol input
(which by definition cannot read a newline into a value). But we should
definitely keep the low-level check, as it's our final and most accurate
line of defense against protocol injection attacks. Arguably it could
become a BUG(), but it probably doesn't matter much either way.
Note that the public interface of credential_from_url() grows a little
more than we need here. We'll use the extra flexibility in a future
patch to help fsck catch these cases.
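A sketch of the per-component test (the function name mirrors the
description above but is illustrative); on a hit, the caller warns,
clears the whole credential, and reports failure.

    #include <stdio.h>
    #include <string.h>

    /* returns -1 if the already-decoded component contains a newline */
    static int check_url_component(const char *url, int quiet,
                                   const char *name, const char *value)
    {
        if (!value || !strchr(value, '\n'))
            return 0;
        if (!quiet)
            fprintf(stderr,
                    "warning: url contains a newline in its %s component: %s\n",
                    name, url);
        return -1;    /* caller blanks the whole credential */
    }

    int main(void)
    {
        /* the path decoded from ".../%0afoo" contains a raw newline */
        check_url_component("https://example.com/%0afoo", 0, "path", "/\nfoo");
        return 0;
    }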
|
|
The credential tests have a "check" function which feeds some input to
git-credential and checks the stdout and stderr. We look for exact
matches in the output. For stdout, this makes sense; the output is
the credential protocol. But for stderr, we may be showing various
diagnostic messages, or the prompts fed to the askpass program, which
could be translated. Let's mark them as such.
|
|
The credential protocol that we use to speak to helpers can't represent
values with newlines in them. This was an intentional design choice to
keep the protocol simple, since none of the values we pass should
generally have newlines.
However, if we _do_ encounter a newline in a value, we blindly transmit
it in credential_write(). Such values may break the protocol syntax, or
worse, inject new valid lines into the protocol stream.
The most likely way for a newline to end up in a credential struct is by
decoding a URL with a percent-encoded newline. However, since the bug
occurs at the moment we write the value to the protocol, we'll catch it
there. That should leave no possibility of accidentally missing a code
path that can trigger the problem.
At this level of the code we have little choice but to die(). However,
since we'd not ever expect to see this case outside of a malicious URL,
that's an acceptable outcome.
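A simplified, stand-alone sketch of that write-path guard (the function
name follows credential.c, but the body here keeps only the newline
check):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void credential_write_item(FILE *fp, const char *key, const char *value)
    {
        if (!value)
            return;
        if (strchr(value, '\n')) {
            fprintf(stderr, "fatal: credential value for %s contains newline\n", key);
            exit(128);
        }
        fprintf(fp, "%s=%s\n", key, value);
    }

    int main(void)
    {
        credential_write_item(stdout, "protocol", "https");
        credential_write_item(stdout, "host", "example.com\nevil"); /* dies */
        return 0;
    }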
Reported-by: Felix Wilhelm <fwilhelm@google.com>
|
|
With the upgrade, the library names changed from libeay32/ssleay32 to
libcrypto/libssl.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|
|
* maint-2.23: (44 commits)
Git 2.23.1
Git 2.22.2
Git 2.21.1
mingw: sh arguments need quoting in more circumstances
mingw: fix quoting of empty arguments for `sh`
mingw: use MSYS2 quoting even when spawning shell scripts
mingw: detect when MSYS2's sh is to be spawned more robustly
t7415: drop v2.20.x-specific work-around
Git 2.20.2
t7415: adjust test for dubiously-nested submodule gitdirs for v2.20.x
Git 2.19.3
Git 2.18.2
Git 2.17.3
Git 2.16.6
test-drop-caches: use `has_dos_drive_prefix()`
Git 2.15.4
Git 2.14.6
mingw: handle `subst`-ed "DOS drives"
mingw: refuse to access paths with trailing spaces or periods
mingw: refuse to access paths with illegal characters
...
|
|
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
|