Bug 1010607 - security provided by openssh seems questionable
Status: CLOSED ERRATA
Product: Fedora
Classification: Fedora
Component: openssh
Version: 19
Assigned To: Petr Lautrbach
QA Contact: Fedora Extras Quality Assurance

Reported: 2013-09-21 21:35 EDT by Peter Backes
Modified: 2015-02-19 04:24 EST (History)
CC: 24 users

Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-02-18 06:14:04 EST

Description Peter Backes 2013-09-21 21:35:28 EDT
Description of problem:

First, let me stress that I am not an expert in cryptography.

After the revelations of the Snowden affair, I tried to do an audit of the cryptography provided by openssh, at a basic level that I think is accessible to me.

I noticed the following issues:

1. It is fairly hard to actually find out the exact cryptographic algorithms and their parameters used for a particular ssh session.  ssh -vvvv provides some hints, but you seem to need to already know what's going on to decipher the cryptic debug messages. This should be made a lot easier and more transparent. It should state simply and clearly, for example, that it is doing a Diffie-Hellman key exchange with parameters such and such.

2. For legal reasons, Fedora's openssh comes without elliptic curve cryptography (ECC) support, see bug 319901. It has been argued that this is a bad thing, since ECC provides high security efficiently. It has also been argued that this might be a good thing, since constants used in ECC might contain an NSA backdoor (see also http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance). One way or another, the absence of any *option* to use ECC can't be a good thing.

3. However, with ECC disabled, openssh defaults to discrete logarithm Diffie-Hellman (DH) for key exchange. This method uses dh.c:dh_estimate() to compute the DH group size parameter:

/*
 * Estimates the group order for a Diffie-Hellman group that has an
 * attack complexity approximately the same as O(2**bits).  Estimate
 * with:  O(exp(1.9223 * (ln q)^(1/3) (ln ln q)^(2/3)))
 */

int
dh_estimate(int bits)
{

        if (bits <= 128)
                return (1024);  /* O(2**86) */
        if (bits <= 192)
                return (2048);  /* O(2**116) */
        return (4096);          /* O(2**156) */
}


Here the parameter "bits" is max(key size, block size, iv size, mac size), see kex.c:kex_choose_conf(), in the loop before "kex->we_need = need", also note the multiplication by 8 (because we_need is measured in bytes) in the call "nbits = dh_estimate(kex->we_need * 8)" in kexgexc.c:kexgex_client().

Now I do not have the slightest idea what the author of this piece of code is trying to say with the comments (what is q supposed to be, and what is its relation to the only parameter "bits"? Where do the numbers 86, 116, 156 come from?). However, I understand that a block cipher with a 128 bit key gives us a 1024 bit group size.

This is not in line with standard advice about cryptographic key lengths, which states that a 128 bit block cipher should be used with a group size of 3072 bits (192 bits with a 7680 bit group size, 256 bits with a 15360 bit group size), see http://www.keylength.com/en/compare/ (NIST; others are even more conservative). In fact, while today it is practically impossible to break a 128 bit block cipher, it is considered fairly realistic to break a 1024 bit RSA, DSA or DH key.

I ask for a second opinion on my analysis, please. If I am right, dh_estimate() should be changed as follows:

/*
 * Estimates the group order for a Diffie-Hellman group that has an
 * attack complexity approximately the same as O(2**bits).  Estimates
 * are from "Recommendation for Key Management,"
 * Special Publication 800-57 Part 1 Rev. 3, NIST, 07/2012.
 */
int
dh_estimate(int bits)
{

        if (bits <= 128)
                return (3072);
        if (bits <= 192)
                return (7680);
        return (15360);
}

Note that dh.c contains the definition

#define DH_GRP_MAX      8192

which is less than 15360, so this probably has to be changed as well, and possibly other places have to be changed as a consequence (again, there is no useful explanation of this definition and its interdependencies or consequences in the source code). If this cannot be done (perhaps the maximum is mandated by some part of the ssh protocol, or something), "return (4096);" should be replaced by "return (8192);" instead.
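If the DH_GRP_MAX cap has to stay, a clamped variant of the proposed change could look like the following (a sketch only, not a tested patch; DH_GRP_MAX as defined in dh.c):

```c
/* Sketch only (not a tested patch): SP 800-57 group sizes,
 * clamped to the existing DH_GRP_MAX limit from dh.c. */
#define DH_GRP_MAX      8192

int
dh_estimate(int bits)
{
        int group;

        if (bits <= 128)
                group = 3072;           /* 128 bit symmetric strength */
        else if (bits <= 192)
                group = 7680;           /* 192 bit symmetric strength */
        else
                group = 15360;          /* 256 bit symmetric strength */
        return (group > DH_GRP_MAX ? DH_GRP_MAX : group);
}
```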

Version-Release number of selected component (if applicable):
openssh-6.2p2-5.fc19.i686

How reproducible:
always

Steps to Reproduce:
1. ssh -vvvv localhost

Actual results:
...
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
...

Expected results:
...
debug1: kexgex_client: Using discrete logarithm Diffie-Hellman key exchange with 256 bit private key size and 3072 bit group size (we_need*8 = 128)
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<16384) sent
...
Comment 1 Robert Scheck 2013-09-23 10:13:13 EDT
Peter, thank you for opening this. I personally have the same feelings, if anything even more for RHEL than for Fedora, and thus opened case 00947783 in the Red Hat customer portal to draw additional attention to this.
Comment 2 Vincent Danen 2013-09-26 13:25:56 EDT
I think this kind of report should be made to upstream openssh.  I don't know much about this cryptographic stuff, but did find the following:

http://www.openssh.org/txt/rfc4419.txt

In particular, in section 3 it indicates:

"
   Servers and clients SHOULD support groups with a modulus length of k
   bits, where 1024 <= k <= 8192.  The recommended values for min and
   max are 1024 and 8192, respectively.

   Either side MUST NOT send or accept e or f values that are not in the
   range [1, p-1].  If this condition is violated, the key exchange
   fails.  To prevent confinement attacks, they MUST accept the shared
   secret K only if 1 < K < p - 1.

   The server should return the smallest group it knows that is larger
   than the size the client requested.  If the server does not know a
   group that is larger than the client request, then it SHOULD return
   the largest group it knows.  In all cases, the size of the returned
   group SHOULD be at least 1024 bits.
"

Did you read the RFC?  It has a lot that is over my head, but it might help you understand this.
Comment 3 Peter Backes 2013-09-26 22:00:36 EDT
(In reply to Vincent Danen from comment #2)
> I think this kind of report should be made to upstream openssh.  I don't
> know much about this cryptograhic stuff, but did find the following:
> 
> http://www.openssh.org/txt/rfc4419.txt
> 
> In particular, in section 3 it indicates:
> 
> "..."
> 
> Did you read the RFC?  It has a lot that is over my head, but it might help
> you understand this.

I notified openssh.com about this bug, and I got essentially the same reply as yours, saying the cryptographic issue has to be discussed by others, but pointing to the RFC and suggesting to read it. 

They point to a workaround: On the server, use a text editor to delete the smaller groups from /etc/ssh/moduli.

I read that passage of the RFC very carefully and it seems not to contradict what I say. It does not even mention the choice of modulus length (aka group size, aka group order) given the choice of block cipher key size. There is some discussion of a different and related quantity, the size of private exponents, in section 6.2, saying it should be "at least twice as long as the key material that is generated from the shared secret". I can agree with this and it seems to be done right by openssh.

The statement "Servers and clients SHOULD support groups with a modulus length of k bits, where 1024 <= k <= 8192. The recommended values for min and max are 1024 and 8192, respectively." is pretty unclear. What are the "full implications" here (as per the RFC2119 definition of "SHOULD")? The RFC lacks a rationale for choosing max = 8192 and there are no hints about the implications of making it larger than suggested.

Something else I noticed: The RFC does not discuss the implications of its use of a hash function ("The hash H is computed as the HASH hash"), especially how the cryptographic strength of that function relates to the choice of group size and cipher key size. Apparently SHA1 and SHA256 are supported (section 4). It is commonly assumed that the cryptographic strength of the SHA1 (160 bit) hash is about the same as 1024 bit DH/DSA/RSA and 80 bit symmetric cipher size (for SHA256 it's 3072 and 128 bits, respectively). Given this, one might get the impression that while a range "1024 <= k <= 8192" of values is possible, the choice needs to be tailored to the hash, and, e.g., using SHA1 with k > 1024 doesn't make things any better. But I am not sure about this, because the hash is an extremely short-term value, used only to authenticate the key exchange, and one might argue that thus any attack on the hash has to break it in a very short amount of time. This is just my feeling, I may very well be wrong (e.g. perhaps an attack can use precomputed values). Anyway, this seems to be something that someone with better knowledge in cryptography should have a closer look at.

BTW, I read some postings on the openssh mailing list discussing similar issues (DSA key restriction to 1024 bits), and it was my understanding that they seem to be aware that there are some weaknesses, but the recommendation is usually to simply rely on ECC--which, however, is not available in Fedora. (And NIST standard curves are rumored to contain a NSA backdoor similar to Dual_EC_DRBG, with the same rumors now being spread with respect to SHA-3, see https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3)
Comment 4 Vincent Danen 2013-09-27 18:54:30 EDT
I'm not certain on this, as I'm not a crypto expert by any means.  I do know that DSA is to be 1024 bits because of the FIPS 186-2 specification (and the only reason I know this is the manpage tells me so).

It's unfortunate you got that response from the openssh folks; I would have expected them to have a more knowledgeable answer.  I'm cc'ing Steve Grubb who is quite familiar with these sorts of things and I trust his judgement/knowledge so hopefully he will be able to look at this and perhaps provide some insight.  This is pretty far beyond anything I can usefully contribute to.
Comment 5 Peter Backes 2013-09-28 00:01:55 EDT
(In reply to Vincent Danen from comment #4)
> I'm not certain on this, as I'm not a crypto expert by any means.  I do know
> that DSA is to be 1024 bits because of the FIPS 186-2 specification (and the
> only reason I know this is the manpage tells me so).

It seems to be a little more complicated, see

https://bugzilla.mindrot.org/show_bug.cgi?id=2115
https://bugzilla.mindrot.org/show_bug.cgi?id=2109

The issue is that the SSH protocol currently demands SHA1 hashes in this context, and, according to FIPS, DSA should be used with 1024 bits when used in combination with SHA1. However, I am pretty sure that prohibiting keys with > 1024 bits is the wrong conclusion. Even if no additional security may be provided by DSA with much more than 1024 bits in combination with SHA1, it doesn't do harm, either. And it may be completely reasonable to be a little bit more careful. For example, RFC3766 recommends that 1233 bit DSA should be used for SHA1. So there is really no reason for this restriction; a mere warning when generating DSA keys > 1300 bits would be completely sufficient. One way or another, SHA1 and 1024 (or even 1300) bit DSA provide no security against a sophisticated adversary. See also https://bugzilla.redhat.com/show_bug.cgi?id=1010092

At least "The openssh-unix-dev list has a suggestion for adding ssh-rsa-sha256 and ssh-dss-sha256" (http://lists.mindrot.org/pipermail/openssh-unix-dev/2013-June/031432.html), and I think a 512 bit hash should also be added. However, even SHA2 (aka SHA-224, SHA-256, SHA-384 and SHA-512) cannot be considered appropriate cryptographic hashes anymore. A significant weakness was found in the Merkle–Damgård construction in 2004 (see slides 10 and 11 in the NIST presentation https://docs.google.com/file/d/0BzRYQSHuuMYOQXdHWkRiZXlURVE), and all hashes that make use of it (MD4, MD5, RIPE-MD, RIPE-MD160, SHA0, SHA1, SHA2) are not something to rely on for the future. SHA-3 should be used instead, and I think that Keccak without the NIST modifications, Grøstl and Skein should be provided as alternatives (just as I hope for Serpent and Twofish to be offered as alternatives for AES, which is currently the only state-of-the-art cipher offered by openssh).
Comment 6 Steve Grubb 2013-09-28 11:04:24 EDT
Just getting into this conversation...the first thing is ssh has to follow the RFC or you will have problems connecting to other systems. If the RFC needs adjusting, then IETF should be involved to ensure interoperability. But the problem with making the keys bigger is that it slows down cryptography.

Also, you can adjust cipher preferences and HMACs by configuration. We take openssh through FIPS validations to make sure it's following all the standards correctly. In the Security Policy,

http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp1792.pdf

section 9.1 we show how to select ciphers and HMACs. Current versions support SHA-2 based HMACs. Our implementation also takes environment variables that let you pick the seed size and location for openssl initialization. So, you have a lot of flexibility by configuration.

On another point, DSA keys must be greater than 1024 bits for signature generation by Dec 31, 2013. DSA keys of 1024 bits are only used for signature verification, as you don't want to be locked out of your servers or documents during the transition time. This is dictated by NIST SP 800-131a.

Regarding ECC, there is a lot of hysteria right now. The NIST curves had to satisfy a lot of requirements. One of them is efficiency. The chosen curves have the property that the primes are of the form 2^n +/- 2^m +/- 2^l ... +/- 2^0 where the number of terms is small. This allows for efficient calculation of the modular reduction without the need for an actual divide. This is not to say that there couldn't be a backdoor, but the choice had a rationale based on efficiency in implementation.

Regarding SHA3, NIST has not published the specification yet. It's due in October. I read the same set of slides that is claimed to show a weakening of the standard. The way I read it is that by reducing the size choices, implementations could meet the strength requirements in a more optimized way. Part of what they require is implementation efficiency both in software and hardware.

At this point I don't know if there's anything to do. We are bound by RFCs and there are configurable cipher choices. For anyone needing extra security, you can set up a VPN between systems and then ssh through the tunnel. Just choose 2 different algorithms and make sure the outer connection has more strength than the inner one.
Comment 7 Peter Backes 2013-09-28 12:27:13 EDT
(In reply to Steve Grubb from comment #6)
> Just getting into this conversation...the first thing is ssh has to follow
> the RFC or you will have problems connecting to other systems. If the RFC
> needs adjusting, then IETF should be involved to ensure interoperability.

I don't see any need to adjust any RFC. Do you see a need? If so, what exactly do you think needs to be changed?

> But the problem with making the keys bigger is that it slows down
> cryptography.

That statement does not hold without strong reservation. RSA key generation time complexity is superlinear with increasing key size. However, I was talking about discrete logarithm Diffie-Hellman (DLDH), and I have yet to see any time issues for increasing bit lengths. Do you see time issues? Even if there are time issues, that doesn't justify significantly weakening the cryptography. Users can choose a weaker cipher if they want more speed, without fooling themselves (into believing they can have 1024 bit DLDH speed and 128 bit AES strength at the same time)!

> Also, you can adjust cipher preferences and HMAC's by configuration. We take
> openssh through FIPS validations to make sure its following all the
> standards correctly In the Security Policy,
> 
> http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp1792.pdf
> 
> section 9.1 we show how to select ciphers and HMAC. Current versions support
> sha2 based HMACs. Our implementation also takes environmental variables that
> let you pick the seed size and location for openssl initialization. So, you
> have a lot of flexibility by configuration.

That may be true, but it wasn't my point. My point was that current Fedora versions of openssh

- seem not to follow the NIST's "Recommendation for Key Management" when it comes to DLDH kex group size vs. cipher key size

- do not support any state-of-the-art cipher other than AES

- do not support any of the SHA-3 finalists

- do not support any hash other than SHA1 for RSA/DSA authentication.

> On another point, DSA keys must be greater than 1024 bits for signature
> generation by Dec 31, 2013. DSA keys of 1024 bits are only used for
> signature verification as you don't want to be locked out of you servers or
> documents during the transition time. This is dictated by NIST SP 800-131a.

I honestly don't understand that paragraph. It is not prudent to use 1024 bit RSA/DSA/DLDH keys these days, and hasn't been for quite some time. 1024 bit DLDH has never made any sense in combination with 128 bit ciphers such as AES.  I don't think these statements contradict any standard. Do you see any such contradiction?

> Regarding ECC, there is a lot of hysteria right now. The NIST curves had to
> satisfy a lot of requirements. One of them is efficiency. The chosen curves
> have the property where the primes of the form 2^n +/- 2^m+/-2^l... +/- 2^0
> where the number of terms are small. This allows for efficient calculation
> of the modular reduction without the need for an actual divide. This is not
> to say that there couldn't be a backdoor, but the choice had a rationale
> based on efficiency in implementation.

There are apparently constants in the standard for which no rationale or other reasonable explanation is provided. I am not against offering the NIST curves, but DJB's Curve25519 should be offered as an alternative.

> Regarding SHA3, NIST has not published the specification yet. Its due in
> October. I read the same set of slides that is being claimed as to a
> weakening of the standard. The way I read it is that by reducing the size
> choices, implementations could meet the strength requirements with a more
> optimized implementation by removing some choices. Part of what they require
> is implementation efficiency both in software and hardware.

That is not an argument against offering the original Keccak, plus other finalists, as a choice to the user, which is what I proposed.

> At this point I don't know if there's anything to do. We are bound by RFC's
> and there are configurable cipher choices. For anyone needing extra
> security, you can setup a VPN between systems and then ssh through the
> tunnel. Just choose 2 different algorithms and make sure the outer has more
> strength than the inner connection.

Using a VPN with stronger encryption is not a reasonable alternative to fixing openssh. I don't think this needs to be elaborated.

If you think anything I say contradicts any RFC: Where exactly do you think is that contradiction?

It is a fact that Fedora doesn't support ECC. It seems also a fact that Fedora's openssh, lacking ECC support, falls back to DLDH. Further, it seems to be a fact that under these circumstances, openssh violates NIST SP 800-57 Part 1 Rev. 3, by using a 1024 bit DLDH group size for 128 bit ciphers. Mutatis mutandis the same holds for 192 and 256 bit ciphers. I proposed changes to dh_estimate(). What do you think about these changes?
Comment 8 Stephan Mueller 2013-09-30 03:25:35 EDT
(In reply to Peter Backes from comment #0)

> 
> 3. However, with ECC disabled, openssh defaults to discrete logarithm
> Diffie-Hellman (DH) for key exchange. This method uses dh.c:dh_estimate() to
> compute the group size DH parameter:
> 
> /*
>  * Estimates the group order for a Diffie-Hellman group that has an
>  * attack complexity approximately the same as O(2**bits).  Estimate
>  * with:  O(exp(1.9223 * (ln q)^(1/3) (ln ln q)^(2/3)))
>  */
> 
> int
> dh_estimate(int bits)
> {
> 
>         if (bits <= 128)
>                 return (1024);  /* O(2**86) */
>         if (bits <= 192)
>                 return (2048);  /* O(2**116) */
>         return (4096);          /* O(2**156) */
> }
> 
> 
> Here the parameter "bits" is max(key size, block size, iv size, mac size),
> see kex.c:kex_choose_conf(), in the loop before "kex->we_need = need", also
> note the multiplication by 8 (because we_need is measured in bytes) in the
> call "nbits = dh_estimate(kex->we_need * 8)" in kexgexc.c:kexgex_client().
> 
> Now I do not have the slightest idea what the author of this piece of code
> is trying to say with the comments (what is q supposed to be, and its
> relation to the only parameter "bits"? Where do the number 86, 116, 156 come
> from?) However, I understand that a block cipher with 128 bit key size gives
> us a 1024 group size.
> 
> This is not in line with standard advice about cryptographic key lengths,
> which states that a 128 bit block cipher should be used with a group size of
> 3072 bits (192 bits with 7680 bit group size, 256 bits with 15360 bit group
> size), see http://www.keylength.com/en/compare/ (NIST; others are even more
> conservative). In fact, while today it is practically impossible to break a 128
> bit block cipher, it is considered fairly realistic to break a 1024 bit RSA,
> DSA or DH key.

That is an interesting topic that you bring up. But that topic is already well handled at least when it comes to FIPS 140-2.

Before we start, the numbers that are returned by dh_estimate define the size of the p value of the DH key agreement. DH is a finite field cryptography mechanism where the key size (value p and q -- in OpenSSL, the size of q is derived from the size of p as follows: 2048 > p >= 1024 --> q = 160; p >= 2048 -> q = 256) defines the strength of the mechanism.

Moreover, the calculation in the given function depends on the Oakley group configured for the key exchange in OpenSSH (see man sshd_config: KexAlgorithms). The following applies:

diffie-hellman-group1-sha1 defined by RFC2409 section 6.2: p=1024, SHA-1

diffie-hellman-group14-sha1 defined by RFC3526 section 6.2: p=2048 SHA-1

diffie-hellman-group-exchange-sha1 uses /etc/ssh/moduli (or diffie-hellman-group14-sha1 if no suitable moduli are found): /etc/ssh/moduli sizes are 8192 >= p >= 1024, SHA-1

diffie-hellman-group-exchange-sha256 uses /etc/ssh/moduli (or diffie-hellman-group14 if no suitable moduli are found): /etc/ssh/moduli sizes are 8192 >= p >= 1024, SHA-256

(I am skipping the ECC configs)
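To restrict the exchange to the stronger of these, KexAlgorithms can be set accordingly (a sketch; the exact list depends on what your peers support):

```
# /etc/ssh/sshd_config (sketch): prefer the SHA-256 group exchange,
# falling back to the fixed 2048-bit group 14
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
```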

It is absolutely true what you say that the security strength of DH when measured in terms of bits is not equal to the size of p and q. In FIPS 140-2, the "implementation guidance" provided in http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf defines in section 7.5 a very important comparison table:

p=1024, q=160 --> 80 bits of security
p=2048, q=256 --> 128 bits of security

(to get 256 bits, you would need p=15360)

The key now is that the security strength of DH is *smaller* than the security strength of the agreed-on key (note: the symmetric cipher security strength is equal to the bit size of their keys: AES128 has 128 bit security, AES256 has 256 bits of security).

That said, in principle DH with the chosen parameters is too weak to communicate strong symmetric keys. But that fact is known to NIST and crypto standards. That is the sole reason why in FIPS 140-2 certificates (and the respective security policy documents) there is a caveat added to DH indicating its security strength.

Moreover, that issue you pointed out is not limited to DH, it affects all asymmetric ciphers (e.g. RSA key wrapping in TLS, DH key agreement in TLS) as well as random numbers! For example, /dev/urandom is governed by a whitening function of SHA-1 that is folded in half. That means, in the absence of hardware entropy (worst case), the output of /dev/urandom has a cryptographic strength of 80 bits only. Now, that output is used to seed a deterministic RNG that you use to generate, say, AES keys. But these keys can *never ever* be stronger than 80 bits of security! Hence, the FIPS 140-2 certificate specifies this as a caveat as well.

As this is the current level of cryptography, we have to handle it accordingly. The reason is that for finite field cryptography (and integer factorization cryptography like RSA), the computing requirements are tremendous compared to symmetric algorithms. That means handling a 15360 bit key will take you minutes.

Just try it: generate a 15360 bit RSA key and ensure that the random number generator is not the limiting factor!

> 
> I ask for a second opinion on my analysis, please. If I am right,
> dh_estimate() should be changed as follows:
> 
> /*
>  * Estimates the group order for a Diffie-Hellman group that has an
>  * attack complexity approximately the same as O(2**bits).  Estimates
>  * are from "Recommendation for Key Management,"
>  * Special Publication 800-57 Part 1 Rev. 3, NIST, 07/2012.
>  */
> int
> dh_estimate(int bits)
> {
> 
>         if (bits <= 128)
>                 return (3072);
>         if (bits <= 192)
>                 return (7680);
>         return (15360);

Try that one: you will not like the result -- it will take a real noticeable time to connect on even the fastest machines.

> }
> 
> Note that dh.c contains the definition
> 
> #define DH_GRP_MAX      8192

Same here, I would not change that unless you are willing to feel the brunt of users that try to connect and see nothing happening for a long time due to computing resources.
Comment 9 Stephan Mueller 2013-09-30 04:07:32 EDT
(In reply to Peter Backes from comment #0)

>  * Special Publication 800-57 Part 1 Rev. 3, NIST, 07/2012.

One more thing. You are referring to that SP which is backed by the FIPS 140-2 IG section 7.5 I referenced above.

However, NIST also published their cipher transitioning guidance in SP800-131A, which is mandatory for all ciphers to be used in the US government. See chapter 5 there.

It explicitly allows DH with p=2048 and q>224 past December 2013 (to read the table there, be aware that SSHv2 is a protocol that is defined with SP800-135 and DH is defined with SP800-56A --> apply the first row in the given table).
Comment 10 Stephan Mueller 2013-09-30 04:18:56 EDT
(In reply to Stephan Mueller from comment #9)
> (In reply to Peter Backes from comment #0)
> 
> >  * Special Publication 800-57 Part 1 Rev. 3, NIST, 07/2012.
> 
> One more thing. You are referring to that SP which is backed by the FIPS
> 140-2 IG section 7.5 I referenced above.
> 
> However, NIST also published their cipher transitioning guidance in
> SP800-131A, which is mandatory for all ciphers to be used in the US
> government. See chapter 5 there.


And even one more thing: SP800-131A defines the minimum allowed security strength of any protocol. Currently that stands at 80 bits. That is the sole reason why DH p=1024 or DSA p=1024 or RSA 1024 is allowed, even though they fall way short of providing the required security strength.

Starting with 2014, that limit is bumped to 112 bits. Again, that is the reason why DH p=2048 and q=224 would still be allowed (but DSA 1024, RSA 1024 or lower DH key sizes are not).

This requirement also applies to the entropy injected into deterministic random number generators.

So, to comply with the new size requirement, all you have to do is to configure KexAlgorithms accordingly based on the strength statement I gave above for the Oakley groups.
Comment 11 Peter Backes 2013-09-30 06:36:03 EDT
(In reply to Stephan Mueller from comment #8)
> Before we start, the numbers that are returned by dh_estimate define the
> size of the p value of the DH key agreement. DH is a finite field
> cryptography mechanism where the key size (value p and q -- in OpenSSL, the
> size of q is derived from the size of p as follows: 2048 > p >= 1024 --> q =
> 160; p >= 2048 -> q = 256) defines the strength of the mechanism.

This is a bug report about openssh, not about openssl. dh_estimate() returns the Diffie-Hellman group size to be used, given the parameter "bits". There is no variable "p", nor "q". At most one could say that the function returns "p". See my paragraph "Here the parameter...".

> Moreover, the calculation in the given function depends on the Oakley group
> configured for the key exchange in OpenSSH (see man sshd_config:
> KexAlgorithms). The following applies:
> 
> diffie-hellman-group1-sha1 defined by RFC2409 section 6.2: p=1024, SHA-1
>
> diffie-hellman-group14-sha1 defined by RFC3526 section 6.2: p=2048 SHA-1
> 
> diffie-hellman-group-exchange-sha1 uses /etc/ssh/moduli (or
> diffie-hellman-group14-sha1 if no suitable moduli found): /etc/ssh/moduli
> sizes is 8192 >= p >= 1024, SHA-1

There is no mention of diffie-hellman-group1-sha1 in RFC2409. Section 6.2 contains no definition. It doesn't mention SHA-1.

RFC3526 does not contain any section 6.2. It does not mention diffie-hellman-group14-sha1.

Anyway, that's not the issue. Here, SHA-1 is obviously the weakest link, so more than 1024 bits won't make a difference.

> diffie-hellman-group-exchange-sha256 uses /etc/ssh/moduli (or
> diffie-hellman-group14 if no suitable moduli found): /etc/ssh/moduli sizes
> is 8192 >= p >= 1024 SHA-256

That is certainly true, but does not contradict what I said.

> It is absolutely true what you say that the security strength of DH when
> measured in terms of bits is not equal to the size of p and q. In FIPS
> 140-2, the "implementation guidance" provided in
> http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf
> defines in section 7.5 a very important comparison table:
> 
> p=1024, q=160 --> 80 bits of security
> p=2048, q=256 --> 128 bits of security
> 
> (to get 256 bits, you would need p=15360)

That's not true. Table 2 says Diffie-Hellman L = 3072, N = 256 are equivalent to 128 bits symmetric cipher, which is what I assumed in the bug report from the very beginning. What you say relates to table 4 which contains MINIMUM key sizes for the recommended algorithms to be used for Federal Government UNCLASSIFIED applications, with the additional note that "Between 2011 and 2030, a minimum of 112 bits of security shall be provided", which is not the case for 1024 bit Diffie-Hellman. Note again: These are MINIMUMS for UNCLASSIFIED information, not recommended values or even maximums!

> The key now is that the security strength of DH is *smaller* than the
> security strength of the agreed-on key (note: the symmetric cipher security
> strength is equal to the bit size of their keys: AES128 has 128 bit
> security, AES256 has 256 bits of security).
> 
> That said, in principle DH with the chosen parameters is too weak to
> communicate strong symmetric keys. But that fact is known to NIST and crypto
> standards. That is the sole reason why in FIPS 140-2 certificates (and the
> respecitive security policy documents) there is a caveat added to DH
> indicating its security strength.

It is quite irrelevant what minimums some standards propose. We need to have reasonable and prudent security! Even if 1024 bit Diffie-Hellman were compliant, it wouldn't be prudent, nor reasonable.

> Moreover, that issue you pointed out is not limited to DH, it affects all
> asymmetric ciphers (e.g. RSA key wrapping in TLS, DH key agreement in TLS)
> as well as random numbers! For example, /dev/urandom is governed by a
> whitening function of SHA-1 that is folded in half. That means, in the
> absence of hardware entropy (worst case), the output of /dev/urandom has a
> cryptographic strength of 80 bits only. Now, that output is used to seed a
> deterministic RNG that you use to generate, say, AES keys. But these keys can
> *never ever* be stronger than 80 bits of security! Hence, the FIPS 140-2
> certificate specifies this as a caveat as well.

That doesn't justify anything. It's shocking.

> As this is the current level of cryptography, we have to handle it
> accordingly. The reason is that for finite field cryptography (and integer
> factorization cryptography like RSA), the computing requirements are
> tremendous compared to symmetric algos. That means, handling a key of 15360
> will take you minutes.
> 
> Just try it: generate a 15360 RSA key and ensure that the random number
> generator is not the limiting factor!

What you say holds for RSA key generation only, where it is not a big issue, since we need to generate the key only once and can use it over the long term. We are discussing Diffie-Hellman here. See my paragraph beginning with "That statement does not hold without strong reservation". I already discussed the claim that speed is an issue here. I tried increasing the bits with Diffie-Hellman. I couldn't notice any speed issue. And that makes sense. That's the whole point of preferring Diffie-Hellman over ephemeral RSA keys. Generating RSA keys used to be pretty slow even for 1024 bits.

> Try that one: you will not like the result -- it will take a really noticeable
> time to connect on even the fastest machines.
> 
> ...
> 
> Same here, I would not change that unless you are willing to feel the brunt
> of users that try to connect and see nothing happening for a long time due
> to computing resources.

I cannot confirm this claim. It is empirically wrong, and seems theoretically wrong as well.

(In reply to Stephan Mueller from comment #9)

> However, NIST also published their cipher transitioning guidance in
> SP800-131A, which is mandatory for all ciphers to be used in the US
> government. See chapter 5 there.
> 
> It explicitly allows DH with p=2048 and q>224 past December 2013 (to read
> the table there, be aware that SSHv2 is a protocol that is defined with
> SP800-135 and DH is defined with SP800-56A --> apply the first row in the
> given table).

By default, Fedora's openssh uses a 1024-bit group with AES. Not 2048.

(In reply to Stephan Mueller from comment #10)
> And even one more thing: SP800-131A defines the minimum allowed security
> strength of any protocol. Currently that stands at 80 bits. That is the sole
> reason why DH p=1024 or DSA p=1024 or RSA 1024 is allowed, even though they
> fall way short of providing the required security strength.
> 
> Starting with 2014, that limit is now bumped to 112 bits. Again, that is the
> reason why DH p=2048 and q=224 would still be allowed (but DSA1, RSA 1024 or
> lower DH key sizes are not).
> 
> This requirement also applies to the entropy injected into deterministic
> random number generators.
> 
> So, to comply with the new size requirement, all you have to do is to
> configure KexAlgorithms accordingly based on the strength statement I gave
> above for the Oakley groups.

Please let's not discuss what the standard explicitly allows as the absolute minimum (obviously to take care of legacy software). Let's discuss what is prudent. It is prudent to use 3072-bit (slightly more doesn't hurt) Diffie-Hellman and 256-bit hashes with 128-bit ciphers such as AES.

KexAlgorithms provides no option that causes openssh to use more than 1024 bits for Diffie-Hellman. You need to use ECC if you want more security, but that isn't provided on Fedora. Or use ugly workarounds to force openssh to use more than 1024 bits, for example modifying /etc/ssh/moduli on the server, or tweaking the MACs option on the client, which may increase the Diffie-Hellman group size as a side effect.

Even if there were such a KexAlgorithms option, it wouldn't make dh_estimate() any more reasonable.

If similar issues apply to random number generators etc. that's not a justification, but calls for them to be fixed as well, obviously.
Comment 12 Stephan Mueller 2013-09-30 07:09:58 EDT
(In reply to Peter Backes from comment #11)
> (In reply to Stephan Mueller from comment #8)
> > Before we start, the numbers that are returned by dh_estimate define the
> > size of the p value of the DH key agreement. DH is a finite field
> > cryptography mechanism where the key size (value p and q -- in OpenSSL, the
> > size of q is derived from the size of p as follows: 2048 > p >= 1024 --> q =
> > 160; p >= 2048 -> q = 256) defines the strength of the mechanism.
> 
> This is a bug report about openssh, not about openssl. dh_estimate() returns
> the Diffie-Hellman group size to be used, given the parameter "bits". There
> is no variable "p", nor "q". What you could say is at most that the function
> returns "p". See my paragraph "Here the parameter...".

Of course it is about OpenSSH, but it uses OpenSSL for the heavy lifting in DH. How about looking into dh_gen_key:

        if (DH_generate_key(dh) == 0)
                fatal("DH_generate_key");
...


> 
> > Moreover, the calculation in the given function depends on the Oakley group
> > configured for the key exchange in OpenSSH (see man sshd_config:
> > KexAlgorithms). The following applies:
> > 
> > diffie-hellman-group1-sha1 defined by RFC2409 section 6.2: p=1024, SHA-1
> >
> > diffie-hellman-group14-sha1 defined by RFC3526 section 6.2: p=2048 SHA-1
> > 
> > diffie-hellman-group-exchange-sha1 uses /etc/ssh/moduli (or
> > diffie-hellman-group14-sha1 if no suitable moduli found): /etc/ssh/moduli
> > sizes is 8192 >= p >= 1024, SHA-1
> 
> There is no mention of diffie-hellman-group1-sha1 in RFC2409. Section 6.2
> contains no definition. It doesn't mention SHA-1.

Well, the names are different. But if you dare to compare the values defined in 6.2 with the respective Oakley group name used in sshd, you will find what I mentioned.

> 
> RFC3526 does not contain any section 6.2. It does not mention
> diffie-hellman-group14-sha1.

Sorry, typo, I meant chapter 3.
> 
> Anyway, that's not the issue. Here, SHA-1 is obviously the weakest link, so
> more than 1024 bits won't make a difference.
> 
> > diffie-hellman-group-exchange-sha256 uses /etc/ssh/moduli (or
> > diffie-hellman-group14 if no suitable moduli found): /etc/ssh/moduli sizes
> > is 8192 >= p >= 1024 SHA-256
> 
> That is certainly true, but does not contradict what I said.

I did not mean that as a contradiction, just a clarification.
> 
> > It is absolutely true what you say that the security strength of DH when
> > measured in terms of bits is not equal to the size of p and q. In FIPS
> > 140-2, the "implementation guidance" provided in
> > http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf
> > defines in section 7.5 a very important comparison table:
> > 
> > p=1024, q=160 --> 80 bits of security
> > p=2048, q=256 --> 128 bits of security
> > 
> > (to get 256 bits, you would need p=15360)
> 
> That's not true. Table 2 says Diffie-Hellman L = 3072, N = 256 are

yeah, typo again -- I should not type that fast. It is 112 bits. Yet that is still enough as per SP800-131A.

> equivalent to 128 bits symmetric cipher, which is what I assumed in the bug
> report from the very beginning. What you say relates to table 4 which
> contains MINIMUM key sizes for the recommended algorithms to be used for
> Federal Government UNCLASSIFIED applications, with the additional note that
> "Between 2011 and 2030, a minimum of 112 bits of security shall be
> provided", which is not the case for 1024 bit Diffie-Hellman. Note again:
> These are MINIMUMS for UNCLASSIFIED information, not recommended values or
> even maximums!

As noted, I concur in the p=1024 case that it provides 80 bits only. And that is still ok until the end of this year as per SP800-131A.
> 
> > The key now is that the security strength of DH is *smaller* than the
> > security strength of the agreed-on key (note: the symmetric cipher security
> > strength is equal to the bit size of their keys: AES128 has 128 bit
> > security, AES256 has 256 bits of security).
> > 
> > That said, in principle DH with the chosen parameters is too weak to
> > communicate strong symmetric keys. But that fact is known to NIST and crypto
> > standards. That is the sole reason why in FIPS 140-2 certificates (and the
> > respective security policy documents) there is a caveat added to DH
> > indicating its security strength.
> 
> It is quite irrelevant what minimums some standard proposes. We need to
> have reasonable and prudent security! Even if 1024 bit Diffie-Hellman were
> compliant, it wouldn't be prudent, nor reasonable.

If you think that this is the case, sure, redefine it. However, I am just explaining the standards you refer to, as applicable to NIST.
> 
> > Moreover, that issue you pointed out is not limited to DH, it affects all
> > asymmetric ciphers (e.g. RSA key wrapping in TLS, DH key agreement in TLS)
> > as well as random numbers! For example, /dev/urandom is governed by a
> > whitening function of SHA-1 that is folded in half. That means, in the
> > absence of hardware entropy (worst case), the output of /dev/urandom has a
> > cryptographic strength of 80 bits only. Now, that output is used to seed a
> > deterministic RNG that you use to generate, say, AES keys. But these keys can
> > *never ever* be stronger than 80 bits of security! Hence, the FIPS 140-2
> > certificate specifies this as a caveat as well.
> 
> That doesn't justify anything. It's shocking.

It justifies quite a bit: If you increase the strength in the processing ciphers but have limited entropy to begin with, you do not increase the strength overall.

So, while it may be worthwhile to increase the strength (which btw can be done using a configuration value to at least comply with NIST requirements), more effort should be put into the entropy source.

That is the way it is. And we all live with that currently. Yes, SHA-256 may be helpful for /dev/urandom (and to a lesser degree for /dev/random too). But it requires quite some rewriting of /dev/random. The easiest fix would simply be removing the folding of the calculated SHA-1 in /dev/random.

> 
> > Try that one: you will not like the result -- it will take a really noticeable
> > time to connect on even the fastest machines.
> > 
> > ...
> > 
> > Same here, I would not change that unless you are willing to feel the brunt
> > of users that try to connect and see nothing happening for a long time due
> > to computing resources.
> 
> I cannot confirm this claim. It is empirically wrong, and seems
> theoretically wrong as well.

Ok, I stand corrected.
> 
> (In reply to Stephan Mueller from comment #9)
> 
> > However, NIST also published their cipher transitioning guidance in
> > SP800-131A, which is mandatory for all ciphers to be used in the US
> > government. See chapter 5 there.
> > 
> > It explicitly allows DH with p=2048 and q>224 past December 2013 (to read
> > the table there, be aware that SSHv2 is a protocol that is defined with
> > SP800-135 and DH is defined with SP800-56A --> apply the first row in the
> > given table).
> 
> By default, Fedora's openssh uses a 1024-bit group with AES. Not 2048.

Yes, that is the default. Yet you can reconfigure it to something that brings you to 2048 bits by just changing sshd_config as mentioned above with the Oakley groups. When you additionally touch /etc/ssh/moduli, you can bring it to 8192 bits.
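
A sketch of the reconfiguration described here; KexAlgorithms is a real sshd_config keyword, but the algorithm list shown is illustrative:

```
# /etc/ssh/sshd_config (server side): prefer the SHA-256 group exchange
# and the 2048-bit group14; leave out the 1024-bit group1.
KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
```

The same option can be set on the client in ssh_config or via -oKexAlgorithms=...; with the group-exchange method, groups larger than 2048 bits are then picked from /etc/ssh/moduli.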
> 
> (In reply to Stephan Mueller from comment #10)
> > And even one more thing: SP800-131A defines the minimum allowed security
> > strength of any protocol. Currently that stands at 80 bits. That is the sole
> > reason why DH p=1024 or DSA p=1024 or RSA 1024 is allowed, even though they
> > fall way short of providing the required security strength.
> > 
> > Starting with 2014, that limit is now bumped to 112 bits. Again, that is the
> > reason why DH p=2048 and q=224 would still be allowed (but DSA1, RSA 1024 or
> > lower DH key sizes are not).
> > 
> > This requirement also applies to the entropy injected into deterministic
> > random number generators.
> > 
> > So, to comply with the new size requirement, all you have to do is to
> > configure KexAlgorithms accordingly based on the strength statement I gave
> > above for the Oakley groups.
> 
> Please let's not discuss what the standard explicitly allows as the absolute
> minimum (obviously to take care of legacy software). Let's discuss what is
> prudent. It is prudent to use 3072 bit (slightly more doesn't hurt)
> Diffie-Hellman and 256 bit hashes with 128 bit ciphers such as AES.

It is prudent to do so, sure. Yet it is equally prudent to not forget the RNG.

And while we are at it: the default RNG for OpenSSH in Fedora is the OpenSSL DRNG based on SHA-1. So you get only 160 bits of entropy from it. That means any AES key size higher than 128 will only deliver up to 160 bits of entropy when considering only the DRNG side and forgetting the Linux /dev/random device for the moment. When including /dev/urandom in the picture, you get 80 bits overall.

So, to make the overall strength better, the first thing you should consider is using /dev/random (although it may be questionable whether you get more than 80 bits from it in the worst case - but skip this hint for the moment).

(I actually do not even want to mention that the default in upstream OpenSSH seems to be an arc4random re-implementation which is even weaker than the OpenSSL default DRNG).
> 
> There is no option in KexAlgorithms provided that causes openssh to use >
> 1024 bits for Diffie-Hellman. You need to use ECC if you want more security,

As I mentioned above, I think you can get significantly more strength even without ECC.

> but that isn't provided on Fedora. Or use ugly workarounds to force openssh
> to use > 1024 bits, for example modifying /etc/ssh/moduli on the server, or
> tweaking the MACs option on the client, which may increase Diffie-Hellman
> group size as a side effect.
> 
> Even if there were such KeyAlgorithms, it wouldn't make dh_estimate() any
> more reasonable.
> 
> If similar issues apply to random number generators etc. that's not a
> justification, but calls for them to be fixed as well, obviously.

Sure, I am all for it. Yet, I just want to point out where the current limits are drawn for the standards you cite, such as the Special Publications.
Comment 13 Peter Backes 2013-10-01 02:30:33 EDT
(In reply to Stephan Mueller from comment #12)
> Of course it is about OpenSSH, but it uses OpenSSL for the heavy lifting in
> DH.

Yes, but all the decisions about how many bits to use seem to be made by OpenSSH and are not delegated to OpenSSL.

> Well, the names are different. But if you dare to compare the values defined
> in 6.2 with the respective Oakley group name used in sshd, you will find
> what I mentioned.

OK. What you seem to say is that using -oKexAlgorithms=diffie-hellman-group14-sha1 causes openssh to use 2048-bit Diffie-Hellman. (Any of the other choices you mention results in 1024-bit Diffie-Hellman.)

> As noted, I concur in the p=1024 case that it provides 80 bits only. And
> that is still ok until the end of this year as per SP800-131A.

Three months are left until the end of this year... And that's only to keep up with the bare minimum!

> It justifies quite a bit: If you increase the strength in the processing
> ciphers but have limited entropy to begin with, you to not increase the
> strength overall.
>
> ...
>
> It is prudent to do so, sure. Yet it is equally prudent to not forget the
> RNG.
> 
> And while we are at it. the default RNG for OpenSSH in Fedora is the OpenSSL
> DRNG based on SHA-1. So, you get only 160 bits of entropy from it. That
> means, any AES higher than 128 will only deliver up to 160 bits of entropy
> when only considering the DRNG side and forgetting the Linux /dev/random
> device for the moment. When including /dev/urandom into the picture, you get
> 80 bits overall.
> 
> So, to make the overall strength better, the first thing you should consider
> is using /dev/random (although it may be questionable whether you get more
> than 80 bits from it in the worst case - but skip this hint for the moment).
> 
> (I actually do not even want to mention that the default in upstream OpenSSH
> seems to be an arc4random re-implementation which is even weaker than the
> OpenSSL default DRNG).

Certainly. But RNGs are independent issues, given that those RNGs are not in openssh itself. Perhaps more bug reports should be opened to cover those issues as well.

> That is the way it is. And we all live with that currently. Yes, a SHA-256
> may be helpful for /dev/urandom (and to a lesser degree to /dev/random too).
> But it requires quite some re-writing of /dev/random. The easiest would
> simply be removing the folding of the calculated SHA-1 in /dev/random.

It seems questionable that there is a hard-coded limit on cryptographic strength imposed by the kernel random number generator. Shouldn't cryptographic strength be the choice of the user space process? If I understood correctly what you say, it means that if you want to use a cryptographic strength equivalent to a 256-bit symmetric cipher, you cannot use /dev/urandom and perhaps not even /dev/random, as long as the kernel doesn't use a 512-bit hash.

> Yes, that is the default. Yet, you can reconfigure it to something that
> brings you to 2048 bits by just changing sshd_config as mentioned above with
> the Oakley groups. When in addition you touch /etc/ssh/moduli, you can bring
> it to 8192.

OK, but I'd consider that to be an ugly workaround. I still don't feel any of what you say reasonably explains why dh_estimate() is the way it is, given there seem to be no real speed issues.

> yeah, typo again -- I should not type that fast. It is 112 bits. Yet that is
> still enough as per SP800-131A.
> 
> ...
> 
> Sure, I am all for it. Yet, I just want to point out where the current
> limits are drawn for the standards you cite, such as the Special
> Publications.

I'm sorry to be such a nitpicker, but they're minimums, not limits or "enough". ;)

Fedora's openssh in the default case does not provide even 112 bits, as already mentioned. I hope the resolution of this bug report will be different from making diffie-hellman-group14-sha1 top priority for KexAlgorithms in /etc/ssh/ssh_config (for which I can't say if it has other implications not understood well by anyone of us).

I'm honestly shocked a bit about the state of open source crypto. Given all this, the relevant statements from the Snowden documents really come as no surprise.
Comment 14 Alex Smirnoff 2013-10-08 06:03:42 EDT
Doesn't using AES256 provide a "magic" workaround?
Comment 15 Stephan Mueller 2013-10-08 06:09:53 EDT
(In reply to Alex Smirnoff from comment #14)
> Doesn't using AES256 provide "magic" workaround?

Well, this is one aspect. But the overall cryptographic strength of the SSH connection is the minimum of the cryptographic strengths of the following mechanisms:

- RNG/seed source for generating the different random values

- Diffie Hellman mechanism (as discussed above, the default is 80 bits, and we could bring it to some 150 bits without code change)

- the symmetric cipher

So, your proposal of AES256 implies a crypto strength of 256 bits for the symmetric ciphers. Yet, the other parts are still where they are.

I hesitate to add the MAC, as it provides integrity and not privacy.
Comment 16 Alex Smirnoff 2013-10-08 06:25:32 EDT
> So, your proposal of AES256 implies a crypto strength of 256 bits for the symmetric ciphers. Yet, the other parts are still where they are.

My initial impression was that openssh adjusts DH accordingly if you change symmetric cipher strength, am I wrong?
Comment 17 Peter Backes 2013-10-08 06:35:59 EDT
(In reply to Stephan Mueller from comment #15)

> I hesitate to add the MAC, as it provides integrity and not privacy.

Without integrity, a sophisticated attacker can subvert privacy by a man-in-the-middle attack.

MAC might be less problematic for a different reason, though: Because of its short-term character. I am not an expert, however, and I may well be wrong.

(In reply to Alex Smirnoff from comment #16)
> > So, your proposal of AES256 implies a crypto strength of 256 bits for the symmetric ciphers. Yet, the other parts are still where they are.
> 
> My initial impression was that openssh adjusts DH accordingly if you change
> symmetric cipher strength, am I wrong?

You are right. (But: you won't get 256 bits of symmetric cryptographic strength. DH will use 4096 bits, as you can see from dh_estimate(). That's somewhere between 128 and 192 bits.)
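
For reference, this is roughly how dh_estimate() read in OpenSSH's dh.c at the time (the O(2**n) comments are the upstream work-factor estimates); the argument is the symmetric key size in bits, the return value the DH group size the client requests:

```c
/* Map the needed symmetric key size (in bits) to the DH group size the
 * client will ask the server for; excerpt roughly as in OpenSSH dh.c
 * of that era. */
int
dh_estimate(int bits)
{
	if (bits <= 128)
		return (1024);	/* O(2**86) */
	if (bits <= 192)
		return (2048);	/* O(2**116) */
	return (4096);	/* O(2**156) */
}
```

So AES-128 requests only a 1024-bit group and AES-256 a 4096-bit one, which is where the numbers in this discussion come from.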

Again, note that the DH mechanism itself seems to make use of a hash function, and that hash function is SHA1 for all the DL-DH kex algorithms available on Fedora. For example diffie-hellman-group-exchange-sha1. Again, what I said above ("less problematic") may or may not apply.
Comment 18 Darren Tucker 2013-10-08 07:36:18 EDT
Hi.

(In reply to Peter Backes from comment #3)
[...]
> I notified openssh.com about this bug, and I got essentially the same reply
> as yours, saying the cryptographic issue has to be discussed by others 

I wrote the reply you're referring to.  The "others" to which I referred are the other openssh developers, who are far more crypto-savvy than I am.

> but pointing to the RFC and suggesting to read it.

That was specifically addressing your question about where the 8192-bit limit came from.

For what it's worth, we agree they should be increased, and they will be in the next release (probably the values from NIST limited by RFC4419: 3k/7k/8k).
Comment 19 Stephan Mueller 2013-10-08 08:15:55 EDT
(In reply to Peter Backes from comment #17)
> 
> You are right. (But: You won't get 256 bits symmetric cryptographic
> strength. DH will use 4096 bits, as you can see from dh_estimate(). That's
> somewhere between  128 and 192 bits.)

I am at a loss why you insist that dh_estimate does anything material to the DH strength here.

Let us dissect:

void
kexgex_client(Kex *kex)
{
...
        nbits = dh_estimate(kex->we_need * 8);
...
                packet_put_int(nbits);
...
        if ((p = BN_new()) == NULL)
                fatal("BN_new");
        packet_get_bignum2(p);
...
        dh_gen_key(dh, kex->we_need * 8);

In dh_gen_key:

void
dh_gen_key(DH *dh, int need)
{
...
               if (DH_generate_key(dh) == 0)
...

In OpenSSL:

static int generate_key(DH *dh)
        {
...
                        /* secret exponent length */
                        l = dh->length ? dh->length : BN_num_bits(dh->p)-1;

---> client receives size from the server as expected via the DH definition (see Wikipedia: p and g are sent from the server)

On the server side, the KEX is implemented by:

        kex->kex[KEX_DH_GRP1_SHA1] = kexdh_server;
        kex->kex[KEX_DH_GRP14_SHA1] = kexdh_server;
        kex->kex[KEX_DH_GEX_SHA1] = kexgex_server;
        kex->kex[KEX_DH_GEX_SHA256] = kexgex_server;
        kex->kex[KEX_ECDH_SHA2] = kexecdh_server;

Let us look into one of these funcs:

void
kexdh_server(Kex *kex)
{
...
        /* generate server DH public key */
        switch (kex->kex_type) {
        case KEX_DH_GRP1_SHA1:
                dh = dh_new_group1();
                break;
        case KEX_DH_GRP14_SHA1:
                dh = dh_new_group14();

...
        dh_gen_key(dh, kex->we_need * 8);

Looking into dh_new_group1:

DH *
dh_new_group1(void)
{
...
        return (dh_new_group_asc(gen, group1));

Looking into dh_new_group_asc

        if (BN_hex2bn(&dh->p, modulus) == 0)
                fatal("BN_hex2bn p");

==> p is defined by the Oakley Group definition and is in fact the value of the Oakley group.

The kex->we_need is only used for sanity checking in dh_gen_key!

Thus, nothing depends on the cipher except the sanity check as defined in dh_estimate; the strength is defined by the Oakley group sizes.

Thus, I question whether, when you change dh_estimate, you really change the underlying DH key sizes and thus the strength of DH.
Comment 20 Peter Backes 2013-10-08 09:29:32 EDT
(In reply to Stephan Mueller from comment #19)
> Let us look into one of these funcs:

It is insufficient to "look into one". You need to look into all of them. 
KEX_DH_GRP1_SHA1 is not even used by default.
Comment 21 Stephan Mueller 2013-10-08 09:44:12 EDT
(In reply to Peter Backes from comment #20)
> (In reply to Stephan Mueller from comment #19)
> > Let us look into one of these funcs:
> 
> It is insufficient to "look into one". You need to look into all of them. 
> KEX_DH_GRP1_SHA1 is not even used by default.

Well, I am not sure what is to be proven by this, but let's continue:

kexgex_server:

void
kexgex_server(Kex *kex)
{
...
                onbits = nbits = packet_get_int();
...
                nbits = MAX(DH_GRP_MIN, nbits);
                nbits = MIN(DH_GRP_MAX, nbits);
...
        /* Contact privileged parent */
        dh = PRIVSEP(choose_dh(min, nbits, max));

choose_dh:
...
        if ((f = fopen(_PATH_DH_MODULI, "r")) == NULL &&
            (f = fopen(_PATH_DH_PRIMES, "r")) == NULL) {
...
               if ((dhg.size > wantbits && dhg.size < best) ||
                    (dhg.size > best && best < wantbits)) {
                        best = dhg.size;
                        bestcount = 0;

--> yes, the client sends a request to the server with some sizes, but the server decides the ultimate length based on the moduli file.


Now the last one:

void
kexecdh_server(Kex *kex)
{
...
        group = EC_KEY_get0_group(server_key);
...
        /* Calculate shared_secret */
        klen = (EC_GROUP_get_degree(group) + 7) / 8;
        kbuf = xmalloc(klen);
        if (ECDH_compute_key(kbuf, klen, client_public,
            server_key, NULL) != (int)klen)
...

---> The DH size depends on the ECC curve.

So, I do not see that dh_estimate is the definition of the modulus size. It sets some boundaries for it, but not more.
Comment 22 Peter Backes 2013-10-08 10:04:03 EDT
(In reply to Stephan Mueller from comment #21)
> Well, I am not sure what is to be proven by this, but lets continue:

I am not sure either. What is the point you are trying to make?

> --> yes, the client sends a request to the server with some sizes, but the
> server decides the ultimate length based on the moduli file.

I assume the server has to reply with a group at least as big as requested...

> ---> The DH size depends on the ECC curve.

ECC is not supported on Fedora, because of patenting issues.

> So, I do not see that dh_estimate is the definition of the modulus size. It
> sets some boundaries for it, but not more.

It seems to set lower boundaries for the default method on Fedora (kex gex), see also "They point to a workaround: On the server, use a text editor to delete the smaller groups from /etc/ssh/moduli."
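
The moduli workaround can be sketched as follows, on synthetic data (on a real server you would filter /etc/ssh/moduli itself, after backing it up; field 5 of each entry holds the modulus size, e.g. 2047 for a 2048-bit group):

```shell
# Two moduli-style entries: a 1024-bit group and a 4096-bit group
# (fields: timestamp type tests trials size generator modulus-hex).
printf '%s\n' \
  '20130101000000 2 6 100 1023 2 DEADBEEF' \
  '20130101000000 2 6 100 4095 2 CAFEBABE' > moduli.sample

# Keep only groups of at least 2048 bits.
awk '$5 >= 2047' moduli.sample > moduli.strong
cat moduli.strong
```

With the smaller entries gone, the group-exchange server can no longer pick a 1024-bit group, whatever the client asks for.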
Comment 23 Stephan Mueller 2013-10-08 10:12:52 EDT
(In reply to Peter Backes from comment #22)
> (In reply to Stephan Mueller from comment #21)
> > Well, I am not sure what is to be proven by this, but lets continue:
> 
> I am not sure either. What is the point you are trying to make?

My point is that it is neither dh_estimate nor the symmetric cipher configuration that defines the strength of the DH exchange. And that is what this entire bug report is about.

The DH strength rests on the Oakley group, the moduli file or the ECC curve parameters.
> 
> > --> yes, the client sends a request to the server with some sizes, but the
> > server decides the ultimate length based on the moduli file.
> 
> I assume the server has to reply with a group at least as big as requested...


Sure, it replies with the value p that is derived from the moduli file where the lower boundary is defined by the dh_estimate specification.
> 
> > ---> The DH size depends on the ECC curve.
> 
> ECC is not supported on Fedora, because of patenting issues.

First you wanted me to elaborate on all, now you discard things?
> 
> > So, I do not see that dh_estimate is the definition of the modulus size. It
> > sets some boundaries for it, but not more.
> 
> It seems to set lower boundaries for the default method on Fedora (kex gex),
> see also "They point to a workaround: On the server, use a text editor to
> delete the smaller groups from /etc/ssh/moduli."

Bingo. Any cipher configuration or alteration of dh_estimate may be good, but it is by no means sufficient, nor does it address the heart of your reported concern.
Comment 24 Peter Backes 2013-10-08 11:09:06 EDT
(In reply to Stephan Mueller from comment #23)
> My point is that it is neither dh_estimate nor the symmetric cipher
> configuration that defines the strength of the DH exchange. And that is the
> entire bug report all about.
>
> The DH strength rests on the Oakley group, the moduli file or the ECC curve
> parameters.

I don't understand what you mean by "rests". Let's not struggle about words, like whether something is "defined" by, or "rests" on something else. Let's discuss facts. The facts are: With Fedora's default kex gex method, the client and the server agree on some DH kex strength. If the client asks for weak kex strength, it risks actually getting it (and in fact will actually get it, if you don't mess with the moduli file on the server). What strength the client asks for is determined by dh_estimate(). Currently, the strength determined by dh_estimate() is sub-standard. Hence, in effect, sub-standard strength will actually be used.

> Sure, it replies with the value p that is derived from the moduli file where
> the lower boundary is defined by the dh_estimate specification.

This is expressing in technical language what I was saying.

> First you wanted me to elaborate on all, now you discard things?

Of course one can discard things, but only AFTER looking at them and having a reason for doing so. You said you need to look at only one to make your proof and I disagreed. By not looking at all of the methods, you didn't consider the one that is actually used by default, and for which your argument happens not to hold.

> Bingo. But any cipher configuration or alteration of dh_estimate may be
> good, but is by no means sufficient nor addresses the heart of your reported
> concern.

As far as I can see, it does. Of course there might be other issues (random number generators, weak hashes, etc), but I don't see any remaining problem with the kex gex group size negotiation (except the upper bound of 8192). If you think there is an issue, please explain. From what you wrote so far, I don't see it.
Comment 25 Stephan Mueller 2013-10-08 11:30:20 EDT
(In reply to Peter Backes from comment #24)
> 
> I don't understand what you mean by "rests". Let's not struggle about words,
> like whether something is "defined" by, or "rests" on something else. Let's
> discuss facts. The facts are: With Fedora's default kex gex method, the
> client and the server agree on some DH kex strength. If the client asks for

Correct.

> weak kex strength, it risks actually getting it (and in fact will actually

... in the default config of the SSH server ...

> get it, if you don't mess with the module file on the server). What strength
> the client asks for is determined by dh_estimate(). Currently, the strength

Yes, that is correct. But the client may even get a weaker DH than dh_estimate would calculate.

Assume you configure your SSH server with 

KexAlgorithms diffie-hellman-group-exchange-sha1

your DH strength is only ever 80 bits (due to p=1024 as the chosen p value).

Thus, dh_estimate calculates a desire of the client, but not more.

> determined by dh_estimate() is sub-standard. Hence, in effect, sub-standard
> strength will actually be used.

That is not necessarily the case -- and that is my point!

Only if you set KexAlgorithms for the client as well can you prevent a weak server setting.
> 
> > Sure, it replies with the value p that is derived from the moduli file where
> > the lower boundary is defined by the dh_estimate specification.
> 
> This is expressing in technical language what I was saying.

Not completely as just mentioned.

> > Bingo. But any cipher configuration or alteration of dh_estimate may be
> > good, but is by no means sufficient nor addresses the heart of your reported
> > concern.
> 
> As far as I can see, it does. Of course there might be other issues (random
> number generators, weak hashes, etc), but I don't see any remaining problem
> with the kex gex group size negotiation (except the upper bound of 8192). If
> you think there is an issue, please explain. From what you wrote so far, I
> don't see it.

As mentioned above, dh_estimate is a desire. But the ultimate decision maker is the server. And he can happily override that desire even to the low side. Thus, changing dh_estimate is NOT going to be sufficient.

And, my point is that up to 8192 bits there is a more elegant way to solve the issue than changing the code: configure KexAlgorithms and potentially change the /etc/ssh/moduli file that is distributed in the RPM to remove the weaker entries.

If you want to change the code, ensure that the default for KexAlgorithms does not contain the algos implying weaker DH sizes.

If you want higher strengths than 8192, you have to again disable the weaker KexAlgorithms *and* add longer entries to /etc/ssh/moduli.
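To illustrate why pruning the moduli file helps even against a weak client request, here is a hypothetical simplification of the server side; pick_group_size and its selection rule are my sketch, not the real choose_dh() from dh.c:

```c
#include <assert.h>
#include <stdlib.h>	/* abs() */

/* Hypothetical simplification of the server's DH-GEX group selection:
 * given the client's (min, want, max) bits and the group sizes present
 * in /etc/ssh/moduli, return the available size closest to `want`
 * within [min, max], or 0 if none qualifies. */
static int
pick_group_size(int min, int want, int max, const int *avail, int n)
{
	int best = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (avail[i] < min || avail[i] > max)
			continue;
		if (best == 0 || abs(avail[i] - want) < abs(best - want))
			best = avail[i];
	}
	return best;
}
```

With a stock file containing {1024, 2048, 4096, 8192}, a client asking for 1024 bits gets 1024; strip the weak entries and the same request is answered with the smallest remaining group instead, which is the effect Stephan describes.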
Comment 26 Peter Backes 2013-10-08 12:35:14 EDT
(In reply to Stephan Mueller from comment #25)
> Only if you set KexAlgorithm for the client as well, you can prevent a weak
> server setting.

Should deprecated, weak algorithms be disabled by default? I don't think that's such a big issue, IF it is easy for the user to see what he is actually using. I think I stated that point quite strongly in the bug report.

> As mentioned above, dh_estimate is a desire. But the ultimate decision maker
> is the server. And he can happily override that desire even to the low side.
> Thus, changing dh_estimate is NOT going to be sufficient.

There is no "ultimate" decision maker. Both client and server make their own decisions. If both disagree, the connection won't be established. Neither can the client force the server to obey its wishes, nor the other way around.

> And, my point is that up to 8192 bits there is a more elegant way to solve
> the 
> issue than changing the code: configure KexAlgorithms and potentially change
> /etc/ssh/moduli file that is distributed in the RPM to remove the weaker
> entries.
>
> If you want to change the code, ensure that the default for KexAlgorithms
> does not contain the algos implying weaker DH sizes.
> 
> If you want higher strengths than 8192, you have to again disable the weaker
> KexAlgorithms *and* add longer entries to /etc/ssh/moduli.

It seems to me that there is no client option that makes it use 8192 bits without changing things on the server side.

I think what you are criticizing is a related, but different issue: the client doesn't warn about, or even prevent, use of a kex algorithm that is weaker than the cipher, i.e., it doesn't prevent use of those KexAlgorithms that may be reasonable choices only when a weaker cipher has been chosen. Perhaps the KexAlgorithms, MACs, and Ciphers settings should be replaced by a single one called CipherSuites that lets you specify the exact combinations you want to permit. But, again, that's a different issue. How about opening a bug report for it and/or discussing it on the openssh mailing list?
Comment 27 Fedora Update System 2013-11-18 08:30:32 EST
openssh-6.2p2-6.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/openssh-6.2p2-6.fc19
Comment 28 Fedora Update System 2013-11-18 10:25:46 EST
openssh-6.1p1-10.fc18 has been submitted as an update for Fedora 18.
https://admin.fedoraproject.org/updates/openssh-6.1p1-10.fc18
Comment 29 Fedora Update System 2013-11-19 00:27:12 EST
openssh-6.2p2-6.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.
Comment 30 Fedora Update System 2013-12-08 21:03:10 EST
openssh-6.1p1-10.fc18 has been pushed to the Fedora 18 stable repository.  If problems still persist, please make note of it in this bug report.
Comment 31 Andy Lutomirski 2013-12-16 18:21:37 EST
Is there any reason this bug is still open?  It seems like the updates should fix it.
Comment 32 Peter Backes 2014-01-18 14:42:49 EST
(In reply to Darren Tucker from comment #18)
> For what it's worth, we agree they should be increased, and they will be in
> the next release (probably the values from NIST limited by RFC4419:
> 3k/7k/8k).

With respect to http://marc.info/?l=openssh-unix-dev&m=138991843024913&w=2

I agree that the basic issue is fixed now, mostly.

Some notes:

- Lack of verbosity is still an issue. Using an 8192-bit group with a 256-bit cipher should give a warning to the user. Also, with -vvvv, the user should have an easy way to figure out which algorithm is actually used. See the initial bug report (first line below "Expected results").

- Someone should actually try what happens if "return (15360);" is used. Is it a problem in practice? The RFC does not say you "MUST" use at most 8192 bits, it merely says you SHOULD, after all.

- The same holds for increasing DH_GRP_MAX to, say, 65536. Any issues in practice?
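For comparison with the old values, the fixed estimate (per Darren Tucker's "values from NIST limited by RFC4419: 3k/7k/8k" quoted in comment 32) presumably looks roughly like the following; again a sketch from memory, not a verbatim quote:

```c
#include <assert.h>

/* Sketch of the post-fix dh_estimate(): NIST SP 800-57 equivalent DH
 * sizes, capped at RFC 4419's 8192-bit GEX maximum (which is why the
 * 15360-bit NIST value for 256-bit ciphers does not appear). */
static int
dh_estimate(int bits)
{
	if (bits <= 112)
		return 2048;	/* ~112-bit symmetric strength, e.g. 3des-cbc */
	if (bits <= 128)
		return 3072;	/* NIST pairing for 128-bit keys */
	if (bits <= 192)
		return 7680;	/* NIST pairing for 192-bit keys */
	return 8192;		/* NIST would say 15360; RFC 4419 says SHOULD <= 8192 */
}
```

The `return (15360);` experiment suggested above would amount to dropping that final cap.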
Comment 33 Darren Tucker 2014-01-19 05:34:10 EST
(In reply to Peter Backes from comment #32)
[...]
> - The same holds for increasing DH_GRP_MAX to, say, 65536. Any issues in
> practice?

First problem is likely going to be OpenSSL, their crypto/dh/dh.h file has:

#ifndef OPENSSL_DH_MAX_MODULUS_BITS
# define OPENSSL_DH_MAX_MODULUS_BITS    10000
#endif

Second problem (at least for OpenSSH servers): none of them have groups bigger than 8k bits in their moduli files.  That would be fixable, and as long as the minimum doesn't go above that, connections would still work.  Don't know about other implementations.
Comment 34 Peter Backes 2014-02-08 14:35:25 EST
(In reply to Darren Tucker from comment #33)
> First problem is likely going to be OpenSSL

bug 1062925
Comment 35 Christoph Anton Mitterer 2014-10-18 01:23:07 EDT
Stupid question... shouldn't one remove the moduli records with smaller sizes from the moduli file, so that the server doesn't allow smaller moduli anymore?

Or is this already guaranteed somehow else?
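Removing the small entries is indeed the usual server-side recipe. A minimal sketch, under the assumption that field 5 of /etc/ssh/moduli is the group size recorded as bits-minus-one (e.g. 3071 for a 3072-bit group; check your own file before relying on that). Sample data stands in for the real file here:

```shell
# Sample data standing in for /etc/ssh/moduli
# (fields: timestamp type tests tries size generator modulus):
cat > moduli.sample <<'EOF'
20130921000000 2 6 100 1023 2 DEADBEEF
20130921000000 2 6 100 2047 2 DEADBEEF
20130921000000 2 6 100 4095 2 DEADBEEF
20130921000000 2 6 100 8191 2 DEADBEEF
EOF

# Keep only entries of >= 3072 bits. On a real server: run this against
# a copy of /etc/ssh/moduli, inspect the result, then install it. Note
# that sshd falls back to a built-in group if no entry matches.
awk '$5 >= 3071' moduli.sample > moduli.strong
cat moduli.strong
```

This raises the floor only on the server; as discussed below, the client still has no knob for its own minimum.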
Comment 36 Christoph Anton Mitterer 2014-10-28 16:32:44 EDT
For the record:

After some further (inconclusive) discussion[0] on the upstream list, I've filed two tickets related to the above:
#2302 (https://bugzilla.mindrot.org/show_bug.cgi?id=2302)
Asking to not fall back to diffie-hellman-group14-sha1 when this was
"explicitly" disabled via the KEX algo preference list of either ssh or
sshd.

#2303 (https://bugzilla.mindrot.org/show_bug.cgi?id=2303)
Asking to allow specifying the min and max values for DH GEX on the
ssh/libssh side.



[0] https://lists.mindrot.org/pipermail/openssh-unix-dev/2014-October/033051.html
Comment 37 Fedora End Of Life 2015-01-09 17:35:55 EST
This message is a notice that Fedora 19 is now at end of life. Fedora 
has stopped maintaining and issuing updates for Fedora 19. It is 
Fedora's policy to close all bug reports from releases that are no 
longer maintained. Approximately 4 (four) weeks from now this bug will
be closed as EOL if it remains open with a Fedora 'version' of '19'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora 19 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora, you are encouraged to change the 'version' to a later Fedora 
version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events. Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.
Comment 38 Fedora End Of Life 2015-02-18 06:14:04 EST
Fedora 19 changed to end-of-life (EOL) status on 2015-01-06. Fedora 19 is
no longer maintained, which means that it will not receive any further
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of
Fedora please feel free to reopen this bug against that version. If you
are unable to reopen this bug, please file a new report against the
current release. If you experience problems, please add a comment to this
bug.

Thank you for reporting this bug and we are sorry it could not be fixed.
Comment 39 Christoph Anton Mitterer 2015-02-18 16:15:17 EST
So that's how RH deals with unaddressed security issues? Waiting till some system automatically marks them as "done"... just because they were reported against an older distro version... while it's _absolutely_ clear that nothing has changed since then?!
Disturbing... o.O



Can one of the responsible people, or someone with enough rights, reopen the bug and mark it against some non-expiring release? Thanks!
Or do I really have to copy&paste everything into a new bug?

Chris.
Comment 40 Andy Lutomirski 2015-02-18 18:18:29 EST
To the contrary, the original issue here has long been fixed, but no one bothered closing the bug.

If you think there's still a problem, it might make sense to file a new bug.

If it makes everyone feel better, I changed the resolution on this bug :)
Comment 41 Christoph Anton Mitterer 2015-02-18 19:47:11 EST
I can't really agree with that... AFAIK OpenSSH still accepts DH groups between 1024 and 8192 bits only, i.e. it has an unnecessarily low maximum and a far too low minimum.

At the server side one can at least change the minimum, simply by removing the respective groups from /etc/ssh/moduli (even though there is an undocumented trap, namely that SSH falls back to an internal, IIRC 2048-bit, group when *all* groups are removed).

At the client side one has no way at all to configure the min/max accepted group sizes with DH-GEX.


I can open a new bug, but I somehow miss the point of losing this bug's history when it could simply be re-opened?!
Comment 42 Andy Lutomirski 2015-02-18 20:18:13 EST
I'm not convinced that there's a problem, nor am I convinced that SSH should change.

If the server tries to force a weak group (using a non-default configuration!), then is it really a problem if the client accepts?  Clearly it is if a downgrade attack is possible, but I don't know of one here.

I admit it seems problematic for the client to use diffie-hellman-group14-sha1 if the client explicitly turned it off, but is there a real attack here?
Comment 43 Christoph Anton Mitterer 2015-02-18 21:53:48 EST
Admittedly, a downgrade attack is not possible here (to my best knowledge) since *if* the client says, e.g., "I don't accept DH-GEX at all", the server can't change that.

Neither can an attacker force a server/client to use something weaker than both are willing to use anyway, and one could argue that if server and/or client are stupid enough to use weak groups... their fault.


But I guess that's just the problem... clients and servers may, per default, use weak groups, and only the server can change that right now (as explained above)... and most server admins likely don't.

But it definitely makes sense that the clients have a way to specify a minimum group size, just to notice "accidentally" weakly configured servers (so one can perhaps tell the admin to improve things).


Apart from that, my request was also that a) both shouldn't use lower group sizes per default... and b) should allow higher than 8192.
In other words, the real attack here is: servers use weak defaults, and clients have no way to easily "notice & block" this.


Btw, it doesn't fall back to diffie-hellman-group14-sha1... it rather uses an internally hard-coded 2048-bit group, but still exchanges it via DH-GEX.
Comment 44 Tomas Mraz 2015-02-19 04:24:49 EST
This, if anywhere, should be discussed and resolved upstream. I don't think Fedora should override the decision of upstream in this regard.
