
opencrypto: Loosen restriction on HMAC key sizes
ClosedPublic

Authored by cem on Sep 21 2017, 6:48 PM.
Details

Summary

HMAC does not inherently limit key sizes. Transforms should compact input
keys larger than the HMAC block size by running the transform (hash) over
the input key.

(Short input keys are padded out with zeros to the HMAC block size.)
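
(For illustration, a minimal sketch of that RFC 2104 key handling; hash_fn
and block_size are hypothetical placeholders, not opencrypto APIs:)

    #include <stdint.h>
    #include <string.h>

    /*
     * Sketch of RFC 2104 HMAC key normalization; not the FreeBSD
     * implementation.  "out" must hold block_size bytes.
     */
    static void
    hmac_normalize_key(void (*hash_fn)(const uint8_t *, size_t, uint8_t *),
        size_t block_size, const uint8_t *key, size_t keylen, uint8_t *out)
    {
            memset(out, 0, block_size);        /* short keys: zero-pad */
            if (keylen > block_size)
                    hash_fn(key, keylen, out); /* long keys: compact by hashing */
            else
                    memcpy(out, key, keylen);
    }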

Still, not all FreeBSD crypto drivers that provide HMAC functionality
handle longer-than-blocksize keys appropriately, so enforce a "maximum" key
length in the crypto API for auth_hashes that previously expressed a
requirement. (The "maximum" is the size of a single HMAC block for the
given transform.) Unconstrained auth_hashes are left as-is.

I believe the previous hardcoded sizes were committed in the original
import of opencrypto from OpenBSD and are due to specific protocol
details of IPSec. Note that none of the previous sizes actually matched
the appropriate HMAC block size.

The previous hardcoded sizes made the SHA tests in cryptotest.py
useless for testing FreeBSD crypto drivers; none of the NIST-KAT example
inputs had keys sized to the previous expectations.

The following drivers were audited to check that they handled keys up to
the block size of the HMAC safely:

Software HMAC:
  * padlock(4)
  * cesa(4)
  * glxsb(4)
  * safe(4)
  * ubsec(4)

Hardware accelerated HMAC:
  * ccr(4)
  * hifn(4)
  * sec(4) (Only supports up to 64 byte keys despite claiming to
    support SHA2 HMACs, but validates input key sizes)
  * cryptocteon (MIPS)
  * nlmsec (MIPS)
  * rmi (MIPS) (Amusingly, does not appear to use key material at all
    -- presumed broken)

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

Have you checked any drivers to see if they handle the larger keys? ccr(4) does I believe (and out-of-tree qat(4) does). I haven't checked our software ones to see if they do.

sys/opencrypto/cryptodev.c
524 (On Diff #33272)

I would perhaps check max_keysize != 0 first.

In D12437#257939, @jhb wrote:

Have you checked any drivers to see if they handle the larger keys? ccr(4) does I believe (and out-of-tree qat(4) does). I haven't checked our software ones to see if they do.

I haven't, no.

sys/opencrypto/cryptodev.c
524 (On Diff #33272)

Sure, I can make that change.

VIA padlock just redirects into the software opencrypto implementations, which can handle arbitrary key lengths up to the block size.

cesa (also just software HMAC) can handle any key length up to block size.

glxsb is identical to cesa for HMAC.

hifn handles lengths up to the block size fine, although it hardcodes that length as a constant (HIFN_MAC_KEY_LENGTH). (No support for SHA2.)

safe handles keys up to block size fine. (Similar software approach as glxsb, cesa, but slightly different style.)

sec seems to handle arbitrary key sizes up to the block size (hardcoded as SEC_MAX_KEY_LEN). Note that this is lower than the 128 byte block size of SHA2, which it also claims to support.

ubsec: Identical to safe.

cryptocteon: Seems to support arbitrary key sizes up to a hardcoded key buffer size of 64 bytes, but it does not claim to support SHA2 anyway.

nlmsec: Hardcoded limit of 20 bytes buffer for all keys. Does not validate inputs. Only claims to support MD5 and SHA1.

mips/rmi/dev/sec: Doesn't appear to ever use the provided key material for HMAC????

So the only danger appears to be nlmsec.

Oh, hah, I misread. The NLM key material buffer is in quadwords! So that's actually 160 bytes (20 quadwords × 8 bytes) -- totally fine for SHA1/MD5. It needs to fit all key material, but typical symmetric crypto keys are vastly smaller than 96 bytes.

cem marked 2 inline comments as done.

Reverse order of conditional.
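
(For reference, a minimal sketch of what the reordered check might look
like; the variable names approximate the cryptodev.c session-creation
path and are not copied from the diff:)

    /*
     * Approximation only: reject user-supplied MAC keys longer than the
     * transform's limit, but skip the check for keyless auth_hashes.
     */
    if (thash->keysize != 0 && sop->mackeylen > thash->keysize) {
            error = EINVAL;
            goto bail;
    }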

By larger keys I meant keys larger than the block size. ccr(4) handles this explicitly in ccr_init_hmac_digest() for example. However, this same bit of code needs to be duplicated in all drivers which seems a bit odd. For keys larger than the block size I would rather have the crypto framework handle this I think by hashing the key before passing it down to the driver. Similarly, I would expect a layer above the drivers to handle any padding required for "short" keys so that drivers always get a key that is a block size.

In D12437#258077, @jhb wrote:

By larger keys I meant keys larger than the block size. ccr(4) handles this explicitly in ccr_init_hmac_digest() for example. However, this same bit of code needs to be duplicated in all drivers which seems a bit odd.

Ah. None of the drivers I reviewed handles larger than block size correctly. It would be a little odd to have to duplicate that in all of the drivers.

For keys larger than the block size I would rather have the crypto framework handle this I think by hashing the key before passing it down to the driver.

Me too, but I think it is orthogonal to this enhancement.

Similarly, I would expect a layer above the drivers to handle any padding required for "short" keys so that drivers always get a key that is a block size.

Drivers already handle short keys (and it's easy). I don't see as much of a need for this.

In D12437#258088, @cem wrote:
In D12437#258077, @jhb wrote:

By larger keys I meant keys larger than the block size. ccr(4) handles this explicitly in ccr_init_hmac_digest() for example. However, this same bit of code needs to be duplicated in all drivers which seems a bit odd.

Ah. None of the drivers I reviewed handles larger than block size correctly. It would be a little odd to have to duplicate that in all of the drivers.

That is what Linux does. I'd probably like us to avoid that if possible.

For keys larger than the block size I would rather have the crypto framework handle this I think by hashing the key before passing it down to the driver.

Me too, but I think it is orthogonal to this enhancement.

Similarly, I would expect a layer above the drivers to handle any padding required for "short" keys so that drivers always get a key that is a block size.

Drivers already handle short keys (and it's easy). I don't see as much of a need for this.

Well, I think the only functional result of this change is to permit shorter keys passed via /dev/crypto, yes? You could achieve that by just adding a single 'memset' in cryptodev.c to zero pad the remaining part of the key? Then drivers still always get a key that is a full block size without every driver having to duplicate the same code. (It would be one less thing for drivers to have to worry about). OTOH, another way we can reduce the duplicated code is add some helper functions to compute HMAC IPAD/OPAD that drivers can use.
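
(A minimal sketch of such a shared helper, assuming a block-sized,
zero-padded key; the function name is hypothetical, while the 0x36/0x5C
constants are the standard RFC 2104 pads:)

    #include <stddef.h>
    #include <stdint.h>

    #define HMAC_IPAD_VAL  0x36
    #define HMAC_OPAD_VAL  0x5C

    /*
     * Hypothetical shared helper: derive the inner and outer pad blocks
     * that drivers feed to the hash engine from a block-sized key.
     */
    static void
    hmac_compute_pads(const uint8_t *key, size_t block_size,
        uint8_t *ipad, uint8_t *opad)
    {
            size_t i;

            for (i = 0; i < block_size; i++) {
                    ipad[i] = key[i] ^ HMAC_IPAD_VAL;
                    opad[i] = key[i] ^ HMAC_OPAD_VAL;
            }
    }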

I guess I would probably just leave the existing 'keysize' in place and change the one '!=' check in cryptodev.c to '>' and have just that one thing be the entire commit. The key is actually a fixed size in these algorithms (typically the block size) since you XOR the fixed-size key with the IPAD/OPAD. However, there is a "user supplied key" and there is the "key the algorithm actually uses". cryptodev is accepting a "user supplied key", and that key has to be transformed (in some cases) into a "key the algorithm actually uses". I view the constants and 'keysize' field as documenting the size of the latter value ("key the algorithm actually uses"), not the former, in which case they are valid as is.

sys/opencrypto/cryptodev.c
524 (On Diff #33272)

Is there any reason to permit keys for MACs that don't use a key btw? That is, why do we need the '!= 0' check? It seems sensible to me that if you use an algorithm that doesn't use a key and you pass in an actual key it should still fail with EINVAL after this change.

In D12437#258443, @jhb wrote:

Well, I think the only functional result of this change is to permit shorter keys passed via /dev/crypto, yes?

Longer keys, too. The current restrictions on key size in /dev/crypto have no relationship to the underlying HMAC block size. E.g.:

Algo      Bogus key size requirement    Actual HMAC block size
MD5       16                            64
SHA1      20                            64
SHA256    32                            64
SHA384    48                            128
SHA512    64                            128

The current mandatory key sizes have no basis in reality. They just happen to be what IPSec uses, I think.

You could achieve that by just adding a single 'memset' in cryptodev.c to zero pad the remaining part of the key? Then drivers still always get a key that is a full block size without every driver having to duplicate the same code. (It would be one less thing for drivers to have to worry about). OTOH, another way we can reduce the duplicated code is add some helper functions to compute HMAC IPAD/OPAD that drivers can use.

Yes, and I'm fine with either of those cleanups, but existing drivers do not need that change to support partial block keys because the current key sizes are already partial block (see above table) and they had to support those.

I guess I would probably just leave the existing 'keysize' in place and change the one '!=' check in cryptodev.c to '>' and have just that one thing be the entire commit.

No no, that doesn't fix the problem. I think the table makes this more clear.

In D12437#258443, @jhb wrote:
In D12437#258088, @cem wrote:
In D12437#258077, @jhb wrote:

By larger keys I meant keys larger than the block size. ccr(4) handles this explicitly in ccr_init_hmac_digest() for example. However, this same bit of code needs to be duplicated in all drivers which seems a bit odd.

Ah. None of the drivers I reviewed handles larger than block size correctly. It would be a little odd to have to duplicate that in all of the drivers.

That is what Linux does. I'd probably like us to avoid that if possible.

I've punted on that problem for now; it's up to clients to give us a key no larger than the HMAC block size (just as it was before). It would be a further enhancement to hash larger keys in some generic way. The current interface never allowed these larger keys.

I wanted to reduce the scope and risk of this change, so I did not attempt to support longer than blocksize keys.
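
(To illustrate that client-side responsibility, a hedged sketch of a
userland /dev/crypto consumer compacting an oversized HMAC-SHA256 key
itself before session setup; OpenSSL's SHA256() is used purely for
convenience, and this is not code from the review:)

    #include <sys/ioctl.h>
    #include <crypto/cryptodev.h>
    #include <string.h>
    #include <openssl/sha.h>

    /*
     * Sketch: open an HMAC-SHA256 session, pre-hashing keys longer than
     * the 64-byte SHA-256 block size so the kernel never sees them.
     */
    static int
    open_hmac_session(int cryptofd, const unsigned char *key, size_t keylen)
    {
            unsigned char kbuf[64];
            struct session_op sop;

            memset(kbuf, 0, sizeof(kbuf));
            if (keylen > sizeof(kbuf)) {
                    SHA256(key, keylen, kbuf);      /* compact to 32 bytes */
                    keylen = SHA256_DIGEST_LENGTH;
            } else
                    memcpy(kbuf, key, keylen);

            memset(&sop, 0, sizeof(sop));
            sop.mac = CRYPTO_SHA2_256_HMAC;
            sop.mackey = (void *)kbuf;
            sop.mackeylen = (int)keylen;
            return (ioctl(cryptofd, CIOCGSESSION, &sop));
    }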

Hmm, the commit message is perhaps not clear, as it conflates the two meanings of "key": the initial "there is no limit" claim applies to the "user supplied key", but the change is about correcting the size of the "key the algorithm uses". I would perhaps leave the member as 'keysize' instead of 'max_keysize'. Perhaps we can replace it in the future with a flag indicating whether a key is valid at all; if it is, the implied value is 'blocksize'.

In D12437#258455, @jhb wrote:

Hmm, the commit message is perhaps not clear, as it conflates the two meanings of "key": the initial "there is no limit" claim applies to the "user supplied key", but the change is about correcting the size of the "key the algorithm uses".

I'm not sure I understand the concern or what changes you would like made. I'll try to clarify the commit message slightly.

I would perhaps leave the member as 'keysize' instead of 'max_keysize'. Perhaps we can replace it in the future with a flag indicating whether a key is valid at all; if it is, the implied value is 'blocksize'.

I did consider just removing keysize entirely and using the blocksize field as the limit, but worried about missing an instance of the struct (they do not use C99 initializers). Maybe I should just replace it with a padding field. I'd be happy to make this change.

In D12437#258463, @cem wrote:
In D12437#258455, @jhb wrote:

Hmm, the commit message is perhaps not clear, as it conflates the two meanings of "key": the initial "there is no limit" claim applies to the "user supplied key", but the change is about correcting the size of the "key the algorithm uses".

I'm not sure I understand the concern or what changes you would like made. I'll try to clarify the commit message slightly.

I would perhaps leave the member as 'keysize' instead of 'max_keysize'. Perhaps we can replace it in the future with a flag indicating whether a key is valid at all; if it is, the implied value is 'blocksize'.

I did consider just removing keysize entirely and using the blocksize field as the limit, but worried about missing an instance of the struct (they do not use C99 initializers). Maybe I should just replace it with a padding field. I'd be happy to make this change.

Unfortunately we can't just assume 'blocksize' because some algorithms (keyed SHA1, NULL, etc.) have 0 for the keysize. I just think 'max_keysize' is a bit of a misnomer because I consider the 'auth_xform' structures to provide values related to "what the algorithm uses", in which case 'keysize' is a single fixed-length field much like 'blocksize'. Handling "user provided keys" of different sizes is something that needs to happen at a slightly higher level, similar to how one uses PKCS padding to deal with data that isn't an even multiple of a block. It's not the end of the world though. I looked, and OpenSSL doesn't provide anything like 'keysize' in its EVP_MD structures, and HMAC() requires a key and assumes a block size (so for OpenSSL it seems you have to provide a bogus key for NULL HMAC, which seems odd).

In D12437#258465, @jhb wrote:

Unfortunately we can't just assume 'blocksize' because some algorithms (keyed SHA1, NULL, etc.) have 0 for the keysize. I just think 'max_keysize' is a bit of a misnomer because I consider the 'auth_xform' structures to provide values related to "what the algorithm uses" in which case 'keysize' is a single fixed-length field much like 'blocksize'. Handling "user provided keys" of different sizes is something that needs to happen at a slightly higher level similar to how one uses PKCS padding to deal with data that isn't an even multiple of a block. It's not the end of the world though. I looked and OpenSSL doesn't provide anything like 'keysize' in its EVP_MD structures, and HMAC() requires a key and assumes a block size (so for OpenSSL it seems you have to provide a bogus key for NULL HMAC which seems odd)

Ok. Would you like me to rename max_keysize back to keysize? Are there any other changes you'd like to see before this goes in?

In D12437#258495, @cem wrote:
In D12437#258465, @jhb wrote:

Unfortunately we can't just assume 'blocksize' because some algorithms (keyed SHA1, NULL, etc.) have 0 for the keysize. I just think 'max_keysize' is a bit of a misnomer because I consider the 'auth_xform' structures to provide values related to "what the algorithm uses" in which case 'keysize' is a single fixed-length field much like 'blocksize'. Handling "user provided keys" of different sizes is something that needs to happen at a slightly higher level similar to how one uses PKCS padding to deal with data that isn't an even multiple of a block. It's not the end of the world though. I looked and OpenSSL doesn't provide anything like 'keysize' in its EVP_MD structures, and HMAC() requires a key and assumes a block size (so for OpenSSL it seems you have to provide a bogus key for NULL HMAC which seems odd)

Ok. Would you like me to rename max_keysize back to keysize? Are there any other changes you'd like to see before this goes in?

Just that please.

This revision is now accepted and ready to land. Sep 25 2017, 6:33 PM
cem edited the summary of this revision.

Rename max_keysize back to keysize

This revision now requires review to proceed. Sep 25 2017, 6:42 PM
sys/opencrypto/cryptodev.c
524 (On Diff #33272)

Code that handles zero seems to suggest that any key size is accepted, rather than no key. But that may not be the intent.

This revision is now accepted and ready to land. Sep 25 2017, 8:01 PM

The change looks fine to me.

In D12437#258447, @cem wrote:
In D12437#258443, @jhb wrote:

Well, I think the only functional result of this change is to permit shorter keys passed via /dev/crypto, yes?

Longer keys, too. The current restrictions on key size in /dev/crypto have no relationship to the underlying HMAC block size. E.g.:

Algo      Bogus key size requirement    Actual HMAC block size
MD5       16                            64
SHA1      20                            64
SHA256    32                            64
SHA384    48                            128
SHA512    64                            128

The current mandatory key sizes have no basis in reality. They just happen to be what IPSec uses, I think.

I think it may be slightly less crazy than that. Those mandatory key sizes are also the hash sizes, and "less than L [hash size] bytes is strongly discouraged as it would decrease the security strength of the function" [1]. Of course, this doesn't mean we should forbid short keys in the API. But the original motivation might have been to prevent foot-shooting.

If the motivation for this change is just to allow larger keys, we might consider a more conservative change which allows key sizes between the hashsize and the blocksize (/ "keysize" / "max_keysize"). Or require a foot-shoot flag for short keys.

[1] https://tools.ietf.org/html/rfc2104#section-3

I think it may be slightly less crazy than that. Those mandatory key sizes are also the hash sizes, and "less than L [hash size] bytes is strongly discouraged as it would decrease the security strength of the function" [1]. Of course, this doesn't mean we should forbid short keys in the API. But the original motivation might have been to prevent foot-shooting.

Hm, ok. I don't know if we have a lot of precedent for anti-footshooting APIs.

If the motivation for this change is just to allow larger keys, we might consider a more conservative change which allows key sizes between the hashsize and the blocksize (/ "keysize" / "max_keysize"). Or require a foot-shoot flag for short keys.

Does anyone else have an opinion about what range should be allowed by default (and if it is restricted, whether a flag should be used to allow smaller keys)? I have a weak preference towards not adding a flag for this and not limiting the range.

Also update previously missed VIA padlock auth_hash definitions. (Ryan noted
that these had the ctxsize and blocksize parameters swapped, and I've swapped
them back.) A strongly welcome future cleanup would be to use C99 initializers
for all of these instances.

This revision now requires review to proceed. Sep 26 2017, 4:03 PM
This revision was automatically updated to reflect the committed changes.

Padlock fix-up LGTM.

In D12437#259349, @cem wrote:

If the motivation for this change is just to allow larger keys, we might consider a more conservative change which allows key sizes between the hashsize and the blocksize (/ "keysize" / "max_keysize"). Or require a foot-shoot flag for short keys.

Does anyone else have an opinion about what range should be allowed by default (and if it is restricted, whether a flag should be used to allow smaller keys)? I have a weak preference towards not adding a flag for this and not limiting the range.

I tend to agree; I just wanted to make sure we're doing that consciously. After all, we can't prevent clients from using a weak key of the "right" length.