Hello, In master, (unsigned int) was replaced with (size_t), which allows even larger sizes to be passed to the encryption and decryption functions. However, the usage of TMP_ALLOC to make a copy of the input data (e.g., in ctr.c) contradicts that goal.
In general I think the usage of alloca() is dangerous, as it is not always known whether the stack is limited, e.g., when nettle is called in a co-routine or in an application with a non-growing stack. I think there can be an easy modification of TMP_ALLOC to use malloc for sizes larger than 128 bytes (or any other fixed number), and alloca otherwise. Would you be interested in such a patch? (It would require the introduction of a TMP_FREE as well.)
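The suggested hybrid scheme could be sketched roughly as follows (hypothetical macro names and threshold; nettle's real TMP_ALLOC differs). Requests up to a fixed threshold stay on the stack via alloca(); larger ones go through malloc() and must be released with the proposed TMP_FREE:

```c
#include <assert.h>
#include <stdlib.h>
#include <alloca.h>

#define TMP_THRESHOLD 128

/* TMP_DECL introduces a slot remembering a possible heap pointer,
 * so TMP_FREE knows what (if anything) to release. */
#define TMP_DECL(name)     void *name##_heap = NULL

/* Small requests use alloca(); larger ones fall back to malloc()
 * and record the pointer for TMP_FREE. */
#define TMP_ALLOC(name, n) \
  ((n) <= TMP_THRESHOLD ? alloca(n) : (name##_heap = malloc(n)))

/* free(NULL) is a no-op, so this is safe on the alloca() path too. */
#define TMP_FREE(name)     free(name##_heap)
```

This keeps the common small-allocation path cheap while bounding stack usage for unusually large inputs.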
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
In master, (unsigned int) was replaced with (size_t), which allows even larger sizes to be passed to the encryption and decryption functions. However, the usage of TMP_ALLOC to make a copy of the input data (e.g., in ctr.c) contradicts that goal.
The intention is that TMP_ALLOC should only ever be used for small allocations, and each use must specify a maximum size. If you see any TMP_ALLOC with a potentially large size, that is a bug. (In case anyone else here is also hacking gmp, I should point out that gmp's TMP_ALLOC is different.)
In ctr.c, the max size is 128 bytes (NBLOCKS * NETTLE_MAX_CIPHER_BLOCK_SIZE).
I see no problem there, except that an
assert (block_size <= NETTLE_MAX_CIPHER_BLOCK_SIZE);
might be appropriate.
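The bounded pattern described here could be sketched as below. The constants mirror the ones quoted (NBLOCKS * NETTLE_MAX_CIPHER_BLOCK_SIZE = 128 bytes), but the function body is illustrative, not ctr.c verbatim:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

#define NETTLE_MAX_CIPHER_BLOCK_SIZE 32
#define NBLOCKS 4

static size_t
fill_counter_blocks(unsigned char *buffer, size_t block_size)
{
  /* The suggested assert: the temporary buffer is sized for the
   * worst case, so block_size must not exceed the documented max. */
  assert(block_size <= NETTLE_MAX_CIPHER_BLOCK_SIZE);

  /* The buffer holds up to NBLOCKS counter blocks at a time. */
  memset(buffer, 0, NBLOCKS * block_size);
  return NBLOCKS * block_size;
}
```

Because the allocation has a compile-time upper bound, stack allocation stays safe regardless of the (size_t) message length.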
I think there can be an easy modification of TMP_ALLOC to use malloc for larger than 128 (or any other fixed number), and alloca otherwise.
As far as possible, I think we should avoid large allocations. So I don't think such a change is needed.
Regards, /Niels
On Wed, Dec 11, 2013 at 12:56 PM, Niels Möller nisse@lysator.liu.se wrote:
In ctr.c, the max size is 128 bytes (NBLOCKS * NETTLE_MAX_CIPHER_BLOCK_SIZE).
Ok, I didn't notice that. However, the bignum functions like pkcs1_decrypt, pkcs1_encrypt, pkcs1_rsa_digest_encode, and nettle_mpz_random_size seem to have no such limits.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Ok, I didn't notice that. However, the bignum functions like pkcs1_decrypt, pkcs1_encrypt, pkcs1_rsa_digest_encode, and nettle_mpz_random_size seem to have no such limits.
They work under the assumption that key_size is less than 10000 bits (see NETTLE_MAX_BIGNUM_BITS, 10000, and NETTLE_MAX_BIGNUM_SIZE, 1250).
I agree this is a bit more questionable.
Regards, /Niels
On Wed, Dec 11, 2013 at 2:47 PM, Niels Möller nisse@lysator.liu.se wrote:
Ok, I didn't notice that. However, the bignum functions like pkcs1_decrypt, pkcs1_encrypt, pkcs1_rsa_digest_encode, and nettle_mpz_random_size seem to have no such limits.
They work under the assumption that key_size is less than 10000 bits (see NETTLE_MAX_BIGNUM_BITS, 10000, and NETTLE_MAX_BIGNUM_SIZE, 1250). I agree this is a bit more questionable.
As far as I understand, that size assumption is only enforced, via an assert, on systems without alloca(). On systems with alloca() there is no such check. In both cases, it seems easy to abuse these functions for a denial of service.
I think it would be better for these functions to fail rather than abort() if parameters are out of supported range.
regards, Nikos
On Wed, Dec 11, 2013 at 3:13 PM, Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com wrote:
As far as I understand that size assumption is only enforced on systems without alloca using an assert. In systems with alloca there is no such check. In both cases, it seems to be easy to abuse them for a denial of service. I think it would be better for these functions to fail rather than abort() if parameters are out of supported range.
A fix that could suit the master branch is attached. That adds the possibility to return an error if the maximum sizes are exceeded. For 2.7.x, though, this can only be handled by an abort, which is less than ideal, as it looks trivial to exploit for DoS. Even worse, nettle_mpz_set_str_256*() does not enforce the maximum limits on big numbers and doesn't even have the ability to return an error.
The patch is pretty ugly because it introduces unused variables, but I couldn't think of a cleaner way to fix that.
Any other ideas on how the issue can be gracefully solved (especially in the 2.7 branch)?
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
A fix that could suit the master branch is attached. That adds the possibility to return an error if the maximum sizes are exceeded.
For the functions involving bignums, I wonder if it's better to allocate the needed storage, either with malloc or with gmp's allocation functions. gmp is going to be allocating things anyway, so there's little point in avoiding malloc (which is a different situation than, e.g., ctr.c). And it's not really the right place for arbitrary limits.
2.7.x though this can be handled by an abort which is less than ideal, as it looks trivial to exploit for DoS.
I'm not too concerned about DoS here. An application receiving an RSA key from an untrusted source should impose some reasonable limit on keysize before it is used. If I send you a public RSA key with n and e of 100000 bits, and some signature for you to verify, and you don't impose any limit on key size, I'll hog your cpu for quite a while.
And the application really has to choose the limit, that's not Nettle's job.
That said, Nettle shouldn't do unbounded stack allocations in this case, it ought to use malloc, or abort or fail in some other *reliable* fashion. (I think having some documented limit on keysize would be acceptable, but I'm leaning towards saying that it's better to just use heap allocation).
Do you agree?
Even worse, nettle_mpz_set_str_256*() does not enforce the maximum limits on big numbers and doesn't even have the ability to return an error.
I'm not entirely sure what problem you see here.
Regards, /Niels
On Thu, Dec 12, 2013 at 3:40 PM, Niels Möller nisse@lysator.liu.se wrote:
I'm not too concerned about DoS here. An application receiving an RSA key from an untrusted source should impose some reasonable limit on keysize before it is used. If I send you a public RSA key with n and e of 100000 bits, and some signature for you to verify, and you don't impose any limit on key size, I'll hog your cpu for quite a while.
Indeed, but this reasonable limit has to be somehow known to the application to be enforced prior to calling the nettle functions. Now my limits in gnutls and nettle's limits are disconnected (as I only now realized that some functions of nettle could abort after 10000 bits).
That said, Nettle shouldn't do unbounded stack allocations in this case, it ought to use malloc, or abort or fail in some other *reliable* fashion. (I think having some documented limit on keysize would be acceptable, but I'm leaning towards saying that it's better to just use heap allocation). Do you agree?
Yes, that would be much better. Do you want me to send an updated patch?
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
That said, Nettle shouldn't do unbounded stack allocations in this case, it ought to use malloc, or abort or fail in some other *reliable* fashion. (I think having some documented limit on keysize would be acceptable, but I'm leaning towards saying that it's better to just use heap allocation). Do you agree?
Yes, that would be much better. Do you want me to send an updated patch?
That would be good. I think it makes sense to use gmp's allocation functions here, so the user can override allocation, without having to do it separately for nettle and gmp. See gmp-glue.c:gmp_alloc_limbs.
Regards, /Niels
On Thu, Dec 12, 2013 at 5:31 PM, Niels Möller nisse@lysator.liu.se wrote:
That said, Nettle shouldn't do unbounded stack allocations in this case, it ought to use malloc, or abort or fail in some other *reliable* fashion. (I think having some documented limit on keysize would be acceptable, but I'm leaning towards saying that it's better to just use heap allocation). Do you agree?
Yes, that would be much better. Do you want me to send an updated patch?
That would be good. I think it makes sense to use gmp's allocation functions here, so the user can override allocation, without having to do it separately for nettle and gmp. See gmp-glue.c:gmp_alloc_limbs.
What about the attached patch?
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
What about the attached patch?
Looks good! Some minor comments:
--- /dev/null +++ b/bignum-internal.h
+inline static void* _tmp_gmp_alloc(unsigned* out_n, size_t n)
+inline static void _tmp_gmp_free(void* p, size_t n)
I don't think the _tmp prefix is needed, since these are declared static.
Do you think it's important that these are inline functions? The alternative is to put them in gmp-glue.c together with gmp_{alloc,free}_limbs, and then the TMP_GMP_* macros could go in nettle-internal.h.
+#define TMP_GMP_DECL(name, type) type *name; \
+  unsigned name##_gmp_size
Here, on the other hand, it might make sense with a prefix on the size variable. Maybe __tmp_gmp_size##name or so (I tend to use __{NAME OF MACRO}_ as prefix).
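With that naming suggestion applied, the macros could be sketched as below (macro bodies are illustrative; this version uses plain malloc rather than gmp's allocator, and the checked-in versions live elsewhere):

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

/* The size variable gets the __tmp_gmp_size_ prefix so it cannot
 * collide with other identifiers in the enclosing function. */
#define TMP_GMP_DECL(name, type) \
  type *name;                    \
  size_t __tmp_gmp_size_##name

#define TMP_GMP_ALLOC(name, n)          \
  do {                                  \
    __tmp_gmp_size_##name = (n);        \
    name = malloc((n) * sizeof(*name)); \
  } while (0)

#define TMP_GMP_FREE(name)                                        \
  do {                                                            \
    /* An allocator taking a size (like gmp's) would use this. */ \
    (void) __tmp_gmp_size_##name;                                 \
    free(name);                                                   \
  } while (0)
```

Heap allocation here keeps the supported size unbounded without ever touching the stack.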
diff --git a/pkcs1-decrypt.c b/pkcs1-decrypt.c
index 02d3728..96016e0 100644
--- a/pkcs1-decrypt.c
+++ b/pkcs1-decrypt.c
   if (*length < message_length)
-    return 0;
+    {
+      ret = 0;
+      goto err;
+    }
   memcpy(message, terminator + 1, message_length);
   *length = message_length;
-  return 1;
+  ret = 1;
+ err:
+  TMP_GMP_FREE(em);
+  return ret;
 }
"err" is maybe not the right name for the label, since the code is also for successful termination. "done" or "cleanup" would be better.
Regards, /Niels
On Fri, Dec 13, 2013 at 11:33 AM, Niels Möller nisse@lysator.liu.se wrote:
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
What about the attached patch?
Looks good! Some minor comments:
updated.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
updated.
Checked in now, with minor changes (deleted the out_n argument for gmp_alloc, and moved the TMP_GMP_* macros to gmp-glue.h).
Regards, /Niels
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
On Sun, 2013-12-15 at 19:19 +0100, Niels Möller wrote:
Checked in now, with minor changes (deleted the out_n argument for gmp_alloc, and moved the TMP_GMP_* macros to gmp-glue.h).
Would that be included in a 2.7 release?
I don't currently plan any 2.7.2 bugfix release. I'm thinking that this problem is not serious enough to motivate a new release. If some more urgent reason to make a release comes up, we can consider backporting this change.
Or have you seen any real problems caused by this? That would be an application accepting arbitrarily large RSA keys from an untrusted source, and passing them on to nettle without any limit to prevent DoS. In that scenario, it could be a real problem.
Regards, /Niels
On Tue, Dec 17, 2013 at 8:57 AM, Niels Möller nisse@lysator.liu.se wrote:
I don't currently plan any 2.7.2 bugfix release. I'm thinking that this problem is not serious enough to motivate a new release. If some more urgent reason to make a release comes up, we can consider backporting this change. Or have you seen any real problems caused by this? That would be an application accepting arbitrarily large RSA keys from an untrusted source, and passing them on to nettle without any limit to prevent DoS. In that scenario, it could be a real problem.
The limit in gnutls for public keys was 16k. The undocumented abort() limit in these functions is 10k. Thus even an application that enforces its own limit risks a crash. Even worse, this crash can be triggered simply by data coming from the network. So I find that a pretty serious issue (although I think the issue is mitigated on systems where alloca() is available).
A release on the 2.7 branch could also lift the q_bit limits in generating a DSA key (so that nettle could be used for generating DH keys as well).
regards, Nikos