Hello,
I've made a new Merge Request in the nettle gitlab repo to provide RSA-OAEP encryption and decryption: https://git.lysator.liu.se/nettle/nettle/-/merge_requests/20
It adds 2 new functions:
int pkcs1_oaep_encrypt (size_t key_size, void *random_ctx, nettle_random_func *random, size_t hlen, size_t label_length, const uint8_t *label, size_t message_length, const uint8_t *message, mpz_t m);
int pkcs1_oaep_decrypt (size_t key_size, const mpz_t m, size_t hlen, size_t label_length, const uint8_t *label, size_t *length, uint8_t *message);
The parameter hlen is the output length of the SHA function used for masking data:
- SHA1_DIGEST_SIZE
- SHA256_DIGEST_SIZE
- SHA384_DIGEST_SIZE
- SHA512_DIGEST_SIZE
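For reference, the EME-OAEP padding scheme these functions implement (RFC 8017, section 7.1) can be sketched in Python. This is an illustrative reimplementation outside Nettle, with function names of my own choosing, not the MR's code; Nettle's C implementation differs in detail (fixed buffers, side-channel behavior):

```python
import hashlib, os

def mgf1(seed: bytes, mask_len: int, hash_name: str = "sha256") -> bytes:
    # MGF1 mask generation function (RFC 8017, appendix B.2.1)
    hlen = hashlib.new(hash_name).digest_size
    out = b"".join(
        hashlib.new(hash_name, seed + counter.to_bytes(4, "big")).digest()
        for counter in range((mask_len + hlen - 1) // hlen))
    return out[:mask_len]

def oaep_encode(k: int, message: bytes, label: bytes = b"",
                seed: bytes = None, hash_name: str = "sha256") -> bytes:
    # EME-OAEP-ENCODE; k is the RSA modulus size in octets
    hlen = hashlib.new(hash_name).digest_size
    if len(message) > k - 2 * hlen - 2:
        raise ValueError("message too long")
    lhash = hashlib.new(hash_name, label).digest()
    ps = b"\x00" * (k - len(message) - 2 * hlen - 2)
    db = lhash + ps + b"\x01" + message          # DB = lHash || PS || 0x01 || M
    seed = os.urandom(hlen) if seed is None else seed
    masked_db = bytes(a ^ b for a, b in
                      zip(db, mgf1(seed, k - hlen - 1, hash_name)))
    masked_seed = bytes(a ^ b for a, b in
                        zip(seed, mgf1(masked_db, hlen, hash_name)))
    return b"\x00" + masked_seed + masked_db     # EM is always k octets

def oaep_decode(k: int, em: bytes, label: bytes = b"",
                hash_name: str = "sha256") -> bytes:
    # EME-OAEP-DECODE; raises on any padding inconsistency
    hlen = hashlib.new(hash_name).digest_size
    if len(em) != k or em[0] != 0:
        raise ValueError("decryption error")
    masked_seed, masked_db = em[1:1 + hlen], em[1 + hlen:]
    seed = bytes(a ^ b for a, b in
                 zip(masked_seed, mgf1(masked_db, hlen, hash_name)))
    db = bytes(a ^ b for a, b in
               zip(masked_db, mgf1(seed, k - hlen - 1, hash_name)))
    if db[:hlen] != hashlib.new(hash_name, label).digest():
        raise ValueError("decryption error")
    idx = db.index(b"\x01", hlen)   # NB: a real implementation must scan
    return db[idx + 1:]             # in constant time, as the MR does
```

Passing a fixed seed instead of fresh randomness is also what makes deterministic known-answer tests of encryption possible.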
Is it possible to get feedback on this MR and eventually have it merged into the master branch?
Thanks in advance
/Nicolas
Hello again,
After rethinking it, I refactored the new functions to be consistent with how Nettle functions are designed; now there's one encrypt/decrypt function per hash.
https://git.lysator.liu.se/nettle/nettle/-/merge_requests/20
The new functions are the following:
Encryption:
int rsa_oaep_sha1_encrypt(const struct rsa_public_key *key, void *random_ctx, nettle_random_func *random, size_t label_length, const uint8_t *label, size_t length, const uint8_t *message, mpz_t gibberish);
int rsa_oaep_sha256_encrypt(const struct rsa_public_key *key, void *random_ctx, nettle_random_func *random, size_t label_length, const uint8_t *label, size_t length, const uint8_t *message, mpz_t gibberish);
int rsa_oaep_sha384_encrypt(const struct rsa_public_key *key, void *random_ctx, nettle_random_func *random, size_t label_length, const uint8_t *label, size_t length, const uint8_t *message, mpz_t gibberish);
int rsa_oaep_sha512_encrypt(const struct rsa_public_key *key, void *random_ctx, nettle_random_func *random, size_t label_length, const uint8_t *label, size_t length, const uint8_t *message, mpz_t gibberish);
Decryption:
int rsa_oaep_sha1_decrypt(const struct rsa_private_key *key, size_t label_length, const uint8_t *label, size_t *length, uint8_t *message, const mpz_t gibberish);
int rsa_oaep_sha256_decrypt(const struct rsa_private_key *key, size_t label_length, const uint8_t *label, size_t *length, uint8_t *message, const mpz_t gibberish);
int rsa_oaep_sha384_decrypt(const struct rsa_private_key *key, size_t label_length, const uint8_t *label, size_t *length, uint8_t *message, const mpz_t gibberish);
int rsa_oaep_sha512_decrypt(const struct rsa_private_key *key, size_t label_length, const uint8_t *label, size_t *length, uint8_t *message, const mpz_t gibberish);
The tests are updated too
/Nicolas
Hello Nicolas, Niels,
Now that another attack on RSA encryption with PKCS#1 v1.5 padding has been discovered (though Nettle is not vulnerable)[1], it is recommended to avoid using the v1.5 scheme in new applications[2][3], and thus supporting RSA-OAEP in Nettle is becoming more relevant.
I made some modifications to the existing merge request[4], mainly to make it side-channel safe at decryption: https://git.lysator.liu.se/nettle/nettle/-/merge_requests/60
Could you take a look when you have time?
Footnotes: [1] https://people.redhat.com/~hkario/marvin/
[2] https://datatracker.ietf.org/doc/draft-kario-rsa-guidance/
[3] https://github.com/checkpoint-restore/criu/pull/2297#discussion_r1420116692
[4] https://git.lysator.liu.se/nettle/nettle/-/merge_requests/20
Regards,
Daiki Ueno ueno@gnu.org writes:
Now that another attack on RSA encryption with PKCS#1 v1.5 padding has been discovered (though Nettle is not vulnerable)[1], it is recommended to avoid using the v1.5 scheme in new applications[2][3], and thus supporting RSA-OAEP in Nettle is becoming more relevant.
I agree oaep support is desirable.
I made some modifications to the existing merge request[4], mainly to make it side-channel safe at decryption: https://git.lysator.liu.se/nettle/nettle/-/merge_requests/60
Thanks for reviving this issue, and looking into side-channel silence.
Could you take a look when you have time?
Thanks, I've had a look, and it looks pretty good to me. Some comments and questions:
* For tests, would it make sense to have some tests that check that encryption with a given message and randomness gives the expected output? Even better if there are authoritative test cases for that.
* Is it useful to have oaep_decode_mgf1 and oaep_encode_mgf1 advertised as public functions, or would it be better to make them internal?
* Do you see any reasonable (i.e., with a net gain in maintainability) way to share more code between _oaep_sec_decrypt_variable and _pkcs1_sec_decrypt_variable?
* For oaep_decode_mgf1, oaep_encode_mgf1, maybe one could let the caller allocate and pass in the appropriate hashing context? Would be easy to do, e.g., in rsa_oaep_sha512_decrypt. But it looks like that would be inconsistent with pss_mgf1 (which looks like it needs a separate hashing context).
* I think it was a design mistake to represent RSA ciphertexts as mpz_t rather than octet strings in Nettle's original RSA interfaces. I wonder if it would make sense to let the new functions take octet strings instead?
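On the context-reuse point above: outside Nettle, the idea of restarting each MGF1 iteration from a single absorbed-seed state, rather than constructing a fresh hash per counter, can be illustrated in Python, where hashlib's copy() stands in for a caller-supplied hashing context. Names here are hypothetical, not the MR's API:

```python
import hashlib

def mgf1_with_ctx(ctx_factory, seed: bytes, mask_len: int) -> bytes:
    # absorb the seed once; each counter iteration then resumes from
    # a copy of that state instead of rehashing the seed
    base = ctx_factory()
    base.update(seed)
    out = b""
    counter = 0
    while len(out) < mask_len:
        h = base.copy()
        h.update(counter.to_bytes(4, "big"))
        out += h.digest()
        counter += 1
    return out[:mask_len]

def mgf1_naive(ctx_factory, seed: bytes, mask_len: int) -> bytes:
    # the straightforward "new context per counter" formulation
    hlen = ctx_factory().digest_size
    out = b"".join(ctx_factory(seed + c.to_bytes(4, "big")).digest()
                   for c in range((mask_len + hlen - 1) // hlen))
    return out[:mask_len]
```

Both formulations agree on the output; the copy() trick merely avoids re-absorbing the seed on every iteration.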
Regards, /Niels
On Mon, 2024-01-15 at 21:43 +0100, Niels Möller wrote:
Daiki Ueno ueno@gnu.org writes:
Now that another attack on RSA encryption with PKCS#1 v1.5 padding has been discovered (though Nettle is not vulnerable)[1], it is recommended to avoid using the v1.5 scheme in new applications[2][3], and thus supporting RSA-OAEP in Nettle is becoming more relevant.
I agree oaep support is desirable.
I made some modifications to the existing merge request[4], mainly to make it side-channel safe at decryption: https://git.lysator.liu.se/nettle/nettle/-/merge_requests/60
Thanks for reviving this issue, and looking into side-channel silence.
Could you take a look when you have time?
Thanks, I've had a look, and it looks pretty good to me. Some comments and questions:
* For tests, would it make sense to have some tests that check that encryption with a given message and randomness gives the expected output? Even better if there are authoritative test cases for that.
* Is it useful to have oaep_decode_mgf1 and oaep_encode_mgf1 advertised as public functions, or would it be better to make them internal?
* Do you see any reasonable (i.e., with a net gain in maintainability) way to share more code between _oaep_sec_decrypt_variable and _pkcs1_sec_decrypt_variable?
Hi Niels, I did review this part, and to me it seems more maintainable to keep them separate; they are already tricky as it is, and adding more variability would just make them more complex and difficult to reason about.
HTH, Simo.
* For oaep_decode_mgf1, oaep_encode_mgf1, maybe one could let the caller allocate and pass in the appropriate hashing context? Would be easy to do, e.g., in rsa_oaep_sha512_decrypt. But it looks like that would be inconsistent with pss_mgf1 (which looks like it needs a separate hashing context).
* I think it was a design mistake to represent RSA ciphertexts as mpz_t rather than octet strings in Nettle's original RSA interfaces. I wonder if it would make sense to let the new functions take octet strings instead?
Regards, /Niels
--
Niels Möller. PGP key CB4962D070D77D7FCB8BA36271D8F1FF368C6677. Internet email is subject to wholesale government surveillance.
_______________________________________________
nettle-bugs mailing list -- nettle-bugs@lists.lysator.liu.se
To unsubscribe send an email to nettle-bugs-leave@lists.lysator.liu.se
Hello Niels, Simo,
Thanks for the comments. I've updated the MR, addressing part of them (see inline for details).
Simo Sorce simo@redhat.com writes:
On Mon, 2024-01-15 at 21:43 +0100, Niels Möller wrote:
Thanks, I've had a look, and it looks pretty good to me. Some comments and questions:
- For tests, would it make sense to have some tests that check that encryption with a given message and randomness gives the expected output? Even better if there are authoritative test cases for that.
I would be happy to add some if there are any, even not so authoritative ones, though I wasn't able to find any with a compatible license, in particular with SHA-2 used as the underlying hash algorithm for MGF1.
- Project Wycheproof (Apache 2.0): https://github.com/google/wycheproof/blob/master/testvectors/rsa_oaep_2048_s...
- Python Cryptography (Apache 2.0 and BSD): https://cryptography.io/en/latest/development/custom-vectors/rsa-oaep-sha2/
In any case, I'll try to check against those vectors manually outside the Nettle repository to ensure correctness.
- Is it useful to have oaep_decode_mgf1 and oaep_encode_mgf1 advertised as public functions, or would it be better to make them internal?
Made them internal functions.
- Do you see any reasonable (i.e., with a net gain in maintainability) way to share more code between _oaep_sec_decrypt_variable and _pkcs1_sec_decrypt_variable?
I did review this part, and to me it seems more maintainable to keep them separate; they are already tricky as it is, and adding more variability would just make them more complex and difficult to reason about.
I agree with that, considering potential optimization opportunities by the compiler.
- For oaep_decode_mgf1, oaep_encode_mgf1, maybe one could let the caller allocate and pass in the appropriate hashing context? Would be easy to do, e.g., in rsa_oaep_sha512_decrypt. But it looks like that would be inconsistent with pss_mgf1 (which looks like it needs a separate hashing context).
Done.
- I think it was a design mistake to represent RSA ciphertexts as mpz_t rather than octet strings in Nettle's original RSA interfaces. I wonder if it would make sense to let the new functions take octet strings instead?
Done.
Regards,
Daiki Ueno ueno@gnu.org writes:
- For tests, would it make sense to have some tests that check that encryption with a given message and randomness gives the expected output? Even better if there are authoritative test cases for that.
I would be happy to add some if there are any, even not so authoritative ones, though I wasn't able to find any with a compatible license, in particular with SHA-2 used as the underlying hash algorithm for MGF1.
Project Wycheproof (Apache 2.0): https://github.com/google/wycheproof/blob/master/testvectors/rsa_oaep_2048_s...
Python Cryptography (Apache 2.0 and BSD): https://cryptography.io/en/latest/development/custom-vectors/rsa-oaep-sha2/
In any case, I'll try to check against those vectors manually outside the Nettle repository to ensure correctness.
To me it looks like those sources provide reasonable test vectors for RSA OAEP decryption.
On licensing, it looks like Apache and GPLv2 might be incompatible. I've been a bit sloppy when incorporating test code (e.g., for some time I had some test code copied from openssl/libcrypto, to test compatibility glue). But in this case, I think a fully correct workaround would be to license the related test file LGPLv3 (no GPLv2 option); odd licensing for some of the test files shouldn't matter much for Nettle applications, since the test code isn't part of the library that applications link. Proper attribution is of course important.
But my original question was about testing of RSA *en*cryption: whether there are some deterministic test vectors with known output, with the tests wiring something non-random into the randomness input.
- Is it useful to have oaep_decode_mgf1 and oaep_encode_mgf1 advertised as public functions, or would it be better to make them internal?
Made them internal functions.
Thanks. (It was maybe a mistake that we didn't do that for the pss_*_mgf1 functions when they were added years ago.)
- Do you see any reasonable (i.e., with a net gain in maintainability) way to share more code between _oaep_sec_decrypt_variable and _pkcs1_sec_decrypt_variable?
I did review this part, and to me it seems more maintainable to keep them separate; they are already tricky as it is, and adding more variability would just make them more complex and difficult to reason about.
I agree with that, considering potential optimization opportunities by the compiler.
Let's leave this as is, then.
- For oaep_decode_mgf1, oaep_encode_mgf1, maybe one could let the caller allocate and pass in the appropriate hashing context? Would be easy to do, e.g., in rsa_oaep_sha512_decrypt. But it looks like that would be inconsistent with pss_mgf1 (which looks like it needs a separate hashing context).
Done.
Nice. We'll still have another one allocated for each call to pss_mgf1, if I read the code correctly.
- I think it was a design mistake to represent RSA ciphertexts as mpz_t rather than octet strings in Nettle's original RSA interfaces. I wonder if it would make sense to let the new functions take octet strings instead?
Done.
What does the OAEP spec say about the ciphertext length? It would make the interface easier if we say that the ciphertext length *always* equals key->size; then one could drop the passing and checking of the ciphertext_length argument. In the current MR, it looks like leading zero bytes are trimmed (behavior of nettle_mpz_sizeinbase_256_u), so the ciphertext may sometimes be shorter.
Regards, /Niels
Hello,
Niels Möller nisse@lysator.liu.se writes:
Daiki Ueno ueno@gnu.org writes:
- For tests, would it make sense to have some tests that check that encryption with a given message and randomness gives the expected output? Even better if there are authoritative test cases for that.
I would be happy to add some if there are any, even not so authoritative ones, though I wasn't able to find any with a compatible license, in particular with SHA-2 used as the underlying hash algorithm for MGF1.
Project Wycheproof (Apache 2.0): https://github.com/google/wycheproof/blob/master/testvectors/rsa_oaep_2048_s...
Python Cryptography (Apache 2.0 and BSD): https://cryptography.io/en/latest/development/custom-vectors/rsa-oaep-sha2/
In any case, I'll try to check against those vectors manually outside the Nettle repository to ensure correctness.
To me it looks like those sources provide reasonable test vectors for RSA OAEP decryption.
On licensing, it looks like Apache and GPLv2 might be incompatible. I've been a bit sloppy when incorporating test code (e.g., for some time I had some test code copied from openssl/libcrypto, to test compatibility glue). But in this case, I think a fully correct workaround would be to license the related test file LGPLv3 (no GPLv2 option); odd licensing for some of the test files shouldn't matter much for Nettle applications, since the test code isn't part of the library that applications link. Proper attribution is of course important.
But my original question was about testing of RSA *en*cryption: whether there are some deterministic test vectors with known output, with the tests wiring something non-random into the randomness input.
I did a bit of research and realized that, when we added OAEP in libgcrypt, we apparently picked the test vector from:
ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-1/pkcs-1v2-1d2-vec.zip
https://lists.gnupg.org/pipermail/gcrypt-devel/2011-June/001802.html
The zip file is no longer accessible, but I still keep a copy and it seems identical to the one at: https://github.com/pyca/cryptography/tree/main/vectors/cryptography_vectors/...
Is it OK to use the vector assuming it is public domain?
In any case I incorporated one test from the vector.
What does the OAEP spec say about the ciphertext length? It would make the interface easier if we say that the ciphertext length *always* equals key->size; then one could drop the passing and checking of the ciphertext_length argument. In the current MR, it looks like leading zero bytes are trimmed (behavior of nettle_mpz_sizeinbase_256_u), so the ciphertext may sometimes be shorter.
Yes, the length should match key->size; I've omitted the ciphertext_length argument. I'm not sure about the leading zeros though; as far as I read, nettle_mpz_to_octets seems to keep them.
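The fixed-length question can be illustrated by putting the two conversion styles side by side. This is a Python sketch with names of my own; it parallels (but does not reproduce) what nettle_mpz_to_octets and nettle_mpz_sizeinbase_256_u-based sizing do:

```python
def i2osp(x: int, length: int) -> bytes:
    # fixed-width integer-to-octet-string conversion: always `length`
    # octets, leading zero octets preserved (the behavior wanted when
    # the ciphertext must always be key->size octets)
    return x.to_bytes(length, "big")

def minimal_octets(x: int) -> bytes:
    # minimal-width conversion: the length depends on the integer's
    # magnitude, so leading zero octets of the ciphertext are lost
    return x.to_bytes(max(1, (x.bit_length() + 7) // 8), "big")

k = 256                 # octet size of a 2048-bit modulus
c = 1 << 2000           # example value whose top octets are zero
assert len(i2osp(c, k)) == k
assert len(minimal_octets(c)) < k
assert int.from_bytes(i2osp(c, k), "big") == c
```

With the fixed-width form, dropping the ciphertext_length argument is safe because the length is always key->size by construction.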
Regards,
Daiki Ueno ueno@gnu.org writes:
The zip file is no longer accessible, but I still keep a copy and it seems identical to the one at: https://github.com/pyca/cryptography/tree/main/vectors/cryptography_vectors/...
Is it OK to use the vector assuming it is public domain?
According to the closest LICENSE file, https://github.com/pyca/cryptography/blob/main/vectors/LICENSE, it's dual-licensed Apache/BSD (our choice), so I think that is fine. And if we copy just the test vectors and not any surrounding code, it seems questionable whether that is even copyrightable.
So I think copying from there, with proper attribution, is perfectly fine. Formally, we'll be exercising the BSD option.
Yes, the length should match key->size; I've omitted the ciphertext_length argument.
Thanks. Please remove it everywhere; it looks like it's still present in some form in the test code. (You may still want to allocate an extra byte at the end and check that it isn't modified. Alternatively, rely on valgrind for detecting overwrites instead.)
I'm not sure about the leading zeros though; as far as I read, nettle_mpz_to_octets seems to keep them.
I think nettle_mpz_to_octets is fine. The problem was when the length passed to this function was computed using nettle_mpz_sizeinbase_256_u, like it was in a previous revision.
Regards, /Niels
Niels Möller nisse@lysator.liu.se writes:
Daiki Ueno ueno@gnu.org writes:
The zip file is no longer accessible, but I still keep a copy and it seems identical to the one at: https://github.com/pyca/cryptography/tree/main/vectors/cryptography_vectors/...
Is it OK to use the vector assuming it is public domain?
According to the closest LICENSE file, https://github.com/pyca/cryptography/blob/main/vectors/LICENSE, it's dual-licensed Apache/BSD (our choice), so I think that is fine. And if we copy just the test vectors and not any surrounding code, it seems questionable whether that is even copyrightable.
So I think copying from there, with proper attribution, is perfectly fine. Formally, we'll be exercising the BSD option.
OK, thanks for the confirmation. I've expanded the KAT tests further using the vector and also added a license notice.
Yes, the length should match key->size; I've omitted the ciphertext_length argument.
Thanks. Please remove it everywhere; it looks like it's still present in some form in the test code. (You may still want to allocate an extra byte at the end and check that it isn't modified. Alternatively, rely on valgrind for detecting overwrites instead.)
Added `mark_bytes_undefined (1, &ciphertext[key->size]);` to the test cases doing encryption.
Regards,
Daiki Ueno ueno@gnu.org writes:
Added `mark_bytes_undefined (1, &ciphertext[key->size]);` to the test cases doing encryption.
I'm afraid that isn't right. For one, mark_bytes_undefined is conditioned so it only has any effect when running the sc tests. Second, it will not produce any warnings for writes, which I think is what we'd like to detect here. I think the options are:
1. Just don't allocate any extra byte; valgrind should alert on out-of-bounds writes without anything special.
2. Allocate an extra byte, write some random value before the call, and check that the value is unchanged after the call (some other tests do that sort of thing; it's simple, old-fashioned, and doesn't depend on valgrind).
3. Allocate an extra byte, and mark it using VALGRIND_MAKE_MEM_NOACCESS (wrapped in some macro depending on the memcheck.h configure check). I don't think that gives any real benefit over valgrind's default behavior with (1), but it might make sense in combination with (2).
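Option (2) is the classic canary-byte pattern. A minimal illustration in Python, with hypothetical writer functions standing in for the encryption call under test:

```python
def canary_check(writer, out_len: int) -> bool:
    # allocate one extra byte past the output area, set it to a known
    # sentinel, and verify the function under test left it untouched
    buf = bytearray(out_len + 1)
    buf[out_len] = 0x5A
    writer(buf, out_len)
    return buf[out_len] == 0x5A

# hypothetical stand-ins for the call under test:
def good_writer(buf, n):
    buf[:n] = b"\xaa" * n            # writes exactly n bytes

def buggy_writer(buf, n):
    buf[:n + 1] = b"\xaa" * (n + 1)  # writes one byte too many

assert canary_check(good_writer, 8)
assert not canary_check(buggy_writer, 8)
```

Unlike mark_bytes_undefined, this detects writes (not reads) and works without valgrind.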
Regards, /Niels
Niels Möller nisse@lysator.liu.se writes:
Daiki Ueno ueno@gnu.org writes:
Added `mark_bytes_undefined (1, &ciphertext[key->size]);` to the test cases doing encryption.
I'm afraid that isn't right. For one, mark_bytes_undefined is conditioned so it only has any effect when running the sc tests. Second, it will not produce any warnings for writes, which I think is what we'd like to detect here. I think the options are:
1. Just don't allocate any extra byte; valgrind should alert on out-of-bounds writes without anything special.
2. Allocate an extra byte, write some random value before the call, and check that the value is unchanged after the call (some other tests do that sort of thing; it's simple, old-fashioned, and doesn't depend on valgrind).
3. Allocate an extra byte, and mark it using VALGRIND_MAKE_MEM_NOACCESS (wrapped in some macro depending on the memcheck.h configure check). I don't think that gives any real benefit over valgrind's default behavior with (1), but it might make sense in combination with (2).
Sorry for the confusion, and thank you for the explanation; now I get it. I pushed a change along the lines of option (2). Could you take a look again?
Regards,
Daiki Ueno ueno@gnu.org writes:
Sorry for the confusion, and thank you for the explanation; now I get it. I pushed a change along the lines of option (2). Could you take a look again?
Thanks. This is an unusually busy week for me, so I'm afraid I'll not be able to look at this (or any of the other pending changes recently posted to the list) until next week.
Regards, /Niels
Daiki Ueno ueno@gnu.org writes:
Sorry for the confusion, and thank you for the explanation; now I get it. I pushed a change along the lines of option (2). Could you take a look again?
Thanks, looks good! Two nits below; also let me know at which point you'd like to get it merged and do further improvements as follow-up MRs.
Since the oaep.h header now only declares internal functions, it shouldn't be installed (moved from HEADERS to DISTFILES in Makefile.in).
And it would be nice if the manual could give some more detail about the label: as I understand it, the label is optional, so it's fine to pass 0, NULL if it's not needed. And if used, the same label must be used for both encrypt and decrypt.
Regards, /Niels
Niels Möller nisse@lysator.liu.se writes:
Daiki Ueno ueno@gnu.org writes:
Sorry for the confusion, and thank you for the explanation; now I get it. I pushed a change along the lines of option (2). Could you take a look again?
Thanks, looks good! Two nits below; also let me know at which point you'd like to get it merged and do further improvements as follow-up MRs.
Thank you; I have addressed those issues. As for the merging, I think it is ready now. I have also created a draft MR in GnuTLS to use the algorithm in PKCS#8, etc., so we can test the implementation in a broader context: https://gitlab.com/gnutls/gnutls/-/merge_requests/1805
Since the oaep.h header now only declares internal functions, it shouldn't be installed (moved from HEADERS to DISTFILES in Makefile.in).
And it would be nice if the manual could give some more detail about the label: as I understand it, the label is optional, so it's fine to pass 0, NULL if it's not needed. And if used, the same label must be used for both encrypt and decrypt.
Regards,
Daiki Ueno ueno@gnu.org writes:
Thank you; I have addressed those issues. As for the merging, I think it is ready now.
Thanks, merged.
Thanks to the doc update, I now noticed the possibility of failure from the encryption functions. Failure is propagated from _oaep_encode_mgf1, which does
assert (key_size >= 2 * hash->digest_size - 2);
if (message_length > key_size - 2 * hash->digest_size - 2) return 0;
Why is the first an assert (it could be triggered by using an unusually small RSA key with a large hash function, say rsa_oaep_sha512_encrypt with an old 512-bit RSA key, which from the docs isn't an obviously invalid usage), and the other a return value to indicate failure?
One alternative could be to instead check
if (message_length > key_size || message_length + 2 + 2*hash->digest_size > key_size) return 0;
(with two tests, to not trigger overflow in case message_length is close to the maximum size_t value; maybe that is more defensive than necessary, since a message_length large enough to trigger overflow can't be the size of a properly allocated memory area).
The opposite alternative would be to have a documented way for the application to get the maximum message size, and have an assertion failure for both cases. That would have the advantage that no return value is needed, simplifying the API (at least very locally).
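The combined check and the resulting maximum message size can be sketched as follows (Python; the helper names are mine, and Python integers don't actually wrap, so the two-test structure only matters in the C version):

```python
def oaep_max_message(key_size: int, digest_size: int) -> int:
    # largest message EME-OAEP can fit: k - 2*hLen - 2 (RFC 8017)
    return key_size - 2 * digest_size - 2

def message_fits(message_length: int, key_size: int, digest_size: int) -> bool:
    # the first comparison makes the second addition safe against
    # size_t wraparound in the C formulation
    if message_length > key_size:
        return False
    return message_length + 2 + 2 * digest_size <= key_size

# 2048-bit key (256 octets) with SHA-256: 256 - 64 - 2 = 190 octets
assert oaep_max_message(256, 32) == 190
```

With a small key and a large hash, e.g. a 512-bit key (64 octets) and SHA-512 (64 octets), the bound goes negative, so every message is rejected rather than tripping an assert.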
Another doc detail: The docs for the decrypt functions don't say explicitly that *length is both an input and output argument. The text for the older function rsa_decrypt could be reused (or possibly improved).
Regards, /Niels
Hello Niels,
Thank you for merging it and the suggestions.
Niels Möller nisse@lysator.liu.se writes:
Thanks to the doc update, I now noticed the possibility of failure from the encryption functions. Failure is propagated from _oaep_encode_mgf1, which does
assert (key_size >= 2 * hash->digest_size - 2);
if (message_length > key_size - 2 * hash->digest_size - 2) return 0;
Why is the first an assert (it could be triggered by using an unusually small RSA key with a large hash function, say rsa_oaep_sha512_encrypt with an old 512-bit RSA key, which from the docs isn't an obviously invalid usage), and the other a return value to indicate failure?
I think I split these conditions that way because the first condition is more about the algorithm setup, i.e., the user should choose the RSA key size and hash digest size appropriately before using any of the rsa_oaep_*_encrypt functions; violating that is treated as a programming error.
That said, I agree that it would be more user friendly to combine them and treat it as a regular error, as we do in pss_encode_mgf1.
One alternative could be to instead check
if (message_length > key_size || message_length + 2 + 2*hash->digest_size > key_size) return 0;
(with two tests, to not trigger overflow in case message_length is close to the maximum size_t value; maybe that is more defensive than necessary, since a message_length large enough to trigger overflow can't be the size of a properly allocated memory area).
I made this change in the attached patch.
The opposite alternative would be to have a documented way for the application to get the maximum message size, and have an assertion failure for both cases. That would have the advantage that no return value is needed, simplifying the API (at least very locally).
It is tempting, though for now I just documented the size restriction.
Another doc detail: The docs for the decrypt functions don't say explicitly that *length is both an input and output argument. The text for the older function rsa_decrypt could be reused (or possibly improved).
Updated the documentation.
Regards,
Daiki Ueno ueno@gnu.org writes:
That said, I agree that it would be more user friendly to combine them and treat it as a regular error, as we do in pss_encode_mgf1.
Thanks for the update, patch merged.
I noticed that there are two failures in the ci builds. See https://gitlab.com/gnutls/nettle/-/pipelines/1178451395.
One failure is the new side-channel test failing with mini-gmp, which is expected; the test should just be skipped in mini-gmp builds (similar to several other sc tests).
The other is a complaint from ubsan. I guess it's related to the label == NULL case. I don't know what's the proper place for a fix, maybe it's not in the new code. I think the Nettle APIs should generally allow size == 0, ptr == NULL more or less everywhere, even where libc functions we use formally require ptr != NULL.
Can you have a look?
Regards, /Niels
Niels Möller nisse@lysator.liu.se writes:
I noticed that there are two failures in the ci builds. See https://gitlab.com/gnutls/nettle/-/pipelines/1178451395.
One failure is the new side-channel test failing with mini-gmp, which is expected; the test should just be skipped in mini-gmp builds (similar to several other sc tests).
Yes, I'm attaching the patch for this.
The other is a complaint from ubsan. I guess it's related to the label == NULL case. I don't know what's the proper place for a fix, maybe it's not in the new code. I think the Nettle APIs should generally allow size == 0, ptr == NULL more or less everywhere, even where libc functions we use formally require ptr != NULL.
This is similar to this issue: https://gitlab.com/gnutls/gnutls/-/issues/1306 where we passed NULL to sha*_update in the GnuTLS code, though it turned to be a non-issue.
In the RSA-OAEP case, I'm not exactly sure whether we can safely special-case label == NULL, as its hash is part of the plaintext data block. Therefore I'm substituting label = "" at the API entry points.
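Substituting the empty string for a NULL label is well defined because lHash is simply Hash(label), and a zero-length input has a perfectly ordinary digest. A quick Python check of both halves of that claim:

```python
import hashlib

# lHash for an absent label is the digest of the empty string; for
# SHA-256 this is the well-known constant below
empty_lhash = hashlib.sha256(b"").hexdigest()
assert empty_lhash == ("e3b0c44298fc1c149afbf4c8996fb924"
                       "27ae41e4649b934ca495991b7852b855")

# updating with a zero-length buffer is equivalent to not updating at
# all, which is the property the sha*_update 0/NULL discussion hinges on
h = hashlib.sha256()
h.update(b"")
assert h.digest() == hashlib.sha256().digest()
```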
Regards,
Daiki Ueno ueno@gnu.org writes:
Niels Möller nisse@lysator.liu.se writes:
One failure is the new side-channel test failing with mini-gmp, which is expected; the test should just be skipped in mini-gmp builds (similar to several other sc tests).
Yes, I'm attaching the patch for this.
I've committed and pushed that part of patch.
The other is a complaint from ubsan. I guess it's related to the label == NULL case. I don't know what's the proper place for a fix, maybe it's not in the new code. I think the Nettle APIs should generally allow size == 0, ptr == NULL more or less everywhere, even where libc functions we use formally require ptr != NULL.
This is similar to this issue: https://gitlab.com/gnutls/gnutls/-/issues/1306 where we passed NULL to sha*_update in the GnuTLS code, though it turned to be a non-issue.
I don't remember seeing that issue. I think it should be allowed to call sha*_update with 0, NULL (when size is null, there's no reason to ever attempt to dereference that pointer). I'll see if I can fix that.
Regards, /Niels
Niels Möller nisse@lysator.liu.se writes:
This is similar to this issue: https://gitlab.com/gnutls/gnutls/-/issues/1306 where we passed NULL to sha*_update in the GnuTLS code, though it turned to be a non-issue.
I don't remember seeing that issue. I think it should be allowed to call sha*_update with 0, NULL (when size is null, there's no reason to ever attempt to dereference that pointer). I'll see if I can fix that.
The patch below seems to fix this issue, but I'm not entirely sure that's the way I want to do it. I think I'd rather not touch the MD_* macros defined in macros.h, and do improved macros in md-internal.h instead, since, for historic reasons, the macros.h file is public.
To get this thoroughly fixed, one would need tests where every Nettle function that accepts a potentially empty buffer is called with 0, NULL, and make sure ubsan is happy with that.
Regards, /Niels
diff --git a/macros.h b/macros.h
index 990d32ee..e67a403f 100644
--- a/macros.h
+++ b/macros.h
@@ -180,6 +180,8 @@ do { \
    length and data. */
 #define MD_UPDATE(ctx, length, data, f, incr) \
   do { \
+    if (length == 0) \
+      goto __md_done; \
     if ((ctx)->index) \
       { \
 	/* Try to fill partial block */ \
diff --git a/sha256.c b/sha256.c
index 0c9c21a0..907271bc 100644
--- a/sha256.c
+++ b/sha256.c
@@ -105,6 +105,9 @@ sha256_update(struct sha256_ctx *ctx, size_t length, const uint8_t *data)
 {
   size_t blocks;
 
+  if (length == 0)
+    return;
+
   if (ctx->index > 0)
     {
       /* Try to fill partial block */
Niels Möller nisse@lysator.liu.se writes:
The patch below seems to fix this issue, but I'm not entirely sure that's the way I want to do it.
I've pushed a fix, along the same lines, see https://git.lysator.liu.se/nettle/nettle/-/commit/99e62003c3916fdef04a2d3327...
I believe that should fix all hash update functions (and with proper test coverage).
There are probably a few more functions where 0, NULL should be allowed but currently result in ubsan issues: the corresponding AEAD update functions, functions accepting optional nonces, empty messages for the RSA encryption functions, and maybe some of the cipher modes.
Regards, /Niels