Hello,
Tue, 3 Sep 2019 at 20:05, Niels Möller <nisse@lysator.liu.se>:
dbaryshkov@gmail.com writes:
From: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com>
Add common implementations for functions doing XOR over nettle_block16/nettle_block8.
I've merged the first two patches. Thanks! Do you know if anyone is using GCM_TABLE_BITS 4? I've tested that it still works, both before and after your change, but I don't test it regularly.
I don't know. As the size difference between GCM_TABLE_BITS 4 and 8 is not that big, maybe we can drop the 4-bit variant altogether. I can send a patch ;-)
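For a rough idea of the numbers (assuming the key table is still declared as in gcm.h, i.e. union nettle_block16 h[1 << GCM_TABLE_BITS]), the per-key table is 4096 bytes with GCM_TABLE_BITS 8 versus 256 bytes with 4. A quick, purely illustrative check:

  /* Hypothetical size check, not part of the patch; assumes the
     struct gcm_key layout from gcm.h. */
  #include <stdio.h>
  #include <nettle/gcm.h>

  int
  main (void)
  {
    /* 16 bytes per nettle_block16 entry:
       GCM_TABLE_BITS 8 -> 256 entries = 4096 bytes,
       GCM_TABLE_BITS 4 ->  16 entries =  256 bytes. */
    printf ("sizeof (struct gcm_key) = %zu\n", sizeof (struct gcm_key));
    return 0;
  }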
+static inline void
+block16_xor_bytes (union nettle_block16 *r,
+                   const union nettle_block16 *x,
+                   const uint8_t *bytes)
+{
+  memxor3 (r->b, x->b, bytes, 16);
+}
[...]
+static inline void
+block8_xor_bytes (union nettle_block8 *r,
+                  const union nettle_block8 *x,
+                  const uint8_t *bytes)
+{
+  memxor3 (r->b, x->b, bytes, 8);
+}
Not sure these two wrappers are that helpful. Do you have a good reason to add them?
They fit the cmac128/cmac64/siv-cmac code, where they simplify things a bit: with them you can just say Block1 = Block2 ^ bytestring, rather than XORing the Block.b fields by hand.
If you'd like, I can drop them, but from my point of view they look like good encapsulation.
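To illustrate with a hypothetical fragment (the names are made up, not taken verbatim from cmac128.c):

  /* Without the wrapper: XOR the .b fields explicitly. */
  memxor3 (block.b, L.b, last, 16);

  /* With the wrapper: say "block = L ^ last" at the block level. */
  block16_xor_bytes (&block, &L, last);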
The rest of the patch looks like a nice consolidation.
--- a/gcm.c
+++ b/gcm.c
@@ -53,16 +53,10 @@
 #include "nettle-internal.h"
 #include "macros.h"
 #include "ctr-internal.h"
+#include "block-internal.h"

 #define GHASH_POLYNOMIAL 0xE1UL

-static void
-gcm_gf_add (union nettle_block16 *r,
-            const union nettle_block16 *x, const union nettle_block16 *y)
-{
-  r->u64[0] = x->u64[0] ^ y->u64[0];
-  r->u64[1] = x->u64[1] ^ y->u64[1];
-}
 /* Multiplication by 010...0; a big-endian shift right. If the bit
    shifted out is one, the defining polynomial is added to cancel it
    out. r == x is allowed. */
@@ -108,7 +102,7 @@ gcm_gf_mul (union nettle_block16 *x, const union nettle_block16 *y)
       for (j = 0; j < 8; j++, b <<= 1)
         {
           if (b & 0x80)
-            gcm_gf_add(&Z, &Z, &V);
+            block16_xor3(&Z, &Z, &V);
This and a few other calls below could be block16_xor rather than block16_xor3.
Will fix in the next iteration.
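For reference, a minimal sketch of the two helpers as proposed for block-internal.h (the exact bodies in the patch may differ slightly):

  static inline void
  block16_xor (union nettle_block16 *r,
               const union nettle_block16 *x)
  {
    /* In-place form: r ^= x, which is all that is needed when the
       destination is also one of the operands. */
    r->u64[0] ^= x->u64[0];
    r->u64[1] ^= x->u64[1];
  }

  static inline void
  block16_xor3 (union nettle_block16 *r,
                const union nettle_block16 *x,
                const union nettle_block16 *y)
  {
    /* Three-operand form: r = x ^ y. */
    r->u64[0] = x->u64[0] ^ y->u64[0];
    r->u64[1] = x->u64[1] ^ y->u64[1];
  }

So in gcm_gf_mul the call block16_xor3 (&Z, &Z, &V) can simply become block16_xor (&Z, &V).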