Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> writes:
> True. I'll look into adding HMAC functions to nettle-benchmark then. It would be interesting to compare the performance.
That would be great. It's better to measure performance than to speculate about it.
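
For a rough idea of what such a benchmark could measure, here is a minimal standalone timing sketch using Nettle's existing hmac_sha256 interface. The buffer size, iteration count and the use of clock() are arbitrary choices for illustration, not taken from nettle-benchmark itself:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <nettle/hmac.h>
#include <nettle/sha2.h>

#define BENCH_BUF_SIZE 10240
#define BENCH_ITERATIONS 10000

int
main(void)
{
  struct hmac_sha256_ctx ctx;
  static uint8_t buffer[BENCH_BUF_SIZE];
  uint8_t key[SHA256_DIGEST_SIZE];
  uint8_t digest[SHA256_DIGEST_SIZE];
  clock_t start, end;
  unsigned i;

  memset(buffer, 0x5a, sizeof(buffer));
  memset(key, 0x17, sizeof(key));
  hmac_sha256_set_key(&ctx, sizeof(key), key);

  start = clock();
  for (i = 0; i < BENCH_ITERATIONS; i++)
    {
      /* hmac_sha256_digest resets the context to the keyed state,
	 so the same ctx can be reused for the next message.  */
      hmac_sha256_update(&ctx, sizeof(buffer), buffer);
      hmac_sha256_digest(&ctx, sizeof(digest), digest);
    }
  end = clock();

  printf("hmac-sha256: %.2f MB/s\n",
	 ((double) BENCH_BUF_SIZE * BENCH_ITERATIONS / 1048576.0)
	 / ((double) (end - start) / CLOCKS_PER_SEC));
  return 0;
}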
> It might be worth moving both index and block out of the 'state' struct and then updating the compress/MD_* macros to accept separate 'compression state' and 'buffer state' structures. That might result in some code cleanups. I'll give this idea some thought.
That would be conceptually very nice. I suspect there might be some complications from the count field (the counter of compressed blocks), which most hash functions have but which, e.g., sha3 doesn't. On the other hand, hmac is designed to be used only with MD-style hash functions, so I'm not sure hmac-sha3 is of any use.
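
To make the idea a bit more concrete, a split for sha256 might look roughly like this (all type and field names below are made up for illustration and need not match the existing Nettle definitions):

#include <stdint.h>

/* Pure compression state: chaining values plus the block counter
   that the final length padding needs.  */
struct sha256_compress_state
{
  uint32_t h[8];
  uint64_t count;	/* blocks processed so far */
};

/* Buffering of a partial input block, shared by all MD-style hashes
   with a 64-byte block.  */
struct md64_buffer_state
{
  uint8_t block[64];
  unsigned index;	/* bytes currently buffered */
};

/* The public context would simply combine the two pieces, so the
   public sha256_* interface need not change.  */
struct sha256_ctx
{
  struct sha256_compress_state compress;
  struct md64_buffer_state buffer;
};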
> What about having the following functions:
>
>   _FOO_init(state);                   FOO_init(ctx);
>   _FOO_compress(state, block[]);      FOO_update(ctx, length, data);
>   _FOO_digest(state, buffer_state);   FOO_digest(ctx);
>
> Users will call the usual FOO_* functions, while the HMAC code can call the internal _FOO_* functions.
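
As a rough sketch of how hmac key setup could use such internal functions (the _sha256_* names below refer to the proposed interface, not to anything Nettle currently exports, and the key is assumed to be already hashed or padded to exactly one block):

#include <stdint.h>

#define SHA256_BLOCK_SIZE 64

/* The proposed 'compression state', fields invented for illustration.  */
struct sha256_state
{
  uint32_t h[8];
  uint64_t count;
};

/* Proposed internal interface, implemented elsewhere.  */
void _sha256_init(struct sha256_state *state);
void _sha256_compress(struct sha256_state *state, const uint8_t *block);

struct hmac_sha256_key
{
  struct sha256_state inner;	/* state after absorbing key ^ ipad */
  struct sha256_state outer;	/* state after absorbing key ^ opad */
};

static void
hmac_sha256_key_setup(struct hmac_sha256_key *key_ctx,
		      const uint8_t key[SHA256_BLOCK_SIZE])
{
  uint8_t pad[SHA256_BLOCK_SIZE];
  unsigned i;

  /* Inner state: compress one block of key XOR ipad.  */
  for (i = 0; i < SHA256_BLOCK_SIZE; i++)
    pad[i] = key[i] ^ 0x36;
  _sha256_init(&key_ctx->inner);
  _sha256_compress(&key_ctx->inner, pad);

  /* Outer state: compress one block of key XOR opad.  */
  for (i = 0; i < SHA256_BLOCK_SIZE; i++)
    pad[i] = key[i] ^ 0x5c;
  _sha256_init(&key_ctx->outer);
  _sha256_compress(&key_ctx->outer, pad);
}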
What would _FOO_digest be used for? Also note that all functions needed by hmac would need to be exposed in struct nettle_hash.
We already have a couple of FOO_compress functions, mainly because those functions are candidates for assembly implementation.
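
To illustrate the meta-interface point: struct nettle_hash is today roughly the following (paraphrased from nettle-meta.h, with the function pointer typedefs spelled out), and a generic hmac built on the meta interface would additionally need something like the second, purely hypothetical structure:

#include <stdint.h>

/* Roughly the shape of the current meta structure.  */
struct nettle_hash
{
  const char *name;
  unsigned context_size;	/* size of the context struct */
  unsigned digest_size;
  unsigned block_size;		/* internal block size */

  void (*init)(void *ctx);
  void (*update)(void *ctx, unsigned length, const uint8_t *data);
  void (*digest)(void *ctx, unsigned length, uint8_t *dst);
};

/* Hypothetical companion exposing the internal entry points an hmac
   implementation would need; nothing like this exists today.  */
struct nettle_hash_internal
{
  unsigned state_size;		/* size of the compression state */
  void (*init_state)(void *state);
  void (*compress)(void *state, const uint8_t *block);
};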
Regards,
/Niels