Hi,

The attached patch set introduces new APIs to access the TLS 1.0 and 1.2 PRFs as well as the HKDF (rfc5869) function used in TLS 1.3.
There are a few other updates to .gitlab-ci.yml (to use more recent asan and ubsan versions), and a comment in .bootstrap about testsuite/.test-rules.make, which I always seem to forget about and then spend an hour figuring out what is going on. Ideally .test-rules.make should depend on Makefile.in so that one wouldn't need to manually run test-rules.
I'd also suggest renaming .bootstrap to bootstrap.sh, as the latter is a common convention in projects (.bootstrap is not even listed by ls).
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
The attached patch set introduces new APIs to access the TLS 1.0 and 1.2 PRFs as well as the HKDF (rfc5869) function used in TLS 1.3.
Thanks, a few initial comments.
Are the TLS-1.0 and TLS-1.2 PRFs in any use outside of TLS?
There are a few other updates to .gitlab-ci.yml (to use more recent asan and ubsan versions),
Applied.
and a comment in .bootstrap about testsuite/.test-rules.make, which I always seem to forget about and then spend an hour figuring out what is going on. Ideally .test-rules.make should depend on Makefile.in.
Have you tested if it works to just add that dependency?
An alternative might be to drop support for non-GNU make programs, then one could probably write these as %-pattern rules directly in Makefile.in.
I'd also suggest renaming .bootstrap to bootstrap.sh, as the latter is a common convention in projects (.bootstrap is not even listed by ls).
Makes sense. There are references to the name .bootstrap in README and in .gitlab-ci.yml; any other place that would need updating?
void
hkdf_extract(void *mac_ctx,
             nettle_hmac_set_key_func *set_key,
             nettle_hash_update_func *update,
             nettle_hash_digest_func *digest,
             size_t digest_size,
             size_t salt_size, const uint8_t *salt,
             size_t secret_size, const uint8_t *secret,
             uint8_t *dst)
{
  set_key(mac_ctx, salt_size, salt);
  update(mac_ctx, secret_size, secret);
  digest(mac_ctx, digest_size, dst);
}
This looks like a plain application of a mac, digest = MAC(salt, secret), is this really useful?
Not sure about the typedef nettle_hmac_set_key_func, there's nothing hmac specific, besides key size being variable? And it's identical to nettle_hash_update_func, right?
Also, since it seems the key is fixed, both in this function and below, I think it would be better to leave setting the mac key to the caller, and pass in a mac ctx with the key already set. That would also be more consistent with the pbkdf2 code, and reduce the number of arguments.
Note that the Nettle mac convention is that the sequence
set_key update digest update digest ... update digest
works fine.
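For example, with hmac_sha256 that means the caller can key the context once and then compute several MACs with it; a minimal sketch (not from the patch) of what I mean:

#include <nettle/hmac.h>

/* Compute two MACs with a single keyed context; each digest call
   resets the context to the keyed state, so no re-keying is needed. */
static void
two_macs(size_t key_size, const uint8_t *key,
         size_t len1, const uint8_t *msg1,
         size_t len2, const uint8_t *msg2,
         uint8_t *out1, uint8_t *out2)
{
  struct hmac_sha256_ctx ctx;

  hmac_sha256_set_key(&ctx, key_size, key);

  hmac_sha256_update(&ctx, len1, msg1);
  hmac_sha256_digest(&ctx, SHA256_DIGEST_SIZE, out1);

  hmac_sha256_update(&ctx, len2, msg2);
  hmac_sha256_digest(&ctx, SHA256_DIGEST_SIZE, out2);
}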
void
hkdf_expand(void *mac_ctx,
            nettle_hmac_set_key_func *set_key,
            nettle_hash_update_func *update,
            nettle_hash_digest_func *digest,
            size_t digest_size,
            size_t prk_size, const uint8_t *prk,
            size_t info_size, const uint8_t *info,
            size_t length, uint8_t *dst)
{
  uint8_t *Ttmp;
  ssize_t left;
  uint8_t i = 1;
  unsigned started = 0;

  left = length;
  while (left > 0) {
    /* T(i) */
    set_key(mac_ctx, prk_size, prk);
    if (started != 0) {
      update(mac_ctx, digest_size, Ttmp);
    } else {
      started = 1;
    }
    if (info_size)
      update(mac_ctx, info_size, info);
    update(mac_ctx, 1, &i);
    if (left < digest_size)
      digest_size = left;
    digest(mac_ctx, digest_size, dst);
    Ttmp = dst;
    left -= digest_size;
    dst += digest_size;
    i++;
  }
}
I think this loop would be clearer if Ttmp was replaced by (dst - digest_size), and maybe it would make sense to take out the first and/or final iterations.
Regards, /Niels
----- Original Message -----
Nikos Mavrogiannopoulos nmav@redhat.com writes:
The attached patch set introduces new APIs to access the TLS 1.0 and 1.2 PRFs as well as the HKDF (rfc5869) function used in TLS 1.3.
Thanks, a few initial comments.
Are the TLS-1.0 and TLS-1.2 PRFs in any use outside of TLS?
I do not believe so.
and a comment in .bootstrap for the testsuite/.test-rules.make which I always seem to forget and spend an hour figuring out what is going on. Ideally .test-rules.make should depend on Makefile.in.
Have you tested if it works to just add that dependency?
No, I thought of it while writing the email. Will check.
An alternative might be to drop support for non-GNU make programs, then one could probably write these as %-pattern rules directly in Makefile.in.
That could also work, as it is hard these days to find a system without GNU make, though I may be biased on that.
I'd also suggest renaming .bootstrap to bootstrap.sh, as the latter is a common convention in projects (.bootstrap is not even listed by ls).
Makes sense. There are references to the name .bootstrap in README and in .gitlab-ci.yml; any other place that would need updating?
I could not think of any.
void
hkdf_extract(void *mac_ctx,
             nettle_hmac_set_key_func *set_key,
             nettle_hash_update_func *update,
             nettle_hash_digest_func *digest,
             size_t digest_size,
             size_t salt_size, const uint8_t *salt,
             size_t secret_size, const uint8_t *secret,
             uint8_t *dst)
{
  set_key(mac_ctx, salt_size, salt);
  update(mac_ctx, secret_size, secret);
  digest(mac_ctx, digest_size, dst);
}
This looks like a plain application of a mac, digest = MAC(salt, secret), is this really useful?
Right. It is only useful in the sense that, when the protocol one is implementing requires hkdf-extract, one can simply call that function from nettle. We could add documentation there instead, or even a macro to avoid a function call.
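Such a macro could be as simple as this sketch (a hypothetical shape mirroring the three calls quoted above; not part of the posted patches):

/* Hypothetical convenience macro: hkdf-extract is one MAC computation
   with the salt as the key and the secret as the message. */
#define HKDF_EXTRACT(ctx, set_key, update, digest, digest_size,  \
                     salt_size, salt, secret_size, secret, dst)  \
  do {                                                           \
    (set_key)((ctx), (salt_size), (salt));                       \
    (update)((ctx), (secret_size), (secret));                    \
    (digest)((ctx), (digest_size), (dst));                       \
  } while (0)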
Not sure about the typedef nettle_hmac_set_key_func, there's nothing hmac specific, besides key size being variable? And it's identical to nettle_hash_update_func, right?
Casting functions can get tricky when the same prototype is used to cast multiple types. I was using something similar in gnutls, and when you switched the types in nettle, some functions remained the same while others changed, and these common casts had hidden the issue for quite some time. I prefer function casts which target the particular family of functions being abstracted, for safety (future safety in this case).
Also, since it seems the key is fixed, both in this function and below, I think it would be better to leave setting the mac key to the caller, and pass in a mac ctx with the key already set. That would also be more consistent with the pbkdf2 code, and reduce the number of arguments.
That's interesting. It makes hkdf_extract() seem even less useful. I'll submit an updated patch and add some documentation.
regards, Nikos
On Tue, 2017-05-16 at 22:47 +0200, Niels Möller wrote:
and a comment in .bootstrap for the testsuite/.test-rules.make which I always seem to forget and spend an hour figuring out what is going on. Ideally .test-rules.make should depend on Makefile.in.
Have you tested if it works to just add that dependency?
It seems to. The first patch attached does just that.
void
hkdf_extract(void *mac_ctx,
             nettle_hmac_set_key_func *set_key,
             nettle_hash_update_func *update,
             nettle_hash_digest_func *digest,
             size_t digest_size,
             size_t salt_size, const uint8_t *salt,
             size_t secret_size, const uint8_t *secret,
             uint8_t *dst)
{
  set_key(mac_ctx, salt_size, salt);
  update(mac_ctx, secret_size, secret);
  digest(mac_ctx, digest_size, dst);
}
This looks like a plain application of a mac, digest = MAC(salt, secret), is this really useful?
I've kept it because it makes the mapping from RFC to hkdf.h simpler. Let me know if you would prefer a macro or inline function.
Not sure about the typedef nettle_hmac_set_key_func, there's nothing hmac specific, besides key size being variable? And it's identical to nettle_hash_update_func, right?
No longer there.
while (left > 0) {
  /* T(i) */
  set_key(mac_ctx, prk_size, prk);
  if (started != 0) {
    update(mac_ctx, digest_size, Ttmp);
  } else {
    started = 1;
  }
  if (info_size)
    update(mac_ctx, info_size, info);
  update(mac_ctx, 1, &i);
  if (left < digest_size)
    digest_size = left;
  digest(mac_ctx, digest_size, dst);
  Ttmp = dst;
  left -= digest_size;
  dst += digest_size;
  i++;
}
I think this loop would be clearer if Ttmp was replaced by (dst - digest_size), and maybe it would make sense to take out the first and/or final iterations.
Patch 0005 unrolls the first iteration and makes that change. I find it longer and not as easy to follow, but I may not have caught what you meant.
The last patch adds documentation for the added functions.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
Subject: [PATCH 1/6] testsuite/Makefile.in: ensure .test-rules.make is regenerated
That is, regenerate when Makefile.in is modified.
Signed-off-by: Nikos Mavrogiannopoulos nmav@redhat.com
Applied, with some additional tweaks.
 testsuite/Makefile.in | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/testsuite/Makefile.in b/testsuite/Makefile.in
index 590691c..8f18f84 100644
--- a/testsuite/Makefile.in
+++ b/testsuite/Makefile.in
@@ -96,7 +96,9 @@ dlopen-test$(EXEEXT): dlopen-test.$(OBJEXT) testutils.$(OBJEXT)
 	$(LINK) dlopen-test.$(OBJEXT) -ldl -o dlopen-test$(EXEEXT)

 .PHONY: test-rules
-test-rules:
+test-rules: .test-rules.make
+
+.test-rules.make: Makefile.in
I changed the target to $(srcdir)/.test-rules.make, to match the include line; I think that's needed for make to notice it can remake the include file. And I changed the dependency to Makefile rather than Makefile.in, which I think is more accurate.
 	(for f in $(TS_NETTLE) $(TS_HOGWEED) $(EXTRA_TARGETS) ; do \
 	  echo $$f'$$(EXEEXT): '$$f'.$$(OBJEXT)' ; \
 	  echo '	$$(LINK) '$$f'.$$(OBJEXT) $$(TEST_OBJS) -o '$$f'$$(EXEEXT)' ; \
@@ -107,6 +109,10 @@ test-rules:
 	  echo '	$$(LINK_CXX) '$$f'.$$(OBJEXT) $$(TEST_OBJS) -o '$$f'$$(EXEEXT)' ; \
 	  echo ; \
 	done) > $(srcdir)/.test-rules.make
+	@echo "******************************************************************"
+	@echo "testsuite Makefile rules have been regenerated; please re-run make"
+	@echo "******************************************************************"
+	false
Not sure why we need to terminate with "false" here. According to https://www.gnu.org/software/make/manual/html_node/Remaking-Makefiles.html, I think make ought to restart automatically when the included file $(srcdir)/.test-rules.make is updated, but in my tests, when I deleted that false, that didn't happen.
include $(srcdir)/.test-rules.make
[3. text/x-patch; 0002-Added-the-TLS-1.0-PRF-and-test-vectors.patch]...
[4. text/x-patch; 0003-Added-the-TLS-1.2-PRF-and-test-vectors.patch]...
Can we postpone tls-1.0 and tls-1.2 for now, and try to get the more generally useful hkdf in?
[5. text/x-patch; 0004-Added-the-HKDF-key-derivation-function-and-test-vect.patch]...
I hope to be able to have a close look soon, but not tonight.
Regards, /Niels
On Sat, 2017-05-20 at 22:09 +0200, Niels Möller wrote:
include $(srcdir)/.test-rules.make
[3. text/x-patch; 0002-Added-the-TLS-1.0-PRF-and-test-vectors.patch]...
[4. text/x-patch; 0003-Added-the-TLS-1.2-PRF-and-test-vectors.patch]...
Can we postpone tls-1.0 and tls-1.2 for now, and try to get the more generally useful hkdf in?
Do you mean for the 3.3 release of nettle? Do you have any concerns with including the TLS PRF? The reason I pushed it along with HKDF is that I see nettle as best suited for the low-level primitives, because of its interface and because it allows for easy unit testing. I can keep these in gnutls, though it would simplify things for gnutls if nettle could have the TLS primitives.
regards, Nikos
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
On Sat, 2017-05-20 at 22:09 +0200, Niels Möller wrote:
Can we postpone tls-1.0 and tls-1.2 for now, and try to get the more generally useful hkdf in?
Do you mean for the 3.3 release of nettle? Do you have any concerns with including the TLS PRF?
Not sure. But as a somewhat related example, I've never really considered adding the ssh2 key expansion function to nettle (my implementation is the kex_make_key function in https://git.lysator.liu.se/lsh/lsh/blob/master/src/keyexchange.c).
So for now, I think we make better progress by focusing on getting HKDF in reasonably soon, and considering what to do with the other two later.
And regarding nettle-3.3, I guess it's time to try to formulate what the release objectives should be.
1. Fix the ABI problem (which unfortunately implies an abi break). Some progress, but I don't think I've published my wip branch.
2. Get HKDF in.
3. Possibly some of the recently posted GOST code, but I'm afraid it will take some time to work through.
4. Get skein256 in (see skein branch), and possibly skein512 too.
5. If we do break the abi for (1), change the base64 and base16 APIs at the same time, to use char for ascii data (input and output) and uint8_t for binary data, which I think should fix the remaining signedness warnings. Need some trick for in-place processing, possibly using a separate function?
It would be nice to make a release within a month or two, but my hacking time is a bit limited, so we'd need to prioritize things.
Regards, /Niels
On Mon, May 22, 2017 at 12:36 AM, Niels Möller nisse@lysator.liu.se wrote:
And regarding nettle-3.3, I guess it's time to try to formulate what the release objectives should be.
- Fix the ABI problem (which unfortunately implies an abi break). Some progress, but I don't think I've published my wip branch.
I'd recommend against breaking the ABI. It would mean that any new additions would take significant time to be propagated into distributions. Is there a way to fix the issue by introducing new safe APIs?
- Get skein256 in (see skein branch), and possibly skein512 too.
While interesting to have, is there any practical use of it?
regards, Nikos
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
- Fix the ABI problem (which unfortunately implies an abi break). Some progress, but I don't think I've published my wip branch.
I'd recommend against breaking the ABI. It would mean that any new additions would take significant time to be propagated into distributions. Is there a way to fix the issue by introducing new safe APIs?
Problem is described at https://trofi.github.io/posts/195-dynamic-linking-ABI-is-hard.html. As I understand it, when I added SHA3 hashes to the nettle_hashes list in the nettle-3.3 release, that was intended to be binary compatible with earlier releases. But it wasn't, for traditional non-PIE executables linking dynamically with nettle, due to the way R_X86_64_COPY relocations work.
As far as I'm aware, the only fix which is going to be binary compatible with executables linked with all of nettle-3.1, -3.2 and -3.3 would be to drop sha3 from nettle_hashes. And that would clearly break compatibility with nettle-3.3 in a different way.
But my plan was to rename the symbol nettle_hashes to _nettle_hashes, add a function get_nettle_hashes to return its address, and define nettle_hashes as a macro expanding to get_nettle_hashes(), which would be a compatible change to the API, but a break of the ABI.
Maybe it's possible to leave the old halfway broken symbols in place in some way. One could define the nettle_hashes symbol as an alias for _nettle_hashes, which would be left around in the .so file but unused when using updated header files.
Advice appreciated.
And this affects all global data where a symbol referring to the address was intended to be part of the ABI, while the size of the data was intended to be an implementation detail. In libnettle.so, the lists declared in nettle_meta.h are of this type, and in libhogweed.so, we have the ecc_curve structs.
- Get skein256 in (see skein branch), and possibly skein512 too.
While interesting to have, is there any practical use of it?
Unclear. May be of more interest if it turns out to be significantly faster than sha3.
Regards, /Niels
On Mon, 2017-05-22 at 13:18 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
- Fix the ABI problem (which unfortunately implies an abi break). Some progress, but I don't think I've published my wip branch.
I'd recommend against breaking the ABI. It would mean that any new additions would take significant time to be propagated into distributions. Is there a way to fix the issue by introducing new safe APIs?
Problem is described at https://trofi.github.io/posts/195-dynamic-linking-ABI-is-hard.html. As I understand it, when I added SHA3 hashes to the nettle_hashes list in the nettle-3.3 release, that was intended to be binary compatible with earlier releases. But it wasn't, for traditional non-PIE executables linking dynamically with nettle, due to the way R_X86_64_COPY relocations work.
As far as I'm aware, the only fix which is going to be binary compatible with executables linked with all of nettle-3.1, -3.2 and -3.3 would be to drop sha3 from nettle_hashes. And that would clearly break compatibility with nettle-3.3 in a different way.
But my plan was to rename the symbol nettle_hashes to _nettle_hashes, add a function get_nettle_hashes to return its address, and define nettle_hashes as a macro expanding to get_nettle_hashes(), which would be a compatible change to the API, but a break of the ABI.
Maybe it's possible to leave the old halfway broken symbols in place in some way. One could define the nettle_hashes symbol as an alias for _nettle_hashes, which would be left around in the .so file but unused when using updated header files.
Advice appreciated.
A backwards compatible approach is to leave the nettle_hashes accessible and provide the function as well. That will not be any more broken than 3.3 is, and would allow 3.3 programs to be linked with 3.4 without recompilation.
You can warn on use of nettle_hashes with the attached patch; most likely you can introduce a custom message as well with the replacement function.
While interesting to have, is there any practical use of it?
Unclear. May be of more interest if it turns out to be significantly faster than sha3.
I think we are moving to a setting with fewer and better studied/optimized options for encryption and hashing, in contrast with the pre-AES era, when we included multiple ciphers in crypto libs in the hope that a few of them would turn out to be good. As such, I don't think a new hash has any use today, as most of the use cases relate to digital signatures or standard protocols, which have no reason to include a skein variant (for the majority of protocols, sha3 isn't there yet either).
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
A backwards compatible approach is to leave the nettle_hashes accessible and provide the function as well. That will not be any more broken than 3.3 is, and would allow 3.3 programs to be linked with 3.4 without recompilation.
Could work.
You can warn on use of nettle_hashes with the attached patch; most likely you can introduce a custom message as well with the replacement function.
I think I'd prefer to use
#define nettle_hashes (get_nettle_hashes())
so that a recompile fixes the issue without any source code changes.
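Roughly along these lines (a sketch only; the accessor name and exact placement are not settled):

/* In nettle-meta.h (sketch): new compilations go through the accessor,
   so the size of the array is no longer baked into the executable via
   a copy relocation. */
extern const struct nettle_hash * const _nettle_hashes[];

const struct nettle_hash * const *
get_nettle_hashes (void);

#define nettle_hashes (get_nettle_hashes ())

/* In the library (sketch): the accessor returns the address of the
   renamed array, whose size stays an implementation detail. */
const struct nettle_hash * const *
get_nettle_hashes (void)
{
  return _nettle_hashes;
}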
Regards, /Niels
Nikos Mavrogiannopoulos nmav@redhat.com writes:
On Tue, 2017-05-16 at 22:47 +0200, Niels Möller wrote:
while (left > 0) {
  /* T(i) */
  set_key(mac_ctx, prk_size, prk);
  if (started != 0) {
    update(mac_ctx, digest_size, Ttmp);
  } else {
    started = 1;
  }
  if (info_size)
    update(mac_ctx, info_size, info);
  update(mac_ctx, 1, &i);
  if (left < digest_size)
    digest_size = left;
  digest(mac_ctx, digest_size, dst);
  Ttmp = dst;
  left -= digest_size;
  dst += digest_size;
  i++;
}
I think this loop would be clearer if Ttmp was replaced by (dst - digest_size), and maybe it would make sense to take out the first and/or final iterations.
Patch 0005 unrolls the first iteration and makes that change. I find it longer and not as easy to follow, but I may not have caught what you meant.
I was thinking of something like
if (!length)
  return;

for (;; dst += digest_size, length -= digest_size, i++) {
  update(mac_ctx, info_size, info); /* info_size == 0 should work fine */
  update(mac_ctx, 1, &i);
  if (length <= digest_size) {
    digest(mac_ctx, length, dst);
    return;
  }
  digest(mac_ctx, digest_size, dst);
  update(mac_ctx, digest_size, dst);
}
Regards, /Niels
On Sat, 2017-05-20 at 22:19 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos nmav@redhat.com writes:
On Tue, 2017-05-16 at 22:47 +0200, Niels Möller wrote:
while (left > 0) {
  /* T(i) */
  set_key(mac_ctx, prk_size, prk);
  if (started != 0) {
    update(mac_ctx, digest_size, Ttmp);
  } else {
    started = 1;
  }
  if (info_size)
    update(mac_ctx, info_size, info);
  update(mac_ctx, 1, &i);
  if (left < digest_size)
    digest_size = left;
  digest(mac_ctx, digest_size, dst);
  Ttmp = dst;
  left -= digest_size;
  dst += digest_size;
  i++;
}
I think this loop would be clearer if Ttmp was replaced by (dst - digest_size), and maybe it would make sense to take out the first and/or final iterations.
Patch 0005 unrolls the first iteration and makes that change. I find it longer and not as easy to follow, but I may not have caught what you meant.
I was thinking of something like
if (!length)
  return;

for (;; dst += digest_size, length -= digest_size, i++) {
  update(mac_ctx, info_size, info); /* info_size == 0 should work fine */
That would make a function call even when info_size is zero, but I guess that shouldn't matter.
  update(mac_ctx, 1, &i);
  if (length <= digest_size) {
    digest(mac_ctx, length, dst);
    return;
  }
  digest(mac_ctx, digest_size, dst);
  update(mac_ctx, digest_size, dst);
}
I'm fine with that. Do you want me to post an updated patch or would you modify it?
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
  update(mac_ctx, 1, &i);
  if (length <= digest_size) {
    digest(mac_ctx, length, dst);
    return;
  }
  digest(mac_ctx, digest_size, dst);
  update(mac_ctx, digest_size, dst);
}
I'm fine with that. Do you want me to post an updated patch or would you modify it?
If you can test it and post an updated patch, that's helpful.
Regards, /Niels
On Mon, 2017-05-22 at 11:00 +0200, Nikos Mavrogiannopoulos wrote:
  update(mac_ctx, 1, &i);
  if (length <= digest_size) {
    digest(mac_ctx, length, dst);
    return;
  }
  digest(mac_ctx, digest_size, dst);
  update(mac_ctx, digest_size, dst);
}
I'm fine with that. Do you want me to post an updated patch or would you modify it?
Here are the patches for HKDF only, updated with the above approach (and a sanity check for length).
The gitlab CI output for these should be at: (queued atm) https://gitlab.com/nmav/nettle/pipelines/8450279
That patch set also includes a tweak to .gitlab-ci.yml for CI to work without attempting to regenerate the testsuite make rules.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
Here are the patches for HKDF only, updated with the above approach (and a sanity check for length).
Thanks, we're getting closer!
That patch set also includes a tweak to .gitlab-ci.yml for CI to work without attempting to regenerate the testsuite make rules.
Ouch, I think that indicates a real problem with the change I made: a ./configure && make && make check build will now always remake the .test-rules.make file, because it depends on Makefile, which obviously is modified by configure. That's pretty bad.
I think I saw a problem with having it depend only on Makefile.in; I have to investigate and probably change back.
--- /dev/null
+++ b/hkdf.c
@@ -0,0 +1,85 @@
+/*
+ Copyright (C) 2017 Red Hat, Inc.
+
+ Author: Nikos Mavrogiannopoulos
+
+ This file is part of GnuTLS.
+
+ The GnuTLS is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public License
+ as published by the Free Software Foundation; either version 2.1 of
+ the License, or (at your option) any later version.
+
+ This library is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public License
+ along with this program. If not, see http://www.gnu.org/licenses/
+ */
This file carries a GnuTLS copyright notice rather than the Nettle copyright notice used in most other files. I guess that's unintentional? Indentation in the rest of this file is also using a very different style than the rest of Nettle.
--- a/nettle.texinfo
+++ b/nettle.texinfo
...
+The key derivation function used in TLS 1.3 is HKDF, described
+in @cite{RFC 5869}, and is a derivation function based on HMAC.
+Nettle's @acronym{HKDF} functions are defined in
+@file{<nettle/hkdf.h>}. There are two abstract functions for extract
+and expand operations that operate on any HMAC implemented via the @code{nettle_hash_update_func},
+@code{nettle_hash_digest_func} interfaces.
It's intended for HMAC, but it should work with any keyed hash function, aka MAC, right?
+@deftypefun void hkdf_extract (void *mac_ctx, nettle_hash_update_func *update, nettle_hash_digest_func *digest, size_t digest_size, size_t secret_size, const uint8_t *secret, uint8_t *dst)
The Nettle convention for arguments is length, pointer, so I think it should be
hkdf_extract(void *mac_ctx,
             nettle_hash_update_func *update,
             nettle_hash_digest_func *digest,
             size_t secret_size, const uint8_t *secret,
             size_t digest_size, uint8_t *digest)
+Extract a Pseudorandom Key (PRK) from a secret and a salt according
+to HKDF. The HMAC must have been initialized, with its key being the
+salt for the Extract operation.
I find the terminology a bit confusing. There's "salt", "key", and "secret". If I get this right, what hkdf calls "salt" is used as the key for the underlying MAC? I think it would be good to have an introductory paragraph explaining how these fit together and defining the terminology; then the documentation for the actual functions wouldn't need to explain it.
+This function will call the
+@var{update} and @var{digest} functions passing the @var{mac_ctx}
+context parameter as an argument in order to compute digest of size
+@var{digest_size}. Inputs are the secret @var{secret} of length
+@var{secret_length}. The output length is fixed to @var{digest_size} octets,
+thus the output buffer @var{dst} must have room for at least @var{digest_size} octets.
+@end deftypefun
I think it's confusing to say that the output length is "fixed". Do you mean that digest_size must be the same as the digest size of the underlying MAC algorithm? Or may it be smaller, like for most other *_digest methods in Nettle? To me, it seems reasonable to support smaller values, but write that, to conform with the spec, the output length must equal the underlying digest size (if that's what the spec says).
(And I still find the utility of this function a bit questionable).
+@deftypefun void hkdf_expand (void *mac_ctx, nettle_hash_update_func *update, nettle_hash_digest_func *digest, size_t digest_size, size_t info_size, const uint8_t *info, size_t length, uint8_t *dst)
+Expand a Pseudorandom Key (PRK) to an arbitrary size according to HKDF.
+The HMAC must have been initialized, with its key being the
+PRK from the Extract operation.
Is it required that hkdf_extract is used in some way to produce the key for hkdf_expand? Then I think the relation between _extract and _expand needs to be clarified. Would you always have the same number of calls to _extract and _expand, or could one do _extract once and _expand multiple times (with different info strings)?
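To make the question concrete, the flow I would expect is roughly the following sketch (assuming hmac_sha256 and the prototypes quoted above; illustrative only, not tested):

#include <nettle/hkdf.h>   /* the header added by this patch set */
#include <nettle/hmac.h>

/* Derive two independent output keys from one (salt, secret) pair:
   extract once, then expand once per info string. */
static void
derive_two_keys(size_t salt_size, const uint8_t *salt,
                size_t secret_size, const uint8_t *secret,
                uint8_t key1[32], uint8_t key2[16])
{
  struct hmac_sha256_ctx ctx;
  uint8_t prk[SHA256_DIGEST_SIZE];
  static const uint8_t info1[] = "handshake key";     /* example labels */
  static const uint8_t info2[] = "application key";

  /* Extract: the hkdf "salt" is the MAC key, the secret is the message. */
  hmac_sha256_set_key(&ctx, salt_size, salt);
  hkdf_extract(&ctx,
               (nettle_hash_update_func *) hmac_sha256_update,
               (nettle_hash_digest_func *) hmac_sha256_digest,
               SHA256_DIGEST_SIZE,
               secret_size, secret, prk);

  /* Expand: the PRK becomes the MAC key; each info string yields an
     independent output, so one extract can feed several expands. */
  hmac_sha256_set_key(&ctx, SHA256_DIGEST_SIZE, prk);
  hkdf_expand(&ctx,
              (nettle_hash_update_func *) hmac_sha256_update,
              (nettle_hash_digest_func *) hmac_sha256_digest,
              SHA256_DIGEST_SIZE,
              sizeof(info1) - 1, info1, 32, key1);

  hmac_sha256_set_key(&ctx, SHA256_DIGEST_SIZE, prk);
  hkdf_expand(&ctx,
              (nettle_hash_update_func *) hmac_sha256_update,
              (nettle_hash_digest_func *) hmac_sha256_digest,
              SHA256_DIGEST_SIZE,
              sizeof(info2) - 1, info2, 16, key2);
}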
+This function will call the
+@var{update} and @var{digest} functions passing the @var{mac_ctx}
+context parameter as an argument in order to compute digest of size
+@var{digest_size}. Inputs are the info @var{info} of length
+@var{info_length}, and the desired derived output length @var{length}.
+The output buffer is @var{dst} which must have room for at least @var{length} octets.
+@end deftypefun
Do you intend to add specific functions like hkdf_hmac_sha256_expand(...) too?
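I'm thinking of something along these lines (the name hkdf_hmac_sha256_expand is hypothetical; it would just wrap the generic function):

void
hkdf_hmac_sha256_expand(struct hmac_sha256_ctx *ctx,
                        size_t info_size, const uint8_t *info,
                        size_t length, uint8_t *dst)
{
  /* The caller keys ctx with the PRK; this wrapper only fills in the
     hmac_sha256 plumbing. */
  hkdf_expand(ctx,
              (nettle_hash_update_func *) hmac_sha256_update,
              (nettle_hash_digest_func *) hmac_sha256_digest,
              SHA256_DIGEST_SIZE,
              info_size, info, length, dst);
}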
Regards, /Niels
On Mon, 2017-05-22 at 19:09 +0200, Niels Möller wrote:
That patch set also includes a tweak to .gitlab-ci.yml for CI to work without attempting to regenerate the testsuite make rules.
Ouch, I think that indicates a real problem with the change I made: a ./configure && make && make check build will now always remake the .test-rules.make file, because it depends on Makefile, which obviously is modified by configure. That's pretty bad.
I think I saw a problem with having it depend only on Makefile.in; I have to investigate and probably change back.
Do you see the gitlab notifications about broken builds? An approach to avoid broken builds in master may be to wait for a successful build prior to committing to master, for example by committing to a branch first and waiting for the gitlab.com mirror sync and the CI results.
The above process would be easier to handle if you switch to using merge requests for contributions. That would of course require moving to gitlab.com or enabling CI pipelines on git.lysator.liu.se (assuming it provides some CI runners).
I've put my updates in a merge request on my mirror repo, to see how it would look: https://gitlab.com/nmav/nettle/merge_requests/1
(notice that the previous CI compilation failure is now apparent, and in the 'Changes' tab it is possible to comment inline on changes)
+The key derivation function used in TLS 1.3 is HKDF, described
+in @cite{RFC 5869}, and is a derivation function based on HMAC.
+Nettle's @acronym{HKDF} functions are defined in
+@file{<nettle/hkdf.h>}. There are two abstract functions for extract
+and expand operations that operate on any HMAC implemented via the @code{nettle_hash_update_func},
+@code{nettle_hash_digest_func} interfaces.
It's intended for HMAC, but it should work with any keyed hash function, aka MAC, right?
It may, but HKDF is only defined with HMAC.
+@deftypefun void hkdf_extract (void *mac_ctx, nettle_hash_update_func *update, nettle_hash_digest_func *digest, size_t digest_size, size_t secret_size, const uint8_t *secret, uint8_t *dst)
The Nettle convention for arguments is length, pointer, so I think it should be
hkdf_extract(void *mac_ctx,
             nettle_hash_update_func *update,
             nettle_hash_digest_func *digest,
             size_t secret_size, const uint8_t *secret,
             size_t digest_size, uint8_t *digest)
I follow the convention of the pbkdf2() function, where digest_size is part of the hash parameters passed. One could add an additional parameter for digest_size to allow getting a smaller PRK. I didn't see the reason for it, as the PRK is used as-is in the expand operation. I could add it if needed, but as you said, if one wants a smaller value one can use the HMAC operation directly. This is a helper function for directly mapping the functionality needed for HKDF.
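For reference, the pbkdf2() prototype I'm matching looks roughly like this (from pbkdf2.h, quoted from memory, so please check the header for the authoritative version):

void
pbkdf2(void *mac_ctx,
       nettle_hash_update_func *update,
       nettle_hash_digest_func *digest,
       size_t digest_size, unsigned iterations,
       size_t salt_length, const uint8_t *salt,
       size_t length, uint8_t *dst);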
+This function will call the
+@var{update} and @var{digest} functions passing the @var{mac_ctx}
+context parameter as an argument in order to compute digest of size
+@var{digest_size}. Inputs are the secret @var{secret} of length
+@var{secret_length}. The output length is fixed to @var{digest_size} octets,
+thus the output buffer @var{dst} must have room for at least @var{digest_size} octets.
+@end deftypefun
I think it's confusing to say that the output length is "fixed". Do you mean that digest_size must be the same as the digest size of the underlying MAC algorithm? Or may it be smaller, like for most other *_digest methods in Nettle? To me, it seems reasonable to support smaller values, but write that, to conform with the spec, the output length must equal the underlying digest size (if that's what the spec says).
[...]
+@deftypefun void hkdf_expand (void *mac_ctx, nettle_hash_update_func *update, nettle_hash_digest_func *digest, size_t digest_size, size_t info_size, const uint8_t *info, size_t length, uint8_t *dst)
+Expand a Pseudorandom Key (PRK) to an arbitrary size according to HKDF.
+The HMAC must have been initialized, with its key being the
+PRK from the Extract operation.
Is it required that hkdf_extract is used in some way to produce the key for hkdf_expand? Then I think the relation between _extract and _expand needs to be clarified. Would you always have the same number of calls to _extract and _expand, or could one do _extract once and _expand multiple times (with different info strings)?
I'm not sure what you mean. The relation is defined in the HKDF document, though upper-layer protocols like TLS 1.3 may utilize these in arbitrary ways. Nettle provides the implementation of the primitives.
+This function will call the
+@var{update} and @var{digest} functions passing the @var{mac_ctx}
+context parameter as an argument in order to compute digest of size
+@var{digest_size}. Inputs are the info @var{info} of length
+@var{info_length}, and the desired derived output length @var{length}.
+The output buffer is @var{dst} which must have room for at least @var{length} octets.
+@end deftypefun
Do you intend to add specific functions like hkdf_hmac_sha256_expand(...) too?
I have no immediate plans for that.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
On Mon, 2017-05-22 at 19:09 +0200, Niels Möller wrote:
Ouch, I think that indicates a real problem with the change I made: a ./configure && make && make check build will now always remake the .test-rules.make file, because it depends on Makefile, which obviously is modified by configure. That's pretty bad.
I think I saw a problem with having it depend only on Makefile.in; I have to investigate and probably change back.
I'm about to commit the patch below. It seems to work for me; I tried removing and re-adding entries to TS_NETTLE_SOURCES, and .test-rules.make was updated as expected. And make distcheck also still works.
Do you see the gitlab notifications about broken builds?
I get occasional notifications, but I'm not really familiar with that process.
An approach to avoid broken builds in master, may be to wait for a successful build prior to committing to master. For example committing to a branch first and waiting for the gitlab.com mirror sync and the CI results.
If I create a branch for that purpose, will it be picked up automatically? Roughly how long is the latency for sync + build?
Regards, /Niels
diff --git a/ChangeLog b/ChangeLog
index 8269e6b..e730110 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,12 @@
+2017-05-23  Niels Möller  nisse@lysator.liu.se
+
+	Rework the previous change, which had the unintended effect of
+	always regenerating .test-rules.make after ./configure is run.
+	* testsuite/Makefile.in (test-rules.stamp): New stamp file target,
+	depend on Makefile.in, and run $(MAKE) test-rules.
+	(.test-rules.make): Add a level of indirection, by depending on
+	test-rules.stamp.
+
 2017-05-20  Niels Möller  nisse@lysator.liu.se
 	* testsuite/Makefile.in (test-rules): Use $(srddir)/-prefix for

diff --git a/testsuite/Makefile.in b/testsuite/Makefile.in
index cc8dea3..c1ac7d2 100644
--- a/testsuite/Makefile.in
+++ b/testsuite/Makefile.in
@@ -67,6 +67,7 @@ EXTRA_TARGETS = $(EXTRA_SOURCES:.c=$(EXEEXT))
 SOURCES = $(TS_SOURCES) $(EXTRA_SOURCES) testutils.c dlopen-test.c
 DISTFILES = $(SOURCES) $(CXX_SOURCES) Makefile.in .test-rules.make \
+	test-rules.stamp \
 	$(TS_SH) setup-env teardown-env \
 	gold-bug.txt testutils.h sha3.awk
@@ -96,9 +97,7 @@ dlopen-test$(EXEEXT): dlopen-test.$(OBJEXT) testutils.$(OBJEXT)
 	$(LINK) dlopen-test.$(OBJEXT) -ldl -o dlopen-test$(EXEEXT)
 .PHONY: test-rules
-test-rules: $(srcdir)/.test-rules.make
-
-$(srcdir)/.test-rules.make: Makefile
+test-rules:
 	(for f in $(TS_NETTLE) $(TS_HOGWEED) $(EXTRA_TARGETS) ; do \
 	  echo $$f'$$(EXEEXT): '$$f'.$$(OBJEXT)' ; \
 	  echo '	$$(LINK) '$$f'.$$(OBJEXT) $$(TEST_OBJS) -o '$$f'$$(EXEEXT)' ; \
@@ -109,10 +108,14 @@ $(srcdir)/.test-rules.make: Makefile
 	  echo '	$$(LINK_CXX) '$$f'.$$(OBJEXT) $$(TEST_OBJS) -o '$$f'$$(EXEEXT)' ; \
 	  echo ; \
 	done) > $(srcdir)/.test-rules.make
-	@echo "******************************************************************"
-	@echo "testsuite Makefile rules have been regenerated; please re-run make"
-	@echo "******************************************************************"
-	false
+
+$(srcdir)/.test-rules.make: $(srcdir)/test-rules.stamp
+
+# Updates the stamp file *first*, so that this rule isn't triggered
+# again and again by the recursive $(MAKE).
+$(srcdir)/test-rules.stamp: Makefile.in
+	echo stamp > $(srcdir)/test-rules.stamp
+	$(MAKE) test-rules
include $(srcdir)/.test-rules.make
diff --git a/testsuite/test-rules.stamp b/testsuite/test-rules.stamp
new file mode 100644
index 0000000..859afb1
--- /dev/null
+++ b/testsuite/test-rules.stamp
@@ -0,0 +1 @@
+stamp
On Tue, May 23, 2017 at 11:37 PM, Niels Möller nisse@lysator.liu.se wrote:
Do you see the gitlab notifications about broken builds?
I get occasional notifications, but I'm not really familiar with that process.
An approach to avoid broken builds in master, may be to wait for a successful build prior to committing to master. For example committing to a branch first and waiting for the gitlab.com mirror sync and the CI results.
If I create a branch for that purpose, will it be picked up automatically? Roughly how long is the latency for sync + build?
gitlab.com/gnutls/nettle syncs every hour with your repo. The build starts immediately but may take some time.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
gitlab.com/gnutls/nettle syncs every hour with your repo. The build starts immediately but may take some time.
I've just created a branch master-updates for CI runs of changes before they are merged to master. (I've not gotten much hacking time the last few weeks, spending the (long) weekends in the garden.)
Regards, /Niels
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
On Mon, 2017-05-22 at 19:09 +0200, Niels Möller wrote:
Is it required that hkdf_extract is used in some way to produce the key for hkdf_expand? Then I think the relation between _extract and _expand needs to be clarified. Would you always have the same number of calls to _extract and _expand, or could one do _extract once and _expand multiple times (with different info strings)?
I'm not sure what you mean. The relation is defined in the HKDF document, though upper-layer protocols like TLS 1.3 may utilize these in arbitrary ways. Nettle provides the implementation of the primitives.
Sorry this got a bit stalled. I would like the Nettle docs to be reasonably self-contained, and explain what the primitive does, what problem it is intended to solve, and the typical way to use it. And in case the terminology in the relevant RFC or other literature differs from what's used elsewhere in the Nettle manual, point out how they relate. In particular, I found the "salt"/"key"/"secret" arguments a bit confusing, as well as the purpose of the _extract function.
With your current patch to the docs, I'll have to read the HKDF spec carefully myself to be able to review the code and the docs, and I haven't yet gotten the time to do that. Clear and more self-contained documentation would make this easier.
Best regards, /Niels
PS. I'm currently on vacation, i.e., no work duties in the way of Nettle hacking, but I'm a bit offline and not reading email daily.
On Fri, Jul 7, 2017 at 9:24 PM, Niels Möller nisse@lysator.liu.se wrote:
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
On Mon, 2017-05-22 at 19:09 +0200, Niels Möller wrote:
Is it required that hkdf_extract is used in some way to produce the key for hkdf_expand? Then I think the relation between _extract and _expand needs to be clarified. Would you always have the same number of calls to _extract and _expand, or could one do _extract once and _expand multiple times (with different info strings)?
I'm not sure what you mean. The relation is defined in the HKDF document, though upper-layer protocols like TLS 1.3 may utilize these in arbitrary ways. Nettle provides the implementation of the primitives.
Sorry this got a bit stalled. I would like the Nettle docs to be reasonably self-contained, and explain what the primitive does, what problem it is intended to solve, and the typical way to use it. And in case the terminology in the relevant RFC or other literature differs from what's used elsewhere in the Nettle manual, point out how they relate. In particular, I found the "salt"/"key"/"secret" arguments a bit confusing, as well as the purpose of the _extract function.
If the changes below are not sufficient, please provide some concrete suggestions for the terminology to use. I've tried to keep the terminology consistent with the only other key derivation algorithm (PBKDF2) but I may have failed on that.
With your current patch to the docs, I'll have to read the HKDF spec carefully myself to be able to review the code and the docs, and I haven't yet gotten the time to do that. Clear and more self-contained documentation would make this easier.
I have modified the text to be more self-contained and clarify the role of the variables, which may address terminology as well. Let me know if that's ok.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
I have modified the text to be more self-contained and clarify the role of the variables, which may address terminology as well. Let me know if that's ok.
Thanks!
I've pushed what I believe was the latest version of the hkdf patch + this documentation patch to the branch hkdf-support. Can you double check that I got it right?
I'll then fix some minor spacing/formatting details and write a ChangeLog entry before merging to master.
Regards, /Niels
On Wed, 2017-08-30 at 19:05 +0200, Niels Möller wrote:
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
I have modified the text to be more self-contained and clarify the role of the variables, which may address terminology as well. Let me know if that's ok.
Thanks!
I've pushed what I believe was the latest version of the hkdf patch + this documentation patch to the branch hkdf-support. Can you double check that I got it right?
Thanks, it matches the version I've included in gnutls.
regards, Nikos
Nikos Mavrogiannopoulos nmav@redhat.com writes:
On Wed, 2017-08-30 at 19:05 +0200, Niels Möller wrote:
I've pushed what I believe was the latest version of the hkdf patch + this documentation patch to the branch hkdf-support. Can you double check that I got it right?
Thanks, it matches the version I've included in gnutls.
Good! Pushed now, with some minor additional cleanups.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes (back in May):
Nikos Mavrogiannopoulos nmav@redhat.com writes:
--- /dev/null
+++ b/hkdf.c
@@ -0,0 +1,85 @@
+/*
+ Copyright (C) 2017 Red Hat, Inc.
+
+ Author: Nikos Mavrogiannopoulos
+
+ This file is part of GnuTLS.
This file carries a GnuTLS copyright notice rather than the Nettle copyright notice used in most other files. I guess that's unintentional? Indentation in the rest of this file is also using a very different style than the rest of Nettle.
Forgot about this. I'll just change to the (slightly stricter, GPLv2+ or LGPLv3+) Nettle licensing notice, OK?
Regards, /Niels
On Wed, 2017-09-06 at 22:47 +0200, Niels Möller wrote:
nisse@lysator.liu.se (Niels Möller) writes (back in May):
Nikos Mavrogiannopoulos nmav@redhat.com writes:
--- /dev/null
+++ b/hkdf.c
@@ -0,0 +1,85 @@
+/*
+ Copyright (C) 2017 Red Hat, Inc.
+
+ Author: Nikos Mavrogiannopoulos
+
+ This file is part of GnuTLS.
This file carries a GnuTLS copyright notice rather than the Nettle copyright notice used in most other files. I guess that's unintentional? Indentation in the rest of this file is also using a very different style than the rest of Nettle.
Forgot about this. I'll just change to the (slightly stricter, GPLv2+ or LGPLv3+) Nettle licensing notice, OK?
Fine with me.
regards, Nikos