Hello, I tried to switch gnutls to use dsa_generate_keypair() to generate primes for the DH key exchange, but unfortunately this interface has the strict DSS checks for q_bits. Would it be possible to have a dh_generate_keypair() that does the exact same thing without the q_bits and p_bits limitations?
regards, Nikos
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
I tried to switch gnutls to use dsa_generate_keypair() to generate primes for the DH key exchange, but unfortunately this interface has the strict DSS checks for q_bits.
I'm not sure what the right thing is here.
Simplest would be to just drop these requirements from dsa_generate_keypair, and let it do whatever the caller asks for. Do you think that makes sense?
Would it be possible to have a dh_generate_keypair() that does the exact same thing without the q_bits and p_bits limitations?
Do you need exactly the same thing for DH? I.e., a group of relatively small size q, which is a subgroup of Z_p^* for some much larger p?
I imagine one might want to rather use primes like p = 2q + 1 or so, so the size q subgroup is almost as large as Z_p. I'm not sure the current code works with q_size = p_size-1.
A related issue, which someone else asked about a week or so ago, is separating generation of DSA parameters (i.e., p, q, g) from generation of the key pair. Currently, there's no easy way in nettle to generate a key for some pre-specified DSA group. It might be better to have something like
struct dsa_params {
  mpz_t p;
  mpz_t q;
  mpz_t g;
};

struct dsa_private_key {
  const struct dsa_params *params;
  mpz_t x;
};

struct dsa_public_key {
  const struct dsa_params *params;
  mpz_t y;
};
(a bit analogous to the new ecdsa code). But that would be a pretty large and incompatible change, so we maybe shouldn't do that, or at least think carefully about compatibility.
Regards, /Niels
On 12/06/2013 02:00 PM, Niels Möller wrote:
Simplest would be to just drop these requirements from dsa_generate_keypair, and let it do whatever the caller asks for. Do you think that makes sense?
This seems reasonable to me, even for people who want to make DSA keys. If they want to follow the strict requirements about the sizes of p and q, they have all the tools at their disposal to follow them, and i'm not sure nettle should be enforcing that behavior.
Do you need exactly the same thing for DH? I.e., a group of relatively small size q, which is a subgroup of Z_p^* for some much larger p?
I imagine one might want to rather use primes like p = 2q + 1 or so, so the size q subgroup is almost as large as Z_p. I'm not sure the current code works with q_size = p_size-1.
Why would a DH key exchange need the larger group but a DSA signature be secure with the smaller group? This is a serious question and not a rhetorical one -- if you have a pointer to any sort of writeup that explains the difference of these two uses of the discrete log, i'd like to try to understand it. Sorry if this is a dumb question!
--dkg
Daniel Kahn Gillmor dkg@fifthhorseman.net writes:
Why would a DH key exchange need the larger group but a DSA signature be secure with the smaller group?
I think the main point of the smaller group in DSA is to get small signatures.
And discrete logs in the large group and discrete logs in the small subgroup are of comparable difficulty, because there's more structure in the larger group ("index calculus" is the name of the trick, iirc).
For DH, I don't think there's any particular reason to prefer to work in a small subgroup. But I may be missing something, of course.
Regards, /Niels
I wrote:
I think the main point of the smaller group in DSA is to get small signatures.
And it should also gets a bit faster, with smaller exponents. Which might be an advantage also for DH, but I don't know if that's considered important?
And discrete logs in the large group and discrete logs in the small subgroup are of comparable difficulty, because there's more structure in the larger group ("index calculus" is the name of the trick, iirc).
Let me say a bit more about this. There's one class of "abstract" discrete logarithm algorithms which work on any group where the group operations are computable.
If n is the size of the group, there's an "obvious" meet-in-the-middle algorithm with O(sqrt(n)) computation and O(sqrt(n)) storage: To find x such that g^x = y, let s = ceil(sqrt(n)). First tabulate g^{k s} for k = 0, 1, ..., s. Then compute y g^{-m}, for m = 0, 1, ..., and stop when you get a value which is present in the table. Then x = k s + m. (Different time-memory tradeoffs are possible, of course).
Even better is to use the Pollard-rho discrete log algorithm, with computation O(sqrt(n)), and very little storage.
As far as I'm aware, these abstract group algorithms are the best known for directly attacking the DSA subgroup, or for attacking the elliptic curve groups used for cryptographic purposes.
On the other hand, to compute discrete logarithms in the multiplicative group of Z_p, there's the index calculus algorithm, which is somewhat similar to the number field sieve factoring algorithm, and with similar computational complexity (I'm not sure exactly what the complexity is, though. But a *lot* better than O(sqrt(n))).
To break DSA, it's the attacker's choice to try the index calculus algorithm on the large group, or the Pollard-rho algorithm on the smaller subgroup. Parameters should be chosen so that both attacks are too expensive. But for reasonable parameters, the cost should be roughly the same, e.g., it makes little sense to increase the size of p to 4096 bits, while sticking to only 160 bits for q.
I think it's a bit curious that each of the discrete logarithm algorithms above is closely related to a factoring algorithm.
Regards, /Niels
On Fri 2013-12-06 15:12:57 -0500, Niels Möller wrote:
I think the main point of the smaller group in DSA is to get small signatures.
And discrete logs in the large group and discrete logs in the small subgroup are of comparable difficulty, because there's more structure in the larger group ("index calculus" is the name of the trick, iirc).
cool, thanks, i'll look into that further.
For DH, I don't think there's any particular reason to prefer to work in a small subgroup. But I may be missing something, of course.
I can think of a few, but i'm not sure how legitimate they are:
One is based on minimized entropy: If you know your small subgroup is of size q, then you need less entropy to choose your secret A in the DH key exchange, since it never needs to exceed q.
another might be efficiency of modular exponentiation: if you use exponentiation by squaring, then the cost of calculating A' = g^A mod p where A < q should be O(log_2(q)) instead of O(log_2(p)) where A < p. So if q has half the bits of p, you'd halve the amount of computation.
The second modular exponentiation (B'^A mod p) would be similarly faster. Both of these factors seem like they might be significant in a TLS endpoint that terminates many DHE sessions per second, but i haven't profiled them.
what do you think?
--dkg
Daniel Kahn Gillmor dkg@fifthhorseman.net writes:
cool, thanks, i'll look into that further.
You can start with Handbook of Applied Cryptography, sec. 3.6 (available as pdfs at http://cacr.uwaterloo.ca/hac/).
For DH, I don't think there's any particular reason to prefer to work in a small subgroup. But I may be missing something, of course.
I can think of a few, but i'm not sure how legitimate they are:
One is based on minimized entropy: If you know your small subgroup is of size q, then you need less entropy to choose your secret A in the DH key exchange, since it never needs to exceed q.
I doubt this matters, if you have a decent and properly seeded pseudorandomness generator.
another might be efficiency of modular exponentiation: if you use exponentiation by squaring, then the cost of calculating A' = g^A mod p where A < q should be O(log_2(q)) instead of O(log_2(p)) where A < p. So if q has half the bits of p, you'd halve the amount of computation.
I think this is true also for more sophisticated exponentiation algorithms. Cost is linear in exponent size.
The second modular exponentiation (B'^A mod p) would be similarly faster. Both of these factors seem like they might be significant in a TLS endpoint that terminates many DHE sessions per second, but i haven't profiled them.
Could well be. I'd like to hear what Nikos says about this.
Regards, /Niels
On Fri, 2013-12-06 at 23:31 +0100, Niels Möller wrote:
The second modular exponentiation (B'^A mod p) would be similarly faster. Both of these factors seem like they might be significant in a TLS endpoint that terminates many DHE sessions per second, but i haven't profiled them.
Could well be. I'd like to hear what Nikos says about this.
Yes, this is the reason I used this format for primes in gnutls. Other implementations use a prime p where, as you suggested, p-1 has a very large prime factor (comparable to p), and then select their key with a size based on a security parameter (e.g. 256 bits). Both cases are considered secure, but I find the former format and usage of the group more elegant.
To make things interesting, in TLS the client has no information on the construction of p (or the order of g), so it has to select 1 < x < p-1, which makes its computation a bit slower.
regards, Nikos
Daniel Kahn Gillmor dkg@fifthhorseman.net writes:
On Fri 2013-12-06 15:12:57 -0500, Niels Möller wrote:
For DH, I don't think there's any particular reason to prefer to work in a small subgroup. But I may be missing something, of course.
I can think of a few, but i'm not sure how legitimate they are:
[...]
another might be efficiency of modular exponentiation: if you use exponentiation by squaring, then the cost of calculating A' = g^A mod p where A < q should be O(log_2(q)) instead of O(log_2(p)) where A < p. So if q has half the bits of p, you'd halve the amount of computation.
And on the other hand, you just pointed out a potential problem on the ietf-ssh mailing list:
The selection of a discrete log group with a subgroup of targeted size q (instead of using a group with a safe prime modulus, which only allows subgroups of at worst (p-1)/2 if you exclude (p-1) as a valid public key) makes it costly to check whether the peer is forcing your shared secret into one of the other smaller subgroups.
If the subgroup is of prime size q, then you can check if an element x belongs to that subgroup by checking that x^q = 1 (mod p). Right? Is that too expensive? And that subgroup in turn has no proper subgroups.
Even with this additional check, it could be significantly faster than using the large group, in particular if one uses tricks to compute x^q and x^e (where e is your local and secret dh exponent) together.
Note that this kind of subgroup-forcing attack was used in the DHE variant of Bhargavan et al's recent attack against client certification in TLS (other mistakes in the TLS protocol played a role in these attacks too, of course)
I haven't read up on this.
Regards, /Niels
On Fri, 2013-12-06 at 20:00 +0100, Niels Möller wrote:
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
I tried to switch gnutls to use dsa_generate_keypair() to generate primes for the DH key exchange, but unfortunately this interface has the strict DSS checks for q_bits.
I'm not sure what the right thing is here.
Simplest would be to just drop these requirements from dsa_generate_keypair, and let it do whatever the caller asks for. Do you think that makes sense?
Sounds reasonable. Nettle is low-level anyway.
A related issue, which someone else asked about a week or so ago, is separating generation of DSA parameters (i.e., p, q, g) from generation of the key pair. Currently, there's no easy way in nettle to generate a key for some pre-specified DSA group. It might be better to have something like
I find that also useful. Now I just discard the values of x and y when I generate parameters, and generate the keys at a later point (when there is an actual TLS connection).
(a bit analogous to the new ecdsa code). But that would be a pretty large and incompatible change, so we maybe shouldn't do that, or at least think carefully about compatibility.
In the master branch you break the ABI anyway, so it may be a good time to introduce that. Otherwise you may simply introduce new functions for the new structures and leave the old API intact.
regards, Nikos
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
On Fri, 2013-12-06 at 20:00 +0100, Niels Möller wrote:
Simplest would be to just drop these requirements from dsa_generate_keypair, and let it do whatever the caller asks for. Do you think that makes sense?
Sounds reasonable. Nettle is low-level anyway.
I'll strive for that then. It's some work to support arbitrary p_size > q_size, though. I've spent some of the day looking into Pocklington's theorem and variants again. The cases q_size < p_size/2 and q_size > p_size/2 need different handling.
In the master branch you break the ABI anyway, so it may be a good time to introduce that. Otherwise you may simply introduce new functions for the new structures and leave the old API intact.
I think I can do that *almost* without breaking source-level compatibility. API draft:
New structs:
struct dsa_params {
  /* Modulus */
  mpz_t p;
  /* Group order */
  mpz_t q;
  /* Generator */
  mpz_t g;
};

struct dsa_value {
  const struct dsa_params *params;
  /* For private keys, represents an exponent (0 < x < q). For public
     keys, represents a group element (0 < x < p). */
  mpz_t x;
};
New functions:
int
dsa_sign(const struct dsa_value *key,
         void *random_ctx, nettle_random_func *random,
         size_t digest_size, const uint8_t *digest,
         struct dsa_signature *signature);

int
dsa_verify(const struct dsa_value *pub,
           size_t digest_size, const uint8_t *digest,
           const struct dsa_signature *signature);
These two names have existed in the repo for a few weeks, but in no released version, so it's no problem to change them.
void dsa_generate_params (struct dsa_params *params,
void *random_ctx, nettle_random_func *random,
void *progress_ctx, nettle_progress_func *progress, unsigned p_bits, unsigned q_bits);
New, obviously.
int dsa_generate_keypair (struct dsa_value *pub, struct dsa_value *key,
void *random_ctx, nettle_random_func *random);
This is a change to an advertised function in the API, and it breaks existing code. Not sure what to do. Either give a new name to the new function, or rename the old function, and let applications do preprocessor tricks like
#ifdef dsa_generate_keypair_old
#undef dsa_generate_keypair
#define dsa_generate_keypair dsa_generate_keypair_old
#endif
if they want to keep using the old function with no other changes. Or check for some define NETTLE_OLD_DSA_API in the header file to do that extra name mangling for the application.
And the rest of the old DSA API kept with no changes, possibly to be retired in the distant future.
Comments?
Regards, /Niels
On Mon, 2013-12-09 at 17:23 +0100, Niels Möller wrote:
Simplest would be to just drop these requirements from dsa_generate_keypair, and let it do whatever the caller asks for. Do you think that makes sense?
Sounds reasonable. Nettle is low-level anyway.
I'll strive for that then. It's some work to support arbitrary p_size > q_size, though. I've spent some of the day looking into Pocklington's theorem and variants again. The cases q_size < p_size/2 and q_size > p_size/2 need different handling.
I think having a limitation that q_size < p_size/2 is pretty much reasonable. The recommendations for DH parameters have q_size << p_size/2.
I think I can do that *almost* without breaking source-level compatibility. API draft:
Looks reasonable.
This is a change to an advertised function in the API, and it breaks existing code. Not sure what to do. Either give a new name to the new function, or rename the old function, and let applications do preprocessor tricks like
#ifdef dsa_generate_keypair_old
#undef dsa_generate_keypair
#define dsa_generate_keypair dsa_generate_keypair_old
#endif
I don't think it makes much sense to keep the old function if the ABI breaks anyway. It's not that big a change, but it's up to you.
regards, Nikos
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
I think having a limitation that q_size < p_size/2 is pretty much reasonable. The recommendations for DH parameters have q_size << p_size/2.
Good to know. I was thinking that, e.g., p_size = q_size + 1 (and p = 2q + 1) was important. But maybe that's a special case, and the general case of q_size close to p_size is not very important?
I don't think it makes much sense to keep the old function if the ABI breaks anyway. It's not that big deal of a change, but it's up to you.
My concerns here are mainly with source-level API (since we already know there will be an ABI break). It helps applications to transition to the new version if it's reasonably easy to write code which works with both the new and the old version. So they can do minimal changes to be able to compile with either version, and later move over to use the new API.
Regards, /Niels
On Mon, Dec 9, 2013 at 10:34 PM, Niels Möller nisse@lysator.liu.se wrote:
I think having a limitation that q_size < p_size/2 is pretty much reasonable. The recommendations for DH parameters have q_size << p_size/2.
Good to know. I was thinking that, e.g., p_size = q_size + 1 (and p = 2q + 1) was important. But maybe that's a special case, and the general case of q_size close to p_size is not very important?
Other implementations use it (or more precisely, they use safe primes rather than setting q explicitly). If that use-case is required, having a different function to generate safe primes seems good enough.
regards, Nikos
nisse@lysator.liu.se (Niels Möller) writes:
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
In the master branch you break the ABI anyway, so it may be a good time to introduce that. Otherwise you may simply introduce new functions for the new structures and leave the old API intact.
I think I can do that *almost* without breaking source-level compatibility. API draft:
New structs:
struct dsa_params {
  /* Modulus */
  mpz_t p;
  /* Group order */
  mpz_t q;
  /* Generator */
  mpz_t g;
};

struct dsa_value {
  const struct dsa_params *params;
  /* For private keys, represents an exponent (0 < x < q). For public
     keys, represents a group element (0 < x < p). */
  mpz_t x;
};
I have now implemented this and pushed it to the dsa-reorg branch in the repo. Comments appreciated.
It remains to convert the functions that convert DSA keys to and from strings using sexp or ASN.1 DER formatting. I'm not sure if we need to maintain any source-level backwards compatibility there. And to add tests using the new interface.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
nisse@lysator.liu.se (Niels Möller) writes:
Nikos Mavrogiannopoulos nmav@gnutls.org writes:
In the master branch you break the ABI anyway, so it may be a good time to introduce that. Otherwise you may simply introduce new functions for the new structures and leave the old API intact.
I think I can do that *almost* without breaking source-level compatibility. API draft:
New structs:
struct dsa_params {
  /* Modulus */
  mpz_t p;
  /* Group order */
  mpz_t q;
  /* Generator */
  mpz_t g;
};

struct dsa_value {
  const struct dsa_params *params;
  /* For private keys, represents an exponent (0 < x < q). For public
     keys, represents a group element (0 < x < p). */
  mpz_t x;
};
I have now implemented this and pushed it to the dsa-reorg branch in the repo. Comments appreciated.
It remains to convert the functions that convert DSA keys to and from strings using sexp or ASN.1 DER formatting.
I've converted these to the new API now (with no backwards compatibility). Looking at http://www.lysator.liu.se/~nisse/nettle/plan.html, the DSA interface is one of the last API changes I'd like to complete before the release.
The current code (on the dsa-reorg branch) uses a struct dsa_params to represent the group parameters p, q, and g. And a struct dsa_value which holds a pointer to the parameters (a bit like the ecc functions) and a single number, which can represent either a group element or an exponent, depending on the context.
This makes it a bit unwieldy to read a key pair, since one needs to initialize three different structs, for parameters, public key, and private key. I think the old interface which puts the parameters and the public key in the same struct is a bit easier in some common use cases.
Typical code, from pkcs1-conv.c:
static int
convert_dsa_private_key(struct nettle_buffer *buffer,
                        size_t length, const uint8_t *data)
{
  struct dsa_params params;
  struct dsa_value pub;
  struct dsa_value priv;
  int res;

  dsa_params_init (&params);
  dsa_value_init (&pub, &params);
  dsa_value_init (&priv, &params);

  if (dsa_openssl_private_key_from_der(&params, &pub, &priv,
                                       0, length, data))
    {
      /* Reuses the buffer */
      nettle_buffer_reset(buffer);
      res = dsa_keypair_to_sexp(buffer, NULL, &pub, &priv);
    }
  else
    {
      werror("Invalid OpenSSL private key.\n");
      res = 0;
    }
  dsa_value_clear (&pub);
  dsa_value_clear (&priv);
  dsa_params_clear (&params);
  return res;
}
What do you think?
Is it possible to somehow provide an "all-in-one" interface for both parameters and a public or private key, and have a separate parameter struct (for the benefit of Diffie-Hellman use, or for keys using some fixed predefined group), without lots of duplication?
It differs a bit from the ecc case, in that a common case is that the dsa parameters are defined more or less dynamically at runtime, while ecc curves are compile time constants.
One possibility might be to have all dsa functions take the dsa group parameters and the actual key as separate function arguments. Then the application is free to choose if it wants an all-in-one key struct like
struct dsa_public_key { struct dsa_params params; mpz_t y; };
or something like the above dsa_value, or keep the params separately in some other way. It would call the same nettle functions in either case. Any struct combining key and parameters would always be taken apart when calling nettle functions; it would not appear in any nettle function prototypes. At the moment, I think that is an attractive alternative.
A somewhat related question is that of backwards compatibility. The current code maintains struct dsa_public_key and struct dsa_private_key unchanged, and the sign and verify functions that use these types, for some source level compatibility with existing code. Is that important or not?
And to add tests using the new interface.
This still remains. In particular, the new key generation function.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
One possibility might be to have all dsa functions take the dsa group parameters and the actual key as separate function arguments.
I've tried this now. I think it looks reasonably good. Excerpts from the new dsa.h:
struct dsa_params {
  /* Modulus */
  mpz_t p;
  /* Group order */
  mpz_t q;
  /* Generator */
  mpz_t g;
};
So this is the new parameter struct. Is this a good name, or should it be struct dsa_group?
int
dsa_sign(const struct dsa_params *params, const mpz_t x,
         void *random_ctx, nettle_random_func *random,
         size_t digest_size, const uint8_t *digest,
         struct dsa_signature *signature);

int
dsa_verify(const struct dsa_params *params, const mpz_t y,
           size_t digest_size, const uint8_t *digest,
           const struct dsa_signature *signature);
These functions now take the parameters separately. One could get something more in style with the old interface by defining
#define dsa_sign(pub, key, [...]) dsa_sign(&pub->params, key->x, [...])
#define dsa_verify(pub, [...]) dsa_verify(&pub->params, pub->y, [...])
Almost makes one wish for C++ style overloading...
int
dsa_generate_params (struct dsa_params *params,
                     void *random_ctx, nettle_random_func *random,
                     void *progress_ctx, nettle_progress_func *progress,
                     unsigned p_bits, unsigned q_bits);

void
dsa_generate_keypair (const struct dsa_params *params,
                      mpz_t pub, mpz_t key,
                      void *random_ctx, nettle_random_func *random);
It seems reasonable to provide one key generation function which also generates parameters, and one key generation function which takes fixed parameters as argument. Any suggestion for naming? For compatibility, it would be preferable to keep dsa_generate_keypair unchanged, and invent a new name for the function above.
/* Convenience structs, close to the interface used in nettle-2.7.x
   and earlier. */
struct dsa_public_key {
  struct dsa_params params;
  /* Public value */
  mpz_t y;
};
This is essentially the same struct as in earlier versions, but it wraps the parameters in a struct, so it's an API change. I think this is the sane way to do it, if this is viewed as an interface to be supported in future versions, and not just something retained for backwards compatibility.
In this version of the interface, struct dsa_public_key is not deprecated, but it is made *optional*: all dsa features should be available even if you don't bundle parameters and public key value in this way.
And I think bundling the parameters with the public key does make sense for many common use cases, in particular applications only wanting to verify signatures and certificates.
int
dsa_sha1_sign(const struct dsa_public_key *pub,
              const struct dsa_private_key *key,
              void *random_ctx, nettle_random_func *random,
              struct sha1_ctx *hash,
              struct dsa_signature *signature);

int
dsa_sha1_verify(const struct dsa_public_key *key,
                struct sha1_ctx *hash,
                const struct dsa_signature *signature);
These and related functions retained, with no change to the prototypes. I think most applications can use these functions, rather than the more general dsa_sign and dsa_verify above. Maybe the _sign functions should be changed to take a struct dsa_params * rather than a struct dsa_public_key *; that would be more logical and consistent, but less compatible with existing code.
int
dsa_generate_keypair_old(struct dsa_public_key *pub,
                         struct dsa_private_key *key,
                         void *random_ctx, nettle_random_func *random,
                         void *progress_ctx, nettle_progress_func *progress,
                         unsigned p_bits, unsigned q_bits);
As said above, there's a naming issue here. I think it would be nice to keep this function with name and prototype unchanged.
/* Generates a public-key expression if PRIV is NULL. */
int
dsa_keypair_to_sexp(struct nettle_buffer *buffer,
                    const char *algorithm_name, /* NULL means "dsa" */
                    const struct dsa_params *params,
                    const mpz_t pub, const mpz_t priv);

/* If PRIV is NULL, expect a public-key expression. If PUB is NULL,
 * expect a private key expression and ignore the parts not needed
 * for the public key. */
/* Keys must be initialized before calling this function, as usual. */
int
dsa_sha1_keypair_from_sexp(struct dsa_params *params,
                           mpz_t pub, mpz_t priv,
                           unsigned p_max_bits,
                           size_t length, const uint8_t *expr);
int
dsa_params_from_der_iterator(struct dsa_params *params,
                             unsigned max_bits, unsigned q_bits,
                             struct asn1_der_iterator *i);

int
dsa_public_key_from_der_iterator(const struct dsa_params *params,
                                 mpz_t pub,
                                 struct asn1_der_iterator *i);

int
dsa_openssl_private_key_from_der_iterator(struct dsa_params *params,
                                          mpz_t pub, mpz_t priv,
                                          unsigned p_max_bits,
                                          struct asn1_der_iterator *i);

int
dsa_openssl_private_key_from_der(struct dsa_params *params,
                                 mpz_t pub, mpz_t priv,
                                 unsigned p_max_bits,
                                 size_t length, const uint8_t *data);
These conversion functions all take a separate dsa_params argument, and don't use struct dsa_public_key and dsa_private_key.
Regards, /Niels
On Thu, Mar 13, 2014 at 11:22 AM, Niels Möller nisse@lysator.liu.se wrote:
These functions now take the parameters separately. One could get something more in style with the old interface by defining
#define dsa_sign(pub, key, [...]) dsa_sign(&pub->params, key->x, [...])
#define dsa_verify(pub, [...]) dsa_verify(&pub->params, pub->y, [...])
Hello, It would be nice if the new interface were API compatible with the old one, especially if that were possible with macros such as the ones you show, so that compatibility bloat doesn't need to be included in the library.
It seems reasonable to provide one key generation function which also generates parameters, and one key generation function which takes fixed parameters as argument. Any suggestion for naming? For compatibility, it would be preferable to keep dsa_generate_keypair unchanged, and invent a new name for the function above.

/* Convenience structs, close to the interface used in nettle-2.7.x
   and earlier. */
struct dsa_public_key {
  struct dsa_params params;
  /* Public value */
  mpz_t y;
};

This is essentially the same struct as in earlier versions, but it wraps the parameters in a struct, so it's an API change. I think this is the sane way to do it, if this is viewed as an interface to be supported in future versions, and not just something retained for backwards compatibility.
I thought that this was used to keep API compatibility. I don't think it makes sense to introduce new APIs that are close to old APIs. There is already a new API.
int
dsa_generate_keypair_old(struct dsa_public_key *pub,
                         struct dsa_private_key *key,
                         void *random_ctx, nettle_random_func *random,
                         void *progress_ctx, nettle_progress_func *progress,
                         unsigned p_bits, unsigned q_bits);
As said above, there's a naming issue here. I think it would be nice to keep this function with name and prototype unchanged.
If old sources can be compiled with no changes, I agree it should keep the same name. If not, I don't see many advantages of keeping an API that looks like the old but is not compatible with it. It will take up space, and require maintaining more code.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
It would be nice if the new interface were API compatible with the old one, especially if that were possible with macros such as the ones you show, so that compatibility bloat doesn't need to be included in the library.
[...]
I thought that this was used to keep API compatibility. I don't think it makes sense to introduce new APIs that are close to old APIs. There is already a new API.
You have a good point there.
I'm really not sure about the best way to go about it. Another option might be to purge all deprecated stuff from dsa.h, and put it into a new file dsa-compat.h. The main practical advantage would be that dsa-compat.h can then also tweak the name mangling of dsa_generate_keypair, so that if you include this file, it refers to a function compatible with the previous version.
Then applications that want to stick to the old interface (in order to support both old and new nettle) could get that reasonably easy, with a
#include <nettle/dsa.h>
#if HAVE_NETTLE_DSA_COMPAT_H
#include <nettle/dsa-compat.h>
#endif
And speaking of *-compat.h files, does anybody use rsa-compat.h (RSAREF-compatible interface) or des-compat.h (libdes-compatible interface)?
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
I'm really not sure about the best way to go about it. Another option might be to purge all deprecated stuff from dsa.h, and put it into a new file dsa-compat.h.
I've now done this, and pushed it on the dsa-reorg branch. I also killed struct dsa_value. The new advertised interface is now like
int
dsa_sign(const struct dsa_params *params, const mpz_t priv,
         void *random_ctx, nettle_random_func *random,
         size_t digest_size, const uint8_t *digest,
         struct dsa_signature *signature);

int
dsa_verify(const struct dsa_params *params, const mpz_t pub,
           size_t digest_size, const uint8_t *digest,
           const struct dsa_signature *signature);
Unless there are objections, I think I'm going to merge this to the master branch as soon as I get the time. (Most likely a manual merge, to avoid checking in dead ends). We really need to get this over with.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
Unless there are objections, I think I'm going to merge this to the master branch as soon as I get the time. (Most likely a manual merge, to avoid checking in dead ends). We really need to get this over with.
Pushed now.
Regards, /Niels
nettle-bugs@lists.lysator.liu.se