I have updated the release plans, at http://www.lysator.liu.se/~nisse/nettle/plan.html. I aim to get a release out within a month or two.
Regards, /Niels
Tue 2013-04-02 at 10:45 +0200, Niels Möller wrote:
I have updated the release plans, at http://www.lysator.liu.se/~nisse/nettle/plan.html. I aim to get a release out within a month or two.
Re the 3.0 plans, it seems that making a 3.0 release that implements the first two items on the list (unsigned->size_t, memxor namespace) would be achievable relatively soon with a modest investment of work. I think it would be good to get that part out sooner rather than later.
Considering the significant time it takes for people to migrate across library versions with ABI incompatibilities, I suggest considering a 3.0 release with only those two changes soon after 2.7, and postponing the rest of the current 3.0 items to a future 3.1 or similar. That way, we'll get an API/ABI-clean version of the library out soon that people can start to migrate to.
Just an idea. I always feel that these "distant future" release plans rarely materialize; you just keep adding nice-to-have features to the "distant future" release without ever getting closer to implementing them. Eventually the list becomes so long that it is practically impossible to implement all of it in a single major release.
Just an idea, and keep up the good work.
/Simon
Simon Josefsson simon@josefsson.org writes:
Just an idea. I always feel that these "distant future" release plans rarely materialize;
True. We'll see what to do after 2.7.
Regards, /Niels
Simon Josefsson simon@josefsson.org writes:
Re the 3.0 plans, it seems that making a 3.0 release that implements the first two items on the list (unsigned->size_t, memxor namespace) would be achievable relatively soon with a modest investment of work.
I've done the memxor thing now, and I'm looking into size_t. When switching to size_t, should we use it for *all* lengths, or only those lengths that can potentially be large? For example, consider aes.h:
I think it's clear that we ought to switch to size_t for
void aes_encrypt(const struct aes_ctx *ctx, unsigned length,
                 uint8_t *dst, const uint8_t *src);
void aes_decrypt(const struct aes_ctx *ctx, unsigned length,
                 uint8_t *dst, const uint8_t *src);
(not that it seems particularly likely that those functions will be called with more than 4GB at a time, but the API shouldn't make it impossible). But what about
void aes_set_encrypt_key(struct aes_ctx *ctx, unsigned length,
                         const uint8_t *key);
The length here must be one of the integers 16, 24 or 32. Should we stick to unsigned here, or use size_t for consistency?
Other key sizes are a bit more subtle. E.g., hmac keys can be up to 2^64 bits (or whatever the maximum input size of the underlying hash function is, like the ridiculously large limit of 2^128 bits for SHA512), but all keys used in practice are going to have a size that fits in 32 bits (or even 16 bits). RSA bit sizes are similarly unlimited in theory but pretty small in any reasonable practice.
I think it would make some sense to adopt the principle that key sizes use unsigned (since keys by definition are small objects), and message sizes use size_t. Which would still leave some corner cases, like rsa_encrypt where only messages of small size (depending on the key size) are possible.
Comments?
Regards, /Niels
On Thu, Apr 25, 2013 at 10:44 AM, Niels Möller nisse@lysator.liu.se wrote:
Simon Josefsson simon@josefsson.org writes:
Re the 3.0 plans, it seems that making a 3.0 release that implements the first two items on the list (unsigned->size_t, memxor namespace) would be achievable relatively soon with a modest investment of work.
I've done the memxor thing now, and I'm looking into size_t. When switching to size_t, should we use it for *all* lengths, or only those lengths that can potentially be large? For example, consider aes.h: I think it's clear that we ought to switch to size_t for void aes_encrypt(const struct aes_ctx *ctx, unsigned length, uint8_t *dst, const uint8_t *src);
Is this the only reason to break binary compatibility? If so, I'd say delay that change until there is another, more important reason to break compatibility. GnuTLS 3.x broke compatibility with 2.12.x and hasn't been included in most distributions yet, two years after its release.
On the practical side, I don't think switching unsigned to size_t in cryptographic functions would buy much, and it would actually occupy 4 more bytes on the stack on 64-bit systems. There aren't many applications (that I'm aware of) that need to encrypt more than 4 GB of data in a single call. Even if there are, it is much simpler (and more memory-efficient) to break the encryption into chunks, and as far as I remember some encryption modes (I think it was in-place cbc) allocate memory equal to the input size, which makes them unsuitable for such large sizes anyway.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Is this the only reason to break binary compatibility?
The other obvious thing is to get memxor in the nettle_ symbol name space. And changing memxor argument types from uint8_t * to void * (strictly speaking, that's an API change, not an ABI change).
There are some other, less trivial, changes I'm considering.
* Making the ctx argument of nettle_cipher_func const, and restricting it to block ciphers only. (And then doing some different abstraction(s) for arcfour and salsa20).
* Doing something about the hash/hmac interface, to avoid having to allocate three different buffers for a single hmac context.
* Tweaks to other context structs. E.g., for AES we have the nrounds field last, after the subkeys. If one could move it first, then one could allocate less space for subkeys when using shorter AES key sizes (not entirely sure how to make a decent C API for that, though).
* See if we can arrange for 16-byte alignment for structures where it matters.
The ABI break for nettle-2.1 didn't cause any large problems, as far as I'm aware. But I guess nettle has more users now than it had three years ago.
Regards, /Niels
On Thu, Apr 25, 2013 at 2:55 PM, Niels Möller nisse@lysator.liu.se wrote:
- Doing something about the hash/hmac interface, to avoid having to allocate three different buffers for a single hmac context.
Moreover, I need two hmac contexts in order to implement reset(). Since TLS uses the same key for every packet, I would either need to call hmac_set_key() on every packet (which is expensive), or save all states and reload them on reset(). With plain HMAC the memory for the hashes was not that significant, but with UMAC that method is quite wasteful. I don't see a straightforward solution to that, though, without a high-level API.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Moreover, I need two hmac contexts in order to implement reset().
Can you explain how this works and what is needed? I don't remember much of TLS, so I have no idea what "reset" means here.
With plain HMAC the memory for the hashes was not that significant, but with UMAC that method is quite wasteful. I don't see a straightforward solution to that, though, without a high-level API.
Would it help to have a separate struct for the expanded key, and use that key with several per-message contexts? A bit like the split between struct gcm_key and struct gcm_ctx, in gcm.h? The same could be done also with hmac, if needed.
Regards, /Niels
On Thu, Apr 25, 2013 at 7:51 PM, Niels Möller nisse@lysator.liu.se wrote:
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
Moreover, I need two hmac contexts in order to implement reset().
Can you explain how this works and what is needed? I don't remember much of TLS, so I have no idea what "reset" means here.
The current HMAC API assumes that the hashing state is kept per call. That is, if I have to hash a series of packets with contents X_0, X_1, ..., X_n, I do:

hmac_set_key(s);
for (i = 0; i <= n; i++) {
  hmac_update(s, X_i);
  hmac_digest(s, output);
}
In this approach the output for X_i contains the state which resulted from hashing X_(i-1). In TLS, however, hmac is used simply as:

for (i = 0; i <= n; i++) {
  hmac_set_key(s);
  hmac_update(s, X_i);
  hmac_digest(s, output);
}
that is, the MAC of X_i is independent of X_(i-1), and no state is kept across records. The reset I am mentioning is a simplification of the above:

hmac_set_key(s);
for (i = 0; i <= n; i++) {
  hmac_update(s, X_i);
  hmac_digest(s, output);
  hmac_reset(s);
}
and effectively sets the state s, to the same values it was after hmac_set_key().
With plain HMAC the memory for the hashes was not that significant, but with UMAC that method is quite wasteful. I don't see a straightforward solution to that, though, without a high-level API.
Would it help to have a separate struct for the expanded key, and use that key with several per-message contexts? A bit like the split between struct gcm_key and struct gcm_ctx, in gcm.h? The same could be done also with hmac, if needed.
That looks like a nice and clean solution. Would it be something like:

hmac_set_key(struct hmac_key *);
hmac_init(struct hmac_ctx *, struct hmac_key *);
hmac_update(struct hmac_ctx *);
hmac_digest(struct hmac_ctx *, output);

?
It would be nice if umac could be used under such an abstraction (or if umac_set_nonce would imply the reset).
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
The current HMAC API assumes that the hashing state is kept per call.
I don't think so, but maybe I misunderstand you (or maybe you have found a bug?).
That is, if I have to hash a series of packets with contents X_0, X_1, ..., X_n, I do:

hmac_set_key(s);
for (i = 0; i <= n; i++) {
  hmac_update(s, X_i);
  hmac_digest(s, output);
}
That loop should compute HMAC(key, X_0), HMAC(key, X_1), and so on, with X_0 affecting only the first digest.
for (i = 0; i <= n; i++) {
  hmac_set_key(s);
  hmac_update(s, X_i);
  hmac_digest(s, output);
}
And so should this (assuming you pass the same key to set_key every time).
Both hmac_set_key and hmac_digest end with identical calls
memcpy(state, inner, hash->context_size);
to set the state properly for hashing a new message.
hmac_set_key(struct hmac_key*) hmac_init(struct hmac_ctx*, struct hmac_key*) hmac_update(struct hmac_ctx*) hmac_digest(struct hmac_ctx*, output)
Something like that would make sense.
It would be nice if umac could be used under such an abstraction (or if the umac_set_nonce would imply the reset).
umac_digest should imply a reset (and an increment of the nonce, if you don't call set_nonce explicitly).
Regards, /Niels
On Fri, Apr 26, 2013 at 8:21 PM, Niels Möller nisse@lysator.liu.se wrote:
The current HMAC API assumes that the hashing state is kept per call.
I don't think so, but maybe I misunderstand you (or maybe you have found a bug?).
It seems you're correct; sorry for the noise. I interpreted Nettle's _digest() functions like the digest output functions in gcrypt, which don't reset the hash. I should have read the documentation more carefully. The current code is thus fine for my usage.
regards, Nikos
Nikos Mavrogiannopoulos n.mavrogiannopoulos@gmail.com writes:
It seems you're correct; sorry for the noise. I interpreted Nettle's _digest() functions like the digest output functions in gcrypt, which don't reset the hash.
So in Nettle, you instead need to copy the state (or start over) if you want both the digest of X_0 and the digest of the concatenation X_0 + X_1. I'd expect that to be the less common use case.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
There are some other, less trivial, changes I'm considering.
- Tweaks to other context structs. E.g., for AES we have the nrounds field last, after the subkeys. If one could move it first, then one could allocate less space for subkeys when using shorter AES key sizes (not entirely sure how to make a decent C API for that, though).
I think it's going to get a bit messy to handle structs of varying size. I think the simplest way would be to arrange for the internal AES functions to take the number of rounds and the subkeys as separate arguments, and then define separate context structs and functions for each key size, like
struct aes128_ctx { uint32_t keys[44]; };
struct aes192_ctx { uint32_t keys[52]; };
struct aes256_ctx { uint32_t keys[60]; };
There should be no problem also keeping the current AES interface (with variable key size) for backwards compatibility.
All the public AES functions would then call the same internal functions, specifying the appropriate number of rounds/subkeys in each case.
Does that make sense? My impression is that most applications and protocols typically treat AES128 and AES256 as different algorithms, and have little use for an interface where the same function accepts a variable key size.
And to recall, the motivation for the change is to avoid useless allocation for the common case of AES128.
Regards, /Niels
On Mon, Apr 29, 2013 at 9:41 AM, Niels Möller nisse@lysator.liu.se wrote:
Does that make sense? My impression is that most applications and protocols typically treat AES128 and AES256 as different algorithms, and have little use for an interface where the same function accepts a variable key size.
At least for gnutls, that looks fine.
regards, Nikos
nisse@lysator.liu.se (Niels Möller) writes:
I think the simplest way would be to arrange for the internal AES functions to take the number of rounds and the subkeys as separate arguments, and then define separate context structs and functions for each key size, like
struct aes128_ctx { uint32_t keys[44]; };
struct aes192_ctx { uint32_t keys[52]; };
struct aes256_ctx { uint32_t keys[60]; };
There should be no problem also keeping the current AES interface (with variable key size) for backwards compatibility.
I've pushed a branch "aes-reorg" to the public repo, implementing this change.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
I've pushed a branch "aes-reorg" to the public repo, implementing this change.
I'm now about to merge this into the master branch. Some questions:
Now that more ciphers in Nettle work with a fixed key size, maybe it would be a good idea to drop the length argument also from the nettle_set_key_func typedef? This is used primarily for struct nettle_cipher (nettle-meta.h), where using a function pointer with more than one size makes little sense. Since various incompatible changes are being made for nettle-2.8 anyway, this could be a good time.
Also, other AES-style algorithms, in particular twofish and camellia, could get changes similar to AES. E.g., the twofish implementation appears to zero-pad keys of unusual sizes up to the next standard size. Is that an important feature, or can I change it to just twofishN_set_key(struct twofishN_ctx *ctx, const uint8_t *key), for N = 128, 192 and 256?
Regards, /Niels
You wrote:
Now that more ciphers in Nettle work with a fixed key size, maybe it would be a good idea to drop the length argument also from the nettle_set_key_func typedef? This is used primarily for struct nettle_cipher (nettle-meta.h), where using a function pointer with more than one size makes little sense.
How would you then handle ciphers that accept arbitrary key sizes?
Also, other AES-style algorithms, in particular twofish and camellia, could get changes similar to AES. E.g., the twofish implementation appears to zero-pad keys of unusual sizes up to the next standard size. Is that an important feature, or can I change it to just twofishN_set_key(struct twofishN_ctx *ctx, const uint8_t *key), for N = 128, 192 and 256?
I'd say drop it unless someone has a use-case for it.
/Simon
I wrote:
Now that more ciphers in Nettle work with a fixed key size, maybe it would be a good idea to drop the length argument also from the nettle_set_key_func typedef?
Simon Josefsson simon@josefsson.org writes:
How would you then handle ciphers that accept arbitrary key sizes?
You would call the algorithm-specific functions which accept a variable key size, e.g., cast128_set_key. But you will no longer be able to pass an arbitrary key size via the function pointer nettle_cast128.set_*_key, which will then be a wrapper function specifying a fixed key size.
So this change affects the interface via the structs in nettle-meta.h, and any application code which uses the nettle_set_key_func typedef for other purposes. Direct use of the algorithm-specific functions keeps following the same principle: algorithms with a variable key size have a _set_key function with a length parameter; algorithms with a fixed key size take no length parameter.
I think this makes sense, because struct nettle_cipher contains no information about possible key sizes, only a single value. So if other key sizes are possible, there's no way to tell from looking at the struct nettle_cipher instance that represents the algorithm, except by examining the name field...
For aes, aes_set_*_key (which is kept for backwards compatibility) accepts a length parameter, but the new and recommended functions, aes128_set_*_key, aes192_set_*_key, aes256_set_*_key, do not. In effect, the new interface treats aes128, aes192 and aes256 as distinct algorithms, with any similarities being an implementation detail. And I'm now considering doing the same with other aes-style algorithms.
Regards, /Niels
You wrote:
I wrote:
Now that more ciphers in Nettle work with a fixed key size, maybe it would be a good idea to drop the length argument also from the nettle_set_key_func typedef?
Simon Josefsson simon@josefsson.org writes:
How would you then handle ciphers that accept arbitrary key sizes?
You would call the algorithm-specific functions which accept a variable key size, e.g., cast128_set_key. But you will no longer be able to pass an arbitrary key size via the function pointer nettle_cast128.set_*_key, which will then be a wrapper function specifying a fixed key size.
Ok, sounds fine to me.
For aes, aes_set_*_key (which is kept for backwards compatibility) accepts a length parameter, but the new and recommended functions,
If you bump the ABI, is there any reason to keep backwards compatible functions? It sounds like the changes you are considering may break applications anyway, so they might as well take the time to upgrade to the new API.
/Simon
Simon Josefsson simon@josefsson.org writes:
If you bump the ABI, is there any reason to keep backwards compatible functions?
The ABI will break, but I'm also considering source-level "API" compatibility. Which isn't an all-or-nothing issue; different applications use different subsets of the API.
Regards, /Niels
You wrote:
Simon Josefsson simon@josefsson.org writes:
If you bump the ABI, is there any reason to keep backwards compatible functions?
The ABI will break, but I'm also considering source-level "API" compatibility. Which isn't an all-or-nothing issue; different applications use different subsets of the API.
Sure. If it were easier, one would want to, for example, build all reverse dependencies in Debian against an updated Nettle to see what percentage of packages break and how... but this is not that simple to do.
/Simon
nisse@lysator.liu.se (Niels Möller) writes:
Also, other AES-style algorithms, in particular twofish and camellia, could get changes similar to AES.
I've had a closer look now. I was a bit mistaken about twofish; it uses the same number of rounds and subkeys independent of the key size. So the remaining algorithms with a number of subkeys depending on the key size are camellia and cast128.
Camellia uses fewer subkeys for 128 bit keys than for 192 or 256 bit keys. So this is a bit similar to AES, and I think an analogous reorg would make sense.
As for cast128, the smaller number of rounds applies only to key sizes shorter than 80 bits (possible choices: 40, 48, 56, 64 or 72 bits). Maybe we can drop support for that entirely, and limit the possible key sizes to 80, 88, 96, 104, 112, or 128 bits? Does anyone have a use case, e.g., interop with some old application, where CAST128 with shorter keys is important? I'd imagine anyone using CAST128 today is using the maximum key size of 128 bits.
If I go through with the plan to drop the length argument from nettle_cipher.set_*_key, I think I should also add some public wrapper functions such as
void
twofish128_set_key (struct twofish_ctx *ctx, const uint8_t *key)
{
  twofish_set_key (ctx, 16, key);
}
for algorithms which allow a variable key size. (The current situation is that instead, ciphers with a *fixed* key size, like the new aes128_set_encrypt_key, need wrappers for their corresponding nettle_cipher instance.) Naming the cast128 wrapper function for the common case of 128-bit keys will be a challenge, though...
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
Camellia uses fewer subkeys for 128 bit keys than for 192 or 256 bit keys. So this is a bit similar to AES, and I think an analogous reorg would make sense.
I tried this out now. New header file below.
camellia192 and camellia256 are really the same, except for the key schedule. Hence some #defines make several camellia192 symbols aliases for the corresponding camellia256 ones. I don't quite like using a #define on the struct tag, but I hope that's ok. And maybe I should substitute _CAMELLIA*_NKEYS for _CAMELLIA*_ROUNDS, since it's not exactly a round count.
Comments appreciated.
Regards, /Niels
/* camellia.h
 *
 * Copyright (C) 2006,2007
 * NTT (Nippon Telegraph and Telephone Corporation).
 *
 * Copyright (C) 2010, 2013 Niels Möller
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 */
#ifndef NETTLE_CAMELLIA_H_INCLUDED
#define NETTLE_CAMELLIA_H_INCLUDED
#include "nettle-types.h"
#ifdef __cplusplus
extern "C" {
#endif
/* Name mangling */
#define camellia128_set_encrypt_key nettle_camellia128_set_encrypt_key
#define camellia128_set_decrypt_key nettle_camellia128_set_decrypt_key
#define camellia128_invert_key nettle_camellia128_invert_key
#define camellia128_crypt nettle_camellia128_crypt
#define camellia192_set_encrypt_key nettle_camellia192_set_encrypt_key
#define camellia192_set_decrypt_key nettle_camellia192_set_decrypt_key
#define camellia256_set_encrypt_key nettle_camellia256_set_encrypt_key
#define camellia256_set_decrypt_key nettle_camellia256_set_decrypt_key
#define camellia256_invert_key nettle_camellia256_invert_key
#define camellia256_crypt nettle_camellia256_crypt
#define CAMELLIA_BLOCK_SIZE 16

/* Valid key sizes are 128, 192 or 256 bits (16, 24 or 32 bytes) */
#define CAMELLIA128_KEY_SIZE 16
#define CAMELLIA192_KEY_SIZE 24
#define CAMELLIA256_KEY_SIZE 32
/* For 128-bit keys, there are 18 regular rounds, pre- and
   post-whitening, and two FL and FLINV rounds, using a total of 26
   subkeys, each of 64 bits. For 192- and 256-bit keys, there are 6
   additional regular rounds and one additional FL and FLINV, using a
   total of 34 subkeys. */
/* The clever combination of subkeys implies that one of the pre- and
   post-whitening keys is folded into the round keys, so that subkey #1
   and the last one (#25 or #33) are not used. The result is that we
   have only 24 or 32 subkeys at the end of key setup. */
#define _CAMELLIA128_ROUNDS 24
#define _CAMELLIA256_ROUNDS 32
struct camellia128_ctx
{
  uint64_t keys[_CAMELLIA128_ROUNDS];
};
void camellia128_set_encrypt_key(struct camellia128_ctx *ctx, const uint8_t *key);
void camellia128_set_decrypt_key(struct camellia128_ctx *ctx, const uint8_t *key);
void camellia128_invert_key(struct camellia128_ctx *dst, const struct camellia128_ctx *src);
void camellia128_crypt(const struct camellia128_ctx *ctx, size_t length, uint8_t *dst, const uint8_t *src);
struct camellia256_ctx
{
  uint64_t keys[_CAMELLIA256_ROUNDS];
};
void camellia256_set_encrypt_key(struct camellia256_ctx *ctx, const uint8_t *key);
void camellia256_set_decrypt_key(struct camellia256_ctx *ctx, const uint8_t *key);
void camellia256_invert_key(struct camellia256_ctx *dst, const struct camellia256_ctx *src);
void camellia256_crypt(const struct camellia256_ctx *ctx, size_t length, uint8_t *dst, const uint8_t *src);
/* camellia192 is the same as camellia256, except for the key schedule. */
/* Slightly ugly with a #define on a struct tag, since it might cause
   surprises if also used as a name of a variable. */
#define camellia192_ctx camellia256_ctx
void camellia192_set_encrypt_key(struct camellia256_ctx *ctx, const uint8_t *key);
void camellia192_set_decrypt_key(struct camellia256_ctx *ctx, const uint8_t *key);
#define camellia192_invert_key camellia256_invert_key
#define camellia192_crypt camellia256_crypt
#ifdef __cplusplus
}
#endif
#endif /* NETTLE_CAMELLIA_H_INCLUDED */
nisse@lysator.liu.se (Niels Möller) writes:
Camellia uses fewer subkeys for 128 bit keys than for 192 or 256 bit keys. So this is a bit similar to AES, and I think an analogous reorg would make sense.
I tried this out now. New header file below.
Now merged into the master branch. Unlike the AES reorg, there's *no* backwards-compatible camellia_ctx. Is that ok?
My thinking on backwards compatibility is:
1. ABI compatibility (i.e., ability to link applications with the new version of the library, without recompilation) is broken anyway, so that's not an issue at the moment.
2. When it's trivial to implement a backwards compatible API (i.e., the ability to recompile applications with no changes to their source code), do it.
3. For features that are widely used, e.g., AES, implement backwards compatible interfaces also when that requires a bit more effort.
For (3), I think Camellia is borderline. It's been in Nettle for a couple of releases, but I expect few applications use it. And I should add that applications using nettle_camellia* from nettle-meta.h (and doing so in the documented way) are unaffected by this change.
Regards, /Niels
nisse@lysator.liu.se (Niels Möller) writes:
For (3), I think Camellia is borderline. It's been in Nettle for a couple of releases, but I expect few applications use it. And I should add that applications using nettle_camellia* from nettle-meta.h (and doing so in the documented way) are unaffected by this change.
One way to confirm theories about which applications use what is to establish a list of significant applications that use Nettle, which can be recompiled against a proposed Nettle release to see if they break. For you to do this is a lot of work, but maybe just establishing a list of applications which deserve to be tested against newer Nettle releases is useful -- then you can ask for volunteers to actually do the testing, and summarize the results. If nobody steps up for a particular application, then that project has a weaker rationale for complaining about problems later on. If a project not on the list complains about API/ABI breakage, it can be added to the list.
The above is just an idea, based on my experience with API/ABI breakage in libraries. I started out believing that the best approach to API/ABI rigidity was to do the right thing from a theoretical point of view -- i.e., if there is ABI breakage, bump the ABI -- but I've become more pragmatic over the years. It seems to cause fewer problems for everyone if you make sure the API/ABI is reasonable and works for the majority of well-maintained packages, even if that sometimes means violating the theoretical rules around API/ABI version bumping. Those rules cause friction and work for a lot of people every time they are exercised, and sometimes that can be counterproductive overall.
/Simon
nisse@lysator.liu.se (Niels Möller) writes:
I think it would make some sense to adopt the principle that key sizes use unsigned (since keys by definition are small objects), and message sizes use size_t.
I'm now leaning towards using size_t also for all function arguments which are key sizes. There should be no performance penalty in most cases, and only a very small penalty when arguments are passed on the stack. And it may reduce warnings about truncation to a smaller type when one passes, e.g., the output of strlen. I'll stick to unsigned in struct nettle_cipher and some of the internal functions, though.
I created the branch size_t-changes, now available also in the public repo. I think I have converted everything except the unfinished openpgp code.
Regards, /Niels
I prefer using size_t for all string and data sizes, if for no other reason than that it is what the C and POSIX standards use.
/Simon