On Thu, Apr 25, 2013 at 10:44 AM, Niels Möller nisse@lysator.liu.se wrote:
Simon Josefsson simon@josefsson.org writes:
Re the 3.0 plans, it seems that making a 3.0 release that implements the first two items on the list (unsigned->size_t, memxor namespace) would be possible relatively soon, with a modest investment of work.
I've done the memxor thing now, and I'm looking into size_t. When switching to size_t, should we use it for *all* lengths, or only for lengths which could potentially be large? For example, consider aes.h: I think it's clear that we ought to switch to size_t for

  void aes_encrypt(const struct aes_ctx *ctx,
                   unsigned length, uint8_t *dst, const uint8_t *src);
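For concreteness, the size_t variant under discussion would presumably just change the type of the length parameter, something like this (a sketch, assuming nothing else in the prototype changes):

  void aes_encrypt(const struct aes_ctx *ctx,
                   size_t length, uint8_t *dst, const uint8_t *src);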
Is this the only reason to break binary compatibility? If so, I'd say delay that change until there is another, more important reason to break compatibility. GnuTLS 3.x broke compatibility with 2.12.x and still hasn't been included in most distributions, 2 years after its release.
On the practical side, I don't think switching from unsigned to size_t in cryptographic functions would buy much, and it would actually occupy 4 more bytes on the stack on 64-bit systems. There aren't many applications (that I'm aware of) that need to encrypt more than 4GB of data in a single call. Even if there are, it is much simpler (and more memory-efficient) to break the encryption into chunks, and as far as I remember some encryption modes (I think it was in-place cbc) allocate memory equal to the input size, which makes them unsuitable for such large sizes anyway.
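For illustration, chunking on top of the current unsigned interface is just a short loop. A rough sketch (encrypt_large and CHUNK are made-up names here; the 64 MiB chunk size is arbitrary, chosen as a multiple of AES_BLOCK_SIZE):

  #include <nettle/aes.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Arbitrary chunk size, a multiple of AES_BLOCK_SIZE. */
  #define CHUNK (64 * 1024 * 1024)

  static void
  encrypt_large(const struct aes_ctx *ctx,
                size_t length, uint8_t *dst, const uint8_t *src)
  {
    /* Each aes_encrypt() call gets a length that fits in unsigned. */
    while (length > 0)
      {
        unsigned n = length > CHUNK ? CHUNK : (unsigned) length;
        aes_encrypt (ctx, n, dst, src);
        src += n;
        dst += n;
        length -= n;
      }
  }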
regards, Nikos