Jeffrey Walton noloader@gmail.com writes:
> It is easy enough to audit. Everywhere there is an assert() to assert a condition, then there should be an if() statement with the same test that returns failure.
Sorry for getting a bit off-topic, but I would like to advise against that practice.
In particular, it makes the error handling for the "return failure" case awkward to test properly. At least in my experience, it is desirable to be able to run the same test suite on either release or debug builds. But any tests intended to exercise that error handling will crash with an assertion failure when testing a debug build. I don't like that at all.
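To make the objection concrete, here is a hypothetical function written in the assert-plus-check style under discussion (copy_block and its contract are my invention, not Nettle code). A test that passes a NULL pointer to exercise the failure return will abort under a debug build, because the assert fires before the if() is ever reached:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical example of the duplicated-condition pattern: the same
   test appears both as an assert and as a runtime failure check. */
int
copy_block(unsigned char *dst, const unsigned char *src, size_t n)
{
  assert(src != NULL);   /* a debug build crashes here on bad input */
  if (src == NULL)       /* a release build (NDEBUG) returns failure here */
    return 0;
  while (n-- > 0)
    *dst++ = *src++;
  return 1;
}
```

So `copy_block(dst, NULL, 4)` returns 0 only when compiled with NDEBUG; the same call in a debug build never returns, which is exactly why the two build types cannot share one test suite for this path.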
Getting back to Nettle, I strive to design the interfaces so that there are no ways to fail, given valid inputs. No return value, and no error handling required from callers. And where practical, documented input requirements are backed up with asserts.
E.g., consider the _set_key method for some cipher. Due to an application bug, this function might be called with an empty key: a null key pointer or zero key length. Nettle will then crash, either with an assertion failure or with a fatal signal from a null pointer dereference.
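A minimal sketch of that design, assuming an invented toy cipher (toy_ctx, toy_set_key, and TOY_KEY_SIZE are illustrative names, not Nettle's actual API): no return value, no error handling for the caller, and the documented input requirements backed by asserts.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TOY_KEY_SIZE 16

struct toy_ctx { uint8_t subkeys[TOY_KEY_SIZE]; };

/* Illustrative sketch: given valid input there is no way to fail,
   so there is nothing to return. Invalid input trips an assert. */
void
toy_set_key(struct toy_ctx *ctx, size_t length, const uint8_t *key)
{
  assert(key != NULL);
  assert(length == TOY_KEY_SIZE);
  /* A real cipher would run its key schedule here; we just copy. */
  memcpy(ctx->subkeys, key, length);
}
```

A buggy call such as `toy_set_key(&ctx, 0, NULL)` crashes immediately in a debug build, rather than leaving the context uninitialized.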
We could avoid crashing by silently ignoring the error and doing nothing. That seems dangerous to me: the application might end up sending sensitive data encrypted with an uninitialized cipher, where the values of the uninitialized subkeys are likely easy for an attacker to guess.
We could return an error code, but for that to be a real improvement, *all* applications must clutter up their code with checks for errors from _set_key, errors which are never expected to happen, and with error handling that is inconvenient to test properly. If an application with the empty-key bug fails to check the return value, or checks it but has bugs in the error handling code, it ends up in the same situation: silently using a cipher with uninitialized, easy-to-guess subkeys.
On the other hand, the _set_key function for ciphers with weak keys does have a return value in Nettle. I think this makes sense because
1. It's Nettle's job to know which keys are weak, not the application's.
2. Checking for weak keys is somewhat optional. It might be useful in some cases, but not terribly useful if the application selects the key using some random process where the probability of a weak key is negligible. And for the applications where it does matter, the error handling should be reasonably easy to test.
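The weak-key case can be sketched like this (again an invented toy function, not Nettle's des_set_key; the two keys in the table are well-known DES weak keys). The key schedule runs unconditionally, and the return value only reports weakness, so a caller that doesn't care can simply ignore it:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define TOY_DES_KEY_SIZE 8

struct toy_des_ctx { uint8_t subkeys[TOY_DES_KEY_SIZE]; };

/* Two of the classic DES weak keys, for illustration. */
static const uint8_t weak_keys[][TOY_DES_KEY_SIZE] = {
  { 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x01 },
  { 0xfe, 0xfe, 0xfe, 0xfe, 0xfe, 0xfe, 0xfe, 0xfe },
};

/* Sketch: always set up the key, but return 0 for a weak key so the
   caller *may* reject it. Checking the result is optional. */
int
toy_des_set_key(struct toy_des_ctx *ctx, const uint8_t *key)
{
  size_t i;
  memcpy(ctx->subkeys, key, TOY_DES_KEY_SIZE); /* key schedule stand-in */
  for (i = 0; i < sizeof(weak_keys) / sizeof(weak_keys[0]); i++)
    if (memcmp(key, weak_keys[i], TOY_DES_KEY_SIZE) == 0)
      return 0; /* weak key */
  return 1;
}
```

Here the return value carries information only the library can know (point 1), and ignoring it still leaves a fully initialized cipher, unlike the empty-key case above.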
> Postel's law is dangerous nowadays. The threat landscape has changed. Look for any reason you can to fail processing. If you can't find a reason, then begrudgingly process the data.
I'm afraid I can't make much sense of this remark. You probably read the robustness principle (RFC 1122) differently than I do.
The way I read it, one should be prepared to receive all possible input from the remote end, make sure that everything allowed by the spec is handled correctly, and with proper error handling for anything invalid.
While when sending data, one should stay in the mainstream: avoid obscure and rarely used protocol features and corner cases, even when they are technically correct according to the spec.
Regards, /Niels