Catching up, and I'd like to give my view of this since I'm responsible for it.
There is lots of code that simply assumes that a zero from read_bytes et al means the file doesn't exist. You can tell because the code happily continues with e.g. some kind of default, or simply doesn't process the file at all (think files in a queue). This is very broken behavior for other kinds of I/O errors, as people have already mentioned.
I ran across this in code that happily ignored input because of a permission problem. Nothing was reported, and the system just idled when it was supposed to be doing things. It was of course all too time-consuming to figure out what the problem actually was.
My aim with the change was to make the functions as useful as possible, considering they are convenience functions. They should only return for errors that the caller typically wants to handle, and throw the rest. My experience, and all the code I've seen that uses these functions, tells me that the one error callers want to handle is the file not existing, and nothing else.
So, the intention is that very little code using these functions should need to be changed. It only starts to behave better since exceptional errors (i.e. installation errors or hardware failures) get reported instead of being essentially ignored. I can only see two cases where changes are necessary:
1. Code that checks errno after getting a zero return value. (I can't recall ever having seen that.)
2. Code that regularly encounters other (i.e. exceptional, imo) I/O errors but has so far lumped them together with the common nonexistent-file error. Such code could probably be improved anyway to report these errors in different ways, since they differ in severity.