He didn't say more specifically where in Java, or why. I can't see it as anything but a bug that ought to be fixed there. (Not that that's any reason against providing the option to decode non-shortest forms; that's decided by the severity of the bug and how common its faulty output is in practice.)
You can read all about the reasons why decoding non-shortest forms is bad in the technical reports at unicode.org, e.g. http://www.unicode.org/reports/tr36/tr36-2.html. Basically, it's very useful to be able to interpret the various ASCII chars that have special meaning in protocols without knowing or caring whether UTF-8 is being used. A decoder that accepts non-shortest forms breaks that: an ASCII byte like '/' can be disguised in an overlong sequence, so a byte-level filter never sees it, yet the character reappears after decoding. That's also one of the design goals for UTF-8, and it's only achieved by forbidding both encoding and decoding of non-shortest forms.
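A minimal sketch of the classic case (the names and setup are mine, not from the original post): 0xC0 0xAF is the overlong encoding of '/' (U+002F). A byte-level scan for 0x2F misses it, so everything hinges on the decoder rejecting the sequence. With java.nio you can force strict behaviour by setting the decoder's malformed-input action to REPORT:

    import java.nio.ByteBuffer;
    import java.nio.charset.CharacterCodingException;
    import java.nio.charset.CharsetDecoder;
    import java.nio.charset.CodingErrorAction;
    import java.nio.charset.StandardCharsets;

    public class OverlongDemo {
        public static void main(String[] args) {
            // Overlong (non-shortest) encoding of '/': two bytes instead of one.
            // A filter scanning the raw bytes for 0x2F will not find it here.
            byte[] overlongSlash = { (byte) 0xC0, (byte) 0xAF };

            CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT);
            try {
                String s = decoder.decode(ByteBuffer.wrap(overlongSlash)).toString();
                // Only reached if the decoder wrongly accepts the overlong form,
                // i.e. the smuggled '/' would now be visible to later processing.
                System.out.println("decoded (bad): " + s);
            } catch (CharacterCodingException e) {
                // A conforming UTF-8 decoder must reject non-shortest forms.
                System.out.println("rejected as malformed: " + e);
            }
        }
    }

Note that this only demonstrates how to get strict rejection; it doesn't show where in Java the original poster saw overlong forms being accepted.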