Yes, there's no doubt that the ability to process ASCII in encoded form is a property specific to each encoding. But I don't see why the Charset module should adopt a general principle that this property must be disregarded in the encodings that have it, just because other encodings lack it.
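For concreteness, here's the kind of use I have in mind (a minimal sketch; the string literal is just an example):

    // "sm\303\266rg\303\245s/fil" is the UTF-8 encoding of "smörgås/fil".
    string raw = "sm\303\266rg\303\245s/fil";
    // Every byte in a multibyte UTF-8 sequence is >= 0x80, so the
    // ASCII byte '/' can only occur where the decoded text has a
    // '/'. Splitting the raw bytes before decoding is therefore safe.
    array(string) parts = raw / "/";
    werror("%O\n", parts);  // ({ "sm\303\266rg\303\245s", "fil" })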
You are correct that someone who uses the Charset module with an arbitrary encoding can't assume this property, but if that person uses a predetermined encoding that is known to have it, why not let them take advantage of it? There are, after all, other cases where Charset module users rely on this and various other encoding-specific properties, typically in heuristics that guess the encoding.
But more importantly, this is not a matter of choice when it comes to UTF-8. The standard clearly states that an implementation MUST NOT decode non-shortest forms. A decoder that does no longer decodes UTF-8; it decodes a superset of UTF-8 or, if you like, a dated version of it. I think the decoder returned by Locale.Charset.decoder("utf-8") should comply with the UTF-8 standard. Nothing prevents adding more decoders to the Charset module for other variants; I can take it upon myself to provide an extended/historic encoder and decoder if someone proposes a name for it.
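To make the conformance point concrete, here's a sketch of what rejection looks like; I'm assuming the decoder reports the error by throwing, which is the behaviour I'm arguing for:

    // 0xC0 0xAF ("\300\257") is the non-shortest two-byte form of
    // U+002F '/'. A decoder that accepts it turns a byte string
    // containing no ASCII '/' into a string that has one -- the
    // classic way overlong forms sneak past byte-level filters.
    object dec = Locale.Charset.decoder("utf-8");
    mixed err = catch {
      dec->feed("\300\257");
      dec->drain();
    };
    // A conforming UTF-8 decoder must leave err set here; only an
    // extended/historic variant should ever yield "/".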
As for the argument that utf8_to_string should be used instead: it doesn't support streaming operation. When streaming isn't needed, I think most people already use utf8_to_string when they only deal with UTF-8.
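To illustrate why streaming matters: a multibyte sequence can be split at any byte boundary between reads, and the decoder object buffers the incomplete tail between calls (again assuming the usual feed()/drain() interface):

    object dec = Locale.Charset.decoder("utf-8");
    // "sm\303\266rg\303\245s" is UTF-8 for "smörgås"; the "ö"
    // ("\303\266") is split across the two chunks here.
    dec->feed("sm\303");           // dangling lead byte is buffered
    dec->feed("\266rg\303\245s");  // completes the sequence
    string s = dec->drain();       // "smörgås"
    // utf8_to_string("sm\303") would instead throw on the
    // truncated first chunk, since it must see everything at once.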