I think you are conflating range with interpretation. Both a Latin1 string and a UTF-8 encoded one are 8-bit strings (with a 0-255 range). What would be useful is a datatype declaring that the elements are not Unicode characters (as they are in the Latin1 case) but units of some raw binary encoding (as they are in the UTF-8 case), optionally also specifying which encoding. This has been suggested before ("buffer" was one proposed name for the new datatype), but it has never been implemented, due to the difficulty of introducing such a datatype consistently while retaining backward compatibility.
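To illustrate the range-versus-interpretation point (using Python here only as a neutral example language, since its bytes type makes the distinction explicit): the same character yields different 8-bit element sequences depending on whether the elements are characters or encoding units.

```python
# The character "é" (U+00E9) stored as two different 8-bit strings:
latin1_bytes = "é".encode("latin-1")  # one element that IS the character
utf8_bytes = "é".encode("utf-8")      # two elements of a binary encoding

# Both are plain 8-bit data with a 0-255 range...
assert list(latin1_bytes) == [0xE9]
assert list(utf8_bytes) == [0xC3, 0xA9]

# ...so misinterpreting the UTF-8 bytes as Latin-1 characters silently
# produces two wrong characters instead of an error:
assert utf8_bytes.decode("latin-1") == "Ã©"
```

Nothing in the 8-bit string itself records which interpretation is intended, which is exactly what the proposed "buffer" datatype would declare.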
(The idea was that the new datatype would be used for I/O, which always needs to be encoded somehow, and that it would not internalize (hash) the values, since that is generally less useful for encoded strings.)