Well, if it converts 16-bit characters to UTF-8, then it must have an opinion about how to interpret those 16-bit characters (probably as UCS-2 or UTF-16). So I'd say that it _does_ have an opinion about what a (16-bit) character is, and that it differentiates them from (not necessarily character) 8-bit sequences. Just like Oracle differentiates character and binary data.
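
To illustrate the point (a small Python sketch, not anyone's actual conversion code): the same pair of 16-bit code units gives a different answer depending on whether the converter reads them as UTF-16 or UCS-2, so producing UTF-8 forces it to take a stance on what those units mean.

```python
# Two 16-bit code units that form a surrogate pair in UTF-16.
units = [0xD83D, 0xDE00]

# Read as UTF-16, the pair is a single character (U+1F600),
# which encodes to four UTF-8 bytes.
utf16_bytes = b"".join(u.to_bytes(2, "big") for u in units)
text = utf16_bytes.decode("utf-16-be")
print(text.encode("utf-8").hex())  # f09f9880

# Read as UCS-2, each unit would be its own code point, and
# lone surrogates are not representable in well-formed UTF-8,
# so the converter cannot stay neutral about the interpretation.
```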