/.../ It seems that, through the years, the Sql interface wanted to have real types but slowly mutated to strings only.
Yes, that's suggested by the OLD_SQL_COMPAT define too. I'm not sure how it came about, but it might have been an odd way to fix the integer 0 vs. NULL problem.
Well, a NULL value basically has any type (in SQL). I'm not sure what would be the most natural way to extend this into Pike, but I figured the closest we could get was something like the code above. Since the NULL value has "any" type, it's not lying as such.
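For concreteness, a minimal sketch of what I have in mind (the class name SqlNull is made up here; the real code above may differ). The _is_type lfun is what the type predicates such as arrayp() and intp() consult, so the object simply claims every basic type, just as NULL belongs to every type in SQL:

  class SqlNull {
    // SQL NULL is a member of every type, so claim them all.
    // arrayp(), intp(), stringp() etc. all consult this lfun.
    protected int _is_type(string basic_type) {
      return 1;
    }
  }

With that, arrayp(SqlNull()), intp(SqlNull()) and so on all return 1.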
I think one should view this more from the Pike angle. If e.g. arrayp(x) is true, a Pike programmer typically expects that all operators and functions for arrays will work on x. That's not possible to achieve when x is an object (most builtin functions expecting arrays would still balk, for example). So even if you added a bunch of backtick functions, it still wouldn't work 100%.
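For example, with the hypothetical SqlNull sketch from the previous message:

  int main() {
    object null = SqlNull();               // claims to be an array
    write("arrayp: %d\n", arrayp(null));   // writes 1
    mixed err = catch {
      mixed rows = null + ({ "x" });       // no `+ defined, so this throws
    };
    write("builtin `+ %s.\n", err ? "still balks" : "works");
    return 0;
  }

arrayp() is fooled, but the builtin + still refuses the object.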
That's why I think _is_type should be used very sparingly. In this case we don't have compatibility concerns (assuming new query calls), so there's no need for that kind of pretence.
Adding some backticks like `+ and ``+ to make it behave a little bit like builtin types is another matter. That's just a convenience; the user must still know that there's really an object involved.
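E.g. something like this (just a sketch; for simplicity the lfuns only handle the binary case, and I treat NULL as a neutral element, mirroring "" for strings and 0 for ints):

  class SqlNull {
    protected mixed `+(mixed arg)  { return arg; }   // null + x  ->  x
    protected mixed ``+(mixed arg) { return arg; }   // x + null  ->  x
  }

So "foo" + SqlNull() + "bar" concatenates to "foobar", but sizeof(), indexing etc. would still have to be taught about the object separately.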
The problem is, the NULL value *can* be cast to any type; it's just that for any type other than the ones mentioned above, the result is exactly that: undefined.
Here again I think it's more correct to view it from a Pike perspective rather than an SQL one, since it actually is Pike we've got at this end of the interface. The return value from e.g. cast("multiset") should really be a multiset, and UNDEFINED isn't a multiset. If a cast function cannot return a value of the indicated type, it should always throw an error.
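I.e. a cast lfun along these lines (a sketch; which types get a defined value is of course up for discussion):

  class SqlNull {
    protected mixed cast(string type) {
      switch (type) {
        case "int":    return 0;    // the types where NULL has an
        case "float":  return 0.0;  // agreed-upon neutral value
        case "string": return "";
        default:
          // No multiset (or mapping, array, ...) can faithfully
          // represent NULL, so refuse loudly instead of returning
          // UNDEFINED.
          error("Cannot cast SQL NULL to %s.\n", type);
      }
    }
  }

Then (int) SqlNull() gives 0, while (multiset) SqlNull() throws.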