/.../ One must know that the strings passed in must not already be UTF-8 encoded, so that the conversion won't corrupt them. This is neither checked nor documented, and even if it were documented, it would force the user to do something to fulfill this requirement.
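A sketch of the failure mode (assuming the glue blindly applies string_to_utf8() on the way in):

    string s = "räksmörgås";             // decoded string, as kept internally
    string pre = string_to_utf8(s);      // caller has encoded it "by hand"
    string stored = string_to_utf8(pre); // glue encodes again: double-encoded
    // utf8_to_string(stored) now gives back pre, not s.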
It depends on your point of view. If your point of view is that you keep decoded strings internally, which is customary in Pike, then it's exactly the opposite - the implicit conversion lets you feed in and get back decoded strings without fuss.
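For example (the connection URL and table are made up for the sketch):

    Sql.Sql db = Sql.Sql("sqlite://test.db");
    db->query("INSERT INTO t VALUES (%s)", "räksmörgås"); // encoded implicitly
    // ... and decoded implicitly on the way back out:
    string v = db->query("SELECT v FROM t")[0]->v;        // == "räksmörgås"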
If there were a universal way to encode wide chars in normal files, then you could rest assured that the Stdio functions would do that encoding implicitly too, so that you wouldn't have to meddle with encoded strings in your Pike program when using the Stdio module.
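I.e. with today's Stdio you do the dance explicitly, something like:

    Stdio.write_file("out.txt", string_to_utf8(data));        // encode before writing
    string back = utf8_to_string(Stdio.read_file("out.txt")); // decode after reading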
/.../ There are thousands of functions available in different libraries, but their mere existence doesn't mean they should be used, nor does it trouble anyone.
That analogy is faulty. The case is rather the choice between <simple straightforward method> and <more complex method>. If the <more complex method> exists, one will assume it exists for a reason. Thus it's worth investigating and maybe deploying even though it makes the code clunkier. If the <more complex method> doesn't produce any measurable gain at all compared to the other one, it has no reason to exist.
But by now I think this discussion has in itself consumed more effort than either alternative is worth, so the flag is perfectly fine by me for that reason alone. I.e. as far as I'm concerned, you're welcome to go ahead and add it. There are certainly plenty of things in Pike already that are a lot worse any way you look at them, but which got in without nearly this amount of debate.
All this is not about "what is Pike for and what it is not" - it is a generic (so far) interpreted language, so it might be (and is) used for anything and everything. In turn, that means more control over what is going on, and how, is better than less control.
I don't agree. There are a lot of things you don't control in Pike. E.g. whether strings get hashed or not. One could certainly squeeze out a bit of extra performance by telling the interpreter not to hash certain strings. The same goes for the mandatory runtime argument checks, the limits of native integers, the hash method used in mappings, the allocation strategies in countless places, the reference counting policy, etc, etc, etc. All these things you don't control, and that is for a reason: to keep the language simple to use and - in some cases - simple to implement.
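(Take the integer limits, for instance - a one-liner shows how little there is to control; Pike just switches to bignums behind your back:

    write("%O\n", 1 << 100);  // -> 1267650600228229401496703205376, no overflow

and nothing in the program needs to know or care where the native limit was.)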
Also, none of these things, the implicit encoding in the SQLite interface included, impact the expressiveness of the language. You can still get the work done, just not _precisely_ in the way you might want to do it.
I just want to understand _why_ the SQLite module uses implicit conversions when no other DB module does this, that's all.
This is a good question, and note that I haven't argued this aspect so far.
The reason the other db modules don't do it is that there's no set policy in those db's for what the encoding is. I take it that SQLite has such a policy and that it's UTF-8 (please correct me if I'm wrong here).
So an argument against the SQLite glue doing any conversion is that the others don't do it and they are all accessible through a uniform interface. But otoh that interface doesn't really say anything about providing a uniform string encoding behavior. Actually, you can't do much at all through the Sql.Sql interface without knowing the database you're connected to, since so many details differ anyway.
So all in all, I think it's good that the glue to any db with a well defined encoding policy enforces that policy by default, since it makes the interface to those db's simpler to use. I've heard that newer MySQLs have a Unicode policy too. I'd be in favor of embedding encoding in that glue as well (but there the compatibility aspect is of course something to worry about).
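In sketch form, what such encoding-enforcing glue amounts to (low_query() stands in for the raw C-level call and utf8_conversion for the flag discussed above - both names are made up):

    class GluedDb {
      int utf8_conversion = 1;              // hypothetical flag, on by default
      protected string low_query(string q); // raw call into the C library
      string query(string q) {
        if (!utf8_conversion)
          return low_query(q);              // raw bytes; caller's responsibility
        // Enforce the db's UTF-8 policy: encode going in, decode coming out.
        return utf8_to_string(low_query(string_to_utf8(q)));
      }
    }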