We need to work on configure times, though. The largest gain (about 400% or so) can be had by generating configure with an old version of autoconf.
And a secondary, rather large gain can be had by not having a separate configure script for each module, but instead generating a single toplevel configure from the subconfigures.
And removing redundant and otherwise unnecessary tests (I'm guilty here; I often add a test for each function even though the presence of one generally means the others are available too, such as the {get,set,list,llist,lset,lget,fget,fset,flist}xattr functions I just added).
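Something like this is what I have in mind (just a sketch, not the actual tests in the module's configure.in, and HAVE_XATTR is a made-up symbol here):

  dnl Before: one compile-and-link test per function.
  AC_CHECK_FUNCS(getxattr setxattr listxattr llistxattr lsetxattr lgetxattr
                 fgetxattr fsetxattr flistxattr)

  dnl After: a single probe standing in for the whole family.
  dnl If getxattr is there, assume the rest of the family is too.
  AC_CHECK_FUNC(getxattr, [
    AC_DEFINE(HAVE_XATTR)
  ])

Odd platforms that only have part of the family would of course still need the full set of tests.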
And also stopping a module's configure script as soon as a required feature is found to be missing. For example, in the GL module: when no GL library is found, stop looking for functions in the nonexistent library, and conversely, stop looking for more GL libraries once the first one has been found.
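Roughly like this (a sketch only; the variable, library and function names are illustrative, not the real GL module tests):

  pike_gl_lib=no

  dnl Stop probing for more GL libraries once the first one is found.
  AC_CHECK_LIB(GL, glBegin, [pike_gl_lib=GL])
  if test "$pike_gl_lib" = no; then
    AC_CHECK_LIB(MesaGL, glBegin, [pike_gl_lib=MesaGL])
  fi

  dnl And skip the function tests entirely when no library turned up.
  if test "$pike_gl_lib" = no; then
    AC_MSG_WARN(no GL library found; skipping the remaining GL tests)
  else
    LIBS="-l$pike_gl_lib $LIBS"
    AC_CHECK_FUNCS(glOrtho glFrustum)
  fi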
And this brings back the question of what to do with autoconf. I would really, really like to have one that is 100% compatible with our (and others') configure.in files, but then goes through all the normal steps of parsing, generating an abstract tree, optimizing it, and then generating the configure files through language plugins. Two obvious plugins would be sh and Pike.
Or, the less advanced but still usable route: fork autoconf and maintain a compatible, optimized version, with actual enhancements from 2.5x backported.
Wouldn't it just be easier to write our own, pike-based configure language which can be interpreted or compiled to a shell-script?
I've thought about doing that for years...
Shall we get going then?
Seriously, I think it could be done without too much work. It all depends on how advanced we want the system to be and how compatible it must be with the standard autoconf tools.
Well, there's that, and how much we want to be able to write shell scripts in it. Maybe we want to write a better solution than autoconf, one that is run by Pike instead of bash?
The best way to go, long term, is to rewrite the whole thing as something new and shiny, with no (ba)sh stuff, only pike. Then we can write a sh-compiler that compiles our native config language to sh for source dists etc.
The first decision is whether we want to invent a new config language or use pure Pike. A new config language would probably be easier to compile into sh scripts, while Pike would give us *MUCH* more powerful configuration scripts.
If you are not compatible with existing configure files, then there are already alternatives out there.
The problem with strict compatibility is that configure files often mix in (ba)sh stuff, which makes them much harder to parse and understand. As long as they only use "standard" functions for the tests, they shouldn't be much of a problem.
Yes, but bash isn't that hard to parse. Though it sure isn't the most straightforward syntax...
It's not the parsing parts I'm worried about. It's more the understanding-and-doing-something-useful part that scares me.
I guess it's possible to ignore those things and simply enlist a real (ba)sh whenever needed, but it would be neat to have everything "in-house".
It's quite impossible to have everything 'in-house' unless we add a C-compiler to pike.
Adding a C-compiler shouldn't be too hard.. ;)
I get your point, but still, I would like to avoid using the shell for more operations than the absolute minimum. OTOH, we can ignore those performance issues for now. Once we have a working base, we can optimize from there.
Yes. The problem is that it would be hard for it to reach critical mass (i.e. the point where someone else is maintaining it for you) if it was Pike-based.
That does not really matter as long as it works for pike.
If you need Pike on the build system, it will be difficult to bootstrap new platforms.