can you elaborate on this?
Sure.
what about Pike packages for Linux distributions? those usually use the same kind of system to build the package that the end user installs, so the configure tests should all be the same unless the user makes non-standard changes to the system (or is missing some packages, which could be solved by depending on those in the Pike devel package)
Well. Usually (if you assume Debian) you can't compile any external modules using those at all, since the path names that end up in specs.in and dynamic_module_makefile do not match the path names the files are actually installed to.
But, anyway, the Pike configure tests test how to build Pike, on the current system, with the currently installed packages. Changing which packages are installed (say, changing the compiler version, or adding a new include/lib directory) can break things.
The 'new' system assumes that you write your own configure tests, if needed.
Adding a set of default tests somewhere might help people, of course.
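For instance, a minimal configure.in for an external module could look something like this (a hypothetical sketch using only standard autoconf macros; the module name and the zlib checks are just examples, and the Pike-specific glue for hooking into the module makefile is left out):

    dnl Hypothetical configure tests for an external Pike module.
    AC_INIT(mymodule.cmod)

    dnl Check for the headers and libraries this particular module needs,
    dnl instead of relying on whatever Pike itself was built against.
    AC_CHECK_HEADERS(zlib.h)
    AC_CHECK_LIB(z, inflate)

    AC_OUTPUT(Makefile)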
The old system remembers the exact flags to use when calling gcc and the linker, and assumes smartlink is used.
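To illustrate what that means (a purely hypothetical command line; the actual flags and paths depend on how Pike itself was configured):

    # The flags recorded at Pike build time are essentially replayed
    # as a fixed command like this for every module build:
    smartlink gcc -O2 -g -I/usr/local/pike/include -fPIC -shared \
        -o MyModule.so mymodule.o

If the compiler or any of those paths has changed since Pike was built, the replayed command no longer works.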
There is simply much less magic going on with the new system. I very much prefer that, since my external modules (using the new system) now tend to actually compile and install correctly.