It's not that graceful when the limit is hit.
string f=""; for(int i; i<1024*1024; i++) { f+="int i"+i+";\n"; }
Ok.
compile(f);
Program received signal SIGSEGV, Segmentation fault.
0x080eee50 in low_add_to_identifiers (state=0x83a7768,
    ARG= {name = 0x85abdd0, type = 0x82b9570, identifier_flags = 0 '\0',
          run_time_type = 251 'û', opt_flags = 0,
          func = {c_fun = 0x40ff8, offset = 266232}})
    at /home/nilsson/Pike/7.5/src/program_areas.h:19
19          FOO(unsigned INT16,struct identifier, struct identifier, identifiers)
/ Martin Nilsson (saturator)
Previous text:
2004-01-15 20:14: Subject: file limits
If there is one, it's not less than 65536, I hope. It might be that the source code files are limited to 4Gb.
I.e., report when you hit the limits and I'm sure someone will fix it. :)
/ Mirar
(. Perhaps this is Pike's way of saying you need a better variable naming scheme? .)
/ Peter Lundqvist (disjunkt)
Previous text:
2004-01-15 20:24: Subject: file limits
It's not that graceful when the limit is hit.
string f=""; for(int i; i<1024*1024; i++) { f+="int i"+i+";\n"; }
Ok.
compile(f);
Program received signal SIGSEGV, Segmentation fault.
0x080eee50 in low_add_to_identifiers (state=0x83a7768,
    ARG= {name = 0x85abdd0, type = 0x82b9570, identifier_flags = 0 '\0',
          run_time_type = 251 'û', opt_flags = 0,
          func = {c_fun = 0x40ff8, offset = 266232}})
    at /home/nilsson/Pike/7.5/src/program_areas.h:19
19          FOO(unsigned INT16,struct identifier, struct identifier, identifiers)
/ Martin Nilsson (saturator)
Hmmhmm. I agree that's not very graceful. "unsigned INT16" seems to agree with the 65536 identifiers I guessed.
Put such a test in the testsuite?
/ Mirar
Previous text:
2004-01-15 20:24: Subject: file limits
It's not that graceful when the limit is hit.
string f=""; for(int i; i<1024*1024; i++) { f+="int i"+i+";\n"; }
Ok.
compile(f);
Program received signal SIGSEGV, Segmentation fault.
0x080eee50 in low_add_to_identifiers (state=0x83a7768,
    ARG= {name = 0x85abdd0, type = 0x82b9570, identifier_flags = 0 '\0',
          run_time_type = 251 'û', opt_flags = 0,
          func = {c_fun = 0x40ff8, offset = 266232}})
    at /home/nilsson/Pike/7.5/src/program_areas.h:19
19          FOO(unsigned INT16,struct identifier, struct identifier, identifiers)
/ Martin Nilsson (saturator)
Actually, the maximum is 65535 identifiers. The number 65535 is used to represent nonexistent identifiers in many routines.
/ Fredrik (Naranek) Hubinette (Real Build Master)
Previous text:
2004-01-15 22:00: Subject: file limits
Hmmhmm. I agree that's not very graceful. "unsigned INT16" seems to agree with the 65536 identifiers I guessed.
Put such a test in the testsuite?
/ Mirar
I rewrote the reallocator so it doesn't crash (someone who knows C should look at it for correctness and fix the warnings). With 65535 used for non-existing identifiers and 0 not used at all(?), we get a max of 65534 identifiers in a class. The time complexity is however such that a normal testsuite test would be more of a problem than a solution. I made a small testsuite with two tests, one below and one above the limit of identifiers, and it ran in 8 minutes on an Athlon 1800+.
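Roughly like this, as a minimal sketch (not the actual testsuite entry; it just assumes compile_string() and catch, with sizes on either side of the limit discussed above):

  // One compile well below the identifier limit and one above it.
  // Below the limit compilation should succeed; above it we expect a
  // compile error to be thrown rather than a crash.
  int main()
  {
      foreach (({ 60000, 70000 }), int n) {
          string src = "";
          for (int i = 0; i < n; i++)
              src += "int i" + i + ";\n";
          mixed err = catch { compile_string(src); };
          write("%d identifiers: %s\n", n, err ? "compile error" : "ok");
      }
      return 0;
  }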
/ Martin Nilsson (saturator)
Previous text:
2004-01-15 22:00: Subject: file limits
Hmmhmm. I agree that's not very graceful. "unsigned INT16" seems to agree with the 65536 identifiers I guessed.
Put such a test in the testsuite?
/ Mirar
Hmm.
| pike -e 'string s=""; for (int i=0; i<65536; i++) s+="int i"+i+";\n"; compile_string(s);'
| zsh: segmentation fault  pike -e
| --------
| user 3.720  kernel 0.020  elapsed 3.793
Faster for me, at least before your changes?
/ Mirar
Previous text:
2004-01-16 01:53: Subject: file limits
I rewrote the reallocator so it doesn't crash (someone who knows C should look at it for correctness and fix the warnings). With 65535 used for non-existing identifiers and 0 not used at all(?), we get a max of 65534 identifiers in a class. The time complexity is however such that a normal testsuite test would be more of a problem than a solution. I made a small testsuite with two tests, one below and one above the limit of identifiers, and it ran in 8 minutes on an Athlon 1800+.
/ Martin Nilsson (saturator)
It fails correctly with "Too many variable_index." for me.
/ Martin Nilsson (saturator)
Previous text:
2004-01-16 08:11: Subject: file limits
Hmm.
| pike -e 'string s=""; for (int i=0; i<65536; i++) s+="int i"+i+";\n"; compile_string(s);'
| zsh: segmentation fault  pike -e
| --------
| user 3.720  kernel 0.020  elapsed 3.793
Faster for me, at least before your changes?
/ Mirar
Ah, you meant so. Yes, the speed is acceptable in the beginning for me as well. How long does it take after my changes (and twice as many symbols)?
/ Martin Nilsson (saturator)
Previous text:
2004-01-16 21:38: Subject: file limits
Yes, that's before your changes. I meant the 3.7 seconds before the error. It's clearly doable to me... :)
/ Mirar
15.7 seconds for my example. What have you done??! :-)
(The number doesn't seem to matter, as long as it's above 65535.)
/ Mirar
Previous text:
2004-01-16 21:57: Subject: file limits
Ah, you meant so. Yes, the speed is acceptable in the beginning for me as well. How long does it take after my changes (and twice as many symbols)?
/ Martin Nilsson (saturator)
Before, it crashed after 32767 symbols; now it throws an error after 65534. That was what I meant by twice as many symbols. So the first half takes 3.7s and the second 12s.
/ Martin Nilsson (saturator)
Previous text:
2004-01-16 22:11: Subject: file limits
15.7 seconds for my example. What have you done??! :-)
(The number doesn't seem to matter, as long as it's above 65535.)
/ Mirar
I worked on the warnings now, but it isn't tested until the next xenofarm build which is probably too late for me (zzZz).
I still feel it takes too long. 15-20 seconds is only 3-4000 lines of code a second. That's *really slow*. The compiler usually chews 35000 lines a second, says the benchmark.
I haven't a clue what the compiler really does in the meantime. Anyone that feels like optimizing it? :)
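For reference, this is the kind of quick measurement I mean (just a sketch in the style of the earlier one-liners; gauge() timing and the exact line count are illustrative and will vary between machines and Pike versions):

  // Compile a program with one declaration per line and report the
  // throughput in lines per second.
  int main()
  {
      int lines = 60000;   // stays below the identifier limit
      string src = "";
      for (int i = 0; i < lines; i++)
          src += "int i" + i + ";\n";
      float t = gauge { compile_string(src); };
      write("%d lines in %.1fs => %.0f lines/s\n", lines, t, lines / t);
      return 0;
  }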
/ Mirar
Previous text:
2004-01-16 22:41: Subject: file limits
Aah.
/ Mirar
OTOH it is not really a problem that it takes a while to compile files with more than 30000 symbols in the same scope. Any optimization is likely to slow down compilation in normal programs I think.
/ Martin Nilsson (saturator)
Previous text:
2004-01-16 23:11: Subject: file limits
I worked on the warnings now, but it isn't tested until the next xenofarm build which is probably too late for me (zzZz).
I still feel it takes too long. 15-20 seconds is only 3-4000 lines of code a second. That's *really slow*. The compiler usually chews 35000 lines a second, says the benchmark.
I haven't a clue what the compiler really does in the meantime. Anyone that feels like optimizing it? :)
/ Mirar
I just wonder what it does. Allocating symbols shouldn't take *that* long?
/ Mirar
Previous text:
2004-01-16 23:29: Subject: file limits
OTOH it is not really a problem that it takes a while to compile files with more than 30000 symbols in the same scope. Any optimization is likely to slow down compilation in normal programs I think.
/ Martin Nilsson (saturator)
Runs the program in a CPU emulator and counts cache misses, instructions per line of source code, etc.
/ Martin Nilsson (saturator)
Previous text:
2004-01-17 13:02: Subject: file limits
Maybe it's possible? What does cachegrind do, more exactly?
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
But doesn't the fact that the program runs slower due to emulation mean that external events such as hardware interrupts will occur more frequently than usual (from the program's POV), giving an atypical control flow and thus false results?
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 13:14: Subject: file limits
Runs the program in a CPU emulator and counts cache misses, instructions per line of source code, etc.
/ Martin Nilsson (saturator)
It's a synthetic CPU, so the instruction timing and memory statistics are not extracted from the hardware CPU but computed in software, so hardware interrupts shouldn't be a factor.
/ Jonas Walldén
Previous text:
2004-01-17 13:47: Subject: file limits
But doesn't the fact that the program runs slower due to emulation mean that external events such as hardware interrupts will occur more frequently than usual (from the program's POV), giving an atypical control flow and thus false results?
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Uh, but unless you emulate the entire universe you will always have external factors. If you emulate just the CPU, you will have external hardware (such as UARTs, video processors, etc.); if you emulate those too, you'll still have user input, network traffic, etc.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 13:58: Subject: file limits
It's a synthetic CPU so the instruction timing and memory statistics are not extracted from the hardware CPU but computed in software so the hardware interrupts shouldn't be a factor.
/ Jonas Walldén
If you get rid of IRQs it should be Good Enough(tm).
/ Peter Bortas
Previous text:
2004-01-17 14:06: Subject: file limits
Uh, but unless you emulate the entire universe you will always have external factors. If you emulate just the CPU, you will have external hardware (such as UARTs, video processors, etc.); if you emulate those too, you'll still have user input, network traffic, etc.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
What do you mean 'get rid of'? If no interrupts reach the program, it can't function properly (it will eventually get around to waiting for some hardware event), and so the measurement becomes useless.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 14:09: Subject: file limits
If you get rid of IRQs it should be Good Enough(tm).
/ Peter Bortas
You don't pollute the caches with 1 billion eth IRQ callbacks, because you don't care. You don't pollute the cache with disk IRQs, because you don't care. In general, on a 3GHz x86 you are served IRQs so seldom that they don't much affect the cache coherency of your program. I don't know what cachegrind does, though. As soon as I get X11 running again I'll check if they have any whitepapers on their site.
/ Peter Bortas
Previous text:
2004-01-17 14:10: Subject: file limits
What do you mean 'get rid of'? If no interrupts reach the program, it can't function properly (it will eventually get around to waiting for some hardware event), and so the measurement becomes useless.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Of course, but why would those interrupts affect the flow of user-level code? In the real world it might pollute the L1/L2 caches with non-app data and reduce the hit rate when the user-level execution resumes, but I don't think that makes the output from valgrind less useful.
/ Jonas Walldén
Previous text:
2004-01-17 14:06: Subject: file limits
Uh, but unless you emulate the entire universe you will always have external factors. If you emulate just the CPU, you will have external hardware (such as UARTs, video processors, etc.); if you emulate those too, you'll still have user input, network traffic, etc.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Who said anything about "user-level code"? The emulator has to emulate _all_ code to be accurate. If you don't emulate into system calls, you have _no idea whatsoever_ what's in the caches when they return. The same thing for asynchronous interrupt handlers.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 14:10: Subject: file limits
Of course, but why would those interrupts affect the flow of user-level code? In the real world it might pollute the L1/L2 caches with non-app data and reduce the hit rate when the user-level execution resumes, but I don't think that makes the output from valgrind less useful.
/ Jonas Walldén
As an application developer I'm primarily interested in my own code and not the L{1,2}/RAM handling in supervisor mode. As Bortas said, I don't believe interrupts have a substantial effect on the memory behavior of user-level code. And if the app is bound by disk I/O I don't think optimizations on RAM handling is going to have a great effect anyway.
/ Jonas Walldén
Previous text:
2004-01-17 14:12: Subject: file limits
Who said anything about "user-level code"? The emulator has to emulate _all_ code to be accurate. If you don't emulate into system calls, you have _no idea whatsoever_ what's in the caches when they return. The same thing for asynchronous interrupt handlers.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
I'm just saying the results you get (and which you apparently are interested in) on your own code are inaccurate because you fail to take the supervisor-mode code into account. How large these inaccuracies are depends on the type of application and the system environment, of course.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 14:21: Subject: file limits
As an application developer I'm primarily interested in my own code and not the L{1,2}/RAM handling in supervisor mode. As Bortas said, I don't believe interrupts have a substantial effect on the memory behavior of user-level code. And if the app is bound by disk I/O I don't think optimizations on RAM handling is going to have a great effect anyway.
/ Jonas Walldén
You'll also get (larger) problems with threaded programs. The results will be wrong regardless of whether you use real or emulated time for scheduling. If you use the real time, preemption will occur too often. If you use emulated time, threads doing system calls will appear to reenter the ready state too soon or too late, depending on how much emulated time you account for the system call (it's not practical to try to compute the amount correctly).
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 14:21: Subject: file limits
As an application developer I'm primarily interested in my own code and not the L{1,2}/RAM handling in supervisor mode. As Bortas said, I don't believe interrupts have a substantial effect on the memory behavior of user-level code. And if the app is bound by disk I/O I don't think optimizations on RAM handling is going to have a great effect anyway.
/ Jonas Walldén
The results will *always* be wrong as you can't observe a process without skewing it. (As established by Heisenberg...) The point is that cachegrind can measure things that other methods can't, but it's not good for everything.
/ Fredrik (Naranek) Hubinette (Real Build Master)
Previous text:
2004-01-17 14:30: Subject: file limits
You'll also get (larger) problems with threaded programs. The results will be wrong regardless of whether you use real or emulated time for scheduling. If you use the real time, preemption will occur too often. If you use emulated time, threads doing system calls will appear to reenter the ready state too soon or too late, depending on how much emulated time you account for the system call (it's not practical to try to compute the amount correctly).
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
With hardware assist it's possible to get correct results.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 14:33: Subject: file limits
The results will *always* be wrong as you can't observe a process without skewing it. (As established by Heisenberg...) The point is that cachegrind can measure things that other methods can't, but it's not good for everything.
/ Fredrik (Naranek) Hubinette (Real Build Master)
Actually, now that I read some docs I see that supervisor code is executed on the real CPU; they trap the call and do a trick with the program counter so they can resume virtual CPU execution when the call is completed.
The thread support (Helgrind) is for verifying correctness, not for performance measurements. For threads, Memcheck implements its own pthreads-compliant library and runs the emulator process single-threaded.
/ Jonas Walldén
Previous text:
2004-01-17 14:30: Subject: file limits
You'll also get (larger) problems with threaded programs. The results will be wrong regardless of whether you use real or emulated time for scheduling. If you use the real time, preemption will occur too often. If you use emulated time, threads doing system calls will appear to reenter the ready state too soon or too late, depending on how much emulated time you account for the system call (it's not practical to try to compute the amount correctly).
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
You use it in limited environments and tests that don't require/depend on external events? Such as a compilation.
/ Mirar
Previous text:
2004-01-17 14:06: Subject: file limits
Uh, but unless you emulate the entire universe you will always have external factors. If you emulate just the CPU, you will have external hardware (such as UARTs, video processors, etc.); if you emulate those too, you'll still have user input, network traffic, etc.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Compilation typically works with files on a disk. Hard disks are hardware, and generate interrupts (through the disk controller).
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 14:11: Subject: file limits
You use it in limited environments and tests that don't require/depend on external events? Such as a compilation.
/ Mirar
It would be a lot of work, but it looks like someone is preparing for a PPC port by separating CPU- and OS-specific parts into different directories in the next version. To really get what I want I would probably have to apply magic to a DC, solder on at least some extra memory, after writing the SH3->Valgrind-RISC compiler.
Did anyone already do something like that? I can't remember hearing anything about the specs for the DC memory controller.
/ Peter Bortas
Previous text:
2004-01-17 13:02: Subject: file limits
Maybe it's possible? What does cachegrind do, more exactly?
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Do what? Solder on more memory? The memory controller is integrated in the SH4 and can handle 64MB of SDRAM per bank. The current memory is located in bank 3, but bank 2 can also be used for SDRAM (currently unused, so you'd have to solder the chip select line directly to the CPU, probably).
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-17 17:43: Subject: file limits
It would be a lot of work, but it looks like someone is preparing for a PPC port by separating CPU- and OS-specific parts into different directories in the next version. To really get what I want I would probably have to apply magic to a DC, solder on at least some extra memory, after writing the SH3->Valgrind-RISC compiler.
Did anyone already do something like that? I can't remember hearing anything about the specs for the DC memory controller.
/ Peter Bortas
Do what? Solder on more memory?
Yes.
Sounds like a doable and fun project, then. 64MiB of memory is cheap and picking up a used DC should be equally cheap. The experiment boards from ELFA are likely to be the most expensive part of this...
/ Peter Bortas
Previous text:
2004-01-17 17:48: Subject: file limits
Do what? Solder on more memory? The memory controller is integrated in the SH4 and can handle 64MB of SDRAM per bank. The current memory is located in bank 3, but bank 2 can also be used for SDRAM (currently unused, so you'd have to solder the chip select line directly to the CPU, probably).
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
I think the most complex machine specific parts of valgrind are the translation from machine code into intermediate code, and then (after instrumentation) back into machine code. I think that code is reasonably well isolated.
I haven't looked at cachegrind, that has to have some additional machine dependencies of its own.
/ Niels Möller (vässar rödpennan)
Previous text:
2004-01-17 17:43: Subject: file limits
It would be a lot of work, but it looks like someone is preparing for a PPC port by separating CPU- and OS-specific parts into different directories in the next version. To really get what I want I would probably have to apply magic to a DC, solder on at least some extra memory, after writing the SH3->Valgrind-RISC compiler.
Did anyone already do something like that? I can't remember hearing anything about the specs for the DC memory controller.
/ Peter Bortas
I haven't looked at cachegrind, that has to have some additional machine dependencies of its own.
Probably/hopefully not much. The skins seem to mostly interact with the intermediate code.
/ Peter Bortas
Previous text:
2004-01-17 18:08: Subject: file limits
I think the most complex machine specific parts of valgrind are the translation from machine code into intermediate code, and then (after instrumentation) back into machine code. I think that code is reasonably well isolated.
I haven't looked at cachegrind, that has to have some additional machine dependencies of its own.
/ Niels Möller (vässar rödpennan)
I just guess that it has to know in detail how the particular cache works. But perhaps caches behave about the same on all architectures, just sizes vary?
/ Niels Möller (vässar rödpennan)
Previous text:
2004-01-17 18:13: Subject: file limits
I haven't looked at cachegrind, that has to have some additional machine dependencies of its own.
Probably/hopefully not much. The skins seem to mostly interact with the intermediate code.
/ Peter Bortas
I don't know of any major differences. In a multithread/SMP/HT situation it gets to be a lot more fun, but that is beyond the current scope.
/ Peter Bortas
Previous text:
2004-01-17 18:14: Subject: file limits
I just guess that it has to know in detail how the particular cache works. But perhaps caches behave about the same on all architectures, just sizes vary?
/ Niels Möller (vässar rödpennan)
Sparcs, at least older Sparcs, have "direct mapped" caches, located between the CPU and the MMU. Which is a pretty bad idea, as far as I understand. But perhaps that doesn't matter for what cachegrind does.
/ Niels Möller (vässar rödpennan)
Previous text:
2004-01-17 18:16: Subject: file limits
I don't know of any major differences. In a multithread/SMP/HT situation it gets to be a lot more fun, but that is beyond the current scope.
/ Peter Bortas
It doesn't matter, since it only emulates x86 (except cyrix).
/ Martin Nilsson (saturator)
Previous text:
2004-01-17 18:32: Subject: file limits
Sparcs, at least older Sparcs, have "direct mapped" caches, located between the CPU and the MMU. Which is a pretty bad idea, as far as I understand. But perhaps that doesn't matter for what cachegrind does.
/ Niels Möller (vässar rödpennan)
Well. After 65 minutes user time cachegrind has decided its optimization todo. (Ir=instructions, Dr=data reads. Dw=data writes)
------------------------------------------------------------------------
            Ir              Dr             Dw  file:function
------------------------------------------------------------------------
88,059,178,111  42,955,588,163  4,295,926,320  program.c:really_low_find_shared_string_identifier
    84,919,770      50,782,803     12,208,676  y.tab.c:yyparse
    47,592,824      19,994,402      9,505,154  operators.c:f_add
    45,815,009      22,484,729      3,141,369  stralloc.c:low_do_hash
    27,212,209       7,719,604      4,777,904  ???:_IO_vfprintf
    26,784,098      11,787,918      4,979,221  lexer0.h:yylex0
    25,957,914      12,781,687      4,063,093  preprocessor.h:lower_cpp0
    19,901,526       9,591,678      3,970,340  stralloc.c:internal_findstring
/ Martin Nilsson (saturator)
Previous text:
2004-01-17 00:10: Subject: file limits
cachegrind you say... time I learned how to use it. *emerges and tries*
/ Mirar
I see one function that could do with optimizing.
/ Mirar
Previous text:
2004-01-17 01:50: Subject: file limits
Well. After 65 minutes user time cachegrind has decided its optimization todo. (Ir=instructions, Dr=data reads. Dw=data writes)
            Ir              Dr             Dw  file:function
88,059,178,111  42,955,588,163  4,295,926,320  program.c:really_low_find_shared_string_identifier
    84,919,770      50,782,803     12,208,676  y.tab.c:yyparse
    47,592,824      19,994,402      9,505,154  operators.c:f_add
    45,815,009      22,484,729      3,141,369  stralloc.c:low_do_hash
    27,212,209       7,719,604      4,777,904  ???:_IO_vfprintf
    26,784,098      11,787,918      4,979,221  lexer0.h:yylex0
    25,957,914      12,781,687      4,063,093  preprocessor.h:lower_cpp0
    19,901,526       9,591,678      3,970,340  stralloc.c:internal_findstring
/ Martin Nilsson (saturator)
How do you run it?
| % valgrind --tool=cachegrind pike
| ...
| .../master.pike:350: Failed to index module 'Files' with 'Stat' (module doesn't exist?)
I've compiled with --with-valgrind.
/ Mirar
Previous text:
2004-01-17 01:50: Subject: file limits
Well. After 65 minutes user time cachegrind has decided its optimization todo. (Ir=instructions, Dr=data reads. Dw=data writes)
            Ir              Dr             Dw  file:function
88,059,178,111  42,955,588,163  4,295,926,320  program.c:really_low_find_shared_string_identifier
    84,919,770      50,782,803     12,208,676  y.tab.c:yyparse
    47,592,824      19,994,402      9,505,154  operators.c:f_add
    45,815,009      22,484,729      3,141,369  stralloc.c:low_do_hash
    27,212,209       7,719,604      4,777,904  ???:_IO_vfprintf
    26,784,098      11,787,918      4,979,221  lexer0.h:yylex0
    25,957,914      12,781,687      4,063,093  preprocessor.h:lower_cpp0
    19,901,526       9,591,678      3,970,340  stralloc.c:internal_findstring
/ Martin Nilsson (saturator)
Checking if the symbol already exists using linear search? That would give compilation of this particular program O(n²) complexity.
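The effect is easy to see with a toy comparison in Pike itself (illustrative only, not the compiler's actual data structures): checking each new symbol against a flat array costs O(n) per symbol and O(n²) in total, while a mapping lookup stays roughly constant.

  // Duplicate checking via linear search in an array vs. lookup in a
  // mapping (hash table), for n generated symbol names.
  int main()
  {
      int n = 20000;
      array(string) linear = ({});
      mapping(string:int) hashed = ([]);

      float t_linear = gauge {
          for (int i = 0; i < n; i++) {
              string sym = "i" + i;
              if (search(linear, sym) == -1)   // O(n) scan per symbol
                  linear += ({ sym });
          }
      };
      float t_hash = gauge {
          for (int i = 0; i < n; i++) {
              string sym = "i" + i;
              if (zero_type(hashed[sym]))      // O(1) expected per symbol
                  hashed[sym] = 1;
          }
      };
      write("linear: %.2fs, mapping: %.2fs\n", t_linear, t_hash);
      return 0;
  }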
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2004-01-16 23:31: Subject: file limits
I just wonder what it does. Allocating symbols shouldn't take *that* long?
/ Mirar
Indeed, the binary-search lookup table is only created after the compilation is done... all the *find*identifier* functions should be rewritten to use hash tables anyways.
/ Fredrik (Naranek) Hubinette (Real Build Master)
Previous text:
2004-01-16 23:52: Subject: file limits
Checking if the symbol already exists using linear search? That would give compilation of this particular program O(n²) complexity.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)