Hi,
While playing with the sqlite module (by Bill) I got a mysterious
segfault after ca. 140K rows were returned... The backtraces pointed
to places like:
push_text("val"); // I replaced the original value to test it...
where all variable values looked normal (according to gdb), but
valgrind went crazy... Well... The story ended when I started
pike with -s1000000 - then everything is OK, despite the fact that
processing 1,000,000 rows takes ca. 500M of RAM :)
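For the record, the workaround invocation was just something like
this (the script name here is only a placeholder):
    pike -s1000000 dump_rows.pike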
So... The question: isn't whatever -s controls (the Pike stack,
I guess - values are accumulated on the stack before aggregation)
supposed to be checked against overflow by functions like push_*
or whatever? To be honest, it took me a few hours of hunting
without any clue about the cause - and totally useless coredumps,
valgrind reports, etc...
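In case it makes the question clearer, here is roughly the kind of
guard I had in mind - a sketch only, and the check_stack() name is
from my (possibly imperfect) memory of interpret.h:

    /* Sketch: guard the svalue stack before pushing each value,
     * so an oversized result set raises a catchable error instead
     * of scribbling past the end of the stack. */
    static void push_row_value(const char *val)
    {
      check_stack(1);   /* make room for one more svalue, or throw */
      push_text(val);   /* only push once we know it fits */
    }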
Ah... This was tried with Pike 7.6.24... Not sure if this is
relevant, but I remember that a while ago I got a message like
"Stack overflow" from Pike when the stack was too small (even
without --rtl-debug etc.).
NB: Bill, there are some problems with sqlite3 support - I fixed
them and will send you a patch shortly... Unless you have fixed
them already :)
Regards,
/Al