Seems reasonable. However, you may want to test whether fast_check_threads_etc(4) actually gives any speed gain at all, considering that check_threads_etc() is a pretty speedy call.
As always, measuring is key.
/ Fredrik (Naranek) Hubinette (Real Build Master)
Previous text:
2002-09-10 15:54: Subject: Frequent context switches
It'd be interesting to see how those applications worked. I discussed it a bit with Grubba and we reasoned like this:
If there's a single fast_check_threads_etc() in mega_apply, there would be long delays only if mega_apply is called seldom. It's hard to see how that could happen if it's Pike code that does the work, so the only case would be long C function calls, e.g. doing indices() on mappings and things like that. So one way would be to use check_threads_etc() before every C function call but only, say, fast_check_threads_etc(8) before Pike function calls (only to catch the mostly theoretical cases where a significant amount of time is spent in Pike functions that only call each other and no C functions at all).
Unfortunately, most C functions are both faster and more frequent than Pike functions, so it'd still be very uneven. The good solution would be to use fast_check_threads_etc() for C functions too and then add check_threads_etc() inside the C functions when they are about to start on some bigger task. We might not want to rely on that, though, so the question then becomes: how big can the ratio of slow to fast C function calls become? We couldn't imagine any reasonable case where it becomes very big, so a fast_check_threads_etc(4) in front of C function calls seems to be on the safe side.
So the conclusion is to have fast_check_threads_etc(4) for C function calls and fast_check_threads_etc(8) for Pike calls. Do you think that's reasonably safe?
/ Martin Stjernholm, Roxen IS