In the poll() branch, files/file.c:file_peek() raises the minimum poll time from zero to one millisecond. This is wreaking havoc on poll performance. Is there any reason to do so?
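For context, poll(2)'s timeout semantics are what make this matter: a timeout of 0 is a pure non-blocking readiness check that returns immediately, while a timeout of 1 allows the kernel to sleep for up to a millisecond whenever nothing is pending on the descriptor. A minimal standalone sketch of the two variants (plain C, not the Pike source; peek_now() and peek_forced_wait() are made-up names for illustration):

#include <poll.h>

/* Pure non-blocking check: poll() returns immediately either way. */
int peek_now(int fd)
{
  struct pollfd pfd = { .fd = fd, .events = POLLIN, .revents = 0 };
  return poll(&pfd, 1, 0);
}

/* What file_peek() currently does when the requested timeout rounds
 * to zero: the forced timeout of 1 lets poll() sleep up to a full
 * millisecond whenever the descriptor is idle. */
int peek_forced_wait(int fd)
{
  struct pollfd pfd = { .fd = fd, .events = POLLIN, .revents = 0 };
  return poll(&pfd, 1, 1);
}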
Some context: I was trying to refactor _PGsql and downsize it as much as possible, i.e. using native Pike modules instead of direct system calls, and it seems to work rather nicely, except for this poll mess. I could take out the special poll code entirely if the standard files/file.c module did not introduce that extra delay.
This is what we're talking about:
diff --git a/src/modules/files/file.c b/src/modules/files/file.c
index 6daf5c3..642f951 100644
--- a/src/modules/files/file.c
+++ b/src/modules/files/file.c
@@ -844,14 +844,20 @@ static void file_peek(INT32 args)
     struct pollfd fds;
     int timeout;
     timeout = (int)(tf*1000); /* ignore overflow for now */
+#if 0
     if (!timeout)
       timeout = 1;
+#endif
     fds.fd=FD;
     fds.events=POLLIN;
     fds.revents=0;
 
-    THREADS_ALLOW();
-    ret=poll(&fds, 1, timeout);
-    THREADS_DISALLOW();
+    if(timeout) {
+      THREADS_ALLOW();
+      ret=poll(&fds, 1, timeout);
+      THREADS_DISALLOW();
+    }
+    else
+      ret=poll(&fds, 1, 0);
 
     if(ret < 0) {
It's a 40% speed difference caused by extra latency in the pgsql driver case.
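The effect is easy to reproduce outside Pike. Here's a hypothetical micro-benchmark (standalone C; time_polls() and the idle-pipe setup are mine, not from the tree): a thousand zero-timeout polls on an idle pipe finish in microseconds, while the same thousand polls with timeout 1 burn on the order of a full second in pure sleep.

#include <poll.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Time `iters` polls on the given fd with the given timeout (ms). */
static double time_polls(int fd, int timeout_ms, int iters)
{
  struct pollfd pfd = { .fd = fd, .events = POLLIN, .revents = 0 };
  struct timespec t0, t1;
  clock_gettime(CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < iters; i++)
    poll(&pfd, 1, timeout_ms);
  clock_gettime(CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int main(void)
{
  int p[2];
  if (pipe(p)) return 1;   /* read end stays idle: nothing is ever written */
  printf("timeout 0: %.4fs\n", time_polls(p[0], 0, 1000));
  printf("timeout 1: %.4fs\n", time_polls(p[0], 1, 1000)); /* ~1s of sleep */
  return 0;
}

In the driver, every peek-style availability check on an idle connection pays that forced millisecond, which is where the latency accumulates.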