My theory was that it has something to do with double or better precision floats. But:
float f=1e99; f*1e-99;
(1) Result: 1.000000
Ok, jolly good.
1e1000;
(2) Result: 340282346638528859811704183484516925440.000000
Not so good.
1e-99;
(3) Result: 0.000000
1e-1000;
Compiler Error: 1:parse error, unexpected TOK_IDENTIFIER, expecting TOK_LEX_EOF or ';'
Huh?
Looks like the last case is due to this: Pike tries to parse it as an integer or float, and wherever the parsing stops it considers the token to end. Since the system strtod doesn't like the exponent, it probably parses it as an integer followed by the unexpected identifier "e".
/ Martin Stjernholm, Roxen IS
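For comparison, here is a minimal C sketch (standard library behaviour, not the code in Pike's port.c) of what a conforming strtod() does with the two problem inputs: it consumes the whole token and reports the range problem through errno, so a lexer built on it would never see a stray "e".

/* Hedged sketch: how a conforming strtod() handles the two inputs above.
 * This is plain ISO C, not Pike's STRTOD replacement from port.c. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void probe(const char *s)
{
    char *end;
    errno = 0;
    double d = strtod(s, &end);
    /* The whole "1e1000"/"1e-1000" token is consumed; the range problem
     * is reported through errno (ERANGE), not by rewinding the pointer. */
    printf("%-8s -> value=%g, errno=%s, consumed %d of %d chars\n",
           s, d, errno == ERANGE ? "ERANGE" : "0",
           (int)(end - s), (int)strlen(s));
}

int main(void)
{
    probe("1e1000");   /* overflow: returns HUGE_VAL */
    probe("1e-1000");  /* underflow: returns 0 or a tiny denormal */
    return 0;
}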
Previous text:
2003-02-26 08:13: Subject: Floating point (conversion) bug (affected: v7.4 & v7.5; may be Hilfe only)
/ Mirar
Quite possibly. I wonder why it doesn't like it. I still consider that a bug, though, since 1e1000 works.
/ Mirar
The problem was in STRTOD in port.c. Upon underflow it reset the pointer to the beginning (but not for overflow). I've changed that.
The overflow case is because it stores HUGE_VAL which seems to be different from infinity. Is that correct?
/ Martin Stjernholm, Roxen IS
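A quick way to answer that question on a given system (a hypothetical standalone check, not part of the Pike build) is to ask the C library directly:

/* Check whether HUGE_VAL is actually positive infinity on this system.
 * On IEEE 754 systems it normally is; the C standard alone only promises
 * "a positive double constant expression".  isinf() is C99; on pre-C99
 * systems compare against HUGE_VAL directly instead. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("HUGE_VAL = %g, isinf(HUGE_VAL) = %d\n",
           HUGE_VAL, isinf(HUGE_VAL) ? 1 : 0);
    return 0;
}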
If the C implementation conforms to IEC 60559 (which it acknowledges by defining __STDC_IEC_559__), then HUGE_VAL shall be positive infinity.
If IEC 60559 is _not_ supported, the C standard only requires that it is "a positive double constant expression" though. So in this case it could theoretically be 1.0...
Also note that if you're using long doubles, HUGE_VALL should be used, not HUGE_VAL.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
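A small C sketch of the distinction Marcus describes, using only standard macros (nothing Pike-specific assumed):

/* __STDC_IEC_559__ is the implementation's promise of IEC 60559 (IEEE 754)
 * floating point; with it, HUGE_VAL must be positive infinity.  For long
 * double results the corresponding overflow constant is HUGE_VALL (C99). */
#include <math.h>
#include <stdio.h>

int main(void)
{
#ifdef __STDC_IEC_559__
    puts("IEC 60559 floats: HUGE_VAL is guaranteed to be +infinity");
#else
    puts("No IEC 60559 promise: HUGE_VAL is only some positive double");
#endif
    printf("HUGE_VAL  = %g\n", HUGE_VAL);
#ifdef HUGE_VALL
    printf("HUGE_VALL = %Lg\n", HUGE_VALL);
#endif
    return 0;
}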
My system has a HUGE_VAL that is positive infinity. Further inspection shows that Pike's STRTOD uses HUGE for overflow, with a fallback to HUGE_VAL only if HUGE isn't defined. HUGE is defined on my system to 3.40282347e+38F. I wonder if there's a good reason for STRTOD to use HUGE in the first place.
/ Martin Stjernholm, Roxen IS
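As a side note, 3.40282347e+38F is simply FLT_MAX, which is also exactly the 39-digit number Hilfe printed for result (2) above; a tiny check, assuming IEEE single precision floats:

/* FLT_MAX printed in full; on IEEE 754 systems this is exactly
 * 340282346638528859811704183484516925440, i.e. result (2) above. */
#include <float.h>
#include <stdio.h>

int main(void)
{
    printf("FLT_MAX = %.0f\n", (double)FLT_MAX);
    return 0;
}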
strtod() uses HUGE_VAL (if it follows the C standard), so it would probably make sense for STRTOD() to do likewise.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
On Wed, Feb 26, 2003 at 01:40:03PM +0100, Marcus Comstedt (ACROSS) (Hail Ilpalazzo!) @ Pike (-) developers forum wrote:
Also note that if you're using long doubles, HUGE_VALL should be used, not HUGE_VAL.
Ahh... Maybe this matters... My Pike is configured --with-double-precision
/Al
--with-double-precision will give double precision (8 bytes), not long double precision (10 bytes?).
There are three possible precision levels, float (default), double (--with-double-precision), and long double (--with-long-double-precision).
Pike can also be compiled with 32 or 64 bit ints, but since bignums kick in, the 64 bit option isn't of much interest other than if you use double (or better) precision and don't want to waste the bits. Or on 64 bit pointer systems, which could use both double and 64 bit int, I guess. This is enabled with --with-long-long-int (long long is 64 bits on most systems).
/ Mirar
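For reference, here are the C types behind those configure options, with sizes as they typically come out on 32 bit x86 (a generic illustration, not output from an actual Pike build):

/* Print the sizes of the C types behind the precision options above.
 * Typical ia32 results are shown in the comments; other platforms differ. */
#include <stdio.h>

int main(void)
{
    printf("float:       %lu bytes\n", (unsigned long)sizeof(float));       /* 4  */
    printf("double:      %lu bytes\n", (unsigned long)sizeof(double));      /* 8  */
    printf("long double: %lu bytes\n", (unsigned long)sizeof(long double)); /* often 12 stored, 10 used */
    printf("long long:   %lu bytes\n", (unsigned long)sizeof(long long));   /* 8  */
    return 0;
}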
Of course, if you do a lot of calculations on integers that fit in 64 bits but not 32, using 64 bit ints will be a lot faster even on 32 bit machines. The bignum support isn't fast.
/ David Hedbor
On Wed, Feb 26, 2003 at 08:15:03PM +0100, David Hedbor @ Pike developers forum wrote:
Of course, if you do a lot of calculations on integers that fit in 64 bits but not 32, using 64 bit ints will be a lot faster even on 32 bit machines. The bignum support isn't fast.
I would like to see a separate builtin type like "int64" for 64 bit ints, instead of converting _all_ ints to 64 bits :) But this is hopeless, I guess... :)
Regards, /Al
with automatic conversion from int to bignum, what would be the point of having an extra int64?
more sensible might be an automatic conversion
int32 -> int64 -> bignum
ie: the pike int type always uses the smallest usable type in c.
Why not int8 and int16 while we're on it? :)
/ Mirar
Requirement for 32 bit unsigned ints isn't entirely small when it comes to protocols / data files. Might be nice to have non-bignum support for that.
/ David Hedbor
Previous text:
2003-02-26 21:16: Subject: Re: 64 bit ints
Might it? How?
/ Martin Stjernholm, Roxen IS
Then the solution is rather to use --with-long-long-ints. I don't believe it's anywhere near worth the extra complexity of another runtime type just to use one bit differently, and that's even if the implementation work is disregarded.
How well does --with-long-long-ints work nowadays, btw?
/ Martin Stjernholm, Roxen IS
It passes the testsuite. Both for icc and gcc.
(As well as the non-long-long-int Pike, that is.)
/ Mirar
In the 32 bit protocol example above, for instance. For "%4c" and IP number operations an unsigned INT32 is what you really want. INT64 is the next best, and bignums are really not what you want.
/ Mirar
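A hedged C illustration of that protocol case (a hypothetical helper, not Pike source): pulling a 32 bit big-endian quantity such as an IPv4 address out of a byte buffer, which is what "%4c" gives you in Pike and which can exceed 0x7fffffff.

/* Read a 32 bit big-endian value (e.g. an IPv4 address) from raw bytes.
 * Values above 0x7fffffff don't fit in a signed 32 bit int, which is
 * where the unsigned INT32 / INT64 / bignum discussion comes from. */
#include <inttypes.h>
#include <stdio.h>

static uint32_t get_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

int main(void)
{
    unsigned char ip[4] = { 192, 168, 0, 1 };
    printf("0x%08" PRIx32 "\n", get_be32(ip));  /* prints 0xc0a80001 */
    return 0;
}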
What would be gained with that? Waste less memory?
/ Martin Stjernholm, Roxen IS
Most likely gain speed. On 32 bit computers, operations on 32 bit integers are faster than on 64 bit ones, but 64 bit ones are faster than bignums.
/ David Hedbor
I suspect the integer operations themselves are almost negligible compared to all the other overhead with function calls, type checks etc etc. That speed gain could very well be eaten up simply by the larger working set caused by another type. (Note that I'm comparing separate treatment of 32 and 64 bit integers now, not bignums.)
It might be some idea to have 32 and 64 bit variants of vector types like Math.Matrix.
/ Martin Stjernholm, Roxen IS
On Wed, Feb 26, 2003 at 09:05:03PM +0100, David Hedbor @ Pike developers forum wrote:
Most likely gain speed. On 32 bit computers, operations on 32 bit integers are faster than on 64 bit ones, but 64 bit ones are faster than bignums.
Exactly. Bignums are too slow in the case when the maximum needed fits in 64 bits. And, to be honest, I hate automatic bignum2int and vice versa conversion - because there is _no control_ over it.
I still cannot understand why 0x80000000 is converted to a bignum while it fits in 32 bits. Well, I know the official explanation, but it seems illogical.
Unsigned ints would be nice as well, BTW :) I can explain why and where, if someone is interested.
NB: The excuse "nobody really needs it" is not good enough - it reminds me of "640K ought to be enough for everybody" and of a lot of really useless (to the majority) code in current Pike's core modules :)
Regards, /Al
I still cannot understand why 0x80000000 is converted to a bignum while it fits in 32 bits. Well, I know the official explanation, but it seems illogical.
0x80000000 is -2147483648 if you cram it inside a signed two's complement (normal) 32 bit integer.
/ Mirar
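In C terms (a hypothetical illustration, not Pike code), the same 32 bit pattern reads as +2147483648 unsigned and as -2147483648 in signed two's complement:

/* The same bits, read as unsigned and as signed two's complement. */
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    uint32_t u = 0x80000000u;
    int32_t  s = (int32_t)u;   /* implementation-defined; -2^31 on two's complement */
    printf("unsigned: %" PRIu32 "\n", u);  /* 2147483648  */
    printf("signed:   %" PRId32 "\n", s);  /* -2147483648 */
    return 0;
}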
On Wed, Feb 26, 2003 at 09:30:04PM +0100, Mirar @ Pike developers forum wrote:
0x80000000 is -2147483648 if you cram it inside a signed two's complement (normal) 32 bit integer.
According to Pike, this is not the case:
int x; x = 0x80000000;
(1) Result: 2147483648
On one hand, I want to have flexibility (64 and more bit ints); on the other hand, I don't want to lose performance (say, I have to do a lot of operations on IPv4 addresses).
Bignums are good enough (for my purposes), but I can't do something like:
Bignum a_bignum = 12345;
or:
a_bignum = an_int + another_int;
and expect this to work (there is no `=() in Pike, but that is another story).
Yes, I could compile Pike without autobignums, but... The above prevents me from doing this :)
/Al
int x; x = 0x80000000;
(1) Result: 2147483648
Well, either you have compiled pike with 64 bit ints, or that number *is* represented internally as a bignum, not as a 32 bit int (which is pretty obvious since it doesn't fit in C's *signed* 32 bit integer type). Compare to
int y; y = 0x800000000000000000000;
Result: 9671406556917033397649408
which should be a bignum no matter if you have 32 or 64 bit ints.
I don't know any easy way from within pike to check if a particular int is represented as a bignum or a fixnum. The goal is that it shouldn't matter, even if there certainly are a few functions here and there that only support fixnums.
/ Niels Möller ()
0x80000000 is -2147483648 if you cram it inside a signed two's complement (normal) 32 bit integer.
According to Pike, this is not the case:
int x; x = 0x80000000;
(1) Result: 2147483648
Your analysis is flawed. Pike does not "cram it inside a signed two's complement (normal) 32 bit integer", so it doesn't make any statement about that case at all.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
The problem is that you will lose performance if more integer types are implemented, just because of the added overhead of handling them. Since the time consumed in the arithmetic operations for native integers is so small compared to all the stuff that surrounds them (take a look at f_add in operators.c, for example), I'm certain that it's overall faster to instead use 64 bit integers all over with --with-long-long-int.
/ Martin Stjernholm, Roxen IS
And, to be honest, I hate automatic bignum2int and vice versa conversion - because there is _no control_ over it.
That's clearly as falsified a statement as saying that you have _no control_ over the sign bit.
The excuse "nobody really needs it" is not good enough - it reminds me of "640K ought to be enough for everybody" and of a lot of really useless (to the majority) code in current Pike's core modules
"Nobody really needs it" is many times a good and true statement, but I don't think it has been used before in this thread, has it? The uselessness of several components in the core modules is because there are essentially only core modules at the moment, but you already know this, so I wonder why you use an analogy you know is broken. How big a part of the modules is useless probably depends on your scope of vision and ability as a programmer. I have used most of the modules, so by definition they are not useless to me.
/ Martin Nilsson (har bott i google)
On Wed, Feb 26, 2003 at 10:00:03PM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
That's clearly as falsified a statement as saying that you have _no control_ over the sign bit.
At least, I expect that 0x80000000 will fit in 32 bits and it still will be an integer. This is logical, isn't it? :)
"Nobody really needs it" is many times a good and true statement, but I don't think it has been used before in this thread, has it? The
Not in this thread, but I discussed this a looong time ago already.
How big a part of the modules is useless probably depends on your scope of vision and ability as a programmer. I have used most of the modules, so by definition they are not useless to me.
For instance, Mird, Math, Parser, SDL, Shuffler, Geography etc - most of them are useless to me, and I bet that for the vast majority of Pike users too :)
OTOH, there was no function like gettimeofday() in earlier versions, and there is still no strftime()-like function (yes, I know about the Calendar module, but it is too overcomplicated for simple purposes, and much slower), yet those two are more frequently used compared to Parser :)
Regards, /Al
At least, I expect that 0x80000000 will fit in 32 bits and it still will be an integer. This is logical, isn't it? :)
Not if you expect the same thing of -0x80000000.
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
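Marcus' point can be sketched in C on a platform with 32 bit int: the constant 0x80000000 doesn't fit in a signed int, so the compiler gives it an unsigned type, and negating it doesn't give -2147483648 either; +2^31 and -2^31 can't both live in the same 32 signed bits.

/* On a 32 bit int platform the literal 0x80000000 has type unsigned int,
 * so both lines below print 2147483648; a range covering both +2^31 and
 * -2^31 needs at least 33 bits (or a bignum). */
#include <stdio.h>

int main(void)
{
    printf("0x80000000  = %u\n", 0x80000000);
    printf("-0x80000000 = %u\n", -0x80000000);
    return 0;
}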
Generally things are added because someone needed them. You're right that many "basic" unix methods are missing and perhaps they should indeed be added. For example, I could see a strftime added to the Locale class (since strftime depends on the locale).
One main reason for many of these not being added is that they depend on things outside of Pike's control, something that is generally not desired (i.e. strftime depending on the locale, making it useless except for informational messages).
/ David Hedbor
On Wed, Feb 26, 2003 at 11:14:57PM +0100, Alexander Demenshin wrote:
For instance, Mird, Math, Parser, SDL, Shuffler, Geography etc - most of them are useless to me, and I bet that for the vast majority of Pike users too :)
Parser is used in roxen and caudium, it would be hard to see them as just a minority of pike users.
what about Image? i can't imagine many people use that...
the argument has already been made that there is no place yet outside of the core for all of this stuff. so arguing about this is rather moot.
and besides, none of this stuff will actually be loaded if you don't use it. can't you just ignore it? it won't hurt your performance. what is your complaint? should people stop working on these things to concentrate on core functionality that is useful for everybody?
even if only a minority actually uses these things, i am sure the majority likes the stuff to be there in case they should need it.
greetings, martin.
'Image' is imho one of the nicest features of Pike. :)
/ David Hedbor
Personally I'd rank the modules like this, in importance:
Image.* <---- most important
GL
SDL
GTK
Parser
SSL3
Crypto
Protocols
Standards
The Rest <--- Least important
/ Per Hedbor ()
I think you might want to bump up Protocols...
/ Peter Lundqvist (disjunkt)
I would probably put sprintf and Math closer to the top, and String and Stdio. :)
/ Mirar
It's more than that. It's a brilliant example of what a well documented module looks like.
Also, it's a lot of fun to play aro^W^H^Wwork with.
/ Peter Lundqvist (disjunkt)
Until you realize that some code isn't documented and some things that are documented aren't implemented, that is... But it looks good.
/ Martin Nilsson (har bott i google)
Oh.. Well, I guess that just goes to show that Image is everything (lousy pun intended)...
/ Peter Lundqvist (disjunkt)
On Wed, Feb 26, 2003 at 11:27:30PM +0100, Martin Baehr wrote:
Parser is used in roxen and caudium, it would be hard to see them as just a minority of pike users.
roxen & caudium are just applications, not Pike itself. Many people who use them don't even know Pike (and don't need to).
I am talking about users who program in Pike, not ordinary users who use apps written in Pike.
and besides, none of this stuff will actually be loaded if you don't use it. can't you just ignore it? it won't hurt your performance. what is your complaint?
Distribution size, mainly :) But that's not my complaint; I'm just trying to understand why some functionality (widely used) is not included while other (used in very few apps) is included.
should people stop working on these things to concentrate on core functionality that is usefull for everybody?
Sure. Otherwise Pike will never be widely used.
even if only a minority actually uses these things, i am sure the majority likes the stuff to be there in case they should need it.
So why are there no unsigned ints in Pike? :)
Regards, /Al
So why are there no unsigned ints in Pike? :)
This has been explained at length.
/ Per Hedbor ()
On Wed, Feb 26, 2003 at 11:46:06PM +0100, Alexander Demenshin wrote:
Parser is used in roxen and caudium, it would be hard to see them as just a minority of pike users.
I am talking about users who program in Pike, not ordinary users who use apps written in Pike.
most of the people who program in pike came to that through programming for roxen and caudium...
what is your complaint?
Distribution size, mainly :)
fair point. that needs to be worked on.
I just trying to understand why some functionality (widely used) is not included while other (used in very few apps) is included.
ok.
greetings, martin.
And if the majority of people writing programs in Pike does not use the Parser module, I will be surprised.
And the distribution size is rather minimal as it is; why bother making it smaller?
My personal opinion is that we should also include each and every library we use by linking the modules statically to them, but I guess that convenience and pragmatism will never win over religion.
/ Per Hedbor ()
I'm just trying to understand why some functionality (widely used) is not included while other (used in very few apps) is included.
Our policy is to only include modules that exist. Why don't the modules you want to use exist? Because it isn't as widely used as you think. At least no one has missed it enough during the last almost ten years to bother implementing it.
/ Martin Nilsson (har bott i google)
On Thu, Feb 27, 2003 at 12:05:03AM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
Our policy is to only include modules that exist. Why don't the modules you want to use exist?
Some modules which are in the core now didn't exist before they were written, right? :)
Because it isn't as widely used as you think. At least no one has missed it enough during the last almost ten years to bother implementing it.
Following this logic, most software wouldn't be written :) Well, the PCRE module didn't exist some time ago. I needed it, so I wrote my own, and folks from Caudium wrote their own too. We now even have a Bz2 module, but again, it didn't exist once. So what? :) Why bother to write new modules if everything (that you think is needed) is already implemented? :)
Regards, /Al
At least, I expect that 0x80000000 will fit in 32 bits and it still will be an integer. This is logical, isn't it? :)
No, it isn't.
For instance, Mird, Math, Parser, SDL, Shuffler, Geography etc - most of them are useless to me, and I bet that for the vast majority of Pike users too :)
I'd say that three of these are essential. One of them is needed to build Pike. One of them is considered so important that it is statically loaded into Pike during startup.
/ Martin Nilsson (har bott i google)
On Wed, Feb 26, 2003 at 11:30:05PM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
At least, I expect that 0x80000000 will fit in 32 bits and it still will be an integer. This is logical, isn't it? :)
No, it isn't.
Why?
I'd say that three of these are essential. One of them is needed to build Pike. One of them is considered so important that it is statically loaded into Pike during startup.
Is Pike written in C or in Pike? :) I just can't believe that the language compiler can't be used without anything else. The core functionality (language and basic types) should work without anything else. Or am I missing something? The only explanation that I can foresee is "we, the developers, are used to building Pike this way" :)
Regards, /Al
Why?
Show me the programming language where 0x80000000 fits in a 32 bit integer.
Please note that adding unsigned integers as a type requires you to add a signedness flag to the data, or simply use an extra bit as the sign bit; either way you get 33 bit integers. Why not go for 64 bit integers while you're at it?
In C and assembly it's possible to have signed and unsigned integers and not use more than 32 bits because there is really no difference between them there; the programmer has to know if a value is signed or unsigned, and the compiler can also keep track of it and warn the programmer, because C is a statically typed language.
Pike is not. Thus, adding a new type for unsigned integers is not only pointless, it defeats the purpose (speeding things up) rather nicely, and will with 99.999% probability slow things down even more than the bignum conversions.
/ Per Hedbor ()
I wouldn't go so far as to say that it would be slower than the bignum conversions, but it would surely be slower than using 64 bit integers all over.
/ Martin Stjernholm, Roxen IS
I am counting with the overhead in general.
Each and every place that uses integers in pike would have to check for unsigned and signed integers. Since integers smaller than 2^31 are presumably more common than integers between 2^31 and 2^32, the percentage of integers that would benefit is rather small compared to the number that would only get the penalties.
/ Per Hedbor ()
On Thu, Feb 27, 2003 at 12:05:03AM +0100, Per Hedbor () @ Pike (-) developers forum wrote:
Each and every place that uses integers in pike would have to check for unsigned and signed integers.
Why? And where? Why not just ignore this and let it behave like in C?
/Al
Because Pike, unlike C, isn't statically typed; each value carries its type in Pike. You can put an integer in a variable of type mixed and when you later read the variable there's still an integer in it; you'll get a complaint about the incorrect type if you try to index it like a string, while in C you'd get a segfault or worse.
In a (probably fairly far off) future we might have a compiler that is smart enough to exploit the types in the source to at least ignore the type info in the values, but that will never work all the way. There will always be many places where the code needs to check the type in the value to treat it correctly.
/ Martin Stjernholm, Roxen IS
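A toy tagged value in C may make the point concrete (an illustration only, not Pike's actual struct svalue or its operator code): every operation first has to look at the tag, and every extra integer flavour adds branches to exactly these hot paths.

/* Sketch of a dynamically typed value: a type tag plus a union of payloads.
 * add() must branch on the tags before it can touch the data, which is the
 * per-operation overhead discussed above. */
#include <stdio.h>

enum vtype { T_INT, T_FLOAT };

struct value {
    enum vtype type;    /* the tag every value carries around */
    union {
        long   i;
        double f;
    } u;
};

static struct value add(struct value a, struct value b)
{
    struct value r;
    if (a.type == T_INT && b.type == T_INT) {
        r.type = T_INT;
        r.u.i = a.u.i + b.u.i;
    } else {            /* any float operand promotes the result to float */
        r.type = T_FLOAT;
        r.u.f = (a.type == T_INT ? (double)a.u.i : a.u.f)
              + (b.type == T_INT ? (double)b.u.i : b.u.f);
    }
    return r;
}

int main(void)
{
    struct value x = { T_INT, { .i = 40 } };
    struct value y = { T_INT, { .i = 2 } };
    printf("%ld\n", add(x, y).u.i);   /* 42 */
    return 0;
}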
On Thu, Feb 27, 2003 at 12:00:01AM +0100, Per Hedbor () @ Pike (-) developers forum wrote:
Show me the programming language where 0x80000000 fits in a 32 bit integer.
Perl. For instance.
Please note that adding unsigned integers as a type requires you to add a signedness flag to the data, or simply use an extra bit as the sign bit; either way you get 33 bit integers. Why not go for 64 bit integers while you're at it?
Because I don't want to waste an extra 4 bytes.
Pike is not. Thus, adding a new type for unsigned integers is not only pointless, it defeats the purpose (speeding things up) rather nicely, and will with 99.999% probability slow things down even more than the bignum conversions.
Really? Why doesn't it in Perl? Or in C? Why not make pure N-bit integers instead of dynamically converted and expanded types? If I need _exactly_ N bits in an integer, how do I do that in Pike, without wasting time and memory?
Regards, /Al
Show me the programming language where 0x80000000 fits in a 32 bit integer.
Perl. For instance.
It would be interesting to know, since it would affect my coming responses quite a lot, whether you actually read the answers that we provide. The information that you want to store is +0x80000000, which in effect is 33 bits. You could either do it explicitly by using 33 bits for data storage or you could flag your storage elsewhere with one bit. The difference is to hide implementational details from the user (programmer), with the obvious drawback that the programmer might get the wrong idea of how the environment works. So, no, Perl doesn't do it either.
If I need _exactly_ N bits in an integer, how do I do that in Pike, without wasting time and memory?
You don't. Luckily you'll never end up in that situation. If you don't believe me then high level languages aren't the thing for you. Pick one with a lower degree of abstraction that you feel comfortable with. Your source will be bigger, your development cost measured in time and effort will be bigger, but so will your control over details.
/ Martin Nilsson (har bott i google)
On Thu, Feb 27, 2003 at 03:20:01AM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
The information that you want to store is +0x80000000, which in effect is 33 bits. You could either do it explicitly by using 33 bits for data storage or you could flag your storage elsewhere with one bit.
Sorry, but it seems that you misunderstood me. When I say "to store 0x80000000" it means, actually, to store 32 bits, without any sign.
How it _may_ be interpreted (in comparisons and other operators) is another story.
You don't. Luckily you'll never end up in that situation. If you don't believe me then high level languages aren't the thing for you. Pick one with a lower degree of abstraction that you feel comfortable with.
I need a mix, actually: a higher level of abstraction, but with the ability to control that level as I wish, and whenever I wish. Something that looks like a "black box" is not for me. I would accept Pike unconditionally but... there are huge performance problems in some cases.
So I resort to writing whatever needs performance in C, and the controlling part in Pike. My dream is an embeddable Pike - so I can wrap it into C (no, no, thanks - I dislike wrapping C into Pike).
Your source will be bigger, your development cost measured in time and effort will be bigger, but so will your control over details.
Not really. I can choose C++ or even C#. It won't be bigger, and the development cost will be comparable - at least, if we take two experienced developers, one who knows C++ and one who knows Pike, their efforts to implement something will be comparable. And, in this case, C++ will win - it is better known and more stable (there is a standard, at least, and there is no standard for Pike), and it is _faster_ (remember - we are talking about _experienced_ developers, so they know what they are doing, and Pike will never beat C/C++ in speed).
Regards, /Al
Not really. I can choose C++ or even C#. It won't be bigger, and the development cost will be comparable - at least, if we take two experienced developers, one who knows C++ and one who knows Pike, their efforts to implement something will be comparable. /.../
No way. Something like Roxen CMS would take at least ten times the effort to implement in C++. It's not that Pike is "great" and C++ "sucks", it's that C++ is statically typed and compiled whereas Pike is not. Languages like Perl, Python and Lisp would be comparable to Pike for an application of that kind.
/ Martin Stjernholm, Roxen IS
Is Pike written in C or in Pike?
Both.
I just can't believe that the language compiler can't be used without anything else.
Then you lack a fair amount of imagination. The way things are going, more and more of Pike is implemented in Pike.
/ Martin Nilsson (har bott i google)
Reminds me of java. I think "more and more in pike" can be dangerous - C is faster. Doing too much in Pike code will have negative impacts. Not that I see Pike having any issues with this.
/ David Hedbor
Some things are way easier to write in pike than in C, and not exactly time-critical. Typical examples include argument parsing and the resolver, and everything else that resides in the master, such as error handling.
Moving that over to C would be rather pointless.
/ Per Hedbor ()
Of course. I didn't say that it's not the case. However, things that would be harmful to Pike's performance would be if datatypes were to be implemented in Pike, such as mappings (as they are in Java). Also, those parts have always been written in Pike. I am wondering if Nilsson had any specific examples.
/ David Hedbor
Grubba's new iteration of the Pike compiler is written in Pike.
/ Martin Nilsson (har bott i google)
Now that does sound scary to me. What is the performance of that? Compilation, which I think is rather fast compared to many other languages, is still quite time consuming for larger applications (although, mind you, nowhere near where Java seems to be, despite byte compilation already being done).
/ David Hedbor
On Thu, Feb 27, 2003 at 12:25:03AM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
Grubba's new iteration of the Pike compiler is written in Pike.
May I ask - what for? It seems to me more like an exercise than a real thing... Something like a web server written in bash or postscript... :)
Regards, /Al
The Pike compiler is stretching its limits in terms of complexity and maintainability. In order to reduce code size and increase the feature set it makes sense to rewrite it in a higher level language than C. Grubba can hopefully provide some more specific reasons.
/ Martin Nilsson (har bott i google)
Another reason is that it's a good way to get the metaprogramming APIs.
Metaprogramming is writing programs that act on programs, say a program that implements a "synchronized" modifier with the necessary mutex handling and so on: It would get called during compilation at specific points, typically when "synchronized" is encountered in the program being compiled, and it would add necessary extra variables in classes and functions and the code to use them.
For that to work there must be a lot deeper interface with the compiler. Right now it's basically a black box where you put in a string that contains source code and get a program back. Metaprograms need to add new modifiers, inspect the symbols in a context, add symbols in programs and functions, manipulate types etc. That's a lot of API. Writing the compiler in the language is a good way to get them since the compiler itself needs those APIs. Parts of the compiler would probably need to be migrated to C again before it's used in production, say the low level type handling and the conversion to bytecode (if they're written in Pike to begin with).
Being able to write metaprograms in a high level language is very important to make it feasible to extend the language with high level features like persistence, transactions and aspects. The current compiler is too heavily hacked up to allow that. Even implementing a comparatively simple thing like the implicit create functions caused many tricky bugs that took too long to solve.
/ Martin Stjernholm, Roxen IS
Yes. The trick is obviously to know what code to move "up" and what code to move "down". But that is fundamental knowledge for everyone who optimizes code. And, as has been stated in this forum before, just because you move code up to a higher level doesn't mean that it will be slower. It is realistic to have as a goal that a Pike program should outperform a C program, given equal development cost.
/ Martin Nilsson (har bott i google)
It might have positive impacts too; more stuff in Pike might lead to better optimizations for running Pike code. :)
/ Mirar
Previous text:
2003-02-27 00:08: Subject: Re: 64 bit ints
Reminds me of Java. I think "more and more in Pike" can be dangerous - C is faster. Doing too much in Pike code will have negative impacts. Not that I see Pike having any issues with this.
/ David Hedbor
Heh,
I'd love to see the day that Pike is self-hosting, that is, that Pike can compile itself ;-))))
Greetings, Martin.
On Thu, Feb 27, 2003 at 12:10:02AM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
Is Pike written in C or in Pike?
Both.
OK, let me ask - the Pike core (the language itself, without any modules) cannot exist on its own, can it? There is a need to bootstrap the core, anyway, so some _basic_ stuff must be implemented in something other than Pike, right?
Then you lack a fair amount of imagination. The way things are going, more and more of Pike is implemented in Pike.
The future is now... Pike OS on Pike Hardware :) Something like this? :)
Regards, /Al
There is a need to bootstrap the core, anyway, so some _basic_ stuff must be implemented in something other than Pike, right?
Yes, the "feature set" of Pike gradualy expands during the different steps in the bootstrap process. In some files, like master.pike, it is fairly obvious that we don't deal with ordinary Pike (due to the various workarounds), while other involved files (like _Charset.pmod) looks like an ordinary Pike file, but has to be modified with care so that no unavailable feature are used.
Since this is fairly unrelated to the issue of uselessness of modules in the module tree, what do you really want to know? Since this is the forum for people developing Pike, this information is already common knowledge. I am happy to enlight interested newcomers, but I have better things to do than reciting Pike internals to someone who just entertains himself with finding syntactical flaws in the presentation.
/ Martin Nilsson (har bott i google)
Previous text:
2003-02-27 03:07: Subject: Re: 64 bit ints
On Thu, Feb 27, 2003 at 12:10:02AM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
Is Pike written in C or in Pike?
Both.
OK, let me ask - the Pike core (the language itself, without any modules) cannot exist on its own, can it? There is a need to bootstrap the core, anyway, so some _basic_ stuff must be implemented in something other than Pike, right?
Then you lack a fair amount of imagination. The way things are going, more and more of Pike is implemented in Pike.
The future is now... Pike OS on Pike Hardware :) Something like this? :)
Regards, /Al
/ Brevbäraren
On Thu, Feb 27, 2003 at 03:40:01AM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
knowledge. I am happy to enlighten interested newcomers, but I have better things to do than reciting Pike internals to someone who just entertains himself with finding syntactical flaws in the presentation.
Well, I'll make my point clear - I want to know what to expect from Pike development in the future. I like it (the language), but I want some features implemented. Some I can do on my own, of course, but I want to know what to expect from the Pike team - are they willing to accept something that they (personally) don't need? Are they open-minded? Will they try to convince me that I don't need "unsigned int", or just accept (for instance) my own version of this type for inclusion in future versions? And so on...
In the past, I heard something like "The PCRE is not good enough for Pike, that's why it will never be included into the distribution", but now it seems to be less strict. So - I just want to know, nothing more.
I hope it is clear (and fair) enough? :)
Regards, /Al
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type. Pike still hasn't recovered fully from the automatic int-bignum conversion rewrite. And that extension had a functional purpose, which unsigned int doesn't have. But you shouldn't use this specific issue as a litmus test for Pike development. Just because we don't want to change the most fundamental data type in the language throughout the entire core and all modules so that we get a slight memory gain that will be eaten up by the additional code and degrade the overall performance of the language doesn't mean that we aren't up to the next idea.
I'm not saying that we would actually implement it, since we don't even have time to realize our own ideas, but we don't forbid changes (as in this case) without evaluating them.
Regarding the PCRE issue the problem has not been that we don't want PCRE. While Pike was governed by Roxen the policy was that the Regexp engine should not be replaced with anything that couldn't handle wide strings. That was first only used to "prevent" internal replacements of the Regexp engine ("If we are to pay you for updating the Regexp engine we want wide string support"), but it was later used to protect the repository from what was perceived as badly designed and badly implemented code. I myself have neither seen the code nor the Pike API for the PCRE PExt, so I don't know if it is true today or ever was.
Regarding Bz2 (mentioned in an earlier message) the implementation of the Pike module Bz2 is not an ignorant act of code duplication. The Bz2 PExt was reviewed before implementation and Andreas Finnman made _two_ additional implementations as part of his thesis work. The module is only a by-product, chosen for its small API, while the real work was to evaluate the C module API.
/ Martin Nilsson (har bott i google)
Previous text:
2003-02-27 04:02: Subject: Re: 64 bit ints
On Thu, Feb 27, 2003 at 03:40:01AM +0100, Martin Nilsson (har bott i google) @ Pike (-) developers forum wrote:
knowledge. I am happy to enlighten interested newcomers, but I have better things to do than reciting Pike internals to someone who just entertains himself with finding syntactical flaws in the presentation.
Well, I'll make my point clear - I want to know what to expect from Pike development in the future. I like it (the language), but I want some features implemented. Some I can do on my own, of course, but I want to know what to expect from the Pike team - are they willing to accept something that they (personally) don't need? Are they open-minded? Will they try to convince me that I don't need "unsigned int", or just accept (for instance) my own version of this type for inclusion in future versions? And so on...
In the past, I heard something like "The PCRE is not good enough for Pike, that's why it will never be included into the distribution", but now it seems to be less strict. So - I just want to know, nothing more.
I hope it is clear (and fair) enough? :)
Regards, /Al
/ Brevbäraren
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type.
Whether it's a must for the Pike core or not I will refrain from commenting on. However, Alexander is free to write a class for handling unsigned integers. Sure - it is a lot of overhead, but if it is designed to primarily work on blocks of unsigned integers it might be of some value (did something similar half a year back).
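A single-value version of that idea could look something like this untested sketch (the name, the 32-bit width and the use of `+ are just picked for the illustration; a block-oriented design like the one described above would look different):

  class UInt32
  {
    int v;

    void create(int x) { v = x & 0xffffffff; }

    // Addition that wraps at 32 bits, like C unsigned arithmetic,
    // instead of growing into a bignum.
    object `+(int|object o)
    {
      return this_program((v + (intp(o) ? o : o->v)) & 0xffffffff);
    }
  }

  // (UInt32(0xffffffff) + 1)->v is 0, i.e. the value wraps around.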
/ Peter Lundqvist (disjunkt)
Previous text:
2003-02-27 04:50: Subject: Re: 64 bit ints
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type. Pike still hasn't recovered fully from the automatic int-bignum conversion rewrite. And that extension had a functional purpose, which unsigned int doesn't have. But you shouldn't use this specific issue as a litmus test for Pike development. Just because we don't want to change the most fundamental data type in the language throughout the entire core and all modules so that we get a slight memory gain that will be eaten up by the additional code and degrade the overall performance of the language doesn't mean that we aren't up to the next idea.
I'm not saying that we would actually implement it, since we don't even have time to realize our own ideas, but we don't forbid changes (as in this case) without evaluating them.
Regarding the PCRE issue the problem has not been that we don't want PCRE. While Pike was governed by Roxen the policy was that the Regexp engine should not be replaced with anything that couldn't handle wide strings. That was first only used to "prevent" internal replacements of the Regexp engine ("If we are to pay you for updating the Regexp engine we want wide string support"), but it was later used to protect the repository from what was perceived as badly designed and badly implemented code. I myself have neither seen the code nor the Pike API for the PCRE PExt, so I don't know if it is true today or ever was.
Regarding Bz2 (mentioned in an earlier message) the implementation of the Pike module Bz2 is not an ignorant act of code duplication. The Bz2 PExt was reviewed before implementation and Andreas Finnman made _two_ additional implementations as part of his thesis work. The module is only a by-product, chosen for its small API, while the real work was to evaluate the C module API.
/ Martin Nilsson (har bott i google)
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type.
It might be possible to compile Pike using unsigned INT32 as INT_TYPE. Maybe most of the necessary checks are already there... Hmm. It might not be what one really wants, though. :)
/ Mirar
Previous text:
2003-02-27 04:50: Subject: Re: 64 bit ints
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type. Pike still hasn't recovered fully from the automatic int-bignum conversion rewrite. And that extension had a functional purpose, which unsigned int doesn't have. But you shouldn't use this specific issue as a litmus test for Pike development. Just because we don't want to change the most fundamental data type in the language throughout the entire core and all modules so that we get a slight memory gain that will be eaten up by the additional code and degrade the overall performance of the language doesn't mean that we aren't up to the next idea.
I'm not saying that we would actually implement it, since we don't even have time to realize our own ideas, but we don't forbid changes (as in this case) without evaluating them.
Regarding the PCRE issue the problem has not been that we don't want PCRE. While Pike was governed by Roxen the policy was that the Regexp engine should not be replaced with anything that couldn't handle wide strings. That was first only used to "prevent" internal replacements of the Regexp engine ("If we are to pay you for updating the Regexp engine we want wide string support"), but it was later used to protect the repository from what was perceived as badly designed and badly implemented code. I myself have neither seen the code nor the Pike API for the PCRE PExt, so I don't know if it is true today or ever was.
Regarding Bz2 (mentioned in an earlier message) the implementation of the Pike module Bz2 is not an ignorant act of code duplication. The Bz2 PExt was reviewed before implementation and Andreas Finnman made _two_ additional implementations as part of his thesis work. The module is only a by-product, chosen for its small API, while the real work was to evaluate the C module API.
/ Martin Nilsson (har bott i google)
Of course, the consequence would be that all negative numbers would have to be represented as bignums. I kind of doubt that performance would increase... ;-)
/ Marcus Comstedt (ACROSS) (Hail Ilpalazzo!)
Previous text:
2003-02-27 08:11: Subject: Re: 64 bit ints
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type.
It might be possible to compile Pike using unsigned INT32 as INT_TYPE. Maybe most of the necessary checks are already there... Hmm. It might not be what one really wants, though. :)
/ Mirar
I saw you added experimental configure options for it. I suspect the really big problem would be all the builtin functions that take or return (small) negative integers without having any bignum support in them.
/ Martin Stjernholm, Roxen IS
Previous text:
2003-02-27 08:11: Subject: Re: 64 bit ints
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type.
It might be possible to compile Pike using unsigned INT32 as INT_TYPE. Maybe most of the necessary checks are already there... Hmm. It might not be what one really wants, though. :)
/ Mirar
Yes, I didn't get very far. :)
#define MAX_INT et al also only tests the size of the int, not the signedness.
/ Mirar
Previous text:
2003-02-28 01:53: Subject: Re: 64 bit ints
I saw you added experimental configure options for it. I suspect the really big problem would be all the builtin functions that take or return (small) negative integers without having any bignum support in them.
/ Martin Stjernholm, Roxen IS
Regarding Bz2 (mentioned in an earlier message) the implementation of the Pike module Bz2 is not an ignorant act of code duplication. [...] The module is only a by-product, chosen for its small API, while the real work was to evaluate the C module API.
And to document it, no less. The results of that have been available on the Pike site since yesterday - see the PDFs and perhaps the tarball at http://pike.ida.liu.se/projects/docs/cmods/downloads.xml for the full story. Some parts of the APIs are still undocumented, but the general situation has much improved.
/ Johan Sundström (a hugging punishment!)
Previous text:
2003-02-27 04:50: Subject: Re: 64 bit ints
No, I don't think you'll be able to find someone willing to rewrite the entire Pike code to facilitate an unsigned int type. Pike still hasn't recovered fully from the automatic int-bignum conversion rewrite. And that extension had a functional purpose, which unsigned int doesn't have. But you shouldn't use this specific issue as a litmus test for Pike development. Just because we don't want to change the most fundamental data type in the language throughout the entire core and all modules so that we get a slight memory gain that will be eaten up by the additional code and degrade the overall performance of the language doesn't mean that we aren't up to the next idea.
I'm not saying that we would actually implement it, since we don't even have time to realize our own ideas, but we don't forbid changes (as in this case) without evaluating them.
Regarding the PCRE issue the problem has not been that we don't want PCRE. While Pike was governed by Roxen the policy was that the Regexp engine should not be replaced with anything that couldn't handle wide strings. That was first only used to "prevent" internal replacements of the Regexp engine ("If we are to pay you for updating the Regexp engine we want wide string support"), but it was later used to protect the repository from what was perceived as badly designed and badly implemented code. I myself have neither seen the code nor the Pike API for the PCRE PExt, so I don't know if it is true today or ever was.
Regarding Bz2 (mentioned in an earlier message) the implementation of the Pike module Bz2 is not an ignorant act of code duplication. The Bz2 PExt was reviewed before implementation and Andreas Finnman made _two_ additional implementations as part of his thesis work. The module is only a by-product, chosen for its small API, while the real work was to evaluate the C module API.
/ Martin Nilsson (har bott i google)
Didn't I post benchmarks earlier? 8 byte svalues are faster than 12 byte svalues; but if svalues are 12 byte anyway (because of 64 bit pointers or floats), there is not much performance loss in using 64 bit integers, and much performance gain in any operation with integers >MAXINT32.
/ Mirar
Previous text:
2003-02-26 20:53: Subject: Re: 64 bit ints
What would be gained with that? Waste less memory?
/ Martin Stjernholm, Roxen IS
Yes. The most common operation is probably to use a protocol with 32 bit integers; sscanf "%4c" will create bignums if the result is bigger than MAXINT. (It's one of the benchmark tests now.)
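For illustration (assuming a build with a signed 32-bit INT_TYPE):

  int x;
  sscanf("\xff\xff\xff\xff", "%4c", x);
  // x is now 4294967295, which doesn't fit in a signed 32-bit int,
  // so it has to be stored as a bignum.

With 64 bit ints it would stay a plain integer.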
/ Mirar
Previous text:
2003-02-26 20:14: Subject: Re: Floating point (conversion) bug (affected: v7.4 & v7.5; may be Hilfe only)
Of course, if you do a lot of calculations on integers that fit in 64 bits but not 32, using 64 bit ints will be a lot faster even on 32 bit machines. The bignum support isn't fast.
/ David Hedbor
Well, what I see is (at least) two (or three?) separate issues:
* sprintf("%f",...) broken for large floats (but small enough to fit in a double):
Pike v7.4 release 13 running Hilfe v3.5 (Incremental Pike Frontend)
sprintf("%e", (float)"1e300");
(1) Result: "1.000e+300"
sprintf("%f", (float)"1e300");
Segmentation fault (core dumped)
..while "%g" handles this one well.
* no chance (that I could discover) of correctly printing a float that is too big for a double, but fits in a long double (pike compiled with --with-long-double-precision):
sprintf("%e", (float)"1e308");
(13) Result: "1.000e+308"
sprintf("%e", (float)"1e309");
(14) Result: "3.403e+38"
* sprintf("%O",...) fails on big floats (but still within the range of a double)
sprintf("%O", (float)"1e300");
Segmentation fault (core dumped)
FWIW, on my system (debian gnu/linux, x86) float.h defines DBL_MAX_10_EXP as 308, and LDBL_MAX_10_EXP as 4932.
/ rjb
Previous text:
2003-02-26 11:14: Subject: Floating point (conversion) bug (affected: v7.4 & v7.5; may be Hilfe only)
Quite possibly. I wonder why it doesn't like it. I still consider that a bug, though, since 1e1000 work.
/ Mirar
Ummm well, as others smarter than myself noticed, at least part of the problem is in casting a string to float:
(float)"1e308" < (float)"1e309";
(3) Result: 0
(float)"1e308" < ((float)"1e308")*10;
(4) Result: 1
(((float)"1e308")*10)/(float)"1e308";
(7) Result: 10.000000
That still leaves the segfaults to be accounted for, though.
/ rjb
Previous text:
2003-02-26 14:06: Subject: Floating point (conversion) bug (affected: v7.4 & v7.5; may be Hilfe only)
Well, what I see is (at least) two (or three?) separate issues:
- sprintf("%f",...) broken for large floats (but small enough to fit
in a double):
Pike v7.4 release 13 running Hilfe v3.5 (Incremental Pike Frontend)
sprintf("%e", (float)"1e300");
(1) Result: "1.000e+300"
sprintf("%f", (float)"1e300");
Segmentation fault (core dumped)
..while "%g" handles this one well.
- no chance (that I could discover) of correctly printing a float that
is too big for a double, but fits in a long double (pike compiled with --with-long-double-precision):
sprintf("%e", (float)"1e308");
(13) Result: "1.000e+308"
sprintf("%e", (float)"1e309");
(14) Result: "3.403e+38"
- sprintf("%O",...) fails on big floats (but still within the range of
a double)
sprintf("%O", (float)"1e300");
Segmentation fault (core dumped)
FWIW, on my system (debian gnu/linux, x86) float.h defines DBL_MAX_10_EXP as 308, and LDBL_MAX_10_EXP as 4932.
/ rjb