On Mon, Sep 15, 2003 at 05:10:04PM -0400, Martin Nilsson (saturator) @ Pike (-) developers forum wrote:
> In the calculation VERY_BIG_VALUE - VERY_SMALL_VALUE - VERY_BIG_VALUE you will end up with 0.0, and not VERY_SMALL_VALUE, which would be the correct result.
Not really. It depends on the number of significant digits in both values, and on the precision of the floating point used in the calculations (which is irrelevant in our case).
But the same applies to integers (at least to fixed-length 8/16/32/64/128-bit integers and so on), unless you use MP integers with arbitrary precision - it is even worse, in fact, since it is rather strange to get a very big value when subtracting one from zero (in the case of unsigned ints), or when VERY_BIG_VALUE exceeds the precision of the integer type.
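For illustration, a minimal C sketch (the specific values are arbitrary, chosen only to make the effect visible) showing both the float cancellation and the unsigned wrap-around:

  #include <stdio.h>

  int main(void)
  {
      /* Catastrophic cancellation: the small value is simply lost,
         because it falls below the precision of the big one. */
      double very_big = 1e20, very_small = 1.0;
      printf("%g\n", very_big - very_small - very_big);  /* prints 0, not -1 */

      /* Fixed-width integers are no better behaved at their limits:
         subtracting one from zero wraps around to a very big value. */
      unsigned int zero = 0;
      printf("%u\n", zero - 1);  /* typically prints 4294967295 */

      return 0;
  }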
> [...] is, from a bit-for-bit perspective, identical to the constant 0.0; the former is just an approximation.
It is no more an approximation than 1/3 == 0 is.
> And although 2.0 == 2.0, it is not at all certain that 1.0/7.0 == 10.0/70.0 and 2 == 2.0 in the general case, due to the nature of floats.
It is certain, at least up to several bits of precision (and it is rare to have less than 16 bits of precision in a float). The nature (and implementation) of floating-point operations ensures that 10.0/70.0 == 1.0/7.0, and even that sqrt(x*x) == x (try this with any values you like - eventually you will find one that doesn't work, but again, that will be a matter of precision, which is irrelevant here).
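A quick C check of those identities (the sample values below are arbitrary, picked only to illustrate the point; link with -lm):

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      /* Both divisions round the same exact rational value 1/7,
         so the results compare equal bit for bit. */
      printf("1.0/7.0 == 10.0/70.0 : %s\n",
             1.0 / 7.0 == 10.0 / 70.0 ? "true" : "false");

      /* sqrt(x*x) == x holds for most inputs; where it fails, the
         difference is down in the last bits of the mantissa. */
      double samples[] = { 2.0, 3.14159, 1.0 / 7.0, 123456.789 };
      for (int i = 0; i < 4; i++) {
          double x = samples[i];
          printf("sqrt(%g * %g) == %g : %s\n",
                 x, x, x, sqrt(x * x) == x ? "true" : "false");
      }
      return 0;
  }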
Well, I'll skip my usual sarcasm and ask directly - is there any practical reason (and arguments) against my proposal? I have already presented several _arguments_, but in return I see mostly opinions (not only opinions, admittedly, but at least several negative _opinions_) without arguments.
Regards, /Al