On Thu, Nov 03, 2005 at 11:06:57AM +0100, Ralf Stephan wrote:
> style: OK (all 18 passed)
Thanks. Apparently Ted was testing on a ppc machine. We tried to figure it out last night, but I'm still mystified.
Update: Have made some more progress today, with Michael's help.
Apparently there is a bug in gcc 4.0 on ppc with -O2: even a simple multiplication of `(double) 0xb28 * 0x66' (which should be exact even with float or unsigned arithmetic) gets the wrong answer.
More precisely:
$ (echo '#include <stdio.h>';
   echo 'int main() { printf("%a\n", (double) 0xb28 * 0x66); }') > tst.c \
  && gcc-4.0 -O2 tst.c && ./a.out
0x1.1c7c07d8802a6p+18
whereas using -O1 or -O0 gets the correct result of 0x1.1c7cp+18.
However, strangely, g++-4.0 -O2 does get the correct result on that machine. So I don't yet have a definitive answer as to why style-test gets the wrong answer; but the bug in Apple's gcc-4.0 -O2 on Darwin/PPC, together with the analysis below of what should happen, is a reasonable indication that the style-test failure is caused by a compiler bug.
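(With the same tst.c as above, that presumably corresponds to something like

  $ g++-4.0 -O2 tst.c && ./a.out
  0x1.1c7cp+18

since g++ compiles .c files as C++.)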
The remainder of this message was written before discovering this compiler bug. You can probably ignore it.
pjrm.
The calculations involved are as follows, I believe:
  unsigned a = unsigned(0.7 * 0xff0000 + 0.5);
  unsigned b = unsigned(0.4 * 0xff0000 + 0.5);
  unsigned m = unsigned((double) a * b / 0xff0000 + .5);
where unsigned(expr + .5) is of course an idiom for rounding expr to an integer (assuming expr is in the range of unsigned, as is the case here).
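(For example, in exact arithmetic, unsigned(4679270.4 + .5) gives 4679270, while unsigned(4679270.6 + .5) gives 4679271.)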
0xff0000 is a multiple of ten, so I'd expect a and b to be calculated as exactly 0xff0000 * 7/10 and 0xff0000 * 4/10 (their values in exact arithmetic), i.e. a = 0xb28000 and b = 0x660000, even though 0.7 and 0.4 can't be exactly represented as doubles. (Update: Michael's just confirmed that these are calculated exactly on his ppc machine.)
a and b are each less than 2**24, so their product in exact arithmetic would fit in 48 bits, so I'd expect (double) a * b to be calculated exactly, assuming IEEE doubles with 53 bits of significand. (Casting has higher precedence than multiplication, so this is ((double) a) * b.)
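(If my arithmetic is right, that product is 0xb28000 * 0x660000 = 0x471f00000000, which needs only 47 bits; note that this is just the test multiplication from the start of this message, 0xb28 * 0x66, scaled by 2**28.)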
The quotient a * b / 0xff0000 in exact arithmetic is 0x476666.666...; I'd have expected floating point to calculate it as 0x476666.66666668. Adding decimal 0.5 (i.e. 0x0.8) gives 0x476666.e6666668. (My x86 box does indeed give this.)
Casting to unsigned (which truncates towards zero) should then give 0x476666.
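For anyone wanting to check the above on their own machine, here's a quick throwaway program (my own sketch, not part of style-test; names invented) that prints each intermediate value:

  #include <stdio.h>

  int main()
  {
      unsigned a = unsigned(0.7 * 0xff0000 + 0.5);  /* expect 0xb28000 */
      unsigned b = unsigned(0.4 * 0xff0000 + 0.5);  /* expect 0x660000 */
      double prod = (double) a * b;     /* expect this to be exact */
      double quot = prod / 0xff0000;    /* expect 0x476666.66666668 */
      double sum = quot + .5;           /* expect 0x476666.e6666668 */
      unsigned m = unsigned(sum);       /* expect 0x476666 */
      printf("a=%#x b=%#x\n", a, b);
      printf("prod=%a quot=%a sum=%a\n", prod, quot, sum);
      printf("m=%#x\n", m);
      return 0;
  }

(Compile it as C++, since the unsigned(...) casts are C++ syntax; and note that %a prints values in normalized form rather than as written in the comments above.)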