From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.2.16-22 i686)

Description of problem:
An expression using > to compare a double quotient to a double value
evaluates as true when the quotient is equal to the value. The
behaviour occurs at the default (no -O) optimization setting.

How reproducible:
Always

Steps to Reproduce:
1. Here is the C code in a file named "divprob5.c":
-------------------------------------------------------------
#include <stdio.h>

int main( int argc, char* argv[] )
{
    double tot;
    double cnt;

    cnt = 99.0;
    tot = 100.0;
    if (cnt/tot > .99) {
        (void)fprintf(stderr, "%.0f/%.0f > .99 FALSE\n", cnt, tot);
    } else {
        (void)fprintf(stderr, "%.0f/%.0f <= .99 TRUE\n", cnt, tot);
    }
    return 0;
}
-------------------------------------------------------------
2. Here is a script to run the test in a file named "doit5.sh":
-------------------------------------------------------------
#!/bin/sh
# compile without optimization
gcc -g -o divprob5N divprob5.c
./divprob5N > divprob5N.out 2>&1
# compile with optimization
gcc -g -O -o divprob5O divprob5.c
./divprob5O > divprob5O.out 2>&1
# disassemble code compiled without optimization
objdump -d -l -S divprob5N > divprob5N.dmp
# disassemble code compiled with optimization
objdump -d -l -S divprob5O > divprob5O.dmp
-------------------------------------------------------------
3. Put divprob5.c and doit5.sh in a directory, run doit5.sh, and
compare the two .out files.

Actual Results:
[semagon rwh]$ more divprob5N.out
99/100 > .99 FALSE
[semagon rwh]$ more divprob5O.out
99/100 <= .99 TRUE

Expected Results:
Both cases should report 99/100 <= .99

Additional info:
From dmesg:
CPU: Intel Pentium III (Coppermine) stepping 03
Checking 386/387 coupling... OK, FPU using exception 16 error reporting.

From gcc -v:
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 20000731 (Red Hat Linux 7.0)
This is a flaw in the testcase. You should be aware of the limitations
of storing a number whose fractional part is not exactly representable
in binary (such as .99) in a floating-point type. Another problem here
is the design of the Intel FPU, where all computation is done
internally in long double (80-bit extended) precision. In your example
specifically, the division is done in long double precision, but you
then compare the result against 0.99 in double precision, which is
smaller than 0.99L. If the compiler had to guarantee that this works,
it would have to store the result of the division to memory and then
load it again before doing the comparison; IA-32 floating point would
then be far too slow to be usable. BTW: egcs 1.1.x, gcc-2.95.x,
gcc-2.96-RH, gcc-3_0-branch and gcc CVS head (3.1) all give the same
results. If you want to do this kind of comparison, you should
basically subtract the two numbers and check whether the difference is
below some epsilon.
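To make that concrete, here is a minimal sketch of the epsilon
approach applied to the original testcase. The tolerance 1e-9 is an
illustrative choice, not a recommendation; a real tolerance should be
picked from the precision of your data:
-------------------------------------------------------------
#include <stdio.h>
#include <math.h>

int main( int argc, char* argv[] )
{
    double tot = 100.0;
    double cnt = 99.0;
    double eps = 1e-9;   /* illustrative tolerance */
    double q   = cnt/tot;

    if (fabs(q - .99) < eps) {
        /* difference below eps: treat the quotient as equal to .99 */
        (void)fprintf(stderr, "%.0f/%.0f == .99 (within eps)\n",
                      cnt, tot);
    } else if (q > .99) {
        (void)fprintf(stderr, "%.0f/%.0f > .99\n", cnt, tot);
    } else {
        (void)fprintf(stderr, "%.0f/%.0f < .99\n", cnt, tot);
    }
    return 0;
}
-------------------------------------------------------------
Any spurious difference introduced by the x87's extended precision is
many orders of magnitude smaller than the tolerance, so this test gives
the same answer whether or not the quotient is spilled to memory.
Alternatively, assigning the quotient to a double variable and
compiling with gcc's -ffloat-store option forces the store-and-reload
behaviour described above, at a performance cost.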
Thanks for the explanation. The code in question was distilled from
production code that runs identically on AIX, SGI, and Solaris. My real
problem was that a regression test was giving different results, and I
was almost at the point of concluding my machine's FPU was bad. While
I'm aware of the pitfalls of working with floating point, IEEE 754 does
advertise consistent and predictable results at a given level of
precision regardless of architecture; now I see that there are
different levels of precision involved here. Your suggestion, which is
standard practice when holding integral values in real types, isn't
applicable in this instance. I got consistent results by changing the
code from

    part/total < .99;

to

    to_percent = 100.0/total;
    part*to_percent < 99.0;

Thanks, Bill
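For reference, here is a minimal sketch of that rewrite in the shape of
the original testcase. The names part, total, and to_percent come from
the comment above; the surrounding program is a hypothetical
reconstruction, not the actual production code:
-------------------------------------------------------------
#include <stdio.h>

int main( int argc, char* argv[] )
{
    double total = 100.0;
    double part  = 99.0;
    double to_percent;

    /* scale to a percentage and compare against 99.0, which is
       exactly representable, instead of comparing an inexact
       quotient against the inexact double constant .99 */
    to_percent = 100.0/total;
    if (part*to_percent < 99.0) {
        (void)fprintf(stderr, "%.0f/%.0f < .99\n", part, total);
    } else {
        (void)fprintf(stderr, "%.0f/%.0f >= .99\n", part, total);
    }
    return 0;
}
-------------------------------------------------------------
For inputs like these, 100.0/total is exactly 1.0 and part*to_percent
is exactly 99.0 in both double and extended precision, so the
comparison comes out the same at any optimization level.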