Bug 39284 - error in relational expression involving divide and double
Product: Red Hat Linux
Classification: Retired
Component: gcc
i686 Linux
Severity: medium
Assigned To: Jakub Jelinek
David Lawrence
Depends On:
Reported: 2001-05-06 15:40 EDT by Bill Hayman
Modified: 2007-04-18 12:33 EDT (History)
0 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2001-05-06 15:41:01 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Bill Hayman 2001-05-06 15:40:57 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.2.16-22 i686)

Description of problem:
An expression using > to compare a double quotient to a double value
evaluates true when the quotient is equal to the value. The behaviour
occurs with the default optimization setting (no -O).

How reproducible:

Steps to Reproduce:
1.  Here is the C code in a file named "divprob5.c":

#include <stdio.h>

int main(
   int          argc,
   char*        argv[]
)
{
   double       tot;
   double       cnt;

   cnt = 99.0;
   tot = 100.0;
   if (cnt/tot > .99) {
      (void)fprintf(stderr, "%.0f/%.0f > .99    FALSE\n", cnt, tot);
   } else {
      (void)fprintf(stderr, "%.0f/%.0f <= .99   TRUE\n", cnt, tot);
   }

   return 0;
}
2. Here is a script to run the test in a file named "doit5.sh":

# compile w/o optimization
gcc -g -o divprob5N divprob5.c
./divprob5N > divprob5N.out 2>&1

# compile with optimization
gcc -g -O -o divprob5O divprob5.c
./divprob5O > divprob5O.out 2>&1

# disassemble code w/o optimization
objdump -d -l -S divprob5N > divprob5N.dmp

# disassemble code with optimization
objdump -d -l -S divprob5O > divprob5O.dmp

3. Put divprob5.c and doit5.sh in a directory, run doit5.sh, and 
compare the results of the two .out files.

Actual Results:  [semagon rwh]$ more divprob5N.out
99/100 > .99    FALSE
[semagon rwh]$ more divprob5O.out
99/100 <= .99   TRUE

Expected Results:  Both cases should report 99/100 <= .99

Additional info:

From dmesg:
CPU: Intel Pentium III (Coppermine) stepping 03
Checking 386/387 coupling... OK, FPU using exception 16 error reporting.

From gcc -v:
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 20000731 (Red Hat Linux 7.0)
Comment 1 Jakub Jelinek 2001-05-07 06:29:53 EDT
This is a flaw in the testcase. You should be aware of the limitations of
storing a decimal fraction (such as .99) in binary floating point.
Another problem here is the flawed design of the Intel FPU, where all
computation is done internally in long double precision.
In your example in particular, the division is done in long double precision,
but the result is then compared with 0.99 (a double-precision constant), which
is smaller than 0.99L.
If the compiler had to make sure this works, it would have to store the result
of the division into memory, then load it again and do the comparison. IA-32
would then be essentially unusable for floating point.
BTW: egcs 1.1.x, gcc-2.95.x, gcc-2.96-RH, gcc-3_0-branch and gcc CVS head (3.1)
all give the same results.
If you want to do this kind of comparison, you should subtract the two
numbers and check whether the difference is below some epsilon.
Comment 2 Bill Hayman 2001-05-07 10:37:50 EDT
Thanks for the explanation.  The code in question was distilled from
production code that runs identically on AIX, SGI, and Solaris.  My problem
was really that a regression test was giving different results, and I was
almost to the point of thinking my machine's FPU was bad.

While I'm aware of the problems of working with floating point, IEEE 754
does promise consistent and predictable results at a given level of
precision regardless of architecture.  Now I see that there are different
levels of precision involved.

Your suggestion, which is standard practice for people working with integral
values in real types, isn't applicable here.  In this instance, I got
consistent results by changing the code to:
   to_percent = 100.0/total;

