Bug 37811
| Field | Value |
|---|---|
| Summary | GCC gives different math results on sun and intel |
| Product | [Retired] Red Hat Linux |
| Component | gcc |
| Version | 7.1 |
| Hardware | i386 |
| OS | Linux |
| Status | CLOSED UPSTREAM |
| Severity | high |
| Priority | high |
| Reporter | Wade Minter <minter> |
| Assignee | Jakub Jelinek <jakub> |
| QA Contact | David Lawrence <dkl> |
| CC | teg |
| Doc Type | Bug Fix |
| Last Closed | 2004-10-01 22:03:38 UTC |
Description
Wade Minter
2001-04-26 13:25:33 UTC
Created attachment 16485 [details]
C program to generate the bug - different output on sun vs. intel
Created attachment 16486 [details]
Sun assembly language generated from dan.c on sparc.
Created attachment 16487 [details]
Intel assembly code generated from dan.c
Just for reference, running the compiled version of dan.c on a Sun (gcc 2.95.2) gives:

```
[minter@therock ]$ ./dan
width = uWidth * uuToDb = 360
width2 = (double) (uWidth * uuToDb) = 360.00000000000000000000
```

Running it on an Intel machine (Red Hat Linux 7.1, gcc 2.96) gives:

```
[wminter@stonecold wminter]$ ./dan
width = uWidth * uuToDb = 359
width2 = (double) (uWidth * uuToDb) = 360.00000000000000000000
```

This isn't a bug: it's a demonstration of the fact that binary floating-point numbers aren't (usually) exact. You can't map an infinite set of numbers onto a finite set of bit patterns, so some accuracy is lost. And if the result is 359.99999999999999999999999999999999999999999999999999, converting it to an int will make it 359, because the conversion truncates. That you get different results on different CPUs is to be expected; you could also get different results from different compilers if the instructions were ordered slightly differently (but still correctly).

Just a clarification before closing. Our engineer asks:

#####

The problem is that on Linux the statement

```
i1 = r1 = i*r;
```

gives a different answer to

```
r1 = i*r;
i1 = r1;
```

whereas on the Sun both give the same answer.

Looking at the assembly code for Linux we get

```
fmull -16(%ebp)
fstl -32(%ebp)
```

for the first, and

```
fmulp %st,%st(1)
fstpl -32(%ebp)
fldl -32(%ebp)
```

for the second.

If we change the assembly code for the second to

```
fmulp %st,%st(1)
fstl -32(%ebp)
```

we get the same answer.

What we want to know is why the value in the first case (from fstl) is different from the second (from fldl), and whether the compiler can be changed to get around this (Pentium?) problem.

#####

So, it appears that the compiler may be generating an incorrect assembly instruction, or possibly a suboptimal one. I can attach the code samples if you need them.

The difference comes from the x87's excess precision: fstl stores a copy of st(0) rounded to 64-bit double but leaves the full 80-bit extended value on the register stack, so the later int conversion sees the slightly-below-360 extended value, whereas the fstpl/fldl pair round-trips the result through a 64-bit memory slot, rounding it to exactly 360.0. The gcc bugzilla tracker for this problem is http://gcc.gnu.org/PR323. This is a known problem with gcc spill code generation.
You can work around the problem with -ffloat-store, or with -mfpmath=sse -msse2. The latter should also get you the same numerical results as the Sun, but it requires fairly modern hardware (a Pentium 4 or Athlon).