Bug 770437 - gcc quad precision is broken
Summary: gcc quad precision is broken
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Fedora
Classification: Fedora
Component: gcc
Version: 16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Jakub Jelinek
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-12-26 14:29 UTC by Need Real Name
Modified: 2011-12-26 19:28 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-26 17:42:26 UTC
Type: ---


Attachments
test_vars.c (315 bytes, text/x-csrc)
2011-12-26 14:31 UTC, Need Real Name
no flags
test_vars.f (172 bytes, text/plain)
2011-12-26 14:32 UTC, Need Real Name
no flags
java test (568 bytes, text/plain)
2011-12-26 14:33 UTC, Need Real Name
no flags

Description Need Real Name 2011-12-26 14:29:48 UTC
printf quad/double output in C is probably broken.
E.g. 2/7 is exactly 2.85714(285714)e-01: the digit group 285714 repeats.
in C:
quad2/7=2.85714285714285714281842135098266055592830525711178779602050781250000e-01
dble2/7=2.85714285714285698425385362497763708233833312988281250000000000000000e-01

in gFortran:
quad 2/7=0.28571428571428571428571428571428570052910000000000000000E+00
in java BigDecimal 128 bit double:
quad 2/7=0.2857142857142857142857142857142857

The problem: printf adds too many incorrect digits.
Fortran produces far fewer incorrect digits.
Java produces none.

Code attached
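
(Not the actual attachment: a minimal sketch of the kind of C test described above, with assumed variable names and an assumed precision of 70 digits.)

#include <stdio.h>

int main(void)
{
    long double quad = 2.0L / 7.0L;  /* the report calls long double "quad" */
    double      dble = 2.0  / 7.0;

    printf("quad2/7=%.70Le\n", quad);
    printf("dble2/7=%.70e\n", dble);
    return 0;
}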

Comment 1 Need Real Name 2011-12-26 14:31:02 UTC
Created attachment 549598 [details]
test_vars.c

gcc -Wall test_vars.c -lm

Comment 2 Need Real Name 2011-12-26 14:32:03 UTC
Created attachment 549599 [details]
test_vars.f

gfortran -Wall test_vars.f

Comment 3 Need Real Name 2011-12-26 14:33:36 UTC
Created attachment 549600 [details]
java test

javac -g test_java_arbitr_precision.java
java test_java_arbitr_precision

Comment 4 Jakub Jelinek 2011-12-26 17:42:26 UTC
Why do you think so?  285714 * 7 is 1999998, not 2000000, so 2.0 / 7.0 is obviously not exactly 2.85714 in infinite precision.  Long double on x86_64/i686 is not the IEEE quad format, so it is unclear why you talk about quad precision at all; if you want quad precision, you need to use the __float128 type instead and link with -lquadmath.  And printf prints the number of digits you ask for.  You can use %La etc. format strings to print the number in hexadecimal and see all the bits exactly.
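
For reference, a minimal sketch of the approach suggested above (the file name and the chosen digit count are illustrative, not from this report):

#include <stdio.h>
#include <quadmath.h>

int main(void)
{
    long double ld = 2.0L / 7.0L;  /* 80-bit x87 extended on x86_64, not IEEE quad */
    __float128  q  = 2.0Q / 7.0Q;  /* true 113-bit IEEE quad precision */
    char buf[128];

    printf("long double (hex): %La\n", ld);           /* exact bits, no decimal expansion */
    quadmath_snprintf(buf, sizeof buf, "%.33Qe", q);  /* quad carries ~33 significant decimal digits */
    printf("__float128: %s\n", buf);
    return 0;
}

Build with: gcc -Wall test_quad.c -lquadmath (test_quad.c is a placeholder name).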

Comment 5 Need Real Name 2011-12-26 18:42:07 UTC
1. I wrote 2.85714(285714); this means the infinite periodic series
2.85714285714285714285714285714285714285714...
The brackets () are the standard notation for repeating decimals, see e.g.
http://en.wikipedia.org/wiki/Repeating_decimal#Notation

2. Sorry, I thought that long double was the
IEEE 754R Decimal128 format with 34 digits.

3. I still think printf is wrong.
Most other languages (Java, Fortran, etc.) start printing zeros once the
requested number of digits is significantly larger than the precision of the value.
In C, double 2/7 is printed as
2.85714285714285698425385362497763708233833312988281250000000000000000e-01
2.857142857142857xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

quad 2/7 is printed as
2.85714285714285714281842135098266055592830525711178779602050781250000e-01
2.857142857142857142857142857142857xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

In the double case I see no point in printing 36 incorrect digits
before the zeros start; in the quad case I see no point in printing
31 incorrect digits before the zeros start.

Comment 6 Jakub Jelinek 2011-12-26 19:28:12 UTC
Ok, I wasn't aware of the bracket notation for a repeating period.
Anyway, in neither IEEE 754 double nor IEEE 854 extended double is 2.0/7.0 a periodic number:
in double it is 0x1.2492492492492p-2 and in the x86_64 long double
0x1.2492492492492492p-2.  If you ask printf for more decimal digits, it just keeps dividing the remainder by 10 and appending digits.  There are algorithms for generating the shortest decimal number that still represents a given binary number, but glibc printf doesn't use them: no standard requires it, it would slow things down, and either way you get some decimal number that represents the given binary value.  The IEEE 754R _Decimal{32,64,128} formats are completely different types; they are also supported by gcc, but on most CPUs they are completely software emulated and thus much slower.
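
A quick way to check the hex values quoted above (a sketch; glibc may normalize the leading hex digit of the long double differently, but the bit pattern is the same):

#include <stdio.h>

int main(void)
{
    double d       = 2.0 / 7.0;
    long double ld = 2.0L / 7.0L;

    printf("double: %a\n", d);        /* 0x1.2492492492492p-2 on x86_64 */
    printf("long double: %La\n", ld); /* same value as 0x1.2492492492492492p-2 */
    printf("round-trip: %.17g\n", d); /* 17 significant digits reproduce a double exactly */
    return 0;
}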

