C's printf probably has broken quad/double output. E.g. 2/7 is exactly 2.85714(285714)e-01, i.e. the digits 285714 repeat forever.

in C:
quad 2/7 = 2.85714285714285714281842135098266055592830525711178779602050781250000e-01
dble 2/7 = 2.85714285714285698425385362497763708233833312988281250000000000000000e-01
in gFortran:
quad 2/7 = 0.28571428571428571428571428571428570052910000000000000000E+00
in Java, BigDecimal with 128-bit (34-digit) precision:
quad 2/7 = 0.2857142857142857142857142857142857

The problem: printf appends too many incorrect digits. Fortran produces far fewer incorrect digits, and Java produces none. Code attached.
Created attachment 549598 [details]
test_vars.c

gcc -Wall test_vars.c -lm

Created attachment 549599 [details]
test_vars.f

gfortran -Wall test_vars.f

Created attachment 549600 [details]
java test

javac -g test_java_arbitr_precision.java
java test_java_arbitr_precision
Why do you think so? 285714 * 7 is 1999998, not 2000000, so 2.0 / 7.0 is obviously not 2.85714 even in infinite precision. Long double on x86_64/i686 is not the IEEE quad format, so I have no idea why you talk about quad precision at all; if you want quad precision, you need to use the __float128 type instead and link with -lquadmath. And printf simply prints however many digits you ask for. You can use the %La etc. format strings to print the number in hexadecimal and see all the bits exactly.
1. I wrote 2.85714(285714); this means the infinite periodic expansion 2.85714285714285714285714285714285714285714... The brackets are the standard notation for a repeating decimal, see e.g. http://en.wikipedia.org/wiki/Repeating_decimal#Notation
2. Sorry, I thought that long double was the IEEE 754R Decimal128 format, 34 digits.
3. I still think printf is wrong. Most other languages (Java, Fortran, etc.) start printing 0s when the requested number of digits is significantly larger than the number of correct digits.

In C, double 2/7 is printed as
2.85714285714285698425385362497763708233833312988281250000000000000000e-01
2.857142857142857xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
and quad 2/7 as
2.85714285714285714281842135098266055592830525711178779602050781250000e-01
2.857142857142857142857142857142857xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

In the double case I see no point in printing 36 incorrect digits before the 0s start; in the quad case I see no point in printing 31 incorrect digits before the 0s start.
Ok, I wasn't aware of the bracket notation for the period. Anyway, 2/7.0 is obviously not that periodic number in either IEEE 754 double or IEEE 854 extended double: in double it is 0x1.2492492492492p-2 and in the x86_64 long double 0x1.2492492492492492p-2. If you ask printf for more decimal digits, it just keeps dividing the remainder by 10 and adding more digits. There are algorithms for generating the shortest decimal number that still represents a given binary number, but glibc printf doesn't use them: no standard requires it, it would slow things down, and either way you get some decimal number that represents the given binary value. The IEEE 754R _Decimal{32,64,128} types are completely different types; they are also supported by gcc, but on most CPUs they are entirely software-emulated and thus much slower.