Bug 42991 - libm routines give erroneous results
Status: CLOSED NOTABUG
Product: Red Hat Raw Hide
Classification: Retired
Component: glibc
Version: 1.0
Hardware: i386 Linux
Severity: medium
Assigned To: Jakub Jelinek
Aaron Brown
Reported: 2001-05-31 00:58 EDT by Need Real Name
Modified: 2016-11-24 09:55 EST

Doc Type: Bug Fix
Last Closed: 2001-05-31 00:59:44 EDT

Attachments
Source code example of libm problems (429 bytes, text/plain)
2001-05-31 00:59 EDT, Need Real Name

Description Need Real Name 2001-05-31 00:58:38 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.4 i686)

Description of problem:
When compiling a simple program that uses the lgamma and tgamma routines,
erroneous results occurred when gcc was used. If the same program is
compiled with kgcc instead, the expected results are obtained.



How reproducible:
Always

Steps to Reproduce:
Sample program tg.c enclosed separately
1. gcc -c tg.c
2. gcc -o gcc-tg tg.o -lm
3. ./gcc-tg
Enter the following numbers to the program:
1
2
3
4
10.5


Actual Results:   x= 1
exp[lgamma(1)]=1
tgamma(1)=1

 x= 2
exp[lgamma(2)]=1
tgamma(2)=1

 x= 3
exp[lgamma(3)]=2
tgamma(3)=1

 x= 4
exp[lgamma(4)]=6
tgamma(4)=1

 x= 10.5
lgamma(10.5)= HUGE_VAL
tgamma(10.5)=1



Expected Results:  [smeds@filippan slask]$ ./a.out 
 x= 1
exp[lgamma(1)]=1
tgamma(1)=1

 x= 2
exp[lgamma(2)]=1
tgamma(2)=1

 x= 3
exp[lgamma(3)]=2
tgamma(3)=2

 x= 4
exp[lgamma(4)]=6
tgamma(4)=6

 x= 10.5
exp[lgamma(10.5)]=1.13328e+06
tgamma(10.5)=1.13328e+06



Additional info:

The error has been found on more than one system. The one I have closest
control over is a Red Hat 7.0 system. It was tested using both
kgcc-1.1.2-40, gcc 2.96-69, glibc-2.2-12 (from updates.redhat.com for 7.0)
and gcc 2.96-81, glibc 2.2.2-10 (from rawhide.redhat.com).

One local theory is that glibc might have been compiled with the compiler
option "-ffast-math". That could explain the gamma functions not working
properly: as I understand it, -ffast-math can affect the IEEE special
values (NaN/Inf etc.) that the libm library needs to handle correctly.

Interestingly, if the program is compiled with kgcc and linked with gcc,
it works as expected. I cannot tell whether this indicates that the error
is in the gcc compiler rather than in glibc.
Comment 1 Need Real Name 2001-05-31 00:59:40 EDT
Created attachment 20010
Source code example of libm problems
Comment 2 Jakub Jelinek 2001-06-01 03:58:56 EDT
glibc has not been compiled with -ffast-math.
The problem is elsewhere; I'd suggest you compile your programs with -Wall.
The thing is that the tgamma function is part of the C99 standard only,
which is a feature set you don't get by default (the glibc headers attempt
to be namespace clean). To select the C99 feature set you can use e.g.
-std=c99, but then the program will not work either, since ISO C99 does
not define the signgam variable. You can use e.g. -D_GNU_SOURCE, which
will include all non-deprecated, non-conflicting feature sets, so you'll
get both signgam and the tgamma prototype.
The program above misbehaves simply because, with the options you gave to
gcc, there was no prototype for the tgamma function, so it was assumed to be
int tgamma();
