Bug 42991

Summary: libm routines give erroneous results
Product: [Retired] Red Hat Raw Hide
Reporter: Need Real Name <smeds>
Component: glibc
Assignee: Jakub Jelinek <jakub>
Status: CLOSED NOTABUG
QA Contact: Aaron Brown <abrown>
Severity: medium
Priority: medium
Version: 1.0
CC: fweimer
Hardware: i386
OS: Linux
Last Closed: 2001-05-31 04:59:44 UTC

Attachments: Source code example of libm problems (attachment 20010, see comment 1)

Description Need Real Name 2001-05-31 04:58:38 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.4 i686)

Description of problem:
When compiling a simple program that uses the lgamma and tgamma routines,
erroneous results occurred when using gcc. If the same program is instead
compiled using kgcc, the expected results are obtained.



How reproducible:
Always

Steps to Reproduce:
Sample program tg.c enclosed separately (a reconstructed sketch is given after the steps below)
1. gcc -c tg.c
2. gcc -o gcc-tg tg.o -lm
3. ./gcc-tg
Enter the following numbers to the program:
1
2
3
4
10.5
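
The attachment itself is not inlined in this report. The following is a
minimal hypothetical reconstruction of what tg.c presumably looks like,
inferred from the transcripts below; the exact I/O format, the HUGE_VAL
check, and the use of signgam (suggested by comment 2) are assumptions:

/* tg.c -- hypothetical reconstruction; the real source is attachment
 * 20010 and is not shown in this report. Built as "gcc -c tg.c" there
 * is no visible prototype for tgamma (see comment 2). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x, g;

    /* Read numbers until EOF; print Gamma(x) computed two ways. */
    while (scanf("%lf", &x) == 1) {
        g = signgam * exp(lgamma(x));   /* lgamma sets signgam */
        printf(" x= %g\n", x);
        if (g == HUGE_VAL)
            printf("lgamma(%g)= HUGE_VAL\n", x);
        else
            printf("exp[lgamma(%g)]=%g\n", x, g);
        printf("tgamma(%g)=%g\n\n", x, tgamma(x));
    }
    return 0;
}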


Actual Results:   x= 1
exp[lgamma(1)]=1
tgamma(1)=1

 x= 2
exp[lgamma(2)]=1
tgamma(2)=1

 x= 3
exp[lgamma(3)]=2
tgamma(3)=1

 x= 4
exp[lgamma(4)]=6
tgamma(4)=1

 x= 10.5
lgamma(10.5)= HUGE_VAL
tgamma(10.5)=1



Expected Results:  [smeds@filippan slask]$ ./a.out 
 x= 1
exp[lgamma(1)]=1
tgamma(1)=1

 x= 2
exp[lgamma(2)]=1
tgamma(2)=1

 x= 3
exp[lgamma(3)]=2
tgamma(3)=2

 x= 4
exp[lgamma(4)]=6
tgamma(4)=6

 x= 10.5
exp[lgamma(10.5)]=1.13328e+06
tgamma(10.5)=1.13328e+06



Additional info:

The error has been found on more than one system. The one I have the
closest control over is a Red Hat 7.0 system. It was tested using both
kgcc-1.1.2-40, gcc 2.96-69, and glibc-2.2-12 (from updates.redhat.com for
7.0), and gcc 2.96-81 with glibc 2.2.2-10 (from rawhide.redhat.com).

One local theory is that glibc might have been compiled with the compiler
option "-ffast-math". This could be a reason for the gamma functions not
working properly: as I have been told, -ffast-math can affect the IEEE
numerics (NaN/Inf etc.) that libm needs to handle correctly.

Interestingly enough, if the program is compiled using kgcc and linked
with gcc, it works as expected. Whether this indicates that the error is
in the gcc compiler rather than in glibc, I am not able to tell.

Comment 1 Need Real Name 2001-05-31 04:59:40 UTC
Created attachment 20010 [details]
Source code example of libm problems

Comment 2 Jakub Jelinek 2001-06-01 07:58:56 UTC
glibc has not been compiled with -ffast-math.
The problem is elsewhere; I'd suggest you compile your programs with -Wall.
The thing is that the tgamma function is part of the C99 standard only, which
is a feature set you don't get by default (the glibc headers attempt to be
namespace clean). To select the C99 feature set you can use e.g. -std=c99,
but then the program will not work either, since ISO C99 does not define
the signgam variable. You can use e.g. -D_GNU_SOURCE, which enables all
non-deprecated, non-conflicting feature sets, so you'll get both signgam
and the tgamma prototype.
The program above misbehaves simply because, with the options you gave to
gcc, there was no prototype for the tgamma function, so it was assumed to be
int tgamma();
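
To spell out the mechanism (an explanatory addition, not part of the
original comment): under C89's implicit-declaration rule, a call to a
function with no visible prototype is compiled as if the function
returned int. On i386 an int is returned in a general-purpose register
while a double is returned on the x87 floating-point stack, so the
caller reads the wrong location and the printed value is garbage. A
minimal sketch of the fix, assuming the reconstructed tg.c above:

#define _GNU_SOURCE   /* declares both tgamma (C99) and signgam (non-ISO) */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 10.5;
    /* With the prototype visible, gcc emits a correct double-returning
     * call; the expected output above gives tgamma(10.5)=1.13328e+06. */
    printf("tgamma(%g)=%g\n", x, tgamma(x));
    return 0;
}

Compiling with -Wall, as suggested above, would also have surfaced the
root cause as an implicit-declaration warning for tgamma.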