Bug 42991 - libm routines give erroneous results
Summary: libm routines give erroneous results
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Raw Hide
Classification: Retired
Component: glibc
Version: 1.0
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jakub Jelinek
QA Contact: Aaron Brown
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2001-05-31 04:58 UTC by Need Real Name
Modified: 2016-11-24 14:55 UTC

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2001-05-31 04:59:44 UTC
Embargoed:


Attachments (Terms of Use)
Source code example of libm problems (429 bytes, text/plain)
2001-05-31 04:59 UTC, Need Real Name

Description Need Real Name 2001-05-31 04:58:38 UTC
From Bugzilla Helper:
User-Agent: Mozilla/4.76 [en] (X11; U; Linux 2.4.4 i686)

Description of problem:
When a simple program using the lgamma and tgamma routines is compiled
with gcc, erroneous results are produced. If instead it is compiled with
kgcc, the expected results are obtained.



How reproducible:
Always

Steps to Reproduce:
Sample program tg.c enclosed separately
1. gcc -c tg.c
2. gcc -o gcc-tg tg.o -lm
3. ./gcc-tg
Enter the following numbers to the program:
1
2
3
4
10.5


Actual Results:   x= 1
exp[lgamma(1)]=1
tgamma(1)=1

 x= 2
exp[lgamma(2)]=1
tgamma(2)=1

 x= 3
exp[lgamma(3)]=2
tgamma(3)=1

 x= 4
exp[lgamma(4)]=6
tgamma(4)=1

 x= 10.5
lgamma(10.5)= HUGE_VAL
tgamma(10.5)=1



Expected Results:  [smeds@filippan slask]$ ./a.out 
 x= 1
exp[lgamma(1)]=1
tgamma(1)=1

 x= 2
exp[lgamma(2)]=1
tgamma(2)=1

 x= 3
exp[lgamma(3)]=2
tgamma(3)=2

 x= 4
exp[lgamma(4)]=6
tgamma(4)=6

 x= 10.5
exp[lgamma(10.5)]=1.13328e+06
tgamma(10.5)=1.13328e+06



Additional info:

The error has been found on more than one system. The one I have closest
control over is a Red Hat 7.0 system. It was tested using both
kgcc-1.1.2-40, gcc 2.96-69, glibc-2.2-12 (as from updates.redhat.com for
7.0) and gcc 2.96-81, glibc 2.2.2-10 (as from rawhide.redhat.com).

One local theory is that glibc might have been compiled with the compiler
option "-ffast-math". That could explain the gamma functions misbehaving:
as I have been told, -ffast-math can affect the IEEE special values
(NaN/Inf etc.) that the libm library needs to handle correctly.

Interestingly, if the program is compiled with kgcc and linked with gcc,
it works as expected. Whether this indicates that the error is in the gcc
compiler rather than in glibc, I am not able to tell.

Comment 1 Need Real Name 2001-05-31 04:59:40 UTC
Created attachment 20010 [details]
Source code example of libm problems

Comment 2 Jakub Jelinek 2001-06-01 07:58:56 UTC
glibc has not been compiled with -ffast-math.
The problem is elsewhere; I'd suggest you compile your programs with -Wall.
The thing is that the tgamma function is part of the C99 standard only,
which is a feature set you don't get by default (the glibc headers attempt
to be namespace clean). To select the C99 feature set you can use e.g.
-std=c99, but then the program will not work either, since ISO C99 does
not define the signgam variable. You can use e.g. -D_GNU_SOURCE, which
includes all non-deprecated, non-conflicting feature sets, so you'll get
both signgam and the tgamma prototype.
The program above misbehaves simply because, with the options you gave
gcc, there was no prototype for the tgamma function, so it was assumed
to be
int tgamma();

