Bug 84383 - hex denorm floats not interpreted correctly
Status: CLOSED CURRENTRELEASE
Product: Red Hat Linux
Classification: Retired
Component: gcc
Version: 9
Hardware: i686 Linux
Priority: medium  Severity: medium
Assigned To: Jakub Jelinek
Reported: 2003-02-15 04:26 EST by Ulrich Drepper
Modified: 2007-04-18 12:51 EDT

Fixed In Version: 3.2.2-5
Doc Type: Bug Fix
Last Closed: 2003-08-05 12:19:09 EDT


Description Ulrich Drepper 2003-02-15 04:26:26 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3b) Gecko/20030213

Description of problem:
With gcc-3.2.2-1 (and probably earlier versions):

long double d = 0x0.0000003ffffffff00000p-16385L;

gcc expands this to all zeros.  The correct representation is

  + 0000 00000003 ffffffff

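For context (not part of the original report): on i386 the long double is the little-endian 80-bit extended format, where a denormal has a zero exponent field and a value of significand * 2^-16445.  The literal 0x0.0000003ffffffff00000p-16385 equals 0x3ffffffff00000 * 2^-80 * 2^-16385 = 0x3ffffffff * 2^-16445, hence sign +, exponent 0000, significand 00000003 ffffffff.  A minimal sketch of a run-time check that dumps the bytes the compiler stored (file and variable names here are illustrative, not from the report):

/* dump.c - illustrative sketch, not part of the original report.
   Prints the raw bytes stored for the denormal literal, assuming i386,
   where long double is the little-endian 80-bit extended format. */
#include <stdio.h>
#include <string.h>

long double d = 0x0.0000003ffffffff00000p-16385L;

int main(void)
{
    unsigned char b[sizeof d];
    memcpy(b, &d, sizeof d);
    for (int i = 0; i < 10; i++)   /* only the first 10 bytes carry the value */
        printf("%02x ", b[i]);
    putchar('\n');
    /* correct output:              ff ff ff ff 03 00 00 00 00 00
       with the bug (folded to 0):  00 00 00 00 00 00 00 00 00 00 */
    return 0;
}
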
Version-Release number of selected component (if applicable):
gcc-3.2.2-1

How reproducible:
Always

Steps to Reproduce:
1. echo "long double d = 0x0.0000003ffffffff00000p-16385L;" > u.c
2. gcc -S u.c
3. inspect u.s

Actual Results:  'd' is all zeros

Expected Results:  + 0000 00000003 ffffffff
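
A possible cross-check (again only a sketch, not from the report) is to compare the compile-time constant with what glibc's strtold() parses from the same text at run time; with the bug the two differ:

/* cross-check sketch, not part of the original report: compare the
   constant folded at compile time with strtold() parsing the same
   text at run time.  With the bug, d compares equal to 0.0L.      */
#include <stdio.h>
#include <stdlib.h>

long double d = 0x0.0000003ffffffff00000p-16385L;

int main(void)
{
    long double r = strtold("0x0.0000003ffffffff00000p-16385", NULL);
    printf("compile time: %La\n", d);
    printf("run time:     %La\n", r);
    printf("%s\n", d == r ? "match" : "MISMATCH: literal was mis-folded");
    return 0;
}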

Additional info:
Comment 1 Richard Henderson 2003-02-15 18:15:37 EST
The rewrite of real.c in gcc 3.4 CVS begun last September
was done explicitly to fix this bug.  It may be possible to
bring this code back from mainline, but it'll be a large patch.
