Bug 84383 - hex denorm floats not interpreted correctly
Summary: hex denorm floats not interpreted correctly
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: gcc
Version: 9
Hardware: i686
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Jakub Jelinek
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2003-02-15 09:26 UTC by Ulrich Drepper
Modified: 2007-04-18 16:51 UTC
CC List: 2 users

Fixed In Version: 3.2.2-5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2003-08-05 16:19:09 UTC
Embargoed:



Description Ulrich Drepper 2003-02-15 09:26:26 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3b) Gecko/20030213

Description of problem:
With gcc-3.2.2-1 (and probably earlier versions):

long double d = 0x0.0000003ffffffff00000p-16385L;

gcc expands this to all zeros.  The correct representation
(sign, 16-bit exponent field, 64-bit significand) is

  + 0000 00000003 ffffffff
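
For reference, a minimal check (a sketch, assuming the x86 80-bit
extended format, little-endian, and C99/glibc strtold) that dumps the
bytes gcc stored for 'd' next to the runtime conversion of the same
literal; on an affected compiler the compile-time bytes come out all
zero:

/* Sketch: compare gcc's compile-time conversion of the literal
   with glibc's runtime strtold() conversion.  Assumes x86 80-bit
   extended long double (10 significant bytes, little-endian). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

long double d = 0x0.0000003ffffffff00000p-16385L;

static void dump(const char *label, long double v)
{
    unsigned char b[sizeof(long double)];
    memcpy(b, &v, sizeof b);
    printf("%s:", label);
    for (int i = 9; i >= 0; i--)        /* most significant byte first */
        printf(" %02x", b[i]);
    putchar('\n');
}

int main(void)
{
    dump("compile time", d);
    dump("strtold     ", strtold("0x0.0000003ffffffff00000p-16385", NULL));
    return 0;
}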

Version-Release number of selected component (if applicable):
gcc-3.2.2-1

How reproducible:
Always

Steps to Reproduce:
1. echo "long double d = 0x0.0000003ffffffff00000p-16385L;" > u.c
2. gcc -S u.c
3. inspect u.s

Actual Results:  'd' is all zeros

Expected Results:  + 0000 00000003 ffffffff
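
Why that pattern is right: the literal has 20 hex digits after the
point, so its value is 0x3ffffffff * 2^20 * 2^-80 * 2^-16385 =
0x3ffffffff * 2^-16445.  In the 80-bit format, denormals have a zero
exponent field and weight the explicit 64-bit significand by 2^-16445,
so the significand field must be 00000003 ffffffff.  A small check
(a sketch, assuming C99 ldexpl), which prints 1 on a fixed compiler:

/* Build the expected denormal by hand with ldexpl() and compare it
   to the compiler's constant; prints 1 when gcc got the literal right. */
#include <math.h>
#include <stdio.h>

long double d = 0x0.0000003ffffffff00000p-16385L;

int main(void)
{
    long double expected = ldexpl((long double)0x3ffffffffULL, -16445);
    printf("%d\n", d == expected);
    return 0;
}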

Additional info:

Comment 1 Richard Henderson 2003-02-15 23:15:37 UTC
The rewrite of real.c in gcc 3.4 cvs, begun last September,
was done explicitly to fix this bug.  It may be possible to
bring this code back from mainline, but it'll be a large patch.

