Bug 84383

Summary: hex denorm floats not interpreted correctly
Product: [Retired] Red Hat Linux
Component: gcc
Version: 9
Hardware: i686
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: medium
Reporter: Ulrich Drepper <drepper>
Assignee: Jakub Jelinek <jakub>
CC: mitr, rth
Fixed In Version: 3.2.2-5
Doc Type: Bug Fix
Last Closed: 2003-08-05 16:19:09 UTC

Description Ulrich Drepper 2003-02-15 09:26:26 UTC
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3b) Gecko/20030213

Description of problem:
With gcc-3.2.2-1 (and probably earlier versions):

long double d = 0x0.0000003ffffffff00000p-16385L;

gcc expands this to all zeros.  The correct representation is

  + 0000 00000003 ffffffff
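
(That pattern follows from the encoding: 0x0.0000003ffffffff00000p-16385L
equals 0x3ffffffff * 2^-16445, and in the x86 80-bit extended format a
denormal is stored with a zero sign/exponent word and a 64-bit significand
scaled by 2^-16445, giving significand 0x00000003ffffffff.)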

Version-Release number of selected component (if applicable):
gcc-3.2.2-1

How reproducible:
Always

Steps to Reproduce:
1.echo "long double d = 0x0.0000003ffffffff00000p-16385L;" > u.c
2.gcc -S u.c
3.inspec u.s
    

Actual Results:  'd' is all zeros

Expected Results:  + 0000 00000003 ffffffff

Additional info:
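
For a quick run-time check that avoids reading the assembly, something along
these lines can be used (a minimal sketch; it assumes an x86 target where
long double is the 80-bit extended format stored little-endian in the low
10 bytes of the object, and the file name u2.c is only illustrative):

/* u2.c - dump the stored bytes of d.  With the buggy compiler they come
   out all zero; with a fixed one the pattern 0000 00000003 ffffffff
   appears (read from the most significant byte down).  */
#include <stdio.h>
#include <string.h>

long double d = 0x0.0000003ffffffff00000p-16385L;

int
main (void)
{
  unsigned char b[sizeof (long double)];
  int i;

  memcpy (b, &d, sizeof (long double));

  /* Assumption: the 80-bit value occupies the low 10 bytes, little-endian;
     bytes 9-8 hold the sign and exponent, bytes 7-0 the 64-bit significand.  */
  for (i = 9; i >= 0; i--)
    printf ("%02x", b[i]);
  putchar ('\n');

  return 0;
}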

Comment 1 Richard Henderson 2003-02-15 23:15:37 UTC
The rewrite of real.c in gcc 3.4 cvs, begun last September,
was done explicitly to fix this bug.  It may be possible to
bring this code back from mainline, but it'll be a large patch.