From Bugzilla Helper:
User-Agent: Mozilla/4.79C-CCK-MCD [en] (X11; U; SunOS 5.9 sun4u)

Description of problem:
This problem involves denormalized (subnormal) float values. Consider the
following code:

#include <stdio.h>

double zero = 0.0;

void test(double z)
{
    printf("z: %.16g\n", z);
}

int main(void)
{
    float x = 1.175494351e-38;   /* smallest normalized float  */
    float y = 1.40129846e-45;    /* smallest denormalized float */

    printf("x: %.7g %08x\n", (x + 0.0), *((int *)&x));
    printf("y: %.7g %08x\n", (y + 0.0), *((int *)&y));
    test(y);          /* no explicit float-to-double convert emitted */
    test(y + zero);   /* the addition forces a convert to double     */
    return 0;
}

When y is passed to test(), the compiler emits no explicit float-to-double
conversion. Most of the time this is harmless, but for a denormal it causes
the ia64 to interpret the float format bits as double format bits, which
results in the following output:

x: 1.175494e-38 00800000
y: 1.401298e-45 00000001
z: 2.65249473870659e-315
z: 1.401298464324817e-45

An explicit convert (fnorm.d) is required.

Version-Release number of selected component (if applicable):
gcc version 2.96 20000731 (Red Hat Linux 7.1 2.96-101)

How reproducible:
Always

Steps to Reproduce:
1. Compile and link the (small) program included in the description.
2. Run it.

Actual Results:
x: 1.175494e-38 00800000
y: 1.401298e-45 00000001
z: 2.65249473870659e-315
z: 1.401298464324817e-45

The first z, for which the compiler emitted no float-to-double convert (I
examined the assembly code), prints the wrong value. The second z, forced
to double by adding a variable holding the value 0.0, prints the correct
value.

Expected Results:
The first z should have printed the same value as the second:

x: 1.175494e-38 00800000
y: 1.401298e-45 00000001
z: 1.401298464324817e-45
z: 1.401298464324817e-45

Additional info:
This is an Itanium 1 dual-processor system.
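For illustration, here is where the bogus 2.65249473870659e-315 comes from.
This is a minimal sketch, not part of the original report, assuming the
unnormalized store left-aligns the 23-bit float significand in the 52-bit
double significand field (a shift of 52 - 23 = 29 bits) while leaving the
zero exponent field untouched; memcpy is used for the bit copies to avoid
the aliasing casts of the reproducer:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    float y = 1.40129846e-45f;          /* smallest positive float denormal */
    uint32_t fbits;
    memcpy(&fbits, &y, sizeof fbits);   /* bit pattern 0x00000001 */

    /* Left-align the float significand in the double significand field;
       the exponent field stays zero.  This models (as an assumption, for
       illustration only) a register spill without a preceding fnorm.d. */
    uint64_t dbits = (uint64_t)(fbits & 0x007fffffu) << (52 - 23);

    double d;
    memcpy(&d, &dbits, sizeof d);
    printf("misinterpreted: %.16g\n", d);          /* 2.65249473870659e-315 */
    printf("converted:      %.16g\n", (double)y);  /* 1.401298464324817e-45 */
    return 0;
}

The misinterpreted bit pattern 0x0000000020000000 decodes as the double
denormal 2^-1045, matching the bad first z in the actual results above.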
See http://gcc.gnu.org/ml/gcc-patches/2001-04/msg00736.html
Jim's patch is included in current sources.