This bug report has been both sent to bug-gcc and posted to RedHat's Bugzilla. I have reproduced this bug with a completely vanilla gcc-3.0.2 configured as follows:

Reading specs from /usr/local/gcc/lib/gcc-lib/i686-pc-linux-gnu/3.0.2/specs
Configured with: ../gcc-3.0.2/configure --prefix=/usr/local/gcc --enable-threads=posix
Thread model: posix
gcc version 3.0.2

as well as with the gcc3 packages included with RedHat 7.2. My system is a GNU/Linux (RedHat 7.2) ix86 system, freshly installed on a PII-400 with 512 MB RAM. I have also reproduced this on an identically configured system with a 1.4 GHz Athlon processor and 512 MB RAM.

The following program segfaults when compiled with optimization but works correctly when compiled without optimization. It works properly with and without optimization in gcc 3.0 and 3.0.1, and also with the gcc3 packages from Roswell 2 (RedHat's most recent beta prior to the release of 7.2).

For some reason, the exception being thrown is not being propagated. There are two cases: one in which the exception violates the throw () clause of the function, and one in which it doesn't. Each case segfaults. The first should result in the unexpected handler being called, and the second should propagate the exception to the caller.

The attached C++ source file illustrates the problem; it is as minimal as I can get it. The attached Makefile builds executables for both cases, both with and without optimization. On my system, bug1 and bug3 produce the expected output, and bug2 and bug4 segfault.

% make
./bug1
exception: 5
./bug2
make: *** [all] Segmentation fault (core dumped)
./bug3
unexpected: 5
./bug4
make: *** [all] Segmentation fault (core dumped)

Here is the source file and Makefile. Replace CXX with whatever you need to run gcc 3.0.2. (This would be g++3 on RedHat 7.2.)

This problem is almost certainly due to something being freed that wasn't allocated.
This is based on the stack trace when compiled with -g -O, and also on running under the NJAMD malloc debugger. In fact, declaring the variable v1 in A's constructor and then assigning it the value returned by f() masks the problem. My guess is that the optimizer is eliminating the initialization of something that is never used but not eliminating its cleanup, but I'll leave it to more experienced people to debug this!
Created attachment 35301 [details] source code
Created attachment 35302 [details] Makefile for reproducing bug
I do not get core dumps with gcc 3.0.3. I get:

./bug1
exception: 5
./bug2
exception: 5
./bug3
unexpected: 5
./bug4
unexpected: 5
This bug should be closed. I have also verified that the bug does not occur with the current RedHat 7.2 gcc3 (3.0.4), updated after the zlib bug. As the original poster, I'm taking the liberty of closing it.