The following program segfaults when you send SIGUSR1 to it. I'm able to reproduce this on an AMD64 FC4 system with kernel-smp-2.6.13-1.1526_FC4 and glibc-2.3.5-10.3, but only when compiled without -m32 (the 32-bit compiled version works fine). AFAICT, the call to setcontext jumps to the PLT, enters ld.so for symbol resolution, returns, and then jumps to nowhere. Somebody else I talked to reported a segfault when signalling the program on an i686 Debian system with kernel 2.6.12-1-686 and libc 2.3.5-6. setcontext called with a ucontext_t initialized by getcontext works fine, so I'm not sure whether this is actually a glibc problem or whether the kernel-created ucontext is bad. The fact that (I think) I'm seeing the program crash before it even enters setcontext makes me suspect glibc.

---

#define _GNU_SOURCE
#include <signal.h>
#include <ucontext.h>
#include <stddef.h>

void handle_sig(int signo, siginfo_t *siginfo, void *context)
{
    ucontext_t *c = context;
    setcontext(c);
}

int main(int argc, char *argv[])
{
    sigset_t ss;
    sigemptyset(&ss);
    struct sigaction sa = {
        .sa_sigaction = handle_sig,
        .sa_mask = ss,
        .sa_flags = SA_SIGINFO
    };
    sigaction(SIGUSR1, &sa, NULL);
    while (1);
    return 0;
}
Looking at the AMD64 setcontext.S, it appears that RAX is zeroed and R10 and R11 aren't restored at all, which means that setcontext doesn't actually restore the complete machine context. I don't see how that would matter in this case, but it's bound to screw up others.
Hmm. Apparently this is deliberate: setcontext assumes the context came from getcontext, i.e. from an ordinary function call, so the ABI lets it treat caller-saved registers like R10 and R11 as clobbered, and RAX is zeroed so that the resumed getcontext appears to return 0. That assumption is stupid when the context comes from a signal frame, but...