==412== 1 errors in context 1 of 10:
==412== Thread 2:
==412== Syscall param rt_sigtimedwait(set) points to uninitialised byte(s)
==412==    at 0x389DE0EBF7: do_sigwait (in /lib64/libpthread-2.14.so)
==412==    by 0x389DE0EC78: sigwait (in /lib64/libpthread-2.14.so)
==412==    by 0x40619F: glusterfs_sigwaiter (glusterfsd.c:1241)
==412==    by 0x389DE07AF0: start_thread (in /lib64/libpthread-2.14.so)
==412==    by 0x389DADFB7C: clone (in /lib64/libc-2.14.so)
==412==  Address 0x6422e10 is on thread 2's stack
==412==
==412==
==412== 1 errors in context 2 of 10:
==412== Conditional jump or move depends on uninitialised value(s)
==412==    at 0x389DE0EBCD: do_sigwait (in /lib64/libpthread-2.14.so)
==412==    by 0x389DE0EC78: sigwait (in /lib64/libpthread-2.14.so)
==412==    by 0x40619F: glusterfs_sigwaiter (glusterfsd.c:1241)
==412==    by 0x389DE07AF0: start_thread (in /lib64/libpthread-2.14.so)
==412==    by 0x389DADFB7C: clone (in /lib64/libc-2.14.so)

Found in 3.2.1, still exists in 3.2.2 and the head of the master branch. Trivial to fix, maybe just truth-and-beauty; it'd be nice to eliminate the noise from valgrind output.

diff --git a/glusterfsd/src/glusterfsd.c b/glusterfsd/src/glusterfsd.c
index 35cb286..fb7df3f 100644
--- a/glusterfsd/src/glusterfsd.c
+++ b/glusterfsd/src/glusterfsd.c
@@ -1278,7 +1278,7 @@ glusterfs_sigwaiter (void *arg)
         int               ret = 0;
         int               sig = 0;
 
-
+        sigemptyset (&set);
         sigaddset (&set, SIGINT);   /* cleanup_and_exit */
         sigaddset (&set, SIGTERM);  /* cleanup_and_exit */
         sigaddset (&set, SIGHUP);   /* reincarnate */
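For reference, the pattern the patch restores is the standard POSIX one: a sigset_t on the stack is uninitialized until sigemptyset() (or sigfillset()) fills it in, so calling sigaddset() and then sigwait() on it hands uninitialized stack bytes to the underlying rt_sigtimedwait syscall, which is exactly what valgrind flags above. Below is a minimal, self-contained sketch of a signal-waiter thread in that style; it is not the actual glusterfsd code, and the names (waiter, etc.) are illustrative only:

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical signal-waiter thread, illustrating the pattern the
 * patch enforces in glusterfs_sigwaiter(): initialize the set with
 * sigemptyset() before any sigaddset(), so no uninitialized stack
 * bytes reach the rt_sigtimedwait syscall underneath sigwait(). */
static void *
waiter (void *arg)
{
        sigset_t set;
        int      sig = 0;

        sigemptyset (&set);          /* the line the patch adds */
        sigaddset (&set, SIGINT);
        sigaddset (&set, SIGTERM);
        sigaddset (&set, SIGHUP);

        for (;;) {
                if (sigwait (&set, &sig) != 0)
                        continue;
                printf ("caught signal %d\n", sig);
                if (sig == SIGINT || sig == SIGTERM)
                        exit (0);
        }
        return NULL;
}

int
main (void)
{
        sigset_t  set;
        pthread_t tid;

        /* Block these signals in every thread so only the dedicated
         * waiter thread receives them via sigwait(). */
        sigemptyset (&set);
        sigaddset (&set, SIGINT);
        sigaddset (&set, SIGTERM);
        sigaddset (&set, SIGHUP);
        pthread_sigmask (SIG_BLOCK, &set, NULL);

        pthread_create (&tid, NULL, waiter, NULL);
        pthread_join (tid, NULL);
        return 0;
}

Building this with gcc -pthread and deleting the sigemptyset() call in waiter() should reproduce the same two do_sigwait complaints under valgrind.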
CHANGE: http://review.gluster.com/129 (Thanks to kkeithle for pointing out.) merged in master by Anand Avati (avati)
CHANGE: http://review.gluster.com/130 (Thanks to kkeithle for pointing out.) merged in release-3.2 by Anand Avati (avati)
CHANGE: http://review.gluster.com/131 (Thanks to kkeithle for pointing out.) merged in release-3.1 by Anand Avati (avati)
I ran valgrind on all the glusterfsd processes of the volume in release 3.2.3 and did not see any of the above-mentioned errors in the logs. When I performed the same test in release 3.2.1, I got similar error messages in the logs. So it is fixed in release 3.2.3.