Description of problem:
When glusterfs processes run in valgrind mode (an option given to glusterd), a graph change that forces the NFS server to restart also restarts the NLM service. While restarting NLM, the previous instance of rpc.statd is killed by name (killall -9 rpc.statd) before a new one is started. Under valgrind, however, the process name is that of the valgrind binary rather than rpc.statd, so killall finds nothing to kill and a new rpc.statd instance is started alongside the old one. Each such restart therefore leaks one more rpc.statd process.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Start glusterd with the option --xlator-option *.run-with-valgrind=yes
2. Create and start a volume
3. Trigger graph changes (turn xlators on/off)

Actual results:
Many instances of rpc.statd running under valgrind.

Expected results:
Old instances of rpc.statd should no longer be running.

Additional info:
ps aux | grep gluster
root 12094 0.3 1.9 193120 76380 ? Ssl 17:20 0:05 /usr/bin/valgrind.bin --leak-check=full --log-file=/root/glusterd_valgrind.log glusterd --xlator-option *.run-with-valgrind=yes
root 12126 29.9 3.2 271036 130876 ? Ssl 17:21 9:44 /usr/bin/valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/bricks/valgrind-mirror-mnt-sda7-export3.log /usr/local/sbin/glusterfsd -s localhost --volfile-id mirror.hyperspace.mnt-sda7-export3 -p /etc/glusterd/vols/mirror/run/hyperspace-mnt-sda7-export3.pid -S /tmp/5df0c6a56f771a5fe2bd686d0665e565.socket --brick-name /mnt/sda7/export3 -l /usr/local/var/log/glusterfs/bricks/mnt-sda7-export3.log --xlator-option *-posix.glusterd-uuid=68ce4525-9b4b-4805-9dae-7b89bb2e4598 --brick-port 24009 --xlator-option mirror-server.listen-port=24009
root 12135 30.6 3.3 273136 132300 ? Ssl 17:21 9:58 /usr/bin/valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/bricks/valgrind-mirror-mnt-sda8-export3.log /usr/local/sbin/glusterfsd -s localhost --volfile-id mirror.hyperspace.mnt-sda8-export3 -p /etc/glusterd/vols/mirror/run/hyperspace-mnt-sda8-export3.pid -S /tmp/ddd89a89b616cd0768e4877e53480a07.socket --brick-name /mnt/sda8/export3 -l /usr/local/var/log/glusterfs/bricks/mnt-sda8-export3.log --xlator-option *-posix.glusterd-uuid=68ce4525-9b4b-4805-9dae-7b89bb2e4598 --brick-port 24010 --xlator-option mirror-server.listen-port=24010
root 12143 31.8 3.3 273136 132128 ? Ssl 17:21 10:21 /usr/bin/valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/bricks/valgrind-mirror-mnt-sda10-export3.log /usr/local/sbin/glusterfsd -s localhost --volfile-id mirror.hyperspace.mnt-sda10-export3 -p /etc/glusterd/vols/mirror/run/hyperspace-mnt-sda10-export3.pid -S /tmp/5cd718d438cd0903a7de92ee648fb8a0.socket --brick-name /mnt/sda10/export3 -l /usr/local/var/log/glusterfs/bricks/mnt-sda10-export3.log --xlator-option *-posix.glusterd-uuid=68ce4525-9b4b-4805-9dae-7b89bb2e4598 --brick-port 24013 --xlator-option mirror-server.listen-port=24013
root 12159 0.1 2.4 238364 99084 ? Ssl 17:21 0:02 /usr/bin/valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-glustershd.log /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /etc/glusterd/glustershd/run/glustershd.pid -l /usr/local/var/log/glusterfs/glustershd.log -S /tmp/9dd0787a54d2242195def149ac15be23.socket --xlator-option *replicate*.node-uuid=68ce4525-9b4b-4805-9dae-7b89bb2e4598
statd 12184 0.0 0.9 93496 36580 ? Ss 17:21 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
root 12212 65.7 11.3 1102776 452704 ? Ssl 17:21 20:54 /usr/bin/valgrind.bin --leak-check=full --log-file=/root/glusterfs_valgrind.log glusterfs -s hyperspace --volfile-id mirror /mnt/client
statd 12275 0.0 0.9 93496 36580 ? Ss 17:22 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
statd 12324 0.0 0.9 93512 36668 ? Ss 17:27 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
statd 12364 0.0 0.9 93496 36580 ? Ss 17:32 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
statd 12443 0.0 0.9 93496 36580 ? Ss 17:37 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
statd 12520 0.0 0.9 93496 36580 ? Ss 17:42 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
statd 12556 0.1 0.9 93496 36580 ? Ss 17:47 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
root 12681 2.5 2.5 230920 102532 ? Ssl 17:52 0:01 /usr/bin/valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /usr/local/sbin/glusterfs -s localhost --volfile-id gluster/nfs -p /etc/glusterd/nfs/run/nfs.pid -l /usr/local/var/log/glusterfs/nfs.log -S /tmp/0cdccc1190746d065dc99bc52ca6a5b4.socket
statd 12698 0.6 0.9 93496 36580 ? Ss 17:52 0:00 valgrind.bin --leak-check=full --trace-children=yes --log-file=/usr/local/var/log/glusterfs/valgrind-nfs.log /sbin/rpc.statd
root 12733 0.0 0.0 13128 1068 pts/1 S+ 17:53 0:00 grep --color=auto gluster
Fixed in master by commit http://review.gluster.com/3225.
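For illustration only (this is not the fix from the commit above), the sketch below reproduces the mechanism: killall matches the kernel's per-process name (comm), which for a valgrind-wrapped daemon is "valgrind.bin", not "rpc.statd", so a name-based kill finds nothing, while matching the full command line (pkill -f) still finds the process. A plain sh process carrying a made-up marker string ("statd-demo") stands in for valgrind.bin wrapping /sbin/rpc.statd.

```shell
# Build the marker at runtime so this script's own command line never
# contains it verbatim (pkill -f matches whole command lines).
marker='statd-''demo'

# Stand-in for the wrapped daemon: the marker appears only in its command
# line, while its process name (comm) is "sh" -- just as a valgrind-wrapped
# statd's name is "valgrind.bin", not "rpc.statd".
sh -c "sleep 30; : $marker" &
demo_pid=$!
sleep 1   # give the stand-in a moment to start

# Name-based lookup (what killall -9 rpc.statd does) finds nothing:
pgrep -x "$marker" >/dev/null
name_lookup=$?
echo "pgrep -x exit: $name_lookup"    # non-zero: no process has that name

# Matching the full command line finds and kills the wrapper:
pkill -9 -f "$marker"

wait "$demo_pid"
wait_status=$?
echo "wait status: $wait_status"      # 137 = 128 + SIGKILL(9)
```

The same distinction explains the leak above: each NFS restart spawns a fresh valgrind.bin-wrapped rpc.statd, and the name-based kill never removes the previous one.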