From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-GB; rv:1.7.12) Gecko/20050922 Fedora/1.0.7-1.1.fc4 Firefox/1.0.7
Description of problem:
Reported by email@example.com as
"GFS6.1 hangs - after fence_tool join succeeds"
Running a clvmd up/down script on a 3-node cluster causes one to hang after some time.
Ravi can reproduce this - I can't.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Run Ravi's script
Actual Results: One node hangs after several iterations. As it's stuck in recovery, the others get stuck too.
Expected Results: Recovery completing normally.
This is the latest output I was sent after adding some debug printks to lowcomms.c. The first entry is the key one.
1. On the node that hung (gfs1):
Dec 14 18:03:52 gfs1-pvt kernel: PJC: sock_release before connect
Note that the hang on this node occurred at 18:03:52; I noticed it around 19:13:40 and then rebooted the node.
2. On the node that fenced the hung node (gfs2):
Dec 14 19:14:26 gfs2-pvt kernel: dlm: clvmd: dlm_dir_rebuild_wait failed -1
Dec 14 19:14:30 gfs2-pvt kernel: PJC: closing connection because node 1 left
3. On the other node(gfs3):
Dec 14 18:04:05 gfs3-pvt kernel: PJC: closing connection after bad send: ret
Dec 14 19:14:29 gfs3-pvt kernel: dlm: clvmd: nodes_reconfig failed -1
Dec 14 19:14:30 gfs3-pvt kernel: PJC: closing connection because node 1 left
Created attachment 122392 [details]
putative fix for the problem
Ravi has been running this patch for 70 hours now with no hangs, but two
reported "incidents", so it looks like it might be the fix.
Checked into -rSTABLE & -rRHEL4 (but not U3)
Checked in for U3:
Checking in lowcomms.c;
/cvs/cluster/cluster/dlm-kernel/src/lowcomms.c,v <-- lowcomms.c
new revision: 188.8.131.52.2.1; previous revision: 184.108.40.206
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.