Description of problem:
When a node is rebooted (manually, or via a forced fence command), clvmd hangs on vgscan after the reboot if many vgscan/pvs commands are running on the other node (two-node setup).
Version-Release number of selected component (if applicable):
RHEL6.4, latest updates applied
Steps to Reproduce:
1. Launch a loop of vgscan and pvs commands on one node.
2. On the same node, fence the other node.
3. When the fenced node comes back, cman starts on the rebooted node, causing all pvs/vgscan commands on the active node to block (since clvmd is not running there yet).
4. On the rebooted node, clvmd then starts and performs a vgscan. This hangs too, since all LVM commands on the active node are blocked.
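The reproduction steps above can be sketched as shell fragments. This is a hedged illustration, not a verified script: the node name `node2` is a placeholder, and it assumes a RHEL 6 cluster where `fence_node` is available.

```shell
# Step 1 (on the active node): hammer LVM metadata scans in a loop.
while true; do
    vgscan >/dev/null 2>&1
    pvs    >/dev/null 2>&1
done &

# Step 2 (on the same node): fence the peer; "node2" is a placeholder.
fence_node node2

# Steps 3-4 then follow automatically once node2 boots and
# cman/clvmd start there.
```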
Actual results:
The fenced node ends up locked. It is very difficult to recover it (I managed once by starting cman + clvmd manually, stopping clvmd, running "dlm_tool leave clvmd", stopping cman, and fencing again, but even that did not always work).
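For reference, the manual recovery sequence described above as a sketch, assuming RHEL 6 init-script names; run only on the stuck node, and note this is the reporter's partial workaround, not a supported procedure:

```shell
service cman start        # rejoin the cluster
service clvmd start       # let clvmd come up (it may hang here)
service clvmd stop        # stop clvmd again
dlm_tool leave clvmd      # force-leave the clvmd lockspace
service cman stop         # leave the cluster
# then fence the node again from the peer node
```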
Expected results:
Everything should keep working; the node should not end up locked.
Additional info:
By the way: in one test case I had to kill and restart clvmd on the remaining node, because even there all LVM commands were hanging after a successful fence of the other node. So: kill + restart clvmd on the active node, then start cman/clvmd on the other node, and all is well.
I hope this doesn't interfere with running gfs2 mounts on the active node, of course... but it seems OK.
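The kill-and-restart workaround from that test case, sketched with assumed RHEL 6 service names (the exact sequence the reporter used is not recorded beyond "kill + start"):

```shell
# On the active node: restart clvmd to clear the hung LVM commands.
killall -9 clvmd
service clvmd start

# On the rebooted node: bring the cluster stack up normally.
service cman start
service clvmd start
```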
We have quite a few changes coming in the RHEL 6.5 code to improve robustness in this area, so once that is available, please check whether the problem is still there. If it is not fixed then, let us know and we will ask for diagnostics and see if we can reproduce and fix it.
(It doesn't seem worth running diagnostics against the 6.4 code base.)
Is this still reproducible with lvm2-2.02.100-8.el6/lvm2-cluster-2.02.100-8.el6 - the 6.5 update?
I haven't seen the problem after updating to 6.5 (yet), although a colleague found a deadlock somewhere in a 6.5 cluster while I was on holiday. But since no traces were taken, I can't provide more info on this for now.
(In reply to Franky Van Liedekerke from comment #6)
> I haven't seen the problem after updating to 6.5 (yet), although a colleague
> found a deadlock somewhere in a 6.5 cluster while I was on holiday. But
> since no traces were taken, I can't provide more info on this for now.
Is it still the same problem as described in comment #0? If not, let's close this one and open a new report for the new problem, please.
It was probably related to comment #0, but I can't verify it. So I agree this can be closed for now.