Bug 970192 - clvmd fails to start on fenced node
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Peter Rajnoha
QA Contact: Cluster QE
Docs Contact:
Depends On:
Blocks:
Reported: 2013-06-03 12:10 EDT by Franky Van Liedekerke
Modified: 2014-07-30 04:21 EDT
CC List: 11 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-07-30 04:21:30 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Franky Van Liedekerke 2013-06-03 12:10:11 EDT
Description of problem:

When a node is rebooted (manually, or via a forced fence command), clvmd hangs on vgscan after the reboot if lots of vgscan/pvs commands are running on the other node (in a 2-node setup).


Version-Release number of selected component (if applicable):

RHEL6.4, latest updates applied


How reproducible:


Steps to Reproduce:
1. Launch a loop of vgscan and pvs commands on one node (see the sketch after this list).
2. On the same node, fence the other node.
3. When the fenced node comes back, cman starts on the rebooted node, causing all pvs/vgscan commands on the active node to lock (since clvmd is not running there yet).
4. On the rebooted node, clvmd then launches and runs a vgscan. This hangs too, since all LVM commands on the active node are locked.
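
A rough sketch of steps 1 and 2 (the node name and the fence command here are my own placeholders, not the exact commands I used):

    # On the active node: keep cluster LVM metadata scans running in a loop
    while true; do
        vgscan > /dev/null 2>&1
        pvs    > /dev/null 2>&1
    done &

    # Still on the active node: fence the peer (node name is a placeholder)
    fence_node node2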

Actual results:
The fenced node locks up, and it is very difficult to get it back into a good state (I managed to do it by launching cman+clvmd manually, stopping clvmd, running "dlm_tool leave clvmd", stopping cman and fencing again, but even that did not always work; see the command sketch below).
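
For reference, that recovery sequence sketched as commands (assuming the standard RHEL 6 init scripts; the final fence is issued from the surviving node, and the node name is a placeholder):

    # On the locked, previously fenced node:
    service cman start
    service clvmd start      # start clvmd manually
    service clvmd stop       # then stop it again
    dlm_tool leave clvmd     # leave the clvmd lockspace
    service cman stop

    # From the surviving node, fence this node again:
    fence_node node2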


Expected results:
Everything should work; the node should not be locked.

Additional info:
Comment 2 Franky Van Liedekerke 2013-06-04 05:07:21 EDT
Btw: in one test case I had to kill + start clvmd on the remaining node, because even there all LVM commands were hanging after a successful fence of the other node. So: kill + start clvmd on the active node, then start cman/clvmd on the other node, and all is well.
I hope this doesn't interfere with running gfs2 mounts on the active node, of course ... but it seems OK.
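Roughly, that workaround was (init script names assumed; the kill may need to be a plain kill/killall if the init script itself hangs):

    # On the active node, where all LVM commands were hanging:
    killall clvmd            # or: service clvmd stop, if it still responds
    service clvmd start

    # Then on the other (rebooted) node:
    service cman start
    service clvmd start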
Comment 3 Alasdair Kergon 2013-10-09 20:00:48 EDT
Well, we have quite a few changes to the code coming in RHEL 6.5 to improve its robustness in this area, so once that is available, please check whether the problem is still there. If it isn't fixed then, let us know and we'll ask for diagnostics and see if we can reproduce it and fix it.

(It doesn't seem worth running diagnostics against the 6.4 code base.)
Comment 5 Peter Rajnoha 2014-04-09 07:02:23 EDT
Is this still reproducible with lvm2-2.02.100-8.el6/lvm2-cluster-2.02.100-8.el6 - the 6.5 update?
Comment 6 Franky Van Liedekerke 2014-06-18 10:54:34 EDT
I haven't seen the problem after updating to 6.5 (yet), although a colleague found a deadlock somewhere in a 6.5 cluster while I was on holiday. But since no traces were taken, I can't provide more info on this for now.
Comment 7 Peter Rajnoha 2014-07-30 03:55:31 EDT
(In reply to Franky Van Liedekerke from comment #6)
> I haven't seen the problem after updating to 6.5 (yet), although a colleague
> found a deadlock somewhere in a 6.5 cluster while I was on holiday. But
> since no traces were taken, I can't provide more info on this for now.

Is it still the same problem as described in comment #0? If not, let's close this one and open a new report for the new problem, please.
Comment 8 Franky Van Liedekerke 2014-07-30 04:18:39 EDT
It was probably related to comment #0, but I can't verify it. So I agree this can be closed for now.
