Bug 594511 - dlm_controld uses wrong fencing time comparison
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: cluster
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: David Teigland
QA Contact: Cluster QE
Reported: 2010-05-20 17:31 EDT by David Teigland
Modified: 2016-04-26 10:17 EDT
CC: 8 users

Fixed In Version: cluster-3.0.12-5.el6
Doc Type: Bug Fix
Last Closed: 2010-11-10 14:59:16 EST

Attachments
dlm_tool dump (17.01 KB, text/plain) - 2010-05-20 17:33 EDT, David Teigland
fence_tool dump (7.18 KB, text/plain) - 2010-05-20 17:33 EDT, David Teigland
/var/log/messages 01 (2.14 MB, text/plain) - 2010-05-20 17:34 EDT, David Teigland
/var/log/messages 03 (1.72 MB, application/octet-stream) - 2010-05-20 17:34 EDT, David Teigland
/var/log/messages 04 (1.80 MB, text/plain) - 2010-05-20 17:34 EDT, David Teigland

Description David Teigland 2010-05-20 17:31:28 EDT
Description of problem:

dlm_controld checks that a failed node has been fenced by comparing the time it was fenced with the time it most recently joined the cpg.  In one of Nate's QE tests, he found a case where the failed node rejoined the cpg just before the fencing time was recorded.  This causes dlm_controld to get stuck waiting for fencing to occur when it has already happened.
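
A minimal sketch of the comparison at issue, using hypothetical names (add_time is when the node was last added to the cpg, fence_time is the fencing time reported by fenced); this is not the actual dlm_controld source, just an illustration of the effective check:

#include <stdint.h>

/* current behaviour (sketch): fencing only counts if it was recorded
 * after the node's most recent cpg add, which races with a quick
 * rejoin as seen in the log below */
static int node_fencing_done(uint64_t add_time, uint64_t fence_time)
{
        return fence_time != 0 && fence_time > add_time;
}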

1274304702 dlm:ls:clvmd conf 2 0 1 memb 1 4 join left 3
1274304702 clvmd add_change cg 2 remove nodeid 3 reason 3
1274304702 clvmd add_change cg 2 counts member 2 joined 0 remove 1 failed 1
1274304702 clvmd stop_kernel cg 2
1274304702 write "0" to "/sys/kernel/dlm/clvmd/control"
1274304702 clvmd check_fencing 3 not fenced add 1274304347 fence 0
1274304702 dlm:ls:clvmd conf 1 0 1 memb 1 join left 4
1274304702 clvmd add_change cg 3 remove nodeid 4 reason 3
1274304702 clvmd add_change cg 3 counts member 1 joined 0 remove 1 failed 1
1274304702 clvmd check_fencing 4 not fenced add 1274304347 fence 0
....
1274304741 cluster node 3 added
...
1274304752 dlm:ls:clvmd conf 2 1 0 memb 1 3 join 3 left
1274304752 clvmd add_change cg 4 joined nodeid 3
1274304752 clvmd add_change cg 4 counts member 2 joined 1 remove 0 failed 0
1274304754 clvmd receive_start 3:1 len 80
1274304754 clvmd match_change 3:1 skip 2 sender not member
1274304754 clvmd match_change 3:1 skip 3 sender not member
1274304754 clvmd match_change 3:1 matches cg 4
1274304754 clvmd check_fencing 3 not fenced add 1274304754 fence 1274304753
1274304754 clvmd check_fencing 4 1274304347 fenced at 1274304753
...
1274304832 dlm:ls:clvmd conf 3 1 0 memb 1 3 4 join 4 left
1274304832 clvmd add_change cg 5 joined nodeid 4
1274304832 clvmd add_change cg 5 counts member 3 joined 1 remove 0 failed 0
1274304832 clvmd receive_start 4:1 len 84
1274304832 clvmd match_change 4:1 skip cg 2 created 1274304702 cluster add 12743
1274304832 clvmd match_change 4:1 skip 3 sender not member
1274304832 clvmd match_change 4:1 skip 4 sender not member
1274304832 clvmd match_change 4:1 matches cg 5
1274304832 clvmd receive_start 3:2 len 84
1274304832 clvmd match_change 3:2 skip 2 sender not member
1274304832 clvmd match_change 3:2 skip 3 sender not member
1274304832 clvmd match_change 3:2 skip 4 already start
1274304832 clvmd match_change 3:2 matches cg 5

This line: clvmd check_fencing 3 not fenced add 1274304754 fence 1274304753
shows that node 3 was added to the cpg one second after fenced recorded that fencing was done.  The fenced log confirms that this is when it received the message indicating that node 3 was fenced:

1274304753 receive_victim_done 1:3 flags 2 len 80
1274304753 receive_victim_done 1:3 remove victim 3 time 1274304753 how 2

I've not been able to reproduce this, but I believe the fix is for dlm_controld to check that the fencing time is later than the time it saw the node fail instead of later than the time it saw the node most recently join.
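
A hedged sketch of that proposed change, again with hypothetical names: record fail_time when the node is seen to fail and compare the fencing time against that, rather than against the most recent cpg add:

#include <stdint.h>

/* proposed behaviour (sketch): a quick cpg rejoin can no longer push
 * the reference point past the fencing timestamp */
static int node_fencing_done(uint64_t fail_time, uint64_t fence_time)
{
        return fence_time != 0 && fence_time > fail_time;
}

With the timestamps above, the fencing at 1274304753 is later than the failure at 1274304702 but earlier than the rejoin add at 1274304754, so this comparison counts node 3 as fenced while the current one does not.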


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:
Comment 1 David Teigland 2010-05-20 17:33:26 EDT
Created attachment 415521 [details]
dlm_tool dump
Comment 2 David Teigland 2010-05-20 17:33:44 EDT
Created attachment 415522 [details]
fence_tool dump
Comment 3 David Teigland 2010-05-20 17:34:11 EDT
Created attachment 415523 [details]
/var/log/messages 01
Comment 4 David Teigland 2010-05-20 17:34:29 EDT
Created attachment 415524 [details]
/var/log/messages 03
Comment 5 David Teigland 2010-05-20 17:34:50 EDT
Created attachment 415525 [details]
/var/log/messages 04
Comment 6 RHEL Product and Program Management 2010-05-20 17:36:35 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.
Comment 9 Nate Straz 2010-05-27 10:18:19 EDT
I ran into an issue on west nodes which Dave determined was a problem with this patch.
Comment 10 David Teigland 2010-05-27 11:35:20 EDT
Pushed a second patch to the RHEL6 branch:
http://git.fedorahosted.org/git/?p=cluster.git;a=commitdiff;h=f7a3fe0e4375d08f545ef6b3f7617295b56373c6
Comment 11 Fabio Massimo Di Nitto 2010-05-27 23:35:28 EDT
Nate: back to you now, I don't seem to be able to reset the FailedQA flag tho.
Comment 14 releng-rhel@redhat.com 2010-11-10 14:59:16 EST
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
