Bug 594511

Summary: dlm_controld uses wrong fencing time comparison
Product: Red Hat Enterprise Linux 6
Reporter: David Teigland <teigland>
Component: cluster
Assignee: David Teigland <teigland>
Status: CLOSED CURRENTRELEASE
QA Contact: Cluster QE <mspqa-list>
Severity: medium
Priority: high
Version: 6.0
CC: ccaulfie, cluster-maint, djansa, fdinitto, lhh, nstraz, rpeterso, teigland
Target Milestone: rc
Hardware: All
OS: Linux
Fixed In Version: cluster-3.0.12-5.el6
Doc Type: Bug Fix
Last Closed: 2010-11-10 19:59:16 UTC
Attachments:
  dlm_tool dump (flags: none)
  fence_tool dump (flags: none)
  /var/log/messages 01 (flags: none)
  /var/log/messages 03 (flags: none)
  /var/log/messages 04 (flags: none)

Description David Teigland 2010-05-20 21:31:28 UTC
Description of problem:

dlm_controld checks that a failed node has been fenced by comparing the time it was fenced to the time it properly joined the cpg.  In one of Nate's QE tests, he found a case where the failed node rejoined the cpg just before the fencing time was recorded.  This causes dlm_controld to get stuck waiting for fencing to occur when it has already happened.
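
In rough terms, the current check looks like the sketch below; the field and function names are illustrative, not the actual dlm_controld code.

#include <stdint.h>

/* Illustrative sketch of the current check: a node only counts as
 * fenced if the recorded fencing time is later than the time the
 * node was last added to the cpg. */

struct node_times {
        uint64_t add_time;      /* when the node last joined the cpg */
        uint64_t fence_time;    /* when fenced recorded the fencing, 0 if none */
};

int check_fencing_done(const struct node_times *n)
{
        /* Racy: when the failed node rejoins, add_time is updated to the
         * new join time.  In the log below, the rejoin (add 1274304754)
         * lands one second after the fencing (fence 1274304753), so this
         * comparison stays false even though the node has been fenced. */
        return n->fence_time > n->add_time;
}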

1274304702 dlm:ls:clvmd conf 2 0 1 memb 1 4 join left 3
1274304702 clvmd add_change cg 2 remove nodeid 3 reason 3
1274304702 clvmd add_change cg 2 counts member 2 joined 0 remove 1 failed 1
1274304702 clvmd stop_kernel cg 2
1274304702 write "0" to "/sys/kernel/dlm/clvmd/control"
1274304702 clvmd check_fencing 3 not fenced add 1274304347 fence 0
1274304702 dlm:ls:clvmd conf 1 0 1 memb 1 join left 4
1274304702 clvmd add_change cg 3 remove nodeid 4 reason 3
1274304702 clvmd add_change cg 3 counts member 1 joined 0 remove 1 failed 1
1274304702 clvmd check_fencing 4 not fenced add 1274304347 fence 0
....
1274304741 cluster node 3 added
...
1274304752 dlm:ls:clvmd conf 2 1 0 memb 1 3 join 3 left
1274304752 clvmd add_change cg 4 joined nodeid 3
1274304752 clvmd add_change cg 4 counts member 2 joined 1 remove 0 failed 0
1274304754 clvmd receive_start 3:1 len 80
1274304754 clvmd match_change 3:1 skip 2 sender not member
1274304754 clvmd match_change 3:1 skip 3 sender not member
1274304754 clvmd match_change 3:1 matches cg 4
1274304754 clvmd check_fencing 3 not fenced add 1274304754 fence 1274304753
1274304754 clvmd check_fencing 4 1274304347 fenced at 1274304753
...
1274304832 dlm:ls:clvmd conf 3 1 0 memb 1 3 4 join 4 left
1274304832 clvmd add_change cg 5 joined nodeid 4
1274304832 clvmd add_change cg 5 counts member 3 joined 1 remove 0 failed 0
1274304832 clvmd receive_start 4:1 len 84
1274304832 clvmd match_change 4:1 skip cg 2 created 1274304702 cluster add 12743
1274304832 clvmd match_change 4:1 skip 3 sender not member
1274304832 clvmd match_change 4:1 skip 4 sender not member
1274304832 clvmd match_change 4:1 matches cg 5
1274304832 clvmd receive_start 3:2 len 84
1274304832 clvmd match_change 3:2 skip 2 sender not member
1274304832 clvmd match_change 3:2 skip 3 sender not member
1274304832 clvmd match_change 3:2 skip 4 already start
1274304832 clvmd match_change 3:2 matches cg 5

This line: clvmd check_fencing 3 not fenced add 1274304754 fence 1274304753
shows that node 3 was added to the cpg one second after fenced recorded that the fencing was done.  The fenced log confirms that this is when it received the message indicating that node 3 was fenced:

1274304753 receive_victim_done 1:3 flags 2 len 80
1274304753 receive_victim_done 1:3 remove victim 3 time 1274304753 how 2

I've not been able to reproduce this, but I believe the fix is for dlm_controld to check that the fencing time is later than the time it saw the node fail instead of later than the time it saw the node most recently join.
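
A sketch of the proposed comparison, again with illustrative names rather than the actual dlm_controld code:

#include <stdint.h>

/* Illustrative sketch of the proposed fix: require only that the
 * fencing postdates the failure, not the most recent cpg add. */

struct node_times {
        uint64_t fail_time;     /* when the node was last seen to fail */
        uint64_t fence_time;    /* when fenced recorded the fencing, 0 if none */
};

int check_fencing_done(const struct node_times *n)
{
        /* A rejoin between the fencing and the recording of its time no
         * longer matters; only the failure time is compared. */
        return n->fence_time > n->fail_time;
}

With this comparison, the rejoin at 1274304754 would no longer mask the fencing recorded at 1274304753.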



Comment 1 David Teigland 2010-05-20 21:33:26 UTC
Created attachment 415521 [details]
dlm_tool dump

Comment 2 David Teigland 2010-05-20 21:33:44 UTC
Created attachment 415522 [details]
fence_tool dump

Comment 3 David Teigland 2010-05-20 21:34:11 UTC
Created attachment 415523 [details]
/var/log/messages 01

Comment 4 David Teigland 2010-05-20 21:34:29 UTC
Created attachment 415524 [details]
/var/log/messages 03

Comment 5 David Teigland 2010-05-20 21:34:50 UTC
Created attachment 415525 [details]
/var/log/messages 04

Comment 6 RHEL Program Management 2010-05-20 21:36:35 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 9 Nate Straz 2010-05-27 14:18:19 UTC
I ran into an issue on west nodes which Dave determined was a problem with this patch.

Comment 10 David Teigland 2010-05-27 15:35:20 UTC
Pushed a second patch to the RHEL6 branch:
http://git.fedorahosted.org/git/?p=cluster.git;a=commitdiff;h=f7a3fe0e4375d08f545ef6b3f7617295b56373c6

Comment 11 Fabio Massimo Di Nitto 2010-05-28 03:35:28 UTC
Nate: back to you now, I don't seem to be able to reset the FailedQA flag though.

Comment 14 releng-rhel@redhat.com 2010-11-10 19:59:16 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.