Bug 594511 - dlm_controld uses wrong fencing time comparison
Summary: dlm_controld uses wrong fencing time comparison
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: cluster
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-05-20 21:31 UTC by David Teigland
Modified: 2016-04-26 14:17 UTC
CC List: 8 users

Fixed In Version: cluster-3.0.12-5.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-10 19:59:16 UTC
Target Upstream Version:
Embargoed:


Attachments
dlm_tool dump (17.01 KB, text/plain), 2010-05-20 21:33 UTC, David Teigland
fence_tool dump (7.18 KB, text/plain), 2010-05-20 21:33 UTC, David Teigland
/var/log/messages 01 (2.14 MB, text/plain), 2010-05-20 21:34 UTC, David Teigland
/var/log/messages 03 (1.72 MB, application/octet-stream), 2010-05-20 21:34 UTC, David Teigland
/var/log/messages 04 (1.80 MB, text/plain), 2010-05-20 21:34 UTC, David Teigland

Description David Teigland 2010-05-20 21:31:28 UTC
Description of problem:

dlm_controld checks that a failed node has been fenced by comparing the time it was fenced to the time it properly joined the cpg.  In one of Nate's QE tests, he found a case where the failed node rejoined the cpg just before the fencing time was recorded.  This causes dlm_controld to get stuck waiting for a fencing to occur when it already has.
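For illustration, here is a minimal C sketch of the check as described above. The struct, field, and function names are assumptions for this example only, not the actual dlm_controld source; the point is which timestamp the fencing time is compared against.

#include <stdint.h>

/* Assumed per-node bookkeeping for this sketch. */
struct node_times {
	uint64_t add_time;    /* when the node last joined the cpg */
	uint64_t fail_time;   /* when the node was seen to fail */
	uint64_t fence_time;  /* when fenced reported the node fenced */
};

/* Current (buggy) logic: a failed node counts as fenced only if the
 * recorded fencing time is later than the time the node most recently
 * joined (was added to) the cpg. */
static int node_is_fenced_current(const struct node_times *n)
{
	/* Breaks when the node rejoins just before the fencing time is
	 * recorded: add_time ends up later than fence_time, so the node
	 * is reported "not fenced" even though fencing completed. */
	return n->fence_time > n->add_time;
}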

1274304702 dlm:ls:clvmd conf 2 0 1 memb 1 4 join left 3
1274304702 clvmd add_change cg 2 remove nodeid 3 reason 3
1274304702 clvmd add_change cg 2 counts member 2 joined 0 remove 1 failed 1
1274304702 clvmd stop_kernel cg 2
1274304702 write "0" to "/sys/kernel/dlm/clvmd/control"
1274304702 clvmd check_fencing 3 not fenced add 1274304347 fence 0
1274304702 dlm:ls:clvmd conf 1 0 1 memb 1 join left 4
1274304702 clvmd add_change cg 3 remove nodeid 4 reason 3
1274304702 clvmd add_change cg 3 counts member 1 joined 0 remove 1 failed 1
1274304702 clvmd check_fencing 4 not fenced add 1274304347 fence 0
....
1274304741 cluster node 3 added
...
1274304752 dlm:ls:clvmd conf 2 1 0 memb 1 3 join 3 left
1274304752 clvmd add_change cg 4 joined nodeid 3
1274304752 clvmd add_change cg 4 counts member 2 joined 1 remove 0 failed 0
1274304754 clvmd receive_start 3:1 len 80
1274304754 clvmd match_change 3:1 skip 2 sender not member
1274304754 clvmd match_change 3:1 skip 3 sender not member
1274304754 clvmd match_change 3:1 matches cg 4
1274304754 clvmd check_fencing 3 not fenced add 1274304754 fence 1274304753
1274304754 clvmd check_fencing 4 1274304347 fenced at 1274304753
...
1274304832 dlm:ls:clvmd conf 3 1 0 memb 1 3 4 join 4 left
1274304832 clvmd add_change cg 5 joined nodeid 4
1274304832 clvmd add_change cg 5 counts member 3 joined 1 remove 0 failed 0
1274304832 clvmd receive_start 4:1 len 84
1274304832 clvmd match_change 4:1 skip cg 2 created 1274304702 cluster add 12743
1274304832 clvmd match_change 4:1 skip 3 sender not member
1274304832 clvmd match_change 4:1 skip 4 sender not member
1274304832 clvmd match_change 4:1 matches cg 5
1274304832 clvmd receive_start 3:2 len 84
1274304832 clvmd match_change 3:2 skip 2 sender not member
1274304832 clvmd match_change 3:2 skip 3 sender not member
1274304832 clvmd match_change 3:2 skip 4 already start
1274304832 clvmd match_change 3:2 matches cg 5

This line: clvmd check_fencing 3 not fenced add 1274304754 fence 1274304753
shows that node 3 was added to the cpg one second after fenced recorded that fencing was done. The fenced log confirms that this is when it received the message indicating that node 3 was fenced:

1274304753 receive_victim_done 1:3 flags 2 len 80
1274304753 receive_victim_done 1:3 remove victim 3 time 1274304753 how 2

I've not been able to reproduce this, but I believe the fix is for dlm_controld to check that the fencing time is later than the time it saw the node fail instead of later than the time it saw the node most recently join.
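A sketch of that proposed change, using the same assumed names as the sketch above (not the committed patch): compare the fencing time against the time the node was seen to fail, so a rejoin that races with recording the fencing result no longer masks a completed fence.

static int node_is_fenced_proposed(const struct node_times *n)
{
	/* With the values from the log above: fence_time 1274304753 is
	 * later than the failure seen at 1274304702, so node 3 is
	 * correctly treated as fenced even though it rejoined at
	 * 1274304754. */
	return n->fence_time > n->fail_time;
}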



Comment 1 David Teigland 2010-05-20 21:33:26 UTC
Created attachment 415521 [details]
dlm_tool dump

Comment 2 David Teigland 2010-05-20 21:33:44 UTC
Created attachment 415522 [details]
fence_tool dump

Comment 3 David Teigland 2010-05-20 21:34:11 UTC
Created attachment 415523 [details]
/var/log/messages 01

Comment 4 David Teigland 2010-05-20 21:34:29 UTC
Created attachment 415524 [details]
/var/log/messages 03

Comment 5 David Teigland 2010-05-20 21:34:50 UTC
Created attachment 415525 [details]
/var/log/messages 04

Comment 6 RHEL Program Management 2010-05-20 21:36:35 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release. Product Management has requested further
review of this request by Red Hat Engineering for potential inclusion in a Red
Hat Enterprise Linux major release. This request is not yet committed for
inclusion.

Comment 9 Nate Straz 2010-05-27 14:18:19 UTC
I ran into an issue on west nodes which Dave determined was a problem with this patch.

Comment 10 David Teigland 2010-05-27 15:35:20 UTC
Pushed a second patch to the RHEL6 branch:
http://git.fedorahosted.org/git/?p=cluster.git;a=commitdiff;h=f7a3fe0e4375d08f545ef6b3f7617295b56373c6

Comment 11 Fabio Massimo Di Nitto 2010-05-28 03:35:28 UTC
Nate: back to you now, I don't seem to be able to reset the FailedQA flag, though.

Comment 14 releng-rhel@redhat.com 2010-11-10 19:59:16 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

