Bug 618806

Summary: dlm_controld: fix plock checkpoint signatures
Product: Red Hat Enterprise Linux 6
Reporter: David Teigland <teigland>
Component: cluster
Assignee: David Teigland <teigland>
Status: CLOSED CURRENTRELEASE
QA Contact: Cluster QE <mspqa-list>
Severity: medium
Priority: low
Version: 6.0
CC: ccaulfie, cluster-maint, fdinitto, lhh, rpeterso, syeghiay, teigland
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Fixed In Version: cluster-3.0.12-19.el6
Doc Type: Bug Fix
Last Closed: 2010-11-10 20:00:25 UTC
Attachments: the patch (no flags)

Description David Teigland 2010-07-27 19:00:03 UTC
Description of problem:

Commit e2ccbf90543cf1d163d1a067bf5a8ce049a9c134 for bz 578625
incorrectly used "p_count" (a count of plocks) in the signature
calculation.  When plock_ownership is enabled, the plocks under
an owned resource are not copied into the checkpoint.  However,
the node writing the checkpoint counts these owned plocks and
factors the count into the signature.  The node reading the
checkpoint does not receive those plocks, so it arrives at a
different plock count, and therefore a different signature, and
then disables plock operations.  This mismatch occurs very
commonly in practice, so the impact is high.



Comment 1 David Teigland 2010-07-27 19:01:06 UTC
Created attachment 434805 [details]
the patch

Comment 2 David Teigland 2010-07-27 20:42:03 UTC
Test to verify bug (fix not included):

1. <dlm plock_ownership="1"> in cluster.conf

2. service cman start on node1 and node2

3. node1: mount /gfs; cd /gfs; lock_load

4. node2: mount /gfs; cd /gfs; lock_load

5. on node2 lock_load output includes err 38 (ENOSYS, returned once
plocks are disabled), e.g.
000000 file0054 ino 205b5 U 00-04 pid 2353 err 38 sec 0.000051

6. on node2 /var/log/messages includes the error
dlm_controld[2172]: lockspace g plock disabled our sig ff nodeid 1 sig ce

7. on node1, dlm_tool dump | grep store_plocks should show sig from log
g store_plocks r_count 47 p_count 49 total_size 1960 max_section_size 80
g store_plocks open ckpt handle 3855585c00000000
g store_plocks first 132429 last 132509 r_count 47 p_count 49 sig ce

Test to verify fix: run the same; err 38 should not appear on node2, and there should be no 'plock disabled' message in node2 /var/log/messages.

Comment 5 Nate Straz 2010-08-25 22:42:22 UTC
Verified using steps Dave outlined in comment #2.

Comment 6 releng-rhel@redhat.com 2010-11-10 20:00:25 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.