Description of problem:
Commit e2ccbf90543cf1d163d1a067bf5a8ce049a9c134 for bz 578625
incorrectly used "p_count" (a count of plocks) in the
signature calculation. When plock_ownership is on, the plocks
under an owned resource are not copied into the checkpoint.
However, the node writing the checkpoint counts all these
owned plocks and factors the count into the signature. The
node reading the checkpoint does not get the plocks, so its
count of plocks is different, causing the signature calculation
to be different. It will then disable plock operations.
This would occur very commonly in practice, so the
impact is very high.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
Created attachment 434805
Test to verify bug (fix not included):
1. <dlm plock_ownership="1"> in cluster.conf
2. service cman start on node1 and node2
3. node1: mount /gfs; cd /gfs; lock_load
4. node2: mount /gfs; cd /gfs; lock_load
5. on node2 lock_load output includes err 38, e.g.
000000 file0054 ino 205b5 U 00-04 pid 2353 err 38 sec 0.000051
6. on node2 /var/log/messages includes the error
dlm_controld: lockspace g plock disabled our sig ff nodeid 1 sig ce
7. on node1, dlm_tool dump | grep store_plocks should show the same sig as the log message, e.g.
g store_plocks r_count 47 p_count 49 total_size 1960 max_section_size 80
g store_plocks open ckpt handle 3855585c00000000
g store_plocks first 132429 last 132509 r_count 47 p_count 49 sig ce
Test to verify fix: run the same steps; err 38 should not appear in lock_load output on node2, and node2's /var/log/messages should contain no 'plock disabled' message.
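The pass/fail check in the verification step can be scripted roughly as below. This is a hedged sketch for node2: LOCKLOAD_OUT is an assumed path where the lock_load output from step 4 was captured, not a file the tools create themselves.

```shell
#!/bin/sh
# Assumed capture locations; override via environment if different.
LOCKLOAD_OUT=${LOCKLOAD_OUT:-/tmp/lock_load.out}
MESSAGES=${MESSAGES:-/var/log/messages}

check() {
	# err 38 in lock_load output means plock ops failed (bug present)
	if grep -q 'err 38' "$LOCKLOAD_OUT" 2>/dev/null; then
		echo FAIL
	# 'plock disabled' in syslog means the signatures mismatched
	elif grep -q 'plock disabled' "$MESSAGES" 2>/dev/null; then
		echo FAIL
	else
		echo PASS
	fi
}

check
```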
Fix pushed to the RHEL6 branch.
Verified using steps Dave outlined in comment #2.
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.