Bug 782609

Summary: dlm recovery proportional to N*N, where N is the number of locks to recover
Product: Red Hat Enterprise Linux 6
Component: cluster
Version: 6.1
Hardware: x86_64
OS: Linux
Status: CLOSED DUPLICATE
Severity: urgent
Priority: unspecified
Target Milestone: rc
Reporter: Devin Bougie <devin.bougie>
Assignee: David Teigland <teigland>
QA Contact: Cluster QE <mspqa-list>
CC: ccaulfie, cluster-maint, devin.bougie, lhh, rpeterso, teigland
Doc Type: Bug Fix
Last Closed: 2012-03-05 17:11:01 UTC

Description Devin Bougie 2012-01-17 22:15:25 UTC
Description of problem:

With 300,000 locks held on one node, it takes about 3 minutes to mount or unmount the GFS2 file systems, during which the node holding all the locks spends 100% CPU in dlm_recv and dlm_recoverd.  This is essentially independent of the current I/O load and depends solely on the number of locks held.

While recovery is in progress, new dlm lock requests are blocked.  Files a node already holds a lock for remain accessible, but anything new has to wait for recovery to finish, including access to the mount lock for the group.  This happens regardless of whether the file system was cleanly unmounted or the node crashed.

For a detailed description of this problem and a patch that appears to resolve the issue, please see:
https://www.redhat.com/archives/linux-cluster/2011-December/msg00055.html

For a patched version of dlm, please see:
http://www.bosson.eu/temp/dlm-kmod-1.0-1.el6.src.rpm

With 300,000 locks held on a GFS2 file system in our cluster, the lock recovery time went from 3 minutes without the patch to 3 seconds with it.
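
To illustrate where the N*N behavior comes from, here is a simplified userspace model, not the actual dlm kernel code (the structure layout, names, and constants below are illustrative assumptions): if recovery resolves each of N locks by linearly scanning a list of N resources, the total work is on the order of N*N comparisons, while a hash-table lookup per lock makes it roughly linear.

/* Simplified userspace model of the recovery lookup cost; not the real
 * dlm code.  With N locks and a linear scan per lock, recovery does
 * O(N*N) work; a hash-table lookup per lock reduces it to ~O(N). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NUM_RES   300000     /* matches the lock count in this report */
#define HASH_SIZE 65536

struct rsb {                 /* stand-in for a dlm resource (rsb) */
	char name[24];
	struct rsb *next;    /* linear list of all resources */
	struct rsb *hnext;   /* hash chain within one rsbtbl bucket */
};

static struct rsb *all_rsbs;             /* head of the linear list */
static struct rsb *rsbtbl[HASH_SIZE];    /* hash table keyed on name */

static unsigned hash_name(const char *name)
{
	unsigned h = 5381;
	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h % HASH_SIZE;
}

/* O(N) per call: walk the entire resource list looking for a match.
 * Called once per lock during recovery, this is the N*N pattern. */
static struct rsb *find_rsb_linear(const char *name)
{
	struct rsb *r;

	for (r = all_rsbs; r; r = r->next)
		if (!strcmp(r->name, name))
			return r;
	return NULL;
}

/* O(1) average per call: look the name up in the hash table instead. */
static struct rsb *find_rsb_hashed(const char *name)
{
	struct rsb *r;

	for (r = rsbtbl[hash_name(name)]; r; r = r->hnext)
		if (!strcmp(r->name, name))
			return r;
	return NULL;
}

int main(void)
{
	int i;

	for (i = 0; i < NUM_RES; i++) {
		struct rsb *r = calloc(1, sizeof(*r));
		unsigned h;

		snprintf(r->name, sizeof(r->name), "res%d", i);
		r->next = all_rsbs;
		all_rsbs = r;
		h = hash_name(r->name);
		r->hnext = rsbtbl[h];
		rsbtbl[h] = r;
	}

	/* Recovering all N locks with the linear lookup costs ~N*N name
	 * comparisons; with the hashed lookup it costs ~N. */
	printf("%s %s\n", find_rsb_linear("res0")->name,
	       find_rsb_hashed("res299999")->name);
	return 0;
}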

Comment 2 David Teigland 2012-01-17 22:31:44 UTC
Thanks, I'll try to find some time to look at this more closely in the next
few days.  There's an upstream patch I'm working on (which I'll try to push
out somewhere) that simply looks the rsb up in the rsbtbl hash table; that
may also be an alternative to backport.

FWIW, recovering that many locks has never taken this long for me, so I'm
not sure why it does for you.

Comment 3 David Teigland 2012-01-17 23:09:03 UTC
The upstream patch I'm working on is here
https://github.com/teigland/linux-dlm/commits/rsbdir2

It won't be backported to any RHEL versions, but we might be able to use
the same find_rsb_root improvement.
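
For context, a hedged sketch of what that improvement could look like, reusing the userspace model from the description above (the real kernel function operates on a struct dlm_ls with a binary name and length; the details here are assumptions, not the actual patch):

/* Hypothetical before/after shape of find_rsb_root; not the kernel code. */

/* before: scan the lockspace's full resource list for each lock */
static struct rsb *find_rsb_root_old(const char *name)
{
	return find_rsb_linear(name);   /* O(N) per recovered lock */
}

/* after: index into rsbtbl by hash */
static struct rsb *find_rsb_root_new(const char *name)
{
	return find_rsb_hashed(name);   /* O(1) average per recovered lock */
}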

Comment 4 David Teigland 2012-02-21 16:33:42 UTC
Will look at this for 6.4.

Comment 5 David Teigland 2012-03-05 17:11:01 UTC

*** This bug has been marked as a duplicate of bug 772376 ***