Bug 782609 - dlm recovery proportional to N*N, where N is the number of locks to recover
Keywords:
Status: CLOSED DUPLICATE of bug 772376
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: cluster
Version: 6.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-01-17 22:15 UTC by Devin Bougie
Modified: 2012-03-05 17:11 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-03-05 17:11:01 UTC
Target Upstream Version:


Attachments

Description Devin Bougie 2012-01-17 22:15:25 UTC
Description of problem:

With 300,000 locks out on one node, it takes about 3 minutes to mount or unmount the GFS2 file systems, during which the node holding all the locks spends 100% of its CPU in dlm_recv and dlm_recoverd. This is essentially independent of the current I/O load and depends solely on the number of locks out.

While recovery is in progress, new DLM lock requests are blocked. Files a node already holds a lock on remain accessible, but anything new has to wait for recovery to finish, including access to the mount lock for the group. This happens regardless of whether the file system was cleanly unmounted or the node crashed.

For a detailed description of this problem and a patch that appears to resolve the issue, please see:
https://www.redhat.com/archives/linux-cluster/2011-December/msg00055.html

For a patched version of dlm, please see:
http://www.bosson.eu/temp/dlm-kmod-1.0-1.el6.src.rpm

With 300,000 locks out on a GFS2 file system in our cluster, the lock recovery time went from 3 minutes without the patch to 3 seconds with it.
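
To make the N*N behavior concrete, here is a minimal standalone userspace sketch. It is not the dlm kernel code; the struct, names, and counts are made up for illustration. It only models the lookup pattern that causes the quadratic cost: one linear scan of the resource list per lock being recovered.

/*
 * Illustrative sketch, not fs/dlm code: recovery that performs one
 * linear list scan per lock costs O(N) per lookup, O(N*N) overall.
 * NUM_RES is scaled down from 300,000 to keep the runtime short.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NUM_RES 30000

struct rsb {
	char name[32];
	struct rsb *next;
};

static struct rsb *head;

/* O(N) per call: walk the whole list until the name matches */
static struct rsb *find_rsb_linear(const char *name)
{
	struct rsb *r;

	for (r = head; r; r = r->next)
		if (!strcmp(r->name, name))
			return r;
	return NULL;
}

int main(void)
{
	char name[32];
	int i, found = 0;

	/* build a list of NUM_RES resources */
	for (i = 0; i < NUM_RES; i++) {
		struct rsb *r = malloc(sizeof(*r));
		snprintf(r->name, sizeof(r->name), "res%d", i);
		r->next = head;
		head = r;
	}

	/* one lookup per lock: NUM_RES lookups at O(NUM_RES) each */
	for (i = 0; i < NUM_RES; i++) {
		snprintf(name, sizeof(name), "res%d", i);
		if (find_rsb_linear(name))
			found++;
	}

	printf("found %d of %d resources via linear scan\n",
	       found, NUM_RES);
	return 0;
}

At N = 300,000 this pattern costs on the order of N*N/2, roughly 4.5e10 string comparisons, which is consistent with minutes of CPU time spent in recovery.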

Comment 2 David Teigland 2012-01-17 22:31:44 UTC
Thanks, I'll try to find some time to look at this more closely in the next few
days. There's an upstream patch I'm working on (which I'll try to push out
somewhere) that simply looks the rsb up in rsbtbl; that may also be an
alternative to backport.

FWIW, that many locks have never taken this long to recover for me before, so
I'm not sure why they do for you.

Comment 3 David Teigland 2012-01-17 23:09:03 UTC
The upstream patch I'm working on is here:
https://github.com/teigland/linux-dlm/commits/rsbdir2

It won't be backported to any RHEL versions, but we might be able to use
the same find_rsb_root improvement.
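
For comparison, here is the same kind of standalone sketch done with a
name-keyed hash lookup, in the spirit of looking the rsb up in rsbtbl instead
of scanning a list. This is not the actual patch; the hash function and bucket
count are arbitrary choices for this example.

/*
 * Illustrative sketch, not the actual find_rsb_root patch: the same
 * lookup through a name-keyed hash table.  Each call walks only one
 * bucket chain, so N lookups cost roughly O(N) in total.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NUM_RES     300000
#define NUM_BUCKETS 65536	/* sized so chains stay short at NUM_RES entries */

struct rsb {
	char name[32];
	struct rsb *next;	/* hash-chain link */
};

static struct rsb *buckets[NUM_BUCKETS];

static unsigned int hash_name(const char *name)
{
	unsigned int h = 5381;	/* djb2-style string hash */

	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h % NUM_BUCKETS;
}

static void insert_rsb(struct rsb *r)
{
	unsigned int b = hash_name(r->name);

	r->next = buckets[b];
	buckets[b] = r;
}

/* expected O(1) per call: only the matching bucket's chain is walked */
static struct rsb *find_rsb_hashed(const char *name)
{
	struct rsb *r;

	for (r = buckets[hash_name(name)]; r; r = r->next)
		if (!strcmp(r->name, name))
			return r;
	return NULL;
}

int main(void)
{
	char name[32];
	int i, found = 0;

	for (i = 0; i < NUM_RES; i++) {
		struct rsb *r = malloc(sizeof(*r));
		snprintf(r->name, sizeof(r->name), "res%d", i);
		insert_rsb(r);
	}

	for (i = 0; i < NUM_RES; i++) {
		snprintf(name, sizeof(name), "res%d", i);
		if (find_rsb_hashed(name))
			found++;
	}

	printf("found %d of %d resources via hashed lookup\n",
	       found, NUM_RES);
	return 0;
}

Replacing the per-lock list scan with this kind of lookup is what turns
O(N*N) recovery into roughly O(N), consistent with the 3-minutes-to-3-seconds
improvement reported above.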

Comment 4 David Teigland 2012-02-21 16:33:42 UTC
Will look at this for 6.4.

Comment 5 David Teigland 2012-03-05 17:11:01 UTC

*** This bug has been marked as a duplicate of bug 772376 ***

