Description of problem:

With 300,000 locks out on one node, it takes about 3 minutes to mount or unmount the GFS2 file systems, during which the node holding all the locks spends 100% CPU in dlm_recv and dlm_recoverd. This is largely independent of the current I/O load and depends solely on the number of locks outstanding.

While recovery is in progress, new DLM lock requests are blocked. Files that a node already holds a lock for remain accessible, but anything new has to wait for recovery to finish, including access to the mount lock for the group. The delay occurs regardless of whether the file system was cleanly unmounted or the node crashed.

For a detailed description of this problem and a patch that appears to resolve the issue, please see:
https://www.redhat.com/archives/linux-cluster/2011-December/msg00055.html

For a patched version of dlm, please see:
http://www.bosson.eu/temp/dlm-kmod-1.0-1.el6.src.rpm

With 300,000 locks out on a GFS2 file system in our cluster, the patch reduced lock recovery time from 3 minutes to 3 seconds.
Thanks, I'll try to find some time to look at this more closely in the next few days. There's an upstream patch I'm working on (which I'll try to push out somewhere) that simply looks the rsb up in rsbtbl; that may be an alternative to backport as well. FWIW, recovering that many locks has never taken this long for me before, so I'm not sure why it does for you.
The upstream patch I'm working on is here:
https://github.com/teigland/linux-dlm/commits/rsbdir2
It won't be backported to any RHEL versions, but we might be able to use the same find_rsb_root improvement.
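For illustration only, here is a minimal userspace C sketch of the idea behind that improvement. It is not the actual fs/dlm code: the structure and function names (toy_rsb, find_rsb_linear, find_rsb_hashed, hash_name) are stand-ins, and the real code uses the kernel's jhash and locking. The point is the complexity change: recovery that does one linear scan of the root list per lock is O(n^2) overall, while a per-lock hash-bucket lookup keeps recovery close to O(n).

/*
 * Toy model of the find_rsb_root change: replace a per-lookup linear
 * scan of all resources with a hash-bucket lookup.  All names here
 * are illustrative, not the real fs/dlm structures.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HASH_BUCKETS 1024

struct toy_rsb {
	char name[32];               /* resource name */
	struct toy_rsb *root_next;   /* singly linked "root list" */
	struct toy_rsb *hash_next;   /* chain within a hash bucket */
};

static struct toy_rsb *root_list;              /* every rsb, one list */
static struct toy_rsb *rsbtbl[HASH_BUCKETS];   /* hash table of rsbs */

static unsigned int hash_name(const char *name)
{
	/* djb2-style string hash; the kernel uses jhash instead */
	unsigned int h = 5381;
	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h % HASH_BUCKETS;
}

/* Old approach: scan the whole root list for each lookup -> O(n). */
static struct toy_rsb *find_rsb_linear(const char *name)
{
	struct toy_rsb *r;

	for (r = root_list; r; r = r->root_next)
		if (!strcmp(r->name, name))
			return r;
	return NULL;
}

/* New approach: look the rsb up in its hash bucket -> ~O(1). */
static struct toy_rsb *find_rsb_hashed(const char *name)
{
	struct toy_rsb *r;

	for (r = rsbtbl[hash_name(name)]; r; r = r->hash_next)
		if (!strcmp(r->name, name))
			return r;
	return NULL;
}

static void add_rsb(const char *name)
{
	struct toy_rsb *r = calloc(1, sizeof(*r));
	unsigned int b = hash_name(name);

	snprintf(r->name, sizeof(r->name), "%s", name);
	r->root_next = root_list;
	root_list = r;
	r->hash_next = rsbtbl[b];
	rsbtbl[b] = r;
}

int main(void)
{
	char name[32];
	int i, n = 300000;

	for (i = 0; i < n; i++) {
		snprintf(name, sizeof(name), "rsb-%d", i);
		add_rsb(name);
	}

	/* Recovery touches every rsb once; with the linear search that
	 * is n scans of an n-entry list, i.e. O(n^2) comparisons. */
	for (i = 0; i < n; i++) {
		snprintf(name, sizeof(name), "rsb-%d", i);
		if (!find_rsb_hashed(name))   /* try find_rsb_linear() here */
			return 1;
	}
	printf("looked up %d resources\n", n);
	return 0;
}

Swapping find_rsb_linear() into the lookup loop at this scale takes on the order of minutes, while the hashed variant finishes almost instantly, which is consistent with the 3-minutes-to-3-seconds change reported above.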
Will look at this for 6.4.
*** This bug has been marked as a duplicate of bug 772376 ***